public inbox for gentoo-commits@lists.gentoo.org
 help / color / mirror / Atom feed
* [gentoo-commits] proj/linux-patches:6.3 commit in: /
@ 2023-04-28 19:29 Mike Pagano
  0 siblings, 0 replies; 23+ messages in thread
From: Mike Pagano @ 2023-04-28 19:29 UTC (permalink / raw
  To: gentoo-commits

commit:     32dcfbc2dcb8bd2ca41cb31cbc4f411aee2bd81d
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Apr 28 19:28:39 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Apr 28 19:28:39 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=32dcfbc2

Add BMQ and PDS alt schedular (USE=experimental)

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                 |     8 +
 5020_BMQ-and-PDS-io-scheduler-v6.3-r1.patch | 10430 ++++++++++++++++++++++++++
 5021_BMQ-and-PDS-gentoo-defaults.patch      |    13 +
 3 files changed, 10451 insertions(+)

diff --git a/0000_README b/0000_README
index 8bb95e22..cdf69ca2 100644
--- a/0000_README
+++ b/0000_README
@@ -82,3 +82,11 @@ Desc:   Add Gentoo Linux support config settings and defaults.
 Patch:  5010_enable-cpu-optimizations-universal.patch
 From:   https://github.com/graysky2/kernel_compiler_patch
 Desc:   Kernel >= 5.15 patch enables gcc = v11.1+ optimizations for additional CPUs.
+
+Patch:  5020_BMQ-and-PDS-io-scheduler-v6.3-r1.patch
+From:   https://github.com/Frogging-Family/linux-tkg https://gitlab.com/alfredchen/projectc
+Desc:   BMQ(BitMap Queue) Scheduler. A new CPU scheduler developed from PDS(incld). Inspired by the scheduler in zircon.
+
+Patch:  5021_BMQ-and-PDS-gentoo-defaults.patch
+From:   https://gitweb.gentoo.org/proj/linux-patches.git/
+Desc:   Set defaults for BMQ. Add archs as people test, default to N

diff --git a/5020_BMQ-and-PDS-io-scheduler-v6.3-r1.patch b/5020_BMQ-and-PDS-io-scheduler-v6.3-r1.patch
new file mode 100644
index 00000000..4968c0ba
--- /dev/null
+++ b/5020_BMQ-and-PDS-io-scheduler-v6.3-r1.patch
@@ -0,0 +1,10430 @@
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index 7016cb12dc4e..a2ff3f6f5170 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -5454,6 +5454,12 @@
+ 	sa1100ir	[NET]
+ 			See drivers/net/irda/sa1100_ir.c.
+ 
++	sched_timeslice=
++			[KNL] Time slice in ms for Project C BMQ/PDS scheduler.
++			Format: integer 2, 4
++			Default: 4
++			See Documentation/scheduler/sched-BMQ.txt
++
+ 	sched_verbose	[KNL] Enables verbose scheduler debug messages.
+ 
+ 	schedstats=	[KNL,X86] Enable or disable scheduled statistics.
+diff --git a/Documentation/admin-guide/sysctl/kernel.rst b/Documentation/admin-guide/sysctl/kernel.rst
+index 4b7bfea28cd7..fcf90328638c 100644
+--- a/Documentation/admin-guide/sysctl/kernel.rst
++++ b/Documentation/admin-guide/sysctl/kernel.rst
+@@ -1616,3 +1616,13 @@ is 10 seconds.
+ 
+ The softlockup threshold is (``2 * watchdog_thresh``). Setting this
+ tunable to zero will disable lockup detection altogether.
++
++yield_type:
++===========
++
++BMQ/PDS CPU scheduler only. This determines what type of yield calls
++to sched_yield will perform.
++
++  0 - No yield.
++  1 - Deboost and requeue task. (default)
++  2 - Set run queue skip task.
+diff --git a/Documentation/scheduler/sched-BMQ.txt b/Documentation/scheduler/sched-BMQ.txt
+new file mode 100644
+index 000000000000..05c84eec0f31
+--- /dev/null
++++ b/Documentation/scheduler/sched-BMQ.txt
+@@ -0,0 +1,110 @@
++                         BitMap queue CPU Scheduler
++                         --------------------------
++
++CONTENT
++========
++
++ Background
++ Design
++   Overview
++   Task policy
++   Priority management
++   BitMap Queue
++   CPU Assignment and Migration
++
++
++Background
++==========
++
++BitMap Queue CPU scheduler, referred to as BMQ from here on, is an evolution
++of previous Priority and Deadline based Skiplist multiple queue scheduler(PDS),
++and inspired by Zircon scheduler. The goal of it is to keep the scheduler code
++simple, while efficiency and scalable for interactive tasks, such as desktop,
++movie playback and gaming etc.
++
++Design
++======
++
++Overview
++--------
++
++BMQ use per CPU run queue design, each CPU(logical) has it's own run queue,
++each CPU is responsible for scheduling the tasks that are putting into it's
++run queue.
++
++The run queue is a set of priority queues. Note that these queues are fifo
++queue for non-rt tasks or priority queue for rt tasks in data structure. See
++BitMap Queue below for details. BMQ is optimized for non-rt tasks in the fact
++that most applications are non-rt tasks. No matter the queue is fifo or
++priority, In each queue is an ordered list of runnable tasks awaiting execution
++and the data structures are the same. When it is time for a new task to run,
++the scheduler simply looks the lowest numbered queueue that contains a task,
++and runs the first task from the head of that queue. And per CPU idle task is
++also in the run queue, so the scheduler can always find a task to run on from
++its run queue.
++
++Each task will assigned the same timeslice(default 4ms) when it is picked to
++start running. Task will be reinserted at the end of the appropriate priority
++queue when it uses its whole timeslice. When the scheduler selects a new task
++from the priority queue it sets the CPU's preemption timer for the remainder of
++the previous timeslice. When that timer fires the scheduler will stop execution
++on that task, select another task and start over again.
++
++If a task blocks waiting for a shared resource then it's taken out of its
++priority queue and is placed in a wait queue for the shared resource. When it
++is unblocked it will be reinserted in the appropriate priority queue of an
++eligible CPU.
++
++Task policy
++-----------
++
++BMQ supports DEADLINE, FIFO, RR, NORMAL, BATCH and IDLE task policy like the
++mainline CFS scheduler. But BMQ is heavy optimized for non-rt task, that's
++NORMAL/BATCH/IDLE policy tasks. Below is the implementation detail of each
++policy.
++
++DEADLINE
++	It is squashed as priority 0 FIFO task.
++
++FIFO/RR
++	All RT tasks share one single priority queue in BMQ run queue designed. The
++complexity of insert operation is O(n). BMQ is not designed for system runs
++with major rt policy tasks.
++
++NORMAL/BATCH/IDLE
++	BATCH and IDLE tasks are treated as the same policy. They compete CPU with
++NORMAL policy tasks, but they just don't boost. To control the priority of
++NORMAL/BATCH/IDLE tasks, simply use nice level.
++
++ISO
++	ISO policy is not supported in BMQ. Please use nice level -20 NORMAL policy
++task instead.
++
++Priority management
++-------------------
++
++RT tasks have priority from 0-99. For non-rt tasks, there are three different
++factors used to determine the effective priority of a task. The effective
++priority being what is used to determine which queue it will be in.
++
++The first factor is simply the task’s static priority. Which is assigned from
++task's nice level, within [-20, 19] in userland's point of view and [0, 39]
++internally.
++
++The second factor is the priority boost. This is a value bounded between
++[-MAX_PRIORITY_ADJ, MAX_PRIORITY_ADJ] used to offset the base priority, it is
++modified by the following cases:
++
++*When a thread has used up its entire timeslice, always deboost its boost by
++increasing by one.
++*When a thread gives up cpu control(voluntary or non-voluntary) to reschedule,
++and its switch-in time(time after last switch and run) below the thredhold
++based on its priority boost, will boost its boost by decreasing by one buti is
++capped at 0 (won’t go negative).
++
++The intent in this system is to ensure that interactive threads are serviced
++quickly. These are usually the threads that interact directly with the user
++and cause user-perceivable latency. These threads usually do little work and
++spend most of their time blocked awaiting another user event. So they get the
++priority boost from unblocking while background threads that do most of the
++processing receive the priority penalty for using their entire timeslice.
+diff --git a/fs/proc/base.c b/fs/proc/base.c
+index 5e0e0ccd47aa..c06d0639fb92 100644
+--- a/fs/proc/base.c
++++ b/fs/proc/base.c
+@@ -479,7 +479,7 @@ static int proc_pid_schedstat(struct seq_file *m, struct pid_namespace *ns,
+ 		seq_puts(m, "0 0 0\n");
+ 	else
+ 		seq_printf(m, "%llu %llu %lu\n",
+-		   (unsigned long long)task->se.sum_exec_runtime,
++		   (unsigned long long)tsk_seruntime(task),
+ 		   (unsigned long long)task->sched_info.run_delay,
+ 		   task->sched_info.pcount);
+ 
+diff --git a/include/asm-generic/resource.h b/include/asm-generic/resource.h
+index 8874f681b056..59eb72bf7d5f 100644
+--- a/include/asm-generic/resource.h
++++ b/include/asm-generic/resource.h
+@@ -23,7 +23,7 @@
+ 	[RLIMIT_LOCKS]		= {  RLIM_INFINITY,  RLIM_INFINITY },	\
+ 	[RLIMIT_SIGPENDING]	= { 		0,	       0 },	\
+ 	[RLIMIT_MSGQUEUE]	= {   MQ_BYTES_MAX,   MQ_BYTES_MAX },	\
+-	[RLIMIT_NICE]		= { 0, 0 },				\
++	[RLIMIT_NICE]		= { 30, 30 },				\
+ 	[RLIMIT_RTPRIO]		= { 0, 0 },				\
+ 	[RLIMIT_RTTIME]		= {  RLIM_INFINITY,  RLIM_INFINITY },	\
+ }
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index 63d242164b1a..865d9b6d2f45 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -762,8 +762,14 @@ struct task_struct {
+ 	unsigned int			ptrace;
+ 
+ #ifdef CONFIG_SMP
+-	int				on_cpu;
+ 	struct __call_single_node	wake_entry;
++#endif
++#if defined(CONFIG_SMP) || defined(CONFIG_SCHED_ALT)
++	int				on_cpu;
++#endif
++
++#ifdef CONFIG_SMP
++#ifndef CONFIG_SCHED_ALT
+ 	unsigned int			wakee_flips;
+ 	unsigned long			wakee_flip_decay_ts;
+ 	struct task_struct		*last_wakee;
+@@ -777,6 +783,7 @@ struct task_struct {
+ 	 */
+ 	int				recent_used_cpu;
+ 	int				wake_cpu;
++#endif /* !CONFIG_SCHED_ALT */
+ #endif
+ 	int				on_rq;
+ 
+@@ -785,6 +792,20 @@ struct task_struct {
+ 	int				normal_prio;
+ 	unsigned int			rt_priority;
+ 
++#ifdef CONFIG_SCHED_ALT
++	u64				last_ran;
++	s64				time_slice;
++	int				sq_idx;
++	struct list_head		sq_node;
++#ifdef CONFIG_SCHED_BMQ
++	int				boost_prio;
++#endif /* CONFIG_SCHED_BMQ */
++#ifdef CONFIG_SCHED_PDS
++	u64				deadline;
++#endif /* CONFIG_SCHED_PDS */
++	/* sched_clock time spent running */
++	u64				sched_time;
++#else /* !CONFIG_SCHED_ALT */
+ 	struct sched_entity		se;
+ 	struct sched_rt_entity		rt;
+ 	struct sched_dl_entity		dl;
+@@ -795,6 +816,7 @@ struct task_struct {
+ 	unsigned long			core_cookie;
+ 	unsigned int			core_occupation;
+ #endif
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ #ifdef CONFIG_CGROUP_SCHED
+ 	struct task_group		*sched_task_group;
+@@ -1545,6 +1567,15 @@ struct task_struct {
+ 	 */
+ };
+ 
++#ifdef CONFIG_SCHED_ALT
++#define tsk_seruntime(t)		((t)->sched_time)
++/* replace the uncertian rt_timeout with 0UL */
++#define tsk_rttimeout(t)		(0UL)
++#else /* CFS */
++#define tsk_seruntime(t)	((t)->se.sum_exec_runtime)
++#define tsk_rttimeout(t)	((t)->rt.timeout)
++#endif /* !CONFIG_SCHED_ALT */
++
+ static inline struct pid *task_pid(struct task_struct *task)
+ {
+ 	return task->thread_pid;
+diff --git a/include/linux/sched/deadline.h b/include/linux/sched/deadline.h
+index 7c83d4d5a971..fa30f98cb2be 100644
+--- a/include/linux/sched/deadline.h
++++ b/include/linux/sched/deadline.h
+@@ -1,5 +1,24 @@
+ /* SPDX-License-Identifier: GPL-2.0 */
+ 
++#ifdef CONFIG_SCHED_ALT
++
++static inline int dl_task(struct task_struct *p)
++{
++	return 0;
++}
++
++#ifdef CONFIG_SCHED_BMQ
++#define __tsk_deadline(p)	(0UL)
++#endif
++
++#ifdef CONFIG_SCHED_PDS
++#define __tsk_deadline(p)	((((u64) ((p)->prio))<<56) | (p)->deadline)
++#endif
++
++#else
++
++#define __tsk_deadline(p)	((p)->dl.deadline)
++
+ /*
+  * SCHED_DEADLINE tasks has negative priorities, reflecting
+  * the fact that any of them has higher prio than RT and
+@@ -21,6 +40,7 @@ static inline int dl_task(struct task_struct *p)
+ {
+ 	return dl_prio(p->prio);
+ }
++#endif /* CONFIG_SCHED_ALT */
+ 
+ static inline bool dl_time_before(u64 a, u64 b)
+ {
+diff --git a/include/linux/sched/prio.h b/include/linux/sched/prio.h
+index ab83d85e1183..6af9ae681116 100644
+--- a/include/linux/sched/prio.h
++++ b/include/linux/sched/prio.h
+@@ -18,6 +18,32 @@
+ #define MAX_PRIO		(MAX_RT_PRIO + NICE_WIDTH)
+ #define DEFAULT_PRIO		(MAX_RT_PRIO + NICE_WIDTH / 2)
+ 
++#ifdef CONFIG_SCHED_ALT
++
++/* Undefine MAX_PRIO and DEFAULT_PRIO */
++#undef MAX_PRIO
++#undef DEFAULT_PRIO
++
++/* +/- priority levels from the base priority */
++#ifdef CONFIG_SCHED_BMQ
++#define MAX_PRIORITY_ADJ	(7)
++
++#define MIN_NORMAL_PRIO		(MAX_RT_PRIO)
++#define MAX_PRIO		(MIN_NORMAL_PRIO + NICE_WIDTH)
++#define DEFAULT_PRIO		(MIN_NORMAL_PRIO + NICE_WIDTH / 2)
++#endif
++
++#ifdef CONFIG_SCHED_PDS
++#define MAX_PRIORITY_ADJ	(0)
++
++#define MIN_NORMAL_PRIO		(128)
++#define NORMAL_PRIO_NUM		(64)
++#define MAX_PRIO		(MIN_NORMAL_PRIO + NORMAL_PRIO_NUM)
++#define DEFAULT_PRIO		(MAX_PRIO - NICE_WIDTH / 2)
++#endif
++
++#endif /* CONFIG_SCHED_ALT */
++
+ /*
+  * Convert user-nice values [ -20 ... 0 ... 19 ]
+  * to static priority [ MAX_RT_PRIO..MAX_PRIO-1 ],
+diff --git a/include/linux/sched/rt.h b/include/linux/sched/rt.h
+index 994c25640e15..8c050a59ece1 100644
+--- a/include/linux/sched/rt.h
++++ b/include/linux/sched/rt.h
+@@ -24,8 +24,10 @@ static inline bool task_is_realtime(struct task_struct *tsk)
+ 
+ 	if (policy == SCHED_FIFO || policy == SCHED_RR)
+ 		return true;
++#ifndef CONFIG_SCHED_ALT
+ 	if (policy == SCHED_DEADLINE)
+ 		return true;
++#endif
+ 	return false;
+ }
+ 
+diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
+index 816df6cc444e..c8da08e18c91 100644
+--- a/include/linux/sched/topology.h
++++ b/include/linux/sched/topology.h
+@@ -234,7 +234,8 @@ static inline bool cpus_share_cache(int this_cpu, int that_cpu)
+ 
+ #endif	/* !CONFIG_SMP */
+ 
+-#if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL)
++#if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL) && \
++	!defined(CONFIG_SCHED_ALT)
+ extern void rebuild_sched_domains_energy(void);
+ #else
+ static inline void rebuild_sched_domains_energy(void)
+diff --git a/init/Kconfig b/init/Kconfig
+index c88bb30a8b0b..dff86592555a 100644
+--- a/init/Kconfig
++++ b/init/Kconfig
+@@ -629,6 +629,7 @@ config TASK_IO_ACCOUNTING
+ 
+ config PSI
+ 	bool "Pressure stall information tracking"
++	depends on !SCHED_ALT
+ 	help
+ 	  Collect metrics that indicate how overcommitted the CPU, memory,
+ 	  and IO capacity are in the system.
+@@ -817,6 +818,7 @@ menu "Scheduler features"
+ config UCLAMP_TASK
+ 	bool "Enable utilization clamping for RT/FAIR tasks"
+ 	depends on CPU_FREQ_GOV_SCHEDUTIL
++	depends on !SCHED_ALT
+ 	help
+ 	  This feature enables the scheduler to track the clamped utilization
+ 	  of each CPU based on RUNNABLE tasks scheduled on that CPU.
+@@ -863,6 +865,35 @@ config UCLAMP_BUCKETS_COUNT
+ 
+ 	  If in doubt, use the default value.
+ 
++menuconfig SCHED_ALT
++	bool "Alternative CPU Schedulers"
++	default y
++	help
++	  This feature enable alternative CPU scheduler"
++
++if SCHED_ALT
++
++choice
++	prompt "Alternative CPU Scheduler"
++	default SCHED_BMQ
++
++config SCHED_BMQ
++	bool "BMQ CPU scheduler"
++	help
++	  The BitMap Queue CPU scheduler for excellent interactivity and
++	  responsiveness on the desktop and solid scalability on normal
++	  hardware and commodity servers.
++
++config SCHED_PDS
++	bool "PDS CPU scheduler"
++	help
++	  The Priority and Deadline based Skip list multiple queue CPU
++	  Scheduler.
++
++endchoice
++
++endif
++
+ endmenu
+ 
+ #
+@@ -916,6 +947,7 @@ config NUMA_BALANCING
+ 	depends on ARCH_SUPPORTS_NUMA_BALANCING
+ 	depends on !ARCH_WANT_NUMA_VARIABLE_LOCALITY
+ 	depends on SMP && NUMA && MIGRATION && !PREEMPT_RT
++	depends on !SCHED_ALT
+ 	help
+ 	  This option adds support for automatic NUMA aware memory/task placement.
+ 	  The mechanism is quite primitive and is based on migrating memory when
+@@ -1013,6 +1045,7 @@ config FAIR_GROUP_SCHED
+ 	depends on CGROUP_SCHED
+ 	default CGROUP_SCHED
+ 
++if !SCHED_ALT
+ config CFS_BANDWIDTH
+ 	bool "CPU bandwidth provisioning for FAIR_GROUP_SCHED"
+ 	depends on FAIR_GROUP_SCHED
+@@ -1035,6 +1068,7 @@ config RT_GROUP_SCHED
+ 	  realtime bandwidth for them.
+ 	  See Documentation/scheduler/sched-rt-group.rst for more information.
+ 
++endif #!SCHED_ALT
+ endif #CGROUP_SCHED
+ 
+ config SCHED_MM_CID
+@@ -1283,6 +1317,7 @@ config CHECKPOINT_RESTORE
+ 
+ config SCHED_AUTOGROUP
+ 	bool "Automatic process group scheduling"
++	depends on !SCHED_ALT
+ 	select CGROUPS
+ 	select CGROUP_SCHED
+ 	select FAIR_GROUP_SCHED
+diff --git a/init/init_task.c b/init/init_task.c
+index ff6c4b9bfe6b..19e9c662d1a1 100644
+--- a/init/init_task.c
++++ b/init/init_task.c
+@@ -75,9 +75,15 @@ struct task_struct init_task
+ 	.stack		= init_stack,
+ 	.usage		= REFCOUNT_INIT(2),
+ 	.flags		= PF_KTHREAD,
++#ifdef CONFIG_SCHED_ALT
++	.prio		= DEFAULT_PRIO + MAX_PRIORITY_ADJ,
++	.static_prio	= DEFAULT_PRIO,
++	.normal_prio	= DEFAULT_PRIO + MAX_PRIORITY_ADJ,
++#else
+ 	.prio		= MAX_PRIO - 20,
+ 	.static_prio	= MAX_PRIO - 20,
+ 	.normal_prio	= MAX_PRIO - 20,
++#endif
+ 	.policy		= SCHED_NORMAL,
+ 	.cpus_ptr	= &init_task.cpus_mask,
+ 	.user_cpus_ptr	= NULL,
+@@ -88,6 +94,17 @@ struct task_struct init_task
+ 	.restart_block	= {
+ 		.fn = do_no_restart_syscall,
+ 	},
++#ifdef CONFIG_SCHED_ALT
++	.sq_node	= LIST_HEAD_INIT(init_task.sq_node),
++#ifdef CONFIG_SCHED_BMQ
++	.boost_prio	= 0,
++	.sq_idx		= 15,
++#endif
++#ifdef CONFIG_SCHED_PDS
++	.deadline	= 0,
++#endif
++	.time_slice	= HZ,
++#else
+ 	.se		= {
+ 		.group_node 	= LIST_HEAD_INIT(init_task.se.group_node),
+ 	},
+@@ -95,6 +112,7 @@ struct task_struct init_task
+ 		.run_list	= LIST_HEAD_INIT(init_task.rt.run_list),
+ 		.time_slice	= RR_TIMESLICE,
+ 	},
++#endif
+ 	.tasks		= LIST_HEAD_INIT(init_task.tasks),
+ #ifdef CONFIG_SMP
+ 	.pushable_tasks	= PLIST_NODE_INIT(init_task.pushable_tasks, MAX_PRIO),
+diff --git a/kernel/Kconfig.preempt b/kernel/Kconfig.preempt
+index c2f1fd95a821..41654679b1b2 100644
+--- a/kernel/Kconfig.preempt
++++ b/kernel/Kconfig.preempt
+@@ -117,7 +117,7 @@ config PREEMPT_DYNAMIC
+ 
+ config SCHED_CORE
+ 	bool "Core Scheduling for SMT"
+-	depends on SCHED_SMT
++	depends on SCHED_SMT && !SCHED_ALT
+ 	help
+ 	  This option permits Core Scheduling, a means of coordinated task
+ 	  selection across SMT siblings. When enabled -- see
+diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
+index 505d86b16642..2e4139573f72 100644
+--- a/kernel/cgroup/cpuset.c
++++ b/kernel/cgroup/cpuset.c
+@@ -791,7 +791,7 @@ static int validate_change(struct cpuset *cur, struct cpuset *trial)
+ 	return ret;
+ }
+ 
+-#ifdef CONFIG_SMP
++#if defined(CONFIG_SMP) && !defined(CONFIG_SCHED_ALT)
+ /*
+  * Helper routine for generate_sched_domains().
+  * Do cpusets a, b have overlapping effective cpus_allowed masks?
+@@ -1187,7 +1187,7 @@ static void rebuild_sched_domains_locked(void)
+ 	/* Have scheduler rebuild the domains */
+ 	partition_and_rebuild_sched_domains(ndoms, doms, attr);
+ }
+-#else /* !CONFIG_SMP */
++#else /* !CONFIG_SMP || CONFIG_SCHED_ALT */
+ static void rebuild_sched_domains_locked(void)
+ {
+ }
+diff --git a/kernel/delayacct.c b/kernel/delayacct.c
+index e39cb696cfbd..463423572e09 100644
+--- a/kernel/delayacct.c
++++ b/kernel/delayacct.c
+@@ -150,7 +150,7 @@ int delayacct_add_tsk(struct taskstats *d, struct task_struct *tsk)
+ 	 */
+ 	t1 = tsk->sched_info.pcount;
+ 	t2 = tsk->sched_info.run_delay;
+-	t3 = tsk->se.sum_exec_runtime;
++	t3 = tsk_seruntime(tsk);
+ 
+ 	d->cpu_count += t1;
+ 
+diff --git a/kernel/exit.c b/kernel/exit.c
+index f2afdb0add7c..e8fb27fbc3db 100644
+--- a/kernel/exit.c
++++ b/kernel/exit.c
+@@ -172,7 +172,7 @@ static void __exit_signal(struct task_struct *tsk)
+ 			sig->curr_target = next_thread(tsk);
+ 	}
+ 
+-	add_device_randomness((const void*) &tsk->se.sum_exec_runtime,
++	add_device_randomness((const void*) &tsk_seruntime(tsk),
+ 			      sizeof(unsigned long long));
+ 
+ 	/*
+@@ -193,7 +193,7 @@ static void __exit_signal(struct task_struct *tsk)
+ 	sig->inblock += task_io_get_inblock(tsk);
+ 	sig->oublock += task_io_get_oublock(tsk);
+ 	task_io_accounting_add(&sig->ioac, &tsk->ioac);
+-	sig->sum_sched_runtime += tsk->se.sum_exec_runtime;
++	sig->sum_sched_runtime += tsk_seruntime(tsk);
+ 	sig->nr_threads--;
+ 	__unhash_process(tsk, group_dead);
+ 	write_sequnlock(&sig->stats_lock);
+diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
+index 728f434de2bb..0e1082a4e878 100644
+--- a/kernel/locking/rtmutex.c
++++ b/kernel/locking/rtmutex.c
+@@ -337,21 +337,25 @@ static __always_inline void
+ waiter_update_prio(struct rt_mutex_waiter *waiter, struct task_struct *task)
+ {
+ 	waiter->prio = __waiter_prio(task);
+-	waiter->deadline = task->dl.deadline;
++	waiter->deadline = __tsk_deadline(task);
+ }
+ 
+ /*
+  * Only use with rt_mutex_waiter_{less,equal}()
+  */
+ #define task_to_waiter(p)	\
+-	&(struct rt_mutex_waiter){ .prio = __waiter_prio(p), .deadline = (p)->dl.deadline }
++	&(struct rt_mutex_waiter){ .prio = __waiter_prio(p), .deadline = __tsk_deadline(p) }
+ 
+ static __always_inline int rt_mutex_waiter_less(struct rt_mutex_waiter *left,
+ 						struct rt_mutex_waiter *right)
+ {
++#ifdef CONFIG_SCHED_PDS
++	return (left->deadline < right->deadline);
++#else
+ 	if (left->prio < right->prio)
+ 		return 1;
+ 
++#ifndef CONFIG_SCHED_BMQ
+ 	/*
+ 	 * If both waiters have dl_prio(), we check the deadlines of the
+ 	 * associated tasks.
+@@ -360,16 +364,22 @@ static __always_inline int rt_mutex_waiter_less(struct rt_mutex_waiter *left,
+ 	 */
+ 	if (dl_prio(left->prio))
+ 		return dl_time_before(left->deadline, right->deadline);
++#endif
+ 
+ 	return 0;
++#endif
+ }
+ 
+ static __always_inline int rt_mutex_waiter_equal(struct rt_mutex_waiter *left,
+ 						 struct rt_mutex_waiter *right)
+ {
++#ifdef CONFIG_SCHED_PDS
++	return (left->deadline == right->deadline);
++#else
+ 	if (left->prio != right->prio)
+ 		return 0;
+ 
++#ifndef CONFIG_SCHED_BMQ
+ 	/*
+ 	 * If both waiters have dl_prio(), we check the deadlines of the
+ 	 * associated tasks.
+@@ -378,8 +388,10 @@ static __always_inline int rt_mutex_waiter_equal(struct rt_mutex_waiter *left,
+ 	 */
+ 	if (dl_prio(left->prio))
+ 		return left->deadline == right->deadline;
++#endif
+ 
+ 	return 1;
++#endif
+ }
+ 
+ static inline bool rt_mutex_steal(struct rt_mutex_waiter *waiter,
+diff --git a/kernel/sched/Makefile b/kernel/sched/Makefile
+index 976092b7bd45..31d587c16ec1 100644
+--- a/kernel/sched/Makefile
++++ b/kernel/sched/Makefile
+@@ -28,7 +28,12 @@ endif
+ # These compilation units have roughly the same size and complexity - so their
+ # build parallelizes well and finishes roughly at once:
+ #
++ifdef CONFIG_SCHED_ALT
++obj-y += alt_core.o
++obj-$(CONFIG_SCHED_DEBUG) += alt_debug.o
++else
+ obj-y += core.o
+ obj-y += fair.o
++endif
+ obj-y += build_policy.o
+ obj-y += build_utility.o
+diff --git a/kernel/sched/alt_core.c b/kernel/sched/alt_core.c
+new file mode 100644
+index 000000000000..fbf506b16bf4
+--- /dev/null
++++ b/kernel/sched/alt_core.c
+@@ -0,0 +1,8174 @@
++/*
++ *  kernel/sched/alt_core.c
++ *
++ *  Core alternative kernel scheduler code and related syscalls
++ *
++ *  Copyright (C) 1991-2002  Linus Torvalds
++ *
++ *  2009-08-13	Brainfuck deadline scheduling policy by Con Kolivas deletes
++ *		a whole lot of those previous things.
++ *  2017-09-06	Priority and Deadline based Skip list multiple queue kernel
++ *		scheduler by Alfred Chen.
++ *  2019-02-20	BMQ(BitMap Queue) kernel scheduler by Alfred Chen.
++ */
++#include <linux/sched/clock.h>
++#include <linux/sched/cputime.h>
++#include <linux/sched/debug.h>
++#include <linux/sched/isolation.h>
++#include <linux/sched/loadavg.h>
++#include <linux/sched/mm.h>
++#include <linux/sched/nohz.h>
++#include <linux/sched/stat.h>
++#include <linux/sched/wake_q.h>
++
++#include <linux/blkdev.h>
++#include <linux/context_tracking.h>
++#include <linux/cpuset.h>
++#include <linux/delayacct.h>
++#include <linux/init_task.h>
++#include <linux/kcov.h>
++#include <linux/kprobes.h>
++#include <linux/nmi.h>
++#include <linux/scs.h>
++
++#include <uapi/linux/sched/types.h>
++
++#include <asm/irq_regs.h>
++#include <asm/switch_to.h>
++
++#define CREATE_TRACE_POINTS
++#include <trace/events/sched.h>
++#undef CREATE_TRACE_POINTS
++
++#include "sched.h"
++
++#include "pelt.h"
++
++#include "../../io_uring/io-wq.h"
++#include "../smpboot.h"
++
++/*
++ * Export tracepoints that act as a bare tracehook (ie: have no trace event
++ * associated with them) to allow external modules to probe them.
++ */
++EXPORT_TRACEPOINT_SYMBOL_GPL(pelt_irq_tp);
++
++#ifdef CONFIG_SCHED_DEBUG
++#define sched_feat(x)	(1)
++/*
++ * Print a warning if need_resched is set for the given duration (if
++ * LATENCY_WARN is enabled).
++ *
++ * If sysctl_resched_latency_warn_once is set, only one warning will be shown
++ * per boot.
++ */
++__read_mostly int sysctl_resched_latency_warn_ms = 100;
++__read_mostly int sysctl_resched_latency_warn_once = 1;
++#else
++#define sched_feat(x)	(0)
++#endif /* CONFIG_SCHED_DEBUG */
++
++#define ALT_SCHED_VERSION "v6.3-r1"
++
++/*
++ * Compile time debug macro
++ * #define ALT_SCHED_DEBUG
++ */
++
++/* rt_prio(prio) defined in include/linux/sched/rt.h */
++#define rt_task(p)		rt_prio((p)->prio)
++#define rt_policy(policy)	((policy) == SCHED_FIFO || (policy) == SCHED_RR)
++#define task_has_rt_policy(p)	(rt_policy((p)->policy))
++
++#define STOP_PRIO		(MAX_RT_PRIO - 1)
++
++/* Default time slice is 4 in ms, can be set via kernel parameter "sched_timeslice" */
++u64 sched_timeslice_ns __read_mostly = (4 << 20);
++
++static inline void requeue_task(struct task_struct *p, struct rq *rq, int idx);
++
++#ifdef CONFIG_SCHED_BMQ
++#include "bmq.h"
++#endif
++#ifdef CONFIG_SCHED_PDS
++#include "pds.h"
++#endif
++
++struct affinity_context {
++	const struct cpumask *new_mask;
++	struct cpumask *user_mask;
++	unsigned int flags;
++};
++
++static int __init sched_timeslice(char *str)
++{
++	int timeslice_ms;
++
++	get_option(&str, &timeslice_ms);
++	if (2 != timeslice_ms)
++		timeslice_ms = 4;
++	sched_timeslice_ns = timeslice_ms << 20;
++	sched_timeslice_imp(timeslice_ms);
++
++	return 0;
++}
++early_param("sched_timeslice", sched_timeslice);
++
++/* Reschedule if less than this many μs left */
++#define RESCHED_NS		(100 << 10)
++
++/**
++ * sched_yield_type - Choose what sort of yield sched_yield will perform.
++ * 0: No yield.
++ * 1: Deboost and requeue task. (default)
++ * 2: Set rq skip task.
++ */
++int sched_yield_type __read_mostly = 1;
++
++#ifdef CONFIG_SMP
++static cpumask_t sched_rq_pending_mask ____cacheline_aligned_in_smp;
++
++DEFINE_PER_CPU_ALIGNED(cpumask_t [NR_CPU_AFFINITY_LEVELS], sched_cpu_topo_masks);
++DEFINE_PER_CPU_ALIGNED(cpumask_t *, sched_cpu_llc_mask);
++DEFINE_PER_CPU_ALIGNED(cpumask_t *, sched_cpu_topo_end_mask);
++
++#ifdef CONFIG_SCHED_SMT
++DEFINE_STATIC_KEY_FALSE(sched_smt_present);
++EXPORT_SYMBOL_GPL(sched_smt_present);
++#endif
++
++/*
++ * Keep a unique ID per domain (we use the first CPUs number in the cpumask of
++ * the domain), this allows us to quickly tell if two cpus are in the same cache
++ * domain, see cpus_share_cache().
++ */
++DEFINE_PER_CPU(int, sd_llc_id);
++#endif /* CONFIG_SMP */
++
++static DEFINE_MUTEX(sched_hotcpu_mutex);
++
++DEFINE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
++
++#ifndef prepare_arch_switch
++# define prepare_arch_switch(next)	do { } while (0)
++#endif
++#ifndef finish_arch_post_lock_switch
++# define finish_arch_post_lock_switch()	do { } while (0)
++#endif
++
++#ifdef CONFIG_SCHED_SMT
++static cpumask_t sched_sg_idle_mask ____cacheline_aligned_in_smp;
++#endif
++static cpumask_t sched_preempt_mask[SCHED_QUEUE_BITS] ____cacheline_aligned_in_smp;
++static cpumask_t *const sched_idle_mask = &sched_preempt_mask[0];
++
++/* task function */
++static inline const struct cpumask *task_user_cpus(struct task_struct *p)
++{
++	if (!p->user_cpus_ptr)
++		return cpu_possible_mask; /* &init_task.cpus_mask */
++	return p->user_cpus_ptr;
++}
++
++/* sched_queue related functions */
++static inline void sched_queue_init(struct sched_queue *q)
++{
++	int i;
++
++	bitmap_zero(q->bitmap, SCHED_QUEUE_BITS);
++	for(i = 0; i < SCHED_LEVELS; i++)
++		INIT_LIST_HEAD(&q->heads[i]);
++}
++
++/*
++ * Init idle task and put into queue structure of rq
++ * IMPORTANT: may be called multiple times for a single cpu
++ */
++static inline void sched_queue_init_idle(struct sched_queue *q,
++					 struct task_struct *idle)
++{
++	idle->sq_idx = IDLE_TASK_SCHED_PRIO;
++	INIT_LIST_HEAD(&q->heads[idle->sq_idx]);
++	list_add(&idle->sq_node, &q->heads[idle->sq_idx]);
++}
++
++static inline void
++clear_recorded_preempt_mask(int pr, int low, int high, int cpu)
++{
++	if (low < pr && pr <= high)
++		cpumask_clear_cpu(cpu, sched_preempt_mask + SCHED_QUEUE_BITS - pr);
++}
++
++static inline void
++set_recorded_preempt_mask(int pr, int low, int high, int cpu)
++{
++	if (low < pr && pr <= high)
++		cpumask_set_cpu(cpu, sched_preempt_mask + SCHED_QUEUE_BITS - pr);
++}
++
++static atomic_t sched_prio_record = ATOMIC_INIT(0);
++
++/* water mark related functions */
++static inline void update_sched_preempt_mask(struct rq *rq)
++{
++	unsigned long prio = find_first_bit(rq->queue.bitmap, SCHED_QUEUE_BITS);
++	unsigned long last_prio = rq->prio;
++	int cpu, pr;
++
++	if (prio == last_prio)
++		return;
++
++	rq->prio = prio;
++	cpu = cpu_of(rq);
++	pr = atomic_read(&sched_prio_record);
++
++	if (prio < last_prio) {
++		if (IDLE_TASK_SCHED_PRIO == last_prio) {
++#ifdef CONFIG_SCHED_SMT
++			if (static_branch_likely(&sched_smt_present))
++				cpumask_andnot(&sched_sg_idle_mask,
++					       &sched_sg_idle_mask, cpu_smt_mask(cpu));
++#endif
++			cpumask_clear_cpu(cpu, sched_idle_mask);
++			last_prio -= 2;
++		}
++		clear_recorded_preempt_mask(pr, prio, last_prio, cpu);
++
++		return;
++	}
++	/* last_prio < prio */
++	if (IDLE_TASK_SCHED_PRIO == prio) {
++#ifdef CONFIG_SCHED_SMT
++		if (static_branch_likely(&sched_smt_present) &&
++		    cpumask_intersects(cpu_smt_mask(cpu), sched_idle_mask))
++			cpumask_or(&sched_sg_idle_mask,
++				   &sched_sg_idle_mask, cpu_smt_mask(cpu));
++#endif
++		cpumask_set_cpu(cpu, sched_idle_mask);
++		prio -= 2;
++	}
++	set_recorded_preempt_mask(pr, last_prio, prio, cpu);
++}
++
++/*
++ * This routine assume that the idle task always in queue
++ */
++static inline struct task_struct *sched_rq_first_task(struct rq *rq)
++{
++	const struct list_head *head = &rq->queue.heads[sched_prio2idx(rq->prio, rq)];
++
++	return list_first_entry(head, struct task_struct, sq_node);
++}
++
++static inline struct task_struct *
++sched_rq_next_task(struct task_struct *p, struct rq *rq)
++{
++	unsigned long idx = p->sq_idx;
++	struct list_head *head = &rq->queue.heads[idx];
++
++	if (list_is_last(&p->sq_node, head)) {
++		idx = find_next_bit(rq->queue.bitmap, SCHED_QUEUE_BITS,
++				    sched_idx2prio(idx, rq) + 1);
++		head = &rq->queue.heads[sched_prio2idx(idx, rq)];
++
++		return list_first_entry(head, struct task_struct, sq_node);
++	}
++
++	return list_next_entry(p, sq_node);
++}
++
++static inline struct task_struct *rq_runnable_task(struct rq *rq)
++{
++	struct task_struct *next = sched_rq_first_task(rq);
++
++	if (unlikely(next == rq->skip))
++		next = sched_rq_next_task(next, rq);
++
++	return next;
++}
++
++/*
++ * Serialization rules:
++ *
++ * Lock order:
++ *
++ *   p->pi_lock
++ *     rq->lock
++ *       hrtimer_cpu_base->lock (hrtimer_start() for bandwidth controls)
++ *
++ *  rq1->lock
++ *    rq2->lock  where: rq1 < rq2
++ *
++ * Regular state:
++ *
++ * Normal scheduling state is serialized by rq->lock. __schedule() takes the
++ * local CPU's rq->lock, it optionally removes the task from the runqueue and
++ * always looks at the local rq data structures to find the most eligible task
++ * to run next.
++ *
++ * Task enqueue is also under rq->lock, possibly taken from another CPU.
++ * Wakeups from another LLC domain might use an IPI to transfer the enqueue to
++ * the local CPU to avoid bouncing the runqueue state around [ see
++ * ttwu_queue_wakelist() ]
++ *
++ * Task wakeup, specifically wakeups that involve migration, are horribly
++ * complicated to avoid having to take two rq->locks.
++ *
++ * Special state:
++ *
++ * System-calls and anything external will use task_rq_lock() which acquires
++ * both p->pi_lock and rq->lock. As a consequence the state they change is
++ * stable while holding either lock:
++ *
++ *  - sched_setaffinity()/
++ *    set_cpus_allowed_ptr():	p->cpus_ptr, p->nr_cpus_allowed
++ *  - set_user_nice():		p->se.load, p->*prio
++ *  - __sched_setscheduler():	p->sched_class, p->policy, p->*prio,
++ *				p->se.load, p->rt_priority,
++ *				p->dl.dl_{runtime, deadline, period, flags, bw, density}
++ *  - sched_setnuma():		p->numa_preferred_nid
++ *  - sched_move_task():        p->sched_task_group
++ *  - uclamp_update_active()	p->uclamp*
++ *
++ * p->state <- TASK_*:
++ *
++ *   is changed locklessly using set_current_state(), __set_current_state() or
++ *   set_special_state(), see their respective comments, or by
++ *   try_to_wake_up(). This latter uses p->pi_lock to serialize against
++ *   concurrent self.
++ *
++ * p->on_rq <- { 0, 1 = TASK_ON_RQ_QUEUED, 2 = TASK_ON_RQ_MIGRATING }:
++ *
++ *   is set by activate_task() and cleared by deactivate_task(), under
++ *   rq->lock. Non-zero indicates the task is runnable, the special
++ *   ON_RQ_MIGRATING state is used for migration without holding both
++ *   rq->locks. It indicates task_cpu() is not stable, see task_rq_lock().
++ *
++ * p->on_cpu <- { 0, 1 }:
++ *
++ *   is set by prepare_task() and cleared by finish_task() such that it will be
++ *   set before p is scheduled-in and cleared after p is scheduled-out, both
++ *   under rq->lock. Non-zero indicates the task is running on its CPU.
++ *
++ *   [ The astute reader will observe that it is possible for two tasks on one
++ *     CPU to have ->on_cpu = 1 at the same time. ]
++ *
++ * task_cpu(p): is changed by set_task_cpu(), the rules are:
++ *
++ *  - Don't call set_task_cpu() on a blocked task:
++ *
++ *    We don't care what CPU we're not running on, this simplifies hotplug,
++ *    the CPU assignment of blocked tasks isn't required to be valid.
++ *
++ *  - for try_to_wake_up(), called under p->pi_lock:
++ *
++ *    This allows try_to_wake_up() to only take one rq->lock, see its comment.
++ *
++ *  - for migration called under rq->lock:
++ *    [ see task_on_rq_migrating() in task_rq_lock() ]
++ *
++ *    o move_queued_task()
++ *    o detach_task()
++ *
++ *  - for migration called under double_rq_lock():
++ *
++ *    o __migrate_swap_task()
++ *    o push_rt_task() / pull_rt_task()
++ *    o push_dl_task() / pull_dl_task()
++ *    o dl_task_offline_migration()
++ *
++ */
++
++/*
++ * Context: p->pi_lock
++ */
++static inline struct rq
++*__task_access_lock(struct task_struct *p, raw_spinlock_t **plock)
++{
++	struct rq *rq;
++	for (;;) {
++		rq = task_rq(p);
++		if (p->on_cpu || task_on_rq_queued(p)) {
++			raw_spin_lock(&rq->lock);
++			if (likely((p->on_cpu || task_on_rq_queued(p))
++				   && rq == task_rq(p))) {
++				*plock = &rq->lock;
++				return rq;
++			}
++			raw_spin_unlock(&rq->lock);
++		} else if (task_on_rq_migrating(p)) {
++			do {
++				cpu_relax();
++			} while (unlikely(task_on_rq_migrating(p)));
++		} else {
++			*plock = NULL;
++			return rq;
++		}
++	}
++}
++
++static inline void
++__task_access_unlock(struct task_struct *p, raw_spinlock_t *lock)
++{
++	if (NULL != lock)
++		raw_spin_unlock(lock);
++}
++
++static inline struct rq
++*task_access_lock_irqsave(struct task_struct *p, raw_spinlock_t **plock,
++			  unsigned long *flags)
++{
++	struct rq *rq;
++	for (;;) {
++		rq = task_rq(p);
++		if (p->on_cpu || task_on_rq_queued(p)) {
++			raw_spin_lock_irqsave(&rq->lock, *flags);
++			if (likely((p->on_cpu || task_on_rq_queued(p))
++				   && rq == task_rq(p))) {
++				*plock = &rq->lock;
++				return rq;
++			}
++			raw_spin_unlock_irqrestore(&rq->lock, *flags);
++		} else if (task_on_rq_migrating(p)) {
++			do {
++				cpu_relax();
++			} while (unlikely(task_on_rq_migrating(p)));
++		} else {
++			raw_spin_lock_irqsave(&p->pi_lock, *flags);
++			if (likely(!p->on_cpu && !p->on_rq &&
++				   rq == task_rq(p))) {
++				*plock = &p->pi_lock;
++				return rq;
++			}
++			raw_spin_unlock_irqrestore(&p->pi_lock, *flags);
++		}
++	}
++}
++
++static inline void
++task_access_unlock_irqrestore(struct task_struct *p, raw_spinlock_t *lock,
++			      unsigned long *flags)
++{
++	raw_spin_unlock_irqrestore(lock, *flags);
++}
++
++/*
++ * __task_rq_lock - lock the rq @p resides on.
++ */
++struct rq *__task_rq_lock(struct task_struct *p, struct rq_flags *rf)
++	__acquires(rq->lock)
++{
++	struct rq *rq;
++
++	lockdep_assert_held(&p->pi_lock);
++
++	for (;;) {
++		rq = task_rq(p);
++		raw_spin_lock(&rq->lock);
++		if (likely(rq == task_rq(p) && !task_on_rq_migrating(p)))
++			return rq;
++		raw_spin_unlock(&rq->lock);
++
++		while (unlikely(task_on_rq_migrating(p)))
++			cpu_relax();
++	}
++}
++
++/*
++ * task_rq_lock - lock p->pi_lock and lock the rq @p resides on.
++ */
++struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf)
++	__acquires(p->pi_lock)
++	__acquires(rq->lock)
++{
++	struct rq *rq;
++
++	for (;;) {
++		raw_spin_lock_irqsave(&p->pi_lock, rf->flags);
++		rq = task_rq(p);
++		raw_spin_lock(&rq->lock);
++		/*
++		 *	move_queued_task()		task_rq_lock()
++		 *
++		 *	ACQUIRE (rq->lock)
++		 *	[S] ->on_rq = MIGRATING		[L] rq = task_rq()
++		 *	WMB (__set_task_cpu())		ACQUIRE (rq->lock);
++		 *	[S] ->cpu = new_cpu		[L] task_rq()
++		 *					[L] ->on_rq
++		 *	RELEASE (rq->lock)
++		 *
++		 * If we observe the old CPU in task_rq_lock(), the acquire of
++		 * the old rq->lock will fully serialize against the stores.
++		 *
++		 * If we observe the new CPU in task_rq_lock(), the address
++		 * dependency headed by '[L] rq = task_rq()' and the acquire
++		 * will pair with the WMB to ensure we then also see migrating.
++		 */
++		if (likely(rq == task_rq(p) && !task_on_rq_migrating(p))) {
++			return rq;
++		}
++		raw_spin_unlock(&rq->lock);
++		raw_spin_unlock_irqrestore(&p->pi_lock, rf->flags);
++
++		while (unlikely(task_on_rq_migrating(p)))
++			cpu_relax();
++	}
++}
++
++static inline void
++rq_lock_irqsave(struct rq *rq, struct rq_flags *rf)
++	__acquires(rq->lock)
++{
++	raw_spin_lock_irqsave(&rq->lock, rf->flags);
++}
++
++static inline void
++rq_unlock_irqrestore(struct rq *rq, struct rq_flags *rf)
++	__releases(rq->lock)
++{
++	raw_spin_unlock_irqrestore(&rq->lock, rf->flags);
++}
++
++void raw_spin_rq_lock_nested(struct rq *rq, int subclass)
++{
++	raw_spinlock_t *lock;
++
++	/* Matches synchronize_rcu() in __sched_core_enable() */
++	preempt_disable();
++
++	for (;;) {
++		lock = __rq_lockp(rq);
++		raw_spin_lock_nested(lock, subclass);
++		if (likely(lock == __rq_lockp(rq))) {
++			/* preempt_count *MUST* be > 1 */
++			preempt_enable_no_resched();
++			return;
++		}
++		raw_spin_unlock(lock);
++	}
++}
++
++void raw_spin_rq_unlock(struct rq *rq)
++{
++	raw_spin_unlock(rq_lockp(rq));
++}
++
++/*
++ * RQ-clock updating methods:
++ */
++
++static void update_rq_clock_task(struct rq *rq, s64 delta)
++{
++/*
++ * In theory, the compile should just see 0 here, and optimize out the call
++ * to sched_rt_avg_update. But I don't trust it...
++ */
++	s64 __maybe_unused steal = 0, irq_delta = 0;
++
++#ifdef CONFIG_IRQ_TIME_ACCOUNTING
++	irq_delta = irq_time_read(cpu_of(rq)) - rq->prev_irq_time;
++
++	/*
++	 * Since irq_time is only updated on {soft,}irq_exit, we might run into
++	 * this case when a previous update_rq_clock() happened inside a
++	 * {soft,}irq region.
++	 *
++	 * When this happens, we stop ->clock_task and only update the
++	 * prev_irq_time stamp to account for the part that fit, so that a next
++	 * update will consume the rest. This ensures ->clock_task is
++	 * monotonic.
++	 *
++	 * It does however cause some slight miss-attribution of {soft,}irq
++	 * time, a more accurate solution would be to update the irq_time using
++	 * the current rq->clock timestamp, except that would require using
++	 * atomic ops.
++	 */
++	if (irq_delta > delta)
++		irq_delta = delta;
++
++	rq->prev_irq_time += irq_delta;
++	delta -= irq_delta;
++#endif
++#ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
++	if (static_key_false((&paravirt_steal_rq_enabled))) {
++		steal = paravirt_steal_clock(cpu_of(rq));
++		steal -= rq->prev_steal_time_rq;
++
++		if (unlikely(steal > delta))
++			steal = delta;
++
++		rq->prev_steal_time_rq += steal;
++		delta -= steal;
++	}
++#endif
++
++	rq->clock_task += delta;
++
++#ifdef CONFIG_HAVE_SCHED_AVG_IRQ
++	if ((irq_delta + steal))
++		update_irq_load_avg(rq, irq_delta + steal);
++#endif
++}
++
++static inline void update_rq_clock(struct rq *rq)
++{
++	s64 delta = sched_clock_cpu(cpu_of(rq)) - rq->clock;
++
++	if (unlikely(delta <= 0))
++		return;
++	rq->clock += delta;
++	update_rq_time_edge(rq);
++	update_rq_clock_task(rq, delta);
++}
++
++/*
++ * RQ Load update routine
++ */
++#define RQ_LOAD_HISTORY_BITS		(sizeof(s32) * 8ULL)
++#define RQ_UTIL_SHIFT			(8)
++#define RQ_LOAD_HISTORY_TO_UTIL(l)	(((l) >> (RQ_LOAD_HISTORY_BITS - 1 - RQ_UTIL_SHIFT)) & 0xff)
++
++#define LOAD_BLOCK(t)		((t) >> 17)
++#define LOAD_HALF_BLOCK(t)	((t) >> 16)
++#define BLOCK_MASK(t)		((t) & ((0x01 << 18) - 1))
++#define LOAD_BLOCK_BIT(b)	(1UL << (RQ_LOAD_HISTORY_BITS - 1 - (b)))
++#define CURRENT_LOAD_BIT	LOAD_BLOCK_BIT(0)
++
++static inline void rq_load_update(struct rq *rq)
++{
++	u64 time = rq->clock;
++	u64 delta = min(LOAD_BLOCK(time) - LOAD_BLOCK(rq->load_stamp),
++			RQ_LOAD_HISTORY_BITS - 1);
++	u64 prev = !!(rq->load_history & CURRENT_LOAD_BIT);
++	u64 curr = !!rq->nr_running;
++
++	if (delta) {
++		rq->load_history = rq->load_history >> delta;
++
++		if (delta < RQ_UTIL_SHIFT) {
++			rq->load_block += (~BLOCK_MASK(rq->load_stamp)) * prev;
++			if (!!LOAD_HALF_BLOCK(rq->load_block) ^ curr)
++				rq->load_history ^= LOAD_BLOCK_BIT(delta);
++		}
++
++		rq->load_block = BLOCK_MASK(time) * prev;
++	} else {
++		rq->load_block += (time - rq->load_stamp) * prev;
++	}
++	if (prev ^ curr)
++		rq->load_history ^= CURRENT_LOAD_BIT;
++	rq->load_stamp = time;
++}
++
++unsigned long rq_load_util(struct rq *rq, unsigned long max)
++{
++	return RQ_LOAD_HISTORY_TO_UTIL(rq->load_history) * (max >> RQ_UTIL_SHIFT);
++}
++
++#ifdef CONFIG_SMP
++unsigned long sched_cpu_util(int cpu)
++{
++	return rq_load_util(cpu_rq(cpu), arch_scale_cpu_capacity(cpu));
++}
++#endif /* CONFIG_SMP */
++
++#ifdef CONFIG_CPU_FREQ
++/**
++ * cpufreq_update_util - Take a note about CPU utilization changes.
++ * @rq: Runqueue to carry out the update for.
++ * @flags: Update reason flags.
++ *
++ * This function is called by the scheduler on the CPU whose utilization is
++ * being updated.
++ *
++ * It can only be called from RCU-sched read-side critical sections.
++ *
++ * The way cpufreq is currently arranged requires it to evaluate the CPU
++ * performance state (frequency/voltage) on a regular basis to prevent it from
++ * being stuck in a completely inadequate performance level for too long.
++ * That is not guaranteed to happen if the updates are only triggered from CFS
++ * and DL, though, because they may not be coming in if only RT tasks are
++ * active all the time (or there are RT tasks only).
++ *
++ * As a workaround for that issue, this function is called periodically by the
++ * RT sched class to trigger extra cpufreq updates to prevent it from stalling,
++ * but that really is a band-aid.  Going forward it should be replaced with
++ * solutions targeted more specifically at RT tasks.
++ */
++static inline void cpufreq_update_util(struct rq *rq, unsigned int flags)
++{
++	struct update_util_data *data;
++
++#ifdef CONFIG_SMP
++	rq_load_update(rq);
++#endif
++	data = rcu_dereference_sched(*per_cpu_ptr(&cpufreq_update_util_data,
++						  cpu_of(rq)));
++	if (data)
++		data->func(data, rq_clock(rq), flags);
++}
++#else
++static inline void cpufreq_update_util(struct rq *rq, unsigned int flags)
++{
++#ifdef CONFIG_SMP
++	rq_load_update(rq);
++#endif
++}
++#endif /* CONFIG_CPU_FREQ */
++
++#ifdef CONFIG_NO_HZ_FULL
++/*
++ * Tick may be needed by tasks in the runqueue depending on their policy and
++ * requirements. If tick is needed, lets send the target an IPI to kick it out
++ * of nohz mode if necessary.
++ */
++static inline void sched_update_tick_dependency(struct rq *rq)
++{
++	int cpu = cpu_of(rq);
++
++	if (!tick_nohz_full_cpu(cpu))
++		return;
++
++	if (rq->nr_running < 2)
++		tick_nohz_dep_clear_cpu(cpu, TICK_DEP_BIT_SCHED);
++	else
++		tick_nohz_dep_set_cpu(cpu, TICK_DEP_BIT_SCHED);
++}
++#else /* !CONFIG_NO_HZ_FULL */
++static inline void sched_update_tick_dependency(struct rq *rq) { }
++#endif
++
++bool sched_task_on_rq(struct task_struct *p)
++{
++	return task_on_rq_queued(p);
++}
++
++unsigned long get_wchan(struct task_struct *p)
++{
++	unsigned long ip = 0;
++	unsigned int state;
++
++	if (!p || p == current)
++		return 0;
++
++	/* Only get wchan if task is blocked and we can keep it that way. */
++	raw_spin_lock_irq(&p->pi_lock);
++	state = READ_ONCE(p->__state);
++	smp_rmb(); /* see try_to_wake_up() */
++	if (state != TASK_RUNNING && state != TASK_WAKING && !p->on_rq)
++		ip = __get_wchan(p);
++	raw_spin_unlock_irq(&p->pi_lock);
++
++	return ip;
++}
++
++/*
++ * Add/Remove/Requeue task to/from the runqueue routines
++ * Context: rq->lock
++ */
++#define __SCHED_DEQUEUE_TASK(p, rq, flags, func)				\
++	sched_info_dequeue(rq, p);						\
++										\
++	list_del(&p->sq_node);							\
++	if (list_empty(&rq->queue.heads[p->sq_idx])) { 				\
++		clear_bit(sched_idx2prio(p->sq_idx, rq), rq->queue.bitmap);	\
++		func;								\
++	}
++
++#define __SCHED_ENQUEUE_TASK(p, rq, flags)				\
++	sched_info_enqueue(rq, p);					\
++									\
++	p->sq_idx = task_sched_prio_idx(p, rq);				\
++	list_add_tail(&p->sq_node, &rq->queue.heads[p->sq_idx]);	\
++	set_bit(sched_idx2prio(p->sq_idx, rq), rq->queue.bitmap);
++
++static inline void dequeue_task(struct task_struct *p, struct rq *rq, int flags)
++{
++#ifdef ALT_SCHED_DEBUG
++	lockdep_assert_held(&rq->lock);
++
++	/*printk(KERN_INFO "sched: dequeue(%d) %px %016llx\n", cpu_of(rq), p, p->deadline);*/
++	WARN_ONCE(task_rq(p) != rq, "sched: dequeue task reside on cpu%d from cpu%d\n",
++		  task_cpu(p), cpu_of(rq));
++#endif
++
++	__SCHED_DEQUEUE_TASK(p, rq, flags, update_sched_preempt_mask(rq));
++	--rq->nr_running;
++#ifdef CONFIG_SMP
++	if (1 == rq->nr_running)
++		cpumask_clear_cpu(cpu_of(rq), &sched_rq_pending_mask);
++#endif
++
++	sched_update_tick_dependency(rq);
++}
++
++static inline void enqueue_task(struct task_struct *p, struct rq *rq, int flags)
++{
++#ifdef ALT_SCHED_DEBUG
++	lockdep_assert_held(&rq->lock);
++
++	/*printk(KERN_INFO "sched: enqueue(%d) %px %d\n", cpu_of(rq), p, p->prio);*/
++	WARN_ONCE(task_rq(p) != rq, "sched: enqueue task reside on cpu%d to cpu%d\n",
++		  task_cpu(p), cpu_of(rq));
++#endif
++
++	__SCHED_ENQUEUE_TASK(p, rq, flags);
++	update_sched_preempt_mask(rq);
++	++rq->nr_running;
++#ifdef CONFIG_SMP
++	if (2 == rq->nr_running)
++		cpumask_set_cpu(cpu_of(rq), &sched_rq_pending_mask);
++#endif
++
++	sched_update_tick_dependency(rq);
++}
++
++static inline void requeue_task(struct task_struct *p, struct rq *rq, int idx)
++{
++#ifdef ALT_SCHED_DEBUG
++	lockdep_assert_held(&rq->lock);
++	/*printk(KERN_INFO "sched: requeue(%d) %px %016llx\n", cpu_of(rq), p, p->deadline);*/
++	WARN_ONCE(task_rq(p) != rq, "sched: cpu[%d] requeue task reside on cpu%d\n",
++		  cpu_of(rq), task_cpu(p));
++#endif
++
++	list_del(&p->sq_node);
++	list_add_tail(&p->sq_node, &rq->queue.heads[idx]);
++	if (idx != p->sq_idx) {
++		if (list_empty(&rq->queue.heads[p->sq_idx]))
++			clear_bit(sched_idx2prio(p->sq_idx, rq), rq->queue.bitmap);
++		p->sq_idx = idx;
++		set_bit(sched_idx2prio(p->sq_idx, rq), rq->queue.bitmap);
++		update_sched_preempt_mask(rq);
++	}
++}
++
++/*
++ * cmpxchg based fetch_or, macro so it works for different integer types
++ */
++#define fetch_or(ptr, mask)						\
++	({								\
++		typeof(ptr) _ptr = (ptr);				\
++		typeof(mask) _mask = (mask);				\
++		typeof(*_ptr) _val = *_ptr;				\
++									\
++		do {							\
++		} while (!try_cmpxchg(_ptr, &_val, _val | _mask));	\
++	_val;								\
++})
++
++#if defined(CONFIG_SMP) && defined(TIF_POLLING_NRFLAG)
++/*
++ * Atomically set TIF_NEED_RESCHED and test for TIF_POLLING_NRFLAG,
++ * this avoids any races wrt polling state changes and thereby avoids
++ * spurious IPIs.
++ */
++static inline bool set_nr_and_not_polling(struct task_struct *p)
++{
++	struct thread_info *ti = task_thread_info(p);
++	return !(fetch_or(&ti->flags, _TIF_NEED_RESCHED) & _TIF_POLLING_NRFLAG);
++}
++
++/*
++ * Atomically set TIF_NEED_RESCHED if TIF_POLLING_NRFLAG is set.
++ *
++ * If this returns true, then the idle task promises to call
++ * sched_ttwu_pending() and reschedule soon.
++ */
++static bool set_nr_if_polling(struct task_struct *p)
++{
++	struct thread_info *ti = task_thread_info(p);
++	typeof(ti->flags) val = READ_ONCE(ti->flags);
++
++	for (;;) {
++		if (!(val & _TIF_POLLING_NRFLAG))
++			return false;
++		if (val & _TIF_NEED_RESCHED)
++			return true;
++		if (try_cmpxchg(&ti->flags, &val, val | _TIF_NEED_RESCHED))
++			break;
++	}
++	return true;
++}
++
++#else
++static inline bool set_nr_and_not_polling(struct task_struct *p)
++{
++	set_tsk_need_resched(p);
++	return true;
++}
++
++#ifdef CONFIG_SMP
++static inline bool set_nr_if_polling(struct task_struct *p)
++{
++	return false;
++}
++#endif
++#endif
++
++static bool __wake_q_add(struct wake_q_head *head, struct task_struct *task)
++{
++	struct wake_q_node *node = &task->wake_q;
++
++	/*
++	 * Atomically grab the task, if ->wake_q is !nil already it means
++	 * it's already queued (either by us or someone else) and will get the
++	 * wakeup due to that.
++	 *
++	 * In order to ensure that a pending wakeup will observe our pending
++	 * state, even in the failed case, an explicit smp_mb() must be used.
++	 */
++	smp_mb__before_atomic();
++	if (unlikely(cmpxchg_relaxed(&node->next, NULL, WAKE_Q_TAIL)))
++		return false;
++
++	/*
++	 * The head is context local, there can be no concurrency.
++	 */
++	*head->lastp = node;
++	head->lastp = &node->next;
++	return true;
++}
++
++/**
++ * wake_q_add() - queue a wakeup for 'later' waking.
++ * @head: the wake_q_head to add @task to
++ * @task: the task to queue for 'later' wakeup
++ *
++ * Queue a task for later wakeup, most likely by the wake_up_q() call in the
++ * same context, _HOWEVER_ this is not guaranteed, the wakeup can come
++ * instantly.
++ *
++ * This function must be used as-if it were wake_up_process(); IOW the task
++ * must be ready to be woken at this location.
++ */
++void wake_q_add(struct wake_q_head *head, struct task_struct *task)
++{
++	if (__wake_q_add(head, task))
++		get_task_struct(task);
++}
++
++/**
++ * wake_q_add_safe() - safely queue a wakeup for 'later' waking.
++ * @head: the wake_q_head to add @task to
++ * @task: the task to queue for 'later' wakeup
++ *
++ * Queue a task for later wakeup, most likely by the wake_up_q() call in the
++ * same context, _HOWEVER_ this is not guaranteed, the wakeup can come
++ * instantly.
++ *
++ * This function must be used as-if it were wake_up_process(); IOW the task
++ * must be ready to be woken at this location.
++ *
++ * This function is essentially a task-safe equivalent to wake_q_add(). Callers
++ * that already hold reference to @task can call the 'safe' version and trust
++ * wake_q to do the right thing depending whether or not the @task is already
++ * queued for wakeup.
++ */
++void wake_q_add_safe(struct wake_q_head *head, struct task_struct *task)
++{
++	if (!__wake_q_add(head, task))
++		put_task_struct(task);
++}
++
++void wake_up_q(struct wake_q_head *head)
++{
++	struct wake_q_node *node = head->first;
++
++	while (node != WAKE_Q_TAIL) {
++		struct task_struct *task;
++
++		task = container_of(node, struct task_struct, wake_q);
++		/* task can safely be re-inserted now: */
++		node = node->next;
++		task->wake_q.next = NULL;
++
++		/*
++		 * wake_up_process() executes a full barrier, which pairs with
++		 * the queueing in wake_q_add() so as not to miss wakeups.
++		 */
++		wake_up_process(task);
++		put_task_struct(task);
++	}
++}
++
++/*
++ * resched_curr - mark rq's current task 'to be rescheduled now'.
++ *
++ * On UP this means the setting of the need_resched flag, on SMP it
++ * might also involve a cross-CPU call to trigger the scheduler on
++ * the target CPU.
++ */
++void resched_curr(struct rq *rq)
++{
++	struct task_struct *curr = rq->curr;
++	int cpu;
++
++	lockdep_assert_held(&rq->lock);
++
++	if (test_tsk_need_resched(curr))
++		return;
++
++	cpu = cpu_of(rq);
++	if (cpu == smp_processor_id()) {
++		set_tsk_need_resched(curr);
++		set_preempt_need_resched();
++		return;
++	}
++
++	if (set_nr_and_not_polling(curr))
++		smp_send_reschedule(cpu);
++	else
++		trace_sched_wake_idle_without_ipi(cpu);
++}
++
++void resched_cpu(int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++	unsigned long flags;
++
++	raw_spin_lock_irqsave(&rq->lock, flags);
++	if (cpu_online(cpu) || cpu == smp_processor_id())
++		resched_curr(cpu_rq(cpu));
++	raw_spin_unlock_irqrestore(&rq->lock, flags);
++}
++
++#ifdef CONFIG_SMP
++#ifdef CONFIG_NO_HZ_COMMON
++void nohz_balance_enter_idle(int cpu) {}
++
++void select_nohz_load_balancer(int stop_tick) {}
++
++void set_cpu_sd_state_idle(void) {}
++
++/*
++ * In the semi idle case, use the nearest busy CPU for migrating timers
++ * from an idle CPU.  This is good for power-savings.
++ *
++ * We don't do similar optimization for completely idle system, as
++ * selecting an idle CPU will add more delays to the timers than intended
++ * (as that CPU's timer base may not be uptodate wrt jiffies etc).
++ */
++int get_nohz_timer_target(void)
++{
++	int i, cpu = smp_processor_id(), default_cpu = -1;
++	struct cpumask *mask;
++	const struct cpumask *hk_mask;
++
++	if (housekeeping_cpu(cpu, HK_TYPE_TIMER)) {
++		if (!idle_cpu(cpu))
++			return cpu;
++		default_cpu = cpu;
++	}
++
++	hk_mask = housekeeping_cpumask(HK_TYPE_TIMER);
++
++	for (mask = per_cpu(sched_cpu_topo_masks, cpu) + 1;
++	     mask < per_cpu(sched_cpu_topo_end_mask, cpu); mask++)
++		for_each_cpu_and(i, mask, hk_mask)
++			if (!idle_cpu(i))
++				return i;
++
++	if (default_cpu == -1)
++		default_cpu = housekeeping_any_cpu(HK_TYPE_TIMER);
++	cpu = default_cpu;
++
++	return cpu;
++}
++
++/*
++ * When add_timer_on() enqueues a timer into the timer wheel of an
++ * idle CPU then this timer might expire before the next timer event
++ * which is scheduled to wake up that CPU. In case of a completely
++ * idle system the next event might even be infinite time into the
++ * future. wake_up_idle_cpu() ensures that the CPU is woken up and
++ * leaves the inner idle loop so the newly added timer is taken into
++ * account when the CPU goes back to idle and evaluates the timer
++ * wheel for the next timer event.
++ */
++static inline void wake_up_idle_cpu(int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++
++	if (cpu == smp_processor_id())
++		return;
++
++	if (set_nr_and_not_polling(rq->idle))
++		smp_send_reschedule(cpu);
++	else
++		trace_sched_wake_idle_without_ipi(cpu);
++}
++
++static inline bool wake_up_full_nohz_cpu(int cpu)
++{
++	/*
++	 * We just need the target to call irq_exit() and re-evaluate
++	 * the next tick. The nohz full kick at least implies that.
++	 * If needed we can still optimize that later with an
++	 * empty IRQ.
++	 */
++	if (cpu_is_offline(cpu))
++		return true;  /* Don't try to wake offline CPUs. */
++	if (tick_nohz_full_cpu(cpu)) {
++		if (cpu != smp_processor_id() ||
++		    tick_nohz_tick_stopped())
++			tick_nohz_full_kick_cpu(cpu);
++		return true;
++	}
++
++	return false;
++}
++
++void wake_up_nohz_cpu(int cpu)
++{
++	if (!wake_up_full_nohz_cpu(cpu))
++		wake_up_idle_cpu(cpu);
++}
++
++static void nohz_csd_func(void *info)
++{
++	struct rq *rq = info;
++	int cpu = cpu_of(rq);
++	unsigned int flags;
++
++	/*
++	 * Release the rq::nohz_csd.
++	 */
++	flags = atomic_fetch_andnot(NOHZ_KICK_MASK, nohz_flags(cpu));
++	WARN_ON(!(flags & NOHZ_KICK_MASK));
++
++	rq->idle_balance = idle_cpu(cpu);
++	if (rq->idle_balance && !need_resched()) {
++		rq->nohz_idle_balance = flags;
++		raise_softirq_irqoff(SCHED_SOFTIRQ);
++	}
++}
++
++#endif /* CONFIG_NO_HZ_COMMON */
++#endif /* CONFIG_SMP */
++
++static inline void check_preempt_curr(struct rq *rq)
++{
++	if (sched_rq_first_task(rq) != rq->curr)
++		resched_curr(rq);
++}
++
++#ifdef CONFIG_SCHED_HRTICK
++/*
++ * Use HR-timers to deliver accurate preemption points.
++ */
++
++static void hrtick_clear(struct rq *rq)
++{
++	if (hrtimer_active(&rq->hrtick_timer))
++		hrtimer_cancel(&rq->hrtick_timer);
++}
++
++/*
++ * High-resolution timer tick.
++ * Runs from hardirq context with interrupts disabled.
++ */
++static enum hrtimer_restart hrtick(struct hrtimer *timer)
++{
++	struct rq *rq = container_of(timer, struct rq, hrtick_timer);
++
++	WARN_ON_ONCE(cpu_of(rq) != smp_processor_id());
++
++	raw_spin_lock(&rq->lock);
++	resched_curr(rq);
++	raw_spin_unlock(&rq->lock);
++
++	return HRTIMER_NORESTART;
++}
++
++/*
++ * Use hrtick when:
++ *  - enabled by features
++ *  - hrtimer is actually high res
++ */
++static inline int hrtick_enabled(struct rq *rq)
++{
++	/**
++	 * Alt schedule FW doesn't support sched_feat yet
++	if (!sched_feat(HRTICK))
++		return 0;
++	*/
++	if (!cpu_active(cpu_of(rq)))
++		return 0;
++	return hrtimer_is_hres_active(&rq->hrtick_timer);
++}
++
++#ifdef CONFIG_SMP
++
++static void __hrtick_restart(struct rq *rq)
++{
++	struct hrtimer *timer = &rq->hrtick_timer;
++	ktime_t time = rq->hrtick_time;
++
++	hrtimer_start(timer, time, HRTIMER_MODE_ABS_PINNED_HARD);
++}
++
++/*
++ * called from hardirq (IPI) context
++ */
++static void __hrtick_start(void *arg)
++{
++	struct rq *rq = arg;
++
++	raw_spin_lock(&rq->lock);
++	__hrtick_restart(rq);
++	raw_spin_unlock(&rq->lock);
++}
++
++/*
++ * Called to set the hrtick timer state.
++ *
++ * called with rq->lock held and irqs disabled
++ */
++void hrtick_start(struct rq *rq, u64 delay)
++{
++	struct hrtimer *timer = &rq->hrtick_timer;
++	s64 delta;
++
++	/*
++	 * Don't schedule slices shorter than 10000ns, that just
++	 * doesn't make sense and can cause timer DoS.
++	 */
++	delta = max_t(s64, delay, 10000LL);
++
++	rq->hrtick_time = ktime_add_ns(timer->base->get_time(), delta);
++
++	if (rq == this_rq())
++		__hrtick_restart(rq);
++	else
++		smp_call_function_single_async(cpu_of(rq), &rq->hrtick_csd);
++}
++
++#else
++/*
++ * Called to set the hrtick timer state.
++ *
++ * called with rq->lock held and irqs disabled
++ */
++void hrtick_start(struct rq *rq, u64 delay)
++{
++	/*
++	 * Don't schedule slices shorter than 10000ns, that just
++	 * doesn't make sense. Rely on vruntime for fairness.
++	 */
++	delay = max_t(u64, delay, 10000LL);
++	hrtimer_start(&rq->hrtick_timer, ns_to_ktime(delay),
++		      HRTIMER_MODE_REL_PINNED_HARD);
++}
++#endif /* CONFIG_SMP */
++
++static void hrtick_rq_init(struct rq *rq)
++{
++#ifdef CONFIG_SMP
++	INIT_CSD(&rq->hrtick_csd, __hrtick_start, rq);
++#endif
++
++	hrtimer_init(&rq->hrtick_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
++	rq->hrtick_timer.function = hrtick;
++}
++#else	/* CONFIG_SCHED_HRTICK */
++static inline int hrtick_enabled(struct rq *rq)
++{
++	return 0;
++}
++
++static inline void hrtick_clear(struct rq *rq)
++{
++}
++
++static inline void hrtick_rq_init(struct rq *rq)
++{
++}
++#endif	/* CONFIG_SCHED_HRTICK */
++
++static inline int __normal_prio(int policy, int rt_prio, int static_prio)
++{
++	return rt_policy(policy) ? (MAX_RT_PRIO - 1 - rt_prio) :
++		static_prio + MAX_PRIORITY_ADJ;
++}
++
++/*
++ * Calculate the expected normal priority: i.e. priority
++ * without taking RT-inheritance into account. Might be
++ * boosted by interactivity modifiers. Changes upon fork,
++ * setprio syscalls, and whenever the interactivity
++ * estimator recalculates.
++ */
++static inline int normal_prio(struct task_struct *p)
++{
++	return __normal_prio(p->policy, p->rt_priority, p->static_prio);
++}
++
++/*
++ * Calculate the current priority, i.e. the priority
++ * taken into account by the scheduler. This value might
++ * be boosted by RT tasks as it will be RT if the task got
++ * RT-boosted. If not then it returns p->normal_prio.
++ */
++static int effective_prio(struct task_struct *p)
++{
++	p->normal_prio = normal_prio(p);
++	/*
++	 * If we are RT tasks or we were boosted to RT priority,
++	 * keep the priority unchanged. Otherwise, update priority
++	 * to the normal priority:
++	 */
++	if (!rt_prio(p->prio))
++		return p->normal_prio;
++	return p->prio;
++}
++
++/*
++ * activate_task - move a task to the runqueue.
++ *
++ * Context: rq->lock
++ */
++static void activate_task(struct task_struct *p, struct rq *rq)
++{
++	enqueue_task(p, rq, ENQUEUE_WAKEUP);
++	p->on_rq = TASK_ON_RQ_QUEUED;
++
++	/*
++	 * If in_iowait is set, the code below may not trigger any cpufreq
++	 * utilization updates, so do it here explicitly with the IOWAIT flag
++	 * passed.
++	 */
++	cpufreq_update_util(rq, SCHED_CPUFREQ_IOWAIT * p->in_iowait);
++}
++
++/*
++ * deactivate_task - remove a task from the runqueue.
++ *
++ * Context: rq->lock
++ */
++static inline void deactivate_task(struct task_struct *p, struct rq *rq)
++{
++	dequeue_task(p, rq, DEQUEUE_SLEEP);
++	p->on_rq = 0;
++	cpufreq_update_util(rq, 0);
++}
++
++static inline void __set_task_cpu(struct task_struct *p, unsigned int cpu)
++{
++#ifdef CONFIG_SMP
++	/*
++	 * After ->cpu is set up to a new value, task_access_lock(p, ...) can be
++	 * successfully executed on another CPU. We must ensure that updates of
++	 * per-task data have been completed by this moment.
++	 */
++	smp_wmb();
++
++	WRITE_ONCE(task_thread_info(p)->cpu, cpu);
++#endif
++}
++
++static inline bool is_migration_disabled(struct task_struct *p)
++{
++#ifdef CONFIG_SMP
++	return p->migration_disabled;
++#else
++	return false;
++#endif
++}
++
++#define SCA_CHECK		0x01
++#define SCA_USER		0x08
++
++#ifdef CONFIG_SMP
++
++void set_task_cpu(struct task_struct *p, unsigned int new_cpu)
++{
++#ifdef CONFIG_SCHED_DEBUG
++	unsigned int state = READ_ONCE(p->__state);
++
++	/*
++	 * We should never call set_task_cpu() on a blocked task,
++	 * ttwu() will sort out the placement.
++	 */
++	WARN_ON_ONCE(state != TASK_RUNNING && state != TASK_WAKING && !p->on_rq);
++
++#ifdef CONFIG_LOCKDEP
++	/*
++	 * The caller should hold either p->pi_lock or rq->lock, when changing
++	 * a task's CPU. ->pi_lock for waking tasks, rq->lock for runnable tasks.
++	 *
++	 * sched_move_task() holds both and thus holding either pins the cgroup,
++	 * see task_group().
++	 */
++	WARN_ON_ONCE(debug_locks && !(lockdep_is_held(&p->pi_lock) ||
++				      lockdep_is_held(&task_rq(p)->lock)));
++#endif
++	/*
++	 * Clearly, migrating tasks to offline CPUs is a fairly daft thing.
++	 */
++	WARN_ON_ONCE(!cpu_online(new_cpu));
++
++	WARN_ON_ONCE(is_migration_disabled(p));
++#endif
++	trace_sched_migrate_task(p, new_cpu);
++
++	if (task_cpu(p) != new_cpu)
++	{
++		rseq_migrate(p);
++		perf_event_task_migrate(p);
++	}
++
++	__set_task_cpu(p, new_cpu);
++}
++
++#define MDF_FORCE_ENABLED	0x80
++
++static void
++__do_set_cpus_ptr(struct task_struct *p, const struct cpumask *new_mask)
++{
++	/*
++	 * This here violates the locking rules for affinity, since we're only
++	 * supposed to change these variables while holding both rq->lock and
++	 * p->pi_lock.
++	 *
++	 * HOWEVER, it magically works, because ttwu() is the only code that
++	 * accesses these variables under p->pi_lock and only does so after
++	 * smp_cond_load_acquire(&p->on_cpu, !VAL), and we're in __schedule()
++	 * before finish_task().
++	 *
++	 * XXX do further audits, this smells like something putrid.
++	 */
++	SCHED_WARN_ON(!p->on_cpu);
++	p->cpus_ptr = new_mask;
++}
++
++void migrate_disable(void)
++{
++	struct task_struct *p = current;
++	int cpu;
++
++	if (p->migration_disabled) {
++		p->migration_disabled++;
++		return;
++	}
++
++	preempt_disable();
++	cpu = smp_processor_id();
++	if (cpumask_test_cpu(cpu, &p->cpus_mask)) {
++		cpu_rq(cpu)->nr_pinned++;
++		p->migration_disabled = 1;
++		p->migration_flags &= ~MDF_FORCE_ENABLED;
++
++		/*
++		 * Violates locking rules! see comment in __do_set_cpus_ptr().
++		 */
++		if (p->cpus_ptr == &p->cpus_mask)
++			__do_set_cpus_ptr(p, cpumask_of(cpu));
++	}
++	preempt_enable();
++}
++EXPORT_SYMBOL_GPL(migrate_disable);
++
++void migrate_enable(void)
++{
++	struct task_struct *p = current;
++
++	if (0 == p->migration_disabled)
++		return;
++
++	if (p->migration_disabled > 1) {
++		p->migration_disabled--;
++		return;
++	}
++
++	if (WARN_ON_ONCE(!p->migration_disabled))
++		return;
++
++	/*
++	 * Ensure stop_task runs either before or after this, and that
++	 * __set_cpus_allowed_ptr(SCA_MIGRATE_ENABLE) doesn't schedule().
++	 */
++	preempt_disable();
++	/*
++	 * Assumption: current should be running on allowed cpu
++	 */
++	WARN_ON_ONCE(!cpumask_test_cpu(smp_processor_id(), &p->cpus_mask));
++	if (p->cpus_ptr != &p->cpus_mask)
++		__do_set_cpus_ptr(p, &p->cpus_mask);
++	/*
++	 * Mustn't clear migration_disabled() until cpus_ptr points back at the
++	 * regular cpus_mask, otherwise things that race (eg.
++	 * select_fallback_rq) get confused.
++	 */
++	barrier();
++	p->migration_disabled = 0;
++	this_rq()->nr_pinned--;
++	preempt_enable();
++}
++EXPORT_SYMBOL_GPL(migrate_enable);
++
++static inline bool rq_has_pinned_tasks(struct rq *rq)
++{
++	return rq->nr_pinned;
++}
++
++/*
++ * Per-CPU kthreads are allowed to run on !active && online CPUs, see
++ * __set_cpus_allowed_ptr() and select_fallback_rq().
++ */
++static inline bool is_cpu_allowed(struct task_struct *p, int cpu)
++{
++	/* When not in the task's cpumask, no point in looking further. */
++	if (!cpumask_test_cpu(cpu, p->cpus_ptr))
++		return false;
++
++	/* migrate_disabled() must be allowed to finish. */
++	if (is_migration_disabled(p))
++		return cpu_online(cpu);
++
++	/* Non kernel threads are not allowed during either online or offline. */
++	if (!(p->flags & PF_KTHREAD))
++		return cpu_active(cpu) && task_cpu_possible(cpu, p);
++
++	/* KTHREAD_IS_PER_CPU is always allowed. */
++	if (kthread_is_per_cpu(p))
++		return cpu_online(cpu);
++
++	/* Regular kernel threads don't get to stay during offline. */
++	if (cpu_dying(cpu))
++		return false;
++
++	/* But are allowed during online. */
++	return cpu_online(cpu);
++}
++
++/*
++ * This is how migration works:
++ *
++ * 1) we invoke migration_cpu_stop() on the target CPU using
++ *    stop_one_cpu().
++ * 2) stopper starts to run (implicitly forcing the migrated thread
++ *    off the CPU)
++ * 3) it checks whether the migrated task is still in the wrong runqueue.
++ * 4) if it's in the wrong runqueue then the migration thread removes
++ *    it and puts it into the right queue.
++ * 5) stopper completes and stop_one_cpu() returns and the migration
++ *    is done.
++ */
++
++/*
++ * move_queued_task - move a queued task to new rq.
++ *
++ * Returns (locked) new rq. Old rq's lock is released.
++ */
++static struct rq *move_queued_task(struct rq *rq, struct task_struct *p, int
++				   new_cpu)
++{
++	lockdep_assert_held(&rq->lock);
++
++	WRITE_ONCE(p->on_rq, TASK_ON_RQ_MIGRATING);
++	dequeue_task(p, rq, 0);
++	set_task_cpu(p, new_cpu);
++	raw_spin_unlock(&rq->lock);
++
++	rq = cpu_rq(new_cpu);
++
++	raw_spin_lock(&rq->lock);
++	WARN_ON_ONCE(task_cpu(p) != new_cpu);
++	sched_task_sanity_check(p, rq);
++	enqueue_task(p, rq, 0);
++	p->on_rq = TASK_ON_RQ_QUEUED;
++	check_preempt_curr(rq);
++
++	return rq;
++}
++
++struct migration_arg {
++	struct task_struct *task;
++	int dest_cpu;
++};
++
++/*
++ * Move (not current) task off this CPU, onto the destination CPU. We're doing
++ * this because either it can't run here any more (set_cpus_allowed()
++ * away from this CPU, or CPU going down), or because we're
++ * attempting to rebalance this task on exec (sched_exec).
++ *
++ * So we race with normal scheduler movements, but that's OK, as long
++ * as the task is no longer on this CPU.
++ */
++static struct rq *__migrate_task(struct rq *rq, struct task_struct *p, int
++				 dest_cpu)
++{
++	/* Affinity changed (again). */
++	if (!is_cpu_allowed(p, dest_cpu))
++		return rq;
++
++	update_rq_clock(rq);
++	return move_queued_task(rq, p, dest_cpu);
++}
++
++/*
++ * migration_cpu_stop - this will be executed by a highprio stopper thread
++ * and performs thread migration by bumping thread off CPU then
++ * 'pushing' onto another runqueue.
++ */
++static int migration_cpu_stop(void *data)
++{
++	struct migration_arg *arg = data;
++	struct task_struct *p = arg->task;
++	struct rq *rq = this_rq();
++	unsigned long flags;
++
++	/*
++	 * The original target CPU might have gone down and we might
++	 * be on another CPU but it doesn't matter.
++	 */
++	local_irq_save(flags);
++	/*
++	 * We need to explicitly wake pending tasks before running
++	 * __migrate_task() such that we will not miss enforcing cpus_ptr
++	 * during wakeups, see set_cpus_allowed_ptr()'s TASK_WAKING test.
++	 */
++	flush_smp_call_function_queue();
++
++	raw_spin_lock(&p->pi_lock);
++	raw_spin_lock(&rq->lock);
++	/*
++	 * If task_rq(p) != rq, it cannot be migrated here, because we're
++	 * holding rq->lock, if p->on_rq == 0 it cannot get enqueued because
++	 * we're holding p->pi_lock.
++	 */
++	if (task_rq(p) == rq && task_on_rq_queued(p))
++		rq = __migrate_task(rq, p, arg->dest_cpu);
++	raw_spin_unlock(&rq->lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++
++	return 0;
++}
++
++static inline void
++set_cpus_allowed_common(struct task_struct *p, struct affinity_context *ctx)
++{
++	cpumask_copy(&p->cpus_mask, ctx->new_mask);
++	p->nr_cpus_allowed = cpumask_weight(ctx->new_mask);
++
++	/*
++	 * Swap in a new user_cpus_ptr if SCA_USER flag set
++	 */
++	if (ctx->flags & SCA_USER)
++		swap(p->user_cpus_ptr, ctx->user_mask);
++}
++
++static void
++__do_set_cpus_allowed(struct task_struct *p, struct affinity_context *ctx)
++{
++	lockdep_assert_held(&p->pi_lock);
++	set_cpus_allowed_common(p, ctx);
++}
++
++/*
++ * Used for kthread_bind() and select_fallback_rq(), in both cases the user
++ * affinity (if any) should be destroyed too.
++ */
++void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
++{
++	struct affinity_context ac = {
++		.new_mask  = new_mask,
++		.user_mask = NULL,
++		.flags     = SCA_USER,	/* clear the user requested mask */
++	};
++	union cpumask_rcuhead {
++		cpumask_t cpumask;
++		struct rcu_head rcu;
++	};
++
++	__do_set_cpus_allowed(p, &ac);
++
++	/*
++	 * Because this is called with p->pi_lock held, it is not possible
++	 * to use kfree() here (when PREEMPT_RT=y), therefore punt to using
++	 * kfree_rcu().
++	 */
++	kfree_rcu((union cpumask_rcuhead *)ac.user_mask, rcu);
++}
++
++static cpumask_t *alloc_user_cpus_ptr(int node)
++{
++	/*
++	 * See do_set_cpus_allowed() above for the rcu_head usage.
++	 */
++	int size = max_t(int, cpumask_size(), sizeof(struct rcu_head));
++
++	return kmalloc_node(size, GFP_KERNEL, node);
++}
++
++int dup_user_cpus_ptr(struct task_struct *dst, struct task_struct *src,
++		      int node)
++{
++	cpumask_t *user_mask;
++	unsigned long flags;
++
++	/*
++	 * Always clear dst->user_cpus_ptr first as their user_cpus_ptr's
++	 * may differ by now due to racing.
++	 */
++	dst->user_cpus_ptr = NULL;
++
++	/*
++	 * This check is racy and losing the race is a valid situation.
++	 * It is not worth the extra overhead of taking the pi_lock on
++	 * every fork/clone.
++	 */
++	if (data_race(!src->user_cpus_ptr))
++		return 0;
++
++	user_mask = alloc_user_cpus_ptr(node);
++	if (!user_mask)
++		return -ENOMEM;
++
++	/*
++	 * Use pi_lock to protect content of user_cpus_ptr
++	 *
++	 * Though unlikely, user_cpus_ptr can be reset to NULL by a concurrent
++	 * do_set_cpus_allowed().
++	 */
++	raw_spin_lock_irqsave(&src->pi_lock, flags);
++	if (src->user_cpus_ptr) {
++		swap(dst->user_cpus_ptr, user_mask);
++		cpumask_copy(dst->user_cpus_ptr, src->user_cpus_ptr);
++	}
++	raw_spin_unlock_irqrestore(&src->pi_lock, flags);
++
++	if (unlikely(user_mask))
++		kfree(user_mask);
++
++	return 0;
++}
++
++static inline struct cpumask *clear_user_cpus_ptr(struct task_struct *p)
++{
++	struct cpumask *user_mask = NULL;
++
++	swap(p->user_cpus_ptr, user_mask);
++
++	return user_mask;
++}
++
++void release_user_cpus_ptr(struct task_struct *p)
++{
++	kfree(clear_user_cpus_ptr(p));
++}
++
++#endif
++
++/**
++ * task_curr - is this task currently executing on a CPU?
++ * @p: the task in question.
++ *
++ * Return: 1 if the task is currently executing. 0 otherwise.
++ */
++inline int task_curr(const struct task_struct *p)
++{
++	return cpu_curr(task_cpu(p)) == p;
++}
++
++#ifdef CONFIG_SMP
++/*
++ * wait_task_inactive - wait for a thread to unschedule.
++ *
++ * Wait for the thread to block in any of the states set in @match_state.
++ * If it changes, i.e. @p might have woken up, then return zero.  When we
++ * succeed in waiting for @p to be off its CPU, we return a positive number
++ * (its total switch count).  If a second call a short while later returns the
++ * same number, the caller can be sure that @p has remained unscheduled the
++ * whole time.
++ *
++ * The caller must ensure that the task *will* unschedule sometime soon,
++ * else this function might spin for a *long* time. This function can't
++ * be called with interrupts off, or it may introduce deadlock with
++ * smp_call_function() if an IPI is sent by the same process we are
++ * waiting to become inactive.
++ */
++unsigned long wait_task_inactive(struct task_struct *p, unsigned int match_state)
++{
++	unsigned long flags;
++	bool running, on_rq;
++	unsigned long ncsw;
++	struct rq *rq;
++	raw_spinlock_t *lock;
++
++	for (;;) {
++		rq = task_rq(p);
++
++		/*
++		 * If the task is actively running on another CPU
++		 * still, just relax and busy-wait without holding
++		 * any locks.
++		 *
++		 * NOTE! Since we don't hold any locks, it's not
++		 * even sure that "rq" stays as the right runqueue!
++		 * But we don't care, since this will return false
++		 * if the runqueue has changed and p is actually now
++		 * running somewhere else!
++		 */
++		while (task_on_cpu(p) && p == rq->curr) {
++			if (!(READ_ONCE(p->__state) & match_state))
++				return 0;
++			cpu_relax();
++		}
++
++		/*
++		 * Ok, time to look more closely! We need the rq
++		 * lock now, to be *sure*. If we're wrong, we'll
++		 * just go back and repeat.
++		 */
++		task_access_lock_irqsave(p, &lock, &flags);
++		trace_sched_wait_task(p);
++		running = task_on_cpu(p);
++		on_rq = p->on_rq;
++		ncsw = 0;
++		if (READ_ONCE(p->__state) & match_state)
++			ncsw = p->nvcsw | LONG_MIN; /* sets MSB */
++		task_access_unlock_irqrestore(p, lock, &flags);
++
++		/*
++		 * If it changed from the expected state, bail out now.
++		 */
++		if (unlikely(!ncsw))
++			break;
++
++		/*
++		 * Was it really running after all now that we
++		 * checked with the proper locks actually held?
++		 *
++		 * Oops. Go back and try again..
++		 */
++		if (unlikely(running)) {
++			cpu_relax();
++			continue;
++		}
++
++		/*
++		 * It's not enough that it's not actively running,
++		 * it must be off the runqueue _entirely_, and not
++		 * preempted!
++		 *
++		 * So if it was still runnable (but just not actively
++		 * running right now), it's preempted, and we should
++		 * yield - it could be a while.
++		 */
++		if (unlikely(on_rq)) {
++			ktime_t to = NSEC_PER_SEC / HZ;
++
++			set_current_state(TASK_UNINTERRUPTIBLE);
++			schedule_hrtimeout(&to, HRTIMER_MODE_REL_HARD);
++			continue;
++		}
++
++		/*
++		 * Ahh, all good. It wasn't running, and it wasn't
++		 * runnable, which means that it will never become
++		 * running in the future either. We're all done!
++		 */
++		break;
++	}
++
++	return ncsw;
++}
++
++/***
++ * kick_process - kick a running thread to enter/exit the kernel
++ * @p: the to-be-kicked thread
++ *
++ * Cause a process which is running on another CPU to enter
++ * kernel-mode, without any delay. (to get signals handled.)
++ *
++ * NOTE: this function doesn't have to take the runqueue lock,
++ * because all it wants to ensure is that the remote task enters
++ * the kernel. If the IPI races and the task has been migrated
++ * to another CPU then no harm is done and the purpose has been
++ * achieved as well.
++ */
++void kick_process(struct task_struct *p)
++{
++	int cpu;
++
++	preempt_disable();
++	cpu = task_cpu(p);
++	if ((cpu != smp_processor_id()) && task_curr(p))
++		smp_send_reschedule(cpu);
++	preempt_enable();
++}
++EXPORT_SYMBOL_GPL(kick_process);
++
++/*
++ * ->cpus_ptr is protected by both rq->lock and p->pi_lock
++ *
++ * A few notes on cpu_active vs cpu_online:
++ *
++ *  - cpu_active must be a subset of cpu_online
++ *
++ *  - on CPU-up we allow per-CPU kthreads on the online && !active CPU,
++ *    see __set_cpus_allowed_ptr(). At this point the newly online
++ *    CPU isn't yet part of the sched domains, and balancing will not
++ *    see it.
++ *
++ *  - on cpu-down we clear cpu_active() to mask the sched domains and
++ *    avoid the load balancer to place new tasks on the to be removed
++ *    CPU. Existing tasks will remain running there and will be taken
++ *    off.
++ *
++ * This means that fallback selection must not select !active CPUs.
++ * And can assume that any active CPU must be online. Conversely
++ * select_task_rq() below may allow selection of !active CPUs in order
++ * to satisfy the above rules.
++ */
++static int select_fallback_rq(int cpu, struct task_struct *p)
++{
++	int nid = cpu_to_node(cpu);
++	const struct cpumask *nodemask = NULL;
++	enum { cpuset, possible, fail } state = cpuset;
++	int dest_cpu;
++
++	/*
++	 * If the node that the CPU is on has been offlined, cpu_to_node()
++	 * will return -1. There is no CPU on the node, and we should
++	 * select the CPU on the other node.
++	 */
++	if (nid != -1) {
++		nodemask = cpumask_of_node(nid);
++
++		/* Look for allowed, online CPU in same node. */
++		for_each_cpu(dest_cpu, nodemask) {
++			if (is_cpu_allowed(p, dest_cpu))
++				return dest_cpu;
++		}
++	}
++
++	for (;;) {
++		/* Any allowed, online CPU? */
++		for_each_cpu(dest_cpu, p->cpus_ptr) {
++			if (!is_cpu_allowed(p, dest_cpu))
++				continue;
++			goto out;
++		}
++
++		/* No more Mr. Nice Guy. */
++		switch (state) {
++		case cpuset:
++			if (cpuset_cpus_allowed_fallback(p)) {
++				state = possible;
++				break;
++			}
++			fallthrough;
++		case possible:
++			/*
++			 * XXX When called from select_task_rq() we only
++			 * hold p->pi_lock and again violate locking order.
++			 *
++			 * More yuck to audit.
++			 */
++			do_set_cpus_allowed(p, task_cpu_possible_mask(p));
++			state = fail;
++			break;
++
++		case fail:
++			BUG();
++			break;
++		}
++	}
++
++out:
++	if (state != cpuset) {
++		/*
++		 * Don't tell them about moving exiting tasks or
++		 * kernel threads (both mm NULL), since they never
++		 * leave kernel.
++		 */
++		if (p->mm && printk_ratelimit()) {
++			printk_deferred("process %d (%s) no longer affine to cpu%d\n",
++					task_pid_nr(p), p->comm, cpu);
++		}
++	}
++
++	return dest_cpu;
++}
++
++static inline void
++sched_preempt_mask_flush(cpumask_t *mask, int prio)
++{
++	int cpu;
++
++	cpumask_copy(mask, sched_idle_mask);
++
++	for_each_clear_bit(cpu, cpumask_bits(mask), nr_cpumask_bits) {
++		if (prio < cpu_rq(cpu)->prio)
++			cpumask_set_cpu(cpu, mask);
++	}
++}
++
++static inline int
++preempt_mask_check(struct task_struct *p, cpumask_t *allow_mask, cpumask_t *preempt_mask)
++{
++	int task_prio = task_sched_prio(p);
++	cpumask_t *mask = sched_preempt_mask + SCHED_QUEUE_BITS - 1 - task_prio;
++	int pr = atomic_read(&sched_prio_record);
++
++	if (pr != task_prio) {
++		sched_preempt_mask_flush(mask, task_prio);
++		atomic_set(&sched_prio_record, task_prio);
++	}
++
++	return cpumask_and(preempt_mask, allow_mask, mask);
++}
++
++static inline int select_task_rq(struct task_struct *p)
++{
++	cpumask_t allow_mask, mask;
++
++	if (unlikely(!cpumask_and(&allow_mask, p->cpus_ptr, cpu_active_mask)))
++		return select_fallback_rq(task_cpu(p), p);
++
++	if (
++#ifdef CONFIG_SCHED_SMT
++	    cpumask_and(&mask, &allow_mask, &sched_sg_idle_mask) ||
++#endif
++	    cpumask_and(&mask, &allow_mask, sched_idle_mask) ||
++	    preempt_mask_check(p, &allow_mask, &mask))
++		return best_mask_cpu(task_cpu(p), &mask);
++
++	return best_mask_cpu(task_cpu(p), &allow_mask);
++}
++
++void sched_set_stop_task(int cpu, struct task_struct *stop)
++{
++	static struct lock_class_key stop_pi_lock;
++	struct sched_param stop_param = { .sched_priority = STOP_PRIO };
++	struct sched_param start_param = { .sched_priority = 0 };
++	struct task_struct *old_stop = cpu_rq(cpu)->stop;
++
++	if (stop) {
++		/*
++		 * Make it appear like a SCHED_FIFO task, its something
++		 * userspace knows about and won't get confused about.
++		 *
++		 * Also, it will make PI more or less work without too
++		 * much confusion -- but then, stop work should not
++		 * rely on PI working anyway.
++		 */
++		sched_setscheduler_nocheck(stop, SCHED_FIFO, &stop_param);
++
++		/*
++		 * The PI code calls rt_mutex_setprio() with ->pi_lock held to
++		 * adjust the effective priority of a task. As a result,
++		 * rt_mutex_setprio() can trigger (RT) balancing operations,
++		 * which can then trigger wakeups of the stop thread to push
++		 * around the current task.
++		 *
++		 * The stop task itself will never be part of the PI-chain, it
++		 * never blocks, therefore that ->pi_lock recursion is safe.
++		 * Tell lockdep about this by placing the stop->pi_lock in its
++		 * own class.
++		 */
++		lockdep_set_class(&stop->pi_lock, &stop_pi_lock);
++	}
++
++	cpu_rq(cpu)->stop = stop;
++
++	if (old_stop) {
++		/*
++		 * Reset it back to a normal scheduling policy so that
++		 * it can die in pieces.
++		 */
++		sched_setscheduler_nocheck(old_stop, SCHED_NORMAL, &start_param);
++	}
++}
++
++static int affine_move_task(struct rq *rq, struct task_struct *p, int dest_cpu,
++			    raw_spinlock_t *lock, unsigned long irq_flags)
++	__releases(rq->lock)
++	__releases(p->pi_lock)
++{
++	/* Can the task run on the task's current CPU? If so, we're done */
++	if (!cpumask_test_cpu(task_cpu(p), &p->cpus_mask)) {
++		if (p->migration_disabled) {
++			if (likely(p->cpus_ptr != &p->cpus_mask))
++				__do_set_cpus_ptr(p, &p->cpus_mask);
++			p->migration_disabled = 0;
++			p->migration_flags |= MDF_FORCE_ENABLED;
++			/* When p is migrate_disabled, rq->lock should be held */
++			rq->nr_pinned--;
++		}
++
++		if (task_on_cpu(p) || READ_ONCE(p->__state) == TASK_WAKING) {
++			struct migration_arg arg = { p, dest_cpu };
++
++			/* Need help from migration thread: drop lock and wait. */
++			__task_access_unlock(p, lock);
++			raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
++			stop_one_cpu(cpu_of(rq), migration_cpu_stop, &arg);
++			return 0;
++		}
++		if (task_on_rq_queued(p)) {
++			/*
++			 * OK, since we're going to drop the lock immediately
++			 * afterwards anyway.
++			 */
++			update_rq_clock(rq);
++			rq = move_queued_task(rq, p, dest_cpu);
++			lock = &rq->lock;
++		}
++	}
++	__task_access_unlock(p, lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
++	return 0;
++}
++
++static int __set_cpus_allowed_ptr_locked(struct task_struct *p,
++					 struct affinity_context *ctx,
++					 struct rq *rq,
++					 raw_spinlock_t *lock,
++					 unsigned long irq_flags)
++{
++	const struct cpumask *cpu_allowed_mask = task_cpu_possible_mask(p);
++	const struct cpumask *cpu_valid_mask = cpu_active_mask;
++	bool kthread = p->flags & PF_KTHREAD;
++	int dest_cpu;
++	int ret = 0;
++
++	if (kthread || is_migration_disabled(p)) {
++		/*
++		 * Kernel threads are allowed on online && !active CPUs,
++		 * however, during cpu-hot-unplug, even these might get pushed
++		 * away if not KTHREAD_IS_PER_CPU.
++		 *
++		 * Specifically, migration_disabled() tasks must not fail the
++		 * cpumask_any_and_distribute() pick below, esp. so on
++		 * SCA_MIGRATE_ENABLE, otherwise we'll not call
++		 * set_cpus_allowed_common() and actually reset p->cpus_ptr.
++		 */
++		cpu_valid_mask = cpu_online_mask;
++	}
++
++	if (!kthread && !cpumask_subset(ctx->new_mask, cpu_allowed_mask)) {
++		ret = -EINVAL;
++		goto out;
++	}
++
++	/*
++	 * Must re-check here, to close a race against __kthread_bind(),
++	 * sched_setaffinity() is not guaranteed to observe the flag.
++	 */
++	if ((ctx->flags & SCA_CHECK) && (p->flags & PF_NO_SETAFFINITY)) {
++		ret = -EINVAL;
++		goto out;
++	}
++
++	if (cpumask_equal(&p->cpus_mask, ctx->new_mask))
++		goto out;
++
++	dest_cpu = cpumask_any_and(cpu_valid_mask, ctx->new_mask);
++	if (dest_cpu >= nr_cpu_ids) {
++		ret = -EINVAL;
++		goto out;
++	}
++
++	__do_set_cpus_allowed(p, ctx);
++
++	return affine_move_task(rq, p, dest_cpu, lock, irq_flags);
++
++out:
++	__task_access_unlock(p, lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
++
++	return ret;
++}
++
++/*
++ * Change a given task's CPU affinity. Migrate the thread to a
++ * is removed from the allowed bitmask.
++ *
++ * NOTE: the caller must have a valid reference to the task, the
++ * task must not exit() & deallocate itself prematurely. The
++ * call is not atomic; no spinlocks may be held.
++ */
++static int __set_cpus_allowed_ptr(struct task_struct *p,
++				  struct affinity_context *ctx)
++{
++	unsigned long irq_flags;
++	struct rq *rq;
++	raw_spinlock_t *lock;
++
++	raw_spin_lock_irqsave(&p->pi_lock, irq_flags);
++	rq = __task_access_lock(p, &lock);
++	/*
++	 * Masking should be skipped if SCA_USER or any of the SCA_MIGRATE_*
++	 * flags are set.
++	 */
++	if (p->user_cpus_ptr &&
++	    !(ctx->flags & SCA_USER) &&
++	    cpumask_and(rq->scratch_mask, ctx->new_mask, p->user_cpus_ptr))
++		ctx->new_mask = rq->scratch_mask;
++
++
++	return __set_cpus_allowed_ptr_locked(p, ctx, rq, lock, irq_flags);
++}
++
++int set_cpus_allowed_ptr(struct task_struct *p, const struct cpumask *new_mask)
++{
++	struct affinity_context ac = {
++		.new_mask  = new_mask,
++		.flags     = 0,
++	};
++
++	return __set_cpus_allowed_ptr(p, &ac);
++}
++EXPORT_SYMBOL_GPL(set_cpus_allowed_ptr);
++
++/*
++ * Change a given task's CPU affinity to the intersection of its current
++ * affinity mask and @subset_mask, writing the resulting mask to @new_mask.
++ * If user_cpus_ptr is defined, use it as the basis for restricting CPU
++ * affinity or use cpu_online_mask instead.
++ *
++ * If the resulting mask is empty, leave the affinity unchanged and return
++ * -EINVAL.
++ */
++static int restrict_cpus_allowed_ptr(struct task_struct *p,
++				     struct cpumask *new_mask,
++				     const struct cpumask *subset_mask)
++{
++	struct affinity_context ac = {
++		.new_mask  = new_mask,
++		.flags     = 0,
++	};
++	unsigned long irq_flags;
++	raw_spinlock_t *lock;
++	struct rq *rq;
++	int err;
++
++	raw_spin_lock_irqsave(&p->pi_lock, irq_flags);
++	rq = __task_access_lock(p, &lock);
++
++	if (!cpumask_and(new_mask, task_user_cpus(p), subset_mask)) {
++		err = -EINVAL;
++		goto err_unlock;
++	}
++
++	return __set_cpus_allowed_ptr_locked(p, &ac, rq, lock, irq_flags);
++
++err_unlock:
++	__task_access_unlock(p, lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
++	return err;
++}
++
++/*
++ * Restrict the CPU affinity of task @p so that it is a subset of
++ * task_cpu_possible_mask() and point @p->user_cpus_ptr to a copy of the
++ * old affinity mask. If the resulting mask is empty, we warn and walk
++ * up the cpuset hierarchy until we find a suitable mask.
++ */
++void force_compatible_cpus_allowed_ptr(struct task_struct *p)
++{
++	cpumask_var_t new_mask;
++	const struct cpumask *override_mask = task_cpu_possible_mask(p);
++
++	alloc_cpumask_var(&new_mask, GFP_KERNEL);
++
++	/*
++	 * __migrate_task() can fail silently in the face of concurrent
++	 * offlining of the chosen destination CPU, so take the hotplug
++	 * lock to ensure that the migration succeeds.
++	 */
++	cpus_read_lock();
++	if (!cpumask_available(new_mask))
++		goto out_set_mask;
++
++	if (!restrict_cpus_allowed_ptr(p, new_mask, override_mask))
++		goto out_free_mask;
++
++	/*
++	 * We failed to find a valid subset of the affinity mask for the
++	 * task, so override it based on its cpuset hierarchy.
++	 */
++	cpuset_cpus_allowed(p, new_mask);
++	override_mask = new_mask;
++
++out_set_mask:
++	if (printk_ratelimit()) {
++		printk_deferred("Overriding affinity for process %d (%s) to CPUs %*pbl\n",
++				task_pid_nr(p), p->comm,
++				cpumask_pr_args(override_mask));
++	}
++
++	WARN_ON(set_cpus_allowed_ptr(p, override_mask));
++out_free_mask:
++	cpus_read_unlock();
++	free_cpumask_var(new_mask);
++}
++
++static int
++__sched_setaffinity(struct task_struct *p, struct affinity_context *ctx);
++
++/*
++ * Restore the affinity of a task @p which was previously restricted by a
++ * call to force_compatible_cpus_allowed_ptr().
++ *
++ * It is the caller's responsibility to serialise this with any calls to
++ * force_compatible_cpus_allowed_ptr(@p).
++ */
++void relax_compatible_cpus_allowed_ptr(struct task_struct *p)
++{
++	struct affinity_context ac = {
++		.new_mask  = task_user_cpus(p),
++		.flags     = 0,
++	};
++	int ret;
++
++	/*
++	 * Try to restore the old affinity mask with __sched_setaffinity().
++	 * Cpuset masking will be done there too.
++	 */
++	ret = __sched_setaffinity(p, &ac);
++	WARN_ON_ONCE(ret);
++}
++
++#else /* CONFIG_SMP */
++
++static inline int select_task_rq(struct task_struct *p)
++{
++	return 0;
++}
++
++static inline int
++__set_cpus_allowed_ptr(struct task_struct *p,
++		       struct affinity_context *ctx)
++{
++	return set_cpus_allowed_ptr(p, ctx->new_mask);
++}
++
++static inline bool rq_has_pinned_tasks(struct rq *rq)
++{
++	return false;
++}
++
++static inline cpumask_t *alloc_user_cpus_ptr(int node)
++{
++	return NULL;
++}
++
++#endif /* !CONFIG_SMP */
++
++static void
++ttwu_stat(struct task_struct *p, int cpu, int wake_flags)
++{
++	struct rq *rq;
++
++	if (!schedstat_enabled())
++		return;
++
++	rq = this_rq();
++
++#ifdef CONFIG_SMP
++	if (cpu == rq->cpu) {
++		__schedstat_inc(rq->ttwu_local);
++		__schedstat_inc(p->stats.nr_wakeups_local);
++	} else {
++		/** Alt schedule FW ToDo:
++		 * How to do ttwu_wake_remote
++		 */
++	}
++#endif /* CONFIG_SMP */
++
++	__schedstat_inc(rq->ttwu_count);
++	__schedstat_inc(p->stats.nr_wakeups);
++}
++
++/*
++ * Mark the task runnable.
++ */
++static inline void ttwu_do_wakeup(struct task_struct *p)
++{
++	WRITE_ONCE(p->__state, TASK_RUNNING);
++	trace_sched_wakeup(p);
++}
++
++static inline void
++ttwu_do_activate(struct rq *rq, struct task_struct *p, int wake_flags)
++{
++	if (p->sched_contributes_to_load)
++		rq->nr_uninterruptible--;
++
++	if (
++#ifdef CONFIG_SMP
++	    !(wake_flags & WF_MIGRATED) &&
++#endif
++	    p->in_iowait) {
++		delayacct_blkio_end(p);
++		atomic_dec(&task_rq(p)->nr_iowait);
++	}
++
++	activate_task(p, rq);
++	check_preempt_curr(rq);
++
++	ttwu_do_wakeup(p);
++}
++
++/*
++ * Consider @p being inside a wait loop:
++ *
++ *   for (;;) {
++ *      set_current_state(TASK_UNINTERRUPTIBLE);
++ *
++ *      if (CONDITION)
++ *         break;
++ *
++ *      schedule();
++ *   }
++ *   __set_current_state(TASK_RUNNING);
++ *
++ * between set_current_state() and schedule(). In this case @p is still
++ * runnable, so all that needs doing is change p->state back to TASK_RUNNING in
++ * an atomic manner.
++ *
++ * By taking task_rq(p)->lock we serialize against schedule(), if @p->on_rq
++ * then schedule() must still happen and p->state can be changed to
++ * TASK_RUNNING. Otherwise we lost the race, schedule() has happened, and we
++ * need to do a full wakeup with enqueue.
++ *
++ * Returns: %true when the wakeup is done,
++ *          %false otherwise.
++ */
++static int ttwu_runnable(struct task_struct *p, int wake_flags)
++{
++	struct rq *rq;
++	raw_spinlock_t *lock;
++	int ret = 0;
++
++	rq = __task_access_lock(p, &lock);
++	if (task_on_rq_queued(p)) {
++		if (!task_on_cpu(p)) {
++			/*
++			 * When on_rq && !on_cpu the task is preempted, see if
++			 * it should preempt the task that is current now.
++			 */
++			update_rq_clock(rq);
++			check_preempt_curr(rq);
++		}
++		ttwu_do_wakeup(p);
++		ret = 1;
++	}
++	__task_access_unlock(p, lock);
++
++	return ret;
++}
++
++#ifdef CONFIG_SMP
++void sched_ttwu_pending(void *arg)
++{
++	struct llist_node *llist = arg;
++	struct rq *rq = this_rq();
++	struct task_struct *p, *t;
++	struct rq_flags rf;
++
++	if (!llist)
++		return;
++
++	rq_lock_irqsave(rq, &rf);
++	update_rq_clock(rq);
++
++	llist_for_each_entry_safe(p, t, llist, wake_entry.llist) {
++		if (WARN_ON_ONCE(p->on_cpu))
++			smp_cond_load_acquire(&p->on_cpu, !VAL);
++
++		if (WARN_ON_ONCE(task_cpu(p) != cpu_of(rq)))
++			set_task_cpu(p, cpu_of(rq));
++
++		ttwu_do_activate(rq, p, p->sched_remote_wakeup ? WF_MIGRATED : 0);
++	}
++
++	/*
++	 * Must be after enqueueing at least once task such that
++	 * idle_cpu() does not observe a false-negative -- if it does,
++	 * it is possible for select_idle_siblings() to stack a number
++	 * of tasks on this CPU during that window.
++	 *
++	 * It is ok to clear ttwu_pending when another task pending.
++	 * We will receive IPI after local irq enabled and then enqueue it.
++	 * Since now nr_running > 0, idle_cpu() will always get correct result.
++	 */
++	WRITE_ONCE(rq->ttwu_pending, 0);
++	rq_unlock_irqrestore(rq, &rf);
++}
++
++void send_call_function_single_ipi(int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++
++	if (!set_nr_if_polling(rq->idle))
++		arch_send_call_function_single_ipi(cpu);
++	else
++		trace_sched_wake_idle_without_ipi(cpu);
++}
++
++/*
++ * Queue a task on the target CPUs wake_list and wake the CPU via IPI if
++ * necessary. The wakee CPU on receipt of the IPI will queue the task
++ * via sched_ttwu_wakeup() for activation so the wakee incurs the cost
++ * of the wakeup instead of the waker.
++ */
++static void __ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
++{
++	struct rq *rq = cpu_rq(cpu);
++
++	p->sched_remote_wakeup = !!(wake_flags & WF_MIGRATED);
++
++	WRITE_ONCE(rq->ttwu_pending, 1);
++	__smp_call_single_queue(cpu, &p->wake_entry.llist);
++}
++
++static inline bool ttwu_queue_cond(struct task_struct *p, int cpu)
++{
++	/*
++	 * Do not complicate things with the async wake_list while the CPU is
++	 * in hotplug state.
++	 */
++	if (!cpu_active(cpu))
++		return false;
++
++	/* Ensure the task will still be allowed to run on the CPU. */
++	if (!cpumask_test_cpu(cpu, p->cpus_ptr))
++		return false;
++
++	/*
++	 * If the CPU does not share cache, then queue the task on the
++	 * remote rqs wakelist to avoid accessing remote data.
++	 */
++	if (!cpus_share_cache(smp_processor_id(), cpu))
++		return true;
++
++	if (cpu == smp_processor_id())
++		return false;
++
++	/*
++	 * If the wakee cpu is idle, or the task is descheduling and the
++	 * only running task on the CPU, then use the wakelist to offload
++	 * the task activation to the idle (or soon-to-be-idle) CPU as
++	 * the current CPU is likely busy. nr_running is checked to
++	 * avoid unnecessary task stacking.
++	 *
++	 * Note that we can only get here with (wakee) p->on_rq=0,
++	 * p->on_cpu can be whatever, we've done the dequeue, so
++	 * the wakee has been accounted out of ->nr_running.
++	 */
++	if (!cpu_rq(cpu)->nr_running)
++		return true;
++
++	return false;
++}
++
++static bool ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
++{
++	if (__is_defined(ALT_SCHED_TTWU_QUEUE) && ttwu_queue_cond(p, cpu)) {
++		sched_clock_cpu(cpu); /* Sync clocks across CPUs */
++		__ttwu_queue_wakelist(p, cpu, wake_flags);
++		return true;
++	}
++
++	return false;
++}
++
++void wake_up_if_idle(int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++	unsigned long flags;
++
++	rcu_read_lock();
++
++	if (!is_idle_task(rcu_dereference(rq->curr)))
++		goto out;
++
++	raw_spin_lock_irqsave(&rq->lock, flags);
++	if (is_idle_task(rq->curr))
++		resched_curr(rq);
++	/* Else CPU is not idle, do nothing here */
++	raw_spin_unlock_irqrestore(&rq->lock, flags);
++
++out:
++	rcu_read_unlock();
++}
++
++bool cpus_share_cache(int this_cpu, int that_cpu)
++{
++	if (this_cpu == that_cpu)
++		return true;
++
++	return per_cpu(sd_llc_id, this_cpu) == per_cpu(sd_llc_id, that_cpu);
++}
++#else /* !CONFIG_SMP */
++
++static inline bool ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
++{
++	return false;
++}
++
++#endif /* CONFIG_SMP */
++
++static inline void ttwu_queue(struct task_struct *p, int cpu, int wake_flags)
++{
++	struct rq *rq = cpu_rq(cpu);
++
++	if (ttwu_queue_wakelist(p, cpu, wake_flags))
++		return;
++
++	raw_spin_lock(&rq->lock);
++	update_rq_clock(rq);
++	ttwu_do_activate(rq, p, wake_flags);
++	raw_spin_unlock(&rq->lock);
++}
++
++/*
++ * Invoked from try_to_wake_up() to check whether the task can be woken up.
++ *
++ * The caller holds p::pi_lock if p != current or has preemption
++ * disabled when p == current.
++ *
++ * The rules of PREEMPT_RT saved_state:
++ *
++ *   The related locking code always holds p::pi_lock when updating
++ *   p::saved_state, which means the code is fully serialized in both cases.
++ *
++ *   The lock wait and lock wakeups happen via TASK_RTLOCK_WAIT. No other
++ *   bits set. This allows to distinguish all wakeup scenarios.
++ */
++static __always_inline
++bool ttwu_state_match(struct task_struct *p, unsigned int state, int *success)
++{
++	if (IS_ENABLED(CONFIG_DEBUG_PREEMPT)) {
++		WARN_ON_ONCE((state & TASK_RTLOCK_WAIT) &&
++			     state != TASK_RTLOCK_WAIT);
++	}
++
++	if (READ_ONCE(p->__state) & state) {
++		*success = 1;
++		return true;
++	}
++
++#ifdef CONFIG_PREEMPT_RT
++	/*
++	 * Saved state preserves the task state across blocking on
++	 * an RT lock.  If the state matches, set p::saved_state to
++	 * TASK_RUNNING, but do not wake the task because it waits
++	 * for a lock wakeup. Also indicate success because from
++	 * the regular waker's point of view this has succeeded.
++	 *
++	 * After acquiring the lock the task will restore p::__state
++	 * from p::saved_state which ensures that the regular
++	 * wakeup is not lost. The restore will also set
++	 * p::saved_state to TASK_RUNNING so any further tests will
++	 * not result in false positives vs. @success
++	 */
++	if (p->saved_state & state) {
++		p->saved_state = TASK_RUNNING;
++		*success = 1;
++	}
++#endif
++	return false;
++}
++
++/*
++ * Notes on Program-Order guarantees on SMP systems.
++ *
++ *  MIGRATION
++ *
++ * The basic program-order guarantee on SMP systems is that when a task [t]
++ * migrates, all its activity on its old CPU [c0] happens-before any subsequent
++ * execution on its new CPU [c1].
++ *
++ * For migration (of runnable tasks) this is provided by the following means:
++ *
++ *  A) UNLOCK of the rq(c0)->lock scheduling out task t
++ *  B) migration for t is required to synchronize *both* rq(c0)->lock and
++ *     rq(c1)->lock (if not at the same time, then in that order).
++ *  C) LOCK of the rq(c1)->lock scheduling in task
++ *
++ * Transitivity guarantees that B happens after A and C after B.
++ * Note: we only require RCpc transitivity.
++ * Note: the CPU doing B need not be c0 or c1
++ *
++ * Example:
++ *
++ *   CPU0            CPU1            CPU2
++ *
++ *   LOCK rq(0)->lock
++ *   sched-out X
++ *   sched-in Y
++ *   UNLOCK rq(0)->lock
++ *
++ *                                   LOCK rq(0)->lock // orders against CPU0
++ *                                   dequeue X
++ *                                   UNLOCK rq(0)->lock
++ *
++ *                                   LOCK rq(1)->lock
++ *                                   enqueue X
++ *                                   UNLOCK rq(1)->lock
++ *
++ *                   LOCK rq(1)->lock // orders against CPU2
++ *                   sched-out Z
++ *                   sched-in X
++ *                   UNLOCK rq(1)->lock
++ *
++ *
++ *  BLOCKING -- aka. SLEEP + WAKEUP
++ *
++ * For blocking we (obviously) need to provide the same guarantee as for
++ * migration. However the means are completely different as there is no lock
++ * chain to provide order. Instead we do:
++ *
++ *   1) smp_store_release(X->on_cpu, 0)   -- finish_task()
++ *   2) smp_cond_load_acquire(!X->on_cpu) -- try_to_wake_up()
++ *
++ * Example:
++ *
++ *   CPU0 (schedule)  CPU1 (try_to_wake_up) CPU2 (schedule)
++ *
++ *   LOCK rq(0)->lock LOCK X->pi_lock
++ *   dequeue X
++ *   sched-out X
++ *   smp_store_release(X->on_cpu, 0);
++ *
++ *                    smp_cond_load_acquire(&X->on_cpu, !VAL);
++ *                    X->state = WAKING
++ *                    set_task_cpu(X,2)
++ *
++ *                    LOCK rq(2)->lock
++ *                    enqueue X
++ *                    X->state = RUNNING
++ *                    UNLOCK rq(2)->lock
++ *
++ *                                          LOCK rq(2)->lock // orders against CPU1
++ *                                          sched-out Z
++ *                                          sched-in X
++ *                                          UNLOCK rq(2)->lock
++ *
++ *                    UNLOCK X->pi_lock
++ *   UNLOCK rq(0)->lock
++ *
++ *
++ * However; for wakeups there is a second guarantee we must provide, namely we
++ * must observe the state that lead to our wakeup. That is, not only must our
++ * task observe its own prior state, it must also observe the stores prior to
++ * its wakeup.
++ *
++ * This means that any means of doing remote wakeups must order the CPU doing
++ * the wakeup against the CPU the task is going to end up running on. This,
++ * however, is already required for the regular Program-Order guarantee above,
++ * since the waking CPU is the one issueing the ACQUIRE (smp_cond_load_acquire).
++ *
++ */
++
++/**
++ * try_to_wake_up - wake up a thread
++ * @p: the thread to be awakened
++ * @state: the mask of task states that can be woken
++ * @wake_flags: wake modifier flags (WF_*)
++ *
++ * Conceptually does:
++ *
++ *   If (@state & @p->state) @p->state = TASK_RUNNING.
++ *
++ * If the task was not queued/runnable, also place it back on a runqueue.
++ *
++ * This function is atomic against schedule() which would dequeue the task.
++ *
++ * It issues a full memory barrier before accessing @p->state, see the comment
++ * with set_current_state().
++ *
++ * Uses p->pi_lock to serialize against concurrent wake-ups.
++ *
++ * Relies on p->pi_lock stabilizing:
++ *  - p->sched_class
++ *  - p->cpus_ptr
++ *  - p->sched_task_group
++ * in order to do migration, see its use of select_task_rq()/set_task_cpu().
++ *
++ * Tries really hard to only take one task_rq(p)->lock for performance.
++ * Takes rq->lock in:
++ *  - ttwu_runnable()    -- old rq, unavoidable, see comment there;
++ *  - ttwu_queue()       -- new rq, for enqueue of the task;
++ *  - psi_ttwu_dequeue() -- much sadness :-( accounting will kill us.
++ *
++ * As a consequence we race really badly with just about everything. See the
++ * many memory barriers and their comments for details.
++ *
++ * Return: %true if @p->state changes (an actual wakeup was done),
++ *	   %false otherwise.
++ */
++static int try_to_wake_up(struct task_struct *p, unsigned int state,
++			  int wake_flags)
++{
++	unsigned long flags;
++	int cpu, success = 0;
++
++	preempt_disable();
++	if (p == current) {
++		/*
++		 * We're waking current, this means 'p->on_rq' and 'task_cpu(p)
++		 * == smp_processor_id()'. Together this means we can special
++		 * case the whole 'p->on_rq && ttwu_runnable()' case below
++		 * without taking any locks.
++		 *
++		 * In particular:
++		 *  - we rely on Program-Order guarantees for all the ordering,
++		 *  - we're serialized against set_special_state() by virtue of
++		 *    it disabling IRQs (this allows not taking ->pi_lock).
++		 */
++		if (!ttwu_state_match(p, state, &success))
++			goto out;
++
++		trace_sched_waking(p);
++		ttwu_do_wakeup(p);
++		goto out;
++	}
++
++	/*
++	 * If we are going to wake up a thread waiting for CONDITION we
++	 * need to ensure that CONDITION=1 done by the caller can not be
++	 * reordered with p->state check below. This pairs with smp_store_mb()
++	 * in set_current_state() that the waiting thread does.
++	 */
++	raw_spin_lock_irqsave(&p->pi_lock, flags);
++	smp_mb__after_spinlock();
++	if (!ttwu_state_match(p, state, &success))
++		goto unlock;
++
++	trace_sched_waking(p);
++
++	/*
++	 * Ensure we load p->on_rq _after_ p->state, otherwise it would
++	 * be possible to, falsely, observe p->on_rq == 0 and get stuck
++	 * in smp_cond_load_acquire() below.
++	 *
++	 * sched_ttwu_pending()			try_to_wake_up()
++	 *   STORE p->on_rq = 1			  LOAD p->state
++	 *   UNLOCK rq->lock
++	 *
++	 * __schedule() (switch to task 'p')
++	 *   LOCK rq->lock			  smp_rmb();
++	 *   smp_mb__after_spinlock();
++	 *   UNLOCK rq->lock
++	 *
++	 * [task p]
++	 *   STORE p->state = UNINTERRUPTIBLE	  LOAD p->on_rq
++	 *
++	 * Pairs with the LOCK+smp_mb__after_spinlock() on rq->lock in
++	 * __schedule().  See the comment for smp_mb__after_spinlock().
++	 *
++	 * A similar smb_rmb() lives in try_invoke_on_locked_down_task().
++	 */
++	smp_rmb();
++	if (READ_ONCE(p->on_rq) && ttwu_runnable(p, wake_flags))
++		goto unlock;
++
++#ifdef CONFIG_SMP
++	/*
++	 * Ensure we load p->on_cpu _after_ p->on_rq, otherwise it would be
++	 * possible to, falsely, observe p->on_cpu == 0.
++	 *
++	 * One must be running (->on_cpu == 1) in order to remove oneself
++	 * from the runqueue.
++	 *
++	 * __schedule() (switch to task 'p')	try_to_wake_up()
++	 *   STORE p->on_cpu = 1		  LOAD p->on_rq
++	 *   UNLOCK rq->lock
++	 *
++	 * __schedule() (put 'p' to sleep)
++	 *   LOCK rq->lock			  smp_rmb();
++	 *   smp_mb__after_spinlock();
++	 *   STORE p->on_rq = 0			  LOAD p->on_cpu
++	 *
++	 * Pairs with the LOCK+smp_mb__after_spinlock() on rq->lock in
++	 * __schedule().  See the comment for smp_mb__after_spinlock().
++	 *
++	 * Form a control-dep-acquire with p->on_rq == 0 above, to ensure
++	 * schedule()'s deactivate_task() has 'happened' and p will no longer
++	 * care about it's own p->state. See the comment in __schedule().
++	 */
++	smp_acquire__after_ctrl_dep();
++
++	/*
++	 * We're doing the wakeup (@success == 1), they did a dequeue (p->on_rq
++	 * == 0), which means we need to do an enqueue, change p->state to
++	 * TASK_WAKING such that we can unlock p->pi_lock before doing the
++	 * enqueue, such as ttwu_queue_wakelist().
++	 */
++	WRITE_ONCE(p->__state, TASK_WAKING);
++
++	/*
++	 * If the owning (remote) CPU is still in the middle of schedule() with
++	 * this task as prev, considering queueing p on the remote CPUs wake_list
++	 * which potentially sends an IPI instead of spinning on p->on_cpu to
++	 * let the waker make forward progress. This is safe because IRQs are
++	 * disabled and the IPI will deliver after on_cpu is cleared.
++	 *
++	 * Ensure we load task_cpu(p) after p->on_cpu:
++	 *
++	 * set_task_cpu(p, cpu);
++	 *   STORE p->cpu = @cpu
++	 * __schedule() (switch to task 'p')
++	 *   LOCK rq->lock
++	 *   smp_mb__after_spin_lock()          smp_cond_load_acquire(&p->on_cpu)
++	 *   STORE p->on_cpu = 1                LOAD p->cpu
++	 *
++	 * to ensure we observe the correct CPU on which the task is currently
++	 * scheduling.
++	 */
++	if (smp_load_acquire(&p->on_cpu) &&
++	    ttwu_queue_wakelist(p, task_cpu(p), wake_flags))
++		goto unlock;
++
++	/*
++	 * If the owning (remote) CPU is still in the middle of schedule() with
++	 * this task as prev, wait until it's done referencing the task.
++	 *
++	 * Pairs with the smp_store_release() in finish_task().
++	 *
++	 * This ensures that tasks getting woken will be fully ordered against
++	 * their previous state and preserve Program Order.
++	 */
++	smp_cond_load_acquire(&p->on_cpu, !VAL);
++
++	sched_task_ttwu(p);
++
++	cpu = select_task_rq(p);
++
++	if (cpu != task_cpu(p)) {
++		if (p->in_iowait) {
++			delayacct_blkio_end(p);
++			atomic_dec(&task_rq(p)->nr_iowait);
++		}
++
++		wake_flags |= WF_MIGRATED;
++		set_task_cpu(p, cpu);
++	}
++#else
++	cpu = task_cpu(p);
++#endif /* CONFIG_SMP */
++
++	ttwu_queue(p, cpu, wake_flags);
++unlock:
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++out:
++	if (success)
++		ttwu_stat(p, task_cpu(p), wake_flags);
++	preempt_enable();
++
++	return success;
++}
++
++static bool __task_needs_rq_lock(struct task_struct *p)
++{
++	unsigned int state = READ_ONCE(p->__state);
++
++	/*
++	 * Since pi->lock blocks try_to_wake_up(), we don't need rq->lock when
++	 * the task is blocked. Make sure to check @state since ttwu() can drop
++	 * locks at the end, see ttwu_queue_wakelist().
++	 */
++	if (state == TASK_RUNNING || state == TASK_WAKING)
++		return true;
++
++	/*
++	 * Ensure we load p->on_rq after p->__state, otherwise it would be
++	 * possible to, falsely, observe p->on_rq == 0.
++	 *
++	 * See try_to_wake_up() for a longer comment.
++	 */
++	smp_rmb();
++	if (p->on_rq)
++		return true;
++
++#ifdef CONFIG_SMP
++	/*
++	 * Ensure the task has finished __schedule() and will not be referenced
++	 * anymore. Again, see try_to_wake_up() for a longer comment.
++	 */
++	smp_rmb();
++	smp_cond_load_acquire(&p->on_cpu, !VAL);
++#endif
++
++	return false;
++}
++
++/**
++ * task_call_func - Invoke a function on task in fixed state
++ * @p: Process for which the function is to be invoked, can be @current.
++ * @func: Function to invoke.
++ * @arg: Argument to function.
++ *
++ * Fix the task in it's current state by avoiding wakeups and or rq operations
++ * and call @func(@arg) on it.  This function can use ->on_rq and task_curr()
++ * to work out what the state is, if required.  Given that @func can be invoked
++ * with a runqueue lock held, it had better be quite lightweight.
++ *
++ * Returns:
++ *   Whatever @func returns
++ */
++int task_call_func(struct task_struct *p, task_call_f func, void *arg)
++{
++	struct rq *rq = NULL;
++	struct rq_flags rf;
++	int ret;
++
++	raw_spin_lock_irqsave(&p->pi_lock, rf.flags);
++
++	if (__task_needs_rq_lock(p))
++		rq = __task_rq_lock(p, &rf);
++
++	/*
++	 * At this point the task is pinned; either:
++	 *  - blocked and we're holding off wakeups      (pi->lock)
++	 *  - woken, and we're holding off enqueue       (rq->lock)
++	 *  - queued, and we're holding off schedule     (rq->lock)
++	 *  - running, and we're holding off de-schedule (rq->lock)
++	 *
++	 * The called function (@func) can use: task_curr(), p->on_rq and
++	 * p->__state to differentiate between these states.
++	 */
++	ret = func(p, arg);
++
++	if (rq)
++		__task_rq_unlock(rq, &rf);
++
++	raw_spin_unlock_irqrestore(&p->pi_lock, rf.flags);
++	return ret;
++}
++
++/**
++ * cpu_curr_snapshot - Return a snapshot of the currently running task
++ * @cpu: The CPU on which to snapshot the task.
++ *
++ * Returns the task_struct pointer of the task "currently" running on
++ * the specified CPU.  If the same task is running on that CPU throughout,
++ * the return value will be a pointer to that task's task_struct structure.
++ * If the CPU did any context switches even vaguely concurrently with the
++ * execution of this function, the return value will be a pointer to the
++ * task_struct structure of a randomly chosen task that was running on
++ * that CPU somewhere around the time that this function was executing.
++ *
++ * If the specified CPU was offline, the return value is whatever it
++ * is, perhaps a pointer to the task_struct structure of that CPU's idle
++ * task, but there is no guarantee.  Callers wishing a useful return
++ * value must take some action to ensure that the specified CPU remains
++ * online throughout.
++ *
++ * This function executes full memory barriers before and after fetching
++ * the pointer, which permits the caller to confine this function's fetch
++ * with respect to the caller's accesses to other shared variables.
++ */
++struct task_struct *cpu_curr_snapshot(int cpu)
++{
++	struct task_struct *t;
++
++	smp_mb(); /* Pairing determined by caller's synchronization design. */
++	t = rcu_dereference(cpu_curr(cpu));
++	smp_mb(); /* Pairing determined by caller's synchronization design. */
++	return t;
++}
++
++/**
++ * wake_up_process - Wake up a specific process
++ * @p: The process to be woken up.
++ *
++ * Attempt to wake up the nominated process and move it to the set of runnable
++ * processes.
++ *
++ * Return: 1 if the process was woken up, 0 if it was already running.
++ *
++ * This function executes a full memory barrier before accessing the task state.
++ */
++int wake_up_process(struct task_struct *p)
++{
++	return try_to_wake_up(p, TASK_NORMAL, 0);
++}
++EXPORT_SYMBOL(wake_up_process);
++
++int wake_up_state(struct task_struct *p, unsigned int state)
++{
++	return try_to_wake_up(p, state, 0);
++}
++
++/*
++ * Perform scheduler related setup for a newly forked process p.
++ * p is forked by current.
++ *
++ * __sched_fork() is basic setup used by init_idle() too:
++ */
++static inline void __sched_fork(unsigned long clone_flags, struct task_struct *p)
++{
++	p->on_rq			= 0;
++	p->on_cpu			= 0;
++	p->utime			= 0;
++	p->stime			= 0;
++	p->sched_time			= 0;
++
++#ifdef CONFIG_SCHEDSTATS
++	/* Even if schedstat is disabled, there should not be garbage */
++	memset(&p->stats, 0, sizeof(p->stats));
++#endif
++
++#ifdef CONFIG_PREEMPT_NOTIFIERS
++	INIT_HLIST_HEAD(&p->preempt_notifiers);
++#endif
++
++#ifdef CONFIG_COMPACTION
++	p->capture_control = NULL;
++#endif
++#ifdef CONFIG_SMP
++	p->wake_entry.u_flags = CSD_TYPE_TTWU;
++#endif
++}
++
++/*
++ * fork()/clone()-time setup:
++ */
++int sched_fork(unsigned long clone_flags, struct task_struct *p)
++{
++	__sched_fork(clone_flags, p);
++	/*
++	 * We mark the process as NEW here. This guarantees that
++	 * nobody will actually run it, and a signal or other external
++	 * event cannot wake it up and insert it on the runqueue either.
++	 */
++	p->__state = TASK_NEW;
++
++	/*
++	 * Make sure we do not leak PI boosting priority to the child.
++	 */
++	p->prio = current->normal_prio;
++
++	/*
++	 * Revert to default priority/policy on fork if requested.
++	 */
++	if (unlikely(p->sched_reset_on_fork)) {
++		if (task_has_rt_policy(p)) {
++			p->policy = SCHED_NORMAL;
++			p->static_prio = NICE_TO_PRIO(0);
++			p->rt_priority = 0;
++		} else if (PRIO_TO_NICE(p->static_prio) < 0)
++			p->static_prio = NICE_TO_PRIO(0);
++
++		p->prio = p->normal_prio = p->static_prio;
++
++		/*
++		 * We don't need the reset flag anymore after the fork. It has
++		 * fulfilled its duty:
++		 */
++		p->sched_reset_on_fork = 0;
++	}
++
++#ifdef CONFIG_SCHED_INFO
++	if (unlikely(sched_info_on()))
++		memset(&p->sched_info, 0, sizeof(p->sched_info));
++#endif
++	init_task_preempt_count(p);
++
++	return 0;
++}
++
++void sched_cgroup_fork(struct task_struct *p, struct kernel_clone_args *kargs)
++{
++	unsigned long flags;
++	struct rq *rq;
++
++	/*
++	 * Because we're not yet on the pid-hash, p->pi_lock isn't strictly
++	 * required yet, but lockdep gets upset if rules are violated.
++	 */
++	raw_spin_lock_irqsave(&p->pi_lock, flags);
++	/*
++	 * Share the timeslice between parent and child, thus the
++	 * total amount of pending timeslices in the system doesn't change,
++	 * resulting in more scheduling fairness.
++	 */
++	rq = this_rq();
++	raw_spin_lock(&rq->lock);
++
++	rq->curr->time_slice /= 2;
++	p->time_slice = rq->curr->time_slice;
++#ifdef CONFIG_SCHED_HRTICK
++	hrtick_start(rq, rq->curr->time_slice);
++#endif
++
++	if (p->time_slice < RESCHED_NS) {
++		p->time_slice = sched_timeslice_ns;
++		resched_curr(rq);
++	}
++	sched_task_fork(p, rq);
++	raw_spin_unlock(&rq->lock);
++
++	rseq_migrate(p);
++	/*
++	 * We're setting the CPU for the first time, we don't migrate,
++	 * so use __set_task_cpu().
++	 */
++	__set_task_cpu(p, smp_processor_id());
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++}
++
++void sched_post_fork(struct task_struct *p)
++{
++}
++
++#ifdef CONFIG_SCHEDSTATS
++
++DEFINE_STATIC_KEY_FALSE(sched_schedstats);
++
++static void set_schedstats(bool enabled)
++{
++	if (enabled)
++		static_branch_enable(&sched_schedstats);
++	else
++		static_branch_disable(&sched_schedstats);
++}
++
++void force_schedstat_enabled(void)
++{
++	if (!schedstat_enabled()) {
++		pr_info("kernel profiling enabled schedstats, disable via kernel.sched_schedstats.\n");
++		static_branch_enable(&sched_schedstats);
++	}
++}
++
++static int __init setup_schedstats(char *str)
++{
++	int ret = 0;
++	if (!str)
++		goto out;
++
++	if (!strcmp(str, "enable")) {
++		set_schedstats(true);
++		ret = 1;
++	} else if (!strcmp(str, "disable")) {
++		set_schedstats(false);
++		ret = 1;
++	}
++out:
++	if (!ret)
++		pr_warn("Unable to parse schedstats=\n");
++
++	return ret;
++}
++__setup("schedstats=", setup_schedstats);
++
++#ifdef CONFIG_PROC_SYSCTL
++static int sysctl_schedstats(struct ctl_table *table, int write, void *buffer,
++		size_t *lenp, loff_t *ppos)
++{
++	struct ctl_table t;
++	int err;
++	int state = static_branch_likely(&sched_schedstats);
++
++	if (write && !capable(CAP_SYS_ADMIN))
++		return -EPERM;
++
++	t = *table;
++	t.data = &state;
++	err = proc_dointvec_minmax(&t, write, buffer, lenp, ppos);
++	if (err < 0)
++		return err;
++	if (write)
++		set_schedstats(state);
++	return err;
++}
++
++static struct ctl_table sched_core_sysctls[] = {
++	{
++		.procname       = "sched_schedstats",
++		.data           = NULL,
++		.maxlen         = sizeof(unsigned int),
++		.mode           = 0644,
++		.proc_handler   = sysctl_schedstats,
++		.extra1         = SYSCTL_ZERO,
++		.extra2         = SYSCTL_ONE,
++	},
++	{}
++};
++static int __init sched_core_sysctl_init(void)
++{
++	register_sysctl_init("kernel", sched_core_sysctls);
++	return 0;
++}
++late_initcall(sched_core_sysctl_init);
++#endif /* CONFIG_PROC_SYSCTL */
++#endif /* CONFIG_SCHEDSTATS */
++
++/*
++ * wake_up_new_task - wake up a newly created task for the first time.
++ *
++ * This function will do some initial scheduler statistics housekeeping
++ * that must be done for every newly created context, then puts the task
++ * on the runqueue and wakes it.
++ */
++void wake_up_new_task(struct task_struct *p)
++{
++	unsigned long flags;
++	struct rq *rq;
++
++	raw_spin_lock_irqsave(&p->pi_lock, flags);
++	WRITE_ONCE(p->__state, TASK_RUNNING);
++	rq = cpu_rq(select_task_rq(p));
++#ifdef CONFIG_SMP
++	rseq_migrate(p);
++	/*
++	 * Fork balancing, do it here and not earlier because:
++	 * - cpus_ptr can change in the fork path
++	 * - any previously selected CPU might disappear through hotplug
++	 *
++	 * Use __set_task_cpu() to avoid calling sched_class::migrate_task_rq,
++	 * as we're not fully set-up yet.
++	 */
++	__set_task_cpu(p, cpu_of(rq));
++#endif
++
++	raw_spin_lock(&rq->lock);
++	update_rq_clock(rq);
++
++	activate_task(p, rq);
++	trace_sched_wakeup_new(p);
++	check_preempt_curr(rq);
++
++	raw_spin_unlock(&rq->lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++}
++
++#ifdef CONFIG_PREEMPT_NOTIFIERS
++
++static DEFINE_STATIC_KEY_FALSE(preempt_notifier_key);
++
++void preempt_notifier_inc(void)
++{
++	static_branch_inc(&preempt_notifier_key);
++}
++EXPORT_SYMBOL_GPL(preempt_notifier_inc);
++
++void preempt_notifier_dec(void)
++{
++	static_branch_dec(&preempt_notifier_key);
++}
++EXPORT_SYMBOL_GPL(preempt_notifier_dec);
++
++/**
++ * preempt_notifier_register - tell me when current is being preempted & rescheduled
++ * @notifier: notifier struct to register
++ */
++void preempt_notifier_register(struct preempt_notifier *notifier)
++{
++	if (!static_branch_unlikely(&preempt_notifier_key))
++		WARN(1, "registering preempt_notifier while notifiers disabled\n");
++
++	hlist_add_head(&notifier->link, &current->preempt_notifiers);
++}
++EXPORT_SYMBOL_GPL(preempt_notifier_register);
++
++/**
++ * preempt_notifier_unregister - no longer interested in preemption notifications
++ * @notifier: notifier struct to unregister
++ *
++ * This is *not* safe to call from within a preemption notifier.
++ */
++void preempt_notifier_unregister(struct preempt_notifier *notifier)
++{
++	hlist_del(&notifier->link);
++}
++EXPORT_SYMBOL_GPL(preempt_notifier_unregister);
++
++static void __fire_sched_in_preempt_notifiers(struct task_struct *curr)
++{
++	struct preempt_notifier *notifier;
++
++	hlist_for_each_entry(notifier, &curr->preempt_notifiers, link)
++		notifier->ops->sched_in(notifier, raw_smp_processor_id());
++}
++
++static __always_inline void fire_sched_in_preempt_notifiers(struct task_struct *curr)
++{
++	if (static_branch_unlikely(&preempt_notifier_key))
++		__fire_sched_in_preempt_notifiers(curr);
++}
++
++static void
++__fire_sched_out_preempt_notifiers(struct task_struct *curr,
++				   struct task_struct *next)
++{
++	struct preempt_notifier *notifier;
++
++	hlist_for_each_entry(notifier, &curr->preempt_notifiers, link)
++		notifier->ops->sched_out(notifier, next);
++}
++
++static __always_inline void
++fire_sched_out_preempt_notifiers(struct task_struct *curr,
++				 struct task_struct *next)
++{
++	if (static_branch_unlikely(&preempt_notifier_key))
++		__fire_sched_out_preempt_notifiers(curr, next);
++}
++
++#else /* !CONFIG_PREEMPT_NOTIFIERS */
++
++static inline void fire_sched_in_preempt_notifiers(struct task_struct *curr)
++{
++}
++
++static inline void
++fire_sched_out_preempt_notifiers(struct task_struct *curr,
++				 struct task_struct *next)
++{
++}
++
++#endif /* CONFIG_PREEMPT_NOTIFIERS */
++
++static inline void prepare_task(struct task_struct *next)
++{
++	/*
++	 * Claim the task as running, we do this before switching to it
++	 * such that any running task will have this set.
++	 *
++	 * See the smp_load_acquire(&p->on_cpu) case in ttwu() and
++	 * its ordering comment.
++	 */
++	WRITE_ONCE(next->on_cpu, 1);
++}
++
++static inline void finish_task(struct task_struct *prev)
++{
++#ifdef CONFIG_SMP
++	/*
++	 * This must be the very last reference to @prev from this CPU. After
++	 * p->on_cpu is cleared, the task can be moved to a different CPU. We
++	 * must ensure this doesn't happen until the switch is completely
++	 * finished.
++	 *
++	 * In particular, the load of prev->state in finish_task_switch() must
++	 * happen before this.
++	 *
++	 * Pairs with the smp_cond_load_acquire() in try_to_wake_up().
++	 */
++	smp_store_release(&prev->on_cpu, 0);
++#else
++	prev->on_cpu = 0;
++#endif
++}
++
++#ifdef CONFIG_SMP
++
++static void do_balance_callbacks(struct rq *rq, struct balance_callback *head)
++{
++	void (*func)(struct rq *rq);
++	struct balance_callback *next;
++
++	lockdep_assert_held(&rq->lock);
++
++	while (head) {
++		func = (void (*)(struct rq *))head->func;
++		next = head->next;
++		head->next = NULL;
++		head = next;
++
++		func(rq);
++	}
++}
++
++static void balance_push(struct rq *rq);
++
++/*
++ * balance_push_callback is a right abuse of the callback interface and plays
++ * by significantly different rules.
++ *
++ * Where the normal balance_callback's purpose is to be ran in the same context
++ * that queued it (only later, when it's safe to drop rq->lock again),
++ * balance_push_callback is specifically targeted at __schedule().
++ *
++ * This abuse is tolerated because it places all the unlikely/odd cases behind
++ * a single test, namely: rq->balance_callback == NULL.
++ */
++struct balance_callback balance_push_callback = {
++	.next = NULL,
++	.func = balance_push,
++};
++
++static inline struct balance_callback *
++__splice_balance_callbacks(struct rq *rq, bool split)
++{
++	struct balance_callback *head = rq->balance_callback;
++
++	if (likely(!head))
++		return NULL;
++
++	lockdep_assert_rq_held(rq);
++	/*
++	 * Must not take balance_push_callback off the list when
++	 * splice_balance_callbacks() and balance_callbacks() are not
++	 * in the same rq->lock section.
++	 *
++	 * In that case it would be possible for __schedule() to interleave
++	 * and observe the list empty.
++	 */
++	if (split && head == &balance_push_callback)
++		head = NULL;
++	else
++		rq->balance_callback = NULL;
++
++	return head;
++}
++
++static inline struct balance_callback *splice_balance_callbacks(struct rq *rq)
++{
++	return __splice_balance_callbacks(rq, true);
++}
++
++static void __balance_callbacks(struct rq *rq)
++{
++	do_balance_callbacks(rq, __splice_balance_callbacks(rq, false));
++}
++
++static inline void balance_callbacks(struct rq *rq, struct balance_callback *head)
++{
++	unsigned long flags;
++
++	if (unlikely(head)) {
++		raw_spin_lock_irqsave(&rq->lock, flags);
++		do_balance_callbacks(rq, head);
++		raw_spin_unlock_irqrestore(&rq->lock, flags);
++	}
++}
++
++#else
++
++static inline void __balance_callbacks(struct rq *rq)
++{
++}
++
++static inline struct balance_callback *splice_balance_callbacks(struct rq *rq)
++{
++	return NULL;
++}
++
++static inline void balance_callbacks(struct rq *rq, struct balance_callback *head)
++{
++}
++
++#endif
++
++static inline void
++prepare_lock_switch(struct rq *rq, struct task_struct *next)
++{
++	/*
++	 * Since the runqueue lock will be released by the next
++	 * task (which is an invalid locking op but in the case
++	 * of the scheduler it's an obvious special-case), so we
++	 * do an early lockdep release here:
++	 */
++	spin_release(&rq->lock.dep_map, _THIS_IP_);
++#ifdef CONFIG_DEBUG_SPINLOCK
++	/* this is a valid case when another task releases the spinlock */
++	rq->lock.owner = next;
++#endif
++}
++
++static inline void finish_lock_switch(struct rq *rq)
++{
++	/*
++	 * If we are tracking spinlock dependencies then we have to
++	 * fix up the runqueue lock - which gets 'carried over' from
++	 * prev into current:
++	 */
++	spin_acquire(&rq->lock.dep_map, 0, 0, _THIS_IP_);
++	__balance_callbacks(rq);
++	raw_spin_unlock_irq(&rq->lock);
++}
++
++/*
++ * NOP if the arch has not defined these:
++ */
++
++#ifndef prepare_arch_switch
++# define prepare_arch_switch(next)	do { } while (0)
++#endif
++
++#ifndef finish_arch_post_lock_switch
++# define finish_arch_post_lock_switch()	do { } while (0)
++#endif
++
++static inline void kmap_local_sched_out(void)
++{
++#ifdef CONFIG_KMAP_LOCAL
++	if (unlikely(current->kmap_ctrl.idx))
++		__kmap_local_sched_out();
++#endif
++}
++
++static inline void kmap_local_sched_in(void)
++{
++#ifdef CONFIG_KMAP_LOCAL
++	if (unlikely(current->kmap_ctrl.idx))
++		__kmap_local_sched_in();
++#endif
++}
++
++/**
++ * prepare_task_switch - prepare to switch tasks
++ * @rq: the runqueue preparing to switch
++ * @next: the task we are going to switch to.
++ *
++ * This is called with the rq lock held and interrupts off. It must
++ * be paired with a subsequent finish_task_switch after the context
++ * switch.
++ *
++ * prepare_task_switch sets up locking and calls architecture specific
++ * hooks.
++ */
++static inline void
++prepare_task_switch(struct rq *rq, struct task_struct *prev,
++		    struct task_struct *next)
++{
++	kcov_prepare_switch(prev);
++	sched_info_switch(rq, prev, next);
++	perf_event_task_sched_out(prev, next);
++	rseq_preempt(prev);
++	switch_mm_cid(prev, next);
++	fire_sched_out_preempt_notifiers(prev, next);
++	kmap_local_sched_out();
++	prepare_task(next);
++	prepare_arch_switch(next);
++}
++
++/**
++ * finish_task_switch - clean up after a task-switch
++ * @rq: runqueue associated with task-switch
++ * @prev: the thread we just switched away from.
++ *
++ * finish_task_switch must be called after the context switch, paired
++ * with a prepare_task_switch call before the context switch.
++ * finish_task_switch will reconcile locking set up by prepare_task_switch,
++ * and do any other architecture-specific cleanup actions.
++ *
++ * Note that we may have delayed dropping an mm in context_switch(). If
++ * so, we finish that here outside of the runqueue lock.  (Doing it
++ * with the lock held can cause deadlocks; see schedule() for
++ * details.)
++ *
++ * The context switch have flipped the stack from under us and restored the
++ * local variables which were saved when this task called schedule() in the
++ * past. prev == current is still correct but we need to recalculate this_rq
++ * because prev may have moved to another CPU.
++ */
++static struct rq *finish_task_switch(struct task_struct *prev)
++	__releases(rq->lock)
++{
++	struct rq *rq = this_rq();
++	struct mm_struct *mm = rq->prev_mm;
++	unsigned int prev_state;
++
++	/*
++	 * The previous task will have left us with a preempt_count of 2
++	 * because it left us after:
++	 *
++	 *	schedule()
++	 *	  preempt_disable();			// 1
++	 *	  __schedule()
++	 *	    raw_spin_lock_irq(&rq->lock)	// 2
++	 *
++	 * Also, see FORK_PREEMPT_COUNT.
++	 */
++	if (WARN_ONCE(preempt_count() != 2*PREEMPT_DISABLE_OFFSET,
++		      "corrupted preempt_count: %s/%d/0x%x\n",
++		      current->comm, current->pid, preempt_count()))
++		preempt_count_set(FORK_PREEMPT_COUNT);
++
++	rq->prev_mm = NULL;
++
++	/*
++	 * A task struct has one reference for the use as "current".
++	 * If a task dies, then it sets TASK_DEAD in tsk->state and calls
++	 * schedule one last time. The schedule call will never return, and
++	 * the scheduled task must drop that reference.
++	 *
++	 * We must observe prev->state before clearing prev->on_cpu (in
++	 * finish_task), otherwise a concurrent wakeup can get prev
++	 * running on another CPU and we could rave with its RUNNING -> DEAD
++	 * transition, resulting in a double drop.
++	 */
++	prev_state = READ_ONCE(prev->__state);
++	vtime_task_switch(prev);
++	perf_event_task_sched_in(prev, current);
++	finish_task(prev);
++	tick_nohz_task_switch();
++	finish_lock_switch(rq);
++	finish_arch_post_lock_switch();
++	kcov_finish_switch(current);
++	/*
++	 * kmap_local_sched_out() is invoked with rq::lock held and
++	 * interrupts disabled. There is no requirement for that, but the
++	 * sched out code does not have an interrupt enabled section.
++	 * Restoring the maps on sched in does not require interrupts being
++	 * disabled either.
++	 */
++	kmap_local_sched_in();
++
++	fire_sched_in_preempt_notifiers(current);
++	/*
++	 * When switching through a kernel thread, the loop in
++	 * membarrier_{private,global}_expedited() may have observed that
++	 * kernel thread and not issued an IPI. It is therefore possible to
++	 * schedule between user->kernel->user threads without passing though
++	 * switch_mm(). Membarrier requires a barrier after storing to
++	 * rq->curr, before returning to userspace, so provide them here:
++	 *
++	 * - a full memory barrier for {PRIVATE,GLOBAL}_EXPEDITED, implicitly
++	 *   provided by mmdrop(),
++	 * - a sync_core for SYNC_CORE.
++	 */
++	if (mm) {
++		membarrier_mm_sync_core_before_usermode(mm);
++		mmdrop_sched(mm);
++	}
++	if (unlikely(prev_state == TASK_DEAD)) {
++		/* Task is done with its stack. */
++		put_task_stack(prev);
++
++		put_task_struct_rcu_user(prev);
++	}
++
++	return rq;
++}
++
++/**
++ * schedule_tail - first thing a freshly forked thread must call.
++ * @prev: the thread we just switched away from.
++ */
++asmlinkage __visible void schedule_tail(struct task_struct *prev)
++	__releases(rq->lock)
++{
++	/*
++	 * New tasks start with FORK_PREEMPT_COUNT, see there and
++	 * finish_task_switch() for details.
++	 *
++	 * finish_task_switch() will drop rq->lock() and lower preempt_count
++	 * and the preempt_enable() will end up enabling preemption (on
++	 * PREEMPT_COUNT kernels).
++	 */
++
++	finish_task_switch(prev);
++	preempt_enable();
++
++	if (current->set_child_tid)
++		put_user(task_pid_vnr(current), current->set_child_tid);
++
++	calculate_sigpending();
++}
++
++/*
++ * context_switch - switch to the new MM and the new thread's register state.
++ */
++static __always_inline struct rq *
++context_switch(struct rq *rq, struct task_struct *prev,
++	       struct task_struct *next)
++{
++	prepare_task_switch(rq, prev, next);
++
++	/*
++	 * For paravirt, this is coupled with an exit in switch_to to
++	 * combine the page table reload and the switch backend into
++	 * one hypercall.
++	 */
++	arch_start_context_switch(prev);
++
++	/*
++	 * kernel -> kernel   lazy + transfer active
++	 *   user -> kernel   lazy + mmgrab() active
++	 *
++	 * kernel ->   user   switch + mmdrop() active
++	 *   user ->   user   switch
++	 */
++	if (!next->mm) {                                // to kernel
++		enter_lazy_tlb(prev->active_mm, next);
++
++		next->active_mm = prev->active_mm;
++		if (prev->mm)                           // from user
++			mmgrab(prev->active_mm);
++		else
++			prev->active_mm = NULL;
++	} else {                                        // to user
++		membarrier_switch_mm(rq, prev->active_mm, next->mm);
++		/*
++		 * sys_membarrier() requires an smp_mb() between setting
++		 * rq->curr / membarrier_switch_mm() and returning to userspace.
++		 *
++		 * The below provides this either through switch_mm(), or in
++		 * case 'prev->active_mm == next->mm' through
++		 * finish_task_switch()'s mmdrop().
++		 */
++		switch_mm_irqs_off(prev->active_mm, next->mm, next);
++		lru_gen_use_mm(next->mm);
++
++		if (!prev->mm) {                        // from kernel
++			/* will mmdrop() in finish_task_switch(). */
++			rq->prev_mm = prev->active_mm;
++			prev->active_mm = NULL;
++		}
++	}
++
++	prepare_lock_switch(rq, next);
++
++	/* Here we just switch the register state and the stack. */
++	switch_to(prev, next, prev);
++	barrier();
++
++	return finish_task_switch(prev);
++}
++
++/*
++ * nr_running, nr_uninterruptible and nr_context_switches:
++ *
++ * externally visible scheduler statistics: current number of runnable
++ * threads, total number of context switches performed since bootup.
++ */
++unsigned int nr_running(void)
++{
++	unsigned int i, sum = 0;
++
++	for_each_online_cpu(i)
++		sum += cpu_rq(i)->nr_running;
++
++	return sum;
++}
++
++/*
++ * Check if only the current task is running on the CPU.
++ *
++ * Caution: this function does not check that the caller has disabled
++ * preemption, thus the result might have a time-of-check-to-time-of-use
++ * race.  The caller is responsible to use it correctly, for example:
++ *
++ * - from a non-preemptible section (of course)
++ *
++ * - from a thread that is bound to a single CPU
++ *
++ * - in a loop with very short iterations (e.g. a polling loop)
++ */
++bool single_task_running(void)
++{
++	return raw_rq()->nr_running == 1;
++}
++EXPORT_SYMBOL(single_task_running);
++
++unsigned long long nr_context_switches_cpu(int cpu)
++{
++	return cpu_rq(cpu)->nr_switches;
++}
++
++unsigned long long nr_context_switches(void)
++{
++	int i;
++	unsigned long long sum = 0;
++
++	for_each_possible_cpu(i)
++		sum += cpu_rq(i)->nr_switches;
++
++	return sum;
++}
++
++/*
++ * Consumers of these two interfaces, like for example the cpuidle menu
++ * governor, are using nonsensical data. Preferring shallow idle state selection
++ * for a CPU that has IO-wait which might not even end up running the task when
++ * it does become runnable.
++ */
++
++unsigned int nr_iowait_cpu(int cpu)
++{
++	return atomic_read(&cpu_rq(cpu)->nr_iowait);
++}
++
++/*
++ * IO-wait accounting, and how it's mostly bollocks (on SMP).
++ *
++ * The idea behind IO-wait account is to account the idle time that we could
++ * have spend running if it were not for IO. That is, if we were to improve the
++ * storage performance, we'd have a proportional reduction in IO-wait time.
++ *
++ * This all works nicely on UP, where, when a task blocks on IO, we account
++ * idle time as IO-wait, because if the storage were faster, it could've been
++ * running and we'd not be idle.
++ *
++ * This has been extended to SMP, by doing the same for each CPU. This however
++ * is broken.
++ *
++ * Imagine for instance the case where two tasks block on one CPU, only the one
++ * CPU will have IO-wait accounted, while the other has regular idle. Even
++ * though, if the storage were faster, both could've ran at the same time,
++ * utilising both CPUs.
++ *
++ * This means, that when looking globally, the current IO-wait accounting on
++ * SMP is a lower bound, by reason of under accounting.
++ *
++ * Worse, since the numbers are provided per CPU, they are sometimes
++ * interpreted per CPU, and that is nonsensical. A blocked task isn't strictly
++ * associated with any one particular CPU, it can wake to another CPU than it
++ * blocked on. This means the per CPU IO-wait number is meaningless.
++ *
++ * Task CPU affinities can make all that even more 'interesting'.
++ */
++
++unsigned int nr_iowait(void)
++{
++	unsigned int i, sum = 0;
++
++	for_each_possible_cpu(i)
++		sum += nr_iowait_cpu(i);
++
++	return sum;
++}
++
++#ifdef CONFIG_SMP
++
++/*
++ * sched_exec - execve() is a valuable balancing opportunity, because at
++ * this point the task has the smallest effective memory and cache
++ * footprint.
++ */
++void sched_exec(void)
++{
++}
++
++#endif
++
++DEFINE_PER_CPU(struct kernel_stat, kstat);
++DEFINE_PER_CPU(struct kernel_cpustat, kernel_cpustat);
++
++EXPORT_PER_CPU_SYMBOL(kstat);
++EXPORT_PER_CPU_SYMBOL(kernel_cpustat);
++
++static inline void update_curr(struct rq *rq, struct task_struct *p)
++{
++	s64 ns = rq->clock_task - p->last_ran;
++
++	p->sched_time += ns;
++	cgroup_account_cputime(p, ns);
++	account_group_exec_runtime(p, ns);
++
++	p->time_slice -= ns;
++	p->last_ran = rq->clock_task;
++}
++
++/*
++ * Return accounted runtime for the task.
++ * Return separately the current's pending runtime that have not been
++ * accounted yet.
++ */
++unsigned long long task_sched_runtime(struct task_struct *p)
++{
++	unsigned long flags;
++	struct rq *rq;
++	raw_spinlock_t *lock;
++	u64 ns;
++
++#if defined(CONFIG_64BIT) && defined(CONFIG_SMP)
++	/*
++	 * 64-bit doesn't need locks to atomically read a 64-bit value.
++	 * So we have a optimization chance when the task's delta_exec is 0.
++	 * Reading ->on_cpu is racy, but this is ok.
++	 *
++	 * If we race with it leaving CPU, we'll take a lock. So we're correct.
++	 * If we race with it entering CPU, unaccounted time is 0. This is
++	 * indistinguishable from the read occurring a few cycles earlier.
++	 * If we see ->on_cpu without ->on_rq, the task is leaving, and has
++	 * been accounted, so we're correct here as well.
++	 */
++	if (!p->on_cpu || !task_on_rq_queued(p))
++		return tsk_seruntime(p);
++#endif
++
++	rq = task_access_lock_irqsave(p, &lock, &flags);
++	/*
++	 * Must be ->curr _and_ ->on_rq.  If dequeued, we would
++	 * project cycles that may never be accounted to this
++	 * thread, breaking clock_gettime().
++	 */
++	if (p == rq->curr && task_on_rq_queued(p)) {
++		update_rq_clock(rq);
++		update_curr(rq, p);
++	}
++	ns = tsk_seruntime(p);
++	task_access_unlock_irqrestore(p, lock, &flags);
++
++	return ns;
++}
++
++/* This manages tasks that have run out of timeslice during a scheduler_tick */
++static inline void scheduler_task_tick(struct rq *rq)
++{
++	struct task_struct *p = rq->curr;
++
++	if (is_idle_task(p))
++		return;
++
++	update_curr(rq, p);
++	cpufreq_update_util(rq, 0);
++
++	/*
++	 * Tasks have less than RESCHED_NS of time slice left they will be
++	 * rescheduled.
++	 */
++	if (p->time_slice >= RESCHED_NS)
++		return;
++	set_tsk_need_resched(p);
++	set_preempt_need_resched();
++}
++
++#ifdef CONFIG_SCHED_DEBUG
++static u64 cpu_resched_latency(struct rq *rq)
++{
++	int latency_warn_ms = READ_ONCE(sysctl_resched_latency_warn_ms);
++	u64 resched_latency, now = rq_clock(rq);
++	static bool warned_once;
++
++	if (sysctl_resched_latency_warn_once && warned_once)
++		return 0;
++
++	if (!need_resched() || !latency_warn_ms)
++		return 0;
++
++	if (system_state == SYSTEM_BOOTING)
++		return 0;
++
++	if (!rq->last_seen_need_resched_ns) {
++		rq->last_seen_need_resched_ns = now;
++		rq->ticks_without_resched = 0;
++		return 0;
++	}
++
++	rq->ticks_without_resched++;
++	resched_latency = now - rq->last_seen_need_resched_ns;
++	if (resched_latency <= latency_warn_ms * NSEC_PER_MSEC)
++		return 0;
++
++	warned_once = true;
++
++	return resched_latency;
++}
++
++static int __init setup_resched_latency_warn_ms(char *str)
++{
++	long val;
++
++	if ((kstrtol(str, 0, &val))) {
++		pr_warn("Unable to set resched_latency_warn_ms\n");
++		return 1;
++	}
++
++	sysctl_resched_latency_warn_ms = val;
++	return 1;
++}
++__setup("resched_latency_warn_ms=", setup_resched_latency_warn_ms);
++#else
++static inline u64 cpu_resched_latency(struct rq *rq) { return 0; }
++#endif /* CONFIG_SCHED_DEBUG */
++
++/*
++ * This function gets called by the timer code, with HZ frequency.
++ * We call it with interrupts disabled.
++ */
++void scheduler_tick(void)
++{
++	int cpu __maybe_unused = smp_processor_id();
++	struct rq *rq = cpu_rq(cpu);
++	u64 resched_latency;
++
++	if (housekeeping_cpu(cpu, HK_TYPE_TICK))
++		arch_scale_freq_tick();
++
++	sched_clock_tick();
++
++	raw_spin_lock(&rq->lock);
++	update_rq_clock(rq);
++
++	scheduler_task_tick(rq);
++	if (sched_feat(LATENCY_WARN))
++		resched_latency = cpu_resched_latency(rq);
++	calc_global_load_tick(rq);
++
++	rq->last_tick = rq->clock;
++	raw_spin_unlock(&rq->lock);
++
++	if (sched_feat(LATENCY_WARN) && resched_latency)
++		resched_latency_warn(cpu, resched_latency);
++
++	perf_event_task_tick();
++}
++
++#ifdef CONFIG_SCHED_SMT
++static inline int sg_balance_cpu_stop(void *data)
++{
++	struct rq *rq = this_rq();
++	struct task_struct *p = data;
++	cpumask_t tmp;
++	unsigned long flags;
++
++	local_irq_save(flags);
++
++	raw_spin_lock(&p->pi_lock);
++	raw_spin_lock(&rq->lock);
++
++	rq->active_balance = 0;
++	/* _something_ may have changed the task, double check again */
++	if (task_on_rq_queued(p) && task_rq(p) == rq &&
++	    cpumask_and(&tmp, p->cpus_ptr, &sched_sg_idle_mask) &&
++	    !is_migration_disabled(p)) {
++		int cpu = cpu_of(rq);
++		int dcpu = __best_mask_cpu(&tmp, per_cpu(sched_cpu_llc_mask, cpu));
++		rq = move_queued_task(rq, p, dcpu);
++	}
++
++	raw_spin_unlock(&rq->lock);
++	raw_spin_unlock(&p->pi_lock);
++
++	local_irq_restore(flags);
++
++	return 0;
++}
++
++/* sg_balance_trigger - trigger slibing group balance for @cpu */
++static inline int sg_balance_trigger(const int cpu)
++{
++	struct rq *rq= cpu_rq(cpu);
++	unsigned long flags;
++	struct task_struct *curr;
++	int res;
++
++	if (!raw_spin_trylock_irqsave(&rq->lock, flags))
++		return 0;
++	curr = rq->curr;
++	res = (!is_idle_task(curr)) && (1 == rq->nr_running) &&\
++	      cpumask_intersects(curr->cpus_ptr, &sched_sg_idle_mask) &&\
++	      !is_migration_disabled(curr) && (!rq->active_balance);
++
++	if (res)
++		rq->active_balance = 1;
++
++	raw_spin_unlock_irqrestore(&rq->lock, flags);
++
++	if (res)
++		stop_one_cpu_nowait(cpu, sg_balance_cpu_stop, curr,
++				    &rq->active_balance_work);
++	return res;
++}
++
++/*
++ * sg_balance - slibing group balance check for run queue @rq
++ */
++static inline void sg_balance(struct rq *rq, int cpu)
++{
++	cpumask_t chk;
++
++	/* exit when cpu is offline */
++	if (unlikely(!rq->online))
++		return;
++
++	/*
++	 * Only cpu in slibing idle group will do the checking and then
++	 * find potential cpus which can migrate the current running task
++	 */
++	if (cpumask_test_cpu(cpu, &sched_sg_idle_mask) &&
++	    cpumask_andnot(&chk, cpu_online_mask, sched_idle_mask) &&
++	    cpumask_andnot(&chk, &chk, &sched_rq_pending_mask)) {
++		int i;
++
++		for_each_cpu_wrap(i, &chk, cpu) {
++			if (!cpumask_intersects(cpu_smt_mask(i), sched_idle_mask) &&\
++			    sg_balance_trigger(i))
++				return;
++		}
++	}
++}
++#endif /* CONFIG_SCHED_SMT */
++
++#ifdef CONFIG_NO_HZ_FULL
++
++struct tick_work {
++	int			cpu;
++	atomic_t		state;
++	struct delayed_work	work;
++};
++/* Values for ->state, see diagram below. */
++#define TICK_SCHED_REMOTE_OFFLINE	0
++#define TICK_SCHED_REMOTE_OFFLINING	1
++#define TICK_SCHED_REMOTE_RUNNING	2
++
++/*
++ * State diagram for ->state:
++ *
++ *
++ *          TICK_SCHED_REMOTE_OFFLINE
++ *                    |   ^
++ *                    |   |
++ *                    |   | sched_tick_remote()
++ *                    |   |
++ *                    |   |
++ *                    +--TICK_SCHED_REMOTE_OFFLINING
++ *                    |   ^
++ *                    |   |
++ * sched_tick_start() |   | sched_tick_stop()
++ *                    |   |
++ *                    V   |
++ *          TICK_SCHED_REMOTE_RUNNING
++ *
++ *
++ * Other transitions get WARN_ON_ONCE(), except that sched_tick_remote()
++ * and sched_tick_start() are happy to leave the state in RUNNING.
++ */
++
++static struct tick_work __percpu *tick_work_cpu;
++
++static void sched_tick_remote(struct work_struct *work)
++{
++	struct delayed_work *dwork = to_delayed_work(work);
++	struct tick_work *twork = container_of(dwork, struct tick_work, work);
++	int cpu = twork->cpu;
++	struct rq *rq = cpu_rq(cpu);
++	struct task_struct *curr;
++	unsigned long flags;
++	u64 delta;
++	int os;
++
++	/*
++	 * Handle the tick only if it appears the remote CPU is running in full
++	 * dynticks mode. The check is racy by nature, but missing a tick or
++	 * having one too much is no big deal because the scheduler tick updates
++	 * statistics and checks timeslices in a time-independent way, regardless
++	 * of when exactly it is running.
++	 */
++	if (!tick_nohz_tick_stopped_cpu(cpu))
++		goto out_requeue;
++
++	raw_spin_lock_irqsave(&rq->lock, flags);
++	curr = rq->curr;
++	if (cpu_is_offline(cpu))
++		goto out_unlock;
++
++	update_rq_clock(rq);
++	if (!is_idle_task(curr)) {
++		/*
++		 * Make sure the next tick runs within a reasonable
++		 * amount of time.
++		 */
++		delta = rq_clock_task(rq) - curr->last_ran;
++		WARN_ON_ONCE(delta > (u64)NSEC_PER_SEC * 3);
++	}
++	scheduler_task_tick(rq);
++
++	calc_load_nohz_remote(rq);
++out_unlock:
++	raw_spin_unlock_irqrestore(&rq->lock, flags);
++
++out_requeue:
++	/*
++	 * Run the remote tick once per second (1Hz). This arbitrary
++	 * frequency is large enough to avoid overload but short enough
++	 * to keep scheduler internal stats reasonably up to date.  But
++	 * first update state to reflect hotplug activity if required.
++	 */
++	os = atomic_fetch_add_unless(&twork->state, -1, TICK_SCHED_REMOTE_RUNNING);
++	WARN_ON_ONCE(os == TICK_SCHED_REMOTE_OFFLINE);
++	if (os == TICK_SCHED_REMOTE_RUNNING)
++		queue_delayed_work(system_unbound_wq, dwork, HZ);
++}
++
++static void sched_tick_start(int cpu)
++{
++	int os;
++	struct tick_work *twork;
++
++	if (housekeeping_cpu(cpu, HK_TYPE_TICK))
++		return;
++
++	WARN_ON_ONCE(!tick_work_cpu);
++
++	twork = per_cpu_ptr(tick_work_cpu, cpu);
++	os = atomic_xchg(&twork->state, TICK_SCHED_REMOTE_RUNNING);
++	WARN_ON_ONCE(os == TICK_SCHED_REMOTE_RUNNING);
++	if (os == TICK_SCHED_REMOTE_OFFLINE) {
++		twork->cpu = cpu;
++		INIT_DELAYED_WORK(&twork->work, sched_tick_remote);
++		queue_delayed_work(system_unbound_wq, &twork->work, HZ);
++	}
++}
++
++#ifdef CONFIG_HOTPLUG_CPU
++static void sched_tick_stop(int cpu)
++{
++	struct tick_work *twork;
++	int os;
++
++	if (housekeeping_cpu(cpu, HK_TYPE_TICK))
++		return;
++
++	WARN_ON_ONCE(!tick_work_cpu);
++
++	twork = per_cpu_ptr(tick_work_cpu, cpu);
++	/* There cannot be competing actions, but don't rely on stop-machine. */
++	os = atomic_xchg(&twork->state, TICK_SCHED_REMOTE_OFFLINING);
++	WARN_ON_ONCE(os != TICK_SCHED_REMOTE_RUNNING);
++	/* Don't cancel, as this would mess up the state machine. */
++}
++#endif /* CONFIG_HOTPLUG_CPU */
++
++int __init sched_tick_offload_init(void)
++{
++	tick_work_cpu = alloc_percpu(struct tick_work);
++	BUG_ON(!tick_work_cpu);
++	return 0;
++}
++
++#else /* !CONFIG_NO_HZ_FULL */
++static inline void sched_tick_start(int cpu) { }
++static inline void sched_tick_stop(int cpu) { }
++#endif
++
++#if defined(CONFIG_PREEMPTION) && (defined(CONFIG_DEBUG_PREEMPT) || \
++				defined(CONFIG_PREEMPT_TRACER))
++/*
++ * If the value passed in is equal to the current preempt count
++ * then we just disabled preemption. Start timing the latency.
++ */
++static inline void preempt_latency_start(int val)
++{
++	if (preempt_count() == val) {
++		unsigned long ip = get_lock_parent_ip();
++#ifdef CONFIG_DEBUG_PREEMPT
++		current->preempt_disable_ip = ip;
++#endif
++		trace_preempt_off(CALLER_ADDR0, ip);
++	}
++}
++
++void preempt_count_add(int val)
++{
++#ifdef CONFIG_DEBUG_PREEMPT
++	/*
++	 * Underflow?
++	 */
++	if (DEBUG_LOCKS_WARN_ON((preempt_count() < 0)))
++		return;
++#endif
++	__preempt_count_add(val);
++#ifdef CONFIG_DEBUG_PREEMPT
++	/*
++	 * Spinlock count overflowing soon?
++	 */
++	DEBUG_LOCKS_WARN_ON((preempt_count() & PREEMPT_MASK) >=
++				PREEMPT_MASK - 10);
++#endif
++	preempt_latency_start(val);
++}
++EXPORT_SYMBOL(preempt_count_add);
++NOKPROBE_SYMBOL(preempt_count_add);
++
++/*
++ * If the value passed in equals to the current preempt count
++ * then we just enabled preemption. Stop timing the latency.
++ */
++static inline void preempt_latency_stop(int val)
++{
++	if (preempt_count() == val)
++		trace_preempt_on(CALLER_ADDR0, get_lock_parent_ip());
++}
++
++void preempt_count_sub(int val)
++{
++#ifdef CONFIG_DEBUG_PREEMPT
++	/*
++	 * Underflow?
++	 */
++	if (DEBUG_LOCKS_WARN_ON(val > preempt_count()))
++		return;
++	/*
++	 * Is the spinlock portion underflowing?
++	 */
++	if (DEBUG_LOCKS_WARN_ON((val < PREEMPT_MASK) &&
++			!(preempt_count() & PREEMPT_MASK)))
++		return;
++#endif
++
++	preempt_latency_stop(val);
++	__preempt_count_sub(val);
++}
++EXPORT_SYMBOL(preempt_count_sub);
++NOKPROBE_SYMBOL(preempt_count_sub);
++
++#else
++static inline void preempt_latency_start(int val) { }
++static inline void preempt_latency_stop(int val) { }
++#endif
++
++static inline unsigned long get_preempt_disable_ip(struct task_struct *p)
++{
++#ifdef CONFIG_DEBUG_PREEMPT
++	return p->preempt_disable_ip;
++#else
++	return 0;
++#endif
++}
++
++/*
++ * Print scheduling while atomic bug:
++ */
++static noinline void __schedule_bug(struct task_struct *prev)
++{
++	/* Save this before calling printk(), since that will clobber it */
++	unsigned long preempt_disable_ip = get_preempt_disable_ip(current);
++
++	if (oops_in_progress)
++		return;
++
++	printk(KERN_ERR "BUG: scheduling while atomic: %s/%d/0x%08x\n",
++		prev->comm, prev->pid, preempt_count());
++
++	debug_show_held_locks(prev);
++	print_modules();
++	if (irqs_disabled())
++		print_irqtrace_events(prev);
++	if (IS_ENABLED(CONFIG_DEBUG_PREEMPT)
++	    && in_atomic_preempt_off()) {
++		pr_err("Preemption disabled at:");
++		print_ip_sym(KERN_ERR, preempt_disable_ip);
++	}
++	check_panic_on_warn("scheduling while atomic");
++
++	dump_stack();
++	add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++}
++
++/*
++ * Various schedule()-time debugging checks and statistics:
++ */
++static inline void schedule_debug(struct task_struct *prev, bool preempt)
++{
++#ifdef CONFIG_SCHED_STACK_END_CHECK
++	if (task_stack_end_corrupted(prev))
++		panic("corrupted stack end detected inside scheduler\n");
++
++	if (task_scs_end_corrupted(prev))
++		panic("corrupted shadow stack detected inside scheduler\n");
++#endif
++
++#ifdef CONFIG_DEBUG_ATOMIC_SLEEP
++	if (!preempt && READ_ONCE(prev->__state) && prev->non_block_count) {
++		printk(KERN_ERR "BUG: scheduling in a non-blocking section: %s/%d/%i\n",
++			prev->comm, prev->pid, prev->non_block_count);
++		dump_stack();
++		add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++	}
++#endif
++
++	if (unlikely(in_atomic_preempt_off())) {
++		__schedule_bug(prev);
++		preempt_count_set(PREEMPT_DISABLED);
++	}
++	rcu_sleep_check();
++	SCHED_WARN_ON(ct_state() == CONTEXT_USER);
++
++	profile_hit(SCHED_PROFILING, __builtin_return_address(0));
++
++	schedstat_inc(this_rq()->sched_count);
++}
++
++#ifdef ALT_SCHED_DEBUG
++void alt_sched_debug(void)
++{
++	printk(KERN_INFO "sched: pending: 0x%04lx, idle: 0x%04lx, sg_idle: 0x%04lx\n",
++	       sched_rq_pending_mask.bits[0],
++	       sched_idle_mask->bits[0],
++	       sched_sg_idle_mask.bits[0]);
++}
++#else
++inline void alt_sched_debug(void) {}
++#endif
++
++#ifdef	CONFIG_SMP
++
++#ifdef CONFIG_PREEMPT_RT
++#define SCHED_NR_MIGRATE_BREAK 8
++#else
++#define SCHED_NR_MIGRATE_BREAK 32
++#endif
++
++const_debug unsigned int sysctl_sched_nr_migrate = SCHED_NR_MIGRATE_BREAK;
++
++/*
++ * Migrate pending tasks in @rq to @dest_cpu
++ */
++static inline int
++migrate_pending_tasks(struct rq *rq, struct rq *dest_rq, const int dest_cpu)
++{
++	struct task_struct *p, *skip = rq->curr;
++	int nr_migrated = 0;
++	int nr_tries = min(rq->nr_running / 2, sysctl_sched_nr_migrate);
++
++	/* WA to check rq->curr is still on rq */
++	if (!task_on_rq_queued(skip))
++		return 0;
++
++	while (skip != rq->idle && nr_tries &&
++	       (p = sched_rq_next_task(skip, rq)) != rq->idle) {
++		skip = sched_rq_next_task(p, rq);
++		if (cpumask_test_cpu(dest_cpu, p->cpus_ptr)) {
++			__SCHED_DEQUEUE_TASK(p, rq, 0, );
++			set_task_cpu(p, dest_cpu);
++			sched_task_sanity_check(p, dest_rq);
++			__SCHED_ENQUEUE_TASK(p, dest_rq, 0);
++			nr_migrated++;
++		}
++		nr_tries--;
++	}
++
++	return nr_migrated;
++}
++
++static inline int take_other_rq_tasks(struct rq *rq, int cpu)
++{
++	struct cpumask *topo_mask, *end_mask;
++
++	if (unlikely(!rq->online))
++		return 0;
++
++	if (cpumask_empty(&sched_rq_pending_mask))
++		return 0;
++
++	topo_mask = per_cpu(sched_cpu_topo_masks, cpu) + 1;
++	end_mask = per_cpu(sched_cpu_topo_end_mask, cpu);
++	do {
++		int i;
++		for_each_cpu_and(i, &sched_rq_pending_mask, topo_mask) {
++			int nr_migrated;
++			struct rq *src_rq;
++
++			src_rq = cpu_rq(i);
++			if (!do_raw_spin_trylock(&src_rq->lock))
++				continue;
++			spin_acquire(&src_rq->lock.dep_map,
++				     SINGLE_DEPTH_NESTING, 1, _RET_IP_);
++
++			if ((nr_migrated = migrate_pending_tasks(src_rq, rq, cpu))) {
++				src_rq->nr_running -= nr_migrated;
++				if (src_rq->nr_running < 2)
++					cpumask_clear_cpu(i, &sched_rq_pending_mask);
++
++				spin_release(&src_rq->lock.dep_map, _RET_IP_);
++				do_raw_spin_unlock(&src_rq->lock);
++
++				rq->nr_running += nr_migrated;
++				if (rq->nr_running > 1)
++					cpumask_set_cpu(cpu, &sched_rq_pending_mask);
++
++				update_sched_preempt_mask(rq);
++				cpufreq_update_util(rq, 0);
++
++				return 1;
++			}
++
++			spin_release(&src_rq->lock.dep_map, _RET_IP_);
++			do_raw_spin_unlock(&src_rq->lock);
++		}
++	} while (++topo_mask < end_mask);
++
++	return 0;
++}
++#endif
++
++/*
++ * Timeslices below RESCHED_NS are considered as good as expired as there's no
++ * point rescheduling when there's so little time left.
++ */
++static inline void check_curr(struct task_struct *p, struct rq *rq)
++{
++	if (unlikely(rq->idle == p))
++		return;
++
++	update_curr(rq, p);
++
++	if (p->time_slice < RESCHED_NS)
++		time_slice_expired(p, rq);
++}
++
++static inline struct task_struct *
++choose_next_task(struct rq *rq, int cpu)
++{
++	struct task_struct *next;
++
++	if (unlikely(rq->skip)) {
++		next = rq_runnable_task(rq);
++		if (next == rq->idle) {
++#ifdef	CONFIG_SMP
++			if (!take_other_rq_tasks(rq, cpu)) {
++#endif
++				rq->skip = NULL;
++				schedstat_inc(rq->sched_goidle);
++				return next;
++#ifdef	CONFIG_SMP
++			}
++			next = rq_runnable_task(rq);
++#endif
++		}
++		rq->skip = NULL;
++#ifdef CONFIG_HIGH_RES_TIMERS
++		hrtick_start(rq, next->time_slice);
++#endif
++		return next;
++	}
++
++	next = sched_rq_first_task(rq);
++	if (next == rq->idle) {
++#ifdef	CONFIG_SMP
++		if (!take_other_rq_tasks(rq, cpu)) {
++#endif
++			schedstat_inc(rq->sched_goidle);
++			/*printk(KERN_INFO "sched: choose_next_task(%d) idle %px\n", cpu, next);*/
++			return next;
++#ifdef	CONFIG_SMP
++		}
++		next = sched_rq_first_task(rq);
++#endif
++	}
++#ifdef CONFIG_HIGH_RES_TIMERS
++	hrtick_start(rq, next->time_slice);
++#endif
++	/*printk(KERN_INFO "sched: choose_next_task(%d) next %px\n", cpu, next);*/
++	return next;
++}
++
++/*
++ * Constants for the sched_mode argument of __schedule().
++ *
++ * The mode argument allows RT enabled kernels to differentiate a
++ * preemption from blocking on an 'sleeping' spin/rwlock. Note that
++ * SM_MASK_PREEMPT for !RT has all bits set, which allows the compiler to
++ * optimize the AND operation out and just check for zero.
++ */
++#define SM_NONE			0x0
++#define SM_PREEMPT		0x1
++#define SM_RTLOCK_WAIT		0x2
++
++#ifndef CONFIG_PREEMPT_RT
++# define SM_MASK_PREEMPT	(~0U)
++#else
++# define SM_MASK_PREEMPT	SM_PREEMPT
++#endif
++
++/*
++ * schedule() is the main scheduler function.
++ *
++ * The main means of driving the scheduler and thus entering this function are:
++ *
++ *   1. Explicit blocking: mutex, semaphore, waitqueue, etc.
++ *
++ *   2. TIF_NEED_RESCHED flag is checked on interrupt and userspace return
++ *      paths. For example, see arch/x86/entry_64.S.
++ *
++ *      To drive preemption between tasks, the scheduler sets the flag in timer
++ *      interrupt handler scheduler_tick().
++ *
++ *   3. Wakeups don't really cause entry into schedule(). They add a
++ *      task to the run-queue and that's it.
++ *
++ *      Now, if the new task added to the run-queue preempts the current
++ *      task, then the wakeup sets TIF_NEED_RESCHED and schedule() gets
++ *      called on the nearest possible occasion:
++ *
++ *       - If the kernel is preemptible (CONFIG_PREEMPTION=y):
++ *
++ *         - in syscall or exception context, at the next outmost
++ *           preempt_enable(). (this might be as soon as the wake_up()'s
++ *           spin_unlock()!)
++ *
++ *         - in IRQ context, return from interrupt-handler to
++ *           preemptible context
++ *
++ *       - If the kernel is not preemptible (CONFIG_PREEMPTION is not set)
++ *         then at the next:
++ *
++ *          - cond_resched() call
++ *          - explicit schedule() call
++ *          - return from syscall or exception to user-space
++ *          - return from interrupt-handler to user-space
++ *
++ * WARNING: must be called with preemption disabled!
++ */
++static void __sched notrace __schedule(unsigned int sched_mode)
++{
++	struct task_struct *prev, *next;
++	unsigned long *switch_count;
++	unsigned long prev_state;
++	struct rq *rq;
++	int cpu;
++
++	cpu = smp_processor_id();
++	rq = cpu_rq(cpu);
++	prev = rq->curr;
++
++	schedule_debug(prev, !!sched_mode);
++
++	/* by passing sched_feat(HRTICK) checking which Alt schedule FW doesn't support */
++	hrtick_clear(rq);
++
++	local_irq_disable();
++	rcu_note_context_switch(!!sched_mode);
++
++	/*
++	 * Make sure that signal_pending_state()->signal_pending() below
++	 * can't be reordered with __set_current_state(TASK_INTERRUPTIBLE)
++	 * done by the caller to avoid the race with signal_wake_up():
++	 *
++	 * __set_current_state(@state)		signal_wake_up()
++	 * schedule()				  set_tsk_thread_flag(p, TIF_SIGPENDING)
++	 *					  wake_up_state(p, state)
++	 *   LOCK rq->lock			    LOCK p->pi_state
++	 *   smp_mb__after_spinlock()		    smp_mb__after_spinlock()
++	 *     if (signal_pending_state())	    if (p->state & @state)
++	 *
++	 * Also, the membarrier system call requires a full memory barrier
++	 * after coming from user-space, before storing to rq->curr.
++	 */
++	raw_spin_lock(&rq->lock);
++	smp_mb__after_spinlock();
++
++	update_rq_clock(rq);
++
++	switch_count = &prev->nivcsw;
++	/*
++	 * We must load prev->state once (task_struct::state is volatile), such
++	 * that we form a control dependency vs deactivate_task() below.
++	 */
++	prev_state = READ_ONCE(prev->__state);
++	if (!(sched_mode & SM_MASK_PREEMPT) && prev_state) {
++		if (signal_pending_state(prev_state, prev)) {
++			WRITE_ONCE(prev->__state, TASK_RUNNING);
++		} else {
++			prev->sched_contributes_to_load =
++				(prev_state & TASK_UNINTERRUPTIBLE) &&
++				!(prev_state & TASK_NOLOAD) &&
++				!(prev_state & TASK_FROZEN);
++
++			if (prev->sched_contributes_to_load)
++				rq->nr_uninterruptible++;
++
++			/*
++			 * __schedule()			ttwu()
++			 *   prev_state = prev->state;    if (p->on_rq && ...)
++			 *   if (prev_state)		    goto out;
++			 *     p->on_rq = 0;		  smp_acquire__after_ctrl_dep();
++			 *				  p->state = TASK_WAKING
++			 *
++			 * Where __schedule() and ttwu() have matching control dependencies.
++			 *
++			 * After this, schedule() must not care about p->state any more.
++			 */
++			sched_task_deactivate(prev, rq);
++			deactivate_task(prev, rq);
++
++			if (prev->in_iowait) {
++				atomic_inc(&rq->nr_iowait);
++				delayacct_blkio_start();
++			}
++		}
++		switch_count = &prev->nvcsw;
++	}
++
++	check_curr(prev, rq);
++
++	next = choose_next_task(rq, cpu);
++	clear_tsk_need_resched(prev);
++	clear_preempt_need_resched();
++#ifdef CONFIG_SCHED_DEBUG
++	rq->last_seen_need_resched_ns = 0;
++#endif
++
++	if (likely(prev != next)) {
++		next->last_ran = rq->clock_task;
++		rq->last_ts_switch = rq->clock;
++
++		/*printk(KERN_INFO "sched: %px -> %px\n", prev, next);*/
++		rq->nr_switches++;
++		/*
++		 * RCU users of rcu_dereference(rq->curr) may not see
++		 * changes to task_struct made by pick_next_task().
++		 */
++		RCU_INIT_POINTER(rq->curr, next);
++		/*
++		 * The membarrier system call requires each architecture
++		 * to have a full memory barrier after updating
++		 * rq->curr, before returning to user-space.
++		 *
++		 * Here are the schemes providing that barrier on the
++		 * various architectures:
++		 * - mm ? switch_mm() : mmdrop() for x86, s390, sparc, PowerPC.
++		 *   switch_mm() rely on membarrier_arch_switch_mm() on PowerPC.
++		 * - finish_lock_switch() for weakly-ordered
++		 *   architectures where spin_unlock is a full barrier,
++		 * - switch_to() for arm64 (weakly-ordered, spin_unlock
++		 *   is a RELEASE barrier),
++		 */
++		++*switch_count;
++
++		trace_sched_switch(sched_mode & SM_MASK_PREEMPT, prev, next, prev_state);
++
++		/* Also unlocks the rq: */
++		rq = context_switch(rq, prev, next);
++
++		cpu = cpu_of(rq);
++	} else {
++		__balance_callbacks(rq);
++		raw_spin_unlock_irq(&rq->lock);
++	}
++
++#ifdef CONFIG_SCHED_SMT
++	sg_balance(rq, cpu);
++#endif
++}
++
++void __noreturn do_task_dead(void)
++{
++	/* Causes final put_task_struct in finish_task_switch(): */
++	set_special_state(TASK_DEAD);
++
++	/* Tell freezer to ignore us: */
++	current->flags |= PF_NOFREEZE;
++
++	__schedule(SM_NONE);
++	BUG();
++
++	/* Avoid "noreturn function does return" - but don't continue if BUG() is a NOP: */
++	for (;;)
++		cpu_relax();
++}
++
++static inline void sched_submit_work(struct task_struct *tsk)
++{
++	unsigned int task_flags;
++
++	if (task_is_running(tsk))
++		return;
++
++	task_flags = tsk->flags;
++	/*
++	 * If a worker goes to sleep, notify and ask workqueue whether it
++	 * wants to wake up a task to maintain concurrency.
++	 */
++	if (task_flags & (PF_WQ_WORKER | PF_IO_WORKER)) {
++		if (task_flags & PF_WQ_WORKER)
++			wq_worker_sleeping(tsk);
++		else
++			io_wq_worker_sleeping(tsk);
++	}
++
++	/*
++	 * spinlock and rwlock must not flush block requests.  This will
++	 * deadlock if the callback attempts to acquire a lock which is
++	 * already acquired.
++	 */
++	SCHED_WARN_ON(current->__state & TASK_RTLOCK_WAIT);
++
++	/*
++	 * If we are going to sleep and we have plugged IO queued,
++	 * make sure to submit it to avoid deadlocks.
++	 */
++	blk_flush_plug(tsk->plug, true);
++}
++
++static void sched_update_worker(struct task_struct *tsk)
++{
++	if (tsk->flags & (PF_WQ_WORKER | PF_IO_WORKER)) {
++		if (tsk->flags & PF_WQ_WORKER)
++			wq_worker_running(tsk);
++		else
++			io_wq_worker_running(tsk);
++	}
++}
++
++asmlinkage __visible void __sched schedule(void)
++{
++	struct task_struct *tsk = current;
++
++	sched_submit_work(tsk);
++	do {
++		preempt_disable();
++		__schedule(SM_NONE);
++		sched_preempt_enable_no_resched();
++	} while (need_resched());
++	sched_update_worker(tsk);
++}
++EXPORT_SYMBOL(schedule);
++
++/*
++ * synchronize_rcu_tasks() makes sure that no task is stuck in preempted
++ * state (have scheduled out non-voluntarily) by making sure that all
++ * tasks have either left the run queue or have gone into user space.
++ * As idle tasks do not do either, they must not ever be preempted
++ * (schedule out non-voluntarily).
++ *
++ * schedule_idle() is similar to schedule_preempt_disable() except that it
++ * never enables preemption because it does not call sched_submit_work().
++ */
++void __sched schedule_idle(void)
++{
++	/*
++	 * As this skips calling sched_submit_work(), which the idle task does
++	 * regardless because that function is a nop when the task is in a
++	 * TASK_RUNNING state, make sure this isn't used someplace that the
++	 * current task can be in any other state. Note, idle is always in the
++	 * TASK_RUNNING state.
++	 */
++	WARN_ON_ONCE(current->__state);
++	do {
++		__schedule(SM_NONE);
++	} while (need_resched());
++}
++
++#if defined(CONFIG_CONTEXT_TRACKING_USER) && !defined(CONFIG_HAVE_CONTEXT_TRACKING_USER_OFFSTACK)
++asmlinkage __visible void __sched schedule_user(void)
++{
++	/*
++	 * If we come here after a random call to set_need_resched(),
++	 * or we have been woken up remotely but the IPI has not yet arrived,
++	 * we haven't yet exited the RCU idle mode. Do it here manually until
++	 * we find a better solution.
++	 *
++	 * NB: There are buggy callers of this function.  Ideally we
++	 * should warn if prev_state != CONTEXT_USER, but that will trigger
++	 * too frequently to make sense yet.
++	 */
++	enum ctx_state prev_state = exception_enter();
++	schedule();
++	exception_exit(prev_state);
++}
++#endif
++
++/**
++ * schedule_preempt_disabled - called with preemption disabled
++ *
++ * Returns with preemption disabled. Note: preempt_count must be 1
++ */
++void __sched schedule_preempt_disabled(void)
++{
++	sched_preempt_enable_no_resched();
++	schedule();
++	preempt_disable();
++}
++
++#ifdef CONFIG_PREEMPT_RT
++void __sched notrace schedule_rtlock(void)
++{
++	do {
++		preempt_disable();
++		__schedule(SM_RTLOCK_WAIT);
++		sched_preempt_enable_no_resched();
++	} while (need_resched());
++}
++NOKPROBE_SYMBOL(schedule_rtlock);
++#endif
++
++static void __sched notrace preempt_schedule_common(void)
++{
++	do {
++		/*
++		 * Because the function tracer can trace preempt_count_sub()
++		 * and it also uses preempt_enable/disable_notrace(), if
++		 * NEED_RESCHED is set, the preempt_enable_notrace() called
++		 * by the function tracer will call this function again and
++		 * cause infinite recursion.
++		 *
++		 * Preemption must be disabled here before the function
++		 * tracer can trace. Break up preempt_disable() into two
++		 * calls. One to disable preemption without fear of being
++		 * traced. The other to still record the preemption latency,
++		 * which can also be traced by the function tracer.
++		 */
++		preempt_disable_notrace();
++		preempt_latency_start(1);
++		__schedule(SM_PREEMPT);
++		preempt_latency_stop(1);
++		preempt_enable_no_resched_notrace();
++
++		/*
++		 * Check again in case we missed a preemption opportunity
++		 * between schedule and now.
++		 */
++	} while (need_resched());
++}
++
++#ifdef CONFIG_PREEMPTION
++/*
++ * This is the entry point to schedule() from in-kernel preemption
++ * off of preempt_enable.
++ */
++asmlinkage __visible void __sched notrace preempt_schedule(void)
++{
++	/*
++	 * If there is a non-zero preempt_count or interrupts are disabled,
++	 * we do not want to preempt the current task. Just return..
++	 */
++	if (likely(!preemptible()))
++		return;
++
++	preempt_schedule_common();
++}
++NOKPROBE_SYMBOL(preempt_schedule);
++EXPORT_SYMBOL(preempt_schedule);
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
++#ifndef preempt_schedule_dynamic_enabled
++#define preempt_schedule_dynamic_enabled	preempt_schedule
++#define preempt_schedule_dynamic_disabled	NULL
++#endif
++DEFINE_STATIC_CALL(preempt_schedule, preempt_schedule_dynamic_enabled);
++EXPORT_STATIC_CALL_TRAMP(preempt_schedule);
++#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
++static DEFINE_STATIC_KEY_TRUE(sk_dynamic_preempt_schedule);
++void __sched notrace dynamic_preempt_schedule(void)
++{
++	if (!static_branch_unlikely(&sk_dynamic_preempt_schedule))
++		return;
++	preempt_schedule();
++}
++NOKPROBE_SYMBOL(dynamic_preempt_schedule);
++EXPORT_SYMBOL(dynamic_preempt_schedule);
++#endif
++#endif
++
++/**
++ * preempt_schedule_notrace - preempt_schedule called by tracing
++ *
++ * The tracing infrastructure uses preempt_enable_notrace to prevent
++ * recursion and tracing preempt enabling caused by the tracing
++ * infrastructure itself. But as tracing can happen in areas coming
++ * from userspace or just about to enter userspace, a preempt enable
++ * can occur before user_exit() is called. This will cause the scheduler
++ * to be called when the system is still in usermode.
++ *
++ * To prevent this, the preempt_enable_notrace will use this function
++ * instead of preempt_schedule() to exit user context if needed before
++ * calling the scheduler.
++ */
++asmlinkage __visible void __sched notrace preempt_schedule_notrace(void)
++{
++	enum ctx_state prev_ctx;
++
++	if (likely(!preemptible()))
++		return;
++
++	do {
++		/*
++		 * Because the function tracer can trace preempt_count_sub()
++		 * and it also uses preempt_enable/disable_notrace(), if
++		 * NEED_RESCHED is set, the preempt_enable_notrace() called
++		 * by the function tracer will call this function again and
++		 * cause infinite recursion.
++		 *
++		 * Preemption must be disabled here before the function
++		 * tracer can trace. Break up preempt_disable() into two
++		 * calls. One to disable preemption without fear of being
++		 * traced. The other to still record the preemption latency,
++		 * which can also be traced by the function tracer.
++		 */
++		preempt_disable_notrace();
++		preempt_latency_start(1);
++		/*
++		 * Needs preempt disabled in case user_exit() is traced
++		 * and the tracer calls preempt_enable_notrace() causing
++		 * an infinite recursion.
++		 */
++		prev_ctx = exception_enter();
++		__schedule(SM_PREEMPT);
++		exception_exit(prev_ctx);
++
++		preempt_latency_stop(1);
++		preempt_enable_no_resched_notrace();
++	} while (need_resched());
++}
++EXPORT_SYMBOL_GPL(preempt_schedule_notrace);
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
++#ifndef preempt_schedule_notrace_dynamic_enabled
++#define preempt_schedule_notrace_dynamic_enabled	preempt_schedule_notrace
++#define preempt_schedule_notrace_dynamic_disabled	NULL
++#endif
++DEFINE_STATIC_CALL(preempt_schedule_notrace, preempt_schedule_notrace_dynamic_enabled);
++EXPORT_STATIC_CALL_TRAMP(preempt_schedule_notrace);
++#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
++static DEFINE_STATIC_KEY_TRUE(sk_dynamic_preempt_schedule_notrace);
++void __sched notrace dynamic_preempt_schedule_notrace(void)
++{
++	if (!static_branch_unlikely(&sk_dynamic_preempt_schedule_notrace))
++		return;
++	preempt_schedule_notrace();
++}
++NOKPROBE_SYMBOL(dynamic_preempt_schedule_notrace);
++EXPORT_SYMBOL(dynamic_preempt_schedule_notrace);
++#endif
++#endif
++
++#endif /* CONFIG_PREEMPTION */
++
++/*
++ * This is the entry point to schedule() from kernel preemption
++ * off of irq context.
++ * Note, that this is called and return with irqs disabled. This will
++ * protect us against recursive calling from irq.
++ */
++asmlinkage __visible void __sched preempt_schedule_irq(void)
++{
++	enum ctx_state prev_state;
++
++	/* Catch callers which need to be fixed */
++	BUG_ON(preempt_count() || !irqs_disabled());
++
++	prev_state = exception_enter();
++
++	do {
++		preempt_disable();
++		local_irq_enable();
++		__schedule(SM_PREEMPT);
++		local_irq_disable();
++		sched_preempt_enable_no_resched();
++	} while (need_resched());
++
++	exception_exit(prev_state);
++}
++
++int default_wake_function(wait_queue_entry_t *curr, unsigned mode, int wake_flags,
++			  void *key)
++{
++	WARN_ON_ONCE(IS_ENABLED(CONFIG_SCHED_DEBUG) && wake_flags & ~WF_SYNC);
++	return try_to_wake_up(curr->private, mode, wake_flags);
++}
++EXPORT_SYMBOL(default_wake_function);
++
++static inline void check_task_changed(struct task_struct *p, struct rq *rq)
++{
++	/* Trigger resched if task sched_prio has been modified. */
++	if (task_on_rq_queued(p)) {
++		int idx;
++
++		update_rq_clock(rq);
++		idx = task_sched_prio_idx(p, rq);
++		if (idx != p->sq_idx) {
++			requeue_task(p, rq, idx);
++			check_preempt_curr(rq);
++		}
++	}
++}
++
++static void __setscheduler_prio(struct task_struct *p, int prio)
++{
++	p->prio = prio;
++}
++
++#ifdef CONFIG_RT_MUTEXES
++
++static inline int __rt_effective_prio(struct task_struct *pi_task, int prio)
++{
++	if (pi_task)
++		prio = min(prio, pi_task->prio);
++
++	return prio;
++}
++
++static inline int rt_effective_prio(struct task_struct *p, int prio)
++{
++	struct task_struct *pi_task = rt_mutex_get_top_task(p);
++
++	return __rt_effective_prio(pi_task, prio);
++}
++
++/*
++ * rt_mutex_setprio - set the current priority of a task
++ * @p: task to boost
++ * @pi_task: donor task
++ *
++ * This function changes the 'effective' priority of a task. It does
++ * not touch ->normal_prio like __setscheduler().
++ *
++ * Used by the rt_mutex code to implement priority inheritance
++ * logic. Call site only calls if the priority of the task changed.
++ */
++void rt_mutex_setprio(struct task_struct *p, struct task_struct *pi_task)
++{
++	int prio;
++	struct rq *rq;
++	raw_spinlock_t *lock;
++
++	/* XXX used to be waiter->prio, not waiter->task->prio */
++	prio = __rt_effective_prio(pi_task, p->normal_prio);
++
++	/*
++	 * If nothing changed; bail early.
++	 */
++	if (p->pi_top_task == pi_task && prio == p->prio)
++		return;
++
++	rq = __task_access_lock(p, &lock);
++	/*
++	 * Set under pi_lock && rq->lock, such that the value can be used under
++	 * either lock.
++	 *
++	 * Note that there is loads of tricky to make this pointer cache work
++	 * right. rt_mutex_slowunlock()+rt_mutex_postunlock() work together to
++	 * ensure a task is de-boosted (pi_task is set to NULL) before the
++	 * task is allowed to run again (and can exit). This ensures the pointer
++	 * points to a blocked task -- which guarantees the task is present.
++	 */
++	p->pi_top_task = pi_task;
++
++	/*
++	 * For FIFO/RR we only need to set prio, if that matches we're done.
++	 */
++	if (prio == p->prio)
++		goto out_unlock;
++
++	/*
++	 * Idle task boosting is a nono in general. There is one
++	 * exception, when PREEMPT_RT and NOHZ is active:
++	 *
++	 * The idle task calls get_next_timer_interrupt() and holds
++	 * the timer wheel base->lock on the CPU and another CPU wants
++	 * to access the timer (probably to cancel it). We can safely
++	 * ignore the boosting request, as the idle CPU runs this code
++	 * with interrupts disabled and will complete the lock
++	 * protected section without being interrupted. So there is no
++	 * real need to boost.
++	 */
++	if (unlikely(p == rq->idle)) {
++		WARN_ON(p != rq->curr);
++		WARN_ON(p->pi_blocked_on);
++		goto out_unlock;
++	}
++
++	trace_sched_pi_setprio(p, pi_task);
++
++	__setscheduler_prio(p, prio);
++
++	check_task_changed(p, rq);
++out_unlock:
++	/* Avoid rq from going away on us: */
++	preempt_disable();
++
++	__balance_callbacks(rq);
++	__task_access_unlock(p, lock);
++
++	preempt_enable();
++}
++#else
++static inline int rt_effective_prio(struct task_struct *p, int prio)
++{
++	return prio;
++}
++#endif
++
++void set_user_nice(struct task_struct *p, long nice)
++{
++	unsigned long flags;
++	struct rq *rq;
++	raw_spinlock_t *lock;
++
++	if (task_nice(p) == nice || nice < MIN_NICE || nice > MAX_NICE)
++		return;
++	/*
++	 * We have to be careful, if called from sys_setpriority(),
++	 * the task might be in the middle of scheduling on another CPU.
++	 */
++	raw_spin_lock_irqsave(&p->pi_lock, flags);
++	rq = __task_access_lock(p, &lock);
++
++	p->static_prio = NICE_TO_PRIO(nice);
++	/*
++	 * The RT priorities are set via sched_setscheduler(), but we still
++	 * allow the 'normal' nice value to be set - but as expected
++	 * it won't have any effect on scheduling until the task is
++	 * not SCHED_NORMAL/SCHED_BATCH:
++	 */
++	if (task_has_rt_policy(p))
++		goto out_unlock;
++
++	p->prio = effective_prio(p);
++
++	check_task_changed(p, rq);
++out_unlock:
++	__task_access_unlock(p, lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++}
++EXPORT_SYMBOL(set_user_nice);
++
++/*
++ * is_nice_reduction - check if nice value is an actual reduction
++ *
++ * Similar to can_nice() but does not perform a capability check.
++ *
++ * @p: task
++ * @nice: nice value
++ */
++static bool is_nice_reduction(const struct task_struct *p, const int nice)
++{
++	/* Convert nice value [19,-20] to rlimit style value [1,40]: */
++	int nice_rlim = nice_to_rlimit(nice);
++
++	return (nice_rlim <= task_rlimit(p, RLIMIT_NICE));
++}
++
++/*
++ * can_nice - check if a task can reduce its nice value
++ * @p: task
++ * @nice: nice value
++ */
++int can_nice(const struct task_struct *p, const int nice)
++{
++	return is_nice_reduction(p, nice) || capable(CAP_SYS_NICE);
++}
++
++#ifdef __ARCH_WANT_SYS_NICE
++
++/*
++ * sys_nice - change the priority of the current process.
++ * @increment: priority increment
++ *
++ * sys_setpriority is a more generic, but much slower function that
++ * does similar things.
++ */
++SYSCALL_DEFINE1(nice, int, increment)
++{
++	long nice, retval;
++
++	/*
++	 * Setpriority might change our priority at the same moment.
++	 * We don't have to worry. Conceptually one call occurs first
++	 * and we have a single winner.
++	 */
++
++	increment = clamp(increment, -NICE_WIDTH, NICE_WIDTH);
++	nice = task_nice(current) + increment;
++
++	nice = clamp_val(nice, MIN_NICE, MAX_NICE);
++	if (increment < 0 && !can_nice(current, nice))
++		return -EPERM;
++
++	retval = security_task_setnice(current, nice);
++	if (retval)
++		return retval;
++
++	set_user_nice(current, nice);
++	return 0;
++}
++
++#endif
++
++/**
++ * task_prio - return the priority value of a given task.
++ * @p: the task in question.
++ *
++ * Return: The priority value as seen by users in /proc.
++ *
++ * sched policy         return value   kernel prio    user prio/nice
++ *
++ * (BMQ)normal, batch, idle[0 ... 53]  [100 ... 139]          0/[-20 ... 19]/[-7 ... 7]
++ * (PDS)normal, batch, idle[0 ... 39]            100          0/[-20 ... 19]
++ * fifo, rr             [-1 ... -100]     [99 ... 0]  [0 ... 99]
++ */
++int task_prio(const struct task_struct *p)
++{
++	return (p->prio < MAX_RT_PRIO) ? p->prio - MAX_RT_PRIO :
++		task_sched_prio_normal(p, task_rq(p));
++}
++
++/**
++ * idle_cpu - is a given CPU idle currently?
++ * @cpu: the processor in question.
++ *
++ * Return: 1 if the CPU is currently idle. 0 otherwise.
++ */
++int idle_cpu(int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++
++	if (rq->curr != rq->idle)
++		return 0;
++
++	if (rq->nr_running)
++		return 0;
++
++#ifdef CONFIG_SMP
++	if (rq->ttwu_pending)
++		return 0;
++#endif
++
++	return 1;
++}
++
++/**
++ * idle_task - return the idle task for a given CPU.
++ * @cpu: the processor in question.
++ *
++ * Return: The idle task for the cpu @cpu.
++ */
++struct task_struct *idle_task(int cpu)
++{
++	return cpu_rq(cpu)->idle;
++}
++
++/**
++ * find_process_by_pid - find a process with a matching PID value.
++ * @pid: the pid in question.
++ *
++ * The task of @pid, if found. %NULL otherwise.
++ */
++static inline struct task_struct *find_process_by_pid(pid_t pid)
++{
++	return pid ? find_task_by_vpid(pid) : current;
++}
++
++/*
++ * sched_setparam() passes in -1 for its policy, to let the functions
++ * it calls know not to change it.
++ */
++#define SETPARAM_POLICY -1
++
++static void __setscheduler_params(struct task_struct *p,
++		const struct sched_attr *attr)
++{
++	int policy = attr->sched_policy;
++
++	if (policy == SETPARAM_POLICY)
++		policy = p->policy;
++
++	p->policy = policy;
++
++	/*
++	 * allow normal nice value to be set, but will not have any
++	 * effect on scheduling until the task not SCHED_NORMAL/
++	 * SCHED_BATCH
++	 */
++	p->static_prio = NICE_TO_PRIO(attr->sched_nice);
++
++	/*
++	 * __sched_setscheduler() ensures attr->sched_priority == 0 when
++	 * !rt_policy. Always setting this ensures that things like
++	 * getparam()/getattr() don't report silly values for !rt tasks.
++	 */
++	p->rt_priority = attr->sched_priority;
++	p->normal_prio = normal_prio(p);
++}
++
++/*
++ * check the target process has a UID that matches the current process's
++ */
++static bool check_same_owner(struct task_struct *p)
++{
++	const struct cred *cred = current_cred(), *pcred;
++	bool match;
++
++	rcu_read_lock();
++	pcred = __task_cred(p);
++	match = (uid_eq(cred->euid, pcred->euid) ||
++		 uid_eq(cred->euid, pcred->uid));
++	rcu_read_unlock();
++	return match;
++}
++
++/*
++ * Allow unprivileged RT tasks to decrease priority.
++ * Only issue a capable test if needed and only once to avoid an audit
++ * event on permitted non-privileged operations:
++ */
++static int user_check_sched_setscheduler(struct task_struct *p,
++					 const struct sched_attr *attr,
++					 int policy, int reset_on_fork)
++{
++	if (rt_policy(policy)) {
++		unsigned long rlim_rtprio = task_rlimit(p, RLIMIT_RTPRIO);
++
++		/* Can't set/change the rt policy: */
++		if (policy != p->policy && !rlim_rtprio)
++			goto req_priv;
++
++		/* Can't increase priority: */
++		if (attr->sched_priority > p->rt_priority &&
++		    attr->sched_priority > rlim_rtprio)
++			goto req_priv;
++	}
++
++	/* Can't change other user's priorities: */
++	if (!check_same_owner(p))
++		goto req_priv;
++
++	/* Normal users shall not reset the sched_reset_on_fork flag: */
++	if (p->sched_reset_on_fork && !reset_on_fork)
++		goto req_priv;
++
++	return 0;
++
++req_priv:
++	if (!capable(CAP_SYS_NICE))
++		return -EPERM;
++
++	return 0;
++}
++
++static int __sched_setscheduler(struct task_struct *p,
++				const struct sched_attr *attr,
++				bool user, bool pi)
++{
++	const struct sched_attr dl_squash_attr = {
++		.size		= sizeof(struct sched_attr),
++		.sched_policy	= SCHED_FIFO,
++		.sched_nice	= 0,
++		.sched_priority = 99,
++	};
++	int oldpolicy = -1, policy = attr->sched_policy;
++	int retval, newprio;
++	struct balance_callback *head;
++	unsigned long flags;
++	struct rq *rq;
++	int reset_on_fork;
++	raw_spinlock_t *lock;
++
++	/* The pi code expects interrupts enabled */
++	BUG_ON(pi && in_interrupt());
++
++	/*
++	 * Alt schedule FW supports SCHED_DEADLINE by squash it as prio 0 SCHED_FIFO
++	 */
++	if (unlikely(SCHED_DEADLINE == policy)) {
++		attr = &dl_squash_attr;
++		policy = attr->sched_policy;
++	}
++recheck:
++	/* Double check policy once rq lock held */
++	if (policy < 0) {
++		reset_on_fork = p->sched_reset_on_fork;
++		policy = oldpolicy = p->policy;
++	} else {
++		reset_on_fork = !!(attr->sched_flags & SCHED_RESET_ON_FORK);
++
++		if (policy > SCHED_IDLE)
++			return -EINVAL;
++	}
++
++	if (attr->sched_flags & ~(SCHED_FLAG_ALL))
++		return -EINVAL;
++
++	/*
++	 * Valid priorities for SCHED_FIFO and SCHED_RR are
++	 * 1..MAX_RT_PRIO-1, valid priority for SCHED_NORMAL and
++	 * SCHED_BATCH and SCHED_IDLE is 0.
++	 */
++	if (attr->sched_priority < 0 ||
++	    (p->mm && attr->sched_priority > MAX_RT_PRIO - 1) ||
++	    (!p->mm && attr->sched_priority > MAX_RT_PRIO - 1))
++		return -EINVAL;
++	if ((SCHED_RR == policy || SCHED_FIFO == policy) !=
++	    (attr->sched_priority != 0))
++		return -EINVAL;
++
++	if (user) {
++		retval = user_check_sched_setscheduler(p, attr, policy, reset_on_fork);
++		if (retval)
++			return retval;
++
++		retval = security_task_setscheduler(p);
++		if (retval)
++			return retval;
++	}
++
++	if (pi)
++		cpuset_read_lock();
++
++	/*
++	 * Make sure no PI-waiters arrive (or leave) while we are
++	 * changing the priority of the task:
++	 */
++	raw_spin_lock_irqsave(&p->pi_lock, flags);
++
++	/*
++	 * To be able to change p->policy safely, task_access_lock()
++	 * must be called.
++	 * IF use task_access_lock() here:
++	 * For the task p which is not running, reading rq->stop is
++	 * racy but acceptable as ->stop doesn't change much.
++	 * An enhancemnet can be made to read rq->stop saftly.
++	 */
++	rq = __task_access_lock(p, &lock);
++
++	/*
++	 * Changing the policy of the stop threads its a very bad idea
++	 */
++	if (p == rq->stop) {
++		retval = -EINVAL;
++		goto unlock;
++	}
++
++	/*
++	 * If not changing anything there's no need to proceed further:
++	 */
++	if (unlikely(policy == p->policy)) {
++		if (rt_policy(policy) && attr->sched_priority != p->rt_priority)
++			goto change;
++		if (!rt_policy(policy) &&
++		    NICE_TO_PRIO(attr->sched_nice) != p->static_prio)
++			goto change;
++
++		p->sched_reset_on_fork = reset_on_fork;
++		retval = 0;
++		goto unlock;
++	}
++change:
++
++	/* Re-check policy now with rq lock held */
++	if (unlikely(oldpolicy != -1 && oldpolicy != p->policy)) {
++		policy = oldpolicy = -1;
++		__task_access_unlock(p, lock);
++		raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++		if (pi)
++			cpuset_read_unlock();
++		goto recheck;
++	}
++
++	p->sched_reset_on_fork = reset_on_fork;
++
++	newprio = __normal_prio(policy, attr->sched_priority, NICE_TO_PRIO(attr->sched_nice));
++	if (pi) {
++		/*
++		 * Take priority boosted tasks into account. If the new
++		 * effective priority is unchanged, we just store the new
++		 * normal parameters and do not touch the scheduler class and
++		 * the runqueue. This will be done when the task deboost
++		 * itself.
++		 */
++		newprio = rt_effective_prio(p, newprio);
++	}
++
++	if (!(attr->sched_flags & SCHED_FLAG_KEEP_PARAMS)) {
++		__setscheduler_params(p, attr);
++		__setscheduler_prio(p, newprio);
++	}
++
++	check_task_changed(p, rq);
++
++	/* Avoid rq from going away on us: */
++	preempt_disable();
++	head = splice_balance_callbacks(rq);
++	__task_access_unlock(p, lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++
++	if (pi) {
++		cpuset_read_unlock();
++		rt_mutex_adjust_pi(p);
++	}
++
++	/* Run balance callbacks after we've adjusted the PI chain: */
++	balance_callbacks(rq, head);
++	preempt_enable();
++
++	return 0;
++
++unlock:
++	__task_access_unlock(p, lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++	if (pi)
++		cpuset_read_unlock();
++	return retval;
++}
++
++static int _sched_setscheduler(struct task_struct *p, int policy,
++			       const struct sched_param *param, bool check)
++{
++	struct sched_attr attr = {
++		.sched_policy   = policy,
++		.sched_priority = param->sched_priority,
++		.sched_nice     = PRIO_TO_NICE(p->static_prio),
++	};
++
++	/* Fixup the legacy SCHED_RESET_ON_FORK hack. */
++	if ((policy != SETPARAM_POLICY) && (policy & SCHED_RESET_ON_FORK)) {
++		attr.sched_flags |= SCHED_FLAG_RESET_ON_FORK;
++		policy &= ~SCHED_RESET_ON_FORK;
++		attr.sched_policy = policy;
++	}
++
++	return __sched_setscheduler(p, &attr, check, true);
++}
++
++/**
++ * sched_setscheduler - change the scheduling policy and/or RT priority of a thread.
++ * @p: the task in question.
++ * @policy: new policy.
++ * @param: structure containing the new RT priority.
++ *
++ * Use sched_set_fifo(), read its comment.
++ *
++ * Return: 0 on success. An error code otherwise.
++ *
++ * NOTE that the task may be already dead.
++ */
++int sched_setscheduler(struct task_struct *p, int policy,
++		       const struct sched_param *param)
++{
++	return _sched_setscheduler(p, policy, param, true);
++}
++
++int sched_setattr(struct task_struct *p, const struct sched_attr *attr)
++{
++	return __sched_setscheduler(p, attr, true, true);
++}
++
++int sched_setattr_nocheck(struct task_struct *p, const struct sched_attr *attr)
++{
++	return __sched_setscheduler(p, attr, false, true);
++}
++EXPORT_SYMBOL_GPL(sched_setattr_nocheck);
++
++/**
++ * sched_setscheduler_nocheck - change the scheduling policy and/or RT priority of a thread from kernelspace.
++ * @p: the task in question.
++ * @policy: new policy.
++ * @param: structure containing the new RT priority.
++ *
++ * Just like sched_setscheduler, only don't bother checking if the
++ * current context has permission.  For example, this is needed in
++ * stop_machine(): we create temporary high priority worker threads,
++ * but our caller might not have that capability.
++ *
++ * Return: 0 on success. An error code otherwise.
++ */
++int sched_setscheduler_nocheck(struct task_struct *p, int policy,
++			       const struct sched_param *param)
++{
++	return _sched_setscheduler(p, policy, param, false);
++}
++
++/*
++ * SCHED_FIFO is a broken scheduler model; that is, it is fundamentally
++ * incapable of resource management, which is the one thing an OS really should
++ * be doing.
++ *
++ * This is of course the reason it is limited to privileged users only.
++ *
++ * Worse still; it is fundamentally impossible to compose static priority
++ * workloads. You cannot take two correctly working static prio workloads
++ * and smash them together and still expect them to work.
++ *
++ * For this reason 'all' FIFO tasks the kernel creates are basically at:
++ *
++ *   MAX_RT_PRIO / 2
++ *
++ * The administrator _MUST_ configure the system, the kernel simply doesn't
++ * know enough information to make a sensible choice.
++ */
++void sched_set_fifo(struct task_struct *p)
++{
++	struct sched_param sp = { .sched_priority = MAX_RT_PRIO / 2 };
++	WARN_ON_ONCE(sched_setscheduler_nocheck(p, SCHED_FIFO, &sp) != 0);
++}
++EXPORT_SYMBOL_GPL(sched_set_fifo);
++
++/*
++ * For when you don't much care about FIFO, but want to be above SCHED_NORMAL.
++ */
++void sched_set_fifo_low(struct task_struct *p)
++{
++	struct sched_param sp = { .sched_priority = 1 };
++	WARN_ON_ONCE(sched_setscheduler_nocheck(p, SCHED_FIFO, &sp) != 0);
++}
++EXPORT_SYMBOL_GPL(sched_set_fifo_low);
++
++void sched_set_normal(struct task_struct *p, int nice)
++{
++	struct sched_attr attr = {
++		.sched_policy = SCHED_NORMAL,
++		.sched_nice = nice,
++	};
++	WARN_ON_ONCE(sched_setattr_nocheck(p, &attr) != 0);
++}
++EXPORT_SYMBOL_GPL(sched_set_normal);
++
++static int
++do_sched_setscheduler(pid_t pid, int policy, struct sched_param __user *param)
++{
++	struct sched_param lparam;
++	struct task_struct *p;
++	int retval;
++
++	if (!param || pid < 0)
++		return -EINVAL;
++	if (copy_from_user(&lparam, param, sizeof(struct sched_param)))
++		return -EFAULT;
++
++	rcu_read_lock();
++	retval = -ESRCH;
++	p = find_process_by_pid(pid);
++	if (likely(p))
++		get_task_struct(p);
++	rcu_read_unlock();
++
++	if (likely(p)) {
++		retval = sched_setscheduler(p, policy, &lparam);
++		put_task_struct(p);
++	}
++
++	return retval;
++}
++
++/*
++ * Mimics kernel/events/core.c perf_copy_attr().
++ */
++static int sched_copy_attr(struct sched_attr __user *uattr, struct sched_attr *attr)
++{
++	u32 size;
++	int ret;
++
++	/* Zero the full structure, so that a short copy will be nice: */
++	memset(attr, 0, sizeof(*attr));
++
++	ret = get_user(size, &uattr->size);
++	if (ret)
++		return ret;
++
++	/* ABI compatibility quirk: */
++	if (!size)
++		size = SCHED_ATTR_SIZE_VER0;
++
++	if (size < SCHED_ATTR_SIZE_VER0 || size > PAGE_SIZE)
++		goto err_size;
++
++	ret = copy_struct_from_user(attr, sizeof(*attr), uattr, size);
++	if (ret) {
++		if (ret == -E2BIG)
++			goto err_size;
++		return ret;
++	}
++
++	/*
++	 * XXX: Do we want to be lenient like existing syscalls; or do we want
++	 * to be strict and return an error on out-of-bounds values?
++	 */
++	attr->sched_nice = clamp(attr->sched_nice, -20, 19);
++
++	/* sched/core.c uses zero here but we already know ret is zero */
++	return 0;
++
++err_size:
++	put_user(sizeof(*attr), &uattr->size);
++	return -E2BIG;
++}
++
++/**
++ * sys_sched_setscheduler - set/change the scheduler policy and RT priority
++ * @pid: the pid in question.
++ * @policy: new policy.
++ *
++ * Return: 0 on success. An error code otherwise.
++ * @param: structure containing the new RT priority.
++ */
++SYSCALL_DEFINE3(sched_setscheduler, pid_t, pid, int, policy, struct sched_param __user *, param)
++{
++	if (policy < 0)
++		return -EINVAL;
++
++	return do_sched_setscheduler(pid, policy, param);
++}
++
++/**
++ * sys_sched_setparam - set/change the RT priority of a thread
++ * @pid: the pid in question.
++ * @param: structure containing the new RT priority.
++ *
++ * Return: 0 on success. An error code otherwise.
++ */
++SYSCALL_DEFINE2(sched_setparam, pid_t, pid, struct sched_param __user *, param)
++{
++	return do_sched_setscheduler(pid, SETPARAM_POLICY, param);
++}
++
++/**
++ * sys_sched_setattr - same as above, but with extended sched_attr
++ * @pid: the pid in question.
++ * @uattr: structure containing the extended parameters.
++ */
++SYSCALL_DEFINE3(sched_setattr, pid_t, pid, struct sched_attr __user *, uattr,
++			       unsigned int, flags)
++{
++	struct sched_attr attr;
++	struct task_struct *p;
++	int retval;
++
++	if (!uattr || pid < 0 || flags)
++		return -EINVAL;
++
++	retval = sched_copy_attr(uattr, &attr);
++	if (retval)
++		return retval;
++
++	if ((int)attr.sched_policy < 0)
++		return -EINVAL;
++
++	rcu_read_lock();
++	retval = -ESRCH;
++	p = find_process_by_pid(pid);
++	if (likely(p))
++		get_task_struct(p);
++	rcu_read_unlock();
++
++	if (likely(p)) {
++		retval = sched_setattr(p, &attr);
++		put_task_struct(p);
++	}
++
++	return retval;
++}
++
++/**
++ * sys_sched_getscheduler - get the policy (scheduling class) of a thread
++ * @pid: the pid in question.
++ *
++ * Return: On success, the policy of the thread. Otherwise, a negative error
++ * code.
++ */
++SYSCALL_DEFINE1(sched_getscheduler, pid_t, pid)
++{
++	struct task_struct *p;
++	int retval = -EINVAL;
++
++	if (pid < 0)
++		goto out_nounlock;
++
++	retval = -ESRCH;
++	rcu_read_lock();
++	p = find_process_by_pid(pid);
++	if (p) {
++		retval = security_task_getscheduler(p);
++		if (!retval)
++			retval = p->policy;
++	}
++	rcu_read_unlock();
++
++out_nounlock:
++	return retval;
++}
++
++/**
++ * sys_sched_getscheduler - get the RT priority of a thread
++ * @pid: the pid in question.
++ * @param: structure containing the RT priority.
++ *
++ * Return: On success, 0 and the RT priority is in @param. Otherwise, an error
++ * code.
++ */
++SYSCALL_DEFINE2(sched_getparam, pid_t, pid, struct sched_param __user *, param)
++{
++	struct sched_param lp = { .sched_priority = 0 };
++	struct task_struct *p;
++	int retval = -EINVAL;
++
++	if (!param || pid < 0)
++		goto out_nounlock;
++
++	rcu_read_lock();
++	p = find_process_by_pid(pid);
++	retval = -ESRCH;
++	if (!p)
++		goto out_unlock;
++
++	retval = security_task_getscheduler(p);
++	if (retval)
++		goto out_unlock;
++
++	if (task_has_rt_policy(p))
++		lp.sched_priority = p->rt_priority;
++	rcu_read_unlock();
++
++	/*
++	 * This one might sleep, we cannot do it with a spinlock held ...
++	 */
++	retval = copy_to_user(param, &lp, sizeof(*param)) ? -EFAULT : 0;
++
++out_nounlock:
++	return retval;
++
++out_unlock:
++	rcu_read_unlock();
++	return retval;
++}
++
++/*
++ * Copy the kernel size attribute structure (which might be larger
++ * than what user-space knows about) to user-space.
++ *
++ * Note that all cases are valid: user-space buffer can be larger or
++ * smaller than the kernel-space buffer. The usual case is that both
++ * have the same size.
++ */
++static int
++sched_attr_copy_to_user(struct sched_attr __user *uattr,
++			struct sched_attr *kattr,
++			unsigned int usize)
++{
++	unsigned int ksize = sizeof(*kattr);
++
++	if (!access_ok(uattr, usize))
++		return -EFAULT;
++
++	/*
++	 * sched_getattr() ABI forwards and backwards compatibility:
++	 *
++	 * If usize == ksize then we just copy everything to user-space and all is good.
++	 *
++	 * If usize < ksize then we only copy as much as user-space has space for,
++	 * this keeps ABI compatibility as well. We skip the rest.
++	 *
++	 * If usize > ksize then user-space is using a newer version of the ABI,
++	 * which part the kernel doesn't know about. Just ignore it - tooling can
++	 * detect the kernel's knowledge of attributes from the attr->size value
++	 * which is set to ksize in this case.
++	 */
++	kattr->size = min(usize, ksize);
++
++	if (copy_to_user(uattr, kattr, kattr->size))
++		return -EFAULT;
++
++	return 0;
++}
++
++/**
++ * sys_sched_getattr - similar to sched_getparam, but with sched_attr
++ * @pid: the pid in question.
++ * @uattr: structure containing the extended parameters.
++ * @usize: sizeof(attr) for fwd/bwd comp.
++ * @flags: for future extension.
++ */
++SYSCALL_DEFINE4(sched_getattr, pid_t, pid, struct sched_attr __user *, uattr,
++		unsigned int, usize, unsigned int, flags)
++{
++	struct sched_attr kattr = { };
++	struct task_struct *p;
++	int retval;
++
++	if (!uattr || pid < 0 || usize > PAGE_SIZE ||
++	    usize < SCHED_ATTR_SIZE_VER0 || flags)
++		return -EINVAL;
++
++	rcu_read_lock();
++	p = find_process_by_pid(pid);
++	retval = -ESRCH;
++	if (!p)
++		goto out_unlock;
++
++	retval = security_task_getscheduler(p);
++	if (retval)
++		goto out_unlock;
++
++	kattr.sched_policy = p->policy;
++	if (p->sched_reset_on_fork)
++		kattr.sched_flags |= SCHED_FLAG_RESET_ON_FORK;
++	if (task_has_rt_policy(p))
++		kattr.sched_priority = p->rt_priority;
++	else
++		kattr.sched_nice = task_nice(p);
++	kattr.sched_flags &= SCHED_FLAG_ALL;
++
++#ifdef CONFIG_UCLAMP_TASK
++	kattr.sched_util_min = p->uclamp_req[UCLAMP_MIN].value;
++	kattr.sched_util_max = p->uclamp_req[UCLAMP_MAX].value;
++#endif
++
++	rcu_read_unlock();
++
++	return sched_attr_copy_to_user(uattr, &kattr, usize);
++
++out_unlock:
++	rcu_read_unlock();
++	return retval;
++}
++
++#ifdef CONFIG_SMP
++int dl_task_check_affinity(struct task_struct *p, const struct cpumask *mask)
++{
++	return 0;
++}
++#endif
++
++static int
++__sched_setaffinity(struct task_struct *p, struct affinity_context *ctx)
++{
++	int retval;
++	cpumask_var_t cpus_allowed, new_mask;
++
++	if (!alloc_cpumask_var(&cpus_allowed, GFP_KERNEL))
++		return -ENOMEM;
++
++	if (!alloc_cpumask_var(&new_mask, GFP_KERNEL)) {
++		retval = -ENOMEM;
++		goto out_free_cpus_allowed;
++	}
++
++	cpuset_cpus_allowed(p, cpus_allowed);
++	cpumask_and(new_mask, ctx->new_mask, cpus_allowed);
++
++	ctx->new_mask = new_mask;
++	ctx->flags |= SCA_CHECK;
++
++	retval = __set_cpus_allowed_ptr(p, ctx);
++	if (retval)
++		goto out_free_new_mask;
++
++	cpuset_cpus_allowed(p, cpus_allowed);
++	if (!cpumask_subset(new_mask, cpus_allowed)) {
++		/*
++		 * We must have raced with a concurrent cpuset
++		 * update. Just reset the cpus_allowed to the
++		 * cpuset's cpus_allowed
++		 */
++		cpumask_copy(new_mask, cpus_allowed);
++
++		/*
++		 * If SCA_USER is set, a 2nd call to __set_cpus_allowed_ptr()
++		 * will restore the previous user_cpus_ptr value.
++		 *
++		 * In the unlikely event a previous user_cpus_ptr exists,
++		 * we need to further restrict the mask to what is allowed
++		 * by that old user_cpus_ptr.
++		 */
++		if (unlikely((ctx->flags & SCA_USER) && ctx->user_mask)) {
++			bool empty = !cpumask_and(new_mask, new_mask,
++						  ctx->user_mask);
++
++			if (WARN_ON_ONCE(empty))
++				cpumask_copy(new_mask, cpus_allowed);
++		}
++		__set_cpus_allowed_ptr(p, ctx);
++		retval = -EINVAL;
++	}
++
++out_free_new_mask:
++	free_cpumask_var(new_mask);
++out_free_cpus_allowed:
++	free_cpumask_var(cpus_allowed);
++	return retval;
++}
++
++long sched_setaffinity(pid_t pid, const struct cpumask *in_mask)
++{
++	struct affinity_context ac;
++	struct cpumask *user_mask;
++	struct task_struct *p;
++	int retval;
++
++	rcu_read_lock();
++
++	p = find_process_by_pid(pid);
++	if (!p) {
++		rcu_read_unlock();
++		return -ESRCH;
++	}
++
++	/* Prevent p going away */
++	get_task_struct(p);
++	rcu_read_unlock();
++
++	if (p->flags & PF_NO_SETAFFINITY) {
++		retval = -EINVAL;
++		goto out_put_task;
++	}
++
++	if (!check_same_owner(p)) {
++		rcu_read_lock();
++		if (!ns_capable(__task_cred(p)->user_ns, CAP_SYS_NICE)) {
++			rcu_read_unlock();
++			retval = -EPERM;
++			goto out_put_task;
++		}
++		rcu_read_unlock();
++	}
++
++	retval = security_task_setscheduler(p);
++	if (retval)
++		goto out_put_task;
++
++	/*
++	 * With non-SMP configs, user_cpus_ptr/user_mask isn't used and
++	 * alloc_user_cpus_ptr() returns NULL.
++	 */
++	user_mask = alloc_user_cpus_ptr(NUMA_NO_NODE);
++	if (user_mask) {
++		cpumask_copy(user_mask, in_mask);
++	} else if (IS_ENABLED(CONFIG_SMP)) {
++		retval = -ENOMEM;
++		goto out_put_task;
++	}
++
++	ac = (struct affinity_context){
++		.new_mask  = in_mask,
++		.user_mask = user_mask,
++		.flags     = SCA_USER,
++	};
++
++	retval = __sched_setaffinity(p, &ac);
++	kfree(ac.user_mask);
++
++out_put_task:
++	put_task_struct(p);
++	return retval;
++}
++
++static int get_user_cpu_mask(unsigned long __user *user_mask_ptr, unsigned len,
++			     struct cpumask *new_mask)
++{
++	if (len < cpumask_size())
++		cpumask_clear(new_mask);
++	else if (len > cpumask_size())
++		len = cpumask_size();
++
++	return copy_from_user(new_mask, user_mask_ptr, len) ? -EFAULT : 0;
++}
++
++/**
++ * sys_sched_setaffinity - set the CPU affinity of a process
++ * @pid: pid of the process
++ * @len: length in bytes of the bitmask pointed to by user_mask_ptr
++ * @user_mask_ptr: user-space pointer to the new CPU mask
++ *
++ * Return: 0 on success. An error code otherwise.
++ */
++SYSCALL_DEFINE3(sched_setaffinity, pid_t, pid, unsigned int, len,
++		unsigned long __user *, user_mask_ptr)
++{
++	cpumask_var_t new_mask;
++	int retval;
++
++	if (!alloc_cpumask_var(&new_mask, GFP_KERNEL))
++		return -ENOMEM;
++
++	retval = get_user_cpu_mask(user_mask_ptr, len, new_mask);
++	if (retval == 0)
++		retval = sched_setaffinity(pid, new_mask);
++	free_cpumask_var(new_mask);
++	return retval;
++}
++
++long sched_getaffinity(pid_t pid, cpumask_t *mask)
++{
++	struct task_struct *p;
++	raw_spinlock_t *lock;
++	unsigned long flags;
++	int retval;
++
++	rcu_read_lock();
++
++	retval = -ESRCH;
++	p = find_process_by_pid(pid);
++	if (!p)
++		goto out_unlock;
++
++	retval = security_task_getscheduler(p);
++	if (retval)
++		goto out_unlock;
++
++	task_access_lock_irqsave(p, &lock, &flags);
++	cpumask_and(mask, &p->cpus_mask, cpu_active_mask);
++	task_access_unlock_irqrestore(p, lock, &flags);
++
++out_unlock:
++	rcu_read_unlock();
++
++	return retval;
++}
++
++/**
++ * sys_sched_getaffinity - get the CPU affinity of a process
++ * @pid: pid of the process
++ * @len: length in bytes of the bitmask pointed to by user_mask_ptr
++ * @user_mask_ptr: user-space pointer to hold the current CPU mask
++ *
++ * Return: size of CPU mask copied to user_mask_ptr on success. An
++ * error code otherwise.
++ */
++SYSCALL_DEFINE3(sched_getaffinity, pid_t, pid, unsigned int, len,
++		unsigned long __user *, user_mask_ptr)
++{
++	int ret;
++	cpumask_var_t mask;
++
++	if ((len * BITS_PER_BYTE) < nr_cpu_ids)
++		return -EINVAL;
++	if (len & (sizeof(unsigned long)-1))
++		return -EINVAL;
++
++	if (!zalloc_cpumask_var(&mask, GFP_KERNEL))
++		return -ENOMEM;
++
++	ret = sched_getaffinity(pid, mask);
++	if (ret == 0) {
++		unsigned int retlen = min(len, cpumask_size());
++
++		if (copy_to_user(user_mask_ptr, cpumask_bits(mask), retlen))
++			ret = -EFAULT;
++		else
++			ret = retlen;
++	}
++	free_cpumask_var(mask);
++
++	return ret;
++}
++
++static void do_sched_yield(void)
++{
++	struct rq *rq;
++	struct rq_flags rf;
++
++	if (!sched_yield_type)
++		return;
++
++	rq = this_rq_lock_irq(&rf);
++
++	schedstat_inc(rq->yld_count);
++
++	if (1 == sched_yield_type) {
++		if (!rt_task(current))
++			do_sched_yield_type_1(current, rq);
++	} else if (2 == sched_yield_type) {
++		if (rq->nr_running > 1)
++			rq->skip = current;
++	}
++
++	preempt_disable();
++	raw_spin_unlock_irq(&rq->lock);
++	sched_preempt_enable_no_resched();
++
++	schedule();
++}
++
++/**
++ * sys_sched_yield - yield the current processor to other threads.
++ *
++ * This function yields the current CPU to other tasks. If there are no
++ * other threads running on this CPU then this function will return.
++ *
++ * Return: 0.
++ */
++SYSCALL_DEFINE0(sched_yield)
++{
++	do_sched_yield();
++	return 0;
++}
++
++#if !defined(CONFIG_PREEMPTION) || defined(CONFIG_PREEMPT_DYNAMIC)
++int __sched __cond_resched(void)
++{
++	if (should_resched(0)) {
++		preempt_schedule_common();
++		return 1;
++	}
++	/*
++	 * In preemptible kernels, ->rcu_read_lock_nesting tells the tick
++	 * whether the current CPU is in an RCU read-side critical section,
++	 * so the tick can report quiescent states even for CPUs looping
++	 * in kernel context.  In contrast, in non-preemptible kernels,
++	 * RCU readers leave no in-memory hints, which means that CPU-bound
++	 * processes executing in kernel context might never report an
++	 * RCU quiescent state.  Therefore, the following code causes
++	 * cond_resched() to report a quiescent state, but only when RCU
++	 * is in urgent need of one.
++	 */
++#ifndef CONFIG_PREEMPT_RCU
++	rcu_all_qs();
++#endif
++	return 0;
++}
++EXPORT_SYMBOL(__cond_resched);
++#endif
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
++#define cond_resched_dynamic_enabled	__cond_resched
++#define cond_resched_dynamic_disabled	((void *)&__static_call_return0)
++DEFINE_STATIC_CALL_RET0(cond_resched, __cond_resched);
++EXPORT_STATIC_CALL_TRAMP(cond_resched);
++
++#define might_resched_dynamic_enabled	__cond_resched
++#define might_resched_dynamic_disabled	((void *)&__static_call_return0)
++DEFINE_STATIC_CALL_RET0(might_resched, __cond_resched);
++EXPORT_STATIC_CALL_TRAMP(might_resched);
++#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
++static DEFINE_STATIC_KEY_FALSE(sk_dynamic_cond_resched);
++int __sched dynamic_cond_resched(void)
++{
++	if (!static_branch_unlikely(&sk_dynamic_cond_resched))
++		return 0;
++	return __cond_resched();
++}
++EXPORT_SYMBOL(dynamic_cond_resched);
++
++static DEFINE_STATIC_KEY_FALSE(sk_dynamic_might_resched);
++int __sched dynamic_might_resched(void)
++{
++	if (!static_branch_unlikely(&sk_dynamic_might_resched))
++		return 0;
++	return __cond_resched();
++}
++EXPORT_SYMBOL(dynamic_might_resched);
++#endif
++#endif
++
++/*
++ * __cond_resched_lock() - if a reschedule is pending, drop the given lock,
++ * call schedule, and on return reacquire the lock.
++ *
++ * This works OK both with and without CONFIG_PREEMPTION.  We do strange low-level
++ * operations here to prevent schedule() from being called twice (once via
++ * spin_unlock(), once by hand).
++ */
++int __cond_resched_lock(spinlock_t *lock)
++{
++	int resched = should_resched(PREEMPT_LOCK_OFFSET);
++	int ret = 0;
++
++	lockdep_assert_held(lock);
++
++	if (spin_needbreak(lock) || resched) {
++		spin_unlock(lock);
++		if (!_cond_resched())
++			cpu_relax();
++		ret = 1;
++		spin_lock(lock);
++	}
++	return ret;
++}
++EXPORT_SYMBOL(__cond_resched_lock);
++
++int __cond_resched_rwlock_read(rwlock_t *lock)
++{
++	int resched = should_resched(PREEMPT_LOCK_OFFSET);
++	int ret = 0;
++
++	lockdep_assert_held_read(lock);
++
++	if (rwlock_needbreak(lock) || resched) {
++		read_unlock(lock);
++		if (!_cond_resched())
++			cpu_relax();
++		ret = 1;
++		read_lock(lock);
++	}
++	return ret;
++}
++EXPORT_SYMBOL(__cond_resched_rwlock_read);
++
++int __cond_resched_rwlock_write(rwlock_t *lock)
++{
++	int resched = should_resched(PREEMPT_LOCK_OFFSET);
++	int ret = 0;
++
++	lockdep_assert_held_write(lock);
++
++	if (rwlock_needbreak(lock) || resched) {
++		write_unlock(lock);
++		if (!_cond_resched())
++			cpu_relax();
++		ret = 1;
++		write_lock(lock);
++	}
++	return ret;
++}
++EXPORT_SYMBOL(__cond_resched_rwlock_write);
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++
++#ifdef CONFIG_GENERIC_ENTRY
++#include <linux/entry-common.h>
++#endif
++
++/*
++ * SC:cond_resched
++ * SC:might_resched
++ * SC:preempt_schedule
++ * SC:preempt_schedule_notrace
++ * SC:irqentry_exit_cond_resched
++ *
++ *
++ * NONE:
++ *   cond_resched               <- __cond_resched
++ *   might_resched              <- RET0
++ *   preempt_schedule           <- NOP
++ *   preempt_schedule_notrace   <- NOP
++ *   irqentry_exit_cond_resched <- NOP
++ *
++ * VOLUNTARY:
++ *   cond_resched               <- __cond_resched
++ *   might_resched              <- __cond_resched
++ *   preempt_schedule           <- NOP
++ *   preempt_schedule_notrace   <- NOP
++ *   irqentry_exit_cond_resched <- NOP
++ *
++ * FULL:
++ *   cond_resched               <- RET0
++ *   might_resched              <- RET0
++ *   preempt_schedule           <- preempt_schedule
++ *   preempt_schedule_notrace   <- preempt_schedule_notrace
++ *   irqentry_exit_cond_resched <- irqentry_exit_cond_resched
++ */
++
++enum {
++	preempt_dynamic_undefined = -1,
++	preempt_dynamic_none,
++	preempt_dynamic_voluntary,
++	preempt_dynamic_full,
++};
++
++int preempt_dynamic_mode = preempt_dynamic_undefined;
++
++int sched_dynamic_mode(const char *str)
++{
++	if (!strcmp(str, "none"))
++		return preempt_dynamic_none;
++
++	if (!strcmp(str, "voluntary"))
++		return preempt_dynamic_voluntary;
++
++	if (!strcmp(str, "full"))
++		return preempt_dynamic_full;
++
++	return -EINVAL;
++}
++
++#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
++#define preempt_dynamic_enable(f)	static_call_update(f, f##_dynamic_enabled)
++#define preempt_dynamic_disable(f)	static_call_update(f, f##_dynamic_disabled)
++#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
++#define preempt_dynamic_enable(f)	static_key_enable(&sk_dynamic_##f.key)
++#define preempt_dynamic_disable(f)	static_key_disable(&sk_dynamic_##f.key)
++#else
++#error "Unsupported PREEMPT_DYNAMIC mechanism"
++#endif
++
++void sched_dynamic_update(int mode)
++{
++	/*
++	 * Avoid {NONE,VOLUNTARY} -> FULL transitions from ever ending up in
++	 * the ZERO state, which is invalid.
++	 */
++	preempt_dynamic_enable(cond_resched);
++	preempt_dynamic_enable(might_resched);
++	preempt_dynamic_enable(preempt_schedule);
++	preempt_dynamic_enable(preempt_schedule_notrace);
++	preempt_dynamic_enable(irqentry_exit_cond_resched);
++
++	switch (mode) {
++	case preempt_dynamic_none:
++		preempt_dynamic_enable(cond_resched);
++		preempt_dynamic_disable(might_resched);
++		preempt_dynamic_disable(preempt_schedule);
++		preempt_dynamic_disable(preempt_schedule_notrace);
++		preempt_dynamic_disable(irqentry_exit_cond_resched);
++		pr_info("Dynamic Preempt: none\n");
++		break;
++
++	case preempt_dynamic_voluntary:
++		preempt_dynamic_enable(cond_resched);
++		preempt_dynamic_enable(might_resched);
++		preempt_dynamic_disable(preempt_schedule);
++		preempt_dynamic_disable(preempt_schedule_notrace);
++		preempt_dynamic_disable(irqentry_exit_cond_resched);
++		pr_info("Dynamic Preempt: voluntary\n");
++		break;
++
++	case preempt_dynamic_full:
++		preempt_dynamic_disable(cond_resched);
++		preempt_dynamic_disable(might_resched);
++		preempt_dynamic_enable(preempt_schedule);
++		preempt_dynamic_enable(preempt_schedule_notrace);
++		preempt_dynamic_enable(irqentry_exit_cond_resched);
++		pr_info("Dynamic Preempt: full\n");
++		break;
++	}
++
++	preempt_dynamic_mode = mode;
++}
++
++static int __init setup_preempt_mode(char *str)
++{
++	int mode = sched_dynamic_mode(str);
++	if (mode < 0) {
++		pr_warn("Dynamic Preempt: unsupported mode: %s\n", str);
++		return 0;
++	}
++
++	sched_dynamic_update(mode);
++	return 1;
++}
++__setup("preempt=", setup_preempt_mode);
++
++static void __init preempt_dynamic_init(void)
++{
++	if (preempt_dynamic_mode == preempt_dynamic_undefined) {
++		if (IS_ENABLED(CONFIG_PREEMPT_NONE)) {
++			sched_dynamic_update(preempt_dynamic_none);
++		} else if (IS_ENABLED(CONFIG_PREEMPT_VOLUNTARY)) {
++			sched_dynamic_update(preempt_dynamic_voluntary);
++		} else {
++			/* Default static call setting, nothing to do */
++			WARN_ON_ONCE(!IS_ENABLED(CONFIG_PREEMPT));
++			preempt_dynamic_mode = preempt_dynamic_full;
++			pr_info("Dynamic Preempt: full\n");
++		}
++	}
++}
++
++#define PREEMPT_MODEL_ACCESSOR(mode) \
++	bool preempt_model_##mode(void)						 \
++	{									 \
++		WARN_ON_ONCE(preempt_dynamic_mode == preempt_dynamic_undefined); \
++		return preempt_dynamic_mode == preempt_dynamic_##mode;		 \
++	}									 \
++	EXPORT_SYMBOL_GPL(preempt_model_##mode)
++
++PREEMPT_MODEL_ACCESSOR(none);
++PREEMPT_MODEL_ACCESSOR(voluntary);
++PREEMPT_MODEL_ACCESSOR(full);
++
++#else /* !CONFIG_PREEMPT_DYNAMIC */
++
++static inline void preempt_dynamic_init(void) { }
++
++#endif /* #ifdef CONFIG_PREEMPT_DYNAMIC */
++
++/**
++ * yield - yield the current processor to other threads.
++ *
++ * Do not ever use this function, there's a 99% chance you're doing it wrong.
++ *
++ * The scheduler is at all times free to pick the calling task as the most
++ * eligible task to run, if removing the yield() call from your code breaks
++ * it, it's already broken.
++ *
++ * Typical broken usage is:
++ *
++ * while (!event)
++ * 	yield();
++ *
++ * where one assumes that yield() will let 'the other' process run that will
++ * make event true. If the current task is a SCHED_FIFO task that will never
++ * happen. Never use yield() as a progress guarantee!!
++ *
++ * If you want to use yield() to wait for something, use wait_event().
++ * If you want to use yield() to be 'nice' for others, use cond_resched().
++ * If you still want to use yield(), do not!
++ */
++void __sched yield(void)
++{
++	set_current_state(TASK_RUNNING);
++	do_sched_yield();
++}
++EXPORT_SYMBOL(yield);
++
++/**
++ * yield_to - yield the current processor to another thread in
++ * your thread group, or accelerate that thread toward the
++ * processor it's on.
++ * @p: target task
++ * @preempt: whether task preemption is allowed or not
++ *
++ * It's the caller's job to ensure that the target task struct
++ * can't go away on us before we can do any checks.
++ *
++ * In Alt schedule FW, yield_to is not supported.
++ *
++ * Return:
++ *	true (>0) if we indeed boosted the target task.
++ *	false (0) if we failed to boost the target.
++ *	-ESRCH if there's no task to yield to.
++ */
++int __sched yield_to(struct task_struct *p, bool preempt)
++{
++	return 0;
++}
++EXPORT_SYMBOL_GPL(yield_to);
++
++int io_schedule_prepare(void)
++{
++	int old_iowait = current->in_iowait;
++
++	current->in_iowait = 1;
++	blk_flush_plug(current->plug, true);
++	return old_iowait;
++}
++
++void io_schedule_finish(int token)
++{
++	current->in_iowait = token;
++}
++
++/*
++ * This task is about to go to sleep on IO.  Increment rq->nr_iowait so
++ * that process accounting knows that this is a task in IO wait state.
++ *
++ * But don't do that if it is a deliberate, throttling IO wait (this task
++ * has set its backing_dev_info: the queue against which it should throttle)
++ */
++
++long __sched io_schedule_timeout(long timeout)
++{
++	int token;
++	long ret;
++
++	token = io_schedule_prepare();
++	ret = schedule_timeout(timeout);
++	io_schedule_finish(token);
++
++	return ret;
++}
++EXPORT_SYMBOL(io_schedule_timeout);
++
++void __sched io_schedule(void)
++{
++	int token;
++
++	token = io_schedule_prepare();
++	schedule();
++	io_schedule_finish(token);
++}
++EXPORT_SYMBOL(io_schedule);
++
++/**
++ * sys_sched_get_priority_max - return maximum RT priority.
++ * @policy: scheduling class.
++ *
++ * Return: On success, this syscall returns the maximum
++ * rt_priority that can be used by a given scheduling class.
++ * On failure, a negative error code is returned.
++ */
++SYSCALL_DEFINE1(sched_get_priority_max, int, policy)
++{
++	int ret = -EINVAL;
++
++	switch (policy) {
++	case SCHED_FIFO:
++	case SCHED_RR:
++		ret = MAX_RT_PRIO - 1;
++		break;
++	case SCHED_NORMAL:
++	case SCHED_BATCH:
++	case SCHED_IDLE:
++		ret = 0;
++		break;
++	}
++	return ret;
++}
++
++/**
++ * sys_sched_get_priority_min - return minimum RT priority.
++ * @policy: scheduling class.
++ *
++ * Return: On success, this syscall returns the minimum
++ * rt_priority that can be used by a given scheduling class.
++ * On failure, a negative error code is returned.
++ */
++SYSCALL_DEFINE1(sched_get_priority_min, int, policy)
++{
++	int ret = -EINVAL;
++
++	switch (policy) {
++	case SCHED_FIFO:
++	case SCHED_RR:
++		ret = 1;
++		break;
++	case SCHED_NORMAL:
++	case SCHED_BATCH:
++	case SCHED_IDLE:
++		ret = 0;
++		break;
++	}
++	return ret;
++}
++
++static int sched_rr_get_interval(pid_t pid, struct timespec64 *t)
++{
++	struct task_struct *p;
++	int retval;
++
++	alt_sched_debug();
++
++	if (pid < 0)
++		return -EINVAL;
++
++	retval = -ESRCH;
++	rcu_read_lock();
++	p = find_process_by_pid(pid);
++	if (!p)
++		goto out_unlock;
++
++	retval = security_task_getscheduler(p);
++	if (retval)
++		goto out_unlock;
++	rcu_read_unlock();
++
++	*t = ns_to_timespec64(sched_timeslice_ns);
++	return 0;
++
++out_unlock:
++	rcu_read_unlock();
++	return retval;
++}
++
++/**
++ * sys_sched_rr_get_interval - return the default timeslice of a process.
++ * @pid: pid of the process.
++ * @interval: userspace pointer to the timeslice value.
++ *
++ *
++ * Return: On success, 0 and the timeslice is in @interval. Otherwise,
++ * an error code.
++ */
++SYSCALL_DEFINE2(sched_rr_get_interval, pid_t, pid,
++		struct __kernel_timespec __user *, interval)
++{
++	struct timespec64 t;
++	int retval = sched_rr_get_interval(pid, &t);
++
++	if (retval == 0)
++		retval = put_timespec64(&t, interval);
++
++	return retval;
++}
++
++#ifdef CONFIG_COMPAT_32BIT_TIME
++SYSCALL_DEFINE2(sched_rr_get_interval_time32, pid_t, pid,
++		struct old_timespec32 __user *, interval)
++{
++	struct timespec64 t;
++	int retval = sched_rr_get_interval(pid, &t);
++
++	if (retval == 0)
++		retval = put_old_timespec32(&t, interval);
++	return retval;
++}
++#endif
++
++void sched_show_task(struct task_struct *p)
++{
++	unsigned long free = 0;
++	int ppid;
++
++	if (!try_get_task_stack(p))
++		return;
++
++	pr_info("task:%-15.15s state:%c", p->comm, task_state_to_char(p));
++
++	if (task_is_running(p))
++		pr_cont("  running task    ");
++#ifdef CONFIG_DEBUG_STACK_USAGE
++	free = stack_not_used(p);
++#endif
++	ppid = 0;
++	rcu_read_lock();
++	if (pid_alive(p))
++		ppid = task_pid_nr(rcu_dereference(p->real_parent));
++	rcu_read_unlock();
++	pr_cont(" stack:%-5lu pid:%-5d ppid:%-6d flags:0x%08lx\n",
++		free, task_pid_nr(p), ppid,
++		read_task_thread_flags(p));
++
++	print_worker_info(KERN_INFO, p);
++	print_stop_info(KERN_INFO, p);
++	show_stack(p, NULL, KERN_INFO);
++	put_task_stack(p);
++}
++EXPORT_SYMBOL_GPL(sched_show_task);
++
++static inline bool
++state_filter_match(unsigned long state_filter, struct task_struct *p)
++{
++	unsigned int state = READ_ONCE(p->__state);
++
++	/* no filter, everything matches */
++	if (!state_filter)
++		return true;
++
++	/* filter, but doesn't match */
++	if (!(state & state_filter))
++		return false;
++
++	/*
++	 * When looking for TASK_UNINTERRUPTIBLE skip TASK_IDLE (allows
++	 * TASK_KILLABLE).
++	 */
++	if (state_filter == TASK_UNINTERRUPTIBLE && (state & TASK_NOLOAD))
++		return false;
++
++	return true;
++}
++
++
++void show_state_filter(unsigned int state_filter)
++{
++	struct task_struct *g, *p;
++
++	rcu_read_lock();
++	for_each_process_thread(g, p) {
++		/*
++		 * reset the NMI-timeout, listing all files on a slow
++		 * console might take a lot of time:
++		 * Also, reset softlockup watchdogs on all CPUs, because
++		 * another CPU might be blocked waiting for us to process
++		 * an IPI.
++		 */
++		touch_nmi_watchdog();
++		touch_all_softlockup_watchdogs();
++		if (state_filter_match(state_filter, p))
++			sched_show_task(p);
++	}
++
++#ifdef CONFIG_SCHED_DEBUG
++	/* TODO: Alt schedule FW should support this
++	if (!state_filter)
++		sysrq_sched_debug_show();
++	*/
++#endif
++	rcu_read_unlock();
++	/*
++	 * Only show locks if all tasks are dumped:
++	 */
++	if (!state_filter)
++		debug_show_all_locks();
++}
++
++void dump_cpu_task(int cpu)
++{
++	if (cpu == smp_processor_id() && in_hardirq()) {
++		struct pt_regs *regs;
++
++		regs = get_irq_regs();
++		if (regs) {
++			show_regs(regs);
++			return;
++		}
++	}
++
++	if (trigger_single_cpu_backtrace(cpu))
++		return;
++
++	pr_info("Task dump for CPU %d:\n", cpu);
++	sched_show_task(cpu_curr(cpu));
++}
++
++/**
++ * init_idle - set up an idle thread for a given CPU
++ * @idle: task in question
++ * @cpu: CPU the idle task belongs to
++ *
++ * NOTE: this function does not set the idle thread's NEED_RESCHED
++ * flag, to make booting more robust.
++ */
++void __init init_idle(struct task_struct *idle, int cpu)
++{
++#ifdef CONFIG_SMP
++	struct affinity_context ac = (struct affinity_context) {
++		.new_mask  = cpumask_of(cpu),
++		.flags     = 0,
++	};
++#endif
++	struct rq *rq = cpu_rq(cpu);
++	unsigned long flags;
++
++	__sched_fork(0, idle);
++
++	raw_spin_lock_irqsave(&idle->pi_lock, flags);
++	raw_spin_lock(&rq->lock);
++
++	idle->last_ran = rq->clock_task;
++	idle->__state = TASK_RUNNING;
++	/*
++	 * PF_KTHREAD should already be set at this point; regardless, make it
++	 * look like a proper per-CPU kthread.
++	 */
++	idle->flags |= PF_IDLE | PF_KTHREAD | PF_NO_SETAFFINITY;
++	kthread_set_per_cpu(idle, cpu);
++
++	sched_queue_init_idle(&rq->queue, idle);
++
++#ifdef CONFIG_SMP
++	/*
++	 * It's possible that init_idle() gets called multiple times on a task,
++	 * in that case do_set_cpus_allowed() will not do the right thing.
++	 *
++	 * And since this is boot we can forgo the serialisation.
++	 */
++	set_cpus_allowed_common(idle, &ac);
++#endif
++
++	/* Silence PROVE_RCU */
++	rcu_read_lock();
++	__set_task_cpu(idle, cpu);
++	rcu_read_unlock();
++
++	rq->idle = idle;
++	rcu_assign_pointer(rq->curr, idle);
++	idle->on_cpu = 1;
++
++	raw_spin_unlock(&rq->lock);
++	raw_spin_unlock_irqrestore(&idle->pi_lock, flags);
++
++	/* Set the preempt count _outside_ the spinlocks! */
++	init_idle_preempt_count(idle, cpu);
++
++	ftrace_graph_init_idle_task(idle, cpu);
++	vtime_init_idle(idle, cpu);
++#ifdef CONFIG_SMP
++	sprintf(idle->comm, "%s/%d", INIT_TASK_COMM, cpu);
++#endif
++}
++
++#ifdef CONFIG_SMP
++
++int cpuset_cpumask_can_shrink(const struct cpumask __maybe_unused *cur,
++			      const struct cpumask __maybe_unused *trial)
++{
++	return 1;
++}
++
++int task_can_attach(struct task_struct *p,
++		    const struct cpumask *cs_effective_cpus)
++{
++	int ret = 0;
++
++	/*
++	 * Kthreads which disallow setaffinity shouldn't be moved
++	 * to a new cpuset; we don't want to change their CPU
++	 * affinity and isolating such threads by their set of
++	 * allowed nodes is unnecessary.  Thus, cpusets are not
++	 * applicable for such threads.  This prevents checking for
++	 * success of set_cpus_allowed_ptr() on all attached tasks
++	 * before cpus_mask may be changed.
++	 */
++	if (p->flags & PF_NO_SETAFFINITY)
++		ret = -EINVAL;
++
++	return ret;
++}
++
++bool sched_smp_initialized __read_mostly;
++
++#ifdef CONFIG_HOTPLUG_CPU
++/*
++ * Ensures that the idle task is using init_mm right before its CPU goes
++ * offline.
++ */
++void idle_task_exit(void)
++{
++	struct mm_struct *mm = current->active_mm;
++
++	BUG_ON(current != this_rq()->idle);
++
++	if (mm != &init_mm) {
++		switch_mm(mm, &init_mm, current);
++		finish_arch_post_lock_switch();
++	}
++
++	/* finish_cpu(), as ran on the BP, will clean up the active_mm state */
++}
++
++static int __balance_push_cpu_stop(void *arg)
++{
++	struct task_struct *p = arg;
++	struct rq *rq = this_rq();
++	struct rq_flags rf;
++	int cpu;
++
++	raw_spin_lock_irq(&p->pi_lock);
++	rq_lock(rq, &rf);
++
++	update_rq_clock(rq);
++
++	if (task_rq(p) == rq && task_on_rq_queued(p)) {
++		cpu = select_fallback_rq(rq->cpu, p);
++		rq = __migrate_task(rq, p, cpu);
++	}
++
++	rq_unlock(rq, &rf);
++	raw_spin_unlock_irq(&p->pi_lock);
++
++	put_task_struct(p);
++
++	return 0;
++}
++
++static DEFINE_PER_CPU(struct cpu_stop_work, push_work);
++
++/*
++ * This is enabled below SCHED_AP_ACTIVE; when !cpu_active(), but only
++ * effective when the hotplug motion is down.
++ */
++static void balance_push(struct rq *rq)
++{
++	struct task_struct *push_task = rq->curr;
++
++	lockdep_assert_held(&rq->lock);
++
++	/*
++	 * Ensure the thing is persistent until balance_push_set(.on = false);
++	 */
++	rq->balance_callback = &balance_push_callback;
++
++	/*
++	 * Only active while going offline and when invoked on the outgoing
++	 * CPU.
++	 */
++	if (!cpu_dying(rq->cpu) || rq != this_rq())
++		return;
++
++	/*
++	 * Both the cpu-hotplug and stop task are in this case and are
++	 * required to complete the hotplug process.
++	 */
++	if (kthread_is_per_cpu(push_task) ||
++	    is_migration_disabled(push_task)) {
++
++		/*
++		 * If this is the idle task on the outgoing CPU try to wake
++		 * up the hotplug control thread which might wait for the
++		 * last task to vanish. The rcuwait_active() check is
++		 * accurate here because the waiter is pinned on this CPU
++		 * and can't obviously be running in parallel.
++		 *
++		 * On RT kernels this also has to check whether there are
++		 * pinned and scheduled out tasks on the runqueue. They
++		 * need to leave the migrate disabled section first.
++		 */
++		if (!rq->nr_running && !rq_has_pinned_tasks(rq) &&
++		    rcuwait_active(&rq->hotplug_wait)) {
++			raw_spin_unlock(&rq->lock);
++			rcuwait_wake_up(&rq->hotplug_wait);
++			raw_spin_lock(&rq->lock);
++		}
++		return;
++	}
++
++	get_task_struct(push_task);
++	/*
++	 * Temporarily drop rq->lock such that we can wake-up the stop task.
++	 * Both preemption and IRQs are still disabled.
++	 */
++	raw_spin_unlock(&rq->lock);
++	stop_one_cpu_nowait(rq->cpu, __balance_push_cpu_stop, push_task,
++			    this_cpu_ptr(&push_work));
++	/*
++	 * At this point need_resched() is true and we'll take the loop in
++	 * schedule(). The next pick is obviously going to be the stop task
++	 * which kthread_is_per_cpu() and will push this task away.
++	 */
++	raw_spin_lock(&rq->lock);
++}
++
++static void balance_push_set(int cpu, bool on)
++{
++	struct rq *rq = cpu_rq(cpu);
++	struct rq_flags rf;
++
++	rq_lock_irqsave(rq, &rf);
++	if (on) {
++		WARN_ON_ONCE(rq->balance_callback);
++		rq->balance_callback = &balance_push_callback;
++	} else if (rq->balance_callback == &balance_push_callback) {
++		rq->balance_callback = NULL;
++	}
++	rq_unlock_irqrestore(rq, &rf);
++}
++
++/*
++ * Invoked from a CPUs hotplug control thread after the CPU has been marked
++ * inactive. All tasks which are not per CPU kernel threads are either
++ * pushed off this CPU now via balance_push() or placed on a different CPU
++ * during wakeup. Wait until the CPU is quiescent.
++ */
++static void balance_hotplug_wait(void)
++{
++	struct rq *rq = this_rq();
++
++	rcuwait_wait_event(&rq->hotplug_wait,
++			   rq->nr_running == 1 && !rq_has_pinned_tasks(rq),
++			   TASK_UNINTERRUPTIBLE);
++}
++
++#else
++
++static void balance_push(struct rq *rq)
++{
++}
++
++static void balance_push_set(int cpu, bool on)
++{
++}
++
++static inline void balance_hotplug_wait(void)
++{
++}
++#endif /* CONFIG_HOTPLUG_CPU */
++
++static void set_rq_offline(struct rq *rq)
++{
++	if (rq->online)
++		rq->online = false;
++}
++
++static void set_rq_online(struct rq *rq)
++{
++	if (!rq->online)
++		rq->online = true;
++}
++
++/*
++ * used to mark begin/end of suspend/resume:
++ */
++static int num_cpus_frozen;
++
++/*
++ * Update cpusets according to cpu_active mask.  If cpusets are
++ * disabled, cpuset_update_active_cpus() becomes a simple wrapper
++ * around partition_sched_domains().
++ *
++ * If we come here as part of a suspend/resume, don't touch cpusets because we
++ * want to restore it back to its original state upon resume anyway.
++ */
++static void cpuset_cpu_active(void)
++{
++	if (cpuhp_tasks_frozen) {
++		/*
++		 * num_cpus_frozen tracks how many CPUs are involved in suspend
++		 * resume sequence. As long as this is not the last online
++		 * operation in the resume sequence, just build a single sched
++		 * domain, ignoring cpusets.
++		 */
++		partition_sched_domains(1, NULL, NULL);
++		if (--num_cpus_frozen)
++			return;
++		/*
++		 * This is the last CPU online operation. So fall through and
++		 * restore the original sched domains by considering the
++		 * cpuset configurations.
++		 */
++		cpuset_force_rebuild();
++	}
++
++	cpuset_update_active_cpus();
++}
++
++static int cpuset_cpu_inactive(unsigned int cpu)
++{
++	if (!cpuhp_tasks_frozen) {
++		cpuset_update_active_cpus();
++	} else {
++		num_cpus_frozen++;
++		partition_sched_domains(1, NULL, NULL);
++	}
++	return 0;
++}
++
++int sched_cpu_activate(unsigned int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++	unsigned long flags;
++
++	/*
++	 * Clear the balance_push callback and prepare to schedule
++	 * regular tasks.
++	 */
++	balance_push_set(cpu, false);
++
++#ifdef CONFIG_SCHED_SMT
++	/*
++	 * When going up, increment the number of cores with SMT present.
++	 */
++	if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
++		static_branch_inc_cpuslocked(&sched_smt_present);
++#endif
++	set_cpu_active(cpu, true);
++
++	if (sched_smp_initialized)
++		cpuset_cpu_active();
++
++	/*
++	 * Put the rq online, if not already. This happens:
++	 *
++	 * 1) In the early boot process, because we build the real domains
++	 *    after all cpus have been brought up.
++	 *
++	 * 2) At runtime, if cpuset_cpu_active() fails to rebuild the
++	 *    domains.
++	 */
++	raw_spin_lock_irqsave(&rq->lock, flags);
++	set_rq_online(rq);
++	raw_spin_unlock_irqrestore(&rq->lock, flags);
++
++	return 0;
++}
++
++int sched_cpu_deactivate(unsigned int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++	unsigned long flags;
++	int ret;
++
++	set_cpu_active(cpu, false);
++
++	/*
++	 * From this point forward, this CPU will refuse to run any task that
++	 * is not: migrate_disable() or KTHREAD_IS_PER_CPU, and will actively
++	 * push those tasks away until this gets cleared, see
++	 * sched_cpu_dying().
++	 */
++	balance_push_set(cpu, true);
++
++	/*
++	 * We've cleared cpu_active_mask, wait for all preempt-disabled and RCU
++	 * users of this state to go away such that all new such users will
++	 * observe it.
++	 *
++	 * Specifically, we rely on ttwu to no longer target this CPU, see
++	 * ttwu_queue_cond() and is_cpu_allowed().
++	 *
++	 * Do sync before park smpboot threads to take care the rcu boost case.
++	 */
++	synchronize_rcu();
++
++	raw_spin_lock_irqsave(&rq->lock, flags);
++	update_rq_clock(rq);
++	set_rq_offline(rq);
++	raw_spin_unlock_irqrestore(&rq->lock, flags);
++
++#ifdef CONFIG_SCHED_SMT
++	/*
++	 * When going down, decrement the number of cores with SMT present.
++	 */
++	if (cpumask_weight(cpu_smt_mask(cpu)) == 2) {
++		static_branch_dec_cpuslocked(&sched_smt_present);
++		if (!static_branch_likely(&sched_smt_present))
++			cpumask_clear(&sched_sg_idle_mask);
++	}
++#endif
++
++	if (!sched_smp_initialized)
++		return 0;
++
++	ret = cpuset_cpu_inactive(cpu);
++	if (ret) {
++		balance_push_set(cpu, false);
++		set_cpu_active(cpu, true);
++		return ret;
++	}
++
++	return 0;
++}
++
++static void sched_rq_cpu_starting(unsigned int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++
++	rq->calc_load_update = calc_load_update;
++}
++
++int sched_cpu_starting(unsigned int cpu)
++{
++	sched_rq_cpu_starting(cpu);
++	sched_tick_start(cpu);
++	return 0;
++}
++
++#ifdef CONFIG_HOTPLUG_CPU
++
++/*
++ * Invoked immediately before the stopper thread is invoked to bring the
++ * CPU down completely. At this point all per CPU kthreads except the
++ * hotplug thread (current) and the stopper thread (inactive) have been
++ * either parked or have been unbound from the outgoing CPU. Ensure that
++ * any of those which might be on the way out are gone.
++ *
++ * If after this point a bound task is being woken on this CPU then the
++ * responsible hotplug callback has failed to do it's job.
++ * sched_cpu_dying() will catch it with the appropriate fireworks.
++ */
++int sched_cpu_wait_empty(unsigned int cpu)
++{
++	balance_hotplug_wait();
++	return 0;
++}
++
++/*
++ * Since this CPU is going 'away' for a while, fold any nr_active delta we
++ * might have. Called from the CPU stopper task after ensuring that the
++ * stopper is the last running task on the CPU, so nr_active count is
++ * stable. We need to take the teardown thread which is calling this into
++ * account, so we hand in adjust = 1 to the load calculation.
++ *
++ * Also see the comment "Global load-average calculations".
++ */
++static void calc_load_migrate(struct rq *rq)
++{
++	long delta = calc_load_fold_active(rq, 1);
++
++	if (delta)
++		atomic_long_add(delta, &calc_load_tasks);
++}
++
++static void dump_rq_tasks(struct rq *rq, const char *loglvl)
++{
++	struct task_struct *g, *p;
++	int cpu = cpu_of(rq);
++
++	lockdep_assert_held(&rq->lock);
++
++	printk("%sCPU%d enqueued tasks (%u total):\n", loglvl, cpu, rq->nr_running);
++	for_each_process_thread(g, p) {
++		if (task_cpu(p) != cpu)
++			continue;
++
++		if (!task_on_rq_queued(p))
++			continue;
++
++		printk("%s\tpid: %d, name: %s\n", loglvl, p->pid, p->comm);
++	}
++}
++
++int sched_cpu_dying(unsigned int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++	unsigned long flags;
++
++	/* Handle pending wakeups and then migrate everything off */
++	sched_tick_stop(cpu);
++
++	raw_spin_lock_irqsave(&rq->lock, flags);
++	if (rq->nr_running != 1 || rq_has_pinned_tasks(rq)) {
++		WARN(true, "Dying CPU not properly vacated!");
++		dump_rq_tasks(rq, KERN_WARNING);
++	}
++	raw_spin_unlock_irqrestore(&rq->lock, flags);
++
++	calc_load_migrate(rq);
++	hrtick_clear(rq);
++	return 0;
++}
++#endif
++
++#ifdef CONFIG_SMP
++static void sched_init_topology_cpumask_early(void)
++{
++	int cpu;
++	cpumask_t *tmp;
++
++	for_each_possible_cpu(cpu) {
++		/* init topo masks */
++		tmp = per_cpu(sched_cpu_topo_masks, cpu);
++
++		cpumask_copy(tmp, cpumask_of(cpu));
++		tmp++;
++		cpumask_copy(tmp, cpu_possible_mask);
++		per_cpu(sched_cpu_llc_mask, cpu) = tmp;
++		per_cpu(sched_cpu_topo_end_mask, cpu) = ++tmp;
++		/*per_cpu(sd_llc_id, cpu) = cpu;*/
++	}
++}
++
++#define TOPOLOGY_CPUMASK(name, mask, last)\
++	if (cpumask_and(topo, topo, mask)) {					\
++		cpumask_copy(topo, mask);					\
++		printk(KERN_INFO "sched: cpu#%02d topo: 0x%08lx - "#name,	\
++		       cpu, (topo++)->bits[0]);					\
++	}									\
++	if (!last)								\
++		bitmap_complement(cpumask_bits(topo), cpumask_bits(mask),	\
++				  nr_cpumask_bits);
++
++static void sched_init_topology_cpumask(void)
++{
++	int cpu;
++	cpumask_t *topo;
++
++	for_each_online_cpu(cpu) {
++		/* take chance to reset time slice for idle tasks */
++		cpu_rq(cpu)->idle->time_slice = sched_timeslice_ns;
++
++		topo = per_cpu(sched_cpu_topo_masks, cpu) + 1;
++
++		bitmap_complement(cpumask_bits(topo), cpumask_bits(cpumask_of(cpu)),
++				  nr_cpumask_bits);
++#ifdef CONFIG_SCHED_SMT
++		TOPOLOGY_CPUMASK(smt, topology_sibling_cpumask(cpu), false);
++#endif
++		per_cpu(sd_llc_id, cpu) = cpumask_first(cpu_coregroup_mask(cpu));
++		per_cpu(sched_cpu_llc_mask, cpu) = topo;
++		TOPOLOGY_CPUMASK(coregroup, cpu_coregroup_mask(cpu), false);
++
++		TOPOLOGY_CPUMASK(core, topology_core_cpumask(cpu), false);
++
++		TOPOLOGY_CPUMASK(others, cpu_online_mask, true);
++
++		per_cpu(sched_cpu_topo_end_mask, cpu) = topo;
++		printk(KERN_INFO "sched: cpu#%02d llc_id = %d, llc_mask idx = %d\n",
++		       cpu, per_cpu(sd_llc_id, cpu),
++		       (int) (per_cpu(sched_cpu_llc_mask, cpu) -
++			      per_cpu(sched_cpu_topo_masks, cpu)));
++	}
++}
++#endif
++
++void __init sched_init_smp(void)
++{
++	/* Move init over to a non-isolated CPU */
++	if (set_cpus_allowed_ptr(current, housekeeping_cpumask(HK_TYPE_DOMAIN)) < 0)
++		BUG();
++	current->flags &= ~PF_NO_SETAFFINITY;
++
++	sched_init_topology_cpumask();
++
++	sched_smp_initialized = true;
++}
++
++static int __init migration_init(void)
++{
++	sched_cpu_starting(smp_processor_id());
++	return 0;
++}
++early_initcall(migration_init);
++
++#else
++void __init sched_init_smp(void)
++{
++	cpu_rq(0)->idle->time_slice = sched_timeslice_ns;
++}
++#endif /* CONFIG_SMP */
++
++int in_sched_functions(unsigned long addr)
++{
++	return in_lock_functions(addr) ||
++		(addr >= (unsigned long)__sched_text_start
++		&& addr < (unsigned long)__sched_text_end);
++}
++
++#ifdef CONFIG_CGROUP_SCHED
++/* task group related information */
++struct task_group {
++	struct cgroup_subsys_state css;
++
++	struct rcu_head rcu;
++	struct list_head list;
++
++	struct task_group *parent;
++	struct list_head siblings;
++	struct list_head children;
++#ifdef CONFIG_FAIR_GROUP_SCHED
++	unsigned long		shares;
++#endif
++};
++
++/*
++ * Default task group.
++ * Every task in system belongs to this group at bootup.
++ */
++struct task_group root_task_group;
++LIST_HEAD(task_groups);
++
++/* Cacheline aligned slab cache for task_group */
++static struct kmem_cache *task_group_cache __read_mostly;
++#endif /* CONFIG_CGROUP_SCHED */
++
++void __init sched_init(void)
++{
++	int i;
++	struct rq *rq;
++
++	printk(KERN_INFO "sched/alt: "ALT_SCHED_NAME" CPU Scheduler "ALT_SCHED_VERSION\
++			 " by Alfred Chen.\n");
++
++	wait_bit_init();
++
++#ifdef CONFIG_SMP
++	for (i = 0; i < SCHED_QUEUE_BITS; i++)
++		cpumask_copy(sched_preempt_mask + i, cpu_present_mask);
++#endif
++
++#ifdef CONFIG_CGROUP_SCHED
++	task_group_cache = KMEM_CACHE(task_group, 0);
++
++	list_add(&root_task_group.list, &task_groups);
++	INIT_LIST_HEAD(&root_task_group.children);
++	INIT_LIST_HEAD(&root_task_group.siblings);
++#endif /* CONFIG_CGROUP_SCHED */
++	for_each_possible_cpu(i) {
++		rq = cpu_rq(i);
++
++		sched_queue_init(&rq->queue);
++		rq->prio = IDLE_TASK_SCHED_PRIO;
++		rq->skip = NULL;
++
++		raw_spin_lock_init(&rq->lock);
++		rq->nr_running = rq->nr_uninterruptible = 0;
++		rq->calc_load_active = 0;
++		rq->calc_load_update = jiffies + LOAD_FREQ;
++#ifdef CONFIG_SMP
++		rq->online = false;
++		rq->cpu = i;
++
++#ifdef CONFIG_SCHED_SMT
++		rq->active_balance = 0;
++#endif
++
++#ifdef CONFIG_NO_HZ_COMMON
++		INIT_CSD(&rq->nohz_csd, nohz_csd_func, rq);
++#endif
++		rq->balance_callback = &balance_push_callback;
++#ifdef CONFIG_HOTPLUG_CPU
++		rcuwait_init(&rq->hotplug_wait);
++#endif
++#endif /* CONFIG_SMP */
++		rq->nr_switches = 0;
++
++		hrtick_rq_init(rq);
++		atomic_set(&rq->nr_iowait, 0);
++
++		zalloc_cpumask_var_node(&rq->scratch_mask, GFP_KERNEL, cpu_to_node(i));
++	}
++#ifdef CONFIG_SMP
++	/* Set rq->online for cpu 0 */
++	cpu_rq(0)->online = true;
++#endif
++	/*
++	 * The boot idle thread does lazy MMU switching as well:
++	 */
++	mmgrab(&init_mm);
++	enter_lazy_tlb(&init_mm, current);
++
++	/*
++	 * The idle task doesn't need the kthread struct to function, but it
++	 * is dressed up as a per-CPU kthread and thus needs to play the part
++	 * if we want to avoid special-casing it in code that deals with per-CPU
++	 * kthreads.
++	 */
++	WARN_ON(!set_kthread_struct(current));
++
++	/*
++	 * Make us the idle thread. Technically, schedule() should not be
++	 * called from this thread, however somewhere below it might be,
++	 * but because we are the idle thread, we just pick up running again
++	 * when this runqueue becomes "idle".
++	 */
++	init_idle(current, smp_processor_id());
++
++	calc_load_update = jiffies + LOAD_FREQ;
++
++#ifdef CONFIG_SMP
++	idle_thread_set_boot_cpu();
++	balance_push_set(smp_processor_id(), false);
++
++	sched_init_topology_cpumask_early();
++#endif /* SMP */
++
++	preempt_dynamic_init();
++}
++
++#ifdef CONFIG_DEBUG_ATOMIC_SLEEP
++
++void __might_sleep(const char *file, int line)
++{
++	unsigned int state = get_current_state();
++	/*
++	 * Blocking primitives will set (and therefore destroy) current->state,
++	 * since we will exit with TASK_RUNNING make sure we enter with it,
++	 * otherwise we will destroy state.
++	 */
++	WARN_ONCE(state != TASK_RUNNING && current->task_state_change,
++			"do not call blocking ops when !TASK_RUNNING; "
++			"state=%x set at [<%p>] %pS\n", state,
++			(void *)current->task_state_change,
++			(void *)current->task_state_change);
++
++	__might_resched(file, line, 0);
++}
++EXPORT_SYMBOL(__might_sleep);
++
++static void print_preempt_disable_ip(int preempt_offset, unsigned long ip)
++{
++	if (!IS_ENABLED(CONFIG_DEBUG_PREEMPT))
++		return;
++
++	if (preempt_count() == preempt_offset)
++		return;
++
++	pr_err("Preemption disabled at:");
++	print_ip_sym(KERN_ERR, ip);
++}
++
++static inline bool resched_offsets_ok(unsigned int offsets)
++{
++	unsigned int nested = preempt_count();
++
++	nested += rcu_preempt_depth() << MIGHT_RESCHED_RCU_SHIFT;
++
++	return nested == offsets;
++}
++
++void __might_resched(const char *file, int line, unsigned int offsets)
++{
++	/* Ratelimiting timestamp: */
++	static unsigned long prev_jiffy;
++
++	unsigned long preempt_disable_ip;
++
++	/* WARN_ON_ONCE() by default, no rate limit required: */
++	rcu_sleep_check();
++
++	if ((resched_offsets_ok(offsets) && !irqs_disabled() &&
++	     !is_idle_task(current) && !current->non_block_count) ||
++	    system_state == SYSTEM_BOOTING || system_state > SYSTEM_RUNNING ||
++	    oops_in_progress)
++		return;
++	if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
++		return;
++	prev_jiffy = jiffies;
++
++	/* Save this before calling printk(), since that will clobber it: */
++	preempt_disable_ip = get_preempt_disable_ip(current);
++
++	pr_err("BUG: sleeping function called from invalid context at %s:%d\n",
++	       file, line);
++	pr_err("in_atomic(): %d, irqs_disabled(): %d, non_block: %d, pid: %d, name: %s\n",
++	       in_atomic(), irqs_disabled(), current->non_block_count,
++	       current->pid, current->comm);
++	pr_err("preempt_count: %x, expected: %x\n", preempt_count(),
++	       offsets & MIGHT_RESCHED_PREEMPT_MASK);
++
++	if (IS_ENABLED(CONFIG_PREEMPT_RCU)) {
++		pr_err("RCU nest depth: %d, expected: %u\n",
++		       rcu_preempt_depth(), offsets >> MIGHT_RESCHED_RCU_SHIFT);
++	}
++
++	if (task_stack_end_corrupted(current))
++		pr_emerg("Thread overran stack, or stack corrupted\n");
++
++	debug_show_held_locks(current);
++	if (irqs_disabled())
++		print_irqtrace_events(current);
++
++	print_preempt_disable_ip(offsets & MIGHT_RESCHED_PREEMPT_MASK,
++				 preempt_disable_ip);
++
++	dump_stack();
++	add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++}
++EXPORT_SYMBOL(__might_resched);
++
++void __cant_sleep(const char *file, int line, int preempt_offset)
++{
++	static unsigned long prev_jiffy;
++
++	if (irqs_disabled())
++		return;
++
++	if (!IS_ENABLED(CONFIG_PREEMPT_COUNT))
++		return;
++
++	if (preempt_count() > preempt_offset)
++		return;
++
++	if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
++		return;
++	prev_jiffy = jiffies;
++
++	printk(KERN_ERR "BUG: assuming atomic context at %s:%d\n", file, line);
++	printk(KERN_ERR "in_atomic(): %d, irqs_disabled(): %d, pid: %d, name: %s\n",
++			in_atomic(), irqs_disabled(),
++			current->pid, current->comm);
++
++	debug_show_held_locks(current);
++	dump_stack();
++	add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++}
++EXPORT_SYMBOL_GPL(__cant_sleep);
++
++#ifdef CONFIG_SMP
++void __cant_migrate(const char *file, int line)
++{
++	static unsigned long prev_jiffy;
++
++	if (irqs_disabled())
++		return;
++
++	if (is_migration_disabled(current))
++		return;
++
++	if (!IS_ENABLED(CONFIG_PREEMPT_COUNT))
++		return;
++
++	if (preempt_count() > 0)
++		return;
++
++	if (current->migration_flags & MDF_FORCE_ENABLED)
++		return;
++
++	if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
++		return;
++	prev_jiffy = jiffies;
++
++	pr_err("BUG: assuming non migratable context at %s:%d\n", file, line);
++	pr_err("in_atomic(): %d, irqs_disabled(): %d, migration_disabled() %u pid: %d, name: %s\n",
++	       in_atomic(), irqs_disabled(), is_migration_disabled(current),
++	       current->pid, current->comm);
++
++	debug_show_held_locks(current);
++	dump_stack();
++	add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++}
++EXPORT_SYMBOL_GPL(__cant_migrate);
++#endif
++#endif
++
++#ifdef CONFIG_MAGIC_SYSRQ
++void normalize_rt_tasks(void)
++{
++	struct task_struct *g, *p;
++	struct sched_attr attr = {
++		.sched_policy = SCHED_NORMAL,
++	};
++
++	read_lock(&tasklist_lock);
++	for_each_process_thread(g, p) {
++		/*
++		 * Only normalize user tasks:
++		 */
++		if (p->flags & PF_KTHREAD)
++			continue;
++
++		schedstat_set(p->stats.wait_start,  0);
++		schedstat_set(p->stats.sleep_start, 0);
++		schedstat_set(p->stats.block_start, 0);
++
++		if (!rt_task(p)) {
++			/*
++			 * Renice negative nice level userspace
++			 * tasks back to 0:
++			 */
++			if (task_nice(p) < 0)
++				set_user_nice(p, 0);
++			continue;
++		}
++
++		__sched_setscheduler(p, &attr, false, false);
++	}
++	read_unlock(&tasklist_lock);
++}
++#endif /* CONFIG_MAGIC_SYSRQ */
++
++#if defined(CONFIG_IA64) || defined(CONFIG_KGDB_KDB)
++/*
++ * These functions are only useful for the IA64 MCA handling, or kdb.
++ *
++ * They can only be called when the whole system has been
++ * stopped - every CPU needs to be quiescent, and no scheduling
++ * activity can take place. Using them for anything else would
++ * be a serious bug, and as a result, they aren't even visible
++ * under any other configuration.
++ */
++
++/**
++ * curr_task - return the current task for a given CPU.
++ * @cpu: the processor in question.
++ *
++ * ONLY VALID WHEN THE WHOLE SYSTEM IS STOPPED!
++ *
++ * Return: The current task for @cpu.
++ */
++struct task_struct *curr_task(int cpu)
++{
++	return cpu_curr(cpu);
++}
++
++#endif /* defined(CONFIG_IA64) || defined(CONFIG_KGDB_KDB) */
++
++#ifdef CONFIG_IA64
++/**
++ * ia64_set_curr_task - set the current task for a given CPU.
++ * @cpu: the processor in question.
++ * @p: the task pointer to set.
++ *
++ * Description: This function must only be used when non-maskable interrupts
++ * are serviced on a separate stack.  It allows the architecture to switch the
++ * notion of the current task on a CPU in a non-blocking manner.  This function
++ * must be called with all CPU's synchronised, and interrupts disabled, the
++ * and caller must save the original value of the current task (see
++ * curr_task() above) and restore that value before reenabling interrupts and
++ * re-starting the system.
++ *
++ * ONLY VALID WHEN THE WHOLE SYSTEM IS STOPPED!
++ */
++void ia64_set_curr_task(int cpu, struct task_struct *p)
++{
++	cpu_curr(cpu) = p;
++}
++
++#endif
++
++#ifdef CONFIG_CGROUP_SCHED
++static void sched_free_group(struct task_group *tg)
++{
++	kmem_cache_free(task_group_cache, tg);
++}
++
++static void sched_free_group_rcu(struct rcu_head *rhp)
++{
++	sched_free_group(container_of(rhp, struct task_group, rcu));
++}
++
++static void sched_unregister_group(struct task_group *tg)
++{
++	/*
++	 * We have to wait for yet another RCU grace period to expire, as
++	 * print_cfs_stats() might run concurrently.
++	 */
++	call_rcu(&tg->rcu, sched_free_group_rcu);
++}
++
++/* allocate runqueue etc for a new task group */
++struct task_group *sched_create_group(struct task_group *parent)
++{
++	struct task_group *tg;
++
++	tg = kmem_cache_alloc(task_group_cache, GFP_KERNEL | __GFP_ZERO);
++	if (!tg)
++		return ERR_PTR(-ENOMEM);
++
++	return tg;
++}
++
++void sched_online_group(struct task_group *tg, struct task_group *parent)
++{
++}
++
++/* rcu callback to free various structures associated with a task group */
++static void sched_unregister_group_rcu(struct rcu_head *rhp)
++{
++	/* Now it should be safe to free those cfs_rqs: */
++	sched_unregister_group(container_of(rhp, struct task_group, rcu));
++}
++
++void sched_destroy_group(struct task_group *tg)
++{
++	/* Wait for possible concurrent references to cfs_rqs complete: */
++	call_rcu(&tg->rcu, sched_unregister_group_rcu);
++}
++
++void sched_release_group(struct task_group *tg)
++{
++}
++
++static inline struct task_group *css_tg(struct cgroup_subsys_state *css)
++{
++	return css ? container_of(css, struct task_group, css) : NULL;
++}
++
++static struct cgroup_subsys_state *
++cpu_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
++{
++	struct task_group *parent = css_tg(parent_css);
++	struct task_group *tg;
++
++	if (!parent) {
++		/* This is early initialization for the top cgroup */
++		return &root_task_group.css;
++	}
++
++	tg = sched_create_group(parent);
++	if (IS_ERR(tg))
++		return ERR_PTR(-ENOMEM);
++	return &tg->css;
++}
++
++/* Expose task group only after completing cgroup initialization */
++static int cpu_cgroup_css_online(struct cgroup_subsys_state *css)
++{
++	struct task_group *tg = css_tg(css);
++	struct task_group *parent = css_tg(css->parent);
++
++	if (parent)
++		sched_online_group(tg, parent);
++	return 0;
++}
++
++static void cpu_cgroup_css_released(struct cgroup_subsys_state *css)
++{
++	struct task_group *tg = css_tg(css);
++
++	sched_release_group(tg);
++}
++
++static void cpu_cgroup_css_free(struct cgroup_subsys_state *css)
++{
++	struct task_group *tg = css_tg(css);
++
++	/*
++	 * Relies on the RCU grace period between css_released() and this.
++	 */
++	sched_unregister_group(tg);
++}
++
++#ifdef CONFIG_RT_GROUP_SCHED
++static int cpu_cgroup_can_attach(struct cgroup_taskset *tset)
++{
++	return 0;
++}
++#endif
++
++static void cpu_cgroup_attach(struct cgroup_taskset *tset)
++{
++}
++
++#ifdef CONFIG_FAIR_GROUP_SCHED
++static DEFINE_MUTEX(shares_mutex);
++
++int sched_group_set_shares(struct task_group *tg, unsigned long shares)
++{
++	/*
++	 * We can't change the weight of the root cgroup.
++	 */
++	if (&root_task_group == tg)
++		return -EINVAL;
++
++	shares = clamp(shares, scale_load(MIN_SHARES), scale_load(MAX_SHARES));
++
++	mutex_lock(&shares_mutex);
++	if (tg->shares == shares)
++		goto done;
++
++	tg->shares = shares;
++done:
++	mutex_unlock(&shares_mutex);
++	return 0;
++}
++
++static int cpu_shares_write_u64(struct cgroup_subsys_state *css,
++				struct cftype *cftype, u64 shareval)
++{
++	if (shareval > scale_load_down(ULONG_MAX))
++		shareval = MAX_SHARES;
++	return sched_group_set_shares(css_tg(css), scale_load(shareval));
++}
++
++static u64 cpu_shares_read_u64(struct cgroup_subsys_state *css,
++			       struct cftype *cft)
++{
++	struct task_group *tg = css_tg(css);
++
++	return (u64) scale_load_down(tg->shares);
++}
++#endif
++
++static struct cftype cpu_legacy_files[] = {
++#ifdef CONFIG_FAIR_GROUP_SCHED
++	{
++		.name = "shares",
++		.read_u64 = cpu_shares_read_u64,
++		.write_u64 = cpu_shares_write_u64,
++	},
++#endif
++	{ }	/* Terminate */
++};
++
++
++static struct cftype cpu_files[] = {
++	{ }	/* terminate */
++};
++
++static int cpu_extra_stat_show(struct seq_file *sf,
++			       struct cgroup_subsys_state *css)
++{
++	return 0;
++}
++
++struct cgroup_subsys cpu_cgrp_subsys = {
++	.css_alloc	= cpu_cgroup_css_alloc,
++	.css_online	= cpu_cgroup_css_online,
++	.css_released	= cpu_cgroup_css_released,
++	.css_free	= cpu_cgroup_css_free,
++	.css_extra_stat_show = cpu_extra_stat_show,
++#ifdef CONFIG_RT_GROUP_SCHED
++	.can_attach	= cpu_cgroup_can_attach,
++#endif
++	.attach		= cpu_cgroup_attach,
++	.legacy_cftypes	= cpu_files,
++	.legacy_cftypes	= cpu_legacy_files,
++	.dfl_cftypes	= cpu_files,
++	.early_init	= true,
++	.threaded	= true,
++};
++#endif	/* CONFIG_CGROUP_SCHED */
++
++#undef CREATE_TRACE_POINTS
++
++#ifdef CONFIG_SCHED_MM_CID
++void sched_mm_cid_exit_signals(struct task_struct *t)
++{
++	struct mm_struct *mm = t->mm;
++	unsigned long flags;
++
++	if (!mm)
++		return;
++	local_irq_save(flags);
++	mm_cid_put(mm, t->mm_cid);
++	t->mm_cid = -1;
++	t->mm_cid_active = 0;
++	local_irq_restore(flags);
++}
++
++void sched_mm_cid_before_execve(struct task_struct *t)
++{
++	struct mm_struct *mm = t->mm;
++	unsigned long flags;
++
++	if (!mm)
++		return;
++	local_irq_save(flags);
++	mm_cid_put(mm, t->mm_cid);
++	t->mm_cid = -1;
++	t->mm_cid_active = 0;
++	local_irq_restore(flags);
++}
++
++void sched_mm_cid_after_execve(struct task_struct *t)
++{
++	struct mm_struct *mm = t->mm;
++	unsigned long flags;
++
++	if (!mm)
++		return;
++	local_irq_save(flags);
++	t->mm_cid = mm_cid_get(mm);
++	t->mm_cid_active = 1;
++	local_irq_restore(flags);
++	rseq_set_notify_resume(t);
++}
++
++void sched_mm_cid_fork(struct task_struct *t)
++{
++	WARN_ON_ONCE(!t->mm || t->mm_cid != -1);
++	t->mm_cid_active = 1;
++}
++#endif
+diff --git a/kernel/sched/alt_debug.c b/kernel/sched/alt_debug.c
+new file mode 100644
+index 000000000000..1212a031700e
+--- /dev/null
++++ b/kernel/sched/alt_debug.c
+@@ -0,0 +1,31 @@
++/*
++ * kernel/sched/alt_debug.c
++ *
++ * Print the alt scheduler debugging details
++ *
++ * Author: Alfred Chen
++ * Date  : 2020
++ */
++#include "sched.h"
++
++/*
++ * This allows printing both to /proc/sched_debug and
++ * to the console
++ */
++#define SEQ_printf(m, x...)			\
++ do {						\
++	if (m)					\
++		seq_printf(m, x);		\
++	else					\
++		pr_cont(x);			\
++ } while (0)
++
++void proc_sched_show_task(struct task_struct *p, struct pid_namespace *ns,
++			  struct seq_file *m)
++{
++	SEQ_printf(m, "%s (%d, #threads: %d)\n", p->comm, task_pid_nr_ns(p, ns),
++						get_nr_threads(p));
++}
++
++void proc_sched_set_task(struct task_struct *p)
++{}
+diff --git a/kernel/sched/alt_sched.h b/kernel/sched/alt_sched.h
+new file mode 100644
+index 000000000000..55a15b806e87
+--- /dev/null
++++ b/kernel/sched/alt_sched.h
+@@ -0,0 +1,729 @@
++#ifndef ALT_SCHED_H
++#define ALT_SCHED_H
++
++#include <linux/context_tracking.h>
++#include <linux/profile.h>
++#include <linux/stop_machine.h>
++#include <linux/syscalls.h>
++#include <linux/tick.h>
++
++#include <trace/events/power.h>
++#include <trace/events/sched.h>
++
++#include "../workqueue_internal.h"
++
++#include "cpupri.h"
++
++#ifdef CONFIG_SCHED_BMQ
++/* bits:
++ * RT(0-99), (Low prio adj range, nice width, high prio adj range) / 2, cpu idle task */
++#define SCHED_LEVELS	(MAX_RT_PRIO + NICE_WIDTH / 2 + MAX_PRIORITY_ADJ + 1)
++#endif
++
++#ifdef CONFIG_SCHED_PDS
++/* bits: RT(0-24), reserved(25-31), SCHED_NORMAL_PRIO_NUM(32), cpu idle task(1) */
++#define SCHED_LEVELS	(64 + 1)
++#endif /* CONFIG_SCHED_PDS */
++
++#define IDLE_TASK_SCHED_PRIO	(SCHED_LEVELS - 1)
++
++#ifdef CONFIG_SCHED_DEBUG
++# define SCHED_WARN_ON(x)	WARN_ONCE(x, #x)
++extern void resched_latency_warn(int cpu, u64 latency);
++#else
++# define SCHED_WARN_ON(x)	({ (void)(x), 0; })
++static inline void resched_latency_warn(int cpu, u64 latency) {}
++#endif
++
++/*
++ * Increase resolution of nice-level calculations for 64-bit architectures.
++ * The extra resolution improves shares distribution and load balancing of
++ * low-weight task groups (eg. nice +19 on an autogroup), deeper taskgroup
++ * hierarchies, especially on larger systems. This is not a user-visible change
++ * and does not change the user-interface for setting shares/weights.
++ *
++ * We increase resolution only if we have enough bits to allow this increased
++ * resolution (i.e. 64-bit). The costs for increasing resolution when 32-bit
++ * are pretty high and the returns do not justify the increased costs.
++ *
++ * Really only required when CONFIG_FAIR_GROUP_SCHED=y is also set, but to
++ * increase coverage and consistency always enable it on 64-bit platforms.
++ */
++#ifdef CONFIG_64BIT
++# define NICE_0_LOAD_SHIFT	(SCHED_FIXEDPOINT_SHIFT + SCHED_FIXEDPOINT_SHIFT)
++# define scale_load(w)		((w) << SCHED_FIXEDPOINT_SHIFT)
++# define scale_load_down(w) \
++({ \
++	unsigned long __w = (w); \
++	if (__w) \
++		__w = max(2UL, __w >> SCHED_FIXEDPOINT_SHIFT); \
++	__w; \
++})
++#else
++# define NICE_0_LOAD_SHIFT	(SCHED_FIXEDPOINT_SHIFT)
++# define scale_load(w)		(w)
++# define scale_load_down(w)	(w)
++#endif
++
++#ifdef CONFIG_FAIR_GROUP_SCHED
++#define ROOT_TASK_GROUP_LOAD	NICE_0_LOAD
++
++/*
++ * A weight of 0 or 1 can cause arithmetics problems.
++ * A weight of a cfs_rq is the sum of weights of which entities
++ * are queued on this cfs_rq, so a weight of a entity should not be
++ * too large, so as the shares value of a task group.
++ * (The default weight is 1024 - so there's no practical
++ *  limitation from this.)
++ */
++#define MIN_SHARES		(1UL <<  1)
++#define MAX_SHARES		(1UL << 18)
++#endif
++
++/*
++ * Tunables that become constants when CONFIG_SCHED_DEBUG is off:
++ */
++#ifdef CONFIG_SCHED_DEBUG
++# define const_debug __read_mostly
++#else
++# define const_debug const
++#endif
++
++/* task_struct::on_rq states: */
++#define TASK_ON_RQ_QUEUED	1
++#define TASK_ON_RQ_MIGRATING	2
++
++static inline int task_on_rq_queued(struct task_struct *p)
++{
++	return p->on_rq == TASK_ON_RQ_QUEUED;
++}
++
++static inline int task_on_rq_migrating(struct task_struct *p)
++{
++	return READ_ONCE(p->on_rq) == TASK_ON_RQ_MIGRATING;
++}
++
++/*
++ * wake flags
++ */
++#define WF_SYNC		0x01		/* waker goes to sleep after wakeup */
++#define WF_FORK		0x02		/* child wakeup after fork */
++#define WF_MIGRATED	0x04		/* internal use, task got migrated */
++
++#define SCHED_QUEUE_BITS	(SCHED_LEVELS - 1)
++
++struct sched_queue {
++	DECLARE_BITMAP(bitmap, SCHED_QUEUE_BITS);
++	struct list_head heads[SCHED_LEVELS];
++};
++
++struct rq;
++struct cpuidle_state;
++
++struct balance_callback {
++	struct balance_callback *next;
++	void (*func)(struct rq *rq);
++};
++
++/*
++ * This is the main, per-CPU runqueue data structure.
++ * This data should only be modified by the local cpu.
++ */
++struct rq {
++	/* runqueue lock: */
++	raw_spinlock_t lock;
++
++	struct task_struct __rcu *curr;
++	struct task_struct *idle, *stop, *skip;
++	struct mm_struct *prev_mm;
++
++	struct sched_queue	queue;
++#ifdef CONFIG_SCHED_PDS
++	u64			time_edge;
++#endif
++	unsigned long		prio;
++
++	/* switch count */
++	u64 nr_switches;
++
++	atomic_t nr_iowait;
++
++#ifdef CONFIG_SCHED_DEBUG
++	u64 last_seen_need_resched_ns;
++	int ticks_without_resched;
++#endif
++
++#ifdef CONFIG_MEMBARRIER
++	int membarrier_state;
++#endif
++
++#ifdef CONFIG_SMP
++	int cpu;		/* cpu of this runqueue */
++	bool online;
++
++	unsigned int		ttwu_pending;
++	unsigned char		nohz_idle_balance;
++	unsigned char		idle_balance;
++
++#ifdef CONFIG_HAVE_SCHED_AVG_IRQ
++	struct sched_avg	avg_irq;
++#endif
++
++#ifdef CONFIG_SCHED_SMT
++	int active_balance;
++	struct cpu_stop_work	active_balance_work;
++#endif
++	struct balance_callback	*balance_callback;
++#ifdef CONFIG_HOTPLUG_CPU
++	struct rcuwait		hotplug_wait;
++#endif
++	unsigned int		nr_pinned;
++
++#endif /* CONFIG_SMP */
++#ifdef CONFIG_IRQ_TIME_ACCOUNTING
++	u64 prev_irq_time;
++#endif /* CONFIG_IRQ_TIME_ACCOUNTING */
++#ifdef CONFIG_PARAVIRT
++	u64 prev_steal_time;
++#endif /* CONFIG_PARAVIRT */
++#ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
++	u64 prev_steal_time_rq;
++#endif /* CONFIG_PARAVIRT_TIME_ACCOUNTING */
++
++	/* For genenal cpu load util */
++	s32 load_history;
++	u64 load_block;
++	u64 load_stamp;
++
++	/* calc_load related fields */
++	unsigned long calc_load_update;
++	long calc_load_active;
++
++	u64 clock, last_tick;
++	u64 last_ts_switch;
++	u64 clock_task;
++
++	unsigned int  nr_running;
++	unsigned long nr_uninterruptible;
++
++#ifdef CONFIG_SCHED_HRTICK
++#ifdef CONFIG_SMP
++	call_single_data_t hrtick_csd;
++#endif
++	struct hrtimer		hrtick_timer;
++	ktime_t			hrtick_time;
++#endif
++
++#ifdef CONFIG_SCHEDSTATS
++
++	/* latency stats */
++	struct sched_info rq_sched_info;
++	unsigned long long rq_cpu_time;
++	/* could above be rq->cfs_rq.exec_clock + rq->rt_rq.rt_runtime ? */
++
++	/* sys_sched_yield() stats */
++	unsigned int yld_count;
++
++	/* schedule() stats */
++	unsigned int sched_switch;
++	unsigned int sched_count;
++	unsigned int sched_goidle;
++
++	/* try_to_wake_up() stats */
++	unsigned int ttwu_count;
++	unsigned int ttwu_local;
++#endif /* CONFIG_SCHEDSTATS */
++
++#ifdef CONFIG_CPU_IDLE
++	/* Must be inspected within a rcu lock section */
++	struct cpuidle_state *idle_state;
++#endif
++
++#ifdef CONFIG_NO_HZ_COMMON
++#ifdef CONFIG_SMP
++	call_single_data_t	nohz_csd;
++#endif
++	atomic_t		nohz_flags;
++#endif /* CONFIG_NO_HZ_COMMON */
++
++	/* Scratch cpumask to be temporarily used under rq_lock */
++	cpumask_var_t		scratch_mask;
++};
++
++extern unsigned long rq_load_util(struct rq *rq, unsigned long max);
++
++extern unsigned long calc_load_update;
++extern atomic_long_t calc_load_tasks;
++
++extern void calc_global_load_tick(struct rq *this_rq);
++extern long calc_load_fold_active(struct rq *this_rq, long adjust);
++
++DECLARE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
++#define cpu_rq(cpu)		(&per_cpu(runqueues, (cpu)))
++#define this_rq()		this_cpu_ptr(&runqueues)
++#define task_rq(p)		cpu_rq(task_cpu(p))
++#define cpu_curr(cpu)		(cpu_rq(cpu)->curr)
++#define raw_rq()		raw_cpu_ptr(&runqueues)
++
++#ifdef CONFIG_SMP
++#if defined(CONFIG_SCHED_DEBUG) && defined(CONFIG_SYSCTL)
++void register_sched_domain_sysctl(void);
++void unregister_sched_domain_sysctl(void);
++#else
++static inline void register_sched_domain_sysctl(void)
++{
++}
++static inline void unregister_sched_domain_sysctl(void)
++{
++}
++#endif
++
++extern bool sched_smp_initialized;
++
++enum {
++	ITSELF_LEVEL_SPACE_HOLDER,
++#ifdef CONFIG_SCHED_SMT
++	SMT_LEVEL_SPACE_HOLDER,
++#endif
++	COREGROUP_LEVEL_SPACE_HOLDER,
++	CORE_LEVEL_SPACE_HOLDER,
++	OTHER_LEVEL_SPACE_HOLDER,
++	NR_CPU_AFFINITY_LEVELS
++};
++
++DECLARE_PER_CPU_ALIGNED(cpumask_t [NR_CPU_AFFINITY_LEVELS], sched_cpu_topo_masks);
++
++static inline int
++__best_mask_cpu(const cpumask_t *cpumask, const cpumask_t *mask)
++{
++	int cpu;
++
++	while ((cpu = cpumask_any_and(cpumask, mask)) >= nr_cpu_ids)
++		mask++;
++
++	return cpu;
++}
++
++static inline int best_mask_cpu(int cpu, const cpumask_t *mask)
++{
++	return __best_mask_cpu(mask, per_cpu(sched_cpu_topo_masks, cpu));
++}
++
++extern void flush_smp_call_function_queue(void);
++
++#else  /* !CONFIG_SMP */
++static inline void flush_smp_call_function_queue(void) { }
++#endif
++
++#ifndef arch_scale_freq_tick
++static __always_inline
++void arch_scale_freq_tick(void)
++{
++}
++#endif
++
++#ifndef arch_scale_freq_capacity
++static __always_inline
++unsigned long arch_scale_freq_capacity(int cpu)
++{
++	return SCHED_CAPACITY_SCALE;
++}
++#endif
++
++static inline u64 __rq_clock_broken(struct rq *rq)
++{
++	return READ_ONCE(rq->clock);
++}
++
++static inline u64 rq_clock(struct rq *rq)
++{
++	/*
++	 * Relax lockdep_assert_held() checking as in VRQ, call to
++	 * sched_info_xxxx() may not held rq->lock
++	 * lockdep_assert_held(&rq->lock);
++	 */
++	return rq->clock;
++}
++
++static inline u64 rq_clock_task(struct rq *rq)
++{
++	/*
++	 * Relax lockdep_assert_held() checking as in VRQ, call to
++	 * sched_info_xxxx() may not held rq->lock
++	 * lockdep_assert_held(&rq->lock);
++	 */
++	return rq->clock_task;
++}
++
++/*
++ * {de,en}queue flags:
++ *
++ * DEQUEUE_SLEEP  - task is no longer runnable
++ * ENQUEUE_WAKEUP - task just became runnable
++ *
++ */
++
++#define DEQUEUE_SLEEP		0x01
++
++#define ENQUEUE_WAKEUP		0x01
++
++
++/*
++ * Below are scheduler API which using in other kernel code
++ * It use the dummy rq_flags
++ * ToDo : BMQ need to support these APIs for compatibility with mainline
++ * scheduler code.
++ */
++struct rq_flags {
++	unsigned long flags;
++};
++
++struct rq *__task_rq_lock(struct task_struct *p, struct rq_flags *rf)
++	__acquires(rq->lock);
++
++struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf)
++	__acquires(p->pi_lock)
++	__acquires(rq->lock);
++
++static inline void __task_rq_unlock(struct rq *rq, struct rq_flags *rf)
++	__releases(rq->lock)
++{
++	raw_spin_unlock(&rq->lock);
++}
++
++static inline void
++task_rq_unlock(struct rq *rq, struct task_struct *p, struct rq_flags *rf)
++	__releases(rq->lock)
++	__releases(p->pi_lock)
++{
++	raw_spin_unlock(&rq->lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, rf->flags);
++}
++
++static inline void
++rq_lock(struct rq *rq, struct rq_flags *rf)
++	__acquires(rq->lock)
++{
++	raw_spin_lock(&rq->lock);
++}
++
++static inline void
++rq_unlock(struct rq *rq, struct rq_flags *rf)
++	__releases(rq->lock)
++{
++	raw_spin_unlock(&rq->lock);
++}
++
++static inline void
++rq_lock_irq(struct rq *rq, struct rq_flags *rf)
++	__acquires(rq->lock)
++{
++	raw_spin_lock_irq(&rq->lock);
++}
++
++static inline void
++rq_unlock_irq(struct rq *rq, struct rq_flags *rf)
++	__releases(rq->lock)
++{
++	raw_spin_unlock_irq(&rq->lock);
++}
++
++static inline struct rq *
++this_rq_lock_irq(struct rq_flags *rf)
++	__acquires(rq->lock)
++{
++	struct rq *rq;
++
++	local_irq_disable();
++	rq = this_rq();
++	raw_spin_lock(&rq->lock);
++
++	return rq;
++}
++
++static inline raw_spinlock_t *__rq_lockp(struct rq *rq)
++{
++	return &rq->lock;
++}
++
++static inline raw_spinlock_t *rq_lockp(struct rq *rq)
++{
++	return __rq_lockp(rq);
++}
++
++static inline void lockdep_assert_rq_held(struct rq *rq)
++{
++	lockdep_assert_held(__rq_lockp(rq));
++}
++
++extern void raw_spin_rq_lock_nested(struct rq *rq, int subclass);
++extern void raw_spin_rq_unlock(struct rq *rq);
++
++static inline void raw_spin_rq_lock(struct rq *rq)
++{
++	raw_spin_rq_lock_nested(rq, 0);
++}
++
++static inline void raw_spin_rq_lock_irq(struct rq *rq)
++{
++	local_irq_disable();
++	raw_spin_rq_lock(rq);
++}
++
++static inline void raw_spin_rq_unlock_irq(struct rq *rq)
++{
++	raw_spin_rq_unlock(rq);
++	local_irq_enable();
++}
++
++static inline int task_current(struct rq *rq, struct task_struct *p)
++{
++	return rq->curr == p;
++}
++
++static inline bool task_on_cpu(struct task_struct *p)
++{
++	return p->on_cpu;
++}
++
++extern int task_running_nice(struct task_struct *p);
++
++extern struct static_key_false sched_schedstats;
++
++#ifdef CONFIG_CPU_IDLE
++static inline void idle_set_state(struct rq *rq,
++				  struct cpuidle_state *idle_state)
++{
++	rq->idle_state = idle_state;
++}
++
++static inline struct cpuidle_state *idle_get_state(struct rq *rq)
++{
++	WARN_ON(!rcu_read_lock_held());
++	return rq->idle_state;
++}
++#else
++static inline void idle_set_state(struct rq *rq,
++				  struct cpuidle_state *idle_state)
++{
++}
++
++static inline struct cpuidle_state *idle_get_state(struct rq *rq)
++{
++	return NULL;
++}
++#endif
++
++static inline int cpu_of(const struct rq *rq)
++{
++#ifdef CONFIG_SMP
++	return rq->cpu;
++#else
++	return 0;
++#endif
++}
++
++#include "stats.h"
++
++#ifdef CONFIG_NO_HZ_COMMON
++#define NOHZ_BALANCE_KICK_BIT	0
++#define NOHZ_STATS_KICK_BIT	1
++
++#define NOHZ_BALANCE_KICK	BIT(NOHZ_BALANCE_KICK_BIT)
++#define NOHZ_STATS_KICK		BIT(NOHZ_STATS_KICK_BIT)
++
++#define NOHZ_KICK_MASK	(NOHZ_BALANCE_KICK | NOHZ_STATS_KICK)
++
++#define nohz_flags(cpu)	(&cpu_rq(cpu)->nohz_flags)
++
++/* TODO: needed?
++extern void nohz_balance_exit_idle(struct rq *rq);
++#else
++static inline void nohz_balance_exit_idle(struct rq *rq) { }
++*/
++#endif
++
++#ifdef CONFIG_IRQ_TIME_ACCOUNTING
++struct irqtime {
++	u64			total;
++	u64			tick_delta;
++	u64			irq_start_time;
++	struct u64_stats_sync	sync;
++};
++
++DECLARE_PER_CPU(struct irqtime, cpu_irqtime);
++
++/*
++ * Returns the irqtime minus the softirq time computed by ksoftirqd.
++ * Otherwise ksoftirqd's sum_exec_runtime is substracted its own runtime
++ * and never move forward.
++ */
++static inline u64 irq_time_read(int cpu)
++{
++	struct irqtime *irqtime = &per_cpu(cpu_irqtime, cpu);
++	unsigned int seq;
++	u64 total;
++
++	do {
++		seq = __u64_stats_fetch_begin(&irqtime->sync);
++		total = irqtime->total;
++	} while (__u64_stats_fetch_retry(&irqtime->sync, seq));
++
++	return total;
++}
++#endif /* CONFIG_IRQ_TIME_ACCOUNTING */
++
++#ifdef CONFIG_CPU_FREQ
++DECLARE_PER_CPU(struct update_util_data __rcu *, cpufreq_update_util_data);
++#endif /* CONFIG_CPU_FREQ */
++
++#ifdef CONFIG_NO_HZ_FULL
++extern int __init sched_tick_offload_init(void);
++#else
++static inline int sched_tick_offload_init(void) { return 0; }
++#endif
++
++#ifdef arch_scale_freq_capacity
++#ifndef arch_scale_freq_invariant
++#define arch_scale_freq_invariant()	(true)
++#endif
++#else /* arch_scale_freq_capacity */
++#define arch_scale_freq_invariant()	(false)
++#endif
++
++extern void schedule_idle(void);
++
++#define cap_scale(v, s) ((v)*(s) >> SCHED_CAPACITY_SHIFT)
++
++/*
++ * !! For sched_setattr_nocheck() (kernel) only !!
++ *
++ * This is actually gross. :(
++ *
++ * It is used to make schedutil kworker(s) higher priority than SCHED_DEADLINE
++ * tasks, but still be able to sleep. We need this on platforms that cannot
++ * atomically change clock frequency. Remove once fast switching will be
++ * available on such platforms.
++ *
++ * SUGOV stands for SchedUtil GOVernor.
++ */
++#define SCHED_FLAG_SUGOV	0x10000000
++
++#ifdef CONFIG_MEMBARRIER
++/*
++ * The scheduler provides memory barriers required by membarrier between:
++ * - prior user-space memory accesses and store to rq->membarrier_state,
++ * - store to rq->membarrier_state and following user-space memory accesses.
++ * In the same way it provides those guarantees around store to rq->curr.
++ */
++static inline void membarrier_switch_mm(struct rq *rq,
++					struct mm_struct *prev_mm,
++					struct mm_struct *next_mm)
++{
++	int membarrier_state;
++
++	if (prev_mm == next_mm)
++		return;
++
++	membarrier_state = atomic_read(&next_mm->membarrier_state);
++	if (READ_ONCE(rq->membarrier_state) == membarrier_state)
++		return;
++
++	WRITE_ONCE(rq->membarrier_state, membarrier_state);
++}
++#else
++static inline void membarrier_switch_mm(struct rq *rq,
++					struct mm_struct *prev_mm,
++					struct mm_struct *next_mm)
++{
++}
++#endif
++
++#ifdef CONFIG_NUMA
++extern int sched_numa_find_closest(const struct cpumask *cpus, int cpu);
++#else
++static inline int sched_numa_find_closest(const struct cpumask *cpus, int cpu)
++{
++	return nr_cpu_ids;
++}
++#endif
++
++extern void swake_up_all_locked(struct swait_queue_head *q);
++extern void __prepare_to_swait(struct swait_queue_head *q, struct swait_queue *wait);
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++extern int preempt_dynamic_mode;
++extern int sched_dynamic_mode(const char *str);
++extern void sched_dynamic_update(int mode);
++#endif
++
++static inline void nohz_run_idle_balance(int cpu) { }
++
++static inline
++unsigned long uclamp_rq_util_with(struct rq *rq, unsigned long util,
++				  struct task_struct *p)
++{
++	return util;
++}
++
++static inline bool uclamp_rq_is_capped(struct rq *rq) { return false; }
++
++#ifdef CONFIG_SCHED_MM_CID
++static inline int __mm_cid_get(struct mm_struct *mm)
++{
++	struct cpumask *cpumask;
++	int cid;
++
++	cpumask = mm_cidmask(mm);
++	cid = cpumask_first_zero(cpumask);
++	if (cid >= nr_cpu_ids)
++		return -1;
++	__cpumask_set_cpu(cid, cpumask);
++	return cid;
++}
++
++static inline void mm_cid_put(struct mm_struct *mm, int cid)
++{
++	lockdep_assert_irqs_disabled();
++	if (cid < 0)
++		return;
++	raw_spin_lock(&mm->cid_lock);
++	__cpumask_clear_cpu(cid, mm_cidmask(mm));
++	raw_spin_unlock(&mm->cid_lock);
++}
++
++static inline int mm_cid_get(struct mm_struct *mm)
++{
++	int ret;
++
++	lockdep_assert_irqs_disabled();
++	raw_spin_lock(&mm->cid_lock);
++	ret = __mm_cid_get(mm);
++	raw_spin_unlock(&mm->cid_lock);
++	return ret;
++}
++
++static inline void switch_mm_cid(struct task_struct *prev, struct task_struct *next)
++{
++	if (prev->mm_cid_active) {
++		if (next->mm_cid_active && next->mm == prev->mm) {
++			/*
++			 * Context switch between threads in same mm, hand over
++			 * the mm_cid from prev to next.
++			 */
++			next->mm_cid = prev->mm_cid;
++			prev->mm_cid = -1;
++			return;
++		}
++		mm_cid_put(prev->mm, prev->mm_cid);
++		prev->mm_cid = -1;
++	}
++	if (next->mm_cid_active)
++		next->mm_cid = mm_cid_get(next->mm);
++}
++
++#else
++static inline void switch_mm_cid(struct task_struct *prev, struct task_struct *next) { }
++#endif
++
++#endif /* ALT_SCHED_H */
+diff --git a/kernel/sched/bmq.h b/kernel/sched/bmq.h
+new file mode 100644
+index 000000000000..f29b8f3aa786
+--- /dev/null
++++ b/kernel/sched/bmq.h
+@@ -0,0 +1,110 @@
++#define ALT_SCHED_NAME "BMQ"
++
++/*
++ * BMQ only routines
++ */
++#define rq_switch_time(rq)	((rq)->clock - (rq)->last_ts_switch)
++#define boost_threshold(p)	(sched_timeslice_ns >>\
++				 (15 - MAX_PRIORITY_ADJ -  (p)->boost_prio))
++
++static inline void boost_task(struct task_struct *p)
++{
++	int limit;
++
++	switch (p->policy) {
++	case SCHED_NORMAL:
++		limit = -MAX_PRIORITY_ADJ;
++		break;
++	case SCHED_BATCH:
++	case SCHED_IDLE:
++		limit = 0;
++		break;
++	default:
++		return;
++	}
++
++	if (p->boost_prio > limit)
++		p->boost_prio--;
++}
++
++static inline void deboost_task(struct task_struct *p)
++{
++	if (p->boost_prio < MAX_PRIORITY_ADJ)
++		p->boost_prio++;
++}
++
++/*
++ * Common interfaces
++ */
++static inline void sched_timeslice_imp(const int timeslice_ms) {}
++
++static inline int
++task_sched_prio_normal(const struct task_struct *p, const struct rq *rq)
++{
++	return p->prio + p->boost_prio - MAX_RT_PRIO;
++}
++
++static inline int task_sched_prio(const struct task_struct *p)
++{
++	return (p->prio < MAX_RT_PRIO)? p->prio : MAX_RT_PRIO / 2 + (p->prio + p->boost_prio) / 2;
++}
++
++static inline int
++task_sched_prio_idx(const struct task_struct *p, const struct rq *rq)
++{
++	return task_sched_prio(p);
++}
++
++static inline int sched_prio2idx(int prio, struct rq *rq)
++{
++	return prio;
++}
++
++static inline int sched_idx2prio(int idx, struct rq *rq)
++{
++	return idx;
++}
++
++static inline void time_slice_expired(struct task_struct *p, struct rq *rq)
++{
++	p->time_slice = sched_timeslice_ns;
++
++	if (SCHED_FIFO != p->policy && task_on_rq_queued(p)) {
++		if (SCHED_RR != p->policy)
++			deboost_task(p);
++		requeue_task(p, rq, task_sched_prio_idx(p, rq));
++	}
++}
++
++static inline void sched_task_sanity_check(struct task_struct *p, struct rq *rq) {}
++
++inline int task_running_nice(struct task_struct *p)
++{
++	return (p->prio + p->boost_prio > DEFAULT_PRIO + MAX_PRIORITY_ADJ);
++}
++
++static void sched_task_fork(struct task_struct *p, struct rq *rq)
++{
++	p->boost_prio = MAX_PRIORITY_ADJ;
++}
++
++static inline void do_sched_yield_type_1(struct task_struct *p, struct rq *rq)
++{
++	p->boost_prio = MAX_PRIORITY_ADJ;
++}
++
++#ifdef CONFIG_SMP
++static inline void sched_task_ttwu(struct task_struct *p)
++{
++	if(this_rq()->clock_task - p->last_ran > sched_timeslice_ns)
++		boost_task(p);
++}
++#endif
++
++static inline void sched_task_deactivate(struct task_struct *p, struct rq *rq)
++{
++	if (rq_switch_time(rq) < boost_threshold(p))
++		boost_task(p);
++}
++
++static inline void update_rq_time_edge(struct rq *rq) {}
+diff --git a/kernel/sched/build_policy.c b/kernel/sched/build_policy.c
+index d9dc9ab3773f..71a25540d65e 100644
+--- a/kernel/sched/build_policy.c
++++ b/kernel/sched/build_policy.c
+@@ -42,13 +42,19 @@
+ 
+ #include "idle.c"
+ 
++#ifndef CONFIG_SCHED_ALT
+ #include "rt.c"
++#endif
+ 
+ #ifdef CONFIG_SMP
++#ifndef CONFIG_SCHED_ALT
+ # include "cpudeadline.c"
++#endif
+ # include "pelt.c"
+ #endif
+ 
+ #include "cputime.c"
+-#include "deadline.c"
+ 
++#ifndef CONFIG_SCHED_ALT
++#include "deadline.c"
++#endif
+diff --git a/kernel/sched/build_utility.c b/kernel/sched/build_utility.c
+index 99bdd96f454f..23f80a86d2d7 100644
+--- a/kernel/sched/build_utility.c
++++ b/kernel/sched/build_utility.c
+@@ -85,7 +85,9 @@
+ 
+ #ifdef CONFIG_SMP
+ # include "cpupri.c"
++#ifndef CONFIG_SCHED_ALT
+ # include "stop_task.c"
++#endif
+ # include "topology.c"
+ #endif
+ 
+diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
+index e3211455b203..87f7a4f732c8 100644
+--- a/kernel/sched/cpufreq_schedutil.c
++++ b/kernel/sched/cpufreq_schedutil.c
+@@ -157,9 +157,14 @@ static void sugov_get_util(struct sugov_cpu *sg_cpu)
+ {
+ 	struct rq *rq = cpu_rq(sg_cpu->cpu);
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	sg_cpu->bw_dl = cpu_bw_dl(rq);
+ 	sg_cpu->util = effective_cpu_util(sg_cpu->cpu, cpu_util_cfs(sg_cpu->cpu),
+ 					  FREQUENCY_UTIL, NULL);
++#else
++	sg_cpu->bw_dl = 0;
++	sg_cpu->util = rq_load_util(rq, arch_scale_cpu_capacity(sg_cpu->cpu));
++#endif /* CONFIG_SCHED_ALT */
+ }
+ 
+ /**
+@@ -305,8 +310,10 @@ static inline bool sugov_cpu_is_busy(struct sugov_cpu *sg_cpu) { return false; }
+  */
+ static inline void ignore_dl_rate_limit(struct sugov_cpu *sg_cpu)
+ {
++#ifndef CONFIG_SCHED_ALT
+ 	if (cpu_bw_dl(cpu_rq(sg_cpu->cpu)) > sg_cpu->bw_dl)
+ 		sg_cpu->sg_policy->limits_changed = true;
++#endif
+ }
+ 
+ static inline bool sugov_update_single_common(struct sugov_cpu *sg_cpu,
+@@ -609,6 +616,7 @@ static int sugov_kthread_create(struct sugov_policy *sg_policy)
+ 	}
+ 
+ 	ret = sched_setattr_nocheck(thread, &attr);
++
+ 	if (ret) {
+ 		kthread_stop(thread);
+ 		pr_warn("%s: failed to set SCHED_DEADLINE\n", __func__);
+@@ -841,7 +849,9 @@ cpufreq_governor_init(schedutil_gov);
+ #ifdef CONFIG_ENERGY_MODEL
+ static void rebuild_sd_workfn(struct work_struct *work)
+ {
++#ifndef CONFIG_SCHED_ALT
+ 	rebuild_sched_domains_energy();
++#endif /* CONFIG_SCHED_ALT */
+ }
+ static DECLARE_WORK(rebuild_sd_work, rebuild_sd_workfn);
+ 
+diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
+index af7952f12e6c..6461cbbb734d 100644
+--- a/kernel/sched/cputime.c
++++ b/kernel/sched/cputime.c
+@@ -126,7 +126,7 @@ void account_user_time(struct task_struct *p, u64 cputime)
+ 	p->utime += cputime;
+ 	account_group_user_time(p, cputime);
+ 
+-	index = (task_nice(p) > 0) ? CPUTIME_NICE : CPUTIME_USER;
++	index = task_running_nice(p) ? CPUTIME_NICE : CPUTIME_USER;
+ 
+ 	/* Add user time to cpustat. */
+ 	task_group_account_field(p, index, cputime);
+@@ -150,7 +150,7 @@ void account_guest_time(struct task_struct *p, u64 cputime)
+ 	p->gtime += cputime;
+ 
+ 	/* Add guest time to cpustat. */
+-	if (task_nice(p) > 0) {
++	if (task_running_nice(p)) {
+ 		task_group_account_field(p, CPUTIME_NICE, cputime);
+ 		cpustat[CPUTIME_GUEST_NICE] += cputime;
+ 	} else {
+@@ -288,7 +288,7 @@ static inline u64 account_other_time(u64 max)
+ #ifdef CONFIG_64BIT
+ static inline u64 read_sum_exec_runtime(struct task_struct *t)
+ {
+-	return t->se.sum_exec_runtime;
++	return tsk_seruntime(t);
+ }
+ #else
+ static u64 read_sum_exec_runtime(struct task_struct *t)
+@@ -298,7 +298,7 @@ static u64 read_sum_exec_runtime(struct task_struct *t)
+ 	struct rq *rq;
+ 
+ 	rq = task_rq_lock(t, &rf);
+-	ns = t->se.sum_exec_runtime;
++	ns = tsk_seruntime(t);
+ 	task_rq_unlock(rq, t, &rf);
+ 
+ 	return ns;
+@@ -630,7 +630,7 @@ void cputime_adjust(struct task_cputime *curr, struct prev_cputime *prev,
+ void task_cputime_adjusted(struct task_struct *p, u64 *ut, u64 *st)
+ {
+ 	struct task_cputime cputime = {
+-		.sum_exec_runtime = p->se.sum_exec_runtime,
++		.sum_exec_runtime = tsk_seruntime(p),
+ 	};
+ 
+ 	if (task_cputime(p, &cputime.utime, &cputime.stime))
+diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
+index 1637b65ba07a..033c6deeb515 100644
+--- a/kernel/sched/debug.c
++++ b/kernel/sched/debug.c
+@@ -7,6 +7,7 @@
+  * Copyright(C) 2007, Red Hat, Inc., Ingo Molnar
+  */
+ 
++#ifndef CONFIG_SCHED_ALT
+ /*
+  * This allows printing both to /proc/sched_debug and
+  * to the console
+@@ -215,6 +216,7 @@ static const struct file_operations sched_scaling_fops = {
+ };
+ 
+ #endif /* SMP */
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ #ifdef CONFIG_PREEMPT_DYNAMIC
+ 
+@@ -278,6 +280,7 @@ static const struct file_operations sched_dynamic_fops = {
+ 
+ #endif /* CONFIG_PREEMPT_DYNAMIC */
+ 
++#ifndef CONFIG_SCHED_ALT
+ __read_mostly bool sched_debug_verbose;
+ 
+ static const struct seq_operations sched_debug_sops;
+@@ -293,6 +296,7 @@ static const struct file_operations sched_debug_fops = {
+ 	.llseek		= seq_lseek,
+ 	.release	= seq_release,
+ };
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ static struct dentry *debugfs_sched;
+ 
+@@ -302,12 +306,15 @@ static __init int sched_init_debug(void)
+ 
+ 	debugfs_sched = debugfs_create_dir("sched", NULL);
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	debugfs_create_file("features", 0644, debugfs_sched, NULL, &sched_feat_fops);
+ 	debugfs_create_bool("verbose", 0644, debugfs_sched, &sched_debug_verbose);
++#endif /* !CONFIG_SCHED_ALT */
+ #ifdef CONFIG_PREEMPT_DYNAMIC
+ 	debugfs_create_file("preempt", 0644, debugfs_sched, NULL, &sched_dynamic_fops);
+ #endif
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	debugfs_create_u32("latency_ns", 0644, debugfs_sched, &sysctl_sched_latency);
+ 	debugfs_create_u32("min_granularity_ns", 0644, debugfs_sched, &sysctl_sched_min_granularity);
+ 	debugfs_create_u32("idle_min_granularity_ns", 0644, debugfs_sched, &sysctl_sched_idle_min_granularity);
+@@ -337,11 +344,13 @@ static __init int sched_init_debug(void)
+ #endif
+ 
+ 	debugfs_create_file("debug", 0444, debugfs_sched, NULL, &sched_debug_fops);
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ 	return 0;
+ }
+ late_initcall(sched_init_debug);
+ 
++#ifndef CONFIG_SCHED_ALT
+ #ifdef CONFIG_SMP
+ 
+ static cpumask_var_t		sd_sysctl_cpus;
+@@ -1068,6 +1077,7 @@ void proc_sched_set_task(struct task_struct *p)
+ 	memset(&p->stats, 0, sizeof(p->stats));
+ #endif
+ }
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ void resched_latency_warn(int cpu, u64 latency)
+ {
+diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
+index e9ef66be2870..4fff3f75a779 100644
+--- a/kernel/sched/idle.c
++++ b/kernel/sched/idle.c
+@@ -379,6 +379,7 @@ void cpu_startup_entry(enum cpuhp_state state)
+ 		do_idle();
+ }
+ 
++#ifndef CONFIG_SCHED_ALT
+ /*
+  * idle-task scheduling class.
+  */
+@@ -500,3 +501,4 @@ DEFINE_SCHED_CLASS(idle) = {
+ 	.switched_to		= switched_to_idle,
+ 	.update_curr		= update_curr_idle,
+ };
++#endif
+diff --git a/kernel/sched/pds.h b/kernel/sched/pds.h
+new file mode 100644
+index 000000000000..15cc4887efed
+--- /dev/null
++++ b/kernel/sched/pds.h
+@@ -0,0 +1,152 @@
++#define ALT_SCHED_NAME "PDS"
++
++#define MIN_SCHED_NORMAL_PRIO	(32)
++static const u64 RT_MASK = ((1ULL << MIN_SCHED_NORMAL_PRIO) - 1);
++
++#define SCHED_NORMAL_PRIO_NUM	(32)
++#define SCHED_EDGE_DELTA	(SCHED_NORMAL_PRIO_NUM - NICE_WIDTH / 2)
++
++/* PDS assume NORMAL_PRIO_NUM is power of 2 */
++#define SCHED_NORMAL_PRIO_MOD(x)	((x) & (SCHED_NORMAL_PRIO_NUM - 1))
++
++/* default time slice 4ms -> shift 22, 2 time slice slots -> shift 23 */
++static __read_mostly int sched_timeslice_shift = 23;
++
++/*
++ * Common interfaces
++ */
++static inline void sched_timeslice_imp(const int timeslice_ms)
++{
++	if (2 == timeslice_ms)
++		sched_timeslice_shift = 22;
++}
++
++static inline int
++task_sched_prio_normal(const struct task_struct *p, const struct rq *rq)
++{
++	s64 delta = p->deadline - rq->time_edge + SCHED_EDGE_DELTA;
++
++#ifdef ALT_SCHED_DEBUG
++	if (WARN_ONCE(delta > NORMAL_PRIO_NUM - 1,
++		      "pds: task_sched_prio_normal() delta %lld\n", delta))
++		return SCHED_NORMAL_PRIO_NUM - 1;
++#endif
++
++	return max(0LL, delta);
++}
++
++static inline int task_sched_prio(const struct task_struct *p)
++{
++	return (p->prio < MIN_NORMAL_PRIO) ? (p->prio >> 2) :
++		MIN_SCHED_NORMAL_PRIO + task_sched_prio_normal(p, task_rq(p));
++}
++
++static inline int
++task_sched_prio_idx(const struct task_struct *p, const struct rq *rq)
++{
++	u64 idx;
++
++	if (p->prio < MIN_NORMAL_PRIO)
++		return p->prio >> 2;
++
++	idx = max(p->deadline + SCHED_EDGE_DELTA, rq->time_edge);
++	/*printk(KERN_INFO "sched: task_sched_prio_idx edge:%llu, deadline=%llu idx=%llu\n", rq->time_edge, p->deadline, idx);*/
++	return MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(idx);
++}
++
++static inline int sched_prio2idx(int sched_prio, struct rq *rq)
++{
++	return (IDLE_TASK_SCHED_PRIO == sched_prio || sched_prio < MIN_SCHED_NORMAL_PRIO) ?
++		sched_prio :
++		MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(sched_prio + rq->time_edge);
++}
++
++static inline int sched_idx2prio(int sched_idx, struct rq *rq)
++{
++	return (sched_idx < MIN_SCHED_NORMAL_PRIO) ?
++		sched_idx :
++		MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(sched_idx - rq->time_edge);
++}
++
++static inline void sched_renew_deadline(struct task_struct *p, const struct rq *rq)
++{
++	if (p->prio >= MIN_NORMAL_PRIO)
++		p->deadline = rq->time_edge + (p->static_prio - (MAX_PRIO - NICE_WIDTH)) / 2;
++}
++
++int task_running_nice(struct task_struct *p)
++{
++	return (p->prio > DEFAULT_PRIO);
++}
++
++static inline void update_rq_time_edge(struct rq *rq)
++{
++	struct list_head head;
++	u64 old = rq->time_edge;
++	u64 now = rq->clock >> sched_timeslice_shift;
++	u64 prio, delta;
++	DECLARE_BITMAP(normal, SCHED_QUEUE_BITS);
++
++	if (now == old)
++		return;
++
++	rq->time_edge = now;
++	delta = min_t(u64, SCHED_NORMAL_PRIO_NUM, now - old);
++	INIT_LIST_HEAD(&head);
++
++	/*printk(KERN_INFO "sched: update_rq_time_edge 0x%016lx %llu\n", rq->queue.bitmap[0], delta);*/
++	prio = MIN_SCHED_NORMAL_PRIO;
++	for_each_set_bit_from(prio, rq->queue.bitmap, MIN_SCHED_NORMAL_PRIO + delta)
++		list_splice_tail_init(rq->queue.heads + MIN_SCHED_NORMAL_PRIO +
++				      SCHED_NORMAL_PRIO_MOD(prio + old), &head);
++
++	bitmap_shift_right(normal, rq->queue.bitmap, delta, SCHED_QUEUE_BITS);
++	if (!list_empty(&head)) {
++		struct task_struct *p;
++		u64 idx = MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(now);
++
++		list_for_each_entry(p, &head, sq_node)
++			p->sq_idx = idx;
++
++		list_splice(&head, rq->queue.heads + idx);
++		set_bit(MIN_SCHED_NORMAL_PRIO, normal);
++	}
++	bitmap_replace(rq->queue.bitmap, normal, rq->queue.bitmap,
++		       (const unsigned long *)&RT_MASK, SCHED_QUEUE_BITS);
++
++	if (rq->prio < MIN_SCHED_NORMAL_PRIO || IDLE_TASK_SCHED_PRIO == rq->prio)
++		return;
++
++	rq->prio = (rq->prio < MIN_SCHED_NORMAL_PRIO + delta) ?
++		MIN_SCHED_NORMAL_PRIO : rq->prio - delta;
++}
++
++static inline void time_slice_expired(struct task_struct *p, struct rq *rq)
++{
++	p->time_slice = sched_timeslice_ns;
++	sched_renew_deadline(p, rq);
++	if (SCHED_FIFO != p->policy && task_on_rq_queued(p))
++		requeue_task(p, rq, task_sched_prio_idx(p, rq));
++}
++
++static inline void sched_task_sanity_check(struct task_struct *p, struct rq *rq)
++{
++	u64 max_dl = rq->time_edge + NICE_WIDTH / 2 - 1;
++	if (unlikely(p->deadline > max_dl))
++		p->deadline = max_dl;
++}
++
++static void sched_task_fork(struct task_struct *p, struct rq *rq)
++{
++	sched_renew_deadline(p, rq);
++}
++
++static inline void do_sched_yield_type_1(struct task_struct *p, struct rq *rq)
++{
++	time_slice_expired(p, rq);
++}
++
++#ifdef CONFIG_SMP
++static inline void sched_task_ttwu(struct task_struct *p) {}
++#endif
++static inline void sched_task_deactivate(struct task_struct *p, struct rq *rq) {}
+diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
+index 0f310768260c..bd38bf738fe9 100644
+--- a/kernel/sched/pelt.c
++++ b/kernel/sched/pelt.c
+@@ -266,6 +266,7 @@ ___update_load_avg(struct sched_avg *sa, unsigned long load)
+ 	WRITE_ONCE(sa->util_avg, sa->util_sum / divider);
+ }
+ 
++#ifndef CONFIG_SCHED_ALT
+ /*
+  * sched_entity:
+  *
+@@ -383,8 +384,9 @@ int update_dl_rq_load_avg(u64 now, struct rq *rq, int running)
+ 
+ 	return 0;
+ }
++#endif
+ 
+-#ifdef CONFIG_SCHED_THERMAL_PRESSURE
++#if defined(CONFIG_SCHED_THERMAL_PRESSURE) && !defined(CONFIG_SCHED_ALT)
+ /*
+  * thermal:
+  *
+diff --git a/kernel/sched/pelt.h b/kernel/sched/pelt.h
+index 3a0e0dc28721..e8a7d84aa5a5 100644
+--- a/kernel/sched/pelt.h
++++ b/kernel/sched/pelt.h
+@@ -1,13 +1,15 @@
+ #ifdef CONFIG_SMP
+ #include "sched-pelt.h"
+ 
++#ifndef CONFIG_SCHED_ALT
+ int __update_load_avg_blocked_se(u64 now, struct sched_entity *se);
+ int __update_load_avg_se(u64 now, struct cfs_rq *cfs_rq, struct sched_entity *se);
+ int __update_load_avg_cfs_rq(u64 now, struct cfs_rq *cfs_rq);
+ int update_rt_rq_load_avg(u64 now, struct rq *rq, int running);
+ int update_dl_rq_load_avg(u64 now, struct rq *rq, int running);
++#endif
+ 
+-#ifdef CONFIG_SCHED_THERMAL_PRESSURE
++#if defined(CONFIG_SCHED_THERMAL_PRESSURE) && !defined(CONFIG_SCHED_ALT)
+ int update_thermal_load_avg(u64 now, struct rq *rq, u64 capacity);
+ 
+ static inline u64 thermal_load_avg(struct rq *rq)
+@@ -44,6 +46,7 @@ static inline u32 get_pelt_divider(struct sched_avg *avg)
+ 	return PELT_MIN_DIVIDER + avg->period_contrib;
+ }
+ 
++#ifndef CONFIG_SCHED_ALT
+ static inline void cfs_se_util_change(struct sched_avg *avg)
+ {
+ 	unsigned int enqueued;
+@@ -180,9 +183,11 @@ static inline u64 cfs_rq_clock_pelt(struct cfs_rq *cfs_rq)
+ 	return rq_clock_pelt(rq_of(cfs_rq));
+ }
+ #endif
++#endif /* CONFIG_SCHED_ALT */
+ 
+ #else
+ 
++#ifndef CONFIG_SCHED_ALT
+ static inline int
+ update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
+ {
+@@ -200,6 +205,7 @@ update_dl_rq_load_avg(u64 now, struct rq *rq, int running)
+ {
+ 	return 0;
+ }
++#endif
+ 
+ static inline int
+ update_thermal_load_avg(u64 now, struct rq *rq, u64 capacity)
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index 3e8df6d31c1e..a0c1997ac3e1 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -5,6 +5,10 @@
+ #ifndef _KERNEL_SCHED_SCHED_H
+ #define _KERNEL_SCHED_SCHED_H
+ 
++#ifdef CONFIG_SCHED_ALT
++#include "alt_sched.h"
++#else
++
+ #include <linux/sched/affinity.h>
+ #include <linux/sched/autogroup.h>
+ #include <linux/sched/cpufreq.h>
+@@ -3306,4 +3310,9 @@ static inline void switch_mm_cid(struct task_struct *prev, struct task_struct *n
+ static inline void switch_mm_cid(struct task_struct *prev, struct task_struct *next) { }
+ #endif
+ 
++static inline int task_running_nice(struct task_struct *p)
++{
++	return (task_nice(p) > 0);
++}
++#endif /* !CONFIG_SCHED_ALT */
+ #endif /* _KERNEL_SCHED_SCHED_H */
+diff --git a/kernel/sched/stats.c b/kernel/sched/stats.c
+index 857f837f52cb..5486c63e4790 100644
+--- a/kernel/sched/stats.c
++++ b/kernel/sched/stats.c
+@@ -125,8 +125,10 @@ static int show_schedstat(struct seq_file *seq, void *v)
+ 	} else {
+ 		struct rq *rq;
+ #ifdef CONFIG_SMP
++#ifndef CONFIG_SCHED_ALT
+ 		struct sched_domain *sd;
+ 		int dcount = 0;
++#endif
+ #endif
+ 		cpu = (unsigned long)(v - 2);
+ 		rq = cpu_rq(cpu);
+@@ -143,6 +145,7 @@ static int show_schedstat(struct seq_file *seq, void *v)
+ 		seq_printf(seq, "\n");
+ 
+ #ifdef CONFIG_SMP
++#ifndef CONFIG_SCHED_ALT
+ 		/* domain-specific stats */
+ 		rcu_read_lock();
+ 		for_each_domain(cpu, sd) {
+@@ -171,6 +174,7 @@ static int show_schedstat(struct seq_file *seq, void *v)
+ 			    sd->ttwu_move_balance);
+ 		}
+ 		rcu_read_unlock();
++#endif
+ #endif
+ 	}
+ 	return 0;
+diff --git a/kernel/sched/stats.h b/kernel/sched/stats.h
+index 38f3698f5e5b..b9d597394316 100644
+--- a/kernel/sched/stats.h
++++ b/kernel/sched/stats.h
+@@ -89,6 +89,7 @@ static inline void rq_sched_info_depart  (struct rq *rq, unsigned long long delt
+ 
+ #endif /* CONFIG_SCHEDSTATS */
+ 
++#ifndef CONFIG_SCHED_ALT
+ #ifdef CONFIG_FAIR_GROUP_SCHED
+ struct sched_entity_stats {
+ 	struct sched_entity     se;
+@@ -105,6 +106,7 @@ __schedstats_from_se(struct sched_entity *se)
+ #endif
+ 	return &task_of(se)->stats;
+ }
++#endif /* CONFIG_SCHED_ALT */
+ 
+ #ifdef CONFIG_PSI
+ void psi_task_change(struct task_struct *task, int clear, int set);
+diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
+index 051aaf65c749..21256b848f0b 100644
+--- a/kernel/sched/topology.c
++++ b/kernel/sched/topology.c
+@@ -3,6 +3,7 @@
+  * Scheduler topology setup/handling methods
+  */
+ 
++#ifndef CONFIG_SCHED_ALT
+ #include <linux/bsearch.h>
+ 
+ DEFINE_MUTEX(sched_domains_mutex);
+@@ -1415,8 +1416,10 @@ static void asym_cpu_capacity_scan(void)
+  */
+ 
+ static int default_relax_domain_level = -1;
++#endif /* CONFIG_SCHED_ALT */
+ int sched_domain_level_max;
+ 
++#ifndef CONFIG_SCHED_ALT
+ static int __init setup_relax_domain_level(char *str)
+ {
+ 	if (kstrtoint(str, 0, &default_relax_domain_level))
+@@ -1649,6 +1652,7 @@ sd_init(struct sched_domain_topology_level *tl,
+ 
+ 	return sd;
+ }
++#endif /* CONFIG_SCHED_ALT */
+ 
+ /*
+  * Topology list, bottom-up.
+@@ -1685,6 +1689,7 @@ void set_sched_topology(struct sched_domain_topology_level *tl)
+ 	sched_domain_topology_saved = NULL;
+ }
+ 
++#ifndef CONFIG_SCHED_ALT
+ #ifdef CONFIG_NUMA
+ 
+ static const struct cpumask *sd_numa_mask(int cpu)
+@@ -2740,3 +2745,20 @@ void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
+ 	partition_sched_domains_locked(ndoms_new, doms_new, dattr_new);
+ 	mutex_unlock(&sched_domains_mutex);
+ }
++#else /* CONFIG_SCHED_ALT */
++void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
++			     struct sched_domain_attr *dattr_new)
++{}
++
++#ifdef CONFIG_NUMA
++int sched_numa_find_closest(const struct cpumask *cpus, int cpu)
++{
++	return best_mask_cpu(cpu, cpus);
++}
++
++int sched_numa_find_nth_cpu(const struct cpumask *cpus, int cpu, int node)
++{
++	return cpumask_nth(cpu, cpus);
++}
++#endif /* CONFIG_NUMA */
++#endif
+diff --git a/kernel/sysctl.c b/kernel/sysctl.c
+index 1c240d2c99bc..a57fbd2ceba6 100644
+--- a/kernel/sysctl.c
++++ b/kernel/sysctl.c
+@@ -93,6 +93,10 @@ EXPORT_SYMBOL_GPL(sysctl_long_vals);
+ 
+ /* Constants used for minimum and maximum */
+ 
++#ifdef CONFIG_SCHED_ALT
++extern int sched_yield_type;
++#endif
++
+ #ifdef CONFIG_PERF_EVENTS
+ static const int six_hundred_forty_kb = 640 * 1024;
+ #endif
+@@ -1939,6 +1943,17 @@ static struct ctl_table kern_table[] = {
+ 		.proc_handler	= proc_dointvec,
+ 	},
+ #endif
++#ifdef CONFIG_SCHED_ALT
++	{
++		.procname	= "yield_type",
++		.data		= &sched_yield_type,
++		.maxlen		= sizeof (int),
++		.mode		= 0644,
++		.proc_handler	= &proc_dointvec_minmax,
++		.extra1		= SYSCTL_ZERO,
++		.extra2		= SYSCTL_TWO,
++	},
++#endif
+ #if defined(CONFIG_S390) && defined(CONFIG_SMP)
+ 	{
+ 		.procname	= "spin_retry",
+diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
+index e8c08292defc..3823ff0ddc0f 100644
+--- a/kernel/time/hrtimer.c
++++ b/kernel/time/hrtimer.c
+@@ -2088,8 +2088,10 @@ long hrtimer_nanosleep(ktime_t rqtp, const enum hrtimer_mode mode,
+ 	int ret = 0;
+ 	u64 slack;
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	slack = current->timer_slack_ns;
+-	if (rt_task(current))
++	if (dl_task(current) || rt_task(current))
++#endif
+ 		slack = 0;
+ 
+ 	hrtimer_init_sleeper_on_stack(&t, clockid, mode);
+diff --git a/kernel/time/posix-cpu-timers.c b/kernel/time/posix-cpu-timers.c
+index 2f5e9b34022c..ac1e3fd4b8ec 100644
+--- a/kernel/time/posix-cpu-timers.c
++++ b/kernel/time/posix-cpu-timers.c
+@@ -223,7 +223,7 @@ static void task_sample_cputime(struct task_struct *p, u64 *samples)
+ 	u64 stime, utime;
+ 
+ 	task_cputime(p, &utime, &stime);
+-	store_samples(samples, stime, utime, p->se.sum_exec_runtime);
++	store_samples(samples, stime, utime, tsk_seruntime(p));
+ }
+ 
+ static void proc_sample_cputime_atomic(struct task_cputime_atomic *at,
+@@ -865,6 +865,7 @@ static void collect_posix_cputimers(struct posix_cputimers *pct, u64 *samples,
+ 	}
+ }
+ 
++#ifndef CONFIG_SCHED_ALT
+ static inline void check_dl_overrun(struct task_struct *tsk)
+ {
+ 	if (tsk->dl.dl_overrun) {
+@@ -872,6 +873,7 @@ static inline void check_dl_overrun(struct task_struct *tsk)
+ 		send_signal_locked(SIGXCPU, SEND_SIG_PRIV, tsk, PIDTYPE_TGID);
+ 	}
+ }
++#endif
+ 
+ static bool check_rlimit(u64 time, u64 limit, int signo, bool rt, bool hard)
+ {
+@@ -899,8 +901,10 @@ static void check_thread_timers(struct task_struct *tsk,
+ 	u64 samples[CPUCLOCK_MAX];
+ 	unsigned long soft;
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	if (dl_task(tsk))
+ 		check_dl_overrun(tsk);
++#endif
+ 
+ 	if (expiry_cache_is_inactive(pct))
+ 		return;
+@@ -914,7 +918,7 @@ static void check_thread_timers(struct task_struct *tsk,
+ 	soft = task_rlimit(tsk, RLIMIT_RTTIME);
+ 	if (soft != RLIM_INFINITY) {
+ 		/* Task RT timeout is accounted in jiffies. RTTIME is usec */
+-		unsigned long rttime = tsk->rt.timeout * (USEC_PER_SEC / HZ);
++		unsigned long rttime = tsk_rttimeout(tsk) * (USEC_PER_SEC / HZ);
+ 		unsigned long hard = task_rlimit_max(tsk, RLIMIT_RTTIME);
+ 
+ 		/* At the hard limit, send SIGKILL. No further action. */
+@@ -1150,8 +1154,10 @@ static inline bool fastpath_timer_check(struct task_struct *tsk)
+ 			return true;
+ 	}
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	if (dl_task(tsk) && tsk->dl.dl_overrun)
+ 		return true;
++#endif
+ 
+ 	return false;
+ }
+diff --git a/kernel/trace/trace_selftest.c b/kernel/trace/trace_selftest.c
+index ff0536cea968..ce266990006d 100644
+--- a/kernel/trace/trace_selftest.c
++++ b/kernel/trace/trace_selftest.c
+@@ -1150,10 +1150,15 @@ static int trace_wakeup_test_thread(void *data)
+ {
+ 	/* Make this a -deadline thread */
+ 	static const struct sched_attr attr = {
++#ifdef CONFIG_SCHED_ALT
++		/* No deadline on BMQ/PDS, use RR */
++		.sched_policy = SCHED_RR,
++#else
+ 		.sched_policy = SCHED_DEADLINE,
+ 		.sched_runtime = 100000ULL,
+ 		.sched_deadline = 10000000ULL,
+ 		.sched_period = 10000000ULL
++#endif
+ 	};
+ 	struct wakeup_test_data *x = data;
+ 

diff --git a/5021_BMQ-and-PDS-gentoo-defaults.patch b/5021_BMQ-and-PDS-gentoo-defaults.patch
new file mode 100644
index 00000000..6dc48eec
--- /dev/null
+++ b/5021_BMQ-and-PDS-gentoo-defaults.patch
@@ -0,0 +1,13 @@
+--- a/init/Kconfig	2023-02-13 08:16:09.534315265 -0500
++++ b/init/Kconfig	2023-02-13 08:17:24.130237204 -0500
+@@ -867,8 +867,9 @@ config UCLAMP_BUCKETS_COUNT
+ 	  If in doubt, use the default value.
+ 
+ menuconfig SCHED_ALT
++	depends on X86_64
+ 	bool "Alternative CPU Schedulers"
+-	default y
++	default n
+ 	help
+ 	  This feature enable alternative CPU scheduler"
+ 


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [gentoo-commits] proj/linux-patches:6.3 commit in: /
@ 2023-05-01  0:00 Alice Ferrazzi
  0 siblings, 0 replies; 23+ messages in thread
From: Alice Ferrazzi @ 2023-05-01  0:00 UTC (permalink / raw
  To: gentoo-commits

commit:     a6af0f6a19f6b775b9b4574fb7f3d22eacf55969
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Mon May  1 00:00:28 2023 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Mon May  1 00:00:28 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=a6af0f6a

Linux patch 6.3.2

Signed-off-by: Alice Ferrazzi <alicef <AT> gentoo.org>

 0000_README            |   4 +
 1000_linux-6.3.1.patch | 272 +++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 276 insertions(+)

diff --git a/0000_README b/0000_README
index cdf69ca2..7c652ce5 100644
--- a/0000_README
+++ b/0000_README
@@ -43,6 +43,10 @@ EXPERIMENTAL
 Individual Patch Descriptions:
 --------------------------------------------------------------------------
 
+Patch:  1000_linux-6.3.1.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.3.1
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1000_linux-6.3.1.patch b/1000_linux-6.3.1.patch
new file mode 100644
index 00000000..c54c083b
--- /dev/null
+++ b/1000_linux-6.3.1.patch
@@ -0,0 +1,272 @@
+diff --git a/Makefile b/Makefile
+index f5543eef4f822..340bd45fc8d21 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 3
+-SUBLEVEL = 0
++SUBLEVEL = 1
+ EXTRAVERSION =
+ NAME = Hurr durr I'ma ninja sloth
+ 
+diff --git a/drivers/base/dd.c b/drivers/base/dd.c
+index 8def2ba08a821..1b16e0fb7658d 100644
+--- a/drivers/base/dd.c
++++ b/drivers/base/dd.c
+@@ -708,7 +708,12 @@ static int really_probe_debug(struct device *dev, struct device_driver *drv)
+ 	calltime = ktime_get();
+ 	ret = really_probe(dev, drv);
+ 	rettime = ktime_get();
+-	pr_debug("probe of %s returned %d after %lld usecs\n",
++	/*
++	 * Don't change this to pr_debug() because that requires
++	 * CONFIG_DYNAMIC_DEBUG and we want a simple 'initcall_debug' on the
++	 * kernel commandline to print this all the time at the debug level.
++	 */
++	printk(KERN_DEBUG "probe of %s returned %d after %lld usecs\n",
+ 		 dev_name(dev), ret, ktime_us_delta(rettime, calltime));
+ 	return ret;
+ }
+diff --git a/drivers/gpio/gpiolib-acpi.c b/drivers/gpio/gpiolib-acpi.c
+index 31ae0adbb295a..046ada264889d 100644
+--- a/drivers/gpio/gpiolib-acpi.c
++++ b/drivers/gpio/gpiolib-acpi.c
+@@ -1617,6 +1617,19 @@ static const struct dmi_system_id gpiolib_acpi_quirks[] __initconst = {
+ 			.ignore_interrupt = "AMDI0030:00@18",
+ 		},
+ 	},
++	{
++		/*
++		 * Spurious wakeups from TP_ATTN# pin
++		 * Found in BIOS 1.7.8
++		 * https://gitlab.freedesktop.org/drm/amd/-/issues/1722#note_1720627
++		 */
++		.matches = {
++			DMI_MATCH(DMI_BOARD_NAME, "NL5xNU"),
++		},
++		.driver_data = &(struct acpi_gpiolib_dmi_quirk) {
++			.ignore_wake = "ELAN0415:00@9",
++		},
++	},
+ 	{
+ 		/*
+ 		 * Spurious wakeups from TP_ATTN# pin
+diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
+index a39998047f8a8..2fe8349be0995 100644
+--- a/drivers/gpu/drm/drm_fb_helper.c
++++ b/drivers/gpu/drm/drm_fb_helper.c
+@@ -1569,6 +1569,9 @@ int drm_fb_helper_check_var(struct fb_var_screeninfo *var,
+ 		return -EINVAL;
+ 	}
+ 
++	var->xres_virtual = fb->width;
++	var->yres_virtual = fb->height;
++
+ 	/*
+ 	 * Workaround for SDL 1.2, which is known to be setting all pixel format
+ 	 * fields values to zero in some cases. We treat this situation as a
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c
+index 65d4799a56584..ff710b0b5071a 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c
+@@ -965,6 +965,12 @@ out:
+ 		.driver_data = BRCMF_FWVENDOR_ ## fw_vend \
+ 	}
+ 
++#define CYW_SDIO_DEVICE(dev_id, fw_vend) \
++	{ \
++		SDIO_DEVICE(SDIO_VENDOR_ID_CYPRESS, dev_id), \
++		.driver_data = BRCMF_FWVENDOR_ ## fw_vend \
++	}
++
+ /* devices we support, null terminated */
+ static const struct sdio_device_id brcmf_sdmmc_ids[] = {
+ 	BRCMF_SDIO_DEVICE(SDIO_DEVICE_ID_BROADCOM_43143, WCC),
+@@ -979,6 +985,7 @@ static const struct sdio_device_id brcmf_sdmmc_ids[] = {
+ 	BRCMF_SDIO_DEVICE(SDIO_DEVICE_ID_BROADCOM_4335_4339, WCC),
+ 	BRCMF_SDIO_DEVICE(SDIO_DEVICE_ID_BROADCOM_4339, WCC),
+ 	BRCMF_SDIO_DEVICE(SDIO_DEVICE_ID_BROADCOM_43430, WCC),
++	BRCMF_SDIO_DEVICE(SDIO_DEVICE_ID_BROADCOM_43439, WCC),
+ 	BRCMF_SDIO_DEVICE(SDIO_DEVICE_ID_BROADCOM_4345, WCC),
+ 	BRCMF_SDIO_DEVICE(SDIO_DEVICE_ID_BROADCOM_43455, WCC),
+ 	BRCMF_SDIO_DEVICE(SDIO_DEVICE_ID_BROADCOM_4354, WCC),
+@@ -986,9 +993,9 @@ static const struct sdio_device_id brcmf_sdmmc_ids[] = {
+ 	BRCMF_SDIO_DEVICE(SDIO_DEVICE_ID_BROADCOM_4359, WCC),
+ 	BRCMF_SDIO_DEVICE(SDIO_DEVICE_ID_BROADCOM_CYPRESS_4373, CYW),
+ 	BRCMF_SDIO_DEVICE(SDIO_DEVICE_ID_BROADCOM_CYPRESS_43012, CYW),
+-	BRCMF_SDIO_DEVICE(SDIO_DEVICE_ID_BROADCOM_CYPRESS_43439, CYW),
+ 	BRCMF_SDIO_DEVICE(SDIO_DEVICE_ID_BROADCOM_CYPRESS_43752, CYW),
+ 	BRCMF_SDIO_DEVICE(SDIO_DEVICE_ID_BROADCOM_CYPRESS_89359, CYW),
++	CYW_SDIO_DEVICE(SDIO_DEVICE_ID_BROADCOM_CYPRESS_43439, CYW),
+ 	{ /* end: all zeroes */ }
+ };
+ MODULE_DEVICE_TABLE(sdio, brcmf_sdmmc_ids);
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+index a9690ec4c850c..5a9f713ea703e 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+@@ -6164,6 +6164,11 @@ static s32 brcmf_get_assoc_ies(struct brcmf_cfg80211_info *cfg,
+ 		(struct brcmf_cfg80211_assoc_ielen_le *)cfg->extra_buf;
+ 	req_len = le32_to_cpu(assoc_info->req_len);
+ 	resp_len = le32_to_cpu(assoc_info->resp_len);
++	if (req_len > WL_EXTRA_BUF_MAX || resp_len > WL_EXTRA_BUF_MAX) {
++		bphy_err(drvr, "invalid lengths in assoc info: req %u resp %u\n",
++			 req_len, resp_len);
++		return -EINVAL;
++	}
+ 	if (req_len) {
+ 		err = brcmf_fil_iovar_data_get(ifp, "assoc_req_ies",
+ 					       cfg->extra_buf,
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index f31cc3c763299..644a55447fd7f 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -595,6 +595,11 @@ static void option_instat_callback(struct urb *urb);
+ #define SIERRA_VENDOR_ID			0x1199
+ #define SIERRA_PRODUCT_EM9191			0x90d3
+ 
++/* UNISOC (Spreadtrum) products */
++#define UNISOC_VENDOR_ID			0x1782
++/* TOZED LT70-C based on UNISOC SL8563 uses UNISOC's vendor ID */
++#define TOZED_PRODUCT_LT70C			0x4055
++
+ /* Device flags */
+ 
+ /* Highest interface number which can be used with NCTRL() and RSVD() */
+@@ -2225,6 +2230,7 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(OPPO_VENDOR_ID, OPPO_PRODUCT_R11, 0xff, 0xff, 0x30) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(SIERRA_VENDOR_ID, SIERRA_PRODUCT_EM9191, 0xff, 0xff, 0x30) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(SIERRA_VENDOR_ID, SIERRA_PRODUCT_EM9191, 0xff, 0, 0) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(UNISOC_VENDOR_ID, TOZED_PRODUCT_LT70C, 0xff, 0, 0) },
+ 	{ } /* Terminating entry */
+ };
+ MODULE_DEVICE_TABLE(usb, option_ids);
+diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
+index e5c963bb873db..af2e153543a5c 100644
+--- a/fs/btrfs/send.c
++++ b/fs/btrfs/send.c
+@@ -1875,7 +1875,7 @@ static int get_cur_inode_state(struct send_ctx *sctx, u64 ino, u64 gen,
+ 	int left_ret;
+ 	int right_ret;
+ 	u64 left_gen;
+-	u64 right_gen;
++	u64 right_gen = 0;
+ 	struct btrfs_inode_info info;
+ 
+ 	ret = get_inode_info(sctx->send_root, ino, &info);
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index c6d5928704001..db63f9da787f1 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -2618,7 +2618,7 @@ int btrfs_init_new_device(struct btrfs_fs_info *fs_info, const char *device_path
+ 	struct block_device *bdev;
+ 	struct super_block *sb = fs_info->sb;
+ 	struct btrfs_fs_devices *fs_devices = fs_info->fs_devices;
+-	struct btrfs_fs_devices *seed_devices;
++	struct btrfs_fs_devices *seed_devices = NULL;
+ 	u64 orig_super_total_bytes;
+ 	u64 orig_super_num_devices;
+ 	int ret = 0;
+diff --git a/fs/verity/enable.c b/fs/verity/enable.c
+index 7a0e3a84d370b..c547ca4eb05a7 100644
+--- a/fs/verity/enable.c
++++ b/fs/verity/enable.c
+@@ -13,6 +13,7 @@
+ 
+ struct block_buffer {
+ 	u32 filled;
++	bool is_root_hash;
+ 	u8 *data;
+ };
+ 
+@@ -24,6 +25,14 @@ static int hash_one_block(struct inode *inode,
+ 	struct block_buffer *next = cur + 1;
+ 	int err;
+ 
++	/*
++	 * Safety check to prevent a buffer overflow in case of a filesystem bug
++	 * that allows the file size to change despite deny_write_access(), or a
++	 * bug in the Merkle tree logic itself
++	 */
++	if (WARN_ON_ONCE(next->is_root_hash && next->filled != 0))
++		return -EINVAL;
++
+ 	/* Zero-pad the block if it's shorter than the block size. */
+ 	memset(&cur->data[cur->filled], 0, params->block_size - cur->filled);
+ 
+@@ -97,6 +106,7 @@ static int build_merkle_tree(struct file *filp,
+ 		}
+ 	}
+ 	buffers[num_levels].data = root_hash;
++	buffers[num_levels].is_root_hash = true;
+ 
+ 	BUILD_BUG_ON(sizeof(level_offset) != sizeof(params->level_start));
+ 	memcpy(level_offset, params->level_start, sizeof(level_offset));
+@@ -347,6 +357,13 @@ int fsverity_ioctl_enable(struct file *filp, const void __user *uarg)
+ 	err = file_permission(filp, MAY_WRITE);
+ 	if (err)
+ 		return err;
++	/*
++	 * __kernel_read() is used while building the Merkle tree.  So, we can't
++	 * allow file descriptors that were opened for ioctl access only, using
++	 * the special nonstandard access mode 3.  O_RDONLY only, please!
++	 */
++	if (!(filp->f_mode & FMODE_READ))
++		return -EBADF;
+ 
+ 	if (IS_APPEND(inode))
+ 		return -EPERM;
+diff --git a/include/linux/mmc/sdio_ids.h b/include/linux/mmc/sdio_ids.h
+index 0e4ef9c5127ad..bf3c95d8eb8af 100644
+--- a/include/linux/mmc/sdio_ids.h
++++ b/include/linux/mmc/sdio_ids.h
+@@ -74,10 +74,13 @@
+ #define SDIO_DEVICE_ID_BROADCOM_43362		0xa962
+ #define SDIO_DEVICE_ID_BROADCOM_43364		0xa9a4
+ #define SDIO_DEVICE_ID_BROADCOM_43430		0xa9a6
+-#define SDIO_DEVICE_ID_BROADCOM_CYPRESS_43439	0xa9af
++#define SDIO_DEVICE_ID_BROADCOM_43439		0xa9af
+ #define SDIO_DEVICE_ID_BROADCOM_43455		0xa9bf
+ #define SDIO_DEVICE_ID_BROADCOM_CYPRESS_43752	0xaae8
+ 
++#define SDIO_VENDOR_ID_CYPRESS			0x04b4
++#define SDIO_DEVICE_ID_BROADCOM_CYPRESS_43439	0xbd3d
++
+ #define SDIO_VENDOR_ID_MARVELL			0x02df
+ #define SDIO_DEVICE_ID_MARVELL_LIBERTAS		0x9103
+ #define SDIO_DEVICE_ID_MARVELL_8688_WLAN	0x9104
+diff --git a/mm/mmap.c b/mm/mmap.c
+index d5475fbf57296..eefa6f0cda28e 100644
+--- a/mm/mmap.c
++++ b/mm/mmap.c
+@@ -978,7 +978,7 @@ struct vm_area_struct *vma_merge(struct vma_iterator *vmi, struct mm_struct *mm,
+ 			vma = next;			/* case 3 */
+ 			vma_start = addr;
+ 			vma_end = next->vm_end;
+-			vma_pgoff = mid->vm_pgoff;
++			vma_pgoff = next->vm_pgoff - pglen;
+ 			err = 0;
+ 			if (mid != next) {		/* case 8 */
+ 				remove = mid;
+diff --git a/net/bluetooth/hci_sock.c b/net/bluetooth/hci_sock.c
+index 06581223238c5..f597fe0db9f8f 100644
+--- a/net/bluetooth/hci_sock.c
++++ b/net/bluetooth/hci_sock.c
+@@ -1003,7 +1003,14 @@ static int hci_sock_ioctl(struct socket *sock, unsigned int cmd,
+ 	if (hci_sock_gen_cookie(sk)) {
+ 		struct sk_buff *skb;
+ 
+-		if (capable(CAP_NET_ADMIN))
++		/* Perform careful checks before setting the HCI_SOCK_TRUSTED
++		 * flag. Make sure that not only the current task but also
++		 * the socket opener has the required capability, since
++		 * privileged programs can be tricked into making ioctl calls
++		 * on HCI sockets, and the socket should not be marked as
++		 * trusted simply because the ioctl caller is privileged.
++		 */
++		if (sk_capable(sk, CAP_NET_ADMIN))
+ 			hci_sock_set_flag(sk, HCI_SOCK_TRUSTED);
+ 
+ 		/* Send event to monitor */


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [gentoo-commits] proj/linux-patches:6.3 commit in: /
@ 2023-05-10 17:46 Mike Pagano
  0 siblings, 0 replies; 23+ messages in thread
From: Mike Pagano @ 2023-05-10 17:46 UTC (permalink / raw
  To: gentoo-commits

commit:     7bd872691d6561dcfeb1e93562fe80cce26e91e6
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed May 10 17:45:43 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed May 10 17:45:43 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=7bd87269

netfilter: nf_tables: deactivate anonymous set from preparation phase

Bug: https://bugs.gentoo.org/906064

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                        |   4 +
 ...nf-tables-make-deleted-anon-sets-inactive.patch | 121 +++++++++++++++++++++
 2 files changed, 125 insertions(+)

diff --git a/0000_README b/0000_README
index 7c652ce5..e11cd2ca 100644
--- a/0000_README
+++ b/0000_README
@@ -55,6 +55,10 @@ Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.
 
+Patch:  1520_fs-enable-link-security-restrictions-by-default.patch
+From:   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/patch/?id=c1592a89942e9678f7d9c8030efa777c0d57edab
+Desc:   netfilter: nf_tables: deactivate anonymous set from preparation phase
+
 Patch:  1700_sparc-address-warray-bound-warnings.patch
 From:		https://github.com/KSPP/linux/issues/109
 Desc:		Address -Warray-bounds warnings 

diff --git a/1520_nf-tables-make-deleted-anon-sets-inactive.patch b/1520_nf-tables-make-deleted-anon-sets-inactive.patch
new file mode 100644
index 00000000..cd75de5c
--- /dev/null
+++ b/1520_nf-tables-make-deleted-anon-sets-inactive.patch
@@ -0,0 +1,121 @@
+From c1592a89942e9678f7d9c8030efa777c0d57edab Mon Sep 17 00:00:00 2001
+From: Pablo Neira Ayuso <pablo@netfilter.org>
+Date: Tue, 2 May 2023 10:25:24 +0200
+Subject: netfilter: nf_tables: deactivate anonymous set from preparation phase
+
+Toggle deleted anonymous sets as inactive in the next generation, so
+users cannot perform any update on it. Clear the generation bitmask
+in case the transaction is aborted.
+
+The following KASAN splat shows a set element deletion for a bound
+anonymous set that has been already removed in the same transaction.
+
+[   64.921510] ==================================================================
+[   64.923123] BUG: KASAN: wild-memory-access in nf_tables_commit+0xa24/0x1490 [nf_tables]
+[   64.924745] Write of size 8 at addr dead000000000122 by task test/890
+[   64.927903] CPU: 3 PID: 890 Comm: test Not tainted 6.3.0+ #253
+[   64.931120] Call Trace:
+[   64.932699]  <TASK>
+[   64.934292]  dump_stack_lvl+0x33/0x50
+[   64.935908]  ? nf_tables_commit+0xa24/0x1490 [nf_tables]
+[   64.937551]  kasan_report+0xda/0x120
+[   64.939186]  ? nf_tables_commit+0xa24/0x1490 [nf_tables]
+[   64.940814]  nf_tables_commit+0xa24/0x1490 [nf_tables]
+[   64.942452]  ? __kasan_slab_alloc+0x2d/0x60
+[   64.944070]  ? nf_tables_setelem_notify+0x190/0x190 [nf_tables]
+[   64.945710]  ? kasan_set_track+0x21/0x30
+[   64.947323]  nfnetlink_rcv_batch+0x709/0xd90 [nfnetlink]
+[   64.948898]  ? nfnetlink_rcv_msg+0x480/0x480 [nfnetlink]
+
+Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
+---
+ include/net/netfilter/nf_tables.h |  1 +
+ net/netfilter/nf_tables_api.c     | 12 ++++++++++++
+ net/netfilter/nft_dynset.c        |  2 +-
+ net/netfilter/nft_lookup.c        |  2 +-
+ net/netfilter/nft_objref.c        |  2 +-
+ 5 files changed, 16 insertions(+), 3 deletions(-)
+
+diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
+index 3ed21d2d56590..2e24ea1d744c2 100644
+--- a/include/net/netfilter/nf_tables.h
++++ b/include/net/netfilter/nf_tables.h
+@@ -619,6 +619,7 @@ struct nft_set_binding {
+ };
+ 
+ enum nft_trans_phase;
++void nf_tables_activate_set(const struct nft_ctx *ctx, struct nft_set *set);
+ void nf_tables_deactivate_set(const struct nft_ctx *ctx, struct nft_set *set,
+ 			      struct nft_set_binding *binding,
+ 			      enum nft_trans_phase phase);
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 8b6c61a2196cb..59fb8320ab4d7 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -5127,12 +5127,24 @@ static void nf_tables_unbind_set(const struct nft_ctx *ctx, struct nft_set *set,
+ 	}
+ }
+ 
++void nf_tables_activate_set(const struct nft_ctx *ctx, struct nft_set *set)
++{
++	if (nft_set_is_anonymous(set))
++		nft_clear(ctx->net, set);
++
++	set->use++;
++}
++EXPORT_SYMBOL_GPL(nf_tables_activate_set);
++
+ void nf_tables_deactivate_set(const struct nft_ctx *ctx, struct nft_set *set,
+ 			      struct nft_set_binding *binding,
+ 			      enum nft_trans_phase phase)
+ {
+ 	switch (phase) {
+ 	case NFT_TRANS_PREPARE:
++		if (nft_set_is_anonymous(set))
++			nft_deactivate_next(ctx->net, set);
++
+ 		set->use--;
+ 		return;
+ 	case NFT_TRANS_ABORT:
+diff --git a/net/netfilter/nft_dynset.c b/net/netfilter/nft_dynset.c
+index 274579b1696e0..bd19c7aec92ee 100644
+--- a/net/netfilter/nft_dynset.c
++++ b/net/netfilter/nft_dynset.c
+@@ -342,7 +342,7 @@ static void nft_dynset_activate(const struct nft_ctx *ctx,
+ {
+ 	struct nft_dynset *priv = nft_expr_priv(expr);
+ 
+-	priv->set->use++;
++	nf_tables_activate_set(ctx, priv->set);
+ }
+ 
+ static void nft_dynset_destroy(const struct nft_ctx *ctx,
+diff --git a/net/netfilter/nft_lookup.c b/net/netfilter/nft_lookup.c
+index cecf8ab90e58f..03ef4fdaa460b 100644
+--- a/net/netfilter/nft_lookup.c
++++ b/net/netfilter/nft_lookup.c
+@@ -167,7 +167,7 @@ static void nft_lookup_activate(const struct nft_ctx *ctx,
+ {
+ 	struct nft_lookup *priv = nft_expr_priv(expr);
+ 
+-	priv->set->use++;
++	nf_tables_activate_set(ctx, priv->set);
+ }
+ 
+ static void nft_lookup_destroy(const struct nft_ctx *ctx,
+diff --git a/net/netfilter/nft_objref.c b/net/netfilter/nft_objref.c
+index cb37169608bab..a48dd5b5d45b1 100644
+--- a/net/netfilter/nft_objref.c
++++ b/net/netfilter/nft_objref.c
+@@ -185,7 +185,7 @@ static void nft_objref_map_activate(const struct nft_ctx *ctx,
+ {
+ 	struct nft_objref_map *priv = nft_expr_priv(expr);
+ 
+-	priv->set->use++;
++	nf_tables_activate_set(ctx, priv->set);
+ }
+ 
+ static void nft_objref_map_destroy(const struct nft_ctx *ctx,
+-- 
+cgit 
+


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [gentoo-commits] proj/linux-patches:6.3 commit in: /
@ 2023-05-11 14:47 Mike Pagano
  0 siblings, 0 replies; 23+ messages in thread
From: Mike Pagano @ 2023-05-11 14:47 UTC (permalink / raw
  To: gentoo-commits

commit:     418c44a1fee5638f5b0a2937cf8d3c721c1806aa
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu May 11 14:47:43 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu May 11 14:47:43 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=418c44a1

Linux patch 6.3.2

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README            |     4 +
 1001_linux-6.3.2.patch | 34940 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 34944 insertions(+)

diff --git a/0000_README b/0000_README
index e11cd2ca..d1663380 100644
--- a/0000_README
+++ b/0000_README
@@ -47,6 +47,10 @@ Patch:  1000_linux-6.3.1.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.3.1
 
+Patch:  1001_linux-6.3.2.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.3.2
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1001_linux-6.3.2.patch b/1001_linux-6.3.2.patch
new file mode 100644
index 00000000..7bc81cc0
--- /dev/null
+++ b/1001_linux-6.3.2.patch
@@ -0,0 +1,34940 @@
+diff --git a/Documentation/devicetree/bindings/serial/snps-dw-apb-uart.yaml b/Documentation/devicetree/bindings/serial/snps-dw-apb-uart.yaml
+index 2becdfab4f155..8212a9f483b51 100644
+--- a/Documentation/devicetree/bindings/serial/snps-dw-apb-uart.yaml
++++ b/Documentation/devicetree/bindings/serial/snps-dw-apb-uart.yaml
+@@ -68,7 +68,7 @@ properties:
+       - const: apb_pclk
+ 
+   dmas:
+-    minItems: 2
++    maxItems: 2
+ 
+   dma-names:
+     items:
+diff --git a/Documentation/devicetree/bindings/sound/qcom,lpass-rx-macro.yaml b/Documentation/devicetree/bindings/sound/qcom,lpass-rx-macro.yaml
+index 79c6f8da1319c..b0b95689d78b8 100644
+--- a/Documentation/devicetree/bindings/sound/qcom,lpass-rx-macro.yaml
++++ b/Documentation/devicetree/bindings/sound/qcom,lpass-rx-macro.yaml
+@@ -30,6 +30,7 @@ properties:
+     const: 0
+ 
+   clocks:
++    minItems: 3
+     maxItems: 5
+ 
+   clock-names:
+diff --git a/Makefile b/Makefile
+index 340bd45fc8d21..80cdc03e25aa3 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 3
+-SUBLEVEL = 1
++SUBLEVEL = 2
+ EXTRAVERSION =
+ NAME = Hurr durr I'ma ninja sloth
+ 
+diff --git a/arch/arm/boot/dts/omap3-gta04.dtsi b/arch/arm/boot/dts/omap3-gta04.dtsi
+index 87e0ab1bbe957..e0be0fb23f80f 100644
+--- a/arch/arm/boot/dts/omap3-gta04.dtsi
++++ b/arch/arm/boot/dts/omap3-gta04.dtsi
+@@ -612,6 +612,22 @@
+ 	clock-frequency = <100000>;
+ };
+ 
++&mcspi1 {
++	status = "disabled";
++};
++
++&mcspi2 {
++	status = "disabled";
++};
++
++&mcspi3 {
++	status = "disabled";
++};
++
++&mcspi4 {
++	status = "disabled";
++};
++
+ &usb_otg_hs {
+ 	interface-type = <0>;
+ 	usb-phy = <&usb2_phy>;
+diff --git a/arch/arm/boot/dts/qcom-apq8064.dtsi b/arch/arm/boot/dts/qcom-apq8064.dtsi
+index 92aa2b081901f..3aeac0cabb28b 100644
+--- a/arch/arm/boot/dts/qcom-apq8064.dtsi
++++ b/arch/arm/boot/dts/qcom-apq8064.dtsi
+@@ -1260,7 +1260,7 @@
+ 			gpu_opp_table: opp-table {
+ 				compatible = "operating-points-v2";
+ 
+-				opp-320000000 {
++				opp-450000000 {
+ 					opp-hz = /bits/ 64 <450000000>;
+ 				};
+ 
+diff --git a/arch/arm/boot/dts/qcom-ipq4019.dtsi b/arch/arm/boot/dts/qcom-ipq4019.dtsi
+index 02e9ea78405d0..1159268f06d70 100644
+--- a/arch/arm/boot/dts/qcom-ipq4019.dtsi
++++ b/arch/arm/boot/dts/qcom-ipq4019.dtsi
+@@ -426,8 +426,8 @@
+ 			#address-cells = <3>;
+ 			#size-cells = <2>;
+ 
+-			ranges = <0x81000000 0 0x40200000 0x40200000 0 0x00100000>,
+-				 <0x82000000 0 0x40300000 0x40300000 0 0x00d00000>;
++			ranges = <0x81000000 0x0 0x00000000 0x40200000 0x0 0x00100000>,
++				 <0x82000000 0x0 0x40300000 0x40300000 0x0 0x00d00000>;
+ 
+ 			interrupts = <GIC_SPI 141 IRQ_TYPE_LEVEL_HIGH>;
+ 			interrupt-names = "msi";
+diff --git a/arch/arm/boot/dts/qcom-ipq8064.dtsi b/arch/arm/boot/dts/qcom-ipq8064.dtsi
+index 52d77e105957a..59fc18c448c4c 100644
+--- a/arch/arm/boot/dts/qcom-ipq8064.dtsi
++++ b/arch/arm/boot/dts/qcom-ipq8064.dtsi
+@@ -1081,8 +1081,8 @@
+ 			#address-cells = <3>;
+ 			#size-cells = <2>;
+ 
+-			ranges = <0x81000000 0 0x0fe00000 0x0fe00000 0 0x00010000   /* downstream I/O */
+-				  0x82000000 0 0x08000000 0x08000000 0 0x07e00000>; /* non-prefetchable memory */
++			ranges = <0x81000000 0x0 0x00000000 0x0fe00000 0x0 0x00010000   /* I/O */
++				  0x82000000 0x0 0x08000000 0x08000000 0x0 0x07e00000>; /* MEM */
+ 
+ 			interrupts = <GIC_SPI 35 IRQ_TYPE_LEVEL_HIGH>;
+ 			interrupt-names = "msi";
+@@ -1132,8 +1132,8 @@
+ 			#address-cells = <3>;
+ 			#size-cells = <2>;
+ 
+-			ranges = <0x81000000 0 0x31e00000 0x31e00000 0 0x00010000   /* downstream I/O */
+-				  0x82000000 0 0x2e000000 0x2e000000 0 0x03e00000>; /* non-prefetchable memory */
++			ranges = <0x81000000 0x0 0x00000000 0x31e00000 0x0 0x00010000   /* I/O */
++				  0x82000000 0x0 0x2e000000 0x2e000000 0x0 0x03e00000>; /* MEM */
+ 
+ 			interrupts = <GIC_SPI 57 IRQ_TYPE_LEVEL_HIGH>;
+ 			interrupt-names = "msi";
+@@ -1183,8 +1183,8 @@
+ 			#address-cells = <3>;
+ 			#size-cells = <2>;
+ 
+-			ranges = <0x81000000 0 0x35e00000 0x35e00000 0 0x00010000   /* downstream I/O */
+-				  0x82000000 0 0x32000000 0x32000000 0 0x03e00000>; /* non-prefetchable memory */
++			ranges = <0x81000000 0x0 0x00000000 0x35e00000 0x0 0x00010000   /* I/O */
++				  0x82000000 0x0 0x32000000 0x32000000 0x0 0x03e00000>; /* MEM */
+ 
+ 			interrupts = <GIC_SPI 71 IRQ_TYPE_LEVEL_HIGH>;
+ 			interrupt-names = "msi";
+diff --git a/arch/arm/boot/dts/qcom-sdx55.dtsi b/arch/arm/boot/dts/qcom-sdx55.dtsi
+index df7303c5c843a..7fa542249f1af 100644
+--- a/arch/arm/boot/dts/qcom-sdx55.dtsi
++++ b/arch/arm/boot/dts/qcom-sdx55.dtsi
+@@ -304,6 +304,45 @@
+ 			status = "disabled";
+ 		};
+ 
++		pcie_ep: pcie-ep@1c00000 {
++			compatible = "qcom,sdx55-pcie-ep";
++			reg = <0x01c00000 0x3000>,
++			      <0x40000000 0xf1d>,
++			      <0x40000f20 0xc8>,
++			      <0x40001000 0x1000>,
++			      <0x40200000 0x100000>,
++			      <0x01c03000 0x3000>;
++			reg-names = "parf", "dbi", "elbi", "atu", "addr_space",
++				    "mmio";
++
++			qcom,perst-regs = <&tcsr 0xb258 0xb270>;
++
++			clocks = <&gcc GCC_PCIE_AUX_CLK>,
++				 <&gcc GCC_PCIE_CFG_AHB_CLK>,
++				 <&gcc GCC_PCIE_MSTR_AXI_CLK>,
++				 <&gcc GCC_PCIE_SLV_AXI_CLK>,
++				 <&gcc GCC_PCIE_SLV_Q2A_AXI_CLK>,
++				 <&gcc GCC_PCIE_SLEEP_CLK>,
++				 <&gcc GCC_PCIE_0_CLKREF_CLK>;
++			clock-names = "aux", "cfg", "bus_master", "bus_slave",
++				      "slave_q2a", "sleep", "ref";
++
++			interrupts = <GIC_SPI 140 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 145 IRQ_TYPE_LEVEL_HIGH>;
++			interrupt-names = "global", "doorbell";
++			reset-gpios = <&tlmm 57 GPIO_ACTIVE_LOW>;
++			wake-gpios = <&tlmm 53 GPIO_ACTIVE_LOW>;
++			resets = <&gcc GCC_PCIE_BCR>;
++			reset-names = "core";
++			power-domains = <&gcc PCIE_GDSC>;
++			phys = <&pcie0_lane>;
++			phy-names = "pciephy";
++			max-link-speed = <3>;
++			num-lanes = <2>;
++
++			status = "disabled";
++		};
++
+ 		pcie0_phy: phy@1c07000 {
+ 			compatible = "qcom,sdx55-qmp-pcie-phy";
+ 			reg = <0x01c07000 0x1c4>;
+@@ -401,45 +440,6 @@
+ 			status = "disabled";
+ 		};
+ 
+-		pcie_ep: pcie-ep@40000000 {
+-			compatible = "qcom,sdx55-pcie-ep";
+-			reg = <0x01c00000 0x3000>,
+-			      <0x40000000 0xf1d>,
+-			      <0x40000f20 0xc8>,
+-			      <0x40001000 0x1000>,
+-			      <0x40200000 0x100000>,
+-			      <0x01c03000 0x3000>;
+-			reg-names = "parf", "dbi", "elbi", "atu", "addr_space",
+-				    "mmio";
+-
+-			qcom,perst-regs = <&tcsr 0xb258 0xb270>;
+-
+-			clocks = <&gcc GCC_PCIE_AUX_CLK>,
+-				 <&gcc GCC_PCIE_CFG_AHB_CLK>,
+-				 <&gcc GCC_PCIE_MSTR_AXI_CLK>,
+-				 <&gcc GCC_PCIE_SLV_AXI_CLK>,
+-				 <&gcc GCC_PCIE_SLV_Q2A_AXI_CLK>,
+-				 <&gcc GCC_PCIE_SLEEP_CLK>,
+-				 <&gcc GCC_PCIE_0_CLKREF_CLK>;
+-			clock-names = "aux", "cfg", "bus_master", "bus_slave",
+-				      "slave_q2a", "sleep", "ref";
+-
+-			interrupts = <GIC_SPI 140 IRQ_TYPE_LEVEL_HIGH>,
+-				     <GIC_SPI 145 IRQ_TYPE_LEVEL_HIGH>;
+-			interrupt-names = "global", "doorbell";
+-			reset-gpios = <&tlmm 57 GPIO_ACTIVE_LOW>;
+-			wake-gpios = <&tlmm 53 GPIO_ACTIVE_LOW>;
+-			resets = <&gcc GCC_PCIE_BCR>;
+-			reset-names = "core";
+-			power-domains = <&gcc PCIE_GDSC>;
+-			phys = <&pcie0_lane>;
+-			phy-names = "pciephy";
+-			max-link-speed = <3>;
+-			num-lanes = <2>;
+-
+-			status = "disabled";
+-		};
+-
+ 		remoteproc_mpss: remoteproc@4080000 {
+ 			compatible = "qcom,sdx55-mpss-pas";
+ 			reg = <0x04080000 0x4040>;
+diff --git a/arch/arm/boot/dts/stm32mp15-pinctrl.dtsi b/arch/arm/boot/dts/stm32mp15-pinctrl.dtsi
+index a9d2bec990141..e15a3b2a9b399 100644
+--- a/arch/arm/boot/dts/stm32mp15-pinctrl.dtsi
++++ b/arch/arm/boot/dts/stm32mp15-pinctrl.dtsi
+@@ -1880,6 +1880,21 @@
+ 		};
+ 	};
+ 
++	spi1_pins_b: spi1-1 {
++		pins1 {
++			pinmux = <STM32_PINMUX('A', 5, AF5)>, /* SPI1_SCK */
++				 <STM32_PINMUX('B', 5, AF5)>; /* SPI1_MOSI */
++			bias-disable;
++			drive-push-pull;
++			slew-rate = <1>;
++		};
++
++		pins2 {
++			pinmux = <STM32_PINMUX('A', 6, AF5)>; /* SPI1_MISO */
++			bias-disable;
++		};
++	};
++
+ 	spi2_pins_a: spi2-0 {
+ 		pins1 {
+ 			pinmux = <STM32_PINMUX('B', 10, AF5)>, /* SPI2_SCK */
+@@ -2448,19 +2463,4 @@
+ 			bias-disable;
+ 		};
+ 	};
+-
+-	spi1_pins_b: spi1-1 {
+-		pins1 {
+-			pinmux = <STM32_PINMUX('A', 5, AF5)>, /* SPI1_SCK */
+-				 <STM32_PINMUX('B', 5, AF5)>; /* SPI1_MOSI */
+-			bias-disable;
+-			drive-push-pull;
+-			slew-rate = <1>;
+-		};
+-
+-		pins2 {
+-			pinmux = <STM32_PINMUX('A', 6, AF5)>; /* SPI1_MISO */
+-			bias-disable;
+-		};
+-	};
+ };
+diff --git a/arch/arm/vfp/entry.S b/arch/arm/vfp/entry.S
+index 9a89264cdcc0b..6dabb47617781 100644
+--- a/arch/arm/vfp/entry.S
++++ b/arch/arm/vfp/entry.S
+@@ -22,15 +22,13 @@
+ @  IRQs enabled.
+ @
+ ENTRY(do_vfp)
+-	local_bh_disable r10, r4
++	mov	r1, r10
++	mov	r3, r9
+  	ldr	r4, .LCvfp
+-	ldr	r11, [r10, #TI_CPU]	@ CPU number
+-	add	r10, r10, #TI_VFPSTATE	@ r10 = workspace
+ 	ldr	pc, [r4]		@ call VFP entry point
+ ENDPROC(do_vfp)
+ 
+ ENTRY(vfp_null_entry)
+-	local_bh_enable_ti r10, r4
+ 	ret	lr
+ ENDPROC(vfp_null_entry)
+ 
+diff --git a/arch/arm/vfp/vfphw.S b/arch/arm/vfp/vfphw.S
+index 26c4f61ecfa39..60acd42e05786 100644
+--- a/arch/arm/vfp/vfphw.S
++++ b/arch/arm/vfp/vfphw.S
+@@ -6,9 +6,9 @@
+  *  Written by Deep Blue Solutions Limited.
+  *
+  * This code is called from the kernel's undefined instruction trap.
+- * r9 holds the return address for successful handling.
++ * r1 holds the thread_info pointer
++ * r3 holds the return address for successful handling.
+  * lr holds the return address for unrecognised instructions.
+- * r10 points at the start of the private FP workspace in the thread structure
+  * sp points to a struct pt_regs (as defined in include/asm/proc/ptrace.h)
+  */
+ #include <linux/init.h>
+@@ -69,13 +69,17 @@
+ @ VFP hardware support entry point.
+ @
+ @  r0  = instruction opcode (32-bit ARM or two 16-bit Thumb)
++@  r1  = thread_info pointer
+ @  r2  = PC value to resume execution after successful emulation
+-@  r9  = normal "successful" return address
+-@  r10 = vfp_state union
+-@  r11 = CPU number
++@  r3  = normal "successful" return address
+ @  lr  = unrecognised instruction return address
+ @  IRQs enabled.
+ ENTRY(vfp_support_entry)
++	local_bh_disable r1, r4
++
++	ldr	r11, [r1, #TI_CPU]	@ CPU number
++	add	r10, r1, #TI_VFPSTATE	@ r10 = workspace
++
+ 	DBGSTR3	"instr %08x pc %08x state %p", r0, r2, r10
+ 
+ 	.fpu	vfpv2
+@@ -85,9 +89,9 @@ ENTRY(vfp_support_entry)
+ 	bne	look_for_VFP_exceptions	@ VFP is already enabled
+ 
+ 	DBGSTR1 "enable %x", r10
+-	ldr	r3, vfp_current_hw_state_address
++	ldr	r9, vfp_current_hw_state_address
+ 	orr	r1, r1, #FPEXC_EN	@ user FPEXC has the enable bit set
+-	ldr	r4, [r3, r11, lsl #2]	@ vfp_current_hw_state pointer
++	ldr	r4, [r9, r11, lsl #2]	@ vfp_current_hw_state pointer
+ 	bic	r5, r1, #FPEXC_EX	@ make sure exceptions are disabled
+ 	cmp	r4, r10			@ this thread owns the hw context?
+ #ifndef CONFIG_SMP
+@@ -146,7 +150,7 @@ vfp_reload_hw:
+ #endif
+ 
+ 	DBGSTR1	"load state %p", r10
+-	str	r10, [r3, r11, lsl #2]	@ update the vfp_current_hw_state pointer
++	str	r10, [r9, r11, lsl #2]	@ update the vfp_current_hw_state pointer
+ 					@ Load the saved state back into the VFP
+ 	VFPFLDMIA r10, r5		@ reload the working registers while
+ 					@ FPEXC is in a safe state
+@@ -176,7 +180,7 @@ vfp_hw_state_valid:
+ 					@ always subtract 4 from the following
+ 					@ instruction address.
+ 	local_bh_enable_ti r10, r4
+-	ret	r9			@ we think we have handled things
++	ret	r3			@ we think we have handled things
+ 
+ 
+ look_for_VFP_exceptions:
+@@ -206,7 +210,7 @@ skip:
+ process_exception:
+ 	DBGSTR	"bounce"
+ 	mov	r2, sp			@ nothing stacked - regdump is at TOS
+-	mov	lr, r9			@ setup for a return to the user code.
++	mov	lr, r3			@ setup for a return to the user code.
+ 
+ 	@ Now call the C code to package up the bounce to the support code
+ 	@   r0 holds the trigger instruction
+diff --git a/arch/arm64/boot/dts/amlogic/meson-g12b-radxa-zero2.dts b/arch/arm64/boot/dts/amlogic/meson-g12b-radxa-zero2.dts
+index 9a60c5ec20725..890f5bfebb030 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-g12b-radxa-zero2.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-g12b-radxa-zero2.dts
+@@ -360,7 +360,7 @@
+ 	pinctrl-0 = <&pwm_e_pins>;
+ 	pinctrl-names = "default";
+ 	clocks = <&xtal>;
+-	clock-names = "clkin2";
++	clock-names = "clkin0";
+ 	status = "okay";
+ };
+ 
+@@ -368,7 +368,7 @@
+ 	pinctrl-0 = <&pwm_ao_a_pins>;
+ 	pinctrl-names = "default";
+ 	clocks = <&xtal>;
+-	clock-names = "clkin3";
++	clock-names = "clkin0";
+ 	status = "okay";
+ };
+ 
+@@ -376,7 +376,7 @@
+ 	pinctrl-0 = <&pwm_ao_d_e_pins>;
+ 	pinctrl-names = "default";
+ 	clocks = <&xtal>;
+-	clock-names = "clkin4";
++	clock-names = "clkin1";
+ 	status = "okay";
+ };
+ 
+diff --git a/arch/arm64/boot/dts/apple/t8103-j274.dts b/arch/arm64/boot/dts/apple/t8103-j274.dts
+index b52ddc4098939..1c3e37f86d46d 100644
+--- a/arch/arm64/boot/dts/apple/t8103-j274.dts
++++ b/arch/arm64/boot/dts/apple/t8103-j274.dts
+@@ -37,10 +37,12 @@
+ 
+ &port01 {
+ 	bus-range = <2 2>;
++	status = "okay";
+ };
+ 
+ &port02 {
+ 	bus-range = <3 3>;
++	status = "okay";
+ 	ethernet0: ethernet@0,0 {
+ 		reg = <0x30000 0x0 0x0 0x0 0x0>;
+ 		/* To be filled by the loader */
+@@ -48,6 +50,14 @@
+ 	};
+ };
+ 
++&pcie0_dart_1 {
++	status = "okay";
++};
++
++&pcie0_dart_2 {
++	status = "okay";
++};
++
+ &i2c2 {
+ 	status = "okay";
+ };
+diff --git a/arch/arm64/boot/dts/apple/t8103-j293.dts b/arch/arm64/boot/dts/apple/t8103-j293.dts
+index 151074109a114..c363dfef80709 100644
+--- a/arch/arm64/boot/dts/apple/t8103-j293.dts
++++ b/arch/arm64/boot/dts/apple/t8103-j293.dts
+@@ -25,21 +25,6 @@
+ 	brcm,board-type = "apple,honshu";
+ };
+ 
+-/*
+- * Remove unused PCIe ports and disable the associated DARTs.
+- */
+-
+-&pcie0_dart_1 {
+-	status = "disabled";
+-};
+-
+-&pcie0_dart_2 {
+-	status = "disabled";
+-};
+-
+-/delete-node/ &port01;
+-/delete-node/ &port02;
+-
+ &i2c2 {
+ 	status = "okay";
+ };
+diff --git a/arch/arm64/boot/dts/apple/t8103-j313.dts b/arch/arm64/boot/dts/apple/t8103-j313.dts
+index bc1f865aa7909..08409be1cf357 100644
+--- a/arch/arm64/boot/dts/apple/t8103-j313.dts
++++ b/arch/arm64/boot/dts/apple/t8103-j313.dts
+@@ -24,18 +24,3 @@
+ &wifi0 {
+ 	brcm,board-type = "apple,shikoku";
+ };
+-
+-/*
+- * Remove unused PCIe ports and disable the associated DARTs.
+- */
+-
+-&pcie0_dart_1 {
+-	status = "disabled";
+-};
+-
+-&pcie0_dart_2 {
+-	status = "disabled";
+-};
+-
+-/delete-node/ &port01;
+-/delete-node/ &port02;
+diff --git a/arch/arm64/boot/dts/apple/t8103-j456.dts b/arch/arm64/boot/dts/apple/t8103-j456.dts
+index 2db425ceb30f6..58c8e43789b48 100644
+--- a/arch/arm64/boot/dts/apple/t8103-j456.dts
++++ b/arch/arm64/boot/dts/apple/t8103-j456.dts
+@@ -55,13 +55,23 @@
+ 
+ &port01 {
+ 	bus-range = <2 2>;
++	status = "okay";
+ };
+ 
+ &port02 {
+ 	bus-range = <3 3>;
++	status = "okay";
+ 	ethernet0: ethernet@0,0 {
+ 		reg = <0x30000 0x0 0x0 0x0 0x0>;
+ 		/* To be filled by the loader */
+ 		local-mac-address = [00 10 18 00 00 00];
+ 	};
+ };
++
++&pcie0_dart_1 {
++	status = "okay";
++};
++
++&pcie0_dart_2 {
++	status = "okay";
++};
+diff --git a/arch/arm64/boot/dts/apple/t8103-j457.dts b/arch/arm64/boot/dts/apple/t8103-j457.dts
+index 3821ff146c56b..152f95fd49a21 100644
+--- a/arch/arm64/boot/dts/apple/t8103-j457.dts
++++ b/arch/arm64/boot/dts/apple/t8103-j457.dts
+@@ -37,6 +37,7 @@
+ 
+ &port02 {
+ 	bus-range = <3 3>;
++	status = "okay";
+ 	ethernet0: ethernet@0,0 {
+ 		reg = <0x30000 0x0 0x0 0x0 0x0>;
+ 		/* To be filled by the loader */
+@@ -44,12 +45,6 @@
+ 	};
+ };
+ 
+-/*
+- * Remove unused PCIe port and disable the associated DART.
+- */
+-
+-&pcie0_dart_1 {
+-	status = "disabled";
++&pcie0_dart_2 {
++	status = "okay";
+ };
+-
+-/delete-node/ &port01;
+diff --git a/arch/arm64/boot/dts/apple/t8103.dtsi b/arch/arm64/boot/dts/apple/t8103.dtsi
+index 9859219699f45..87a9c1ba6d0f4 100644
+--- a/arch/arm64/boot/dts/apple/t8103.dtsi
++++ b/arch/arm64/boot/dts/apple/t8103.dtsi
+@@ -724,6 +724,7 @@
+ 			interrupt-parent = <&aic>;
+ 			interrupts = <AIC_IRQ 699 IRQ_TYPE_LEVEL_HIGH>;
+ 			power-domains = <&ps_apcie_gp>;
++			status = "disabled";
+ 		};
+ 
+ 		pcie0_dart_2: iommu@683008000 {
+@@ -733,6 +734,7 @@
+ 			interrupt-parent = <&aic>;
+ 			interrupts = <AIC_IRQ 702 IRQ_TYPE_LEVEL_HIGH>;
+ 			power-domains = <&ps_apcie_gp>;
++			status = "disabled";
+ 		};
+ 
+ 		pcie0: pcie@690000000 {
+@@ -807,6 +809,7 @@
+ 						<0 0 0 2 &port01 0 0 0 1>,
+ 						<0 0 0 3 &port01 0 0 0 2>,
+ 						<0 0 0 4 &port01 0 0 0 3>;
++				status = "disabled";
+ 			};
+ 
+ 			port02: pci@2,0 {
+@@ -826,6 +829,7 @@
+ 						<0 0 0 2 &port02 0 0 0 1>,
+ 						<0 0 0 3 &port02 0 0 0 2>,
+ 						<0 0 0 4 &port02 0 0 0 3>;
++				status = "disabled";
+ 			};
+ 		};
+ 	};
+diff --git a/arch/arm64/boot/dts/broadcom/bcmbca/bcm4908-asus-gt-ac5300.dts b/arch/arm64/boot/dts/broadcom/bcmbca/bcm4908-asus-gt-ac5300.dts
+index 839ca33178b01..d94a53d68320b 100644
+--- a/arch/arm64/boot/dts/broadcom/bcmbca/bcm4908-asus-gt-ac5300.dts
++++ b/arch/arm64/boot/dts/broadcom/bcmbca/bcm4908-asus-gt-ac5300.dts
+@@ -120,7 +120,7 @@
+ };
+ 
+ &leds {
+-	led-power@11 {
++	led@11 {
+ 		reg = <0x11>;
+ 		function = LED_FUNCTION_POWER;
+ 		color = <LED_COLOR_ID_WHITE>;
+@@ -130,7 +130,7 @@
+ 		pinctrl-0 = <&pins_led_17_a>;
+ 	};
+ 
+-	led-wan-red@12 {
++	led@12 {
+ 		reg = <0x12>;
+ 		function = LED_FUNCTION_WAN;
+ 		color = <LED_COLOR_ID_RED>;
+@@ -139,7 +139,7 @@
+ 		pinctrl-0 = <&pins_led_18_a>;
+ 	};
+ 
+-	led-wps@14 {
++	led@14 {
+ 		reg = <0x14>;
+ 		function = LED_FUNCTION_WPS;
+ 		color = <LED_COLOR_ID_WHITE>;
+@@ -148,7 +148,7 @@
+ 		pinctrl-0 = <&pins_led_20_a>;
+ 	};
+ 
+-	led-wan-white@15 {
++	led@15 {
+ 		reg = <0x15>;
+ 		function = LED_FUNCTION_WAN;
+ 		color = <LED_COLOR_ID_WHITE>;
+@@ -157,7 +157,7 @@
+ 		pinctrl-0 = <&pins_led_21_a>;
+ 	};
+ 
+-	led-lan@19 {
++	led@19 {
+ 		reg = <0x19>;
+ 		function = LED_FUNCTION_LAN;
+ 		color = <LED_COLOR_ID_WHITE>;
+diff --git a/arch/arm64/boot/dts/broadcom/bcmbca/bcm4908.dtsi b/arch/arm64/boot/dts/broadcom/bcmbca/bcm4908.dtsi
+index eb2a78f4e0332..343b320cbd746 100644
+--- a/arch/arm64/boot/dts/broadcom/bcmbca/bcm4908.dtsi
++++ b/arch/arm64/boot/dts/broadcom/bcmbca/bcm4908.dtsi
+@@ -254,7 +254,7 @@
+ 			};
+ 		};
+ 
+-		procmon: syscon@280000 {
++		procmon: bus@280000 {
+ 			compatible = "simple-bus";
+ 			reg = <0x280000 0x1000>;
+ 			ranges;
+@@ -538,7 +538,7 @@
+ 			reg = <0x1800 0x600>, <0x2000 0x10>;
+ 			reg-names = "nand", "nand-int-base";
+ 			interrupts = <GIC_SPI 37 IRQ_TYPE_LEVEL_HIGH>;
+-			interrupt-names = "nand";
++			interrupt-names = "nand_ctlrdy";
+ 			status = "okay";
+ 
+ 			nandcs: nand@0 {
+diff --git a/arch/arm64/boot/dts/freescale/imx8mp.dtsi b/arch/arm64/boot/dts/freescale/imx8mp.dtsi
+index a237275ee0179..3f9d67341484b 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mp.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mp.dtsi
+@@ -1151,7 +1151,7 @@
+ 
+ 			media_blk_ctrl: blk-ctrl@32ec0000 {
+ 				compatible = "fsl,imx8mp-media-blk-ctrl",
+-					     "simple-bus", "syscon";
++					     "syscon";
+ 				reg = <0x32ec0000 0x10000>;
+ 				#address-cells = <1>;
+ 				#size-cells = <1>;
+diff --git a/arch/arm64/boot/dts/mediatek/mt8192-asurada.dtsi b/arch/arm64/boot/dts/mediatek/mt8192-asurada.dtsi
+index 9f12257ab4e7a..41f9692cbcd47 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8192-asurada.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8192-asurada.dtsi
+@@ -1400,7 +1400,7 @@
+ 				regulator-compatible = "vbuck1";
+ 				regulator-name = "Vgpu";
+ 				regulator-min-microvolt = <606250>;
+-				regulator-max-microvolt = <1193750>;
++				regulator-max-microvolt = <800000>;
+ 				regulator-enable-ramp-delay = <256>;
+ 				regulator-allowed-modes = <0 1 2>;
+ 			};
+diff --git a/arch/arm64/boot/dts/qcom/apq8096-db820c.dts b/arch/arm64/boot/dts/qcom/apq8096-db820c.dts
+index fe6c415e82297..5251dbcab4d90 100644
+--- a/arch/arm64/boot/dts/qcom/apq8096-db820c.dts
++++ b/arch/arm64/boot/dts/qcom/apq8096-db820c.dts
+@@ -706,8 +706,7 @@
+ &pmi8994_spmi_regulators {
+ 	vdd_s2-supply = <&vph_pwr>;
+ 
+-	vdd_gfx: s2@1700 {
+-		reg = <0x1700 0x100>;
++	vdd_gfx: s2 {
+ 		regulator-name = "VDD_GFX";
+ 		regulator-min-microvolt = <980000>;
+ 		regulator-max-microvolt = <980000>;
+diff --git a/arch/arm64/boot/dts/qcom/ipq6018.dtsi b/arch/arm64/boot/dts/qcom/ipq6018.dtsi
+index bbd94025ff5d8..9ff4e9d45065b 100644
+--- a/arch/arm64/boot/dts/qcom/ipq6018.dtsi
++++ b/arch/arm64/boot/dts/qcom/ipq6018.dtsi
+@@ -738,8 +738,8 @@
+ 			phys = <&pcie_phy0>;
+ 			phy-names = "pciephy";
+ 
+-			ranges = <0x81000000 0 0x20200000 0 0x20200000 0 0x10000>,
+-				 <0x82000000 0 0x20220000 0 0x20220000 0 0xfde0000>;
++			ranges = <0x81000000 0x0 0x00000000 0x0 0x20200000 0x0 0x10000>,
++				 <0x82000000 0x0 0x20220000 0x0 0x20220000 0x0 0xfde0000>;
+ 
+ 			interrupts = <GIC_SPI 52 IRQ_TYPE_LEVEL_HIGH>;
+ 			interrupt-names = "msi";
+diff --git a/arch/arm64/boot/dts/qcom/ipq8074.dtsi b/arch/arm64/boot/dts/qcom/ipq8074.dtsi
+index 62d05d740646b..e8dad3ff4fcc7 100644
+--- a/arch/arm64/boot/dts/qcom/ipq8074.dtsi
++++ b/arch/arm64/boot/dts/qcom/ipq8074.dtsi
+@@ -780,10 +780,8 @@
+ 			phys = <&pcie_phy1>;
+ 			phy-names = "pciephy";
+ 
+-			ranges = <0x81000000 0 0x10200000 0x10200000
+-				  0 0x10000>,   /* downstream I/O */
+-				 <0x82000000 0 0x10220000 0x10220000
+-				  0 0xfde0000>; /* non-prefetchable memory */
++			ranges = <0x81000000 0x0 0x00000000 0x10200000 0x0 0x10000>,   /* I/O */
++				 <0x82000000 0x0 0x10220000 0x10220000 0x0 0xfde0000>; /* MEM */
+ 
+ 			interrupts = <GIC_SPI 85 IRQ_TYPE_LEVEL_HIGH>;
+ 			interrupt-names = "msi";
+@@ -844,10 +842,8 @@
+ 			phys = <&pcie_phy0>;
+ 			phy-names = "pciephy";
+ 
+-			ranges = <0x81000000 0 0x20200000 0x20200000
+-				  0 0x10000>, /* downstream I/O */
+-				 <0x82000000 0 0x20220000 0x20220000
+-				  0 0xfde0000>; /* non-prefetchable memory */
++			ranges = <0x81000000 0x0 0x00000000 0x20200000 0x0 0x10000>,   /* I/O */
++				 <0x82000000 0x0 0x20220000 0x20220000 0x0 0xfde0000>; /* MEM */
+ 
+ 			interrupts = <GIC_SPI 52 IRQ_TYPE_LEVEL_HIGH>;
+ 			interrupt-names = "msi";
+diff --git a/arch/arm64/boot/dts/qcom/msm8916.dtsi b/arch/arm64/boot/dts/qcom/msm8916.dtsi
+index 0733c2f4f3798..0d5283805f42c 100644
+--- a/arch/arm64/boot/dts/qcom/msm8916.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8916.dtsi
+@@ -503,7 +503,7 @@
+ 				bits = <1 7>;
+ 			};
+ 
+-			tsens_mode: mode@ec {
++			tsens_mode: mode@ef {
+ 				reg = <0xef 0x1>;
+ 				bits = <5 3>;
+ 			};
+diff --git a/arch/arm64/boot/dts/qcom/msm8956-sony-xperia-loire.dtsi b/arch/arm64/boot/dts/qcom/msm8956-sony-xperia-loire.dtsi
+index 67baced639c91..085d79542e1bb 100644
+--- a/arch/arm64/boot/dts/qcom/msm8956-sony-xperia-loire.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8956-sony-xperia-loire.dtsi
+@@ -280,3 +280,7 @@
+ 	vdda3p3-supply = <&pm8950_l13>;
+ 	status = "okay";
+ };
++
++&xo_board {
++	clock-frequency = <19200000>;
++};
+diff --git a/arch/arm64/boot/dts/qcom/msm8976.dtsi b/arch/arm64/boot/dts/qcom/msm8976.dtsi
+index 2d360d05aa5ef..e55baafd9efd0 100644
+--- a/arch/arm64/boot/dts/qcom/msm8976.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8976.dtsi
+@@ -20,6 +20,13 @@
+ 
+ 	chosen { };
+ 
++	clocks {
++		xo_board: xo-board {
++			compatible = "fixed-clock";
++			#clock-cells = <0>;
++		};
++	};
++
+ 	cpus {
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+@@ -351,6 +358,8 @@
+ 
+ 				rpmcc: clock-controller {
+ 					compatible = "qcom,rpmcc-msm8976", "qcom,rpmcc";
++					clocks = <&xo_board>;
++					clock-names = "xo";
+ 					#clock-cells = <1>;
+ 				};
+ 
+diff --git a/arch/arm64/boot/dts/qcom/msm8992-lg-bullhead.dtsi b/arch/arm64/boot/dts/qcom/msm8992-lg-bullhead.dtsi
+index cd77dcb558722..b8f2a01bcb96c 100644
+--- a/arch/arm64/boot/dts/qcom/msm8992-lg-bullhead.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8992-lg-bullhead.dtsi
+@@ -60,11 +60,6 @@
+ 			reg = <0x0 0x05000000 0x0 0x1a00000>;
+ 			no-map;
+ 		};
+-
+-		reserved@6c00000 {
+-			reg = <0x0 0x06c00000 0x0 0x400000>;
+-			no-map;
+-		};
+ 	};
+ };
+ 
+diff --git a/arch/arm64/boot/dts/qcom/msm8994-huawei-angler-rev-101.dts b/arch/arm64/boot/dts/qcom/msm8994-huawei-angler-rev-101.dts
+index 7b0f62144c3ee..29e79ae0849d8 100644
+--- a/arch/arm64/boot/dts/qcom/msm8994-huawei-angler-rev-101.dts
++++ b/arch/arm64/boot/dts/qcom/msm8994-huawei-angler-rev-101.dts
+@@ -2,7 +2,7 @@
+ /*
+  * Copyright (c) 2015, Huawei Inc. All rights reserved.
+  * Copyright (c) 2016, The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2022, Petr Vorel <petr.vorel@gmail.com>
++ * Copyright (c) 2021-2023, Petr Vorel <petr.vorel@gmail.com>
+  */
+ 
+ /dts-v1/;
+@@ -31,13 +31,18 @@
+ 		#size-cells = <2>;
+ 		ranges;
+ 
++		cont_splash_mem: memory@3401000 {
++			reg = <0 0x03401000 0 0x1000000>;
++			no-map;
++		};
++
+ 		tzapp_mem: tzapp@4800000 {
+ 			reg = <0 0x04800000 0 0x1900000>;
+ 			no-map;
+ 		};
+ 
+-		removed_region: reserved@6300000 {
+-			reg = <0 0x06300000 0 0xD00000>;
++		reserved@6300000 {
++			reg = <0 0x06300000 0 0x700000>;
+ 			no-map;
+ 		};
+ 	};
+diff --git a/arch/arm64/boot/dts/qcom/msm8994-msft-lumia-octagon.dtsi b/arch/arm64/boot/dts/qcom/msm8994-msft-lumia-octagon.dtsi
+index 4520a7e86d5be..0c112b7b57ea1 100644
+--- a/arch/arm64/boot/dts/qcom/msm8994-msft-lumia-octagon.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8994-msft-lumia-octagon.dtsi
+@@ -542,8 +542,7 @@
+ };
+ 
+ &pmi8994_spmi_regulators {
+-	vdd_gfx: s2@1700 {
+-		reg = <0x1700 0x100>;
++	vdd_gfx: s2 {
+ 		regulator-min-microvolt = <980000>;
+ 		regulator-max-microvolt = <980000>;
+ 	};
+diff --git a/arch/arm64/boot/dts/qcom/msm8994-sony-xperia-kitakami.dtsi b/arch/arm64/boot/dts/qcom/msm8994-sony-xperia-kitakami.dtsi
+index 3ceb86b06209a..26059f861250f 100644
+--- a/arch/arm64/boot/dts/qcom/msm8994-sony-xperia-kitakami.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8994-sony-xperia-kitakami.dtsi
+@@ -173,8 +173,7 @@
+ 	 * power domain.. which still isn't enough and forces us to bind
+ 	 * OXILI_CX and OXILI_GX together!
+ 	 */
+-	vdd_gfx: s2@1700 {
+-		reg = <0x1700 0x100>;
++	vdd_gfx: s2 {
+ 		regulator-name = "VDD_GFX";
+ 		regulator-min-microvolt = <980000>;
+ 		regulator-max-microvolt = <980000>;
+diff --git a/arch/arm64/boot/dts/qcom/msm8994.dtsi b/arch/arm64/boot/dts/qcom/msm8994.dtsi
+index 9ff9d35496d21..24c3fced8df71 100644
+--- a/arch/arm64/boot/dts/qcom/msm8994.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8994.dtsi
+@@ -228,6 +228,11 @@
+ 			reg = <0 0xc9400000 0 0x3f00000>;
+ 			no-map;
+ 		};
++
++		reserved@6c00000 {
++			reg = <0 0x06c00000 0 0x400000>;
++			no-map;
++		};
+ 	};
+ 
+ 	smd {
+diff --git a/arch/arm64/boot/dts/qcom/msm8996.dtsi b/arch/arm64/boot/dts/qcom/msm8996.dtsi
+index 905678e7175d8..66af9526c98ba 100644
+--- a/arch/arm64/boot/dts/qcom/msm8996.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8996.dtsi
+@@ -1851,8 +1851,8 @@
+ 
+ 				#address-cells = <3>;
+ 				#size-cells = <2>;
+-				ranges = <0x01000000 0x0 0x0c200000 0x0c200000 0x0 0x100000>,
+-					<0x02000000 0x0 0x0c300000 0x0c300000 0x0 0xd00000>;
++				ranges = <0x01000000 0x0 0x00000000 0x0c200000 0x0 0x100000>,
++					 <0x02000000 0x0 0x0c300000 0x0c300000 0x0 0xd00000>;
+ 
+ 				device_type = "pci";
+ 
+@@ -1905,8 +1905,8 @@
+ 
+ 				#address-cells = <3>;
+ 				#size-cells = <2>;
+-				ranges = <0x01000000 0x0 0x0d200000 0x0d200000 0x0 0x100000>,
+-					<0x02000000 0x0 0x0d300000 0x0d300000 0x0 0xd00000>;
++				ranges = <0x01000000 0x0 0x00000000 0x0d200000 0x0 0x100000>,
++					 <0x02000000 0x0 0x0d300000 0x0d300000 0x0 0xd00000>;
+ 
+ 				device_type = "pci";
+ 
+@@ -1956,8 +1956,8 @@
+ 
+ 				#address-cells = <3>;
+ 				#size-cells = <2>;
+-				ranges = <0x01000000 0x0 0x0e200000 0x0e200000 0x0 0x100000>,
+-					<0x02000000 0x0 0x0e300000 0x0e300000 0x0 0x1d00000>;
++				ranges = <0x01000000 0x0 0x00000000 0x0e200000 0x0 0x100000>,
++					 <0x02000000 0x0 0x0e300000 0x0e300000 0x0 0x1d00000>;
+ 
+ 				device_type = "pci";
+ 
+diff --git a/arch/arm64/boot/dts/qcom/msm8998-oneplus-cheeseburger.dts b/arch/arm64/boot/dts/qcom/msm8998-oneplus-cheeseburger.dts
+index d36b36af49d0b..fac8b3510cd3a 100644
+--- a/arch/arm64/boot/dts/qcom/msm8998-oneplus-cheeseburger.dts
++++ b/arch/arm64/boot/dts/qcom/msm8998-oneplus-cheeseburger.dts
+@@ -34,7 +34,7 @@
+ &pmi8998_gpios {
+ 	button_backlight_default: button-backlight-state {
+ 		pins = "gpio5";
+-		function = "gpio";
++		function = "normal";
+ 		bias-pull-down;
+ 		qcom,drive-strength = <PMIC_GPIO_STRENGTH_NO>;
+ 	};
+diff --git a/arch/arm64/boot/dts/qcom/msm8998.dtsi b/arch/arm64/boot/dts/qcom/msm8998.dtsi
+index 8bc1c59127e50..2a2cfa905f5e0 100644
+--- a/arch/arm64/boot/dts/qcom/msm8998.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8998.dtsi
+@@ -922,7 +922,7 @@
+ 			phy-names = "pciephy";
+ 			status = "disabled";
+ 
+-			ranges = <0x01000000 0x0 0x1b200000 0x1b200000 0x0 0x100000>,
++			ranges = <0x01000000 0x0 0x00000000 0x1b200000 0x0 0x100000>,
+ 				 <0x02000000 0x0 0x1b300000 0x1b300000 0x0 0xd00000>;
+ 
+ 			#interrupt-cells = <1>;
+@@ -1524,7 +1524,7 @@
+ 			compatible = "arm,coresight-stm", "arm,primecell";
+ 			reg = <0x06002000 0x1000>,
+ 			      <0x16280000 0x180000>;
+-			reg-names = "stm-base", "stm-data-base";
++			reg-names = "stm-base", "stm-stimulus-base";
+ 			status = "disabled";
+ 
+ 			clocks = <&rpmcc RPM_SMD_QDSS_CLK>, <&rpmcc RPM_SMD_QDSS_A_CLK>;
+diff --git a/arch/arm64/boot/dts/qcom/pmi8994.dtsi b/arch/arm64/boot/dts/qcom/pmi8994.dtsi
+index a0af91698d497..0192968f4d9b3 100644
+--- a/arch/arm64/boot/dts/qcom/pmi8994.dtsi
++++ b/arch/arm64/boot/dts/qcom/pmi8994.dtsi
+@@ -49,8 +49,6 @@
+ 
+ 		pmi8994_spmi_regulators: regulators {
+ 			compatible = "qcom,pmi8994-regulators";
+-			#address-cells = <1>;
+-			#size-cells = <1>;
+ 		};
+ 
+ 		pmi8994_wled: wled@d800 {
+diff --git a/arch/arm64/boot/dts/qcom/qdu1000.dtsi b/arch/arm64/boot/dts/qcom/qdu1000.dtsi
+index f234159d2060e..c72a51c32a300 100644
+--- a/arch/arm64/boot/dts/qcom/qdu1000.dtsi
++++ b/arch/arm64/boot/dts/qcom/qdu1000.dtsi
+@@ -412,8 +412,6 @@
+ 				pinctrl-0 = <&qup_uart0_default>;
+ 				pinctrl-names = "default";
+ 				interrupts = <GIC_SPI 601 IRQ_TYPE_LEVEL_HIGH>;
+-				#address-cells = <1>;
+-				#size-cells = <0>;
+ 				status = "disabled";
+ 			};
+ 
+@@ -581,8 +579,6 @@
+ 				pinctrl-0 = <&qup_uart7_tx>, <&qup_uart7_rx>;
+ 				pinctrl-names = "default";
+ 				interrupts = <GIC_SPI 608 IRQ_TYPE_LEVEL_HIGH>;
+-				#address-cells = <1>;
+-				#size-cells = <0>;
+ 				status = "disabled";
+ 			};
+ 		};
+diff --git a/arch/arm64/boot/dts/qcom/sc7180-trogdor-lazor-limozeen-nots-r4.dts b/arch/arm64/boot/dts/qcom/sc7180-trogdor-lazor-limozeen-nots-r4.dts
+index 850776c5323d1..70d5a7aa88735 100644
+--- a/arch/arm64/boot/dts/qcom/sc7180-trogdor-lazor-limozeen-nots-r4.dts
++++ b/arch/arm64/boot/dts/qcom/sc7180-trogdor-lazor-limozeen-nots-r4.dts
+@@ -26,7 +26,7 @@
+ 		interrupt-parent = <&tlmm>;
+ 		interrupts = <58 IRQ_TYPE_EDGE_FALLING>;
+ 
+-		vcc-supply = <&pp3300_fp_tp>;
++		vdd-supply = <&pp3300_fp_tp>;
+ 		hid-descr-addr = <0x20>;
+ 
+ 		wakeup-source;
+diff --git a/arch/arm64/boot/dts/qcom/sc7180-trogdor-pazquel.dtsi b/arch/arm64/boot/dts/qcom/sc7180-trogdor-pazquel.dtsi
+index d06cc4ea33756..8823edbb4d6e2 100644
+--- a/arch/arm64/boot/dts/qcom/sc7180-trogdor-pazquel.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7180-trogdor-pazquel.dtsi
+@@ -39,7 +39,7 @@
+ 		interrupt-parent = <&tlmm>;
+ 		interrupts = <0 IRQ_TYPE_EDGE_FALLING>;
+ 
+-		vcc-supply = <&pp3300_fp_tp>;
++		vdd-supply = <&pp3300_fp_tp>;
+ 		post-power-on-delay-ms = <100>;
+ 		hid-descr-addr = <0x0001>;
+ 
+diff --git a/arch/arm64/boot/dts/qcom/sc7180.dtsi b/arch/arm64/boot/dts/qcom/sc7180.dtsi
+index ebfa21e9ed8a8..fe62ce516c4e4 100644
+--- a/arch/arm64/boot/dts/qcom/sc7180.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7180.dtsi
+@@ -1540,7 +1540,7 @@
+ 				function = "qspi_data";
+ 			};
+ 
+-			qspi_data12: qspi-data12-state {
++			qspi_data23: qspi-data23-state {
+ 				pins = "gpio66", "gpio67";
+ 				function = "qspi_data";
+ 			};
+diff --git a/arch/arm64/boot/dts/qcom/sc7280-herobrine-villager.dtsi b/arch/arm64/boot/dts/qcom/sc7280-herobrine-villager.dtsi
+index 818d4046d2c7f..38c8a3679fcb3 100644
+--- a/arch/arm64/boot/dts/qcom/sc7280-herobrine-villager.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7280-herobrine-villager.dtsi
+@@ -33,7 +33,7 @@ ap_tp_i2c: &i2c0 {
+ 		interrupts = <7 IRQ_TYPE_EDGE_FALLING>;
+ 
+ 		hid-descr-addr = <0x20>;
+-		vcc-supply = <&pp3300_z1>;
++		vdd-supply = <&pp3300_z1>;
+ 
+ 		wakeup-source;
+ 	};
+diff --git a/arch/arm64/boot/dts/qcom/sc7280.dtsi b/arch/arm64/boot/dts/qcom/sc7280.dtsi
+index 8f4ab6bd28864..95b3819bf4c0e 100644
+--- a/arch/arm64/boot/dts/qcom/sc7280.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7280.dtsi
+@@ -2077,7 +2077,7 @@
+ 			#address-cells = <3>;
+ 			#size-cells = <2>;
+ 
+-			ranges = <0x01000000 0x0 0x40200000 0x0 0x40200000 0x0 0x100000>,
++			ranges = <0x01000000 0x0 0x00000000 0x0 0x40200000 0x0 0x100000>,
+ 				 <0x02000000 0x0 0x40300000 0x0 0x40300000 0x0 0x1fd00000>;
+ 
+ 			interrupts = <GIC_SPI 307 IRQ_TYPE_LEVEL_HIGH>;
+@@ -3595,12 +3595,17 @@
+ 			      <0 0x088e2000 0 0x1000>;
+ 			interrupts-extended = <&pdc 11 IRQ_TYPE_LEVEL_HIGH>;
+ 			ports {
++				#address-cells = <1>;
++				#size-cells = <0>;
++
+ 				port@0 {
++					reg = <0>;
+ 					eud_ep: endpoint {
+ 						remote-endpoint = <&usb2_role_switch>;
+ 					};
+ 				};
+ 				port@1 {
++					reg = <1>;
+ 					eud_con: endpoint {
+ 						remote-endpoint = <&con_eud>;
+ 					};
+@@ -3611,7 +3616,11 @@
+ 		eud_typec: connector {
+ 			compatible = "usb-c-connector";
+ 			ports {
++				#address-cells = <1>;
++				#size-cells = <0>;
++
+ 				port@0 {
++					reg = <0>;
+ 					con_eud: endpoint {
+ 						remote-endpoint = <&eud_con>;
+ 					};
+@@ -4344,7 +4353,7 @@
+ 				function = "qspi_data";
+ 			};
+ 
+-			qspi_data12: qspi-data12-state {
++			qspi_data23: qspi-data23-state {
+ 				pins = "gpio16", "gpio17";
+ 				function = "qspi_data";
+ 			};
+diff --git a/arch/arm64/boot/dts/qcom/sc8280xp.dtsi b/arch/arm64/boot/dts/qcom/sc8280xp.dtsi
+index 42bfa9fa5b967..03b679b75201d 100644
+--- a/arch/arm64/boot/dts/qcom/sc8280xp.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc8280xp.dtsi
+@@ -1657,7 +1657,7 @@
+ 			reg-names = "parf", "dbi", "elbi", "atu", "config";
+ 			#address-cells = <3>;
+ 			#size-cells = <2>;
+-			ranges = <0x01000000 0x0 0x30200000 0x0 0x30200000 0x0 0x100000>,
++			ranges = <0x01000000 0x0 0x00000000 0x0 0x30200000 0x0 0x100000>,
+ 				 <0x02000000 0x0 0x30300000 0x0 0x30300000 0x0 0x1d00000>;
+ 			bus-range = <0x00 0xff>;
+ 
+@@ -1756,7 +1756,7 @@
+ 			reg-names = "parf", "dbi", "elbi", "atu", "config";
+ 			#address-cells = <3>;
+ 			#size-cells = <2>;
+-			ranges = <0x01000000 0x0 0x32200000 0x0 0x32200000 0x0 0x100000>,
++			ranges = <0x01000000 0x0 0x00000000 0x0 0x32200000 0x0 0x100000>,
+ 				 <0x02000000 0x0 0x32300000 0x0 0x32300000 0x0 0x1d00000>;
+ 			bus-range = <0x00 0xff>;
+ 
+@@ -1853,7 +1853,7 @@
+ 			reg-names = "parf", "dbi", "elbi", "atu", "config";
+ 			#address-cells = <3>;
+ 			#size-cells = <2>;
+-			ranges = <0x01000000 0x0 0x34200000 0x0 0x34200000 0x0 0x100000>,
++			ranges = <0x01000000 0x0 0x00000000 0x0 0x34200000 0x0 0x100000>,
+ 				 <0x02000000 0x0 0x34300000 0x0 0x34300000 0x0 0x1d00000>;
+ 			bus-range = <0x00 0xff>;
+ 
+@@ -1953,7 +1953,7 @@
+ 			reg-names = "parf", "dbi", "elbi", "atu", "config";
+ 			#address-cells = <3>;
+ 			#size-cells = <2>;
+-			ranges = <0x01000000 0x0 0x38200000 0x0 0x38200000 0x0 0x100000>,
++			ranges = <0x01000000 0x0 0x00000000 0x0 0x38200000 0x0 0x100000>,
+ 				 <0x02000000 0x0 0x38300000 0x0 0x38300000 0x0 0x1d00000>;
+ 			bus-range = <0x00 0xff>;
+ 
+@@ -2050,7 +2050,7 @@
+ 			reg-names = "parf", "dbi", "elbi", "atu", "config";
+ 			#address-cells = <3>;
+ 			#size-cells = <2>;
+-			ranges = <0x01000000 0x0 0x3c200000 0x0 0x3c200000 0x0 0x100000>,
++			ranges = <0x01000000 0x0 0x00000000 0x0 0x3c200000 0x0 0x100000>,
+ 				 <0x02000000 0x0 0x3c300000 0x0 0x3c300000 0x0 0x1d00000>;
+ 			bus-range = <0x00 0xff>;
+ 
+@@ -2598,7 +2598,7 @@
+ 			reg = <0 0x03330000 0 0x2000>;
+ 			interrupts-extended = <&intc GIC_SPI 959 IRQ_TYPE_LEVEL_HIGH>,
+ 					      <&intc GIC_SPI 520 IRQ_TYPE_LEVEL_HIGH>;
+-			interrupt-names = "core", "wake";
++			interrupt-names = "core", "wakeup";
+ 
+ 			clocks = <&txmacro>;
+ 			clock-names = "iface";
+@@ -3253,7 +3253,7 @@
+ 				#sound-dai-cells = <0>;
+ 
+ 				operating-points-v2 = <&mdss0_dp0_opp_table>;
+-				power-domains = <&rpmhpd SC8280XP_CX>;
++				power-domains = <&rpmhpd SC8280XP_MMCX>;
+ 
+ 				status = "disabled";
+ 
+@@ -3331,7 +3331,7 @@
+ 				#sound-dai-cells = <0>;
+ 
+ 				operating-points-v2 = <&mdss0_dp1_opp_table>;
+-				power-domains = <&rpmhpd SC8280XP_CX>;
++				power-domains = <&rpmhpd SC8280XP_MMCX>;
+ 
+ 				status = "disabled";
+ 
+diff --git a/arch/arm64/boot/dts/qcom/sdm845.dtsi b/arch/arm64/boot/dts/qcom/sdm845.dtsi
+index 479859bd8ab33..c5e92851a4f08 100644
+--- a/arch/arm64/boot/dts/qcom/sdm845.dtsi
++++ b/arch/arm64/boot/dts/qcom/sdm845.dtsi
+@@ -94,7 +94,7 @@
+ 			reg = <0x0 0x0>;
+ 			enable-method = "psci";
+ 			capacity-dmips-mhz = <611>;
+-			dynamic-power-coefficient = <290>;
++			dynamic-power-coefficient = <154>;
+ 			qcom,freq-domain = <&cpufreq_hw 0>;
+ 			operating-points-v2 = <&cpu0_opp_table>;
+ 			interconnects = <&gladiator_noc MASTER_APPSS_PROC 3 &mem_noc SLAVE_EBI1 3>,
+@@ -120,7 +120,7 @@
+ 			reg = <0x0 0x100>;
+ 			enable-method = "psci";
+ 			capacity-dmips-mhz = <611>;
+-			dynamic-power-coefficient = <290>;
++			dynamic-power-coefficient = <154>;
+ 			qcom,freq-domain = <&cpufreq_hw 0>;
+ 			operating-points-v2 = <&cpu0_opp_table>;
+ 			interconnects = <&gladiator_noc MASTER_APPSS_PROC 3 &mem_noc SLAVE_EBI1 3>,
+@@ -142,7 +142,7 @@
+ 			reg = <0x0 0x200>;
+ 			enable-method = "psci";
+ 			capacity-dmips-mhz = <611>;
+-			dynamic-power-coefficient = <290>;
++			dynamic-power-coefficient = <154>;
+ 			qcom,freq-domain = <&cpufreq_hw 0>;
+ 			operating-points-v2 = <&cpu0_opp_table>;
+ 			interconnects = <&gladiator_noc MASTER_APPSS_PROC 3 &mem_noc SLAVE_EBI1 3>,
+@@ -164,7 +164,7 @@
+ 			reg = <0x0 0x300>;
+ 			enable-method = "psci";
+ 			capacity-dmips-mhz = <611>;
+-			dynamic-power-coefficient = <290>;
++			dynamic-power-coefficient = <154>;
+ 			qcom,freq-domain = <&cpufreq_hw 0>;
+ 			operating-points-v2 = <&cpu0_opp_table>;
+ 			interconnects = <&gladiator_noc MASTER_APPSS_PROC 3 &mem_noc SLAVE_EBI1 3>,
+@@ -2292,8 +2292,8 @@
+ 			#address-cells = <3>;
+ 			#size-cells = <2>;
+ 
+-			ranges = <0x01000000 0x0 0x60200000 0 0x60200000 0x0 0x100000>,
+-				 <0x02000000 0x0 0x60300000 0 0x60300000 0x0 0xd00000>;
++			ranges = <0x01000000 0x0 0x00000000 0x0 0x60200000 0x0 0x100000>,
++				 <0x02000000 0x0 0x60300000 0x0 0x60300000 0x0 0xd00000>;
+ 
+ 			interrupts = <GIC_SPI 141 IRQ_TYPE_LEVEL_HIGH>;
+ 			interrupt-names = "msi";
+@@ -2397,7 +2397,7 @@
+ 			#address-cells = <3>;
+ 			#size-cells = <2>;
+ 
+-			ranges = <0x01000000 0x0 0x40200000 0x0 0x40200000 0x0 0x100000>,
++			ranges = <0x01000000 0x0 0x00000000 0x0 0x40200000 0x0 0x100000>,
+ 				 <0x02000000 0x0 0x40300000 0x0 0x40300000 0x0 0x1fd00000>;
+ 
+ 			interrupts = <GIC_SPI 307 IRQ_TYPE_EDGE_RISING>;
+@@ -2763,7 +2763,7 @@
+ 				function = "qspi_data";
+ 			};
+ 
+-			qspi_data12: qspi-data12-state {
++			qspi_data23: qspi-data23-state {
+ 				pins = "gpio93", "gpio94";
+ 				function = "qspi_data";
+ 			};
+diff --git a/arch/arm64/boot/dts/qcom/sm8150.dtsi b/arch/arm64/boot/dts/qcom/sm8150.dtsi
+index 13e0ce8286061..733c896918c83 100644
+--- a/arch/arm64/boot/dts/qcom/sm8150.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8150.dtsi
+@@ -1799,8 +1799,8 @@
+ 			#address-cells = <3>;
+ 			#size-cells = <2>;
+ 
+-			ranges = <0x01000000 0x0 0x60200000 0 0x60200000 0x0 0x100000>,
+-				 <0x02000000 0x0 0x60300000 0 0x60300000 0x0 0x3d00000>;
++			ranges = <0x01000000 0x0 0x00000000 0x0 0x60200000 0x0 0x100000>,
++				 <0x02000000 0x0 0x60300000 0x0 0x60300000 0x0 0x3d00000>;
+ 
+ 			interrupts = <GIC_SPI 141 IRQ_TYPE_LEVEL_HIGH>;
+ 			interrupt-names = "msi";
+@@ -1895,7 +1895,7 @@
+ 			#address-cells = <3>;
+ 			#size-cells = <2>;
+ 
+-			ranges = <0x01000000 0x0 0x40200000 0x0 0x40200000 0x0 0x100000>,
++			ranges = <0x01000000 0x0 0x00000000 0x0 0x40200000 0x0 0x100000>,
+ 				 <0x02000000 0x0 0x40300000 0x0 0x40300000 0x0 0x1fd00000>;
+ 
+ 			interrupts = <GIC_SPI 307 IRQ_TYPE_EDGE_RISING>;
+diff --git a/arch/arm64/boot/dts/qcom/sm8250-xiaomi-elish.dts b/arch/arm64/boot/dts/qcom/sm8250-xiaomi-elish.dts
+index a85d47f7a9e82..7d2e18f3f432e 100644
+--- a/arch/arm64/boot/dts/qcom/sm8250-xiaomi-elish.dts
++++ b/arch/arm64/boot/dts/qcom/sm8250-xiaomi-elish.dts
+@@ -595,7 +595,7 @@
+ 
+ &usb_1_dwc3 {
+ 	dr_mode = "peripheral";
+-	maximum-spped = "high-speed";
++	maximum-speed = "high-speed";
+ 	/* Remove USB3 phy */
+ 	phys = <&usb_1_hsphy>;
+ 	phy-names = "usb2-phy";
+diff --git a/arch/arm64/boot/dts/qcom/sm8250.dtsi b/arch/arm64/boot/dts/qcom/sm8250.dtsi
+index 2f0e460acccdc..e592ddcc0f075 100644
+--- a/arch/arm64/boot/dts/qcom/sm8250.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8250.dtsi
+@@ -1834,8 +1834,8 @@
+ 			#address-cells = <3>;
+ 			#size-cells = <2>;
+ 
+-			ranges = <0x01000000 0x0 0x60200000 0 0x60200000 0x0 0x100000>,
+-				 <0x02000000 0x0 0x60300000 0 0x60300000 0x0 0x3d00000>;
++			ranges = <0x01000000 0x0 0x00000000 0x0 0x60200000 0x0 0x100000>,
++				 <0x02000000 0x0 0x60300000 0x0 0x60300000 0x0 0x3d00000>;
+ 
+ 			interrupts = <GIC_SPI 141 IRQ_TYPE_LEVEL_HIGH>,
+ 				     <GIC_SPI 142 IRQ_TYPE_LEVEL_HIGH>,
+@@ -1943,7 +1943,7 @@
+ 			#address-cells = <3>;
+ 			#size-cells = <2>;
+ 
+-			ranges = <0x01000000 0x0 0x40200000 0x0 0x40200000 0x0 0x100000>,
++			ranges = <0x01000000 0x0 0x00000000 0x0 0x40200000 0x0 0x100000>,
+ 				 <0x02000000 0x0 0x40300000 0x0 0x40300000 0x0 0x1fd00000>;
+ 
+ 			interrupts = <GIC_SPI 307 IRQ_TYPE_LEVEL_HIGH>;
+@@ -2051,7 +2051,7 @@
+ 			#address-cells = <3>;
+ 			#size-cells = <2>;
+ 
+-			ranges = <0x01000000 0x0 0x64200000 0x0 0x64200000 0x0 0x100000>,
++			ranges = <0x01000000 0x0 0x00000000 0x0 0x64200000 0x0 0x100000>,
+ 				 <0x02000000 0x0 0x64300000 0x0 0x64300000 0x0 0x3d00000>;
+ 
+ 			interrupts = <GIC_SPI 243 IRQ_TYPE_LEVEL_HIGH>;
+diff --git a/arch/arm64/boot/dts/qcom/sm8350-microsoft-surface-duo2.dts b/arch/arm64/boot/dts/qcom/sm8350-microsoft-surface-duo2.dts
+index b536ae36ae6de..3bd5e57cbcdaa 100644
+--- a/arch/arm64/boot/dts/qcom/sm8350-microsoft-surface-duo2.dts
++++ b/arch/arm64/boot/dts/qcom/sm8350-microsoft-surface-duo2.dts
+@@ -341,6 +341,9 @@
+ 
+ &usb_1 {
+ 	status = "okay";
++};
++
++&usb_1_dwc3 {
+ 	dr_mode = "peripheral";
+ };
+ 
+diff --git a/arch/arm64/boot/dts/qcom/sm8350.dtsi b/arch/arm64/boot/dts/qcom/sm8350.dtsi
+index 1a5a612d4234b..9cb52d7efdd8d 100644
+--- a/arch/arm64/boot/dts/qcom/sm8350.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8350.dtsi
+@@ -1487,8 +1487,8 @@
+ 			#address-cells = <3>;
+ 			#size-cells = <2>;
+ 
+-			ranges = <0x01000000 0x0 0x60200000 0 0x60200000 0x0 0x100000>,
+-				 <0x02000000 0x0 0x60300000 0 0x60300000 0x0 0x3d00000>;
++			ranges = <0x01000000 0x0 0x00000000 0x0 0x60200000 0x0 0x100000>,
++				 <0x02000000 0x0 0x60300000 0x0 0x60300000 0x0 0x3d00000>;
+ 
+ 			interrupts = <GIC_SPI 141 IRQ_TYPE_LEVEL_HIGH>,
+ 				     <GIC_SPI 142 IRQ_TYPE_LEVEL_HIGH>,
+@@ -1581,8 +1581,8 @@
+ 			#address-cells = <3>;
+ 			#size-cells = <2>;
+ 
+-			ranges = <0x01000000 0x0 0x40200000 0 0x40200000 0x0 0x100000>,
+-				 <0x02000000 0x0 0x40300000 0 0x40300000 0x0 0x1fd00000>;
++			ranges = <0x01000000 0x0 0x00000000 0x0 0x40200000 0x0 0x100000>,
++				 <0x02000000 0x0 0x40300000 0x0 0x40300000 0x0 0x1fd00000>;
+ 
+ 			interrupts = <GIC_SPI 307 IRQ_TYPE_LEVEL_HIGH>;
+ 			interrupt-names = "msi";
+diff --git a/arch/arm64/boot/dts/qcom/sm8450.dtsi b/arch/arm64/boot/dts/qcom/sm8450.dtsi
+index b285b1530c109..243ef642fcef6 100644
+--- a/arch/arm64/boot/dts/qcom/sm8450.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8450.dtsi
+@@ -1746,8 +1746,8 @@
+ 			#address-cells = <3>;
+ 			#size-cells = <2>;
+ 
+-			ranges = <0x01000000 0x0 0x60200000 0 0x60200000 0x0 0x100000>,
+-				 <0x02000000 0x0 0x60300000 0 0x60300000 0x0 0x3d00000>;
++			ranges = <0x01000000 0x0 0x00000000 0x0 0x60200000 0x0 0x100000>,
++				 <0x02000000 0x0 0x60300000 0x0 0x60300000 0x0 0x3d00000>;
+ 
+ 			/*
+ 			 * MSIs for BDF (1:0.0) only works with Device ID 0x5980.
+@@ -1862,8 +1862,8 @@
+ 			#address-cells = <3>;
+ 			#size-cells = <2>;
+ 
+-			ranges = <0x01000000 0x0 0x40200000 0 0x40200000 0x0 0x100000>,
+-				 <0x02000000 0x0 0x40300000 0 0x40300000 0x0 0x1fd00000>;
++			ranges = <0x01000000 0x0 0x00000000 0x0 0x40200000 0x0 0x100000>,
++				 <0x02000000 0x0 0x40300000 0x0 0x40300000 0x0 0x1fd00000>;
+ 
+ 			/*
+ 			 * MSIs for BDF (1:0.0) only works with Device ID 0x5a00.
+@@ -1917,8 +1917,8 @@
+ 			phys = <&pcie1_lane>;
+ 			phy-names = "pciephy";
+ 
+-			perst-gpio = <&tlmm 97 GPIO_ACTIVE_LOW>;
+-			enable-gpio = <&tlmm 99 GPIO_ACTIVE_HIGH>;
++			perst-gpios = <&tlmm 97 GPIO_ACTIVE_LOW>;
++			wake-gpios = <&tlmm 99 GPIO_ACTIVE_HIGH>;
+ 
+ 			pinctrl-names = "default";
+ 			pinctrl-0 = <&pcie1_default_state>;
+diff --git a/arch/arm64/boot/dts/qcom/sm8550-mtp.dts b/arch/arm64/boot/dts/qcom/sm8550-mtp.dts
+index 5db6e789e6b87..44eab9308ff24 100644
+--- a/arch/arm64/boot/dts/qcom/sm8550-mtp.dts
++++ b/arch/arm64/boot/dts/qcom/sm8550-mtp.dts
+@@ -414,18 +414,27 @@
+ &pcie0 {
+ 	wake-gpios = <&tlmm 96 GPIO_ACTIVE_HIGH>;
+ 	perst-gpios = <&tlmm 94 GPIO_ACTIVE_LOW>;
++
++	pinctrl-names = "default";
++	pinctrl-0 = <&pcie0_default_state>;
++
+ 	status = "okay";
+ };
+ 
+ &pcie0_phy {
+ 	vdda-phy-supply = <&vreg_l1e_0p88>;
+ 	vdda-pll-supply = <&vreg_l3e_1p2>;
++
+ 	status = "okay";
+ };
+ 
+ &pcie1 {
+ 	wake-gpios = <&tlmm 99 GPIO_ACTIVE_HIGH>;
+ 	perst-gpios = <&tlmm 97 GPIO_ACTIVE_LOW>;
++
++	pinctrl-names = "default";
++	pinctrl-0 = <&pcie1_default_state>;
++
+ 	status = "okay";
+ };
+ 
+@@ -433,6 +442,7 @@
+ 	vdda-phy-supply = <&vreg_l3c_0p91>;
+ 	vdda-pll-supply = <&vreg_l3e_1p2>;
+ 	vdda-qref-supply = <&vreg_l1e_0p88>;
++
+ 	status = "okay";
+ };
+ 
+diff --git a/arch/arm64/boot/dts/qcom/sm8550.dtsi b/arch/arm64/boot/dts/qcom/sm8550.dtsi
+index 5d0888398b3c3..f05cdb376a175 100644
+--- a/arch/arm64/boot/dts/qcom/sm8550.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8550.dtsi
+@@ -412,7 +412,6 @@
+ 			no-map;
+ 		};
+ 
+-
+ 		hyp_tags_reserved_mem: hyp-tags-reserved-region@811d0000 {
+ 			reg = <0 0x811d0000 0 0x30000>;
+ 			no-map;
+@@ -1653,8 +1652,8 @@
+ 			reg-names = "parf", "dbi", "elbi", "atu", "config";
+ 			#address-cells = <3>;
+ 			#size-cells = <2>;
+-			ranges = <0x01000000 0x0 0x60200000 0 0x60200000 0x0 0x100000>,
+-				 <0x02000000 0x0 0x60300000 0 0x60300000 0x0 0x3d00000>;
++			ranges = <0x01000000 0x0 0x00000000 0x0 0x60200000 0x0 0x100000>,
++				 <0x02000000 0x0 0x60300000 0x0 0x60300000 0x0 0x3d00000>;
+ 			bus-range = <0x00 0xff>;
+ 
+ 			dma-coherent;
+@@ -1672,25 +1671,24 @@
+ 					<0 0 0 3 &intc 0 0 0 151 IRQ_TYPE_LEVEL_HIGH>, /* int_c */
+ 					<0 0 0 4 &intc 0 0 0 152 IRQ_TYPE_LEVEL_HIGH>; /* int_d */
+ 
+-			clocks = <&gcc GCC_PCIE_0_PIPE_CLK>,
+-				 <&gcc GCC_PCIE_0_AUX_CLK>,
++			clocks = <&gcc GCC_PCIE_0_AUX_CLK>,
+ 				 <&gcc GCC_PCIE_0_CFG_AHB_CLK>,
+ 				 <&gcc GCC_PCIE_0_MSTR_AXI_CLK>,
+ 				 <&gcc GCC_PCIE_0_SLV_AXI_CLK>,
+ 				 <&gcc GCC_PCIE_0_SLV_Q2A_AXI_CLK>,
+ 				 <&gcc GCC_DDRSS_PCIE_SF_QTB_CLK>,
+ 				 <&gcc GCC_AGGRE_NOC_PCIE_AXI_CLK>;
+-			clock-names = "pipe",
+-				      "aux",
++			clock-names = "aux",
+ 				      "cfg",
+ 				      "bus_master",
+ 				      "bus_slave",
+ 				      "slave_q2a",
+ 				      "ddrss_sf_tbu",
+-				      "aggre0";
++				      "noc_aggr";
+ 
+-			interconnect-names = "pcie-mem";
+-			interconnects = <&pcie_noc MASTER_PCIE_0 0 &mc_virt SLAVE_EBI1 0>;
++			interconnects = <&pcie_noc MASTER_PCIE_0 0 &mc_virt SLAVE_EBI1 0>,
++					<&gem_noc MASTER_APPSS_PROC 0 &cnoc_main SLAVE_PCIE_0 0>;
++			interconnect-names = "pcie-mem", "cpu-pcie";
+ 
+ 			iommus = <&apps_smmu 0x1400 0x7f>;
+ 			iommu-map = <0x0   &apps_smmu 0x1400 0x1>,
+@@ -1704,12 +1702,6 @@
+ 			phys = <&pcie0_phy>;
+ 			phy-names = "pciephy";
+ 
+-			perst-gpios = <&tlmm 94 GPIO_ACTIVE_LOW>;
+-			wake-gpios = <&tlmm 96 GPIO_ACTIVE_HIGH>;
+-
+-			pinctrl-names = "default";
+-			pinctrl-0 = <&pcie0_default_state>;
+-
+ 			status = "disabled";
+ 		};
+ 
+@@ -1752,8 +1744,8 @@
+ 			reg-names = "parf", "dbi", "elbi", "atu", "config";
+ 			#address-cells = <3>;
+ 			#size-cells = <2>;
+-			ranges = <0x01000000 0x0 0x40200000 0 0x40200000 0x0 0x100000>,
+-				 <0x02000000 0x0 0x40300000 0 0x40300000 0x0 0x1fd00000>;
++			ranges = <0x01000000 0x0 0x00000000 0x0 0x40200000 0x0 0x100000>,
++				 <0x02000000 0x0 0x40300000 0x0 0x40300000 0x0 0x1fd00000>;
+ 			bus-range = <0x00 0xff>;
+ 
+ 			dma-coherent;
+@@ -1771,8 +1763,7 @@
+ 					<0 0 0 3 &intc 0 0 0 438 IRQ_TYPE_LEVEL_HIGH>, /* int_c */
+ 					<0 0 0 4 &intc 0 0 0 439 IRQ_TYPE_LEVEL_HIGH>; /* int_d */
+ 
+-			clocks = <&gcc GCC_PCIE_1_PIPE_CLK>,
+-				 <&gcc GCC_PCIE_1_AUX_CLK>,
++			clocks = <&gcc GCC_PCIE_1_AUX_CLK>,
+ 				 <&gcc GCC_PCIE_1_CFG_AHB_CLK>,
+ 				 <&gcc GCC_PCIE_1_MSTR_AXI_CLK>,
+ 				 <&gcc GCC_PCIE_1_SLV_AXI_CLK>,
+@@ -1780,21 +1771,21 @@
+ 				 <&gcc GCC_DDRSS_PCIE_SF_QTB_CLK>,
+ 				 <&gcc GCC_AGGRE_NOC_PCIE_AXI_CLK>,
+ 				 <&gcc GCC_CNOC_PCIE_SF_AXI_CLK>;
+-			clock-names = "pipe",
+-				      "aux",
++			clock-names = "aux",
+ 				      "cfg",
+ 				      "bus_master",
+ 				      "bus_slave",
+ 				      "slave_q2a",
+ 				      "ddrss_sf_tbu",
+-				      "aggre1",
+-				      "cnoc_pcie_sf_axi";
++				      "noc_aggr",
++				      "cnoc_sf_axi";
+ 
+ 			assigned-clocks = <&gcc GCC_PCIE_1_AUX_CLK>;
+ 			assigned-clock-rates = <19200000>;
+ 
+-			interconnect-names = "pcie-mem";
+-			interconnects = <&pcie_noc MASTER_PCIE_1 0 &mc_virt SLAVE_EBI1 0>;
++			interconnects = <&pcie_noc MASTER_PCIE_1 0 &mc_virt SLAVE_EBI1 0>,
++					<&gem_noc MASTER_APPSS_PROC 0 &cnoc_main SLAVE_PCIE_1 0>;
++			interconnect-names = "pcie-mem", "cpu-pcie";
+ 
+ 			iommus = <&apps_smmu 0x1480 0x7f>;
+ 			iommu-map = <0x0   &apps_smmu 0x1480 0x1>,
+@@ -1802,20 +1793,13 @@
+ 
+ 			resets = <&gcc GCC_PCIE_1_BCR>,
+ 				<&gcc GCC_PCIE_1_LINK_DOWN_BCR>;
+-			reset-names = "pci",
+-				"pcie_1_link_down_reset";
++			reset-names = "pci", "link_down";
+ 
+ 			power-domains = <&gcc PCIE_1_GDSC>;
+ 
+ 			phys = <&pcie1_phy>;
+ 			phy-names = "pciephy";
+ 
+-			perst-gpios = <&tlmm 97 GPIO_ACTIVE_LOW>;
+-			enable-gpios = <&tlmm 99 GPIO_ACTIVE_HIGH>;
+-
+-			pinctrl-names = "default";
+-			pinctrl-0 = <&pcie1_default_state>;
+-
+ 			status = "disabled";
+ 		};
+ 
+@@ -1823,18 +1807,17 @@
+ 			compatible = "qcom,sm8550-qmp-gen4x2-pcie-phy";
+ 			reg = <0x0 0x01c0e000 0x0 0x2000>;
+ 
+-			clocks = <&gcc GCC_PCIE_1_AUX_CLK>,
++			clocks = <&gcc GCC_PCIE_1_PHY_AUX_CLK>,
+ 				 <&gcc GCC_PCIE_1_CFG_AHB_CLK>,
+ 				 <&tcsr TCSR_PCIE_1_CLKREF_EN>,
+ 				 <&gcc GCC_PCIE_1_PHY_RCHNG_CLK>,
+-				 <&gcc GCC_PCIE_1_PIPE_CLK>,
+-				 <&gcc GCC_PCIE_1_PHY_AUX_CLK>;
++				 <&gcc GCC_PCIE_1_PIPE_CLK>;
+ 			clock-names = "aux", "cfg_ahb", "ref", "rchng",
+-				      "pipe", "aux_phy";
++				      "pipe";
+ 
+ 			resets = <&gcc GCC_PCIE_1_PHY_BCR>,
+ 				 <&gcc GCC_PCIE_1_NOCSR_COM_PHY_BCR>;
+-			reset-names = "phy", "nocsr";
++			reset-names = "phy", "phy_nocsr";
+ 
+ 			assigned-clocks = <&gcc GCC_PCIE_1_PHY_RCHNG_CLK>;
+ 			assigned-clock-rates = <100000000>;
+@@ -2211,7 +2194,8 @@
+ 
+ 				assigned-clocks = <&dispcc DISP_CC_MDSS_BYTE0_CLK_SRC>,
+ 						  <&dispcc DISP_CC_MDSS_PCLK0_CLK_SRC>;
+-				assigned-clock-parents = <&mdss_dsi0_phy 0>, <&mdss_dsi0_phy 1>;
++				assigned-clock-parents = <&mdss_dsi0_phy 0>,
++							 <&mdss_dsi0_phy 1>;
+ 
+ 				operating-points-v2 = <&mdss_dsi_opp_table>;
+ 
+@@ -2303,8 +2287,10 @@
+ 
+ 				power-domains = <&rpmhpd SM8550_MMCX>;
+ 
+-				assigned-clocks = <&dispcc DISP_CC_MDSS_BYTE1_CLK_SRC>, <&dispcc DISP_CC_MDSS_PCLK1_CLK_SRC>;
+-				assigned-clock-parents = <&mdss_dsi1_phy 0>, <&mdss_dsi1_phy 1>;
++				assigned-clocks = <&dispcc DISP_CC_MDSS_BYTE1_CLK_SRC>,
++						  <&dispcc DISP_CC_MDSS_PCLK1_CLK_SRC>;
++				assigned-clock-parents = <&mdss_dsi1_phy 0>,
++							 <&mdss_dsi1_phy 1>;
+ 
+ 				operating-points-v2 = <&mdss_dsi_opp_table>;
+ 
+@@ -2808,10 +2794,10 @@
+ 			};
+ 
+ 			qup_spi0_cs: qup-spi0-cs-state {
+-				cs-pins {
+-					pins = "gpio31";
+-					function = "qup1_se0";
+-				};
++				pins = "gpio31";
++				function = "qup1_se0";
++				drive-strength = <6>;
++				bias-disable;
+ 			};
+ 
+ 			qup_spi0_data_clk: qup-spi0-data-clk-state {
+@@ -3172,7 +3158,7 @@
+ 
+ 		intc: interrupt-controller@17100000 {
+ 			compatible = "arm,gic-v3";
+-			reg = <0 0x17100000 0 0x10000>,	/* GICD */
++			reg = <0 0x17100000 0 0x10000>,		/* GICD */
+ 			      <0 0x17180000 0 0x200000>;	/* GICR * 8 */
+ 			ranges;
+ 			#interrupt-cells = <3>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a774c0.dtsi b/arch/arm64/boot/dts/renesas/r8a774c0.dtsi
+index e21653d862282..10abfde329d00 100644
+--- a/arch/arm64/boot/dts/renesas/r8a774c0.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a774c0.dtsi
+@@ -49,17 +49,14 @@
+ 		opp-shared;
+ 		opp-800000000 {
+ 			opp-hz = /bits/ 64 <800000000>;
+-			opp-microvolt = <820000>;
+ 			clock-latency-ns = <300000>;
+ 		};
+ 		opp-1000000000 {
+ 			opp-hz = /bits/ 64 <1000000000>;
+-			opp-microvolt = <820000>;
+ 			clock-latency-ns = <300000>;
+ 		};
+ 		opp-1200000000 {
+ 			opp-hz = /bits/ 64 <1200000000>;
+-			opp-microvolt = <820000>;
+ 			clock-latency-ns = <300000>;
+ 			opp-suspend;
+ 		};
+diff --git a/arch/arm64/boot/dts/renesas/r8a77990.dtsi b/arch/arm64/boot/dts/renesas/r8a77990.dtsi
+index d4718f144e33c..4529e9b57c331 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77990.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a77990.dtsi
+@@ -49,17 +49,14 @@
+ 		opp-shared;
+ 		opp-800000000 {
+ 			opp-hz = /bits/ 64 <800000000>;
+-			opp-microvolt = <820000>;
+ 			clock-latency-ns = <300000>;
+ 		};
+ 		opp-1000000000 {
+ 			opp-hz = /bits/ 64 <1000000000>;
+-			opp-microvolt = <820000>;
+ 			clock-latency-ns = <300000>;
+ 		};
+ 		opp-1200000000 {
+ 			opp-hz = /bits/ 64 <1200000000>;
+-			opp-microvolt = <820000>;
+ 			clock-latency-ns = <300000>;
+ 			opp-suspend;
+ 		};
+diff --git a/arch/arm64/boot/dts/renesas/r9a07g043.dtsi b/arch/arm64/boot/dts/renesas/r9a07g043.dtsi
+index c8a83e42c4f3a..a9700654b4218 100644
+--- a/arch/arm64/boot/dts/renesas/r9a07g043.dtsi
++++ b/arch/arm64/boot/dts/renesas/r9a07g043.dtsi
+@@ -80,9 +80,8 @@
+ 			reg = <0 0x10049c00 0 0x400>;
+ 			interrupts = <SOC_PERIPHERAL_IRQ(326) IRQ_TYPE_LEVEL_HIGH>,
+ 				     <SOC_PERIPHERAL_IRQ(327) IRQ_TYPE_EDGE_RISING>,
+-				     <SOC_PERIPHERAL_IRQ(328) IRQ_TYPE_EDGE_RISING>,
+-				     <SOC_PERIPHERAL_IRQ(329) IRQ_TYPE_EDGE_RISING>;
+-			interrupt-names = "int_req", "dma_rx", "dma_tx", "dma_rt";
++				     <SOC_PERIPHERAL_IRQ(328) IRQ_TYPE_EDGE_RISING>;
++			interrupt-names = "int_req", "dma_rx", "dma_tx";
+ 			clocks = <&cpg CPG_MOD R9A07G043_SSI0_PCLK2>,
+ 				 <&cpg CPG_MOD R9A07G043_SSI0_PCLK_SFR>,
+ 				 <&audio_clk1>, <&audio_clk2>;
+@@ -101,9 +100,8 @@
+ 			reg = <0 0x1004a000 0 0x400>;
+ 			interrupts = <SOC_PERIPHERAL_IRQ(330) IRQ_TYPE_LEVEL_HIGH>,
+ 				     <SOC_PERIPHERAL_IRQ(331) IRQ_TYPE_EDGE_RISING>,
+-				     <SOC_PERIPHERAL_IRQ(332) IRQ_TYPE_EDGE_RISING>,
+-				     <SOC_PERIPHERAL_IRQ(333) IRQ_TYPE_EDGE_RISING>;
+-			interrupt-names = "int_req", "dma_rx", "dma_tx", "dma_rt";
++				     <SOC_PERIPHERAL_IRQ(332) IRQ_TYPE_EDGE_RISING>;
++			interrupt-names = "int_req", "dma_rx", "dma_tx";
+ 			clocks = <&cpg CPG_MOD R9A07G043_SSI1_PCLK2>,
+ 				 <&cpg CPG_MOD R9A07G043_SSI1_PCLK_SFR>,
+ 				 <&audio_clk1>, <&audio_clk2>;
+@@ -121,10 +119,8 @@
+ 				     "renesas,rz-ssi";
+ 			reg = <0 0x1004a400 0 0x400>;
+ 			interrupts = <SOC_PERIPHERAL_IRQ(334) IRQ_TYPE_LEVEL_HIGH>,
+-				     <SOC_PERIPHERAL_IRQ(335) IRQ_TYPE_EDGE_RISING>,
+-				     <SOC_PERIPHERAL_IRQ(336) IRQ_TYPE_EDGE_RISING>,
+ 				     <SOC_PERIPHERAL_IRQ(337) IRQ_TYPE_EDGE_RISING>;
+-			interrupt-names = "int_req", "dma_rx", "dma_tx", "dma_rt";
++			interrupt-names = "int_req", "dma_rt";
+ 			clocks = <&cpg CPG_MOD R9A07G043_SSI2_PCLK2>,
+ 				 <&cpg CPG_MOD R9A07G043_SSI2_PCLK_SFR>,
+ 				 <&audio_clk1>, <&audio_clk2>;
+@@ -143,9 +139,8 @@
+ 			reg = <0 0x1004a800 0 0x400>;
+ 			interrupts = <SOC_PERIPHERAL_IRQ(338) IRQ_TYPE_LEVEL_HIGH>,
+ 				     <SOC_PERIPHERAL_IRQ(339) IRQ_TYPE_EDGE_RISING>,
+-				     <SOC_PERIPHERAL_IRQ(340) IRQ_TYPE_EDGE_RISING>,
+-				     <SOC_PERIPHERAL_IRQ(341) IRQ_TYPE_EDGE_RISING>;
+-			interrupt-names = "int_req", "dma_rx", "dma_tx", "dma_rt";
++				     <SOC_PERIPHERAL_IRQ(340) IRQ_TYPE_EDGE_RISING>;
++			interrupt-names = "int_req", "dma_rx", "dma_tx";
+ 			clocks = <&cpg CPG_MOD R9A07G043_SSI3_PCLK2>,
+ 				 <&cpg CPG_MOD R9A07G043_SSI3_PCLK_SFR>,
+ 				 <&audio_clk1>, <&audio_clk2>;
+diff --git a/arch/arm64/boot/dts/renesas/r9a07g044.dtsi b/arch/arm64/boot/dts/renesas/r9a07g044.dtsi
+index 487536696d900..6a42df15440cf 100644
+--- a/arch/arm64/boot/dts/renesas/r9a07g044.dtsi
++++ b/arch/arm64/boot/dts/renesas/r9a07g044.dtsi
+@@ -175,9 +175,8 @@
+ 			reg = <0 0x10049c00 0 0x400>;
+ 			interrupts = <GIC_SPI 326 IRQ_TYPE_LEVEL_HIGH>,
+ 				     <GIC_SPI 327 IRQ_TYPE_EDGE_RISING>,
+-				     <GIC_SPI 328 IRQ_TYPE_EDGE_RISING>,
+-				     <GIC_SPI 329 IRQ_TYPE_EDGE_RISING>;
+-			interrupt-names = "int_req", "dma_rx", "dma_tx", "dma_rt";
++				     <GIC_SPI 328 IRQ_TYPE_EDGE_RISING>;
++			interrupt-names = "int_req", "dma_rx", "dma_tx";
+ 			clocks = <&cpg CPG_MOD R9A07G044_SSI0_PCLK2>,
+ 				 <&cpg CPG_MOD R9A07G044_SSI0_PCLK_SFR>,
+ 				 <&audio_clk1>, <&audio_clk2>;
+@@ -196,9 +195,8 @@
+ 			reg = <0 0x1004a000 0 0x400>;
+ 			interrupts = <GIC_SPI 330 IRQ_TYPE_LEVEL_HIGH>,
+ 				     <GIC_SPI 331 IRQ_TYPE_EDGE_RISING>,
+-				     <GIC_SPI 332 IRQ_TYPE_EDGE_RISING>,
+-				     <GIC_SPI 333 IRQ_TYPE_EDGE_RISING>;
+-			interrupt-names = "int_req", "dma_rx", "dma_tx", "dma_rt";
++				     <GIC_SPI 332 IRQ_TYPE_EDGE_RISING>;
++			interrupt-names = "int_req", "dma_rx", "dma_tx";
+ 			clocks = <&cpg CPG_MOD R9A07G044_SSI1_PCLK2>,
+ 				 <&cpg CPG_MOD R9A07G044_SSI1_PCLK_SFR>,
+ 				 <&audio_clk1>, <&audio_clk2>;
+@@ -216,10 +214,8 @@
+ 				     "renesas,rz-ssi";
+ 			reg = <0 0x1004a400 0 0x400>;
+ 			interrupts = <GIC_SPI 334 IRQ_TYPE_LEVEL_HIGH>,
+-				     <GIC_SPI 335 IRQ_TYPE_EDGE_RISING>,
+-				     <GIC_SPI 336 IRQ_TYPE_EDGE_RISING>,
+ 				     <GIC_SPI 337 IRQ_TYPE_EDGE_RISING>;
+-			interrupt-names = "int_req", "dma_rx", "dma_tx", "dma_rt";
++			interrupt-names = "int_req", "dma_rt";
+ 			clocks = <&cpg CPG_MOD R9A07G044_SSI2_PCLK2>,
+ 				 <&cpg CPG_MOD R9A07G044_SSI2_PCLK_SFR>,
+ 				 <&audio_clk1>, <&audio_clk2>;
+@@ -238,9 +234,8 @@
+ 			reg = <0 0x1004a800 0 0x400>;
+ 			interrupts = <GIC_SPI 338 IRQ_TYPE_LEVEL_HIGH>,
+ 				     <GIC_SPI 339 IRQ_TYPE_EDGE_RISING>,
+-				     <GIC_SPI 340 IRQ_TYPE_EDGE_RISING>,
+-				     <GIC_SPI 341 IRQ_TYPE_EDGE_RISING>;
+-			interrupt-names = "int_req", "dma_rx", "dma_tx", "dma_rt";
++				     <GIC_SPI 340 IRQ_TYPE_EDGE_RISING>;
++			interrupt-names = "int_req", "dma_rx", "dma_tx";
+ 			clocks = <&cpg CPG_MOD R9A07G044_SSI3_PCLK2>,
+ 				 <&cpg CPG_MOD R9A07G044_SSI3_PCLK_SFR>,
+ 				 <&audio_clk1>, <&audio_clk2>;
+diff --git a/arch/arm64/boot/dts/renesas/r9a07g054.dtsi b/arch/arm64/boot/dts/renesas/r9a07g054.dtsi
+index 304ade54425bf..fea537d9fce66 100644
+--- a/arch/arm64/boot/dts/renesas/r9a07g054.dtsi
++++ b/arch/arm64/boot/dts/renesas/r9a07g054.dtsi
+@@ -175,9 +175,8 @@
+ 			reg = <0 0x10049c00 0 0x400>;
+ 			interrupts = <GIC_SPI 326 IRQ_TYPE_LEVEL_HIGH>,
+ 				     <GIC_SPI 327 IRQ_TYPE_EDGE_RISING>,
+-				     <GIC_SPI 328 IRQ_TYPE_EDGE_RISING>,
+-				     <GIC_SPI 329 IRQ_TYPE_EDGE_RISING>;
+-			interrupt-names = "int_req", "dma_rx", "dma_tx", "dma_rt";
++				     <GIC_SPI 328 IRQ_TYPE_EDGE_RISING>;
++			interrupt-names = "int_req", "dma_rx", "dma_tx";
+ 			clocks = <&cpg CPG_MOD R9A07G054_SSI0_PCLK2>,
+ 				 <&cpg CPG_MOD R9A07G054_SSI0_PCLK_SFR>,
+ 				 <&audio_clk1>, <&audio_clk2>;
+@@ -196,9 +195,8 @@
+ 			reg = <0 0x1004a000 0 0x400>;
+ 			interrupts = <GIC_SPI 330 IRQ_TYPE_LEVEL_HIGH>,
+ 				     <GIC_SPI 331 IRQ_TYPE_EDGE_RISING>,
+-				     <GIC_SPI 332 IRQ_TYPE_EDGE_RISING>,
+-				     <GIC_SPI 333 IRQ_TYPE_EDGE_RISING>;
+-			interrupt-names = "int_req", "dma_rx", "dma_tx", "dma_rt";
++				     <GIC_SPI 332 IRQ_TYPE_EDGE_RISING>;
++			interrupt-names = "int_req", "dma_rx", "dma_tx";
+ 			clocks = <&cpg CPG_MOD R9A07G054_SSI1_PCLK2>,
+ 				 <&cpg CPG_MOD R9A07G054_SSI1_PCLK_SFR>,
+ 				 <&audio_clk1>, <&audio_clk2>;
+@@ -216,10 +214,8 @@
+ 				     "renesas,rz-ssi";
+ 			reg = <0 0x1004a400 0 0x400>;
+ 			interrupts = <GIC_SPI 334 IRQ_TYPE_LEVEL_HIGH>,
+-				     <GIC_SPI 335 IRQ_TYPE_EDGE_RISING>,
+-				     <GIC_SPI 336 IRQ_TYPE_EDGE_RISING>,
+ 				     <GIC_SPI 337 IRQ_TYPE_EDGE_RISING>;
+-			interrupt-names = "int_req", "dma_rx", "dma_tx", "dma_rt";
++			interrupt-names = "int_req", "dma_rt";
+ 			clocks = <&cpg CPG_MOD R9A07G054_SSI2_PCLK2>,
+ 				 <&cpg CPG_MOD R9A07G054_SSI2_PCLK_SFR>,
+ 				 <&audio_clk1>, <&audio_clk2>;
+@@ -238,9 +234,8 @@
+ 			reg = <0 0x1004a800 0 0x400>;
+ 			interrupts = <GIC_SPI 338 IRQ_TYPE_LEVEL_HIGH>,
+ 				     <GIC_SPI 339 IRQ_TYPE_EDGE_RISING>,
+-				     <GIC_SPI 340 IRQ_TYPE_EDGE_RISING>,
+-				     <GIC_SPI 341 IRQ_TYPE_EDGE_RISING>;
+-			interrupt-names = "int_req", "dma_rx", "dma_tx", "dma_rt";
++				     <GIC_SPI 340 IRQ_TYPE_EDGE_RISING>;
++			interrupt-names = "int_req", "dma_rx", "dma_tx";
+ 			clocks = <&cpg CPG_MOD R9A07G054_SSI3_PCLK2>,
+ 				 <&cpg CPG_MOD R9A07G054_SSI3_PCLK_SFR>,
+ 				 <&audio_clk1>, <&audio_clk2>;
+diff --git a/arch/arm64/boot/dts/rockchip/rk3588s.dtsi b/arch/arm64/boot/dts/rockchip/rk3588s.dtsi
+index a506948b5572b..f4eae4dde1751 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3588s.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3588s.dtsi
+@@ -423,7 +423,7 @@
+ 			<&cru ACLK_BUS_ROOT>, <&cru CLK_150M_SRC>,
+ 			<&cru CLK_GPU>;
+ 		assigned-clock-rates =
+-			<100000000>, <786432000>,
++			<1100000000>, <786432000>,
+ 			<850000000>, <1188000000>,
+ 			<702000000>,
+ 			<400000000>, <500000000>,
+diff --git a/arch/arm64/boot/dts/ti/k3-am62-main.dtsi b/arch/arm64/boot/dts/ti/k3-am62-main.dtsi
+index ea683fd77d6a5..a143ea5e78a52 100644
+--- a/arch/arm64/boot/dts/ti/k3-am62-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am62-main.dtsi
+@@ -461,7 +461,7 @@
+ 			     <193>, <194>, <195>;
+ 		interrupt-controller;
+ 		#interrupt-cells = <2>;
+-		ti,ngpio = <87>;
++		ti,ngpio = <92>;
+ 		ti,davinci-gpio-unbanked = <0>;
+ 		power-domains = <&k3_pds 77 TI_SCI_PD_EXCLUSIVE>;
+ 		clocks = <&k3_clks 77 0>;
+@@ -478,7 +478,7 @@
+ 			     <183>, <184>, <185>;
+ 		interrupt-controller;
+ 		#interrupt-cells = <2>;
+-		ti,ngpio = <88>;
++		ti,ngpio = <52>;
+ 		ti,davinci-gpio-unbanked = <0>;
+ 		power-domains = <&k3_pds 78 TI_SCI_PD_EXCLUSIVE>;
+ 		clocks = <&k3_clks 78 0>;
+diff --git a/arch/arm64/boot/dts/ti/k3-am625-sk.dts b/arch/arm64/boot/dts/ti/k3-am625-sk.dts
+index 6bc7d63cf52fe..4d5dec890ad66 100644
+--- a/arch/arm64/boot/dts/ti/k3-am625-sk.dts
++++ b/arch/arm64/boot/dts/ti/k3-am625-sk.dts
+@@ -480,6 +480,7 @@
+ 
+ &usbss1 {
+ 	status = "okay";
++	ti,vbus-divider;
+ };
+ 
+ &usb0 {
+diff --git a/arch/arm64/boot/dts/ti/k3-am625.dtsi b/arch/arm64/boot/dts/ti/k3-am625.dtsi
+index acc7f8ab64261..4193c2b3eed60 100644
+--- a/arch/arm64/boot/dts/ti/k3-am625.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am625.dtsi
+@@ -148,7 +148,7 @@
+ 		compatible = "cache";
+ 		cache-unified;
+ 		cache-level = <2>;
+-		cache-size = <0x40000>;
++		cache-size = <0x80000>;
+ 		cache-line-size = <64>;
+ 		cache-sets = <512>;
+ 	};
+diff --git a/arch/arm64/boot/dts/ti/k3-am62a7-sk.dts b/arch/arm64/boot/dts/ti/k3-am62a7-sk.dts
+index 5c9012141ee23..f6a67f072dca6 100644
+--- a/arch/arm64/boot/dts/ti/k3-am62a7-sk.dts
++++ b/arch/arm64/boot/dts/ti/k3-am62a7-sk.dts
+@@ -27,8 +27,9 @@
+ 
+ 	memory@80000000 {
+ 		device_type = "memory";
+-		/* 2G RAM */
+-		reg = <0x00000000 0x80000000 0x00000000 0x80000000>;
++		/* 4G RAM */
++		reg = <0x00000000 0x80000000 0x00000000 0x80000000>,
++		      <0x00000008 0x80000000 0x00000000 0x80000000>;
+ 	};
+ 
+ 	reserved-memory {
+diff --git a/arch/arm64/boot/dts/ti/k3-am62a7.dtsi b/arch/arm64/boot/dts/ti/k3-am62a7.dtsi
+index 9734549851c06..58f1c43edcf8f 100644
+--- a/arch/arm64/boot/dts/ti/k3-am62a7.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am62a7.dtsi
+@@ -97,7 +97,7 @@
+ 		compatible = "cache";
+ 		cache-unified;
+ 		cache-level = <2>;
+-		cache-size = <0x40000>;
++		cache-size = <0x80000>;
+ 		cache-line-size = <64>;
+ 		cache-sets = <512>;
+ 	};
+diff --git a/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi b/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi
+index c935622f01028..bfa296dce3a31 100644
+--- a/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi
+@@ -1180,7 +1180,6 @@
+ 		ti,itap-del-sel-mmc-hs = <0xa>;
+ 		ti,itap-del-sel-ddr52 = <0x3>;
+ 		ti,trm-icp = <0x8>;
+-		ti,strobe-sel = <0x77>;
+ 		dma-coherent;
+ 	};
+ 
+diff --git a/arch/arm64/boot/dts/ti/k3-j784s4-main.dtsi b/arch/arm64/boot/dts/ti/k3-j784s4-main.dtsi
+index 7edf324ac159b..80a1b08c51a84 100644
+--- a/arch/arm64/boot/dts/ti/k3-j784s4-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j784s4-main.dtsi
+@@ -398,6 +398,7 @@
+ 		#address-cells = <2>;
+ 		#size-cells = <2>;
+ 		ranges = <0x00 0x30000000 0x00 0x30000000 0x00 0x0c400000>;
++		ti,sci-dev-id = <280>;
+ 		dma-coherent;
+ 		dma-ranges;
+ 
+diff --git a/arch/arm64/boot/dts/ti/k3-j784s4-mcu-wakeup.dtsi b/arch/arm64/boot/dts/ti/k3-j784s4-mcu-wakeup.dtsi
+index 93952af618f65..64bd3dee14aa6 100644
+--- a/arch/arm64/boot/dts/ti/k3-j784s4-mcu-wakeup.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j784s4-mcu-wakeup.dtsi
+@@ -209,6 +209,7 @@
+ 		#address-cells = <2>;
+ 		#size-cells = <2>;
+ 		ranges = <0x00 0x28380000 0x00 0x28380000 0x00 0x03880000>;
++		ti,sci-dev-id = <323>;
+ 		dma-coherent;
+ 		dma-ranges;
+ 
+diff --git a/arch/arm64/crypto/aes-neonbs-core.S b/arch/arm64/crypto/aes-neonbs-core.S
+index 7278a37c2d5cd..baf450717b24b 100644
+--- a/arch/arm64/crypto/aes-neonbs-core.S
++++ b/arch/arm64/crypto/aes-neonbs-core.S
+@@ -15,6 +15,7 @@
+  */
+ 
+ #include <linux/linkage.h>
++#include <linux/cfi_types.h>
+ #include <asm/assembler.h>
+ 
+ 	.text
+@@ -620,12 +621,12 @@ SYM_FUNC_END(aesbs_decrypt8)
+ 	.endm
+ 
+ 	.align		4
+-SYM_FUNC_START(aesbs_ecb_encrypt)
++SYM_TYPED_FUNC_START(aesbs_ecb_encrypt)
+ 	__ecb_crypt	aesbs_encrypt8, v0, v1, v4, v6, v3, v7, v2, v5
+ SYM_FUNC_END(aesbs_ecb_encrypt)
+ 
+ 	.align		4
+-SYM_FUNC_START(aesbs_ecb_decrypt)
++SYM_TYPED_FUNC_START(aesbs_ecb_decrypt)
+ 	__ecb_crypt	aesbs_decrypt8, v0, v1, v6, v4, v2, v7, v3, v5
+ SYM_FUNC_END(aesbs_ecb_decrypt)
+ 
+@@ -799,11 +800,11 @@ SYM_FUNC_END(__xts_crypt8)
+ 	ret
+ 	.endm
+ 
+-SYM_FUNC_START(aesbs_xts_encrypt)
++SYM_TYPED_FUNC_START(aesbs_xts_encrypt)
+ 	__xts_crypt	aesbs_encrypt8, v0, v1, v4, v6, v3, v7, v2, v5
+ SYM_FUNC_END(aesbs_xts_encrypt)
+ 
+-SYM_FUNC_START(aesbs_xts_decrypt)
++SYM_TYPED_FUNC_START(aesbs_xts_decrypt)
+ 	__xts_crypt	aesbs_decrypt8, v0, v1, v6, v4, v2, v7, v3, v5
+ SYM_FUNC_END(aesbs_xts_decrypt)
+ 
+diff --git a/arch/arm64/include/asm/debug-monitors.h b/arch/arm64/include/asm/debug-monitors.h
+index 7b7e05c02691c..13d437bcbf58c 100644
+--- a/arch/arm64/include/asm/debug-monitors.h
++++ b/arch/arm64/include/asm/debug-monitors.h
+@@ -104,6 +104,7 @@ void user_regs_reset_single_step(struct user_pt_regs *regs,
+ void kernel_enable_single_step(struct pt_regs *regs);
+ void kernel_disable_single_step(void);
+ int kernel_active_single_step(void);
++void kernel_rewind_single_step(struct pt_regs *regs);
+ 
+ #ifdef CONFIG_HAVE_HW_BREAKPOINT
+ int reinstall_suspended_bps(struct pt_regs *regs);
+diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
+index 3dd691c85ca0d..77d941ad3aed9 100644
+--- a/arch/arm64/include/asm/kvm_host.h
++++ b/arch/arm64/include/asm/kvm_host.h
+@@ -199,6 +199,9 @@ struct kvm_arch {
+ 	/* Mandated version of PSCI */
+ 	u32 psci_version;
+ 
++	/* Protects VM-scoped configuration data */
++	struct mutex config_lock;
++
+ 	/*
+ 	 * If we encounter a data abort without valid instruction syndrome
+ 	 * information, report this to user space.  User space can (and
+@@ -522,6 +525,7 @@ struct kvm_vcpu_arch {
+ 
+ 	/* vcpu power state */
+ 	struct kvm_mp_state mp_state;
++	spinlock_t mp_state_lock;
+ 
+ 	/* Cache some mmu pages needed inside spinlock regions */
+ 	struct kvm_mmu_memory_cache mmu_page_cache;
+diff --git a/arch/arm64/kernel/debug-monitors.c b/arch/arm64/kernel/debug-monitors.c
+index 3da09778267ec..64f2ecbdfe5c2 100644
+--- a/arch/arm64/kernel/debug-monitors.c
++++ b/arch/arm64/kernel/debug-monitors.c
+@@ -438,6 +438,11 @@ int kernel_active_single_step(void)
+ }
+ NOKPROBE_SYMBOL(kernel_active_single_step);
+ 
++void kernel_rewind_single_step(struct pt_regs *regs)
++{
++	set_regs_spsr_ss(regs);
++}
++
+ /* ptrace API */
+ void user_enable_single_step(struct task_struct *task)
+ {
+diff --git a/arch/arm64/kernel/kgdb.c b/arch/arm64/kernel/kgdb.c
+index cda9c1e9864f7..4e1f983df3d1c 100644
+--- a/arch/arm64/kernel/kgdb.c
++++ b/arch/arm64/kernel/kgdb.c
+@@ -224,6 +224,8 @@ int kgdb_arch_handle_exception(int exception_vector, int signo,
+ 		 */
+ 		if (!kernel_active_single_step())
+ 			kernel_enable_single_step(linux_regs);
++		else
++			kernel_rewind_single_step(linux_regs);
+ 		err = 0;
+ 		break;
+ 	default:
+diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
+index 4b2e16e696a80..fbafcbbcc4630 100644
+--- a/arch/arm64/kvm/arm.c
++++ b/arch/arm64/kvm/arm.c
+@@ -128,6 +128,16 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
+ {
+ 	int ret;
+ 
++	mutex_init(&kvm->arch.config_lock);
++
++#ifdef CONFIG_LOCKDEP
++	/* Clue in lockdep that the config_lock must be taken inside kvm->lock */
++	mutex_lock(&kvm->lock);
++	mutex_lock(&kvm->arch.config_lock);
++	mutex_unlock(&kvm->arch.config_lock);
++	mutex_unlock(&kvm->lock);
++#endif
++
+ 	ret = kvm_share_hyp(kvm, kvm + 1);
+ 	if (ret)
+ 		return ret;
+@@ -327,6 +337,16 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
+ {
+ 	int err;
+ 
++	spin_lock_init(&vcpu->arch.mp_state_lock);
++
++#ifdef CONFIG_LOCKDEP
++	/* Inform lockdep that the config_lock is acquired after vcpu->mutex */
++	mutex_lock(&vcpu->mutex);
++	mutex_lock(&vcpu->kvm->arch.config_lock);
++	mutex_unlock(&vcpu->kvm->arch.config_lock);
++	mutex_unlock(&vcpu->mutex);
++#endif
++
+ 	/* Force users to call KVM_ARM_VCPU_INIT */
+ 	vcpu->arch.target = -1;
+ 	bitmap_zero(vcpu->arch.features, KVM_VCPU_MAX_FEATURES);
+@@ -444,34 +464,41 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
+ 	vcpu->cpu = -1;
+ }
+ 
+-void kvm_arm_vcpu_power_off(struct kvm_vcpu *vcpu)
++static void __kvm_arm_vcpu_power_off(struct kvm_vcpu *vcpu)
+ {
+-	vcpu->arch.mp_state.mp_state = KVM_MP_STATE_STOPPED;
++	WRITE_ONCE(vcpu->arch.mp_state.mp_state, KVM_MP_STATE_STOPPED);
+ 	kvm_make_request(KVM_REQ_SLEEP, vcpu);
+ 	kvm_vcpu_kick(vcpu);
+ }
+ 
++void kvm_arm_vcpu_power_off(struct kvm_vcpu *vcpu)
++{
++	spin_lock(&vcpu->arch.mp_state_lock);
++	__kvm_arm_vcpu_power_off(vcpu);
++	spin_unlock(&vcpu->arch.mp_state_lock);
++}
++
+ bool kvm_arm_vcpu_stopped(struct kvm_vcpu *vcpu)
+ {
+-	return vcpu->arch.mp_state.mp_state == KVM_MP_STATE_STOPPED;
++	return READ_ONCE(vcpu->arch.mp_state.mp_state) == KVM_MP_STATE_STOPPED;
+ }
+ 
+ static void kvm_arm_vcpu_suspend(struct kvm_vcpu *vcpu)
+ {
+-	vcpu->arch.mp_state.mp_state = KVM_MP_STATE_SUSPENDED;
++	WRITE_ONCE(vcpu->arch.mp_state.mp_state, KVM_MP_STATE_SUSPENDED);
+ 	kvm_make_request(KVM_REQ_SUSPEND, vcpu);
+ 	kvm_vcpu_kick(vcpu);
+ }
+ 
+ static bool kvm_arm_vcpu_suspended(struct kvm_vcpu *vcpu)
+ {
+-	return vcpu->arch.mp_state.mp_state == KVM_MP_STATE_SUSPENDED;
++	return READ_ONCE(vcpu->arch.mp_state.mp_state) == KVM_MP_STATE_SUSPENDED;
+ }
+ 
+ int kvm_arch_vcpu_ioctl_get_mpstate(struct kvm_vcpu *vcpu,
+ 				    struct kvm_mp_state *mp_state)
+ {
+-	*mp_state = vcpu->arch.mp_state;
++	*mp_state = READ_ONCE(vcpu->arch.mp_state);
+ 
+ 	return 0;
+ }
+@@ -481,12 +508,14 @@ int kvm_arch_vcpu_ioctl_set_mpstate(struct kvm_vcpu *vcpu,
+ {
+ 	int ret = 0;
+ 
++	spin_lock(&vcpu->arch.mp_state_lock);
++
+ 	switch (mp_state->mp_state) {
+ 	case KVM_MP_STATE_RUNNABLE:
+-		vcpu->arch.mp_state = *mp_state;
++		WRITE_ONCE(vcpu->arch.mp_state, *mp_state);
+ 		break;
+ 	case KVM_MP_STATE_STOPPED:
+-		kvm_arm_vcpu_power_off(vcpu);
++		__kvm_arm_vcpu_power_off(vcpu);
+ 		break;
+ 	case KVM_MP_STATE_SUSPENDED:
+ 		kvm_arm_vcpu_suspend(vcpu);
+@@ -495,6 +524,8 @@ int kvm_arch_vcpu_ioctl_set_mpstate(struct kvm_vcpu *vcpu,
+ 		ret = -EINVAL;
+ 	}
+ 
++	spin_unlock(&vcpu->arch.mp_state_lock);
++
+ 	return ret;
+ }
+ 
+@@ -594,9 +625,9 @@ int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu)
+ 	if (kvm_vm_is_protected(kvm))
+ 		kvm_call_hyp_nvhe(__pkvm_vcpu_init_traps, vcpu);
+ 
+-	mutex_lock(&kvm->lock);
++	mutex_lock(&kvm->arch.config_lock);
+ 	set_bit(KVM_ARCH_FLAG_HAS_RAN_ONCE, &kvm->arch.flags);
+-	mutex_unlock(&kvm->lock);
++	mutex_unlock(&kvm->arch.config_lock);
+ 
+ 	return ret;
+ }
+@@ -1214,7 +1245,7 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
+ 	if (test_bit(KVM_ARM_VCPU_POWER_OFF, vcpu->arch.features))
+ 		kvm_arm_vcpu_power_off(vcpu);
+ 	else
+-		vcpu->arch.mp_state.mp_state = KVM_MP_STATE_RUNNABLE;
++		WRITE_ONCE(vcpu->arch.mp_state.mp_state, KVM_MP_STATE_RUNNABLE);
+ 
+ 	return 0;
+ }
+diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
+index 07444fa228888..481c79cf22cd2 100644
+--- a/arch/arm64/kvm/guest.c
++++ b/arch/arm64/kvm/guest.c
+@@ -957,7 +957,9 @@ int kvm_arm_vcpu_arch_set_attr(struct kvm_vcpu *vcpu,
+ 
+ 	switch (attr->group) {
+ 	case KVM_ARM_VCPU_PMU_V3_CTRL:
++		mutex_lock(&vcpu->kvm->arch.config_lock);
+ 		ret = kvm_arm_pmu_v3_set_attr(vcpu, attr);
++		mutex_unlock(&vcpu->kvm->arch.config_lock);
+ 		break;
+ 	case KVM_ARM_VCPU_TIMER_CTRL:
+ 		ret = kvm_arm_timer_set_attr(vcpu, attr);
+diff --git a/arch/arm64/kvm/hypercalls.c b/arch/arm64/kvm/hypercalls.c
+index c4b4678bc4a45..345998e40250b 100644
+--- a/arch/arm64/kvm/hypercalls.c
++++ b/arch/arm64/kvm/hypercalls.c
+@@ -377,7 +377,7 @@ static int kvm_arm_set_fw_reg_bmap(struct kvm_vcpu *vcpu, u64 reg_id, u64 val)
+ 	if (val & ~fw_reg_features)
+ 		return -EINVAL;
+ 
+-	mutex_lock(&kvm->lock);
++	mutex_lock(&kvm->arch.config_lock);
+ 
+ 	if (test_bit(KVM_ARCH_FLAG_HAS_RAN_ONCE, &kvm->arch.flags) &&
+ 	    val != *fw_reg_bmap) {
+@@ -387,7 +387,7 @@ static int kvm_arm_set_fw_reg_bmap(struct kvm_vcpu *vcpu, u64 reg_id, u64 val)
+ 
+ 	WRITE_ONCE(*fw_reg_bmap, val);
+ out:
+-	mutex_unlock(&kvm->lock);
++	mutex_unlock(&kvm->arch.config_lock);
+ 	return ret;
+ }
+ 
+diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
+index 5eca0cdd961df..99e990a472a57 100644
+--- a/arch/arm64/kvm/pmu-emul.c
++++ b/arch/arm64/kvm/pmu-emul.c
+@@ -876,7 +876,7 @@ static int kvm_arm_pmu_v3_set_pmu(struct kvm_vcpu *vcpu, int pmu_id)
+ 	struct arm_pmu *arm_pmu;
+ 	int ret = -ENXIO;
+ 
+-	mutex_lock(&kvm->lock);
++	lockdep_assert_held(&kvm->arch.config_lock);
+ 	mutex_lock(&arm_pmus_lock);
+ 
+ 	list_for_each_entry(entry, &arm_pmus, entry) {
+@@ -896,7 +896,6 @@ static int kvm_arm_pmu_v3_set_pmu(struct kvm_vcpu *vcpu, int pmu_id)
+ 	}
+ 
+ 	mutex_unlock(&arm_pmus_lock);
+-	mutex_unlock(&kvm->lock);
+ 	return ret;
+ }
+ 
+@@ -904,22 +903,20 @@ int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
+ {
+ 	struct kvm *kvm = vcpu->kvm;
+ 
++	lockdep_assert_held(&kvm->arch.config_lock);
++
+ 	if (!kvm_vcpu_has_pmu(vcpu))
+ 		return -ENODEV;
+ 
+ 	if (vcpu->arch.pmu.created)
+ 		return -EBUSY;
+ 
+-	mutex_lock(&kvm->lock);
+ 	if (!kvm->arch.arm_pmu) {
+ 		/* No PMU set, get the default one */
+ 		kvm->arch.arm_pmu = kvm_pmu_probe_armpmu();
+-		if (!kvm->arch.arm_pmu) {
+-			mutex_unlock(&kvm->lock);
++		if (!kvm->arch.arm_pmu)
+ 			return -ENODEV;
+-		}
+ 	}
+-	mutex_unlock(&kvm->lock);
+ 
+ 	switch (attr->attr) {
+ 	case KVM_ARM_VCPU_PMU_V3_IRQ: {
+@@ -963,19 +960,13 @@ int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
+ 		     filter.action != KVM_PMU_EVENT_DENY))
+ 			return -EINVAL;
+ 
+-		mutex_lock(&kvm->lock);
+-
+-		if (test_bit(KVM_ARCH_FLAG_HAS_RAN_ONCE, &kvm->arch.flags)) {
+-			mutex_unlock(&kvm->lock);
++		if (test_bit(KVM_ARCH_FLAG_HAS_RAN_ONCE, &kvm->arch.flags))
+ 			return -EBUSY;
+-		}
+ 
+ 		if (!kvm->arch.pmu_filter) {
+ 			kvm->arch.pmu_filter = bitmap_alloc(nr_events, GFP_KERNEL_ACCOUNT);
+-			if (!kvm->arch.pmu_filter) {
+-				mutex_unlock(&kvm->lock);
++			if (!kvm->arch.pmu_filter)
+ 				return -ENOMEM;
+-			}
+ 
+ 			/*
+ 			 * The default depends on the first applied filter.
+@@ -994,8 +985,6 @@ int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
+ 		else
+ 			bitmap_clear(kvm->arch.pmu_filter, filter.base_event, filter.nevents);
+ 
+-		mutex_unlock(&kvm->lock);
+-
+ 		return 0;
+ 	}
+ 	case KVM_ARM_VCPU_PMU_V3_SET_PMU: {
+diff --git a/arch/arm64/kvm/psci.c b/arch/arm64/kvm/psci.c
+index 7fbc4c1b9df04..5767e6baa61a2 100644
+--- a/arch/arm64/kvm/psci.c
++++ b/arch/arm64/kvm/psci.c
+@@ -62,6 +62,7 @@ static unsigned long kvm_psci_vcpu_on(struct kvm_vcpu *source_vcpu)
+ 	struct vcpu_reset_state *reset_state;
+ 	struct kvm *kvm = source_vcpu->kvm;
+ 	struct kvm_vcpu *vcpu = NULL;
++	int ret = PSCI_RET_SUCCESS;
+ 	unsigned long cpu_id;
+ 
+ 	cpu_id = smccc_get_arg1(source_vcpu);
+@@ -76,11 +77,15 @@ static unsigned long kvm_psci_vcpu_on(struct kvm_vcpu *source_vcpu)
+ 	 */
+ 	if (!vcpu)
+ 		return PSCI_RET_INVALID_PARAMS;
++
++	spin_lock(&vcpu->arch.mp_state_lock);
+ 	if (!kvm_arm_vcpu_stopped(vcpu)) {
+ 		if (kvm_psci_version(source_vcpu) != KVM_ARM_PSCI_0_1)
+-			return PSCI_RET_ALREADY_ON;
++			ret = PSCI_RET_ALREADY_ON;
+ 		else
+-			return PSCI_RET_INVALID_PARAMS;
++			ret = PSCI_RET_INVALID_PARAMS;
++
++		goto out_unlock;
+ 	}
+ 
+ 	reset_state = &vcpu->arch.reset_state;
+@@ -96,7 +101,7 @@ static unsigned long kvm_psci_vcpu_on(struct kvm_vcpu *source_vcpu)
+ 	 */
+ 	reset_state->r0 = smccc_get_arg3(source_vcpu);
+ 
+-	WRITE_ONCE(reset_state->reset, true);
++	reset_state->reset = true;
+ 	kvm_make_request(KVM_REQ_VCPU_RESET, vcpu);
+ 
+ 	/*
+@@ -108,7 +113,9 @@ static unsigned long kvm_psci_vcpu_on(struct kvm_vcpu *source_vcpu)
+ 	vcpu->arch.mp_state.mp_state = KVM_MP_STATE_RUNNABLE;
+ 	kvm_vcpu_wake_up(vcpu);
+ 
+-	return PSCI_RET_SUCCESS;
++out_unlock:
++	spin_unlock(&vcpu->arch.mp_state_lock);
++	return ret;
+ }
+ 
+ static unsigned long kvm_psci_vcpu_affinity_info(struct kvm_vcpu *vcpu)
+@@ -168,8 +175,11 @@ static void kvm_prepare_system_event(struct kvm_vcpu *vcpu, u32 type, u64 flags)
+ 	 * after this call is handled and before the VCPUs have been
+ 	 * re-initialized.
+ 	 */
+-	kvm_for_each_vcpu(i, tmp, vcpu->kvm)
+-		tmp->arch.mp_state.mp_state = KVM_MP_STATE_STOPPED;
++	kvm_for_each_vcpu(i, tmp, vcpu->kvm) {
++		spin_lock(&tmp->arch.mp_state_lock);
++		WRITE_ONCE(tmp->arch.mp_state.mp_state, KVM_MP_STATE_STOPPED);
++		spin_unlock(&tmp->arch.mp_state_lock);
++	}
+ 	kvm_make_all_cpus_request(vcpu->kvm, KVM_REQ_SLEEP);
+ 
+ 	memset(&vcpu->run->system_event, 0, sizeof(vcpu->run->system_event));
+@@ -229,7 +239,6 @@ static unsigned long kvm_psci_check_allowed_function(struct kvm_vcpu *vcpu, u32
+ 
+ static int kvm_psci_0_2_call(struct kvm_vcpu *vcpu)
+ {
+-	struct kvm *kvm = vcpu->kvm;
+ 	u32 psci_fn = smccc_get_function(vcpu);
+ 	unsigned long val;
+ 	int ret = 1;
+@@ -254,9 +263,7 @@ static int kvm_psci_0_2_call(struct kvm_vcpu *vcpu)
+ 		kvm_psci_narrow_to_32bit(vcpu);
+ 		fallthrough;
+ 	case PSCI_0_2_FN64_CPU_ON:
+-		mutex_lock(&kvm->lock);
+ 		val = kvm_psci_vcpu_on(vcpu);
+-		mutex_unlock(&kvm->lock);
+ 		break;
+ 	case PSCI_0_2_FN_AFFINITY_INFO:
+ 		kvm_psci_narrow_to_32bit(vcpu);
+@@ -395,7 +402,6 @@ static int kvm_psci_1_x_call(struct kvm_vcpu *vcpu, u32 minor)
+ 
+ static int kvm_psci_0_1_call(struct kvm_vcpu *vcpu)
+ {
+-	struct kvm *kvm = vcpu->kvm;
+ 	u32 psci_fn = smccc_get_function(vcpu);
+ 	unsigned long val;
+ 
+@@ -405,9 +411,7 @@ static int kvm_psci_0_1_call(struct kvm_vcpu *vcpu)
+ 		val = PSCI_RET_SUCCESS;
+ 		break;
+ 	case KVM_PSCI_FN_CPU_ON:
+-		mutex_lock(&kvm->lock);
+ 		val = kvm_psci_vcpu_on(vcpu);
+-		mutex_unlock(&kvm->lock);
+ 		break;
+ 	default:
+ 		val = PSCI_RET_NOT_SUPPORTED;
+diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
+index 49a3257dec46d..b5dee8e57e77a 100644
+--- a/arch/arm64/kvm/reset.c
++++ b/arch/arm64/kvm/reset.c
+@@ -205,7 +205,7 @@ static int kvm_set_vm_width(struct kvm_vcpu *vcpu)
+ 
+ 	is32bit = vcpu_has_feature(vcpu, KVM_ARM_VCPU_EL1_32BIT);
+ 
+-	lockdep_assert_held(&kvm->lock);
++	lockdep_assert_held(&kvm->arch.config_lock);
+ 
+ 	if (test_bit(KVM_ARCH_FLAG_REG_WIDTH_CONFIGURED, &kvm->arch.flags)) {
+ 		/*
+@@ -262,17 +262,18 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
+ 	bool loaded;
+ 	u32 pstate;
+ 
+-	mutex_lock(&vcpu->kvm->lock);
++	mutex_lock(&vcpu->kvm->arch.config_lock);
+ 	ret = kvm_set_vm_width(vcpu);
+-	if (!ret) {
+-		reset_state = vcpu->arch.reset_state;
+-		WRITE_ONCE(vcpu->arch.reset_state.reset, false);
+-	}
+-	mutex_unlock(&vcpu->kvm->lock);
++	mutex_unlock(&vcpu->kvm->arch.config_lock);
+ 
+ 	if (ret)
+ 		return ret;
+ 
++	spin_lock(&vcpu->arch.mp_state_lock);
++	reset_state = vcpu->arch.reset_state;
++	vcpu->arch.reset_state.reset = false;
++	spin_unlock(&vcpu->arch.mp_state_lock);
++
+ 	/* Reset PMU outside of the non-preemptible section */
+ 	kvm_pmu_vcpu_reset(vcpu);
+ 
+diff --git a/arch/arm64/kvm/vgic/vgic-debug.c b/arch/arm64/kvm/vgic/vgic-debug.c
+index 78cde687383ca..07aa0437125a6 100644
+--- a/arch/arm64/kvm/vgic/vgic-debug.c
++++ b/arch/arm64/kvm/vgic/vgic-debug.c
+@@ -85,7 +85,7 @@ static void *vgic_debug_start(struct seq_file *s, loff_t *pos)
+ 	struct kvm *kvm = s->private;
+ 	struct vgic_state_iter *iter;
+ 
+-	mutex_lock(&kvm->lock);
++	mutex_lock(&kvm->arch.config_lock);
+ 	iter = kvm->arch.vgic.iter;
+ 	if (iter) {
+ 		iter = ERR_PTR(-EBUSY);
+@@ -104,7 +104,7 @@ static void *vgic_debug_start(struct seq_file *s, loff_t *pos)
+ 	if (end_of_vgic(iter))
+ 		iter = NULL;
+ out:
+-	mutex_unlock(&kvm->lock);
++	mutex_unlock(&kvm->arch.config_lock);
+ 	return iter;
+ }
+ 
+@@ -132,12 +132,12 @@ static void vgic_debug_stop(struct seq_file *s, void *v)
+ 	if (IS_ERR(v))
+ 		return;
+ 
+-	mutex_lock(&kvm->lock);
++	mutex_lock(&kvm->arch.config_lock);
+ 	iter = kvm->arch.vgic.iter;
+ 	kfree(iter->lpi_array);
+ 	kfree(iter);
+ 	kvm->arch.vgic.iter = NULL;
+-	mutex_unlock(&kvm->lock);
++	mutex_unlock(&kvm->arch.config_lock);
+ }
+ 
+ static void print_dist_state(struct seq_file *s, struct vgic_dist *dist)
+diff --git a/arch/arm64/kvm/vgic/vgic-init.c b/arch/arm64/kvm/vgic/vgic-init.c
+index cd134db41a57c..9d42c7cb2b588 100644
+--- a/arch/arm64/kvm/vgic/vgic-init.c
++++ b/arch/arm64/kvm/vgic/vgic-init.c
+@@ -74,9 +74,6 @@ int kvm_vgic_create(struct kvm *kvm, u32 type)
+ 	unsigned long i;
+ 	int ret;
+ 
+-	if (irqchip_in_kernel(kvm))
+-		return -EEXIST;
+-
+ 	/*
+ 	 * This function is also called by the KVM_CREATE_IRQCHIP handler,
+ 	 * which had no chance yet to check the availability of the GICv2
+@@ -87,10 +84,20 @@ int kvm_vgic_create(struct kvm *kvm, u32 type)
+ 		!kvm_vgic_global_state.can_emulate_gicv2)
+ 		return -ENODEV;
+ 
++	/* Must be held to avoid race with vCPU creation */
++	lockdep_assert_held(&kvm->lock);
++
+ 	ret = -EBUSY;
+ 	if (!lock_all_vcpus(kvm))
+ 		return ret;
+ 
++	mutex_lock(&kvm->arch.config_lock);
++
++	if (irqchip_in_kernel(kvm)) {
++		ret = -EEXIST;
++		goto out_unlock;
++	}
++
+ 	kvm_for_each_vcpu(i, vcpu, kvm) {
+ 		if (vcpu_has_run_once(vcpu))
+ 			goto out_unlock;
+@@ -118,6 +125,7 @@ int kvm_vgic_create(struct kvm *kvm, u32 type)
+ 		INIT_LIST_HEAD(&kvm->arch.vgic.rd_regions);
+ 
+ out_unlock:
++	mutex_unlock(&kvm->arch.config_lock);
+ 	unlock_all_vcpus(kvm);
+ 	return ret;
+ }
+@@ -227,9 +235,9 @@ int kvm_vgic_vcpu_init(struct kvm_vcpu *vcpu)
+ 	 * KVM io device for the redistributor that belongs to this VCPU.
+ 	 */
+ 	if (dist->vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3) {
+-		mutex_lock(&vcpu->kvm->lock);
++		mutex_lock(&vcpu->kvm->arch.config_lock);
+ 		ret = vgic_register_redist_iodev(vcpu);
+-		mutex_unlock(&vcpu->kvm->lock);
++		mutex_unlock(&vcpu->kvm->arch.config_lock);
+ 	}
+ 	return ret;
+ }
+@@ -250,7 +258,6 @@ static void kvm_vgic_vcpu_enable(struct kvm_vcpu *vcpu)
+  * The function is generally called when nr_spis has been explicitly set
+  * by the guest through the KVM DEVICE API. If not nr_spis is set to 256.
+  * vgic_initialized() returns true when this function has succeeded.
+- * Must be called with kvm->lock held!
+  */
+ int vgic_init(struct kvm *kvm)
+ {
+@@ -259,6 +266,8 @@ int vgic_init(struct kvm *kvm)
+ 	int ret = 0, i;
+ 	unsigned long idx;
+ 
++	lockdep_assert_held(&kvm->arch.config_lock);
++
+ 	if (vgic_initialized(kvm))
+ 		return 0;
+ 
+@@ -373,12 +382,13 @@ void kvm_vgic_vcpu_destroy(struct kvm_vcpu *vcpu)
+ 	vgic_cpu->rd_iodev.base_addr = VGIC_ADDR_UNDEF;
+ }
+ 
+-/* To be called with kvm->lock held */
+ static void __kvm_vgic_destroy(struct kvm *kvm)
+ {
+ 	struct kvm_vcpu *vcpu;
+ 	unsigned long i;
+ 
++	lockdep_assert_held(&kvm->arch.config_lock);
++
+ 	vgic_debug_destroy(kvm);
+ 
+ 	kvm_for_each_vcpu(i, vcpu, kvm)
+@@ -389,9 +399,9 @@ static void __kvm_vgic_destroy(struct kvm *kvm)
+ 
+ void kvm_vgic_destroy(struct kvm *kvm)
+ {
+-	mutex_lock(&kvm->lock);
++	mutex_lock(&kvm->arch.config_lock);
+ 	__kvm_vgic_destroy(kvm);
+-	mutex_unlock(&kvm->lock);
++	mutex_unlock(&kvm->arch.config_lock);
+ }
+ 
+ /**
+@@ -414,9 +424,9 @@ int vgic_lazy_init(struct kvm *kvm)
+ 		if (kvm->arch.vgic.vgic_model != KVM_DEV_TYPE_ARM_VGIC_V2)
+ 			return -EBUSY;
+ 
+-		mutex_lock(&kvm->lock);
++		mutex_lock(&kvm->arch.config_lock);
+ 		ret = vgic_init(kvm);
+-		mutex_unlock(&kvm->lock);
++		mutex_unlock(&kvm->arch.config_lock);
+ 	}
+ 
+ 	return ret;
+@@ -441,7 +451,7 @@ int kvm_vgic_map_resources(struct kvm *kvm)
+ 	if (likely(vgic_ready(kvm)))
+ 		return 0;
+ 
+-	mutex_lock(&kvm->lock);
++	mutex_lock(&kvm->arch.config_lock);
+ 	if (vgic_ready(kvm))
+ 		goto out;
+ 
+@@ -459,7 +469,7 @@ int kvm_vgic_map_resources(struct kvm *kvm)
+ 		dist->ready = true;
+ 
+ out:
+-	mutex_unlock(&kvm->lock);
++	mutex_unlock(&kvm->arch.config_lock);
+ 	return ret;
+ }
+ 
+diff --git a/arch/arm64/kvm/vgic/vgic-its.c b/arch/arm64/kvm/vgic/vgic-its.c
+index 2642e9ce28199..750e51e3779a3 100644
+--- a/arch/arm64/kvm/vgic/vgic-its.c
++++ b/arch/arm64/kvm/vgic/vgic-its.c
+@@ -1958,6 +1958,16 @@ static int vgic_its_create(struct kvm_device *dev, u32 type)
+ 	mutex_init(&its->its_lock);
+ 	mutex_init(&its->cmd_lock);
+ 
++	/* Yep, even more trickery for lock ordering... */
++#ifdef CONFIG_LOCKDEP
++	mutex_lock(&dev->kvm->arch.config_lock);
++	mutex_lock(&its->cmd_lock);
++	mutex_lock(&its->its_lock);
++	mutex_unlock(&its->its_lock);
++	mutex_unlock(&its->cmd_lock);
++	mutex_unlock(&dev->kvm->arch.config_lock);
++#endif
++
+ 	its->vgic_its_base = VGIC_ADDR_UNDEF;
+ 
+ 	INIT_LIST_HEAD(&its->device_list);
+@@ -2045,6 +2055,13 @@ static int vgic_its_attr_regs_access(struct kvm_device *dev,
+ 
+ 	mutex_lock(&dev->kvm->lock);
+ 
++	if (!lock_all_vcpus(dev->kvm)) {
++		mutex_unlock(&dev->kvm->lock);
++		return -EBUSY;
++	}
++
++	mutex_lock(&dev->kvm->arch.config_lock);
++
+ 	if (IS_VGIC_ADDR_UNDEF(its->vgic_its_base)) {
+ 		ret = -ENXIO;
+ 		goto out;
+@@ -2058,11 +2075,6 @@ static int vgic_its_attr_regs_access(struct kvm_device *dev,
+ 		goto out;
+ 	}
+ 
+-	if (!lock_all_vcpus(dev->kvm)) {
+-		ret = -EBUSY;
+-		goto out;
+-	}
+-
+ 	addr = its->vgic_its_base + offset;
+ 
+ 	len = region->access_flags & VGIC_ACCESS_64bit ? 8 : 4;
+@@ -2076,8 +2088,9 @@ static int vgic_its_attr_regs_access(struct kvm_device *dev,
+ 	} else {
+ 		*reg = region->its_read(dev->kvm, its, addr, len);
+ 	}
+-	unlock_all_vcpus(dev->kvm);
+ out:
++	mutex_unlock(&dev->kvm->arch.config_lock);
++	unlock_all_vcpus(dev->kvm);
+ 	mutex_unlock(&dev->kvm->lock);
+ 	return ret;
+ }
+@@ -2749,14 +2762,15 @@ static int vgic_its_ctrl(struct kvm *kvm, struct vgic_its *its, u64 attr)
+ 		return 0;
+ 
+ 	mutex_lock(&kvm->lock);
+-	mutex_lock(&its->its_lock);
+ 
+ 	if (!lock_all_vcpus(kvm)) {
+-		mutex_unlock(&its->its_lock);
+ 		mutex_unlock(&kvm->lock);
+ 		return -EBUSY;
+ 	}
+ 
++	mutex_lock(&kvm->arch.config_lock);
++	mutex_lock(&its->its_lock);
++
+ 	switch (attr) {
+ 	case KVM_DEV_ARM_ITS_CTRL_RESET:
+ 		vgic_its_reset(kvm, its);
+@@ -2769,8 +2783,9 @@ static int vgic_its_ctrl(struct kvm *kvm, struct vgic_its *its, u64 attr)
+ 		break;
+ 	}
+ 
+-	unlock_all_vcpus(kvm);
+ 	mutex_unlock(&its->its_lock);
++	mutex_unlock(&kvm->arch.config_lock);
++	unlock_all_vcpus(kvm);
+ 	mutex_unlock(&kvm->lock);
+ 	return ret;
+ }
+diff --git a/arch/arm64/kvm/vgic/vgic-kvm-device.c b/arch/arm64/kvm/vgic/vgic-kvm-device.c
+index edeac2380591f..07e727023deb7 100644
+--- a/arch/arm64/kvm/vgic/vgic-kvm-device.c
++++ b/arch/arm64/kvm/vgic/vgic-kvm-device.c
+@@ -46,7 +46,7 @@ int kvm_set_legacy_vgic_v2_addr(struct kvm *kvm, struct kvm_arm_device_addr *dev
+ 	struct vgic_dist *vgic = &kvm->arch.vgic;
+ 	int r;
+ 
+-	mutex_lock(&kvm->lock);
++	mutex_lock(&kvm->arch.config_lock);
+ 	switch (FIELD_GET(KVM_ARM_DEVICE_TYPE_MASK, dev_addr->id)) {
+ 	case KVM_VGIC_V2_ADDR_TYPE_DIST:
+ 		r = vgic_check_type(kvm, KVM_DEV_TYPE_ARM_VGIC_V2);
+@@ -68,7 +68,7 @@ int kvm_set_legacy_vgic_v2_addr(struct kvm *kvm, struct kvm_arm_device_addr *dev
+ 		r = -ENODEV;
+ 	}
+ 
+-	mutex_unlock(&kvm->lock);
++	mutex_unlock(&kvm->arch.config_lock);
+ 
+ 	return r;
+ }
+@@ -102,7 +102,7 @@ static int kvm_vgic_addr(struct kvm *kvm, struct kvm_device_attr *attr, bool wri
+ 		if (get_user(addr, uaddr))
+ 			return -EFAULT;
+ 
+-	mutex_lock(&kvm->lock);
++	mutex_lock(&kvm->arch.config_lock);
+ 	switch (attr->attr) {
+ 	case KVM_VGIC_V2_ADDR_TYPE_DIST:
+ 		r = vgic_check_type(kvm, KVM_DEV_TYPE_ARM_VGIC_V2);
+@@ -191,7 +191,7 @@ static int kvm_vgic_addr(struct kvm *kvm, struct kvm_device_attr *attr, bool wri
+ 	}
+ 
+ out:
+-	mutex_unlock(&kvm->lock);
++	mutex_unlock(&kvm->arch.config_lock);
+ 
+ 	if (!r && !write)
+ 		r =  put_user(addr, uaddr);
+@@ -227,7 +227,7 @@ static int vgic_set_common_attr(struct kvm_device *dev,
+ 		    (val & 31))
+ 			return -EINVAL;
+ 
+-		mutex_lock(&dev->kvm->lock);
++		mutex_lock(&dev->kvm->arch.config_lock);
+ 
+ 		if (vgic_ready(dev->kvm) || dev->kvm->arch.vgic.nr_spis)
+ 			ret = -EBUSY;
+@@ -235,16 +235,16 @@ static int vgic_set_common_attr(struct kvm_device *dev,
+ 			dev->kvm->arch.vgic.nr_spis =
+ 				val - VGIC_NR_PRIVATE_IRQS;
+ 
+-		mutex_unlock(&dev->kvm->lock);
++		mutex_unlock(&dev->kvm->arch.config_lock);
+ 
+ 		return ret;
+ 	}
+ 	case KVM_DEV_ARM_VGIC_GRP_CTRL: {
+ 		switch (attr->attr) {
+ 		case KVM_DEV_ARM_VGIC_CTRL_INIT:
+-			mutex_lock(&dev->kvm->lock);
++			mutex_lock(&dev->kvm->arch.config_lock);
+ 			r = vgic_init(dev->kvm);
+-			mutex_unlock(&dev->kvm->lock);
++			mutex_unlock(&dev->kvm->arch.config_lock);
+ 			return r;
+ 		case KVM_DEV_ARM_VGIC_SAVE_PENDING_TABLES:
+ 			/*
+@@ -260,7 +260,10 @@ static int vgic_set_common_attr(struct kvm_device *dev,
+ 				mutex_unlock(&dev->kvm->lock);
+ 				return -EBUSY;
+ 			}
++
++			mutex_lock(&dev->kvm->arch.config_lock);
+ 			r = vgic_v3_save_pending_tables(dev->kvm);
++			mutex_unlock(&dev->kvm->arch.config_lock);
+ 			unlock_all_vcpus(dev->kvm);
+ 			mutex_unlock(&dev->kvm->lock);
+ 			return r;
+@@ -411,15 +414,17 @@ static int vgic_v2_attr_regs_access(struct kvm_device *dev,
+ 
+ 	mutex_lock(&dev->kvm->lock);
+ 
++	if (!lock_all_vcpus(dev->kvm)) {
++		mutex_unlock(&dev->kvm->lock);
++		return -EBUSY;
++	}
++
++	mutex_lock(&dev->kvm->arch.config_lock);
++
+ 	ret = vgic_init(dev->kvm);
+ 	if (ret)
+ 		goto out;
+ 
+-	if (!lock_all_vcpus(dev->kvm)) {
+-		ret = -EBUSY;
+-		goto out;
+-	}
+-
+ 	switch (attr->group) {
+ 	case KVM_DEV_ARM_VGIC_GRP_CPU_REGS:
+ 		ret = vgic_v2_cpuif_uaccess(vcpu, is_write, addr, &val);
+@@ -432,8 +437,9 @@ static int vgic_v2_attr_regs_access(struct kvm_device *dev,
+ 		break;
+ 	}
+ 
+-	unlock_all_vcpus(dev->kvm);
+ out:
++	mutex_unlock(&dev->kvm->arch.config_lock);
++	unlock_all_vcpus(dev->kvm);
+ 	mutex_unlock(&dev->kvm->lock);
+ 
+ 	if (!ret && !is_write)
+@@ -569,12 +575,14 @@ static int vgic_v3_attr_regs_access(struct kvm_device *dev,
+ 
+ 	mutex_lock(&dev->kvm->lock);
+ 
+-	if (unlikely(!vgic_initialized(dev->kvm))) {
+-		ret = -EBUSY;
+-		goto out;
++	if (!lock_all_vcpus(dev->kvm)) {
++		mutex_unlock(&dev->kvm->lock);
++		return -EBUSY;
+ 	}
+ 
+-	if (!lock_all_vcpus(dev->kvm)) {
++	mutex_lock(&dev->kvm->arch.config_lock);
++
++	if (unlikely(!vgic_initialized(dev->kvm))) {
+ 		ret = -EBUSY;
+ 		goto out;
+ 	}
+@@ -609,8 +617,9 @@ static int vgic_v3_attr_regs_access(struct kvm_device *dev,
+ 		break;
+ 	}
+ 
+-	unlock_all_vcpus(dev->kvm);
+ out:
++	mutex_unlock(&dev->kvm->arch.config_lock);
++	unlock_all_vcpus(dev->kvm);
+ 	mutex_unlock(&dev->kvm->lock);
+ 
+ 	if (!ret && uaccess && !is_write) {
+diff --git a/arch/arm64/kvm/vgic/vgic-mmio-v3.c b/arch/arm64/kvm/vgic/vgic-mmio-v3.c
+index 91201f7430339..472b18ac92a24 100644
+--- a/arch/arm64/kvm/vgic/vgic-mmio-v3.c
++++ b/arch/arm64/kvm/vgic/vgic-mmio-v3.c
+@@ -111,7 +111,7 @@ static void vgic_mmio_write_v3_misc(struct kvm_vcpu *vcpu,
+ 	case GICD_CTLR: {
+ 		bool was_enabled, is_hwsgi;
+ 
+-		mutex_lock(&vcpu->kvm->lock);
++		mutex_lock(&vcpu->kvm->arch.config_lock);
+ 
+ 		was_enabled = dist->enabled;
+ 		is_hwsgi = dist->nassgireq;
+@@ -139,7 +139,7 @@ static void vgic_mmio_write_v3_misc(struct kvm_vcpu *vcpu,
+ 		else if (!was_enabled && dist->enabled)
+ 			vgic_kick_vcpus(vcpu->kvm);
+ 
+-		mutex_unlock(&vcpu->kvm->lock);
++		mutex_unlock(&vcpu->kvm->arch.config_lock);
+ 		break;
+ 	}
+ 	case GICD_TYPER:
+diff --git a/arch/arm64/kvm/vgic/vgic-mmio.c b/arch/arm64/kvm/vgic/vgic-mmio.c
+index e67b3b2c80440..1939c94e0b248 100644
+--- a/arch/arm64/kvm/vgic/vgic-mmio.c
++++ b/arch/arm64/kvm/vgic/vgic-mmio.c
+@@ -530,13 +530,13 @@ unsigned long vgic_mmio_read_active(struct kvm_vcpu *vcpu,
+ 	u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
+ 	u32 val;
+ 
+-	mutex_lock(&vcpu->kvm->lock);
++	mutex_lock(&vcpu->kvm->arch.config_lock);
+ 	vgic_access_active_prepare(vcpu, intid);
+ 
+ 	val = __vgic_mmio_read_active(vcpu, addr, len);
+ 
+ 	vgic_access_active_finish(vcpu, intid);
+-	mutex_unlock(&vcpu->kvm->lock);
++	mutex_unlock(&vcpu->kvm->arch.config_lock);
+ 
+ 	return val;
+ }
+@@ -625,13 +625,13 @@ void vgic_mmio_write_cactive(struct kvm_vcpu *vcpu,
+ {
+ 	u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
+ 
+-	mutex_lock(&vcpu->kvm->lock);
++	mutex_lock(&vcpu->kvm->arch.config_lock);
+ 	vgic_access_active_prepare(vcpu, intid);
+ 
+ 	__vgic_mmio_write_cactive(vcpu, addr, len, val);
+ 
+ 	vgic_access_active_finish(vcpu, intid);
+-	mutex_unlock(&vcpu->kvm->lock);
++	mutex_unlock(&vcpu->kvm->arch.config_lock);
+ }
+ 
+ int vgic_mmio_uaccess_write_cactive(struct kvm_vcpu *vcpu,
+@@ -662,13 +662,13 @@ void vgic_mmio_write_sactive(struct kvm_vcpu *vcpu,
+ {
+ 	u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
+ 
+-	mutex_lock(&vcpu->kvm->lock);
++	mutex_lock(&vcpu->kvm->arch.config_lock);
+ 	vgic_access_active_prepare(vcpu, intid);
+ 
+ 	__vgic_mmio_write_sactive(vcpu, addr, len, val);
+ 
+ 	vgic_access_active_finish(vcpu, intid);
+-	mutex_unlock(&vcpu->kvm->lock);
++	mutex_unlock(&vcpu->kvm->arch.config_lock);
+ }
+ 
+ int vgic_mmio_uaccess_write_sactive(struct kvm_vcpu *vcpu,
+diff --git a/arch/arm64/kvm/vgic/vgic-v4.c b/arch/arm64/kvm/vgic/vgic-v4.c
+index a413718be92b8..3bb0034780605 100644
+--- a/arch/arm64/kvm/vgic/vgic-v4.c
++++ b/arch/arm64/kvm/vgic/vgic-v4.c
+@@ -232,9 +232,8 @@ int vgic_v4_request_vpe_irq(struct kvm_vcpu *vcpu, int irq)
+  * @kvm:	Pointer to the VM being initialized
+  *
+  * We may be called each time a vITS is created, or when the
+- * vgic is initialized. This relies on kvm->lock to be
+- * held. In both cases, the number of vcpus should now be
+- * fixed.
++ * vgic is initialized. In both cases, the number of vcpus
++ * should now be fixed.
+  */
+ int vgic_v4_init(struct kvm *kvm)
+ {
+@@ -243,6 +242,8 @@ int vgic_v4_init(struct kvm *kvm)
+ 	int nr_vcpus, ret;
+ 	unsigned long i;
+ 
++	lockdep_assert_held(&kvm->arch.config_lock);
++
+ 	if (!kvm_vgic_global_state.has_gicv4)
+ 		return 0; /* Nothing to see here... move along. */
+ 
+@@ -309,14 +310,14 @@ int vgic_v4_init(struct kvm *kvm)
+ /**
+  * vgic_v4_teardown - Free the GICv4 data structures
+  * @kvm:	Pointer to the VM being destroyed
+- *
+- * Relies on kvm->lock to be held.
+  */
+ void vgic_v4_teardown(struct kvm *kvm)
+ {
+ 	struct its_vm *its_vm = &kvm->arch.vgic.its_vm;
+ 	int i;
+ 
++	lockdep_assert_held(&kvm->arch.config_lock);
++
+ 	if (!its_vm->vpes)
+ 		return;
+ 
+diff --git a/arch/arm64/kvm/vgic/vgic.c b/arch/arm64/kvm/vgic/vgic.c
+index d97e6080b4217..0a005da83ae64 100644
+--- a/arch/arm64/kvm/vgic/vgic.c
++++ b/arch/arm64/kvm/vgic/vgic.c
+@@ -24,11 +24,13 @@ struct vgic_global kvm_vgic_global_state __ro_after_init = {
+ /*
+  * Locking order is always:
+  * kvm->lock (mutex)
+- *   its->cmd_lock (mutex)
+- *     its->its_lock (mutex)
+- *       vgic_cpu->ap_list_lock		must be taken with IRQs disabled
+- *         kvm->lpi_list_lock		must be taken with IRQs disabled
+- *           vgic_irq->irq_lock		must be taken with IRQs disabled
++ *   vcpu->mutex (mutex)
++ *     kvm->arch.config_lock (mutex)
++ *       its->cmd_lock (mutex)
++ *         its->its_lock (mutex)
++ *           vgic_cpu->ap_list_lock		must be taken with IRQs disabled
++ *             kvm->lpi_list_lock		must be taken with IRQs disabled
++ *               vgic_irq->irq_lock		must be taken with IRQs disabled
+  *
+  * As the ap_list_lock might be taken from the timer interrupt handler,
+  * we have to disable IRQs before taking this lock and everything lower
+diff --git a/arch/ia64/kernel/salinfo.c b/arch/ia64/kernel/salinfo.c
+index bd3ba276e69c3..03b632c568995 100644
+--- a/arch/ia64/kernel/salinfo.c
++++ b/arch/ia64/kernel/salinfo.c
+@@ -581,7 +581,7 @@ static int salinfo_cpu_pre_down(unsigned int cpu)
+  * 'data' contains an integer that corresponds to the feature we're
+  * testing
+  */
+-static int proc_salinfo_show(struct seq_file *m, void *v)
++static int __maybe_unused proc_salinfo_show(struct seq_file *m, void *v)
+ {
+ 	unsigned long data = (unsigned long)v;
+ 	seq_puts(m, (sal_platform_features & data) ? "1\n" : "0\n");
+diff --git a/arch/ia64/mm/contig.c b/arch/ia64/mm/contig.c
+index 24901d8093015..1e9eaa107eb73 100644
+--- a/arch/ia64/mm/contig.c
++++ b/arch/ia64/mm/contig.c
+@@ -77,7 +77,7 @@ skip:
+ 	return __per_cpu_start + __per_cpu_offset[smp_processor_id()];
+ }
+ 
+-static inline void
++static inline __init void
+ alloc_per_cpu_data(void)
+ {
+ 	size_t size = PERCPU_PAGE_SIZE * num_possible_cpus();
+diff --git a/arch/ia64/mm/hugetlbpage.c b/arch/ia64/mm/hugetlbpage.c
+index 380d2f3966c98..9e8960e499622 100644
+--- a/arch/ia64/mm/hugetlbpage.c
++++ b/arch/ia64/mm/hugetlbpage.c
+@@ -58,7 +58,7 @@ huge_pte_offset (struct mm_struct *mm, unsigned long addr, unsigned long sz)
+ 
+ 	pgd = pgd_offset(mm, taddr);
+ 	if (pgd_present(*pgd)) {
+-		p4d = p4d_offset(pgd, addr);
++		p4d = p4d_offset(pgd, taddr);
+ 		if (p4d_present(*p4d)) {
+ 			pud = pud_offset(p4d, taddr);
+ 			if (pud_present(*pud)) {
+diff --git a/arch/mips/fw/lib/cmdline.c b/arch/mips/fw/lib/cmdline.c
+index f24cbb4a39b50..892765b742bbc 100644
+--- a/arch/mips/fw/lib/cmdline.c
++++ b/arch/mips/fw/lib/cmdline.c
+@@ -53,7 +53,7 @@ char *fw_getenv(char *envname)
+ {
+ 	char *result = NULL;
+ 
+-	if (_fw_envp != NULL) {
++	if (_fw_envp != NULL && fw_envp(0) != NULL) {
+ 		/*
+ 		 * Return a pointer to the given environment variable.
+ 		 * YAMON uses "name", "value" pairs, while U-Boot uses
+diff --git a/arch/openrisc/kernel/entry.S b/arch/openrisc/kernel/entry.S
+index 54a87bba35caa..a130c4dac48d3 100644
+--- a/arch/openrisc/kernel/entry.S
++++ b/arch/openrisc/kernel/entry.S
+@@ -173,7 +173,6 @@ handler:							;\
+ 	l.sw    PT_GPR28(r1),r28					;\
+ 	l.sw    PT_GPR29(r1),r29					;\
+ 	/* r30 already save */					;\
+-/*        l.sw    PT_GPR30(r1),r30*/					;\
+ 	l.sw    PT_GPR31(r1),r31					;\
+ 	TRACE_IRQS_OFF_ENTRY						;\
+ 	/* Store -1 in orig_gpr11 for non-syscall exceptions */	;\
+@@ -211,9 +210,8 @@ handler:							;\
+ 	l.sw    PT_GPR27(r1),r27					;\
+ 	l.sw    PT_GPR28(r1),r28					;\
+ 	l.sw    PT_GPR29(r1),r29					;\
+-	/* r31 already saved */					;\
+-	l.sw    PT_GPR30(r1),r30					;\
+-/*        l.sw    PT_GPR31(r1),r31	*/				;\
++	/* r30 already saved */						;\
++	l.sw    PT_GPR31(r1),r31					;\
+ 	/* Store -1 in orig_gpr11 for non-syscall exceptions */	;\
+ 	l.addi	r30,r0,-1					;\
+ 	l.sw	PT_ORIG_GPR11(r1),r30				;\
+diff --git a/arch/parisc/kernel/pacache.S b/arch/parisc/kernel/pacache.S
+index 9a0018f1f42cb..541370d145594 100644
+--- a/arch/parisc/kernel/pacache.S
++++ b/arch/parisc/kernel/pacache.S
+@@ -889,6 +889,7 @@ ENDPROC_CFI(flush_icache_page_asm)
+ ENTRY_CFI(flush_kernel_dcache_page_asm)
+ 88:	ldil		L%dcache_stride, %r1
+ 	ldw		R%dcache_stride(%r1), %r23
++	depi_safe	0, 31,PAGE_SHIFT, %r26	/* Clear any offset bits */
+ 
+ #ifdef CONFIG_64BIT
+ 	depdi,z		1, 63-PAGE_SHIFT,1, %r25
+@@ -925,6 +926,7 @@ ENDPROC_CFI(flush_kernel_dcache_page_asm)
+ ENTRY_CFI(purge_kernel_dcache_page_asm)
+ 88:	ldil		L%dcache_stride, %r1
+ 	ldw		R%dcache_stride(%r1), %r23
++	depi_safe	0, 31,PAGE_SHIFT, %r26	/* Clear any offset bits */
+ 
+ #ifdef CONFIG_64BIT
+ 	depdi,z		1, 63-PAGE_SHIFT,1, %r25
+diff --git a/arch/parisc/kernel/real2.S b/arch/parisc/kernel/real2.S
+index 4dc12c4c09809..509d18b8e0e65 100644
+--- a/arch/parisc/kernel/real2.S
++++ b/arch/parisc/kernel/real2.S
+@@ -235,9 +235,6 @@ ENTRY_CFI(real64_call_asm)
+ 	/* save fn */
+ 	copy	%arg2, %r31
+ 
+-	/* set up the new ap */
+-	ldo	64(%arg1), %r29
+-
+ 	/* load up the arg registers from the saved arg area */
+ 	/* 32-bit calling convention passes first 4 args in registers */
+ 	ldd	0*REG_SZ(%arg1), %arg0		/* note overwriting arg0 */
+@@ -249,7 +246,9 @@ ENTRY_CFI(real64_call_asm)
+ 	ldd	7*REG_SZ(%arg1), %r19
+ 	ldd	1*REG_SZ(%arg1), %arg1		/* do this one last! */
+ 
++	/* set up real-mode stack and real-mode ap */
+ 	tophys_r1 %sp
++	ldo	-16(%sp), %r29			/* Reference param save area */
+ 
+ 	b,l	rfi_virt2real,%r2
+ 	nop
+diff --git a/arch/powerpc/boot/Makefile b/arch/powerpc/boot/Makefile
+index 295f76df13b55..13fad4f0a6d8f 100644
+--- a/arch/powerpc/boot/Makefile
++++ b/arch/powerpc/boot/Makefile
+@@ -34,6 +34,8 @@ endif
+ 
+ BOOTCFLAGS    := -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs \
+ 		 -fno-strict-aliasing -O2 -msoft-float -mno-altivec -mno-vsx \
++		 $(call cc-option,-mno-prefixed) $(call cc-option,-mno-pcrel) \
++		 $(call cc-option,-mno-mma) \
+ 		 $(call cc-option,-mno-spe) $(call cc-option,-mspe=no) \
+ 		 -pipe -fomit-frame-pointer -fno-builtin -fPIC -nostdinc \
+ 		 $(LINUXINCLUDE)
+diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h
+index 1e8b2e04e626a..8fda87af2fa5e 100644
+--- a/arch/powerpc/include/asm/reg.h
++++ b/arch/powerpc/include/asm/reg.h
+@@ -1310,6 +1310,11 @@
+ #define PVR_VER_E500MC	0x8023
+ #define PVR_VER_E5500	0x8024
+ #define PVR_VER_E6500	0x8040
++#define PVR_VER_7450	0x8000
++#define PVR_VER_7455	0x8001
++#define PVR_VER_7447	0x8002
++#define PVR_VER_7447A	0x8003
++#define PVR_VER_7448	0x8004
+ 
+ /*
+  * For the 8xx processors, all of them report the same PVR family for
+diff --git a/arch/powerpc/kernel/rtas.c b/arch/powerpc/kernel/rtas.c
+index 31175b34856ac..9256cfaa8b6f1 100644
+--- a/arch/powerpc/kernel/rtas.c
++++ b/arch/powerpc/kernel/rtas.c
+@@ -981,7 +981,7 @@ static char *__fetch_rtas_last_error(char *altbuf)
+ 				buf = kmalloc(RTAS_ERROR_LOG_MAX, GFP_ATOMIC);
+ 		}
+ 		if (buf)
+-			memcpy(buf, rtas_err_buf, RTAS_ERROR_LOG_MAX);
++			memmove(buf, rtas_err_buf, RTAS_ERROR_LOG_MAX);
+ 	}
+ 
+ 	return buf;
+diff --git a/arch/powerpc/perf/mpc7450-pmu.c b/arch/powerpc/perf/mpc7450-pmu.c
+index 552d51a925d37..db451b9aac35e 100644
+--- a/arch/powerpc/perf/mpc7450-pmu.c
++++ b/arch/powerpc/perf/mpc7450-pmu.c
+@@ -417,9 +417,9 @@ struct power_pmu mpc7450_pmu = {
+ 
+ static int __init init_mpc7450_pmu(void)
+ {
+-	unsigned int pvr = mfspr(SPRN_PVR);
+-
+-	if (PVR_VER(pvr) != PVR_7450)
++	if (!pvr_version_is(PVR_VER_7450) && !pvr_version_is(PVR_VER_7455) &&
++	    !pvr_version_is(PVR_VER_7447) && !pvr_version_is(PVR_VER_7447A) &&
++	    !pvr_version_is(PVR_VER_7448))
+ 		return -ENODEV;
+ 
+ 	return register_power_pmu(&mpc7450_pmu);
+diff --git a/arch/powerpc/platforms/512x/clock-commonclk.c b/arch/powerpc/platforms/512x/clock-commonclk.c
+index 42abeba4f6983..079cb3627eacd 100644
+--- a/arch/powerpc/platforms/512x/clock-commonclk.c
++++ b/arch/powerpc/platforms/512x/clock-commonclk.c
+@@ -986,7 +986,7 @@ static void __init mpc5121_clk_provide_migration_support(void)
+ 
+ #define NODE_PREP do { \
+ 	of_address_to_resource(np, 0, &res); \
+-	snprintf(devname, sizeof(devname), "%08x.%s", res.start, np->name); \
++	snprintf(devname, sizeof(devname), "%pa.%s", &res.start, np->name); \
+ } while (0)
+ 
+ #define NODE_CHK(clkname, clkitem, regnode, regflag) do { \
+diff --git a/arch/powerpc/platforms/embedded6xx/flipper-pic.c b/arch/powerpc/platforms/embedded6xx/flipper-pic.c
+index 609bda2ad5dd2..4d9200bdba78c 100644
+--- a/arch/powerpc/platforms/embedded6xx/flipper-pic.c
++++ b/arch/powerpc/platforms/embedded6xx/flipper-pic.c
+@@ -145,7 +145,7 @@ static struct irq_domain * __init flipper_pic_init(struct device_node *np)
+ 	}
+ 	io_base = ioremap(res.start, resource_size(&res));
+ 
+-	pr_info("controller at 0x%08x mapped to 0x%p\n", res.start, io_base);
++	pr_info("controller at 0x%pa mapped to 0x%p\n", &res.start, io_base);
+ 
+ 	__flipper_quiesce(io_base);
+ 
+diff --git a/arch/powerpc/platforms/embedded6xx/hlwd-pic.c b/arch/powerpc/platforms/embedded6xx/hlwd-pic.c
+index 380b4285cce47..4d2d92de30afd 100644
+--- a/arch/powerpc/platforms/embedded6xx/hlwd-pic.c
++++ b/arch/powerpc/platforms/embedded6xx/hlwd-pic.c
+@@ -171,7 +171,7 @@ static struct irq_domain *__init hlwd_pic_init(struct device_node *np)
+ 		return NULL;
+ 	}
+ 
+-	pr_info("controller at 0x%08x mapped to 0x%p\n", res.start, io_base);
++	pr_info("controller at 0x%pa mapped to 0x%p\n", &res.start, io_base);
+ 
+ 	__hlwd_quiesce(io_base);
+ 
+diff --git a/arch/powerpc/platforms/embedded6xx/wii.c b/arch/powerpc/platforms/embedded6xx/wii.c
+index f4e654a9d4ff6..219659f2ede06 100644
+--- a/arch/powerpc/platforms/embedded6xx/wii.c
++++ b/arch/powerpc/platforms/embedded6xx/wii.c
+@@ -74,8 +74,8 @@ static void __iomem *__init wii_ioremap_hw_regs(char *name, char *compatible)
+ 
+ 	hw_regs = ioremap(res.start, resource_size(&res));
+ 	if (hw_regs) {
+-		pr_info("%s at 0x%08x mapped to 0x%p\n", name,
+-			res.start, hw_regs);
++		pr_info("%s at 0x%pa mapped to 0x%p\n", name,
++			&res.start, hw_regs);
+ 	}
+ 
+ out_put:
+diff --git a/arch/powerpc/sysdev/tsi108_pci.c b/arch/powerpc/sysdev/tsi108_pci.c
+index 5af4c35ff5842..0e42f7bad7db1 100644
+--- a/arch/powerpc/sysdev/tsi108_pci.c
++++ b/arch/powerpc/sysdev/tsi108_pci.c
+@@ -217,9 +217,8 @@ int __init tsi108_setup_pci(struct device_node *dev, u32 cfg_phys, int primary)
+ 
+ 	(hose)->ops = &tsi108_direct_pci_ops;
+ 
+-	printk(KERN_INFO "Found tsi108 PCI host bridge at 0x%08x. "
+-	       "Firmware bus number: %d->%d\n",
+-	       rsrc.start, hose->first_busno, hose->last_busno);
++	pr_info("Found tsi108 PCI host bridge at 0x%pa. Firmware bus number: %d->%d\n",
++		&rsrc.start, hose->first_busno, hose->last_busno);
+ 
+ 	/* Interpret the "ranges" property */
+ 	/* This also maps the I/O region and sets isa_io/mem_base */
+diff --git a/arch/riscv/include/asm/sbi.h b/arch/riscv/include/asm/sbi.h
+index 945b7be249c10..1193554857038 100644
+--- a/arch/riscv/include/asm/sbi.h
++++ b/arch/riscv/include/asm/sbi.h
+@@ -296,7 +296,7 @@ int sbi_remote_hfence_vvma_asid(const struct cpumask *cpu_mask,
+ 				unsigned long start,
+ 				unsigned long size,
+ 				unsigned long asid);
+-int sbi_probe_extension(int ext);
++long sbi_probe_extension(int ext);
+ 
+ /* Check if current SBI specification version is 0.1 or not */
+ static inline int sbi_spec_is_0_1(void)
+diff --git a/arch/riscv/kernel/cpu_ops.c b/arch/riscv/kernel/cpu_ops.c
+index 8275f237a59df..eb479a88a954e 100644
+--- a/arch/riscv/kernel/cpu_ops.c
++++ b/arch/riscv/kernel/cpu_ops.c
+@@ -27,7 +27,7 @@ const struct cpu_operations cpu_ops_spinwait = {
+ void __init cpu_set_ops(int cpuid)
+ {
+ #if IS_ENABLED(CONFIG_RISCV_SBI)
+-	if (sbi_probe_extension(SBI_EXT_HSM) > 0) {
++	if (sbi_probe_extension(SBI_EXT_HSM)) {
+ 		if (!cpuid)
+ 			pr_info("SBI HSM extension detected\n");
+ 		cpu_ops[cpuid] = &cpu_ops_sbi;
+diff --git a/arch/riscv/kernel/sbi.c b/arch/riscv/kernel/sbi.c
+index 5c87db8fdff2d..015ce8eef2de2 100644
+--- a/arch/riscv/kernel/sbi.c
++++ b/arch/riscv/kernel/sbi.c
+@@ -581,19 +581,18 @@ static void sbi_srst_power_off(void)
+  * sbi_probe_extension() - Check if an SBI extension ID is supported or not.
+  * @extid: The extension ID to be probed.
+  *
+- * Return: Extension specific nonzero value f yes, -ENOTSUPP otherwise.
++ * Return: 1 or an extension specific nonzero value if yes, 0 otherwise.
+  */
+-int sbi_probe_extension(int extid)
++long sbi_probe_extension(int extid)
+ {
+ 	struct sbiret ret;
+ 
+ 	ret = sbi_ecall(SBI_EXT_BASE, SBI_EXT_BASE_PROBE_EXT, extid,
+ 			0, 0, 0, 0, 0);
+ 	if (!ret.error)
+-		if (ret.value)
+-			return ret.value;
++		return ret.value;
+ 
+-	return -ENOTSUPP;
++	return 0;
+ }
+ EXPORT_SYMBOL(sbi_probe_extension);
+ 
+@@ -665,26 +664,26 @@ void __init sbi_init(void)
+ 	if (!sbi_spec_is_0_1()) {
+ 		pr_info("SBI implementation ID=0x%lx Version=0x%lx\n",
+ 			sbi_get_firmware_id(), sbi_get_firmware_version());
+-		if (sbi_probe_extension(SBI_EXT_TIME) > 0) {
++		if (sbi_probe_extension(SBI_EXT_TIME)) {
+ 			__sbi_set_timer = __sbi_set_timer_v02;
+ 			pr_info("SBI TIME extension detected\n");
+ 		} else {
+ 			__sbi_set_timer = __sbi_set_timer_v01;
+ 		}
+-		if (sbi_probe_extension(SBI_EXT_IPI) > 0) {
++		if (sbi_probe_extension(SBI_EXT_IPI)) {
+ 			__sbi_send_ipi	= __sbi_send_ipi_v02;
+ 			pr_info("SBI IPI extension detected\n");
+ 		} else {
+ 			__sbi_send_ipi	= __sbi_send_ipi_v01;
+ 		}
+-		if (sbi_probe_extension(SBI_EXT_RFENCE) > 0) {
++		if (sbi_probe_extension(SBI_EXT_RFENCE)) {
+ 			__sbi_rfence	= __sbi_rfence_v02;
+ 			pr_info("SBI RFENCE extension detected\n");
+ 		} else {
+ 			__sbi_rfence	= __sbi_rfence_v01;
+ 		}
+ 		if ((sbi_spec_version >= sbi_mk_version(0, 3)) &&
+-		    (sbi_probe_extension(SBI_EXT_SRST) > 0)) {
++		    sbi_probe_extension(SBI_EXT_SRST)) {
+ 			pr_info("SBI SRST extension detected\n");
+ 			pm_power_off = sbi_srst_power_off;
+ 			sbi_srst_reboot_nb.notifier_call = sbi_srst_reboot;
+diff --git a/arch/riscv/kvm/main.c b/arch/riscv/kvm/main.c
+index 41ad7639a17bf..c923c113a1293 100644
+--- a/arch/riscv/kvm/main.c
++++ b/arch/riscv/kvm/main.c
+@@ -75,7 +75,7 @@ static int __init riscv_kvm_init(void)
+ 		return -ENODEV;
+ 	}
+ 
+-	if (sbi_probe_extension(SBI_EXT_RFENCE) <= 0) {
++	if (!sbi_probe_extension(SBI_EXT_RFENCE)) {
+ 		kvm_info("require SBI RFENCE extension\n");
+ 		return -ENODEV;
+ 	}
+diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
+index 78211aed36fa6..46d6929958306 100644
+--- a/arch/riscv/kvm/mmu.c
++++ b/arch/riscv/kvm/mmu.c
+@@ -628,6 +628,13 @@ int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
+ 			!(memslot->flags & KVM_MEM_READONLY)) ? true : false;
+ 	unsigned long vma_pagesize, mmu_seq;
+ 
++	/* We need minimum second+third level pages */
++	ret = kvm_mmu_topup_memory_cache(pcache, gstage_pgd_levels);
++	if (ret) {
++		kvm_err("Failed to topup G-stage cache\n");
++		return ret;
++	}
++
+ 	mmap_read_lock(current->mm);
+ 
+ 	vma = vma_lookup(current->mm, hva);
+@@ -648,6 +655,15 @@ int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
+ 	if (vma_pagesize == PMD_SIZE || vma_pagesize == PUD_SIZE)
+ 		gfn = (gpa & huge_page_mask(hstate_vma(vma))) >> PAGE_SHIFT;
+ 
++	/*
++	 * Read mmu_invalidate_seq so that KVM can detect if the results of
++	 * vma_lookup() or gfn_to_pfn_prot() become stale priort to acquiring
++	 * kvm->mmu_lock.
++	 *
++	 * Rely on mmap_read_unlock() for an implicit smp_rmb(), which pairs
++	 * with the smp_wmb() in kvm_mmu_invalidate_end().
++	 */
++	mmu_seq = kvm->mmu_invalidate_seq;
+ 	mmap_read_unlock(current->mm);
+ 
+ 	if (vma_pagesize != PUD_SIZE &&
+@@ -657,15 +673,6 @@ int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
+ 		return -EFAULT;
+ 	}
+ 
+-	/* We need minimum second+third level pages */
+-	ret = kvm_mmu_topup_memory_cache(pcache, gstage_pgd_levels);
+-	if (ret) {
+-		kvm_err("Failed to topup G-stage cache\n");
+-		return ret;
+-	}
+-
+-	mmu_seq = kvm->mmu_invalidate_seq;
+-
+ 	hfn = gfn_to_pfn_prot(kvm, gfn, is_write, &writable);
+ 	if (hfn == KVM_PFN_ERR_HWPOISON) {
+ 		send_sig_mceerr(BUS_MCEERR_AR, (void __user *)hva,
+diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
+index 0f14f4a8d179a..6ebb75a9a6b9f 100644
+--- a/arch/riscv/mm/init.c
++++ b/arch/riscv/mm/init.c
+@@ -843,8 +843,7 @@ static void __init create_kernel_page_table(pgd_t *pgdir, bool early)
+  * this means 2 PMD entries whereas for 32-bit kernel, this is only 1 PGDIR
+  * entry.
+  */
+-static void __init create_fdt_early_page_table(pgd_t *pgdir,
+-					       uintptr_t fix_fdt_va,
++static void __init create_fdt_early_page_table(uintptr_t fix_fdt_va,
+ 					       uintptr_t dtb_pa)
+ {
+ 	uintptr_t pa = dtb_pa & ~(PMD_SIZE - 1);
+@@ -1034,8 +1033,7 @@ asmlinkage void __init setup_vm(uintptr_t dtb_pa)
+ 	create_kernel_page_table(early_pg_dir, true);
+ 
+ 	/* Setup early mapping for FDT early scan */
+-	create_fdt_early_page_table(early_pg_dir,
+-				    __fix_to_virt(FIX_FDT), dtb_pa);
++	create_fdt_early_page_table(__fix_to_virt(FIX_FDT), dtb_pa);
+ 
+ 	/*
+ 	 * Bootime fixmap only can handle PMD_SIZE mapping. Thus, boot-ioremap
+diff --git a/arch/riscv/mm/ptdump.c b/arch/riscv/mm/ptdump.c
+index 830e7de65e3a3..20a9f991a6d74 100644
+--- a/arch/riscv/mm/ptdump.c
++++ b/arch/riscv/mm/ptdump.c
+@@ -59,10 +59,6 @@ struct ptd_mm_info {
+ };
+ 
+ enum address_markers_idx {
+-#ifdef CONFIG_KASAN
+-	KASAN_SHADOW_START_NR,
+-	KASAN_SHADOW_END_NR,
+-#endif
+ 	FIXMAP_START_NR,
+ 	FIXMAP_END_NR,
+ 	PCI_IO_START_NR,
+@@ -74,6 +70,10 @@ enum address_markers_idx {
+ 	VMALLOC_START_NR,
+ 	VMALLOC_END_NR,
+ 	PAGE_OFFSET_NR,
++#ifdef CONFIG_KASAN
++	KASAN_SHADOW_START_NR,
++	KASAN_SHADOW_END_NR,
++#endif
+ #ifdef CONFIG_64BIT
+ 	MODULES_MAPPING_NR,
+ 	KERNEL_MAPPING_NR,
+@@ -82,10 +82,6 @@ enum address_markers_idx {
+ };
+ 
+ static struct addr_marker address_markers[] = {
+-#ifdef CONFIG_KASAN
+-	{0, "Kasan shadow start"},
+-	{0, "Kasan shadow end"},
+-#endif
+ 	{0, "Fixmap start"},
+ 	{0, "Fixmap end"},
+ 	{0, "PCI I/O start"},
+@@ -97,6 +93,10 @@ static struct addr_marker address_markers[] = {
+ 	{0, "vmalloc() area"},
+ 	{0, "vmalloc() end"},
+ 	{0, "Linear mapping"},
++#ifdef CONFIG_KASAN
++	{0, "Kasan shadow start"},
++	{0, "Kasan shadow end"},
++#endif
+ #ifdef CONFIG_64BIT
+ 	{0, "Modules/BPF mapping"},
+ 	{0, "Kernel mapping"},
+@@ -362,10 +362,6 @@ static int __init ptdump_init(void)
+ {
+ 	unsigned int i, j;
+ 
+-#ifdef CONFIG_KASAN
+-	address_markers[KASAN_SHADOW_START_NR].start_address = KASAN_SHADOW_START;
+-	address_markers[KASAN_SHADOW_END_NR].start_address = KASAN_SHADOW_END;
+-#endif
+ 	address_markers[FIXMAP_START_NR].start_address = FIXADDR_START;
+ 	address_markers[FIXMAP_END_NR].start_address = FIXADDR_TOP;
+ 	address_markers[PCI_IO_START_NR].start_address = PCI_IO_START;
+@@ -377,6 +373,10 @@ static int __init ptdump_init(void)
+ 	address_markers[VMALLOC_START_NR].start_address = VMALLOC_START;
+ 	address_markers[VMALLOC_END_NR].start_address = VMALLOC_END;
+ 	address_markers[PAGE_OFFSET_NR].start_address = PAGE_OFFSET;
++#ifdef CONFIG_KASAN
++	address_markers[KASAN_SHADOW_START_NR].start_address = KASAN_SHADOW_START;
++	address_markers[KASAN_SHADOW_END_NR].start_address = KASAN_SHADOW_END;
++#endif
+ #ifdef CONFIG_64BIT
+ 	address_markers[MODULES_MAPPING_NR].start_address = MODULES_VADDR;
+ 	address_markers[KERNEL_MAPPING_NR].start_address = kernel_map.virt_addr;
+diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
+index 9809c74e12406..35f15c23c4913 100644
+--- a/arch/s390/Kconfig
++++ b/arch/s390/Kconfig
+@@ -26,10 +26,6 @@ config GENERIC_BUG
+ config GENERIC_BUG_RELATIVE_POINTERS
+ 	def_bool y
+ 
+-config GENERIC_CSUM
+-	bool
+-	default y if KASAN
+-
+ config GENERIC_LOCKBREAK
+ 	def_bool y if PREEMPTION
+ 
+diff --git a/arch/s390/include/asm/checksum.h b/arch/s390/include/asm/checksum.h
+index d977a3a2f6190..1b6b992cf18ed 100644
+--- a/arch/s390/include/asm/checksum.h
++++ b/arch/s390/include/asm/checksum.h
+@@ -12,12 +12,7 @@
+ #ifndef _S390_CHECKSUM_H
+ #define _S390_CHECKSUM_H
+ 
+-#ifdef CONFIG_GENERIC_CSUM
+-
+-#include <asm-generic/checksum.h>
+-
+-#else /* CONFIG_GENERIC_CSUM */
+-
++#include <linux/kasan-checks.h>
+ #include <linux/uaccess.h>
+ #include <linux/in6.h>
+ 
+@@ -40,6 +35,7 @@ static inline __wsum csum_partial(const void *buff, int len, __wsum sum)
+ 		.odd = (unsigned long) len,
+ 	};
+ 
++	kasan_check_read(buff, len);
+ 	asm volatile(
+ 		"0:	cksm	%[sum],%[rp]\n"
+ 		"	jo	0b\n"
+@@ -135,5 +131,4 @@ static inline __sum16 csum_ipv6_magic(const struct in6_addr *saddr,
+ 	return csum_fold((__force __wsum)(sum >> 32));
+ }
+ 
+-#endif /* CONFIG_GENERIC_CSUM */
+ #endif /* _S390_CHECKSUM_H */
+diff --git a/arch/sh/kernel/cpu/sh4/sq.c b/arch/sh/kernel/cpu/sh4/sq.c
+index 27f2e3da5aa22..6e0bb3f47fa5d 100644
+--- a/arch/sh/kernel/cpu/sh4/sq.c
++++ b/arch/sh/kernel/cpu/sh4/sq.c
+@@ -382,7 +382,7 @@ static int __init sq_api_init(void)
+ 	if (unlikely(!sq_cache))
+ 		return ret;
+ 
+-	sq_bitmap = kzalloc(size, GFP_KERNEL);
++	sq_bitmap = kcalloc(size, sizeof(long), GFP_KERNEL);
+ 	if (unlikely(!sq_bitmap))
+ 		goto out;
+ 
+diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c
+index 20d9a604da7c4..7705571100518 100644
+--- a/arch/x86/kernel/apic/apic.c
++++ b/arch/x86/kernel/apic/apic.c
+@@ -422,10 +422,9 @@ static unsigned int reserve_eilvt_offset(int offset, unsigned int new)
+ 		if (vector && !eilvt_entry_is_changeable(vector, new))
+ 			/* may not change if vectors are different */
+ 			return rsvd;
+-		rsvd = atomic_cmpxchg(&eilvt_offsets[offset], rsvd, new);
+-	} while (rsvd != new);
++	} while (!atomic_try_cmpxchg(&eilvt_offsets[offset], &rsvd, new));
+ 
+-	rsvd &= ~APIC_EILVT_MASKED;
++	rsvd = new & ~APIC_EILVT_MASKED;
+ 	if (rsvd && rsvd != vector)
+ 		pr_info("LVT offset %d assigned for vector 0x%02x\n",
+ 			offset, rsvd);
+diff --git a/arch/x86/kernel/apic/io_apic.c b/arch/x86/kernel/apic/io_apic.c
+index 1f83b052bb74e..f980b38b0227e 100644
+--- a/arch/x86/kernel/apic/io_apic.c
++++ b/arch/x86/kernel/apic/io_apic.c
+@@ -2477,17 +2477,21 @@ static int io_apic_get_redir_entries(int ioapic)
+ 
+ unsigned int arch_dynirq_lower_bound(unsigned int from)
+ {
++	unsigned int ret;
++
+ 	/*
+ 	 * dmar_alloc_hwirq() may be called before setup_IO_APIC(), so use
+ 	 * gsi_top if ioapic_dynirq_base hasn't been initialized yet.
+ 	 */
+-	if (!ioapic_initialized)
+-		return gsi_top;
++	ret = ioapic_dynirq_base ? : gsi_top;
++
+ 	/*
+-	 * For DT enabled machines ioapic_dynirq_base is irrelevant and not
+-	 * updated. So simply return @from if ioapic_dynirq_base == 0.
++	 * For DT enabled machines ioapic_dynirq_base is irrelevant and
++	 * always 0. gsi_top can be 0 if there is no IO/APIC registered.
++	 * 0 is an invalid interrupt number for dynamic allocations. Return
++	 * @from instead.
+ 	 */
+-	return ioapic_dynirq_base ? : from;
++	return ret ? : from;
+ }
+ 
+ #ifdef CONFIG_X86_32
+diff --git a/arch/x86/kernel/cpu/mce/amd.c b/arch/x86/kernel/cpu/mce/amd.c
+index 23c5072fbbb76..06cd3bef62e90 100644
+--- a/arch/x86/kernel/cpu/mce/amd.c
++++ b/arch/x86/kernel/cpu/mce/amd.c
+@@ -235,10 +235,10 @@ static DEFINE_PER_CPU(struct threshold_bank **, threshold_banks);
+  * A list of the banks enabled on each logical CPU. Controls which respective
+  * descriptors to initialize later in mce_threshold_create_device().
+  */
+-static DEFINE_PER_CPU(unsigned int, bank_map);
++static DEFINE_PER_CPU(u64, bank_map);
+ 
+ /* Map of banks that have more than MCA_MISC0 available. */
+-static DEFINE_PER_CPU(u32, smca_misc_banks_map);
++static DEFINE_PER_CPU(u64, smca_misc_banks_map);
+ 
+ static void amd_threshold_interrupt(void);
+ static void amd_deferred_error_interrupt(void);
+@@ -267,7 +267,7 @@ static void smca_set_misc_banks_map(unsigned int bank, unsigned int cpu)
+ 		return;
+ 
+ 	if (low & MASK_BLKPTR_LO)
+-		per_cpu(smca_misc_banks_map, cpu) |= BIT(bank);
++		per_cpu(smca_misc_banks_map, cpu) |= BIT_ULL(bank);
+ 
+ }
+ 
+@@ -530,7 +530,7 @@ static u32 smca_get_block_address(unsigned int bank, unsigned int block,
+ 	if (!block)
+ 		return MSR_AMD64_SMCA_MCx_MISC(bank);
+ 
+-	if (!(per_cpu(smca_misc_banks_map, cpu) & BIT(bank)))
++	if (!(per_cpu(smca_misc_banks_map, cpu) & BIT_ULL(bank)))
+ 		return 0;
+ 
+ 	return MSR_AMD64_SMCA_MCx_MISCy(bank, block - 1);
+@@ -574,7 +574,7 @@ prepare_threshold_block(unsigned int bank, unsigned int block, u32 addr,
+ 	int new;
+ 
+ 	if (!block)
+-		per_cpu(bank_map, cpu) |= (1 << bank);
++		per_cpu(bank_map, cpu) |= BIT_ULL(bank);
+ 
+ 	memset(&b, 0, sizeof(b));
+ 	b.cpu			= cpu;
+@@ -878,7 +878,7 @@ static void amd_threshold_interrupt(void)
+ 		return;
+ 
+ 	for (bank = 0; bank < this_cpu_read(mce_num_banks); ++bank) {
+-		if (!(per_cpu(bank_map, cpu) & (1 << bank)))
++		if (!(per_cpu(bank_map, cpu) & BIT_ULL(bank)))
+ 			continue;
+ 
+ 		first_block = bp[bank]->blocks;
+@@ -1356,7 +1356,7 @@ int mce_threshold_create_device(unsigned int cpu)
+ 		return -ENOMEM;
+ 
+ 	for (bank = 0; bank < numbanks; ++bank) {
+-		if (!(this_cpu_read(bank_map) & (1 << bank)))
++		if (!(this_cpu_read(bank_map) & BIT_ULL(bank)))
+ 			continue;
+ 		err = threshold_create_bank(bp, cpu, bank);
+ 		if (err) {
+diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
+index 7c25dbf32eccb..a8f10825e2889 100644
+--- a/arch/x86/kvm/mmu/tdp_mmu.c
++++ b/arch/x86/kvm/mmu/tdp_mmu.c
+@@ -40,7 +40,17 @@ static __always_inline bool kvm_lockdep_assert_mmu_lock_held(struct kvm *kvm,
+ 
+ void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm)
+ {
+-	/* Also waits for any queued work items.  */
++	/*
++	 * Invalidate all roots, which besides the obvious, schedules all roots
++	 * for zapping and thus puts the TDP MMU's reference to each root, i.e.
++	 * ultimately frees all roots.
++	 */
++	kvm_tdp_mmu_invalidate_all_roots(kvm);
++
++	/*
++	 * Destroying a workqueue also first flushes the workqueue, i.e. no
++	 * need to invoke kvm_tdp_mmu_zap_invalidated_roots().
++	 */
+ 	destroy_workqueue(kvm->arch.tdp_mmu_zap_wq);
+ 
+ 	WARN_ON(atomic64_read(&kvm->arch.tdp_mmu_pages));
+@@ -116,16 +126,6 @@ static void tdp_mmu_schedule_zap_root(struct kvm *kvm, struct kvm_mmu_page *root
+ 	queue_work(kvm->arch.tdp_mmu_zap_wq, &root->tdp_mmu_async_work);
+ }
+ 
+-static inline bool kvm_tdp_root_mark_invalid(struct kvm_mmu_page *page)
+-{
+-	union kvm_mmu_page_role role = page->role;
+-	role.invalid = true;
+-
+-	/* No need to use cmpxchg, only the invalid bit can change.  */
+-	role.word = xchg(&page->role.word, role.word);
+-	return role.invalid;
+-}
+-
+ void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root,
+ 			  bool shared)
+ {
+@@ -134,45 +134,12 @@ void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root,
+ 	if (!refcount_dec_and_test(&root->tdp_mmu_root_count))
+ 		return;
+ 
+-	WARN_ON(!is_tdp_mmu_page(root));
+-
+ 	/*
+-	 * The root now has refcount=0.  It is valid, but readers already
+-	 * cannot acquire a reference to it because kvm_tdp_mmu_get_root()
+-	 * rejects it.  This remains true for the rest of the execution
+-	 * of this function, because readers visit valid roots only
+-	 * (except for tdp_mmu_zap_root_work(), which however
+-	 * does not acquire any reference itself).
+-	 *
+-	 * Even though there are flows that need to visit all roots for
+-	 * correctness, they all take mmu_lock for write, so they cannot yet
+-	 * run concurrently. The same is true after kvm_tdp_root_mark_invalid,
+-	 * since the root still has refcount=0.
+-	 *
+-	 * However, tdp_mmu_zap_root can yield, and writers do not expect to
+-	 * see refcount=0 (see for example kvm_tdp_mmu_invalidate_all_roots()).
+-	 * So the root temporarily gets an extra reference, going to refcount=1
+-	 * while staying invalid.  Readers still cannot acquire any reference;
+-	 * but writers are now allowed to run if tdp_mmu_zap_root yields and
+-	 * they might take an extra reference if they themselves yield.
+-	 * Therefore, when the reference is given back by the worker,
+-	 * there is no guarantee that the refcount is still 1.  If not, whoever
+-	 * puts the last reference will free the page, but they will not have to
+-	 * zap the root because a root cannot go from invalid to valid.
++	 * The TDP MMU itself holds a reference to each root until the root is
++	 * explicitly invalidated, i.e. the final reference should be never be
++	 * put for a valid root.
+ 	 */
+-	if (!kvm_tdp_root_mark_invalid(root)) {
+-		refcount_set(&root->tdp_mmu_root_count, 1);
+-
+-		/*
+-		 * Zapping the root in a worker is not just "nice to have";
+-		 * it is required because kvm_tdp_mmu_invalidate_all_roots()
+-		 * skips already-invalid roots.  If kvm_tdp_mmu_put_root() did
+-		 * not add the root to the workqueue, kvm_tdp_mmu_zap_all_fast()
+-		 * might return with some roots not zapped yet.
+-		 */
+-		tdp_mmu_schedule_zap_root(kvm, root);
+-		return;
+-	}
++	KVM_BUG_ON(!is_tdp_mmu_page(root) || !root->role.invalid, kvm);
+ 
+ 	spin_lock(&kvm->arch.tdp_mmu_pages_lock);
+ 	list_del_rcu(&root->link);
+@@ -320,7 +287,14 @@ hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu)
+ 	root = tdp_mmu_alloc_sp(vcpu);
+ 	tdp_mmu_init_sp(root, NULL, 0, role);
+ 
+-	refcount_set(&root->tdp_mmu_root_count, 1);
++	/*
++	 * TDP MMU roots are kept until they are explicitly invalidated, either
++	 * by a memslot update or by the destruction of the VM.  Initialize the
++	 * refcount to two; one reference for the vCPU, and one reference for
++	 * the TDP MMU itself, which is held until the root is invalidated and
++	 * is ultimately put by tdp_mmu_zap_root_work().
++	 */
++	refcount_set(&root->tdp_mmu_root_count, 2);
+ 
+ 	spin_lock(&kvm->arch.tdp_mmu_pages_lock);
+ 	list_add_rcu(&root->link, &kvm->arch.tdp_mmu_roots);
+@@ -1022,32 +996,49 @@ void kvm_tdp_mmu_zap_invalidated_roots(struct kvm *kvm)
+ /*
+  * Mark each TDP MMU root as invalid to prevent vCPUs from reusing a root that
+  * is about to be zapped, e.g. in response to a memslots update.  The actual
+- * zapping is performed asynchronously, so a reference is taken on all roots.
+- * Using a separate workqueue makes it easy to ensure that the destruction is
+- * performed before the "fast zap" completes, without keeping a separate list
+- * of invalidated roots; the list is effectively the list of work items in
+- * the workqueue.
+- *
+- * Get a reference even if the root is already invalid, the asynchronous worker
+- * assumes it was gifted a reference to the root it processes.  Because mmu_lock
+- * is held for write, it should be impossible to observe a root with zero refcount,
+- * i.e. the list of roots cannot be stale.
++ * zapping is performed asynchronously.  Using a separate workqueue makes it
++ * easy to ensure that the destruction is performed before the "fast zap"
++ * completes, without keeping a separate list of invalidated roots; the list is
++ * effectively the list of work items in the workqueue.
+  *
+- * This has essentially the same effect for the TDP MMU
+- * as updating mmu_valid_gen does for the shadow MMU.
++ * Note, the asynchronous worker is gifted the TDP MMU's reference.
++ * See kvm_tdp_mmu_get_vcpu_root_hpa().
+  */
+ void kvm_tdp_mmu_invalidate_all_roots(struct kvm *kvm)
+ {
+ 	struct kvm_mmu_page *root;
+ 
+-	lockdep_assert_held_write(&kvm->mmu_lock);
+-	list_for_each_entry(root, &kvm->arch.tdp_mmu_roots, link) {
+-		if (!root->role.invalid &&
+-		    !WARN_ON_ONCE(!kvm_tdp_mmu_get_root(root))) {
++	/*
++	 * mmu_lock must be held for write to ensure that a root doesn't become
++	 * invalid while there are active readers (invalidating a root while
++	 * there are active readers may or may not be problematic in practice,
++	 * but it's uncharted territory and not supported).
++	 *
++	 * Waive the assertion if there are no users of @kvm, i.e. the VM is
++	 * being destroyed after all references have been put, or if no vCPUs
++	 * have been created (which means there are no roots), i.e. the VM is
++	 * being destroyed in an error path of KVM_CREATE_VM.
++	 */
++	if (IS_ENABLED(CONFIG_PROVE_LOCKING) &&
++	    refcount_read(&kvm->users_count) && kvm->created_vcpus)
++		lockdep_assert_held_write(&kvm->mmu_lock);
++
++	/*
++	 * As above, mmu_lock isn't held when destroying the VM!  There can't
++	 * be other references to @kvm, i.e. nothing else can invalidate roots
++	 * or be consuming roots, but walking the list of roots does need to be
++	 * guarded against roots being deleted by the asynchronous zap worker.
++	 */
++	rcu_read_lock();
++
++	list_for_each_entry_rcu(root, &kvm->arch.tdp_mmu_roots, link) {
++		if (!root->role.invalid) {
+ 			root->role.invalid = true;
+ 			tdp_mmu_schedule_zap_root(kvm, root);
+ 		}
+ 	}
++
++	rcu_read_unlock();
+ }
+ 
+ /*
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index d2d6e1b6c7882..dd92361f41b3f 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -7776,9 +7776,11 @@ static u64 vmx_get_perf_capabilities(void)
+ 	if (boot_cpu_has(X86_FEATURE_PDCM))
+ 		rdmsrl(MSR_IA32_PERF_CAPABILITIES, host_perf_cap);
+ 
+-	x86_perf_get_lbr(&lbr);
+-	if (lbr.nr)
+-		perf_cap |= host_perf_cap & PMU_CAP_LBR_FMT;
++	if (!cpu_feature_enabled(X86_FEATURE_ARCH_LBR)) {
++		x86_perf_get_lbr(&lbr);
++		if (lbr.nr)
++			perf_cap |= host_perf_cap & PMU_CAP_LBR_FMT;
++	}
+ 
+ 	if (vmx_pebs_supported()) {
+ 		perf_cap |= host_perf_cap & PERF_CAP_PEBS_MASK;
+@@ -7918,6 +7920,21 @@ static int vmx_check_intercept(struct kvm_vcpu *vcpu,
+ 		/* FIXME: produce nested vmexit and return X86EMUL_INTERCEPTED.  */
+ 		break;
+ 
++	case x86_intercept_pause:
++		/*
++		 * PAUSE is a single-byte NOP with a REPE prefix, i.e. collides
++		 * with vanilla NOPs in the emulator.  Apply the interception
++		 * check only to actual PAUSE instructions.  Don't check
++		 * PAUSE-loop-exiting, software can't expect a given PAUSE to
++		 * exit, i.e. KVM is within its rights to allow L2 to execute
++		 * the PAUSE.
++		 */
++		if ((info->rep_prefix != REPE_PREFIX) ||
++		    !nested_cpu_has2(vmcs12, CPU_BASED_PAUSE_EXITING))
++			return X86EMUL_CONTINUE;
++
++		break;
++
+ 	/* TODO: check more intercepts... */
+ 	default:
+ 		break;
+diff --git a/block/blk-crypto-internal.h b/block/blk-crypto-internal.h
+index a8cdaf26851e1..4f1de2495f0c3 100644
+--- a/block/blk-crypto-internal.h
++++ b/block/blk-crypto-internal.h
+@@ -65,6 +65,11 @@ static inline bool blk_crypto_rq_is_encrypted(struct request *rq)
+ 	return rq->crypt_ctx;
+ }
+ 
++static inline bool blk_crypto_rq_has_keyslot(struct request *rq)
++{
++	return rq->crypt_keyslot;
++}
++
+ blk_status_t blk_crypto_get_keyslot(struct blk_crypto_profile *profile,
+ 				    const struct blk_crypto_key *key,
+ 				    struct blk_crypto_keyslot **slot_ptr);
+@@ -119,6 +124,11 @@ static inline bool blk_crypto_rq_is_encrypted(struct request *rq)
+ 	return false;
+ }
+ 
++static inline bool blk_crypto_rq_has_keyslot(struct request *rq)
++{
++	return false;
++}
++
+ #endif /* CONFIG_BLK_INLINE_ENCRYPTION */
+ 
+ void __bio_crypt_advance(struct bio *bio, unsigned int bytes);
+@@ -153,14 +163,21 @@ static inline bool blk_crypto_bio_prep(struct bio **bio_ptr)
+ 	return true;
+ }
+ 
+-blk_status_t __blk_crypto_init_request(struct request *rq);
+-static inline blk_status_t blk_crypto_init_request(struct request *rq)
++blk_status_t __blk_crypto_rq_get_keyslot(struct request *rq);
++static inline blk_status_t blk_crypto_rq_get_keyslot(struct request *rq)
+ {
+ 	if (blk_crypto_rq_is_encrypted(rq))
+-		return __blk_crypto_init_request(rq);
++		return __blk_crypto_rq_get_keyslot(rq);
+ 	return BLK_STS_OK;
+ }
+ 
++void __blk_crypto_rq_put_keyslot(struct request *rq);
++static inline void blk_crypto_rq_put_keyslot(struct request *rq)
++{
++	if (blk_crypto_rq_has_keyslot(rq))
++		__blk_crypto_rq_put_keyslot(rq);
++}
++
+ void __blk_crypto_free_request(struct request *rq);
+ static inline void blk_crypto_free_request(struct request *rq)
+ {
+@@ -199,7 +216,7 @@ static inline blk_status_t blk_crypto_insert_cloned_request(struct request *rq)
+ {
+ 
+ 	if (blk_crypto_rq_is_encrypted(rq))
+-		return blk_crypto_init_request(rq);
++		return blk_crypto_rq_get_keyslot(rq);
+ 	return BLK_STS_OK;
+ }
+ 
+diff --git a/block/blk-crypto-profile.c b/block/blk-crypto-profile.c
+index 0307fb0d95d34..3290c03c9918d 100644
+--- a/block/blk-crypto-profile.c
++++ b/block/blk-crypto-profile.c
+@@ -354,28 +354,16 @@ bool __blk_crypto_cfg_supported(struct blk_crypto_profile *profile,
+ 	return true;
+ }
+ 
+-/**
+- * __blk_crypto_evict_key() - Evict a key from a device.
+- * @profile: the crypto profile of the device
+- * @key: the key to evict.  It must not still be used in any I/O.
+- *
+- * If the device has keyslots, this finds the keyslot (if any) that contains the
+- * specified key and calls the driver's keyslot_evict function to evict it.
+- *
+- * Otherwise, this just calls the driver's keyslot_evict function if it is
+- * implemented, passing just the key (without any particular keyslot).  This
+- * allows layered devices to evict the key from their underlying devices.
+- *
+- * Context: Process context. Takes and releases profile->lock.
+- * Return: 0 on success or if there's no keyslot with the specified key, -EBUSY
+- *	   if the keyslot is still in use, or another -errno value on other
+- *	   error.
++/*
++ * This is an internal function that evicts a key from an inline encryption
++ * device that can be either a real device or the blk-crypto-fallback "device".
++ * It is used only by blk_crypto_evict_key(); see that function for details.
+  */
+ int __blk_crypto_evict_key(struct blk_crypto_profile *profile,
+ 			   const struct blk_crypto_key *key)
+ {
+ 	struct blk_crypto_keyslot *slot;
+-	int err = 0;
++	int err;
+ 
+ 	if (profile->num_slots == 0) {
+ 		if (profile->ll_ops.keyslot_evict) {
+@@ -389,22 +377,30 @@ int __blk_crypto_evict_key(struct blk_crypto_profile *profile,
+ 
+ 	blk_crypto_hw_enter(profile);
+ 	slot = blk_crypto_find_keyslot(profile, key);
+-	if (!slot)
+-		goto out_unlock;
++	if (!slot) {
++		/*
++		 * Not an error, since a key not in use by I/O is not guaranteed
++		 * to be in a keyslot.  There can be more keys than keyslots.
++		 */
++		err = 0;
++		goto out;
++	}
+ 
+ 	if (WARN_ON_ONCE(atomic_read(&slot->slot_refs) != 0)) {
++		/* BUG: key is still in use by I/O */
+ 		err = -EBUSY;
+-		goto out_unlock;
++		goto out_remove;
+ 	}
+ 	err = profile->ll_ops.keyslot_evict(profile, key,
+ 					    blk_crypto_keyslot_index(slot));
+-	if (err)
+-		goto out_unlock;
+-
++out_remove:
++	/*
++	 * Callers free the key even on error, so unlink the key from the hash
++	 * table and clear slot->key even on error.
++	 */
+ 	hlist_del(&slot->hash_node);
+ 	slot->key = NULL;
+-	err = 0;
+-out_unlock:
++out:
+ 	blk_crypto_hw_exit(profile);
+ 	return err;
+ }
+diff --git a/block/blk-crypto.c b/block/blk-crypto.c
+index 45378586151f7..4d760b092deb9 100644
+--- a/block/blk-crypto.c
++++ b/block/blk-crypto.c
+@@ -13,6 +13,7 @@
+ #include <linux/blkdev.h>
+ #include <linux/blk-crypto-profile.h>
+ #include <linux/module.h>
++#include <linux/ratelimit.h>
+ #include <linux/slab.h>
+ 
+ #include "blk-crypto-internal.h"
+@@ -224,27 +225,27 @@ static bool bio_crypt_check_alignment(struct bio *bio)
+ 	return true;
+ }
+ 
+-blk_status_t __blk_crypto_init_request(struct request *rq)
++blk_status_t __blk_crypto_rq_get_keyslot(struct request *rq)
+ {
+ 	return blk_crypto_get_keyslot(rq->q->crypto_profile,
+ 				      rq->crypt_ctx->bc_key,
+ 				      &rq->crypt_keyslot);
+ }
+ 
+-/**
+- * __blk_crypto_free_request - Uninitialize the crypto fields of a request.
+- *
+- * @rq: The request whose crypto fields to uninitialize.
+- *
+- * Completely uninitializes the crypto fields of a request. If a keyslot has
+- * been programmed into some inline encryption hardware, that keyslot is
+- * released. The rq->crypt_ctx is also freed.
+- */
+-void __blk_crypto_free_request(struct request *rq)
++void __blk_crypto_rq_put_keyslot(struct request *rq)
+ {
+ 	blk_crypto_put_keyslot(rq->crypt_keyslot);
++	rq->crypt_keyslot = NULL;
++}
++
++void __blk_crypto_free_request(struct request *rq)
++{
++	/* The keyslot, if one was needed, should have been released earlier. */
++	if (WARN_ON_ONCE(rq->crypt_keyslot))
++		__blk_crypto_rq_put_keyslot(rq);
++
+ 	mempool_free(rq->crypt_ctx, bio_crypt_ctx_pool);
+-	blk_crypto_rq_set_defaults(rq);
++	rq->crypt_ctx = NULL;
+ }
+ 
+ /**
+@@ -399,30 +400,39 @@ int blk_crypto_start_using_key(struct block_device *bdev,
+ }
+ 
+ /**
+- * blk_crypto_evict_key() - Evict a key from any inline encryption hardware
+- *			    it may have been programmed into
+- * @bdev: The block_device who's associated inline encryption hardware this key
+- *     might have been programmed into
+- * @key: The key to evict
++ * blk_crypto_evict_key() - Evict a blk_crypto_key from a block_device
++ * @bdev: a block_device on which I/O using the key may have been done
++ * @key: the key to evict
++ *
++ * For a given block_device, this function removes the given blk_crypto_key from
++ * the keyslot management structures and evicts it from any underlying hardware
++ * keyslot(s) or blk-crypto-fallback keyslot it may have been programmed into.
+  *
+- * Upper layers (filesystems) must call this function to ensure that a key is
+- * evicted from any hardware that it might have been programmed into.  The key
+- * must not be in use by any in-flight IO when this function is called.
++ * Upper layers must call this before freeing the blk_crypto_key.  It must be
++ * called for every block_device the key may have been used on.  The key must no
++ * longer be in use by any I/O when this function is called.
+  *
+- * Return: 0 on success or if the key wasn't in any keyslot; -errno on error.
++ * Context: May sleep.
+  */
+-int blk_crypto_evict_key(struct block_device *bdev,
+-			 const struct blk_crypto_key *key)
++void blk_crypto_evict_key(struct block_device *bdev,
++			  const struct blk_crypto_key *key)
+ {
+ 	struct request_queue *q = bdev_get_queue(bdev);
++	int err;
+ 
+ 	if (blk_crypto_config_supported_natively(bdev, &key->crypto_cfg))
+-		return __blk_crypto_evict_key(q->crypto_profile, key);
+-
++		err = __blk_crypto_evict_key(q->crypto_profile, key);
++	else
++		err = blk_crypto_fallback_evict_key(key);
+ 	/*
+-	 * If the block_device didn't support the key, then blk-crypto-fallback
+-	 * may have been used, so try to evict the key from blk-crypto-fallback.
++	 * An error can only occur here if the key failed to be evicted from a
++	 * keyslot (due to a hardware or driver issue) or is allegedly still in
++	 * use by I/O (due to a kernel bug).  Even in these cases, the key is
++	 * still unlinked from the keyslot management structures, and the caller
++	 * is allowed and expected to free it right away.  There's nothing
++	 * callers can do to handle errors, so just log them and return void.
+ 	 */
+-	return blk_crypto_fallback_evict_key(key);
++	if (err)
++		pr_warn_ratelimited("%pg: error %d evicting key\n", bdev, err);
+ }
+ EXPORT_SYMBOL_GPL(blk_crypto_evict_key);
+diff --git a/block/blk-merge.c b/block/blk-merge.c
+index 6460abdb24267..65e75efa9bd36 100644
+--- a/block/blk-merge.c
++++ b/block/blk-merge.c
+@@ -867,6 +867,8 @@ static struct request *attempt_merge(struct request_queue *q,
+ 	if (!blk_discard_mergable(req))
+ 		elv_merge_requests(q, req, next);
+ 
++	blk_crypto_rq_put_keyslot(next);
++
+ 	/*
+ 	 * 'next' is going away, so update stats accordingly
+ 	 */
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 2831f78f86a03..ae08c4936743d 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -840,6 +840,12 @@ static void blk_complete_request(struct request *req)
+ 		req->q->integrity.profile->complete_fn(req, total_bytes);
+ #endif
+ 
++	/*
++	 * Upper layers may call blk_crypto_evict_key() anytime after the last
++	 * bio_endio().  Therefore, the keyslot must be released before that.
++	 */
++	blk_crypto_rq_put_keyslot(req);
++
+ 	blk_account_io_completion(req, total_bytes);
+ 
+ 	do {
+@@ -905,6 +911,13 @@ bool blk_update_request(struct request *req, blk_status_t error,
+ 		req->q->integrity.profile->complete_fn(req, nr_bytes);
+ #endif
+ 
++	/*
++	 * Upper layers may call blk_crypto_evict_key() anytime after the last
++	 * bio_endio().  Therefore, the keyslot must be released before that.
++	 */
++	if (blk_crypto_rq_has_keyslot(req) && nr_bytes >= blk_rq_bytes(req))
++		__blk_crypto_rq_put_keyslot(req);
++
+ 	if (unlikely(error && !blk_rq_is_passthrough(req) &&
+ 		     !(req->rq_flags & RQF_QUIET)) &&
+ 		     !test_bit(GD_DEAD, &req->q->disk->state)) {
+@@ -1332,7 +1345,7 @@ void blk_execute_rq_nowait(struct request *rq, bool at_head)
+ 	 * device, directly accessing the plug instead of using blk_mq_plug()
+ 	 * should not have any consequences.
+ 	 */
+-	if (current->plug)
++	if (current->plug && !at_head)
+ 		blk_add_rq_to_plug(current->plug, rq);
+ 	else
+ 		blk_mq_sched_insert_request(rq, at_head, true, false);
+@@ -2965,7 +2978,7 @@ void blk_mq_submit_bio(struct bio *bio)
+ 
+ 	blk_mq_bio_to_request(rq, bio, nr_segs);
+ 
+-	ret = blk_crypto_init_request(rq);
++	ret = blk_crypto_rq_get_keyslot(rq);
+ 	if (ret != BLK_STS_OK) {
+ 		bio->bi_status = ret;
+ 		bio_endio(bio);
+diff --git a/block/blk-stat.c b/block/blk-stat.c
+index c6ca16abf911e..5621ecf0cfb93 100644
+--- a/block/blk-stat.c
++++ b/block/blk-stat.c
+@@ -190,7 +190,7 @@ void blk_stat_disable_accounting(struct request_queue *q)
+ 	unsigned long flags;
+ 
+ 	spin_lock_irqsave(&q->stats->lock, flags);
+-	if (!--q->stats->accounting)
++	if (!--q->stats->accounting && list_empty(&q->stats->callbacks))
+ 		blk_queue_flag_clear(QUEUE_FLAG_STATS, q);
+ 	spin_unlock_irqrestore(&q->stats->lock, flags);
+ }
+@@ -201,7 +201,7 @@ void blk_stat_enable_accounting(struct request_queue *q)
+ 	unsigned long flags;
+ 
+ 	spin_lock_irqsave(&q->stats->lock, flags);
+-	if (!q->stats->accounting++)
++	if (!q->stats->accounting++ && list_empty(&q->stats->callbacks))
+ 		blk_queue_flag_set(QUEUE_FLAG_STATS, q);
+ 	spin_unlock_irqrestore(&q->stats->lock, flags);
+ }
+diff --git a/crypto/algapi.c b/crypto/algapi.c
+index d08f864f08bee..9de0677b3643d 100644
+--- a/crypto/algapi.c
++++ b/crypto/algapi.c
+@@ -493,7 +493,9 @@ void crypto_unregister_alg(struct crypto_alg *alg)
+ 	if (WARN(ret, "Algorithm %s is not registered", alg->cra_driver_name))
+ 		return;
+ 
+-	BUG_ON(refcount_read(&alg->cra_refcnt) != 1);
++	if (WARN_ON(refcount_read(&alg->cra_refcnt) != 1))
++		return;
++
+ 	if (alg->cra_destroy)
+ 		alg->cra_destroy(alg);
+ 
+diff --git a/crypto/drbg.c b/crypto/drbg.c
+index 982d4ca4526d8..ff4ebbc68efab 100644
+--- a/crypto/drbg.c
++++ b/crypto/drbg.c
+@@ -1546,7 +1546,7 @@ static int drbg_prepare_hrng(struct drbg_state *drbg)
+ 		const int err = PTR_ERR(drbg->jent);
+ 
+ 		drbg->jent = NULL;
+-		if (fips_enabled || err != -ENOENT)
++		if (fips_enabled)
+ 			return err;
+ 		pr_info("DRBG: Continuing without Jitter RNG\n");
+ 	}
+diff --git a/crypto/testmgr.c b/crypto/testmgr.c
+index c91e93ece20b9..b160eeb12c8e2 100644
+--- a/crypto/testmgr.c
++++ b/crypto/testmgr.c
+@@ -860,12 +860,50 @@ static int prepare_keybuf(const u8 *key, unsigned int ksize,
+ 
+ #ifdef CONFIG_CRYPTO_MANAGER_EXTRA_TESTS
+ 
++/*
++ * The fuzz tests use prandom instead of the normal Linux RNG since they don't
++ * need cryptographically secure random numbers.  This greatly improves the
++ * performance of these tests, especially if they are run before the Linux RNG
++ * has been initialized or if they are run on a lockdep-enabled kernel.
++ */
++
++static inline void init_rnd_state(struct rnd_state *rng)
++{
++	prandom_seed_state(rng, get_random_u64());
++}
++
++static inline u8 prandom_u8(struct rnd_state *rng)
++{
++	return prandom_u32_state(rng);
++}
++
++static inline u32 prandom_u32_below(struct rnd_state *rng, u32 ceil)
++{
++	/*
++	 * This is slightly biased for non-power-of-2 values of 'ceil', but this
++	 * isn't important here.
++	 */
++	return prandom_u32_state(rng) % ceil;
++}
++
++static inline bool prandom_bool(struct rnd_state *rng)
++{
++	return prandom_u32_below(rng, 2);
++}
++
++static inline u32 prandom_u32_inclusive(struct rnd_state *rng,
++					u32 floor, u32 ceil)
++{
++	return floor + prandom_u32_below(rng, ceil - floor + 1);
++}
++
+ /* Generate a random length in range [0, max_len], but prefer smaller values */
+-static unsigned int generate_random_length(unsigned int max_len)
++static unsigned int generate_random_length(struct rnd_state *rng,
++					   unsigned int max_len)
+ {
+-	unsigned int len = get_random_u32_below(max_len + 1);
++	unsigned int len = prandom_u32_below(rng, max_len + 1);
+ 
+-	switch (get_random_u32_below(4)) {
++	switch (prandom_u32_below(rng, 4)) {
+ 	case 0:
+ 		return len % 64;
+ 	case 1:
+@@ -878,43 +916,44 @@ static unsigned int generate_random_length(unsigned int max_len)
+ }
+ 
+ /* Flip a random bit in the given nonempty data buffer */
+-static void flip_random_bit(u8 *buf, size_t size)
++static void flip_random_bit(struct rnd_state *rng, u8 *buf, size_t size)
+ {
+ 	size_t bitpos;
+ 
+-	bitpos = get_random_u32_below(size * 8);
++	bitpos = prandom_u32_below(rng, size * 8);
+ 	buf[bitpos / 8] ^= 1 << (bitpos % 8);
+ }
+ 
+ /* Flip a random byte in the given nonempty data buffer */
+-static void flip_random_byte(u8 *buf, size_t size)
++static void flip_random_byte(struct rnd_state *rng, u8 *buf, size_t size)
+ {
+-	buf[get_random_u32_below(size)] ^= 0xff;
++	buf[prandom_u32_below(rng, size)] ^= 0xff;
+ }
+ 
+ /* Sometimes make some random changes to the given nonempty data buffer */
+-static void mutate_buffer(u8 *buf, size_t size)
++static void mutate_buffer(struct rnd_state *rng, u8 *buf, size_t size)
+ {
+ 	size_t num_flips;
+ 	size_t i;
+ 
+ 	/* Sometimes flip some bits */
+-	if (get_random_u32_below(4) == 0) {
+-		num_flips = min_t(size_t, 1 << get_random_u32_below(8), size * 8);
++	if (prandom_u32_below(rng, 4) == 0) {
++		num_flips = min_t(size_t, 1 << prandom_u32_below(rng, 8),
++				  size * 8);
+ 		for (i = 0; i < num_flips; i++)
+-			flip_random_bit(buf, size);
++			flip_random_bit(rng, buf, size);
+ 	}
+ 
+ 	/* Sometimes flip some bytes */
+-	if (get_random_u32_below(4) == 0) {
+-		num_flips = min_t(size_t, 1 << get_random_u32_below(8), size);
++	if (prandom_u32_below(rng, 4) == 0) {
++		num_flips = min_t(size_t, 1 << prandom_u32_below(rng, 8), size);
+ 		for (i = 0; i < num_flips; i++)
+-			flip_random_byte(buf, size);
++			flip_random_byte(rng, buf, size);
+ 	}
+ }
+ 
+ /* Randomly generate 'count' bytes, but sometimes make them "interesting" */
+-static void generate_random_bytes(u8 *buf, size_t count)
++static void generate_random_bytes(struct rnd_state *rng, u8 *buf, size_t count)
+ {
+ 	u8 b;
+ 	u8 increment;
+@@ -923,11 +962,11 @@ static void generate_random_bytes(u8 *buf, size_t count)
+ 	if (count == 0)
+ 		return;
+ 
+-	switch (get_random_u32_below(8)) { /* Choose a generation strategy */
++	switch (prandom_u32_below(rng, 8)) { /* Choose a generation strategy */
+ 	case 0:
+ 	case 1:
+ 		/* All the same byte, plus optional mutations */
+-		switch (get_random_u32_below(4)) {
++		switch (prandom_u32_below(rng, 4)) {
+ 		case 0:
+ 			b = 0x00;
+ 			break;
+@@ -935,28 +974,28 @@ static void generate_random_bytes(u8 *buf, size_t count)
+ 			b = 0xff;
+ 			break;
+ 		default:
+-			b = get_random_u8();
++			b = prandom_u8(rng);
+ 			break;
+ 		}
+ 		memset(buf, b, count);
+-		mutate_buffer(buf, count);
++		mutate_buffer(rng, buf, count);
+ 		break;
+ 	case 2:
+ 		/* Ascending or descending bytes, plus optional mutations */
+-		increment = get_random_u8();
+-		b = get_random_u8();
++		increment = prandom_u8(rng);
++		b = prandom_u8(rng);
+ 		for (i = 0; i < count; i++, b += increment)
+ 			buf[i] = b;
+-		mutate_buffer(buf, count);
++		mutate_buffer(rng, buf, count);
+ 		break;
+ 	default:
+ 		/* Fully random bytes */
+-		for (i = 0; i < count; i++)
+-			buf[i] = get_random_u8();
++		prandom_bytes_state(rng, buf, count);
+ 	}
+ }
+ 
+-static char *generate_random_sgl_divisions(struct test_sg_division *divs,
++static char *generate_random_sgl_divisions(struct rnd_state *rng,
++					   struct test_sg_division *divs,
+ 					   size_t max_divs, char *p, char *end,
+ 					   bool gen_flushes, u32 req_flags)
+ {
+@@ -967,24 +1006,26 @@ static char *generate_random_sgl_divisions(struct test_sg_division *divs,
+ 		unsigned int this_len;
+ 		const char *flushtype_str;
+ 
+-		if (div == &divs[max_divs - 1] || get_random_u32_below(2) == 0)
++		if (div == &divs[max_divs - 1] || prandom_bool(rng))
+ 			this_len = remaining;
+ 		else
+-			this_len = get_random_u32_inclusive(1, remaining);
++			this_len = prandom_u32_inclusive(rng, 1, remaining);
+ 		div->proportion_of_total = this_len;
+ 
+-		if (get_random_u32_below(4) == 0)
+-			div->offset = get_random_u32_inclusive(PAGE_SIZE - 128, PAGE_SIZE - 1);
+-		else if (get_random_u32_below(2) == 0)
+-			div->offset = get_random_u32_below(32);
++		if (prandom_u32_below(rng, 4) == 0)
++			div->offset = prandom_u32_inclusive(rng,
++							    PAGE_SIZE - 128,
++							    PAGE_SIZE - 1);
++		else if (prandom_bool(rng))
++			div->offset = prandom_u32_below(rng, 32);
+ 		else
+-			div->offset = get_random_u32_below(PAGE_SIZE);
+-		if (get_random_u32_below(8) == 0)
++			div->offset = prandom_u32_below(rng, PAGE_SIZE);
++		if (prandom_u32_below(rng, 8) == 0)
+ 			div->offset_relative_to_alignmask = true;
+ 
+ 		div->flush_type = FLUSH_TYPE_NONE;
+ 		if (gen_flushes) {
+-			switch (get_random_u32_below(4)) {
++			switch (prandom_u32_below(rng, 4)) {
+ 			case 0:
+ 				div->flush_type = FLUSH_TYPE_REIMPORT;
+ 				break;
+@@ -996,7 +1037,7 @@ static char *generate_random_sgl_divisions(struct test_sg_division *divs,
+ 
+ 		if (div->flush_type != FLUSH_TYPE_NONE &&
+ 		    !(req_flags & CRYPTO_TFM_REQ_MAY_SLEEP) &&
+-		    get_random_u32_below(2) == 0)
++		    prandom_bool(rng))
+ 			div->nosimd = true;
+ 
+ 		switch (div->flush_type) {
+@@ -1031,7 +1072,8 @@ static char *generate_random_sgl_divisions(struct test_sg_division *divs,
+ }
+ 
+ /* Generate a random testvec_config for fuzz testing */
+-static void generate_random_testvec_config(struct testvec_config *cfg,
++static void generate_random_testvec_config(struct rnd_state *rng,
++					   struct testvec_config *cfg,
+ 					   char *name, size_t max_namelen)
+ {
+ 	char *p = name;
+@@ -1043,7 +1085,7 @@ static void generate_random_testvec_config(struct testvec_config *cfg,
+ 
+ 	p += scnprintf(p, end - p, "random:");
+ 
+-	switch (get_random_u32_below(4)) {
++	switch (prandom_u32_below(rng, 4)) {
+ 	case 0:
+ 	case 1:
+ 		cfg->inplace_mode = OUT_OF_PLACE;
+@@ -1058,12 +1100,12 @@ static void generate_random_testvec_config(struct testvec_config *cfg,
+ 		break;
+ 	}
+ 
+-	if (get_random_u32_below(2) == 0) {
++	if (prandom_bool(rng)) {
+ 		cfg->req_flags |= CRYPTO_TFM_REQ_MAY_SLEEP;
+ 		p += scnprintf(p, end - p, " may_sleep");
+ 	}
+ 
+-	switch (get_random_u32_below(4)) {
++	switch (prandom_u32_below(rng, 4)) {
+ 	case 0:
+ 		cfg->finalization_type = FINALIZATION_TYPE_FINAL;
+ 		p += scnprintf(p, end - p, " use_final");
+@@ -1078,36 +1120,37 @@ static void generate_random_testvec_config(struct testvec_config *cfg,
+ 		break;
+ 	}
+ 
+-	if (!(cfg->req_flags & CRYPTO_TFM_REQ_MAY_SLEEP) &&
+-	    get_random_u32_below(2) == 0) {
++	if (!(cfg->req_flags & CRYPTO_TFM_REQ_MAY_SLEEP) && prandom_bool(rng)) {
+ 		cfg->nosimd = true;
+ 		p += scnprintf(p, end - p, " nosimd");
+ 	}
+ 
+ 	p += scnprintf(p, end - p, " src_divs=[");
+-	p = generate_random_sgl_divisions(cfg->src_divs,
++	p = generate_random_sgl_divisions(rng, cfg->src_divs,
+ 					  ARRAY_SIZE(cfg->src_divs), p, end,
+ 					  (cfg->finalization_type !=
+ 					   FINALIZATION_TYPE_DIGEST),
+ 					  cfg->req_flags);
+ 	p += scnprintf(p, end - p, "]");
+ 
+-	if (cfg->inplace_mode == OUT_OF_PLACE && get_random_u32_below(2) == 0) {
++	if (cfg->inplace_mode == OUT_OF_PLACE && prandom_bool(rng)) {
+ 		p += scnprintf(p, end - p, " dst_divs=[");
+-		p = generate_random_sgl_divisions(cfg->dst_divs,
++		p = generate_random_sgl_divisions(rng, cfg->dst_divs,
+ 						  ARRAY_SIZE(cfg->dst_divs),
+ 						  p, end, false,
+ 						  cfg->req_flags);
+ 		p += scnprintf(p, end - p, "]");
+ 	}
+ 
+-	if (get_random_u32_below(2) == 0) {
+-		cfg->iv_offset = get_random_u32_inclusive(1, MAX_ALGAPI_ALIGNMASK);
++	if (prandom_bool(rng)) {
++		cfg->iv_offset = prandom_u32_inclusive(rng, 1,
++						       MAX_ALGAPI_ALIGNMASK);
+ 		p += scnprintf(p, end - p, " iv_offset=%u", cfg->iv_offset);
+ 	}
+ 
+-	if (get_random_u32_below(2) == 0) {
+-		cfg->key_offset = get_random_u32_inclusive(1, MAX_ALGAPI_ALIGNMASK);
++	if (prandom_bool(rng)) {
++		cfg->key_offset = prandom_u32_inclusive(rng, 1,
++							MAX_ALGAPI_ALIGNMASK);
+ 		p += scnprintf(p, end - p, " key_offset=%u", cfg->key_offset);
+ 	}
+ 
+@@ -1620,11 +1663,14 @@ static int test_hash_vec(const struct hash_testvec *vec, unsigned int vec_num,
+ 
+ #ifdef CONFIG_CRYPTO_MANAGER_EXTRA_TESTS
+ 	if (!noextratests) {
++		struct rnd_state rng;
+ 		struct testvec_config cfg;
+ 		char cfgname[TESTVEC_CONFIG_NAMELEN];
+ 
++		init_rnd_state(&rng);
++
+ 		for (i = 0; i < fuzz_iterations; i++) {
+-			generate_random_testvec_config(&cfg, cfgname,
++			generate_random_testvec_config(&rng, &cfg, cfgname,
+ 						       sizeof(cfgname));
+ 			err = test_hash_vec_cfg(vec, vec_name, &cfg,
+ 						req, desc, tsgl, hashstate);
+@@ -1642,15 +1688,16 @@ static int test_hash_vec(const struct hash_testvec *vec, unsigned int vec_num,
+  * Generate a hash test vector from the given implementation.
+  * Assumes the buffers in 'vec' were already allocated.
+  */
+-static void generate_random_hash_testvec(struct shash_desc *desc,
++static void generate_random_hash_testvec(struct rnd_state *rng,
++					 struct shash_desc *desc,
+ 					 struct hash_testvec *vec,
+ 					 unsigned int maxkeysize,
+ 					 unsigned int maxdatasize,
+ 					 char *name, size_t max_namelen)
+ {
+ 	/* Data */
+-	vec->psize = generate_random_length(maxdatasize);
+-	generate_random_bytes((u8 *)vec->plaintext, vec->psize);
++	vec->psize = generate_random_length(rng, maxdatasize);
++	generate_random_bytes(rng, (u8 *)vec->plaintext, vec->psize);
+ 
+ 	/*
+ 	 * Key: length in range [1, maxkeysize], but usually choose maxkeysize.
+@@ -1660,9 +1707,9 @@ static void generate_random_hash_testvec(struct shash_desc *desc,
+ 	vec->ksize = 0;
+ 	if (maxkeysize) {
+ 		vec->ksize = maxkeysize;
+-		if (get_random_u32_below(4) == 0)
+-			vec->ksize = get_random_u32_inclusive(1, maxkeysize);
+-		generate_random_bytes((u8 *)vec->key, vec->ksize);
++		if (prandom_u32_below(rng, 4) == 0)
++			vec->ksize = prandom_u32_inclusive(rng, 1, maxkeysize);
++		generate_random_bytes(rng, (u8 *)vec->key, vec->ksize);
+ 
+ 		vec->setkey_error = crypto_shash_setkey(desc->tfm, vec->key,
+ 							vec->ksize);
+@@ -1696,6 +1743,7 @@ static int test_hash_vs_generic_impl(const char *generic_driver,
+ 	const unsigned int maxdatasize = (2 * PAGE_SIZE) - TESTMGR_POISON_LEN;
+ 	const char *algname = crypto_hash_alg_common(tfm)->base.cra_name;
+ 	const char *driver = crypto_ahash_driver_name(tfm);
++	struct rnd_state rng;
+ 	char _generic_driver[CRYPTO_MAX_ALG_NAME];
+ 	struct crypto_shash *generic_tfm = NULL;
+ 	struct shash_desc *generic_desc = NULL;
+@@ -1709,6 +1757,8 @@ static int test_hash_vs_generic_impl(const char *generic_driver,
+ 	if (noextratests)
+ 		return 0;
+ 
++	init_rnd_state(&rng);
++
+ 	if (!generic_driver) { /* Use default naming convention? */
+ 		err = build_generic_driver_name(algname, _generic_driver);
+ 		if (err)
+@@ -1777,10 +1827,11 @@ static int test_hash_vs_generic_impl(const char *generic_driver,
+ 	}
+ 
+ 	for (i = 0; i < fuzz_iterations * 8; i++) {
+-		generate_random_hash_testvec(generic_desc, &vec,
++		generate_random_hash_testvec(&rng, generic_desc, &vec,
+ 					     maxkeysize, maxdatasize,
+ 					     vec_name, sizeof(vec_name));
+-		generate_random_testvec_config(cfg, cfgname, sizeof(cfgname));
++		generate_random_testvec_config(&rng, cfg, cfgname,
++					       sizeof(cfgname));
+ 
+ 		err = test_hash_vec_cfg(&vec, vec_name, cfg,
+ 					req, desc, tsgl, hashstate);
+@@ -2182,11 +2233,14 @@ static int test_aead_vec(int enc, const struct aead_testvec *vec,
+ 
+ #ifdef CONFIG_CRYPTO_MANAGER_EXTRA_TESTS
+ 	if (!noextratests) {
++		struct rnd_state rng;
+ 		struct testvec_config cfg;
+ 		char cfgname[TESTVEC_CONFIG_NAMELEN];
+ 
++		init_rnd_state(&rng);
++
+ 		for (i = 0; i < fuzz_iterations; i++) {
+-			generate_random_testvec_config(&cfg, cfgname,
++			generate_random_testvec_config(&rng, &cfg, cfgname,
+ 						       sizeof(cfgname));
+ 			err = test_aead_vec_cfg(enc, vec, vec_name,
+ 						&cfg, req, tsgls);
+@@ -2202,6 +2256,7 @@ static int test_aead_vec(int enc, const struct aead_testvec *vec,
+ #ifdef CONFIG_CRYPTO_MANAGER_EXTRA_TESTS
+ 
+ struct aead_extra_tests_ctx {
++	struct rnd_state rng;
+ 	struct aead_request *req;
+ 	struct crypto_aead *tfm;
+ 	const struct alg_test_desc *test_desc;
+@@ -2220,24 +2275,26 @@ struct aead_extra_tests_ctx {
+  * here means the full ciphertext including the authentication tag.  The
+  * authentication tag (and hence also the ciphertext) is assumed to be nonempty.
+  */
+-static void mutate_aead_message(struct aead_testvec *vec, bool aad_iv,
++static void mutate_aead_message(struct rnd_state *rng,
++				struct aead_testvec *vec, bool aad_iv,
+ 				unsigned int ivsize)
+ {
+ 	const unsigned int aad_tail_size = aad_iv ? ivsize : 0;
+ 	const unsigned int authsize = vec->clen - vec->plen;
+ 
+-	if (get_random_u32_below(2) == 0 && vec->alen > aad_tail_size) {
++	if (prandom_bool(rng) && vec->alen > aad_tail_size) {
+ 		 /* Mutate the AAD */
+-		flip_random_bit((u8 *)vec->assoc, vec->alen - aad_tail_size);
+-		if (get_random_u32_below(2) == 0)
++		flip_random_bit(rng, (u8 *)vec->assoc,
++				vec->alen - aad_tail_size);
++		if (prandom_bool(rng))
+ 			return;
+ 	}
+-	if (get_random_u32_below(2) == 0) {
++	if (prandom_bool(rng)) {
+ 		/* Mutate auth tag (assuming it's at the end of ciphertext) */
+-		flip_random_bit((u8 *)vec->ctext + vec->plen, authsize);
++		flip_random_bit(rng, (u8 *)vec->ctext + vec->plen, authsize);
+ 	} else {
+ 		/* Mutate any part of the ciphertext */
+-		flip_random_bit((u8 *)vec->ctext, vec->clen);
++		flip_random_bit(rng, (u8 *)vec->ctext, vec->clen);
+ 	}
+ }
+ 
+@@ -2248,7 +2305,8 @@ static void mutate_aead_message(struct aead_testvec *vec, bool aad_iv,
+  */
+ #define MIN_COLLISION_FREE_AUTHSIZE 8
+ 
+-static void generate_aead_message(struct aead_request *req,
++static void generate_aead_message(struct rnd_state *rng,
++				  struct aead_request *req,
+ 				  const struct aead_test_suite *suite,
+ 				  struct aead_testvec *vec,
+ 				  bool prefer_inauthentic)
+@@ -2257,17 +2315,18 @@ static void generate_aead_message(struct aead_request *req,
+ 	const unsigned int ivsize = crypto_aead_ivsize(tfm);
+ 	const unsigned int authsize = vec->clen - vec->plen;
+ 	const bool inauthentic = (authsize >= MIN_COLLISION_FREE_AUTHSIZE) &&
+-				 (prefer_inauthentic || get_random_u32_below(4) == 0);
++				 (prefer_inauthentic ||
++				  prandom_u32_below(rng, 4) == 0);
+ 
+ 	/* Generate the AAD. */
+-	generate_random_bytes((u8 *)vec->assoc, vec->alen);
++	generate_random_bytes(rng, (u8 *)vec->assoc, vec->alen);
+ 	if (suite->aad_iv && vec->alen >= ivsize)
+ 		/* Avoid implementation-defined behavior. */
+ 		memcpy((u8 *)vec->assoc + vec->alen - ivsize, vec->iv, ivsize);
+ 
+-	if (inauthentic && get_random_u32_below(2) == 0) {
++	if (inauthentic && prandom_bool(rng)) {
+ 		/* Generate a random ciphertext. */
+-		generate_random_bytes((u8 *)vec->ctext, vec->clen);
++		generate_random_bytes(rng, (u8 *)vec->ctext, vec->clen);
+ 	} else {
+ 		int i = 0;
+ 		struct scatterlist src[2], dst;
+@@ -2279,7 +2338,7 @@ static void generate_aead_message(struct aead_request *req,
+ 		if (vec->alen)
+ 			sg_set_buf(&src[i++], vec->assoc, vec->alen);
+ 		if (vec->plen) {
+-			generate_random_bytes((u8 *)vec->ptext, vec->plen);
++			generate_random_bytes(rng, (u8 *)vec->ptext, vec->plen);
+ 			sg_set_buf(&src[i++], vec->ptext, vec->plen);
+ 		}
+ 		sg_init_one(&dst, vec->ctext, vec->alen + vec->clen);
+@@ -2299,7 +2358,7 @@ static void generate_aead_message(struct aead_request *req,
+ 		 * Mutate the authentic (ciphertext, AAD) pair to get an
+ 		 * inauthentic one.
+ 		 */
+-		mutate_aead_message(vec, suite->aad_iv, ivsize);
++		mutate_aead_message(rng, vec, suite->aad_iv, ivsize);
+ 	}
+ 	vec->novrfy = 1;
+ 	if (suite->einval_allowed)
+@@ -2313,7 +2372,8 @@ static void generate_aead_message(struct aead_request *req,
+  * If 'prefer_inauthentic' is true, then this function will generate inauthentic
+  * test vectors (i.e. vectors with 'vec->novrfy=1') more often.
+  */
+-static void generate_random_aead_testvec(struct aead_request *req,
++static void generate_random_aead_testvec(struct rnd_state *rng,
++					 struct aead_request *req,
+ 					 struct aead_testvec *vec,
+ 					 const struct aead_test_suite *suite,
+ 					 unsigned int maxkeysize,
+@@ -2329,18 +2389,18 @@ static void generate_random_aead_testvec(struct aead_request *req,
+ 
+ 	/* Key: length in [0, maxkeysize], but usually choose maxkeysize */
+ 	vec->klen = maxkeysize;
+-	if (get_random_u32_below(4) == 0)
+-		vec->klen = get_random_u32_below(maxkeysize + 1);
+-	generate_random_bytes((u8 *)vec->key, vec->klen);
++	if (prandom_u32_below(rng, 4) == 0)
++		vec->klen = prandom_u32_below(rng, maxkeysize + 1);
++	generate_random_bytes(rng, (u8 *)vec->key, vec->klen);
+ 	vec->setkey_error = crypto_aead_setkey(tfm, vec->key, vec->klen);
+ 
+ 	/* IV */
+-	generate_random_bytes((u8 *)vec->iv, ivsize);
++	generate_random_bytes(rng, (u8 *)vec->iv, ivsize);
+ 
+ 	/* Tag length: in [0, maxauthsize], but usually choose maxauthsize */
+ 	authsize = maxauthsize;
+-	if (get_random_u32_below(4) == 0)
+-		authsize = get_random_u32_below(maxauthsize + 1);
++	if (prandom_u32_below(rng, 4) == 0)
++		authsize = prandom_u32_below(rng, maxauthsize + 1);
+ 	if (prefer_inauthentic && authsize < MIN_COLLISION_FREE_AUTHSIZE)
+ 		authsize = MIN_COLLISION_FREE_AUTHSIZE;
+ 	if (WARN_ON(authsize > maxdatasize))
+@@ -2349,11 +2409,11 @@ static void generate_random_aead_testvec(struct aead_request *req,
+ 	vec->setauthsize_error = crypto_aead_setauthsize(tfm, authsize);
+ 
+ 	/* AAD, plaintext, and ciphertext lengths */
+-	total_len = generate_random_length(maxdatasize);
+-	if (get_random_u32_below(4) == 0)
++	total_len = generate_random_length(rng, maxdatasize);
++	if (prandom_u32_below(rng, 4) == 0)
+ 		vec->alen = 0;
+ 	else
+-		vec->alen = generate_random_length(total_len);
++		vec->alen = generate_random_length(rng, total_len);
+ 	vec->plen = total_len - vec->alen;
+ 	vec->clen = vec->plen + authsize;
+ 
+@@ -2364,7 +2424,7 @@ static void generate_random_aead_testvec(struct aead_request *req,
+ 	vec->novrfy = 0;
+ 	vec->crypt_error = 0;
+ 	if (vec->setkey_error == 0 && vec->setauthsize_error == 0)
+-		generate_aead_message(req, suite, vec, prefer_inauthentic);
++		generate_aead_message(rng, req, suite, vec, prefer_inauthentic);
+ 	snprintf(name, max_namelen,
+ 		 "\"random: alen=%u plen=%u authsize=%u klen=%u novrfy=%d\"",
+ 		 vec->alen, vec->plen, authsize, vec->klen, vec->novrfy);
+@@ -2376,7 +2436,7 @@ static void try_to_generate_inauthentic_testvec(
+ 	int i;
+ 
+ 	for (i = 0; i < 10; i++) {
+-		generate_random_aead_testvec(ctx->req, &ctx->vec,
++		generate_random_aead_testvec(&ctx->rng, ctx->req, &ctx->vec,
+ 					     &ctx->test_desc->suite.aead,
+ 					     ctx->maxkeysize, ctx->maxdatasize,
+ 					     ctx->vec_name,
+@@ -2407,7 +2467,8 @@ static int test_aead_inauthentic_inputs(struct aead_extra_tests_ctx *ctx)
+ 		 */
+ 		try_to_generate_inauthentic_testvec(ctx);
+ 		if (ctx->vec.novrfy) {
+-			generate_random_testvec_config(&ctx->cfg, ctx->cfgname,
++			generate_random_testvec_config(&ctx->rng, &ctx->cfg,
++						       ctx->cfgname,
+ 						       sizeof(ctx->cfgname));
+ 			err = test_aead_vec_cfg(DECRYPT, &ctx->vec,
+ 						ctx->vec_name, &ctx->cfg,
+@@ -2497,12 +2558,13 @@ static int test_aead_vs_generic_impl(struct aead_extra_tests_ctx *ctx)
+ 	 * the other implementation against them.
+ 	 */
+ 	for (i = 0; i < fuzz_iterations * 8; i++) {
+-		generate_random_aead_testvec(generic_req, &ctx->vec,
++		generate_random_aead_testvec(&ctx->rng, generic_req, &ctx->vec,
+ 					     &ctx->test_desc->suite.aead,
+ 					     ctx->maxkeysize, ctx->maxdatasize,
+ 					     ctx->vec_name,
+ 					     sizeof(ctx->vec_name), false);
+-		generate_random_testvec_config(&ctx->cfg, ctx->cfgname,
++		generate_random_testvec_config(&ctx->rng, &ctx->cfg,
++					       ctx->cfgname,
+ 					       sizeof(ctx->cfgname));
+ 		if (!ctx->vec.novrfy) {
+ 			err = test_aead_vec_cfg(ENCRYPT, &ctx->vec,
+@@ -2541,6 +2603,7 @@ static int test_aead_extra(const struct alg_test_desc *test_desc,
+ 	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
+ 	if (!ctx)
+ 		return -ENOMEM;
++	init_rnd_state(&ctx->rng);
+ 	ctx->req = req;
+ 	ctx->tfm = crypto_aead_reqtfm(req);
+ 	ctx->test_desc = test_desc;
+@@ -2930,11 +2993,14 @@ static int test_skcipher_vec(int enc, const struct cipher_testvec *vec,
+ 
+ #ifdef CONFIG_CRYPTO_MANAGER_EXTRA_TESTS
+ 	if (!noextratests) {
++		struct rnd_state rng;
+ 		struct testvec_config cfg;
+ 		char cfgname[TESTVEC_CONFIG_NAMELEN];
+ 
++		init_rnd_state(&rng);
++
+ 		for (i = 0; i < fuzz_iterations; i++) {
+-			generate_random_testvec_config(&cfg, cfgname,
++			generate_random_testvec_config(&rng, &cfg, cfgname,
+ 						       sizeof(cfgname));
+ 			err = test_skcipher_vec_cfg(enc, vec, vec_name,
+ 						    &cfg, req, tsgls);
+@@ -2952,7 +3018,8 @@ static int test_skcipher_vec(int enc, const struct cipher_testvec *vec,
+  * Generate a symmetric cipher test vector from the given implementation.
+  * Assumes the buffers in 'vec' were already allocated.
+  */
+-static void generate_random_cipher_testvec(struct skcipher_request *req,
++static void generate_random_cipher_testvec(struct rnd_state *rng,
++					   struct skcipher_request *req,
+ 					   struct cipher_testvec *vec,
+ 					   unsigned int maxdatasize,
+ 					   char *name, size_t max_namelen)
+@@ -2966,17 +3033,17 @@ static void generate_random_cipher_testvec(struct skcipher_request *req,
+ 
+ 	/* Key: length in [0, maxkeysize], but usually choose maxkeysize */
+ 	vec->klen = maxkeysize;
+-	if (get_random_u32_below(4) == 0)
+-		vec->klen = get_random_u32_below(maxkeysize + 1);
+-	generate_random_bytes((u8 *)vec->key, vec->klen);
++	if (prandom_u32_below(rng, 4) == 0)
++		vec->klen = prandom_u32_below(rng, maxkeysize + 1);
++	generate_random_bytes(rng, (u8 *)vec->key, vec->klen);
+ 	vec->setkey_error = crypto_skcipher_setkey(tfm, vec->key, vec->klen);
+ 
+ 	/* IV */
+-	generate_random_bytes((u8 *)vec->iv, ivsize);
++	generate_random_bytes(rng, (u8 *)vec->iv, ivsize);
+ 
+ 	/* Plaintext */
+-	vec->len = generate_random_length(maxdatasize);
+-	generate_random_bytes((u8 *)vec->ptext, vec->len);
++	vec->len = generate_random_length(rng, maxdatasize);
++	generate_random_bytes(rng, (u8 *)vec->ptext, vec->len);
+ 
+ 	/* If the key couldn't be set, no need to continue to encrypt. */
+ 	if (vec->setkey_error)
+@@ -3018,6 +3085,7 @@ static int test_skcipher_vs_generic_impl(const char *generic_driver,
+ 	const unsigned int maxdatasize = (2 * PAGE_SIZE) - TESTMGR_POISON_LEN;
+ 	const char *algname = crypto_skcipher_alg(tfm)->base.cra_name;
+ 	const char *driver = crypto_skcipher_driver_name(tfm);
++	struct rnd_state rng;
+ 	char _generic_driver[CRYPTO_MAX_ALG_NAME];
+ 	struct crypto_skcipher *generic_tfm = NULL;
+ 	struct skcipher_request *generic_req = NULL;
+@@ -3035,6 +3103,8 @@ static int test_skcipher_vs_generic_impl(const char *generic_driver,
+ 	if (strncmp(algname, "kw(", 3) == 0)
+ 		return 0;
+ 
++	init_rnd_state(&rng);
++
+ 	if (!generic_driver) { /* Use default naming convention? */
+ 		err = build_generic_driver_name(algname, _generic_driver);
+ 		if (err)
+@@ -3119,9 +3189,11 @@ static int test_skcipher_vs_generic_impl(const char *generic_driver,
+ 	}
+ 
+ 	for (i = 0; i < fuzz_iterations * 8; i++) {
+-		generate_random_cipher_testvec(generic_req, &vec, maxdatasize,
++		generate_random_cipher_testvec(&rng, generic_req, &vec,
++					       maxdatasize,
+ 					       vec_name, sizeof(vec_name));
+-		generate_random_testvec_config(cfg, cfgname, sizeof(cfgname));
++		generate_random_testvec_config(&rng, cfg, cfgname,
++					       sizeof(cfgname));
+ 
+ 		err = test_skcipher_vec_cfg(ENCRYPT, &vec, vec_name,
+ 					    cfg, req, tsgls);
+diff --git a/drivers/accel/ivpu/ivpu_pm.c b/drivers/accel/ivpu/ivpu_pm.c
+index bde42d6383da6..aa4d56dc52b39 100644
+--- a/drivers/accel/ivpu/ivpu_pm.c
++++ b/drivers/accel/ivpu/ivpu_pm.c
+@@ -239,8 +239,6 @@ int ivpu_rpm_get(struct ivpu_device *vdev)
+ {
+ 	int ret;
+ 
+-	ivpu_dbg(vdev, RPM, "rpm_get count %d\n", atomic_read(&vdev->drm.dev->power.usage_count));
+-
+ 	ret = pm_runtime_resume_and_get(vdev->drm.dev);
+ 	if (!drm_WARN_ON(&vdev->drm, ret < 0))
+ 		vdev->pm->suspend_reschedule_counter = PM_RESCHEDULE_LIMIT;
+@@ -250,8 +248,6 @@ int ivpu_rpm_get(struct ivpu_device *vdev)
+ 
+ void ivpu_rpm_put(struct ivpu_device *vdev)
+ {
+-	ivpu_dbg(vdev, RPM, "rpm_put count %d\n", atomic_read(&vdev->drm.dev->power.usage_count));
+-
+ 	pm_runtime_mark_last_busy(vdev->drm.dev);
+ 	pm_runtime_put_autosuspend(vdev->drm.dev);
+ }
+@@ -321,16 +317,10 @@ void ivpu_pm_enable(struct ivpu_device *vdev)
+ 	pm_runtime_allow(dev);
+ 	pm_runtime_mark_last_busy(dev);
+ 	pm_runtime_put_autosuspend(dev);
+-
+-	ivpu_dbg(vdev, RPM, "Enable RPM count %d\n", atomic_read(&dev->power.usage_count));
+ }
+ 
+ void ivpu_pm_disable(struct ivpu_device *vdev)
+ {
+-	struct device *dev = vdev->drm.dev;
+-
+-	ivpu_dbg(vdev, RPM, "Disable RPM count %d\n", atomic_read(&dev->power.usage_count));
+-
+ 	pm_runtime_get_noresume(vdev->drm.dev);
+ 	pm_runtime_forbid(vdev->drm.dev);
+ }
+diff --git a/drivers/acpi/bus.c b/drivers/acpi/bus.c
+index a96da65057b19..d34451cf04bc2 100644
+--- a/drivers/acpi/bus.c
++++ b/drivers/acpi/bus.c
+@@ -589,6 +589,7 @@ static void acpi_device_remove_notify_handler(struct acpi_device *device,
+ 		acpi_remove_notify_handler(device->handle, type,
+ 					   acpi_notify_device);
+ 	}
++	acpi_os_wait_events_complete();
+ }
+ 
+ /* Handle events targeting \_SB device (at present only graceful shutdown) */
+diff --git a/drivers/acpi/power.c b/drivers/acpi/power.c
+index 23507d29f0006..c2c70139c4f1d 100644
+--- a/drivers/acpi/power.c
++++ b/drivers/acpi/power.c
+@@ -23,6 +23,7 @@
+ 
+ #define pr_fmt(fmt) "ACPI: PM: " fmt
+ 
++#include <linux/dmi.h>
+ #include <linux/kernel.h>
+ #include <linux/module.h>
+ #include <linux/init.h>
+@@ -1022,6 +1023,21 @@ void acpi_resume_power_resources(void)
+ }
+ #endif
+ 
++static const struct dmi_system_id dmi_leave_unused_power_resources_on[] = {
++	{
++		/*
++		 * The Toshiba Click Mini has a CPR3 power-resource which must
++		 * be on for the touchscreen to work, but which is not in any
++		 * _PR? lists. The other 2 affected power-resources are no-ops.
++		 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "TOSHIBA"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "SATELLITE Click Mini L9W-B"),
++		},
++	},
++	{}
++};
++
+ /**
+  * acpi_turn_off_unused_power_resources - Turn off power resources not in use.
+  */
+@@ -1029,6 +1045,9 @@ void acpi_turn_off_unused_power_resources(void)
+ {
+ 	struct acpi_power_resource *resource;
+ 
++	if (dmi_check_system(dmi_leave_unused_power_resources_on))
++		return;
++
+ 	mutex_lock(&power_resource_list_lock);
+ 
+ 	list_for_each_entry_reverse(resource, &acpi_power_resource_list, list_node) {
+diff --git a/drivers/acpi/processor_pdc.c b/drivers/acpi/processor_pdc.c
+index 8c3f82c9fff35..18fb04523f93b 100644
+--- a/drivers/acpi/processor_pdc.c
++++ b/drivers/acpi/processor_pdc.c
+@@ -14,6 +14,8 @@
+ #include <linux/acpi.h>
+ #include <acpi/processor.h>
+ 
++#include <xen/xen.h>
++
+ #include "internal.h"
+ 
+ static bool __init processor_physically_present(acpi_handle handle)
+@@ -47,6 +49,15 @@ static bool __init processor_physically_present(acpi_handle handle)
+ 		return false;
+ 	}
+ 
++	if (xen_initial_domain())
++		/*
++		 * When running as a Xen dom0 the number of processors Linux
++		 * sees can be different from the real number of processors on
++		 * the system, and we still need to execute _PDC for all of
++		 * them.
++		 */
++		return xen_processor_present(acpi_id);
++
+ 	type = (acpi_type == ACPI_TYPE_DEVICE) ? 1 : 0;
+ 	cpuid = acpi_get_cpuid(handle, type, acpi_id);
+ 
+diff --git a/drivers/acpi/video_detect.c b/drivers/acpi/video_detect.c
+index e85729fc481fd..295744fe7c920 100644
+--- a/drivers/acpi/video_detect.c
++++ b/drivers/acpi/video_detect.c
+@@ -299,20 +299,6 @@ static const struct dmi_system_id video_detect_dmi_table[] = {
+ 		},
+ 	},
+ 
+-	/*
+-	 * Older models with nvidia GPU which need acpi_video backlight
+-	 * control and where the old nvidia binary driver series does not
+-	 * call acpi_video_register_backlight().
+-	 */
+-	{
+-	 .callback = video_detect_force_video,
+-	 /* ThinkPad W530 */
+-	 .matches = {
+-		DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+-		DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad W530"),
+-		},
+-	},
+-
+ 	/*
+ 	 * These models have a working acpi_video backlight control, and using
+ 	 * native backlight causes a regression where backlight does not work
+diff --git a/drivers/acpi/viot.c b/drivers/acpi/viot.c
+index ed752cbbe6362..c8025921c129b 100644
+--- a/drivers/acpi/viot.c
++++ b/drivers/acpi/viot.c
+@@ -328,6 +328,7 @@ static int viot_pci_dev_iommu_init(struct pci_dev *pdev, u16 dev_id, void *data)
+ {
+ 	u32 epid;
+ 	struct viot_endpoint *ep;
++	struct device *aliased_dev = data;
+ 	u32 domain_nr = pci_domain_nr(pdev->bus);
+ 
+ 	list_for_each_entry(ep, &viot_pci_ranges, list) {
+@@ -338,7 +339,7 @@ static int viot_pci_dev_iommu_init(struct pci_dev *pdev, u16 dev_id, void *data)
+ 			epid = ((domain_nr - ep->segment_start) << 16) +
+ 				dev_id - ep->bdf_start + ep->endpoint_id;
+ 
+-			return viot_dev_iommu_init(&pdev->dev, ep->viommu,
++			return viot_dev_iommu_init(aliased_dev, ep->viommu,
+ 						   epid);
+ 		}
+ 	}
+@@ -372,7 +373,7 @@ int viot_iommu_configure(struct device *dev)
+ {
+ 	if (dev_is_pci(dev))
+ 		return pci_for_each_dma_alias(to_pci_dev(dev),
+-					      viot_pci_dev_iommu_init, NULL);
++					      viot_pci_dev_iommu_init, dev);
+ 	else if (dev_is_platform(dev))
+ 		return viot_mmio_dev_iommu_init(to_platform_device(dev));
+ 	return -ENODEV;
+diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
+index b1c1dd38ab011..c4b6198d74613 100644
+--- a/drivers/base/arch_topology.c
++++ b/drivers/base/arch_topology.c
+@@ -843,10 +843,11 @@ void __init init_cpu_topology(void)
+ 
+ 	for_each_possible_cpu(cpu) {
+ 		ret = fetch_cache_info(cpu);
+-		if (ret) {
++		if (!ret)
++			continue;
++		else if (ret != -ENOENT)
+ 			pr_err("Early cacheinfo failed, ret = %d\n", ret);
+-			break;
+-		}
++		return;
+ 	}
+ }
+ 
+diff --git a/drivers/base/cacheinfo.c b/drivers/base/cacheinfo.c
+index f3903d002819e..ea8f416852bd9 100644
+--- a/drivers/base/cacheinfo.c
++++ b/drivers/base/cacheinfo.c
+@@ -38,11 +38,10 @@ static inline bool cache_leaves_are_shared(struct cacheinfo *this_leaf,
+ {
+ 	/*
+ 	 * For non DT/ACPI systems, assume unique level 1 caches,
+-	 * system-wide shared caches for all other levels. This will be used
+-	 * only if arch specific code has not populated shared_cpu_map
++	 * system-wide shared caches for all other levels.
+ 	 */
+ 	if (!(IS_ENABLED(CONFIG_OF) || IS_ENABLED(CONFIG_ACPI)))
+-		return !(this_leaf->level == 1);
++		return (this_leaf->level != 1) && (sib_leaf->level != 1);
+ 
+ 	if ((sib_leaf->attributes & CACHE_ID) &&
+ 	    (this_leaf->attributes & CACHE_ID))
+@@ -79,6 +78,9 @@ bool last_level_cache_is_shared(unsigned int cpu_x, unsigned int cpu_y)
+ }
+ 
+ #ifdef CONFIG_OF
++
++static bool of_check_cache_nodes(struct device_node *np);
++
+ /* OF properties to query for a given cache type */
+ struct cache_type_info {
+ 	const char *size_prop;
+@@ -206,6 +208,11 @@ static int cache_setup_of_node(unsigned int cpu)
+ 		return -ENOENT;
+ 	}
+ 
++	if (!of_check_cache_nodes(np)) {
++		of_node_put(np);
++		return -ENOENT;
++	}
++
+ 	prev = np;
+ 
+ 	while (index < cache_leaves(cpu)) {
+@@ -230,6 +237,25 @@ static int cache_setup_of_node(unsigned int cpu)
+ 	return 0;
+ }
+ 
++static bool of_check_cache_nodes(struct device_node *np)
++{
++	struct device_node *next;
++
++	if (of_property_present(np, "cache-size")   ||
++	    of_property_present(np, "i-cache-size") ||
++	    of_property_present(np, "d-cache-size") ||
++	    of_property_present(np, "cache-unified"))
++		return true;
++
++	next = of_find_next_cache_node(np);
++	if (next) {
++		of_node_put(next);
++		return true;
++	}
++
++	return false;
++}
++
+ static int of_count_cache_leaves(struct device_node *np)
+ {
+ 	unsigned int leaves = 0;
+@@ -261,6 +287,11 @@ int init_of_cache_level(unsigned int cpu)
+ 	struct device_node *prev = NULL;
+ 	unsigned int levels = 0, leaves, level;
+ 
++	if (!of_check_cache_nodes(np)) {
++		of_node_put(np);
++		return -ENOENT;
++	}
++
+ 	leaves = of_count_cache_leaves(np);
+ 	if (leaves > 0)
+ 		levels = 1;
+diff --git a/drivers/base/cpu.c b/drivers/base/cpu.c
+index 182c6122f8152..c1815b9dae68e 100644
+--- a/drivers/base/cpu.c
++++ b/drivers/base/cpu.c
+@@ -487,7 +487,8 @@ static const struct attribute_group *cpu_root_attr_groups[] = {
+ bool cpu_is_hotpluggable(unsigned int cpu)
+ {
+ 	struct device *dev = get_cpu_device(cpu);
+-	return dev && container_of(dev, struct cpu, dev)->hotpluggable;
++	return dev && container_of(dev, struct cpu, dev)->hotpluggable
++		&& tick_nohz_cpu_hotpluggable(cpu);
+ }
+ EXPORT_SYMBOL_GPL(cpu_is_hotpluggable);
+ 
+diff --git a/drivers/block/drbd/drbd_receiver.c b/drivers/block/drbd/drbd_receiver.c
+index 757f4692b5bd8..1fc815be12d01 100644
+--- a/drivers/block/drbd/drbd_receiver.c
++++ b/drivers/block/drbd/drbd_receiver.c
+@@ -1283,7 +1283,7 @@ static void one_flush_endio(struct bio *bio)
+ static void submit_one_flush(struct drbd_device *device, struct issue_flush_context *ctx)
+ {
+ 	struct bio *bio = bio_alloc(device->ldev->backing_bdev, 0,
+-				    REQ_OP_FLUSH | REQ_PREFLUSH, GFP_NOIO);
++				    REQ_OP_WRITE | REQ_PREFLUSH, GFP_NOIO);
+ 	struct one_flush_context *octx = kmalloc(sizeof(*octx), GFP_NOIO);
+ 
+ 	if (!octx) {
+diff --git a/drivers/bluetooth/btsdio.c b/drivers/bluetooth/btsdio.c
+index 51000320e1ea8..f19d31ee37ea8 100644
+--- a/drivers/bluetooth/btsdio.c
++++ b/drivers/bluetooth/btsdio.c
+@@ -354,7 +354,6 @@ static void btsdio_remove(struct sdio_func *func)
+ 
+ 	BT_DBG("func %p", func);
+ 
+-	cancel_work_sync(&data->work);
+ 	if (!data)
+ 		return;
+ 
+diff --git a/drivers/bus/mhi/host/boot.c b/drivers/bus/mhi/host/boot.c
+index 1c69feee17030..d2a19b07ccb88 100644
+--- a/drivers/bus/mhi/host/boot.c
++++ b/drivers/bus/mhi/host/boot.c
+@@ -391,6 +391,7 @@ void mhi_fw_load_handler(struct mhi_controller *mhi_cntrl)
+ {
+ 	const struct firmware *firmware = NULL;
+ 	struct device *dev = &mhi_cntrl->mhi_dev->dev;
++	enum mhi_pm_state new_state;
+ 	const char *fw_name;
+ 	void *buf;
+ 	dma_addr_t dma_addr;
+@@ -508,14 +509,18 @@ error_ready_state:
+ 	}
+ 
+ error_fw_load:
+-	mhi_cntrl->pm_state = MHI_PM_FW_DL_ERR;
+-	wake_up_all(&mhi_cntrl->state_event);
++	write_lock_irq(&mhi_cntrl->pm_lock);
++	new_state = mhi_tryset_pm_state(mhi_cntrl, MHI_PM_FW_DL_ERR);
++	write_unlock_irq(&mhi_cntrl->pm_lock);
++	if (new_state == MHI_PM_FW_DL_ERR)
++		wake_up_all(&mhi_cntrl->state_event);
+ }
+ 
+ int mhi_download_amss_image(struct mhi_controller *mhi_cntrl)
+ {
+ 	struct image_info *image_info = mhi_cntrl->fbc_image;
+ 	struct device *dev = &mhi_cntrl->mhi_dev->dev;
++	enum mhi_pm_state new_state;
+ 	int ret;
+ 
+ 	if (!image_info)
+@@ -526,8 +531,11 @@ int mhi_download_amss_image(struct mhi_controller *mhi_cntrl)
+ 			       &image_info->mhi_buf[image_info->entries - 1]);
+ 	if (ret) {
+ 		dev_err(dev, "MHI did not load AMSS, ret:%d\n", ret);
+-		mhi_cntrl->pm_state = MHI_PM_FW_DL_ERR;
+-		wake_up_all(&mhi_cntrl->state_event);
++		write_lock_irq(&mhi_cntrl->pm_lock);
++		new_state = mhi_tryset_pm_state(mhi_cntrl, MHI_PM_FW_DL_ERR);
++		write_unlock_irq(&mhi_cntrl->pm_lock);
++		if (new_state == MHI_PM_FW_DL_ERR)
++			wake_up_all(&mhi_cntrl->state_event);
+ 	}
+ 
+ 	return ret;
+diff --git a/drivers/bus/mhi/host/init.c b/drivers/bus/mhi/host/init.c
+index 3d779ee6396d5..b46a0821adf90 100644
+--- a/drivers/bus/mhi/host/init.c
++++ b/drivers/bus/mhi/host/init.c
+@@ -516,6 +516,12 @@ int mhi_init_mmio(struct mhi_controller *mhi_cntrl)
+ 		return -EIO;
+ 	}
+ 
++	if (val >= mhi_cntrl->reg_len - (8 * MHI_DEV_WAKE_DB)) {
++		dev_err(dev, "CHDB offset: 0x%x is out of range: 0x%zx\n",
++			val, mhi_cntrl->reg_len - (8 * MHI_DEV_WAKE_DB));
++		return -ERANGE;
++	}
++
+ 	/* Setup wake db */
+ 	mhi_cntrl->wake_db = base + val + (8 * MHI_DEV_WAKE_DB);
+ 	mhi_cntrl->wake_set = false;
+@@ -532,6 +538,12 @@ int mhi_init_mmio(struct mhi_controller *mhi_cntrl)
+ 		return -EIO;
+ 	}
+ 
++	if (val >= mhi_cntrl->reg_len - (8 * mhi_cntrl->total_ev_rings)) {
++		dev_err(dev, "ERDB offset: 0x%x is out of range: 0x%zx\n",
++			val, mhi_cntrl->reg_len - (8 * mhi_cntrl->total_ev_rings));
++		return -ERANGE;
++	}
++
+ 	/* Setup event db address for each ev_ring */
+ 	mhi_event = mhi_cntrl->mhi_event;
+ 	for (i = 0; i < mhi_cntrl->total_ev_rings; i++, val += 8, mhi_event++) {
+diff --git a/drivers/bus/mhi/host/main.c b/drivers/bus/mhi/host/main.c
+index df0fbfee7b78b..0c3a009ed9bb0 100644
+--- a/drivers/bus/mhi/host/main.c
++++ b/drivers/bus/mhi/host/main.c
+@@ -503,7 +503,7 @@ irqreturn_t mhi_intvec_threaded_handler(int irq_number, void *priv)
+ 	}
+ 	write_unlock_irq(&mhi_cntrl->pm_lock);
+ 
+-	if (pm_state != MHI_PM_SYS_ERR_DETECT || ee == mhi_cntrl->ee)
++	if (pm_state != MHI_PM_SYS_ERR_DETECT)
+ 		goto exit_intvec;
+ 
+ 	switch (ee) {
+diff --git a/drivers/bus/mhi/host/pci_generic.c b/drivers/bus/mhi/host/pci_generic.c
+index f39657f71483c..cb5c067e78be1 100644
+--- a/drivers/bus/mhi/host/pci_generic.c
++++ b/drivers/bus/mhi/host/pci_generic.c
+@@ -344,8 +344,6 @@ static const struct mhi_channel_config mhi_foxconn_sdx55_channels[] = {
+ 	MHI_CHANNEL_CONFIG_DL(13, "MBIM", 32, 0),
+ 	MHI_CHANNEL_CONFIG_UL(32, "DUN", 32, 0),
+ 	MHI_CHANNEL_CONFIG_DL(33, "DUN", 32, 0),
+-	MHI_CHANNEL_CONFIG_UL(92, "DUN2", 32, 1),
+-	MHI_CHANNEL_CONFIG_DL(93, "DUN2", 32, 1),
+ 	MHI_CHANNEL_CONFIG_HW_UL(100, "IP_HW0_MBIM", 128, 2),
+ 	MHI_CHANNEL_CONFIG_HW_DL(101, "IP_HW0_MBIM", 128, 3),
+ };
+diff --git a/drivers/char/ipmi/Kconfig b/drivers/char/ipmi/Kconfig
+index b6c0d35fc1a5f..f4adc6feb3b22 100644
+--- a/drivers/char/ipmi/Kconfig
++++ b/drivers/char/ipmi/Kconfig
+@@ -162,7 +162,8 @@ config IPMI_KCS_BMC_SERIO
+ 
+ config ASPEED_BT_IPMI_BMC
+ 	depends on ARCH_ASPEED || COMPILE_TEST
+-	depends on REGMAP && REGMAP_MMIO && MFD_SYSCON
++	depends on MFD_SYSCON
++	select REGMAP_MMIO
+ 	tristate "BT IPMI bmc driver"
+ 	help
+ 	  Provides a driver for the BT (Block Transfer) IPMI interface
+diff --git a/drivers/char/ipmi/ipmi_ssif.c b/drivers/char/ipmi/ipmi_ssif.c
+index a5ddebb1edea4..d48061ec27dd9 100644
+--- a/drivers/char/ipmi/ipmi_ssif.c
++++ b/drivers/char/ipmi/ipmi_ssif.c
+@@ -557,8 +557,10 @@ static void retry_timeout(struct timer_list *t)
+ 
+ 	if (waiting)
+ 		start_get(ssif_info);
+-	if (resend)
++	if (resend) {
+ 		start_resend(ssif_info);
++		ssif_inc_stat(ssif_info, send_retries);
++	}
+ }
+ 
+ static void watch_timeout(struct timer_list *t)
+@@ -784,9 +786,9 @@ static void msg_done_handler(struct ssif_info *ssif_info, int result,
+ 		} else if (data[0] != (IPMI_NETFN_APP_REQUEST | 1) << 2
+ 			   || data[1] != IPMI_GET_MSG_FLAGS_CMD) {
+ 			/*
+-			 * Don't abort here, maybe it was a queued
+-			 * response to a previous command.
++			 * Recv error response, give up.
+ 			 */
++			ssif_info->ssif_state = SSIF_IDLE;
+ 			ipmi_ssif_unlock_cond(ssif_info, flags);
+ 			dev_warn(&ssif_info->client->dev,
+ 				 "Invalid response getting flags: %x %x\n",
+diff --git a/drivers/char/tpm/tpm-chip.c b/drivers/char/tpm/tpm-chip.c
+index 0601e6e5e3263..2a05d8cc0e795 100644
+--- a/drivers/char/tpm/tpm-chip.c
++++ b/drivers/char/tpm/tpm-chip.c
+@@ -682,7 +682,8 @@ EXPORT_SYMBOL_GPL(tpm_chip_register);
+ void tpm_chip_unregister(struct tpm_chip *chip)
+ {
+ 	tpm_del_legacy_sysfs(chip);
+-	if (IS_ENABLED(CONFIG_HW_RANDOM_TPM) && !tpm_is_firmware_upgrade(chip))
++	if (IS_ENABLED(CONFIG_HW_RANDOM_TPM) && !tpm_is_firmware_upgrade(chip) &&
++	    !tpm_amd_is_rng_defective(chip))
+ 		hwrng_unregister(&chip->hwrng);
+ 	tpm_bios_log_teardown(chip);
+ 	if (chip->flags & TPM_CHIP_FLAG_TPM2 && !tpm_is_firmware_upgrade(chip))
+diff --git a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c
+index 3f98e587b3e84..eecfbd7e97867 100644
+--- a/drivers/char/tpm/tpm_tis_core.c
++++ b/drivers/char/tpm/tpm_tis_core.c
+@@ -136,16 +136,27 @@ static bool check_locality(struct tpm_chip *chip, int l)
+ 	return false;
+ }
+ 
+-static int release_locality(struct tpm_chip *chip, int l)
++static int __tpm_tis_relinquish_locality(struct tpm_tis_data *priv, int l)
++{
++	tpm_tis_write8(priv, TPM_ACCESS(l), TPM_ACCESS_ACTIVE_LOCALITY);
++
++	return 0;
++}
++
++static int tpm_tis_relinquish_locality(struct tpm_chip *chip, int l)
+ {
+ 	struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);
+ 
+-	tpm_tis_write8(priv, TPM_ACCESS(l), TPM_ACCESS_ACTIVE_LOCALITY);
++	mutex_lock(&priv->locality_count_mutex);
++	priv->locality_count--;
++	if (priv->locality_count == 0)
++		__tpm_tis_relinquish_locality(priv, l);
++	mutex_unlock(&priv->locality_count_mutex);
+ 
+ 	return 0;
+ }
+ 
+-static int request_locality(struct tpm_chip *chip, int l)
++static int __tpm_tis_request_locality(struct tpm_chip *chip, int l)
+ {
+ 	struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);
+ 	unsigned long stop, timeout;
+@@ -186,6 +197,20 @@ again:
+ 	return -1;
+ }
+ 
++static int tpm_tis_request_locality(struct tpm_chip *chip, int l)
++{
++	struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);
++	int ret = 0;
++
++	mutex_lock(&priv->locality_count_mutex);
++	if (priv->locality_count == 0)
++		ret = __tpm_tis_request_locality(chip, l);
++	if (!ret)
++		priv->locality_count++;
++	mutex_unlock(&priv->locality_count_mutex);
++	return ret;
++}
++
+ static u8 tpm_tis_status(struct tpm_chip *chip)
+ {
+ 	struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);
+@@ -652,7 +677,7 @@ static int probe_itpm(struct tpm_chip *chip)
+ 	if (vendor != TPM_VID_INTEL)
+ 		return 0;
+ 
+-	if (request_locality(chip, 0) != 0)
++	if (tpm_tis_request_locality(chip, 0) != 0)
+ 		return -EBUSY;
+ 
+ 	rc = tpm_tis_send_data(chip, cmd_getticks, len);
+@@ -673,7 +698,7 @@ static int probe_itpm(struct tpm_chip *chip)
+ 
+ out:
+ 	tpm_tis_ready(chip);
+-	release_locality(chip, priv->locality);
++	tpm_tis_relinquish_locality(chip, priv->locality);
+ 
+ 	return rc;
+ }
+@@ -732,25 +757,17 @@ static irqreturn_t tis_int_handler(int dummy, void *dev_id)
+ 	return IRQ_HANDLED;
+ }
+ 
+-static int tpm_tis_gen_interrupt(struct tpm_chip *chip)
++static void tpm_tis_gen_interrupt(struct tpm_chip *chip)
+ {
+ 	const char *desc = "attempting to generate an interrupt";
+ 	u32 cap2;
+ 	cap_t cap;
+ 	int ret;
+ 
+-	ret = request_locality(chip, 0);
+-	if (ret < 0)
+-		return ret;
+-
+ 	if (chip->flags & TPM_CHIP_FLAG_TPM2)
+ 		ret = tpm2_get_tpm_pt(chip, 0x100, &cap2, desc);
+ 	else
+ 		ret = tpm1_getcap(chip, TPM_CAP_PROP_TIS_TIMEOUT, &cap, desc, 0);
+-
+-	release_locality(chip, 0);
+-
+-	return ret;
+ }
+ 
+ /* Register the IRQ and issue a command that will cause an interrupt. If an
+@@ -773,52 +790,55 @@ static int tpm_tis_probe_irq_single(struct tpm_chip *chip, u32 intmask,
+ 	}
+ 	priv->irq = irq;
+ 
++	rc = tpm_tis_request_locality(chip, 0);
++	if (rc < 0)
++		return rc;
++
+ 	rc = tpm_tis_read8(priv, TPM_INT_VECTOR(priv->locality),
+ 			   &original_int_vec);
+-	if (rc < 0)
++	if (rc < 0) {
++		tpm_tis_relinquish_locality(chip, priv->locality);
+ 		return rc;
++	}
+ 
+ 	rc = tpm_tis_write8(priv, TPM_INT_VECTOR(priv->locality), irq);
+ 	if (rc < 0)
+-		return rc;
++		goto restore_irqs;
+ 
+ 	rc = tpm_tis_read32(priv, TPM_INT_STATUS(priv->locality), &int_status);
+ 	if (rc < 0)
+-		return rc;
++		goto restore_irqs;
+ 
+ 	/* Clear all existing */
+ 	rc = tpm_tis_write32(priv, TPM_INT_STATUS(priv->locality), int_status);
+ 	if (rc < 0)
+-		return rc;
+-
++		goto restore_irqs;
+ 	/* Turn on */
+ 	rc = tpm_tis_write32(priv, TPM_INT_ENABLE(priv->locality),
+ 			     intmask | TPM_GLOBAL_INT_ENABLE);
+ 	if (rc < 0)
+-		return rc;
++		goto restore_irqs;
+ 
+ 	priv->irq_tested = false;
+ 
+ 	/* Generate an interrupt by having the core call through to
+ 	 * tpm_tis_send
+ 	 */
+-	rc = tpm_tis_gen_interrupt(chip);
+-	if (rc < 0)
+-		return rc;
++	tpm_tis_gen_interrupt(chip);
+ 
++restore_irqs:
+ 	/* tpm_tis_send will either confirm the interrupt is working or it
+ 	 * will call disable_irq which undoes all of the above.
+ 	 */
+ 	if (!(chip->flags & TPM_CHIP_FLAG_IRQ)) {
+-		rc = tpm_tis_write8(priv, original_int_vec,
+-				TPM_INT_VECTOR(priv->locality));
+-		if (rc < 0)
+-			return rc;
+-
+-		return 1;
++		tpm_tis_write8(priv, original_int_vec,
++			       TPM_INT_VECTOR(priv->locality));
++		rc = -1;
+ 	}
+ 
+-	return 0;
++	tpm_tis_relinquish_locality(chip, priv->locality);
++
++	return rc;
+ }
+ 
+ /* Try to find the IRQ the TPM is using. This is for legacy x86 systems that
+@@ -932,8 +952,8 @@ static const struct tpm_class_ops tpm_tis = {
+ 	.req_complete_mask = TPM_STS_DATA_AVAIL | TPM_STS_VALID,
+ 	.req_complete_val = TPM_STS_DATA_AVAIL | TPM_STS_VALID,
+ 	.req_canceled = tpm_tis_req_canceled,
+-	.request_locality = request_locality,
+-	.relinquish_locality = release_locality,
++	.request_locality = tpm_tis_request_locality,
++	.relinquish_locality = tpm_tis_relinquish_locality,
+ 	.clk_enable = tpm_tis_clkrun_enable,
+ };
+ 
+@@ -967,6 +987,8 @@ int tpm_tis_core_init(struct device *dev, struct tpm_tis_data *priv, int irq,
+ 	priv->timeout_min = TPM_TIMEOUT_USECS_MIN;
+ 	priv->timeout_max = TPM_TIMEOUT_USECS_MAX;
+ 	priv->phy_ops = phy_ops;
++	priv->locality_count = 0;
++	mutex_init(&priv->locality_count_mutex);
+ 
+ 	dev_set_drvdata(&chip->dev, priv);
+ 
+@@ -1013,14 +1035,14 @@ int tpm_tis_core_init(struct device *dev, struct tpm_tis_data *priv, int irq,
+ 		   TPM_INTF_DATA_AVAIL_INT | TPM_INTF_STS_VALID_INT;
+ 	intmask &= ~TPM_GLOBAL_INT_ENABLE;
+ 
+-	rc = request_locality(chip, 0);
++	rc = tpm_tis_request_locality(chip, 0);
+ 	if (rc < 0) {
+ 		rc = -ENODEV;
+ 		goto out_err;
+ 	}
+ 
+ 	tpm_tis_write32(priv, TPM_INT_ENABLE(priv->locality), intmask);
+-	release_locality(chip, 0);
++	tpm_tis_relinquish_locality(chip, 0);
+ 
+ 	rc = tpm_chip_start(chip);
+ 	if (rc)
+@@ -1080,13 +1102,13 @@ int tpm_tis_core_init(struct device *dev, struct tpm_tis_data *priv, int irq,
+ 		 * proper timeouts for the driver.
+ 		 */
+ 
+-		rc = request_locality(chip, 0);
++		rc = tpm_tis_request_locality(chip, 0);
+ 		if (rc < 0)
+ 			goto out_err;
+ 
+ 		rc = tpm_get_timeouts(chip);
+ 
+-		release_locality(chip, 0);
++		tpm_tis_relinquish_locality(chip, 0);
+ 
+ 		if (rc) {
+ 			dev_err(dev, "Could not get TPM timeouts and durations\n");
+@@ -1094,17 +1116,21 @@ int tpm_tis_core_init(struct device *dev, struct tpm_tis_data *priv, int irq,
+ 			goto out_err;
+ 		}
+ 
+-		if (irq) {
++		if (irq)
+ 			tpm_tis_probe_irq_single(chip, intmask, IRQF_SHARED,
+ 						 irq);
+-			if (!(chip->flags & TPM_CHIP_FLAG_IRQ)) {
+-				dev_err(&chip->dev, FW_BUG
++		else
++			tpm_tis_probe_irq(chip, intmask);
++
++		if (!(chip->flags & TPM_CHIP_FLAG_IRQ)) {
++			dev_err(&chip->dev, FW_BUG
+ 					"TPM interrupt not working, polling instead\n");
+ 
+-				disable_interrupts(chip);
+-			}
+-		} else {
+-			tpm_tis_probe_irq(chip, intmask);
++			rc = tpm_tis_request_locality(chip, 0);
++			if (rc < 0)
++				goto out_err;
++			disable_interrupts(chip);
++			tpm_tis_relinquish_locality(chip, 0);
+ 		}
+ 	}
+ 
+@@ -1165,28 +1191,27 @@ int tpm_tis_resume(struct device *dev)
+ 	struct tpm_chip *chip = dev_get_drvdata(dev);
+ 	int ret;
+ 
++	ret = tpm_tis_request_locality(chip, 0);
++	if (ret < 0)
++		return ret;
++
+ 	if (chip->flags & TPM_CHIP_FLAG_IRQ)
+ 		tpm_tis_reenable_interrupts(chip);
+ 
+ 	ret = tpm_pm_resume(dev);
+ 	if (ret)
+-		return ret;
++		goto out;
+ 
+ 	/*
+ 	 * TPM 1.2 requires self-test on resume. This function actually returns
+ 	 * an error code but for unknown reason it isn't handled.
+ 	 */
+-	if (!(chip->flags & TPM_CHIP_FLAG_TPM2)) {
+-		ret = request_locality(chip, 0);
+-		if (ret < 0)
+-			return ret;
+-
++	if (!(chip->flags & TPM_CHIP_FLAG_TPM2))
+ 		tpm1_do_selftest(chip);
++out:
++	tpm_tis_relinquish_locality(chip, 0);
+ 
+-		release_locality(chip, 0);
+-	}
+-
+-	return 0;
++	return ret;
+ }
+ EXPORT_SYMBOL_GPL(tpm_tis_resume);
+ #endif
+diff --git a/drivers/char/tpm/tpm_tis_core.h b/drivers/char/tpm/tpm_tis_core.h
+index b68479e0de10f..1d51d5168fb6e 100644
+--- a/drivers/char/tpm/tpm_tis_core.h
++++ b/drivers/char/tpm/tpm_tis_core.h
+@@ -91,6 +91,8 @@ enum tpm_tis_flags {
+ 
+ struct tpm_tis_data {
+ 	u16 manufacturer_id;
++	struct mutex locality_count_mutex;
++	unsigned int locality_count;
+ 	int locality;
+ 	int irq;
+ 	bool irq_tested;
+diff --git a/drivers/clk/at91/clk-sam9x60-pll.c b/drivers/clk/at91/clk-sam9x60-pll.c
+index d757003004cbb..0882ed01d5c27 100644
+--- a/drivers/clk/at91/clk-sam9x60-pll.c
++++ b/drivers/clk/at91/clk-sam9x60-pll.c
+@@ -668,7 +668,7 @@ sam9x60_clk_register_frac_pll(struct regmap *regmap, spinlock_t *lock,
+ 
+ 		ret = sam9x60_frac_pll_compute_mul_frac(&frac->core, FCORE_MIN,
+ 							parent_rate, true);
+-		if (ret <= 0) {
++		if (ret < 0) {
+ 			hw = ERR_PTR(ret);
+ 			goto free;
+ 		}
+diff --git a/drivers/clk/clk-conf.c b/drivers/clk/clk-conf.c
+index 2ef819606c417..1a4e6340f95ce 100644
+--- a/drivers/clk/clk-conf.c
++++ b/drivers/clk/clk-conf.c
+@@ -33,9 +33,12 @@ static int __set_clk_parents(struct device_node *node, bool clk_supplier)
+ 			else
+ 				return rc;
+ 		}
+-		if (clkspec.np == node && !clk_supplier)
++		if (clkspec.np == node && !clk_supplier) {
++			of_node_put(clkspec.np);
+ 			return 0;
++		}
+ 		pclk = of_clk_get_from_provider(&clkspec);
++		of_node_put(clkspec.np);
+ 		if (IS_ERR(pclk)) {
+ 			if (PTR_ERR(pclk) != -EPROBE_DEFER)
+ 				pr_warn("clk: couldn't get parent clock %d for %pOF\n",
+@@ -48,10 +51,12 @@ static int __set_clk_parents(struct device_node *node, bool clk_supplier)
+ 		if (rc < 0)
+ 			goto err;
+ 		if (clkspec.np == node && !clk_supplier) {
++			of_node_put(clkspec.np);
+ 			rc = 0;
+ 			goto err;
+ 		}
+ 		clk = of_clk_get_from_provider(&clkspec);
++		of_node_put(clkspec.np);
+ 		if (IS_ERR(clk)) {
+ 			if (PTR_ERR(clk) != -EPROBE_DEFER)
+ 				pr_warn("clk: couldn't get assigned clock %d for %pOF\n",
+@@ -93,10 +98,13 @@ static int __set_clk_rates(struct device_node *node, bool clk_supplier)
+ 				else
+ 					return rc;
+ 			}
+-			if (clkspec.np == node && !clk_supplier)
++			if (clkspec.np == node && !clk_supplier) {
++				of_node_put(clkspec.np);
+ 				return 0;
++			}
+ 
+ 			clk = of_clk_get_from_provider(&clkspec);
++			of_node_put(clkspec.np);
+ 			if (IS_ERR(clk)) {
+ 				if (PTR_ERR(clk) != -EPROBE_DEFER)
+ 					pr_warn("clk: couldn't get clock %d for %pOF\n",
+diff --git a/drivers/clk/imx/clk-fracn-gppll.c b/drivers/clk/imx/clk-fracn-gppll.c
+index a2aaa14fc1aef..f6674110a88e0 100644
+--- a/drivers/clk/imx/clk-fracn-gppll.c
++++ b/drivers/clk/imx/clk-fracn-gppll.c
+@@ -15,6 +15,7 @@
+ #include "clk.h"
+ 
+ #define PLL_CTRL		0x0
++#define HW_CTRL_SEL		BIT(16)
+ #define CLKMUX_BYPASS		BIT(2)
+ #define CLKMUX_EN		BIT(1)
+ #define POWERUP_MASK		BIT(0)
+@@ -60,18 +61,20 @@ struct clk_fracn_gppll {
+ };
+ 
+ /*
+- * Fvco = Fref * (MFI + MFN / MFD)
+- * Fout = Fvco / (rdiv * odiv)
++ * Fvco = (Fref / rdiv) * (MFI + MFN / MFD)
++ * Fout = Fvco / odiv
++ * The (Fref / rdiv) should be in range 20MHz to 40MHz
++ * The Fvco should be in range 2.5Ghz to 5Ghz
+  */
+ static const struct imx_fracn_gppll_rate_table fracn_tbl[] = {
+-	PLL_FRACN_GP(650000000U, 81, 0, 1, 0, 3),
++	PLL_FRACN_GP(650000000U, 162, 50, 100, 0, 6),
+ 	PLL_FRACN_GP(594000000U, 198, 0, 1, 0, 8),
+-	PLL_FRACN_GP(560000000U, 70, 0, 1, 0, 3),
+-	PLL_FRACN_GP(498000000U, 83, 0, 1, 0, 4),
++	PLL_FRACN_GP(560000000U, 140, 0, 1, 0, 6),
++	PLL_FRACN_GP(498000000U, 166, 0, 1, 0, 8),
+ 	PLL_FRACN_GP(484000000U, 121, 0, 1, 0, 6),
+ 	PLL_FRACN_GP(445333333U, 167, 0, 1, 0, 9),
+-	PLL_FRACN_GP(400000000U, 50, 0, 1, 0, 3),
+-	PLL_FRACN_GP(393216000U, 81, 92, 100, 0, 5)
++	PLL_FRACN_GP(400000000U, 200, 0, 1, 0, 12),
++	PLL_FRACN_GP(393216000U, 163, 84, 100, 0, 10)
+ };
+ 
+ struct imx_fracn_gppll_clk imx_fracn_gppll = {
+@@ -191,6 +194,11 @@ static int clk_fracn_gppll_set_rate(struct clk_hw *hw, unsigned long drate,
+ 
+ 	rate = imx_get_pll_settings(pll, drate);
+ 
++	/* Hardware control select disable. PLL is control by register */
++	tmp = readl_relaxed(pll->base + PLL_CTRL);
++	tmp &= ~HW_CTRL_SEL;
++	writel_relaxed(tmp, pll->base + PLL_CTRL);
++
+ 	/* Disable output */
+ 	tmp = readl_relaxed(pll->base + PLL_CTRL);
+ 	tmp &= ~CLKMUX_EN;
+diff --git a/drivers/clk/imx/clk-imx8ulp.c b/drivers/clk/imx/clk-imx8ulp.c
+index a07df3b44703f..89121037a8f0e 100644
+--- a/drivers/clk/imx/clk-imx8ulp.c
++++ b/drivers/clk/imx/clk-imx8ulp.c
+@@ -200,8 +200,8 @@ static int imx8ulp_clk_cgc1_init(struct platform_device *pdev)
+ 	clks[IMX8ULP_CLK_NIC_AD_DIVPLAT] = imx_clk_hw_divider_flags("nic_ad_divplat", "nic_sel", base + 0x34, 21, 6, CLK_SET_RATE_PARENT | CLK_IS_CRITICAL);
+ 	clks[IMX8ULP_CLK_NIC_PER_DIVPLAT] = imx_clk_hw_divider_flags("nic_per_divplat", "nic_ad_divplat", base + 0x34, 14, 6, CLK_SET_RATE_PARENT | CLK_IS_CRITICAL);
+ 	clks[IMX8ULP_CLK_XBAR_AD_DIVPLAT] = imx_clk_hw_divider_flags("xbar_ad_divplat", "nic_ad_divplat", base + 0x38, 14, 6, CLK_SET_RATE_PARENT | CLK_IS_CRITICAL);
+-	clks[IMX8ULP_CLK_XBAR_DIVBUS] = imx_clk_hw_divider_flags("xbar_divbus", "nic_ad_divplat", base + 0x38, 7, 6, CLK_SET_RATE_PARENT | CLK_IS_CRITICAL);
+-	clks[IMX8ULP_CLK_XBAR_AD_SLOW] = imx_clk_hw_divider_flags("xbar_ad_slow", "nic_ad_divplat", base + 0x38, 0, 6, CLK_SET_RATE_PARENT | CLK_IS_CRITICAL);
++	clks[IMX8ULP_CLK_XBAR_DIVBUS] = imx_clk_hw_divider_flags("xbar_divbus", "xbar_ad_divplat", base + 0x38, 7, 6, CLK_SET_RATE_PARENT | CLK_IS_CRITICAL);
++	clks[IMX8ULP_CLK_XBAR_AD_SLOW] = imx_clk_hw_divider_flags("xbar_ad_slow", "xbar_divbus", base + 0x38, 0, 6, CLK_SET_RATE_PARENT | CLK_IS_CRITICAL);
+ 
+ 	clks[IMX8ULP_CLK_SOSC_DIV1_GATE] = imx_clk_hw_gate_dis("sosc_div1_gate", "sosc", base + 0x108, 7);
+ 	clks[IMX8ULP_CLK_SOSC_DIV2_GATE] = imx_clk_hw_gate_dis("sosc_div2_gate", "sosc", base + 0x108, 15);
+diff --git a/drivers/clk/mediatek/clk-mt2701-aud.c b/drivers/clk/mediatek/clk-mt2701-aud.c
+index 1a32d8b7db84f..21f7cc106bbe5 100644
+--- a/drivers/clk/mediatek/clk-mt2701-aud.c
++++ b/drivers/clk/mediatek/clk-mt2701-aud.c
+@@ -15,41 +15,17 @@
+ 
+ #include <dt-bindings/clock/mt2701-clk.h>
+ 
+-#define GATE_AUDIO0(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &audio0_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_no_setclr,	\
+-	}
++#define GATE_AUDIO0(_id, _name, _parent, _shift)		\
++	GATE_MTK(_id, _name, _parent, &audio0_cg_regs, _shift, &mtk_clk_gate_ops_no_setclr)
+ 
+-#define GATE_AUDIO1(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &audio1_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_no_setclr,	\
+-	}
++#define GATE_AUDIO1(_id, _name, _parent, _shift)		\
++	GATE_MTK(_id, _name, _parent, &audio1_cg_regs, _shift, &mtk_clk_gate_ops_no_setclr)
+ 
+-#define GATE_AUDIO2(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &audio2_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_no_setclr,	\
+-	}
++#define GATE_AUDIO2(_id, _name, _parent, _shift)		\
++	GATE_MTK(_id, _name, _parent, &audio2_cg_regs, _shift, &mtk_clk_gate_ops_no_setclr)
+ 
+-#define GATE_AUDIO3(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &audio3_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_no_setclr,	\
+-	}
++#define GATE_AUDIO3(_id, _name, _parent, _shift)		\
++	GATE_MTK(_id, _name, _parent, &audio3_cg_regs, _shift, &mtk_clk_gate_ops_no_setclr)
+ 
+ static const struct mtk_gate_regs audio0_cg_regs = {
+ 	.set_ofs = 0x0,
+diff --git a/drivers/clk/mediatek/clk-mt2701-bdp.c b/drivers/clk/mediatek/clk-mt2701-bdp.c
+index 435ed4819d563..b0f0572079452 100644
+--- a/drivers/clk/mediatek/clk-mt2701-bdp.c
++++ b/drivers/clk/mediatek/clk-mt2701-bdp.c
+@@ -24,23 +24,11 @@ static const struct mtk_gate_regs bdp1_cg_regs = {
+ 	.sta_ofs = 0x0110,
+ };
+ 
+-#define GATE_BDP0(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &bdp0_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_setclr_inv,	\
+-	}
++#define GATE_BDP0(_id, _name, _parent, _shift)			\
++	GATE_MTK(_id, _name, _parent, &bdp0_cg_regs, _shift, &mtk_clk_gate_ops_setclr_inv)
+ 
+-#define GATE_BDP1(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &bdp1_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_setclr_inv,	\
+-	}
++#define GATE_BDP1(_id, _name, _parent, _shift)			\
++	GATE_MTK(_id, _name, _parent, &bdp1_cg_regs, _shift, &mtk_clk_gate_ops_setclr_inv)
+ 
+ static const struct mtk_gate bdp_clks[] = {
+ 	GATE_BDP0(CLK_BDP_BRG_BA, "brg_baclk", "mm_sel", 0),
+diff --git a/drivers/clk/mediatek/clk-mt2701-eth.c b/drivers/clk/mediatek/clk-mt2701-eth.c
+index f3cb78e7f6e9e..4c830ebdd7613 100644
+--- a/drivers/clk/mediatek/clk-mt2701-eth.c
++++ b/drivers/clk/mediatek/clk-mt2701-eth.c
+@@ -16,14 +16,8 @@ static const struct mtk_gate_regs eth_cg_regs = {
+ 	.sta_ofs = 0x0030,
+ };
+ 
+-#define GATE_ETH(_id, _name, _parent, _shift) {		\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &eth_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_no_setclr_inv,	\
+-	}
++#define GATE_ETH(_id, _name, _parent, _shift)			\
++	GATE_MTK(_id, _name, _parent, &eth_cg_regs, _shift, &mtk_clk_gate_ops_no_setclr_inv)
+ 
+ static const struct mtk_gate eth_clks[] = {
+ 	GATE_DUMMY(CLK_DUMMY, "eth_dummy"),
+diff --git a/drivers/clk/mediatek/clk-mt2701-g3d.c b/drivers/clk/mediatek/clk-mt2701-g3d.c
+index 499a170ba5f92..ae094046890aa 100644
+--- a/drivers/clk/mediatek/clk-mt2701-g3d.c
++++ b/drivers/clk/mediatek/clk-mt2701-g3d.c
+@@ -16,14 +16,8 @@
+ 
+ #include <dt-bindings/clock/mt2701-clk.h>
+ 
+-#define GATE_G3D(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &g3d_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_setclr,	\
+-	}
++#define GATE_G3D(_id, _name, _parent, _shift)				\
++	GATE_MTK(_id, _name, _parent, &g3d_cg_regs, _shift, &mtk_clk_gate_ops_setclr)
+ 
+ static const struct mtk_gate_regs g3d_cg_regs = {
+ 	.sta_ofs = 0x0,
+diff --git a/drivers/clk/mediatek/clk-mt2701-hif.c b/drivers/clk/mediatek/clk-mt2701-hif.c
+index d5465d7829935..3583bd1240d55 100644
+--- a/drivers/clk/mediatek/clk-mt2701-hif.c
++++ b/drivers/clk/mediatek/clk-mt2701-hif.c
+@@ -16,14 +16,8 @@ static const struct mtk_gate_regs hif_cg_regs = {
+ 	.sta_ofs = 0x0030,
+ };
+ 
+-#define GATE_HIF(_id, _name, _parent, _shift) {		\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &hif_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_no_setclr_inv,	\
+-	}
++#define GATE_HIF(_id, _name, _parent, _shift)				\
++	GATE_MTK(_id, _name, _parent, &hif_cg_regs, _shift, &mtk_clk_gate_ops_no_setclr_inv)
+ 
+ static const struct mtk_gate hif_clks[] = {
+ 	GATE_DUMMY(CLK_DUMMY, "hif_dummy"),
+diff --git a/drivers/clk/mediatek/clk-mt2701-img.c b/drivers/clk/mediatek/clk-mt2701-img.c
+index 7e53deb7f9905..eb172473f0755 100644
+--- a/drivers/clk/mediatek/clk-mt2701-img.c
++++ b/drivers/clk/mediatek/clk-mt2701-img.c
+@@ -18,14 +18,8 @@ static const struct mtk_gate_regs img_cg_regs = {
+ 	.sta_ofs = 0x0000,
+ };
+ 
+-#define GATE_IMG(_id, _name, _parent, _shift) {		\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &img_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_setclr,	\
+-	}
++#define GATE_IMG(_id, _name, _parent, _shift)			\
++	GATE_MTK(_id, _name, _parent, &img_cg_regs, _shift, &mtk_clk_gate_ops_setclr)
+ 
+ static const struct mtk_gate img_clks[] = {
+ 	GATE_IMG(CLK_IMG_SMI_COMM, "img_smi_comm", "mm_sel", 0),
+diff --git a/drivers/clk/mediatek/clk-mt2701-mm.c b/drivers/clk/mediatek/clk-mt2701-mm.c
+index 23d5ddcc1d372..f4885dffb324f 100644
+--- a/drivers/clk/mediatek/clk-mt2701-mm.c
++++ b/drivers/clk/mediatek/clk-mt2701-mm.c
+@@ -24,23 +24,11 @@ static const struct mtk_gate_regs disp1_cg_regs = {
+ 	.sta_ofs = 0x0110,
+ };
+ 
+-#define GATE_DISP0(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &disp0_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_setclr,	\
+-	}
++#define GATE_DISP0(_id, _name, _parent, _shift)	\
++	GATE_MTK(_id, _name, _parent, &disp0_cg_regs, _shift, &mtk_clk_gate_ops_setclr)
+ 
+-#define GATE_DISP1(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &disp1_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_setclr,	\
+-	}
++#define GATE_DISP1(_id, _name, _parent, _shift)	\
++	GATE_MTK(_id, _name, _parent, &disp1_cg_regs, _shift, &mtk_clk_gate_ops_setclr)
+ 
+ static const struct mtk_gate mm_clks[] = {
+ 	GATE_DISP0(CLK_MM_SMI_COMMON, "mm_smi_comm", "mm_sel", 0),
+diff --git a/drivers/clk/mediatek/clk-mt2701-vdec.c b/drivers/clk/mediatek/clk-mt2701-vdec.c
+index d3089da0ab62e..0f07c5d731df6 100644
+--- a/drivers/clk/mediatek/clk-mt2701-vdec.c
++++ b/drivers/clk/mediatek/clk-mt2701-vdec.c
+@@ -24,23 +24,11 @@ static const struct mtk_gate_regs vdec1_cg_regs = {
+ 	.sta_ofs = 0x0008,
+ };
+ 
+-#define GATE_VDEC0(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &vdec0_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_setclr_inv,	\
+-	}
++#define GATE_VDEC0(_id, _name, _parent, _shift)				\
++	GATE_MTK(_id, _name, _parent, &vdec0_cg_regs, _shift, &mtk_clk_gate_ops_setclr_inv)
+ 
+-#define GATE_VDEC1(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &vdec1_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_setclr_inv,	\
+-	}
++#define GATE_VDEC1(_id, _name, _parent, _shift)				\
++	GATE_MTK(_id, _name, _parent, &vdec1_cg_regs, _shift, &mtk_clk_gate_ops_setclr_inv)
+ 
+ static const struct mtk_gate vdec_clks[] = {
+ 	GATE_VDEC0(CLK_VDEC_CKGEN, "vdec_cken", "vdec_sel", 0),
+diff --git a/drivers/clk/mediatek/clk-mt2701.c b/drivers/clk/mediatek/clk-mt2701.c
+index 06ca81359d350..dfe328f7a44b2 100644
+--- a/drivers/clk/mediatek/clk-mt2701.c
++++ b/drivers/clk/mediatek/clk-mt2701.c
+@@ -636,14 +636,8 @@ static const struct mtk_gate_regs top_aud_cg_regs = {
+ 	.sta_ofs = 0x012C,
+ };
+ 
+-#define GATE_TOP_AUD(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &top_aud_cg_regs,		\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_no_setclr,	\
+-	}
++#define GATE_TOP_AUD(_id, _name, _parent, _shift)			\
++	GATE_MTK(_id, _name, _parent, &top_aud_cg_regs, _shift, &mtk_clk_gate_ops_no_setclr)
+ 
+ static const struct mtk_gate top_clks[] = {
+ 	GATE_TOP_AUD(CLK_TOP_AUD_48K_TIMING, "a1sys_hp_ck", "aud_mux1_div",
+@@ -702,14 +696,8 @@ static const struct mtk_gate_regs infra_cg_regs = {
+ 	.sta_ofs = 0x0048,
+ };
+ 
+-#define GATE_ICG(_id, _name, _parent, _shift) {		\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &infra_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_setclr,	\
+-	}
++#define GATE_ICG(_id, _name, _parent, _shift)				\
++	GATE_MTK(_id, _name, _parent, &infra_cg_regs, _shift, &mtk_clk_gate_ops_setclr)
+ 
+ static const struct mtk_gate infra_clks[] = {
+ 	GATE_ICG(CLK_INFRA_DBG, "dbgclk", "axi_sel", 0),
+@@ -823,23 +811,11 @@ static const struct mtk_gate_regs peri1_cg_regs = {
+ 	.sta_ofs = 0x001c,
+ };
+ 
+-#define GATE_PERI0(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &peri0_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_setclr,	\
+-	}
++#define GATE_PERI0(_id, _name, _parent, _shift)				\
++	GATE_MTK(_id, _name, _parent, &peri0_cg_regs, _shift, &mtk_clk_gate_ops_setclr)
+ 
+-#define GATE_PERI1(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &peri1_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_setclr,	\
+-	}
++#define GATE_PERI1(_id, _name, _parent, _shift)				\
++	GATE_MTK(_id, _name, _parent, &peri1_cg_regs, _shift, &mtk_clk_gate_ops_setclr)
+ 
+ static const struct mtk_gate peri_clks[] = {
+ 	GATE_PERI0(CLK_PERI_USB0_MCU, "usb0_mcu_ck", "axi_sel", 31),
+diff --git a/drivers/clk/mediatek/clk-mt2712-bdp.c b/drivers/clk/mediatek/clk-mt2712-bdp.c
+index 684d03e9f6de1..5e668651dd901 100644
+--- a/drivers/clk/mediatek/clk-mt2712-bdp.c
++++ b/drivers/clk/mediatek/clk-mt2712-bdp.c
+@@ -18,14 +18,8 @@ static const struct mtk_gate_regs bdp_cg_regs = {
+ 	.sta_ofs = 0x100,
+ };
+ 
+-#define GATE_BDP(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &bdp_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_no_setclr,	\
+-	}
++#define GATE_BDP(_id, _name, _parent, _shift)			\
++	GATE_MTK(_id, _name, _parent, &bdp_cg_regs, _shift, &mtk_clk_gate_ops_no_setclr)
+ 
+ static const struct mtk_gate bdp_clks[] = {
+ 	GATE_BDP(CLK_BDP_BRIDGE_B, "bdp_bridge_b", "mm_sel", 0),
+diff --git a/drivers/clk/mediatek/clk-mt2712-img.c b/drivers/clk/mediatek/clk-mt2712-img.c
+index 335049cdc856c..3ffa51384e6b2 100644
+--- a/drivers/clk/mediatek/clk-mt2712-img.c
++++ b/drivers/clk/mediatek/clk-mt2712-img.c
+@@ -18,14 +18,8 @@ static const struct mtk_gate_regs img_cg_regs = {
+ 	.sta_ofs = 0x0,
+ };
+ 
+-#define GATE_IMG(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &img_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_no_setclr,	\
+-	}
++#define GATE_IMG(_id, _name, _parent, _shift)			\
++	GATE_MTK(_id, _name, _parent, &img_cg_regs, _shift, &mtk_clk_gate_ops_no_setclr)
+ 
+ static const struct mtk_gate img_clks[] = {
+ 	GATE_IMG(CLK_IMG_SMI_LARB2, "img_smi_larb2", "mm_sel", 0),
+diff --git a/drivers/clk/mediatek/clk-mt2712-jpgdec.c b/drivers/clk/mediatek/clk-mt2712-jpgdec.c
+index 07ba7c5e80aff..8c768d5ce24d5 100644
+--- a/drivers/clk/mediatek/clk-mt2712-jpgdec.c
++++ b/drivers/clk/mediatek/clk-mt2712-jpgdec.c
+@@ -18,14 +18,8 @@ static const struct mtk_gate_regs jpgdec_cg_regs = {
+ 	.sta_ofs = 0x0,
+ };
+ 
+-#define GATE_JPGDEC(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &jpgdec_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_setclr_inv,	\
+-	}
++#define GATE_JPGDEC(_id, _name, _parent, _shift)			\
++	GATE_MTK(_id, _name, _parent, &jpgdec_cg_regs, _shift, &mtk_clk_gate_ops_setclr_inv)
+ 
+ static const struct mtk_gate jpgdec_clks[] = {
+ 	GATE_JPGDEC(CLK_JPGDEC_JPGDEC1, "jpgdec_jpgdec1", "jpgdec_sel", 0),
+diff --git a/drivers/clk/mediatek/clk-mt2712-mfg.c b/drivers/clk/mediatek/clk-mt2712-mfg.c
+index 42f8cf3ecf4cb..8949315c2dd20 100644
+--- a/drivers/clk/mediatek/clk-mt2712-mfg.c
++++ b/drivers/clk/mediatek/clk-mt2712-mfg.c
+@@ -18,14 +18,8 @@ static const struct mtk_gate_regs mfg_cg_regs = {
+ 	.sta_ofs = 0x0,
+ };
+ 
+-#define GATE_MFG(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &mfg_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_setclr,	\
+-	}
++#define GATE_MFG(_id, _name, _parent, _shift)			\
++	GATE_MTK(_id, _name, _parent, &mfg_cg_regs, _shift, &mtk_clk_gate_ops_setclr)
+ 
+ static const struct mtk_gate mfg_clks[] = {
+ 	GATE_MFG(CLK_MFG_BG3D, "mfg_bg3d", "mfg_sel", 0),
+diff --git a/drivers/clk/mediatek/clk-mt2712-mm.c b/drivers/clk/mediatek/clk-mt2712-mm.c
+index 25b8af640c128..e5264f1ce60d0 100644
+--- a/drivers/clk/mediatek/clk-mt2712-mm.c
++++ b/drivers/clk/mediatek/clk-mt2712-mm.c
+@@ -30,32 +30,14 @@ static const struct mtk_gate_regs mm2_cg_regs = {
+ 	.sta_ofs = 0x220,
+ };
+ 
+-#define GATE_MM0(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &mm0_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_setclr,	\
+-	}
+-
+-#define GATE_MM1(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &mm1_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_setclr,	\
+-	}
+-
+-#define GATE_MM2(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &mm2_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_setclr,	\
+-	}
++#define GATE_MM0(_id, _name, _parent, _shift)	\
++	GATE_MTK(_id, _name, _parent, &mm0_cg_regs, _shift, &mtk_clk_gate_ops_setclr)
++
++#define GATE_MM1(_id, _name, _parent, _shift)	\
++	GATE_MTK(_id, _name, _parent, &mm1_cg_regs, _shift, &mtk_clk_gate_ops_setclr)
++
++#define GATE_MM2(_id, _name, _parent, _shift)	\
++	GATE_MTK(_id, _name, _parent, &mm2_cg_regs, _shift, &mtk_clk_gate_ops_setclr)
+ 
+ static const struct mtk_gate mm_clks[] = {
+ 	/* MM0 */
+diff --git a/drivers/clk/mediatek/clk-mt2712-vdec.c b/drivers/clk/mediatek/clk-mt2712-vdec.c
+index 6296ed5c5b555..572290dd43c87 100644
+--- a/drivers/clk/mediatek/clk-mt2712-vdec.c
++++ b/drivers/clk/mediatek/clk-mt2712-vdec.c
+@@ -24,23 +24,11 @@ static const struct mtk_gate_regs vdec1_cg_regs = {
+ 	.sta_ofs = 0x8,
+ };
+ 
+-#define GATE_VDEC0(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &vdec0_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_setclr_inv,	\
+-	}
++#define GATE_VDEC0(_id, _name, _parent, _shift)				\
++	GATE_MTK(_id, _name, _parent, &vdec0_cg_regs, _shift, &mtk_clk_gate_ops_setclr_inv)
+ 
+-#define GATE_VDEC1(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &vdec1_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_setclr_inv,	\
+-	}
++#define GATE_VDEC1(_id, _name, _parent, _shift)				\
++	GATE_MTK(_id, _name, _parent, &vdec1_cg_regs, _shift, &mtk_clk_gate_ops_setclr_inv)
+ 
+ static const struct mtk_gate vdec_clks[] = {
+ 	/* VDEC0 */
+diff --git a/drivers/clk/mediatek/clk-mt2712-venc.c b/drivers/clk/mediatek/clk-mt2712-venc.c
+index b9bfc35de629c..9588eb03016eb 100644
+--- a/drivers/clk/mediatek/clk-mt2712-venc.c
++++ b/drivers/clk/mediatek/clk-mt2712-venc.c
+@@ -18,14 +18,8 @@ static const struct mtk_gate_regs venc_cg_regs = {
+ 	.sta_ofs = 0x0,
+ };
+ 
+-#define GATE_VENC(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &venc_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_setclr_inv,	\
+-	}
++#define GATE_VENC(_id, _name, _parent, _shift)				\
++	GATE_MTK(_id, _name, _parent, &venc_cg_regs, _shift, &mtk_clk_gate_ops_setclr_inv)
+ 
+ static const struct mtk_gate venc_clks[] = {
+ 	GATE_VENC(CLK_VENC_SMI_COMMON_CON, "venc_smi", "mm_sel", 0),
+diff --git a/drivers/clk/mediatek/clk-mt2712.c b/drivers/clk/mediatek/clk-mt2712.c
+index 94f8fc2a4f7bd..79cdb1e72ec52 100644
+--- a/drivers/clk/mediatek/clk-mt2712.c
++++ b/drivers/clk/mediatek/clk-mt2712.c
+@@ -958,23 +958,11 @@ static const struct mtk_gate_regs top1_cg_regs = {
+ 	.sta_ofs = 0x424,
+ };
+ 
+-#define GATE_TOP0(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &top0_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_no_setclr,	\
+-	}
++#define GATE_TOP0(_id, _name, _parent, _shift)				\
++	GATE_MTK(_id, _name, _parent, &top0_cg_regs, _shift, &mtk_clk_gate_ops_no_setclr)
+ 
+-#define GATE_TOP1(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &top1_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_no_setclr_inv,	\
+-	}
++#define GATE_TOP1(_id, _name, _parent, _shift)				\
++	GATE_MTK(_id, _name, _parent, &top1_cg_regs, _shift, &mtk_clk_gate_ops_no_setclr_inv)
+ 
+ static const struct mtk_gate top_clks[] = {
+ 	/* TOP0 */
+@@ -998,14 +986,8 @@ static const struct mtk_gate_regs infra_cg_regs = {
+ 	.sta_ofs = 0x48,
+ };
+ 
+-#define GATE_INFRA(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &infra_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_setclr,	\
+-	}
++#define GATE_INFRA(_id, _name, _parent, _shift)				\
++	GATE_MTK(_id, _name, _parent, &infra_cg_regs, _shift, &mtk_clk_gate_ops_setclr)
+ 
+ static const struct mtk_gate infra_clks[] = {
+ 	GATE_INFRA(CLK_INFRA_DBGCLK, "infra_dbgclk", "axi_sel", 0),
+@@ -1035,32 +1017,14 @@ static const struct mtk_gate_regs peri2_cg_regs = {
+ 	.sta_ofs = 0x42c,
+ };
+ 
+-#define GATE_PERI0(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &peri0_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_setclr,	\
+-	}
++#define GATE_PERI0(_id, _name, _parent, _shift)				\
++	GATE_MTK(_id, _name, _parent, &peri0_cg_regs, _shift, &mtk_clk_gate_ops_setclr)
+ 
+-#define GATE_PERI1(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &peri1_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_setclr,	\
+-	}
++#define GATE_PERI1(_id, _name, _parent, _shift)				\
++	GATE_MTK(_id, _name, _parent, &peri1_cg_regs, _shift, &mtk_clk_gate_ops_setclr)
+ 
+-#define GATE_PERI2(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &peri2_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_no_setclr_inv,	\
+-	}
++#define GATE_PERI2(_id, _name, _parent, _shift)				\
++	GATE_MTK(_id, _name, _parent, &peri2_cg_regs, _shift, &mtk_clk_gate_ops_no_setclr_inv)
+ 
+ static const struct mtk_gate peri_clks[] = {
+ 	/* PERI0 */
+@@ -1283,15 +1247,25 @@ static int clk_mt2712_apmixed_probe(struct platform_device *pdev)
+ 	struct device_node *node = pdev->dev.of_node;
+ 
+ 	clk_data = mtk_alloc_clk_data(CLK_APMIXED_NR_CLK);
++	if (!clk_data)
++		return -ENOMEM;
+ 
+-	mtk_clk_register_plls(node, plls, ARRAY_SIZE(plls), clk_data);
++	r = mtk_clk_register_plls(node, plls, ARRAY_SIZE(plls), clk_data);
++	if (r)
++		goto free_clk_data;
+ 
+ 	r = of_clk_add_hw_provider(node, of_clk_hw_onecell_get, clk_data);
++	if (r) {
++		dev_err(&pdev->dev, "Cannot register clock provider: %d\n", r);
++		goto unregister_plls;
++	}
+ 
+-	if (r != 0)
+-		pr_err("%s(): could not register clock provider: %d\n",
+-			__func__, r);
++	return 0;
+ 
++unregister_plls:
++	mtk_clk_unregister_plls(plls, ARRAY_SIZE(plls), clk_data);
++free_clk_data:
++	mtk_free_clk_data(clk_data);
+ 	return r;
+ }
+ 
+diff --git a/drivers/clk/mediatek/clk-mt6765-audio.c b/drivers/clk/mediatek/clk-mt6765-audio.c
+index 0aa6c0d352ca5..5682e0302eee2 100644
+--- a/drivers/clk/mediatek/clk-mt6765-audio.c
++++ b/drivers/clk/mediatek/clk-mt6765-audio.c
+@@ -24,23 +24,11 @@ static const struct mtk_gate_regs audio1_cg_regs = {
+ 	.sta_ofs = 0x4,
+ };
+ 
+-#define GATE_AUDIO0(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &audio0_cg_regs,		\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_no_setclr,	\
+-	}
++#define GATE_AUDIO0(_id, _name, _parent, _shift)		\
++	GATE_MTK(_id, _name, _parent, &audio0_cg_regs, _shift, &mtk_clk_gate_ops_no_setclr)
+ 
+-#define GATE_AUDIO1(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &audio1_cg_regs,		\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_no_setclr,	\
+-	}
++#define GATE_AUDIO1(_id, _name, _parent, _shift)		\
++	GATE_MTK(_id, _name, _parent, &audio1_cg_regs, _shift, &mtk_clk_gate_ops_no_setclr)
+ 
+ static const struct mtk_gate audio_clks[] = {
+ 	/* AUDIO0 */
+diff --git a/drivers/clk/mediatek/clk-mt6765-cam.c b/drivers/clk/mediatek/clk-mt6765-cam.c
+index 25f2bef38126e..6e7d192c19cb0 100644
+--- a/drivers/clk/mediatek/clk-mt6765-cam.c
++++ b/drivers/clk/mediatek/clk-mt6765-cam.c
+@@ -18,14 +18,8 @@ static const struct mtk_gate_regs cam_cg_regs = {
+ 	.sta_ofs = 0x0,
+ };
+ 
+-#define GATE_CAM(_id, _name, _parent, _shift) {		\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &cam_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_setclr,	\
+-	}
++#define GATE_CAM(_id, _name, _parent, _shift)				\
++	GATE_MTK(_id, _name, _parent, &cam_cg_regs, _shift, &mtk_clk_gate_ops_setclr)
+ 
+ static const struct mtk_gate cam_clks[] = {
+ 	GATE_CAM(CLK_CAM_LARB3, "cam_larb3", "mm_ck", 0),
+diff --git a/drivers/clk/mediatek/clk-mt6765-img.c b/drivers/clk/mediatek/clk-mt6765-img.c
+index a62303ef4f41d..cfbc907988aff 100644
+--- a/drivers/clk/mediatek/clk-mt6765-img.c
++++ b/drivers/clk/mediatek/clk-mt6765-img.c
+@@ -18,14 +18,8 @@ static const struct mtk_gate_regs img_cg_regs = {
+ 	.sta_ofs = 0x0,
+ };
+ 
+-#define GATE_IMG(_id, _name, _parent, _shift) {		\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &img_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_setclr,	\
+-	}
++#define GATE_IMG(_id, _name, _parent, _shift)			\
++	GATE_MTK(_id, _name, _parent, &img_cg_regs, _shift, &mtk_clk_gate_ops_setclr)
+ 
+ static const struct mtk_gate img_clks[] = {
+ 	GATE_IMG(CLK_IMG_LARB2, "img_larb2", "mm_ck", 0),
+diff --git a/drivers/clk/mediatek/clk-mt6765-mipi0a.c b/drivers/clk/mediatek/clk-mt6765-mipi0a.c
+index 25c829fc38661..f2b9dc8084801 100644
+--- a/drivers/clk/mediatek/clk-mt6765-mipi0a.c
++++ b/drivers/clk/mediatek/clk-mt6765-mipi0a.c
+@@ -18,14 +18,8 @@ static const struct mtk_gate_regs mipi0a_cg_regs = {
+ 	.sta_ofs = 0x80,
+ };
+ 
+-#define GATE_MIPI0A(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &mipi0a_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_no_setclr_inv,	\
+-	}
++#define GATE_MIPI0A(_id, _name, _parent, _shift)			\
++	GATE_MTK(_id, _name, _parent, &mipi0a_cg_regs, _shift, &mtk_clk_gate_ops_no_setclr_inv)
+ 
+ static const struct mtk_gate mipi0a_clks[] = {
+ 	GATE_MIPI0A(CLK_MIPI0A_CSR_CSI_EN_0A,
+diff --git a/drivers/clk/mediatek/clk-mt6765-mm.c b/drivers/clk/mediatek/clk-mt6765-mm.c
+index bda774668a361..a4570c9dbefa5 100644
+--- a/drivers/clk/mediatek/clk-mt6765-mm.c
++++ b/drivers/clk/mediatek/clk-mt6765-mm.c
+@@ -18,14 +18,8 @@ static const struct mtk_gate_regs mm_cg_regs = {
+ 	.sta_ofs = 0x100,
+ };
+ 
+-#define GATE_MM(_id, _name, _parent, _shift) {		\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &mm_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_setclr,	\
+-	}
++#define GATE_MM(_id, _name, _parent, _shift)	\
++	GATE_MTK(_id, _name, _parent, &mm_cg_regs, _shift, &mtk_clk_gate_ops_setclr)
+ 
+ static const struct mtk_gate mm_clks[] = {
+ 	/* MM */
+diff --git a/drivers/clk/mediatek/clk-mt6765-vcodec.c b/drivers/clk/mediatek/clk-mt6765-vcodec.c
+index 2bc1fbde87da9..75d72b9b4032c 100644
+--- a/drivers/clk/mediatek/clk-mt6765-vcodec.c
++++ b/drivers/clk/mediatek/clk-mt6765-vcodec.c
+@@ -18,14 +18,8 @@ static const struct mtk_gate_regs venc_cg_regs = {
+ 	.sta_ofs = 0x0,
+ };
+ 
+-#define GATE_VENC(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &venc_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_setclr_inv,	\
+-	}
++#define GATE_VENC(_id, _name, _parent, _shift)				\
++	GATE_MTK(_id, _name, _parent, &venc_cg_regs, _shift, &mtk_clk_gate_ops_setclr_inv)
+ 
+ static const struct mtk_gate venc_clks[] = {
+ 	GATE_VENC(CLK_VENC_SET0_LARB, "venc_set0_larb", "mm_ck", 0),
+diff --git a/drivers/clk/mediatek/clk-mt6765.c b/drivers/clk/mediatek/clk-mt6765.c
+index 6f5c92a7f6204..0c20ce678350e 100644
+--- a/drivers/clk/mediatek/clk-mt6765.c
++++ b/drivers/clk/mediatek/clk-mt6765.c
+@@ -483,32 +483,14 @@ static const struct mtk_gate_regs top2_cg_regs = {
+ 	.sta_ofs = 0x320,
+ };
+ 
+-#define GATE_TOP0(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &top0_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_no_setclr,	\
+-	}
++#define GATE_TOP0(_id, _name, _parent, _shift)				\
++	GATE_MTK(_id, _name, _parent, &top0_cg_regs, _shift, &mtk_clk_gate_ops_no_setclr)
+ 
+-#define GATE_TOP1(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &top1_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_no_setclr_inv,	\
+-	}
++#define GATE_TOP1(_id, _name, _parent, _shift)				\
++	GATE_MTK(_id, _name, _parent, &top1_cg_regs, _shift, &mtk_clk_gate_ops_no_setclr_inv)
+ 
+-#define GATE_TOP2(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &top2_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_no_setclr,	\
+-	}
++#define GATE_TOP2(_id, _name, _parent, _shift)				\
++	GATE_MTK(_id, _name, _parent, &top2_cg_regs, _shift, &mtk_clk_gate_ops_no_setclr)
+ 
+ static const struct mtk_gate top_clks[] = {
+ 	/* TOP0 */
+@@ -559,41 +541,17 @@ static const struct mtk_gate_regs ifr5_cg_regs = {
+ 	.sta_ofs = 0xc8,
+ };
+ 
+-#define GATE_IFR2(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &ifr2_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_setclr,	\
+-	}
++#define GATE_IFR2(_id, _name, _parent, _shift)				\
++	GATE_MTK(_id, _name, _parent, &ifr2_cg_regs, _shift, &mtk_clk_gate_ops_setclr)
+ 
+-#define GATE_IFR3(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &ifr3_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_setclr,	\
+-	}
++#define GATE_IFR3(_id, _name, _parent, _shift)				\
++	GATE_MTK(_id, _name, _parent, &ifr3_cg_regs, _shift, &mtk_clk_gate_ops_setclr)
+ 
+-#define GATE_IFR4(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &ifr4_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_setclr,	\
+-	}
++#define GATE_IFR4(_id, _name, _parent, _shift)				\
++	GATE_MTK(_id, _name, _parent, &ifr4_cg_regs, _shift, &mtk_clk_gate_ops_setclr)
+ 
+-#define GATE_IFR5(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &ifr5_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_setclr,	\
+-	}
++#define GATE_IFR5(_id, _name, _parent, _shift)				\
++	GATE_MTK(_id, _name, _parent, &ifr5_cg_regs, _shift, &mtk_clk_gate_ops_setclr)
+ 
+ static const struct mtk_gate ifr_clks[] = {
+ 	/* INFRA_TOPAXI */
+@@ -674,14 +632,8 @@ static const struct mtk_gate_regs apmixed_cg_regs = {
+ 	.sta_ofs = 0x14,
+ };
+ 
+-#define GATE_APMIXED(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &apmixed_cg_regs,		\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_no_setclr_inv,		\
+-	}
++#define GATE_APMIXED(_id, _name, _parent, _shift)			\
++	GATE_MTK(_id, _name, _parent, &apmixed_cg_regs, _shift, &mtk_clk_gate_ops_no_setclr_inv)
+ 
+ static const struct mtk_gate apmixed_clks[] = {
+ 	/* AUDIO0 */
+diff --git a/drivers/clk/mediatek/clk-mt6797-img.c b/drivers/clk/mediatek/clk-mt6797-img.c
+index 7c6a53fbb8be6..06441393478f6 100644
+--- a/drivers/clk/mediatek/clk-mt6797-img.c
++++ b/drivers/clk/mediatek/clk-mt6797-img.c
+@@ -16,14 +16,8 @@ static const struct mtk_gate_regs img_cg_regs = {
+ 	.sta_ofs = 0x0000,
+ };
+ 
+-#define GATE_IMG(_id, _name, _parent, _shift) {		\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &img_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_setclr,	\
+-	}
++#define GATE_IMG(_id, _name, _parent, _shift)			\
++	GATE_MTK(_id, _name, _parent, &img_cg_regs, _shift, &mtk_clk_gate_ops_setclr)
+ 
+ static const struct mtk_gate img_clks[] = {
+ 	GATE_IMG(CLK_IMG_FDVT, "img_fdvt", "mm_sel", 11),
+diff --git a/drivers/clk/mediatek/clk-mt6797-mm.c b/drivers/clk/mediatek/clk-mt6797-mm.c
+index deb16a6b16a5e..d5e9fe445e308 100644
+--- a/drivers/clk/mediatek/clk-mt6797-mm.c
++++ b/drivers/clk/mediatek/clk-mt6797-mm.c
+@@ -23,23 +23,11 @@ static const struct mtk_gate_regs mm1_cg_regs = {
+ 	.sta_ofs = 0x0110,
+ };
+ 
+-#define GATE_MM0(_id, _name, _parent, _shift) {			\
+-	.id = _id,					\
+-	.name = _name,					\
+-	.parent_name = _parent,				\
+-	.regs = &mm0_cg_regs,				\
+-	.shift = _shift,				\
+-	.ops = &mtk_clk_gate_ops_setclr,		\
+-}
++#define GATE_MM0(_id, _name, _parent, _shift)	\
++	GATE_MTK(_id, _name, _parent, &mm0_cg_regs, _shift, &mtk_clk_gate_ops_setclr)
+ 
+-#define GATE_MM1(_id, _name, _parent, _shift) {			\
+-	.id = _id,					\
+-	.name = _name,					\
+-	.parent_name = _parent,				\
+-	.regs = &mm1_cg_regs,				\
+-	.shift = _shift,				\
+-	.ops = &mtk_clk_gate_ops_setclr,		\
+-}
++#define GATE_MM1(_id, _name, _parent, _shift)	\
++	GATE_MTK(_id, _name, _parent, &mm1_cg_regs, _shift, &mtk_clk_gate_ops_setclr)
+ 
+ static const struct mtk_gate mm_clks[] = {
+ 	GATE_MM0(CLK_MM_SMI_COMMON, "mm_smi_common", "mm_sel", 0),
+diff --git a/drivers/clk/mediatek/clk-mt6797-vdec.c b/drivers/clk/mediatek/clk-mt6797-vdec.c
+index 6120fccc859f1..8622ddd87a5bb 100644
+--- a/drivers/clk/mediatek/clk-mt6797-vdec.c
++++ b/drivers/clk/mediatek/clk-mt6797-vdec.c
+@@ -24,23 +24,11 @@ static const struct mtk_gate_regs vdec1_cg_regs = {
+ 	.sta_ofs = 0x0008,
+ };
+ 
+-#define GATE_VDEC0(_id, _name, _parent, _shift) {		\
+-	.id = _id,					\
+-	.name = _name,					\
+-	.parent_name = _parent,				\
+-	.regs = &vdec0_cg_regs,				\
+-	.shift = _shift,				\
+-	.ops = &mtk_clk_gate_ops_setclr_inv,		\
+-}
++#define GATE_VDEC0(_id, _name, _parent, _shift)				\
++	GATE_MTK(_id, _name, _parent, &vdec0_cg_regs, _shift, &mtk_clk_gate_ops_setclr_inv)
+ 
+-#define GATE_VDEC1(_id, _name, _parent, _shift) {		\
+-	.id = _id,					\
+-	.name = _name,					\
+-	.parent_name = _parent,				\
+-	.regs = &vdec1_cg_regs,				\
+-	.shift = _shift,				\
+-	.ops = &mtk_clk_gate_ops_setclr_inv,		\
+-}
++#define GATE_VDEC1(_id, _name, _parent, _shift)				\
++	GATE_MTK(_id, _name, _parent, &vdec1_cg_regs, _shift, &mtk_clk_gate_ops_setclr_inv)
+ 
+ static const struct mtk_gate vdec_clks[] = {
+ 	GATE_VDEC0(CLK_VDEC_CKEN_ENG, "vdec_cken_eng", "vdec_sel", 8),
+diff --git a/drivers/clk/mediatek/clk-mt6797-venc.c b/drivers/clk/mediatek/clk-mt6797-venc.c
+index 834d3834d2bbc..928d611a476e4 100644
+--- a/drivers/clk/mediatek/clk-mt6797-venc.c
++++ b/drivers/clk/mediatek/clk-mt6797-venc.c
+@@ -18,14 +18,8 @@ static const struct mtk_gate_regs venc_cg_regs = {
+ 	.sta_ofs = 0x0000,
+ };
+ 
+-#define GATE_VENC(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &venc_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_setclr_inv,	\
+-	}
++#define GATE_VENC(_id, _name, _parent, _shift)				\
++	GATE_MTK(_id, _name, _parent, &venc_cg_regs, _shift, &mtk_clk_gate_ops_setclr_inv)
+ 
+ static const struct mtk_gate venc_clks[] = {
+ 	GATE_VENC(CLK_VENC_0, "venc_0", "mm_sel", 0),
+diff --git a/drivers/clk/mediatek/clk-mt6797.c b/drivers/clk/mediatek/clk-mt6797.c
+index 105a512857b3c..17b23ee4faee6 100644
+--- a/drivers/clk/mediatek/clk-mt6797.c
++++ b/drivers/clk/mediatek/clk-mt6797.c
+@@ -421,40 +421,22 @@ static const struct mtk_gate_regs infra2_cg_regs = {
+ 	.sta_ofs = 0x00b0,
+ };
+ 
+-#define GATE_ICG0(_id, _name, _parent, _shift) {		\
+-	.id = _id,						\
+-	.name = _name,						\
+-	.parent_name = _parent,					\
+-	.regs = &infra0_cg_regs,				\
+-	.shift = _shift,					\
+-	.ops = &mtk_clk_gate_ops_setclr,			\
+-}
++#define GATE_ICG0(_id, _name, _parent, _shift)				\
++	GATE_MTK(_id, _name, _parent, &infra0_cg_regs, _shift, &mtk_clk_gate_ops_setclr)
+ 
+-#define GATE_ICG1(_id, _name, _parent, _shift)			\
+-	GATE_ICG1_FLAGS(_id, _name, _parent, _shift, 0)
++#define GATE_ICG1(_id, _name, _parent, _shift)				\
++	GATE_MTK(_id, _name, _parent, &infra1_cg_regs, _shift, &mtk_clk_gate_ops_setclr)
+ 
+-#define GATE_ICG1_FLAGS(_id, _name, _parent, _shift, _flags) {	\
+-	.id = _id,						\
+-	.name = _name,						\
+-	.parent_name = _parent,					\
+-	.regs = &infra1_cg_regs,				\
+-	.shift = _shift,					\
+-	.ops = &mtk_clk_gate_ops_setclr,			\
+-	.flags = _flags,					\
+-}
++#define GATE_ICG1_FLAGS(_id, _name, _parent, _shift, _flags)		\
++	GATE_MTK_FLAGS(_id, _name, _parent, &infra1_cg_regs, _shift,	\
++		       &mtk_clk_gate_ops_setclr, _flags)
+ 
+-#define GATE_ICG2(_id, _name, _parent, _shift)			\
+-	GATE_ICG2_FLAGS(_id, _name, _parent, _shift, 0)
++#define GATE_ICG2(_id, _name, _parent, _shift)				\
++	GATE_MTK(_id, _name, _parent, &infra2_cg_regs, _shift, &mtk_clk_gate_ops_setclr)
+ 
+-#define GATE_ICG2_FLAGS(_id, _name, _parent, _shift, _flags) {	\
+-	.id = _id,						\
+-	.name = _name,						\
+-	.parent_name = _parent,					\
+-	.regs = &infra2_cg_regs,				\
+-	.shift = _shift,					\
+-	.ops = &mtk_clk_gate_ops_setclr,			\
+-	.flags = _flags,					\
+-}
++#define GATE_ICG2_FLAGS(_id, _name, _parent, _shift, _flags)		\
++	GATE_MTK_FLAGS(_id, _name, _parent, &infra2_cg_regs, _shift,	\
++		       &mtk_clk_gate_ops_setclr, _flags)
+ 
+ /*
+  * Clock gates dramc and dramc_b are needed by the DRAM controller.
+diff --git a/drivers/clk/mediatek/clk-mt7622-aud.c b/drivers/clk/mediatek/clk-mt7622-aud.c
+index b8aabfeb1cba4..27c543759f2ab 100644
+--- a/drivers/clk/mediatek/clk-mt7622-aud.c
++++ b/drivers/clk/mediatek/clk-mt7622-aud.c
+@@ -16,41 +16,17 @@
+ 
+ #include <dt-bindings/clock/mt7622-clk.h>
+ 
+-#define GATE_AUDIO0(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &audio0_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_no_setclr,	\
+-	}
++#define GATE_AUDIO0(_id, _name, _parent, _shift)		\
++	GATE_MTK(_id, _name, _parent, &audio0_cg_regs, _shift, &mtk_clk_gate_ops_no_setclr)
+ 
+-#define GATE_AUDIO1(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &audio1_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_no_setclr,	\
+-	}
++#define GATE_AUDIO1(_id, _name, _parent, _shift)		\
++	GATE_MTK(_id, _name, _parent, &audio1_cg_regs, _shift, &mtk_clk_gate_ops_no_setclr)
+ 
+-#define GATE_AUDIO2(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &audio2_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_no_setclr,	\
+-	}
++#define GATE_AUDIO2(_id, _name, _parent, _shift)		\
++	GATE_MTK(_id, _name, _parent, &audio2_cg_regs, _shift, &mtk_clk_gate_ops_no_setclr)
+ 
+-#define GATE_AUDIO3(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &audio3_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_no_setclr,	\
+-	}
++#define GATE_AUDIO3(_id, _name, _parent, _shift)		\
++	GATE_MTK(_id, _name, _parent, &audio3_cg_regs, _shift, &mtk_clk_gate_ops_no_setclr)
+ 
+ static const struct mtk_gate_regs audio0_cg_regs = {
+ 	.set_ofs = 0x0,
+diff --git a/drivers/clk/mediatek/clk-mt7622-eth.c b/drivers/clk/mediatek/clk-mt7622-eth.c
+index aee583fa77d0c..66b163cc16330 100644
+--- a/drivers/clk/mediatek/clk-mt7622-eth.c
++++ b/drivers/clk/mediatek/clk-mt7622-eth.c
+@@ -16,14 +16,8 @@
+ 
+ #include <dt-bindings/clock/mt7622-clk.h>
+ 
+-#define GATE_ETH(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &eth_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_no_setclr_inv,	\
+-	}
++#define GATE_ETH(_id, _name, _parent, _shift)			\
++	GATE_MTK(_id, _name, _parent, &eth_cg_regs, _shift, &mtk_clk_gate_ops_no_setclr_inv)
+ 
+ static const struct mtk_gate_regs eth_cg_regs = {
+ 	.set_ofs = 0x30,
+@@ -45,14 +39,8 @@ static const struct mtk_gate_regs sgmii_cg_regs = {
+ 	.sta_ofs = 0xE4,
+ };
+ 
+-#define GATE_SGMII(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &sgmii_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_no_setclr_inv,	\
+-	}
++#define GATE_SGMII(_id, _name, _parent, _shift)			\
++	GATE_MTK(_id, _name, _parent, &sgmii_cg_regs, _shift, &mtk_clk_gate_ops_no_setclr_inv)
+ 
+ static const struct mtk_gate sgmii_clks[] = {
+ 	GATE_SGMII(CLK_SGMII_TX250M_EN, "sgmii_tx250m_en",
+diff --git a/drivers/clk/mediatek/clk-mt7622-hif.c b/drivers/clk/mediatek/clk-mt7622-hif.c
+index ab5cad0c2b1c9..bcd1dfc6e8e0c 100644
+--- a/drivers/clk/mediatek/clk-mt7622-hif.c
++++ b/drivers/clk/mediatek/clk-mt7622-hif.c
+@@ -16,23 +16,11 @@
+ 
+ #include <dt-bindings/clock/mt7622-clk.h>
+ 
+-#define GATE_PCIE(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &pcie_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_no_setclr_inv,	\
+-	}
+-
+-#define GATE_SSUSB(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &ssusb_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_no_setclr_inv,	\
+-	}
++#define GATE_PCIE(_id, _name, _parent, _shift)				\
++	GATE_MTK(_id, _name, _parent, &pcie_cg_regs, _shift, &mtk_clk_gate_ops_no_setclr_inv)
++
++#define GATE_SSUSB(_id, _name, _parent, _shift)				\
++	GATE_MTK(_id, _name, _parent, &ssusb_cg_regs, _shift, &mtk_clk_gate_ops_no_setclr_inv)
+ 
+ static const struct mtk_gate_regs pcie_cg_regs = {
+ 	.set_ofs = 0x30,
+diff --git a/drivers/clk/mediatek/clk-mt7622.c b/drivers/clk/mediatek/clk-mt7622.c
+index 5a82c2270bfbc..1c0049fbeb69c 100644
+--- a/drivers/clk/mediatek/clk-mt7622.c
++++ b/drivers/clk/mediatek/clk-mt7622.c
+@@ -50,59 +50,28 @@
+ 		 _pd_reg, _pd_shift, _tuner_reg, _pcw_reg, _pcw_shift,  \
+ 		 NULL, "clkxtal")
+ 
+-#define GATE_APMIXED(_id, _name, _parent, _shift) {			\
+-		.id = _id,						\
+-		.name = _name,						\
+-		.parent_name = _parent,					\
+-		.regs = &apmixed_cg_regs,				\
+-		.shift = _shift,					\
+-		.ops = &mtk_clk_gate_ops_no_setclr_inv,			\
+-	}
++#define GATE_APMIXED_AO(_id, _name, _parent, _shift)			\
++	GATE_MTK_FLAGS(_id, _name, _parent, &apmixed_cg_regs, _shift,	\
++		 &mtk_clk_gate_ops_no_setclr_inv, CLK_IS_CRITICAL)
+ 
+-#define GATE_INFRA(_id, _name, _parent, _shift) {			\
+-		.id = _id,						\
+-		.name = _name,						\
+-		.parent_name = _parent,					\
+-		.regs = &infra_cg_regs,					\
+-		.shift = _shift,					\
+-		.ops = &mtk_clk_gate_ops_setclr,			\
+-	}
++#define GATE_INFRA(_id, _name, _parent, _shift)				\
++	GATE_MTK(_id, _name, _parent, &infra_cg_regs, _shift, &mtk_clk_gate_ops_setclr)
+ 
+-#define GATE_TOP0(_id, _name, _parent, _shift) {			\
+-		.id = _id,						\
+-		.name = _name,						\
+-		.parent_name = _parent,					\
+-		.regs = &top0_cg_regs,					\
+-		.shift = _shift,					\
+-		.ops = &mtk_clk_gate_ops_no_setclr,			\
+-	}
++#define GATE_TOP0(_id, _name, _parent, _shift)				\
++	GATE_MTK(_id, _name, _parent, &top0_cg_regs, _shift, &mtk_clk_gate_ops_no_setclr)
+ 
+-#define GATE_TOP1(_id, _name, _parent, _shift) {			\
+-		.id = _id,						\
+-		.name = _name,						\
+-		.parent_name = _parent,					\
+-		.regs = &top1_cg_regs,					\
+-		.shift = _shift,					\
+-		.ops = &mtk_clk_gate_ops_no_setclr,			\
+-	}
++#define GATE_TOP1(_id, _name, _parent, _shift)				\
++	GATE_MTK(_id, _name, _parent, &top1_cg_regs, _shift, &mtk_clk_gate_ops_no_setclr)
+ 
+-#define GATE_PERI0(_id, _name, _parent, _shift) {			\
+-		.id = _id,						\
+-		.name = _name,						\
+-		.parent_name = _parent,					\
+-		.regs = &peri0_cg_regs,					\
+-		.shift = _shift,					\
+-		.ops = &mtk_clk_gate_ops_setclr,			\
+-	}
++#define GATE_PERI0(_id, _name, _parent, _shift)				\
++	GATE_MTK(_id, _name, _parent, &peri0_cg_regs, _shift, &mtk_clk_gate_ops_setclr)
+ 
+-#define GATE_PERI1(_id, _name, _parent, _shift) {			\
+-		.id = _id,						\
+-		.name = _name,						\
+-		.parent_name = _parent,					\
+-		.regs = &peri1_cg_regs,					\
+-		.shift = _shift,					\
+-		.ops = &mtk_clk_gate_ops_setclr,			\
+-	}
++#define GATE_PERI0_AO(_id, _name, _parent, _shift)			\
++	GATE_MTK_FLAGS(_id, _name, _parent, &peri0_cg_regs, _shift,	\
++		 &mtk_clk_gate_ops_setclr, CLK_IS_CRITICAL)
++
++#define GATE_PERI1(_id, _name, _parent, _shift)				\
++	GATE_MTK(_id, _name, _parent, &peri1_cg_regs, _shift, &mtk_clk_gate_ops_setclr)
+ 
+ static DEFINE_SPINLOCK(mt7622_clk_lock);
+ 
+@@ -350,7 +319,7 @@ static const struct mtk_pll_data plls[] = {
+ };
+ 
+ static const struct mtk_gate apmixed_clks[] = {
+-	GATE_APMIXED(CLK_APMIXED_MAIN_CORE_EN, "main_core_en", "mainpll", 5),
++	GATE_APMIXED_AO(CLK_APMIXED_MAIN_CORE_EN, "main_core_en", "mainpll", 5),
+ };
+ 
+ static const struct mtk_gate infra_clks[] = {
+@@ -485,7 +454,7 @@ static const struct mtk_gate peri_clks[] = {
+ 	GATE_PERI0(CLK_PERI_AP_DMA_PD, "peri_ap_dma_pd", "axi_sel", 12),
+ 	GATE_PERI0(CLK_PERI_MSDC30_0_PD, "peri_msdc30_0", "msdc30_0_sel", 13),
+ 	GATE_PERI0(CLK_PERI_MSDC30_1_PD, "peri_msdc30_1", "msdc30_1_sel", 14),
+-	GATE_PERI0(CLK_PERI_UART0_PD, "peri_uart0_pd", "axi_sel", 17),
++	GATE_PERI0_AO(CLK_PERI_UART0_PD, "peri_uart0_pd", "axi_sel", 17),
+ 	GATE_PERI0(CLK_PERI_UART1_PD, "peri_uart1_pd", "axi_sel", 18),
+ 	GATE_PERI0(CLK_PERI_UART2_PD, "peri_uart2_pd", "axi_sel", 19),
+ 	GATE_PERI0(CLK_PERI_UART3_PD, "peri_uart3_pd", "axi_sel", 20),
+@@ -513,12 +482,12 @@ static struct mtk_composite infra_muxes[] = {
+ 
+ static struct mtk_composite top_muxes[] = {
+ 	/* CLK_CFG_0 */
+-	MUX_GATE(CLK_TOP_AXI_SEL, "axi_sel", axi_parents,
+-		 0x040, 0, 3, 7),
+-	MUX_GATE(CLK_TOP_MEM_SEL, "mem_sel", mem_parents,
+-		 0x040, 8, 1, 15),
+-	MUX_GATE(CLK_TOP_DDRPHYCFG_SEL, "ddrphycfg_sel", ddrphycfg_parents,
+-		 0x040, 16, 1, 23),
++	MUX_GATE_FLAGS(CLK_TOP_AXI_SEL, "axi_sel", axi_parents,
++		       0x040, 0, 3, 7, CLK_IS_CRITICAL),
++	MUX_GATE_FLAGS(CLK_TOP_MEM_SEL, "mem_sel", mem_parents,
++		       0x040, 8, 1, 15, CLK_IS_CRITICAL),
++	MUX_GATE_FLAGS(CLK_TOP_DDRPHYCFG_SEL, "ddrphycfg_sel", ddrphycfg_parents,
++		       0x040, 16, 1, 23, CLK_IS_CRITICAL),
+ 	MUX_GATE(CLK_TOP_ETH_SEL, "eth_sel", eth_parents,
+ 		 0x040, 24, 3, 31),
+ 
+@@ -656,10 +625,6 @@ static int mtk_topckgen_init(struct platform_device *pdev)
+ 	mtk_clk_register_gates(&pdev->dev, node, top_clks,
+ 			       ARRAY_SIZE(top_clks), clk_data);
+ 
+-	clk_prepare_enable(clk_data->hws[CLK_TOP_AXI_SEL]->clk);
+-	clk_prepare_enable(clk_data->hws[CLK_TOP_MEM_SEL]->clk);
+-	clk_prepare_enable(clk_data->hws[CLK_TOP_DDRPHYCFG_SEL]->clk);
+-
+ 	return of_clk_add_hw_provider(node, of_clk_hw_onecell_get, clk_data);
+ }
+ 
+@@ -702,9 +667,6 @@ static int mtk_apmixedsys_init(struct platform_device *pdev)
+ 	mtk_clk_register_gates(&pdev->dev, node, apmixed_clks,
+ 			       ARRAY_SIZE(apmixed_clks), clk_data);
+ 
+-	clk_prepare_enable(clk_data->hws[CLK_APMIXED_ARMPLL]->clk);
+-	clk_prepare_enable(clk_data->hws[CLK_APMIXED_MAIN_CORE_EN]->clk);
+-
+ 	return of_clk_add_hw_provider(node, of_clk_hw_onecell_get, clk_data);
+ }
+ 
+@@ -732,8 +694,6 @@ static int mtk_pericfg_init(struct platform_device *pdev)
+ 	if (r)
+ 		return r;
+ 
+-	clk_prepare_enable(clk_data->hws[CLK_PERI_UART0_PD]->clk);
+-
+ 	mtk_register_reset_controller_with_dev(&pdev->dev, &clk_rst_desc[1]);
+ 
+ 	return 0;
+diff --git a/drivers/clk/mediatek/clk-mt7629-eth.c b/drivers/clk/mediatek/clk-mt7629-eth.c
+index a4ae7d6c7a71a..719a47fef7980 100644
+--- a/drivers/clk/mediatek/clk-mt7629-eth.c
++++ b/drivers/clk/mediatek/clk-mt7629-eth.c
+@@ -16,14 +16,8 @@
+ 
+ #include <dt-bindings/clock/mt7629-clk.h>
+ 
+-#define GATE_ETH(_id, _name, _parent, _shift) {		\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &eth_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_no_setclr_inv,	\
+-	}
++#define GATE_ETH(_id, _name, _parent, _shift)			\
++	GATE_MTK(_id, _name, _parent, &eth_cg_regs, _shift, &mtk_clk_gate_ops_no_setclr_inv)
+ 
+ static const struct mtk_gate_regs eth_cg_regs = {
+ 	.set_ofs = 0x30,
+@@ -45,14 +39,8 @@ static const struct mtk_gate_regs sgmii_cg_regs = {
+ 	.sta_ofs = 0xE4,
+ };
+ 
+-#define GATE_SGMII(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &sgmii_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_no_setclr_inv,	\
+-	}
++#define GATE_SGMII(_id, _name, _parent, _shift)			\
++	GATE_MTK(_id, _name, _parent, &sgmii_cg_regs, _shift, &mtk_clk_gate_ops_no_setclr_inv)
+ 
+ static const struct mtk_gate sgmii_clks[2][4] = {
+ 	{
+diff --git a/drivers/clk/mediatek/clk-mt7629-hif.c b/drivers/clk/mediatek/clk-mt7629-hif.c
+index c3eb09ea6036f..78d85542e4f17 100644
+--- a/drivers/clk/mediatek/clk-mt7629-hif.c
++++ b/drivers/clk/mediatek/clk-mt7629-hif.c
+@@ -16,23 +16,11 @@
+ 
+ #include <dt-bindings/clock/mt7629-clk.h>
+ 
+-#define GATE_PCIE(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &pcie_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_no_setclr_inv,	\
+-	}
+-
+-#define GATE_SSUSB(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &ssusb_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_no_setclr_inv,	\
+-	}
++#define GATE_PCIE(_id, _name, _parent, _shift)				\
++	GATE_MTK(_id, _name, _parent, &pcie_cg_regs, _shift, &mtk_clk_gate_ops_no_setclr_inv)
++
++#define GATE_SSUSB(_id, _name, _parent, _shift)				\
++	GATE_MTK(_id, _name, _parent, &ssusb_cg_regs, _shift, &mtk_clk_gate_ops_no_setclr_inv)
+ 
+ static const struct mtk_gate_regs pcie_cg_regs = {
+ 	.set_ofs = 0x30,
+diff --git a/drivers/clk/mediatek/clk-mt7629.c b/drivers/clk/mediatek/clk-mt7629.c
+index cf062d4a7ecc4..09c85fda43d84 100644
+--- a/drivers/clk/mediatek/clk-mt7629.c
++++ b/drivers/clk/mediatek/clk-mt7629.c
+@@ -50,41 +50,17 @@
+ 		_pd_reg, _pd_shift, _tuner_reg, _pcw_reg, _pcw_shift,	\
+ 		NULL, "clk20m")
+ 
+-#define GATE_APMIXED(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &apmixed_cg_regs,		\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_no_setclr_inv,	\
+-	}
++#define GATE_APMIXED(_id, _name, _parent, _shift)			\
++	GATE_MTK(_id, _name, _parent, &apmixed_cg_regs, _shift, &mtk_clk_gate_ops_no_setclr_inv)
+ 
+-#define GATE_INFRA(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &infra_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_setclr,	\
+-	}
++#define GATE_INFRA(_id, _name, _parent, _shift)				\
++	GATE_MTK(_id, _name, _parent, &infra_cg_regs, _shift, &mtk_clk_gate_ops_setclr)
+ 
+-#define GATE_PERI0(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &peri0_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_setclr,	\
+-	}
++#define GATE_PERI0(_id, _name, _parent, _shift)				\
++	GATE_MTK(_id, _name, _parent, &peri0_cg_regs, _shift, &mtk_clk_gate_ops_setclr)
+ 
+-#define GATE_PERI1(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &peri1_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_setclr,	\
+-	}
++#define GATE_PERI1(_id, _name, _parent, _shift)				\
++	GATE_MTK(_id, _name, _parent, &peri1_cg_regs, _shift, &mtk_clk_gate_ops_setclr)
+ 
+ static DEFINE_SPINLOCK(mt7629_clk_lock);
+ 
+diff --git a/drivers/clk/mediatek/clk-mt7986-eth.c b/drivers/clk/mediatek/clk-mt7986-eth.c
+index 703872239ecca..e04bc6845ea6d 100644
+--- a/drivers/clk/mediatek/clk-mt7986-eth.c
++++ b/drivers/clk/mediatek/clk-mt7986-eth.c
+@@ -22,12 +22,8 @@ static const struct mtk_gate_regs sgmii0_cg_regs = {
+ 	.sta_ofs = 0xe4,
+ };
+ 
+-#define GATE_SGMII0(_id, _name, _parent, _shift)                               \
+-	{                                                                      \
+-		.id = _id, .name = _name, .parent_name = _parent,              \
+-		.regs = &sgmii0_cg_regs, .shift = _shift,                      \
+-		.ops = &mtk_clk_gate_ops_no_setclr_inv,                        \
+-	}
++#define GATE_SGMII0(_id, _name, _parent, _shift)		\
++	GATE_MTK(_id, _name, _parent, &sgmii0_cg_regs, _shift, &mtk_clk_gate_ops_no_setclr_inv)
+ 
+ static const struct mtk_gate sgmii0_clks[] __initconst = {
+ 	GATE_SGMII0(CLK_SGMII0_TX250M_EN, "sgmii0_tx250m_en", "top_xtal", 2),
+@@ -42,12 +38,8 @@ static const struct mtk_gate_regs sgmii1_cg_regs = {
+ 	.sta_ofs = 0xe4,
+ };
+ 
+-#define GATE_SGMII1(_id, _name, _parent, _shift)                               \
+-	{                                                                      \
+-		.id = _id, .name = _name, .parent_name = _parent,              \
+-		.regs = &sgmii1_cg_regs, .shift = _shift,                      \
+-		.ops = &mtk_clk_gate_ops_no_setclr_inv,                        \
+-	}
++#define GATE_SGMII1(_id, _name, _parent, _shift)		\
++	GATE_MTK(_id, _name, _parent, &sgmii1_cg_regs, _shift, &mtk_clk_gate_ops_no_setclr_inv)
+ 
+ static const struct mtk_gate sgmii1_clks[] __initconst = {
+ 	GATE_SGMII1(CLK_SGMII1_TX250M_EN, "sgmii1_tx250m_en", "top_xtal", 2),
+@@ -62,12 +54,8 @@ static const struct mtk_gate_regs eth_cg_regs = {
+ 	.sta_ofs = 0x30,
+ };
+ 
+-#define GATE_ETH(_id, _name, _parent, _shift)                                  \
+-	{                                                                      \
+-		.id = _id, .name = _name, .parent_name = _parent,              \
+-		.regs = &eth_cg_regs, .shift = _shift,                         \
+-		.ops = &mtk_clk_gate_ops_no_setclr_inv,                        \
+-	}
++#define GATE_ETH(_id, _name, _parent, _shift)			\
++	GATE_MTK(_id, _name, _parent, &eth_cg_regs, _shift, &mtk_clk_gate_ops_no_setclr_inv)
+ 
+ static const struct mtk_gate eth_clks[] __initconst = {
+ 	GATE_ETH(CLK_ETH_FE_EN, "eth_fe_en", "netsys_2x_sel", 6),
+diff --git a/drivers/clk/mediatek/clk-mt7986-infracfg.c b/drivers/clk/mediatek/clk-mt7986-infracfg.c
+index e80c92167c8fc..0a4bf87ee1607 100644
+--- a/drivers/clk/mediatek/clk-mt7986-infracfg.c
++++ b/drivers/clk/mediatek/clk-mt7986-infracfg.c
+@@ -87,26 +87,14 @@ static const struct mtk_gate_regs infra2_cg_regs = {
+ 	.sta_ofs = 0x68,
+ };
+ 
+-#define GATE_INFRA0(_id, _name, _parent, _shift)                               \
+-	{                                                                      \
+-		.id = _id, .name = _name, .parent_name = _parent,              \
+-		.regs = &infra0_cg_regs, .shift = _shift,                      \
+-		.ops = &mtk_clk_gate_ops_setclr,                               \
+-	}
++#define GATE_INFRA0(_id, _name, _parent, _shift)			\
++	GATE_MTK(_id, _name, _parent, &infra0_cg_regs, _shift, &mtk_clk_gate_ops_setclr)
+ 
+-#define GATE_INFRA1(_id, _name, _parent, _shift)                               \
+-	{                                                                      \
+-		.id = _id, .name = _name, .parent_name = _parent,              \
+-		.regs = &infra1_cg_regs, .shift = _shift,                      \
+-		.ops = &mtk_clk_gate_ops_setclr,                               \
+-	}
++#define GATE_INFRA1(_id, _name, _parent, _shift)			\
++	GATE_MTK(_id, _name, _parent, &infra1_cg_regs, _shift, &mtk_clk_gate_ops_setclr)
+ 
+-#define GATE_INFRA2(_id, _name, _parent, _shift)                               \
+-	{                                                                      \
+-		.id = _id, .name = _name, .parent_name = _parent,              \
+-		.regs = &infra2_cg_regs, .shift = _shift,                      \
+-		.ops = &mtk_clk_gate_ops_setclr,                               \
+-	}
++#define GATE_INFRA2(_id, _name, _parent, _shift)			\
++	GATE_MTK(_id, _name, _parent, &infra2_cg_regs, _shift, &mtk_clk_gate_ops_setclr)
+ 
+ static const struct mtk_gate infra_clks[] = {
+ 	/* INFRA0 */
+diff --git a/drivers/clk/mediatek/clk-mt8135.c b/drivers/clk/mediatek/clk-mt8135.c
+index 2b9c925c2a2ba..a39ad58e27418 100644
+--- a/drivers/clk/mediatek/clk-mt8135.c
++++ b/drivers/clk/mediatek/clk-mt8135.c
+@@ -2,6 +2,8 @@
+ /*
+  * Copyright (c) 2014 MediaTek Inc.
+  * Author: James Liao <jamesjj.liao@mediatek.com>
++ * Copyright (c) 2023 Collabora, Ltd.
++ *               AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com>
+  */
+ 
+ #include <linux/clk.h>
+@@ -390,7 +392,7 @@ static const struct mtk_composite top_muxes[] __initconst = {
+ 	MUX_GATE(CLK_TOP_GCPU_SEL, "gcpu_sel", gcpu_parents, 0x0164, 24, 3, 31),
+ 	/* CLK_CFG_9 */
+ 	MUX_GATE(CLK_TOP_DPI1_SEL, "dpi1_sel", dpi1_parents, 0x0168, 0, 2, 7),
+-	MUX_GATE(CLK_TOP_CCI_SEL, "cci_sel", cci_parents, 0x0168, 8, 3, 15),
++	MUX_GATE_FLAGS(CLK_TOP_CCI_SEL, "cci_sel", cci_parents, 0x0168, 8, 3, 15, CLK_IS_CRITICAL),
+ 	MUX_GATE(CLK_TOP_APLL_SEL, "apll_sel", apll_parents, 0x0168, 16, 3, 23),
+ 	MUX_GATE(CLK_TOP_HDMIPLL_SEL, "hdmipll_sel", hdmipll_parents, 0x0168, 24, 2, 31),
+ };
+@@ -401,14 +403,12 @@ static const struct mtk_gate_regs infra_cg_regs = {
+ 	.sta_ofs = 0x0048,
+ };
+ 
+-#define GATE_ICG(_id, _name, _parent, _shift) {	\
+-		.id = _id,					\
+-		.name = _name,					\
+-		.parent_name = _parent,				\
+-		.regs = &infra_cg_regs,				\
+-		.shift = _shift,				\
+-		.ops = &mtk_clk_gate_ops_setclr,		\
+-	}
++#define GATE_ICG(_id, _name, _parent, _shift)	\
++	GATE_MTK(_id, _name, _parent, &infra_cg_regs, _shift, &mtk_clk_gate_ops_setclr)
++
++#define GATE_ICG_AO(_id, _name, _parent, _shift)	\
++	GATE_MTK_FLAGS(_id, _name, _parent, &infra_cg_regs, _shift,	\
++		       &mtk_clk_gate_ops_setclr, CLK_IS_CRITICAL)
+ 
+ static const struct mtk_gate infra_clks[] __initconst = {
+ 	GATE_ICG(CLK_INFRA_PMIC_WRAP, "pmic_wrap_ck", "axi_sel", 23),
+@@ -417,7 +417,7 @@ static const struct mtk_gate infra_clks[] __initconst = {
+ 	GATE_ICG(CLK_INFRA_CCIF0_AP_CTRL, "ccif0_ap_ctrl", "axi_sel", 20),
+ 	GATE_ICG(CLK_INFRA_KP, "kp_ck", "axi_sel", 16),
+ 	GATE_ICG(CLK_INFRA_CPUM, "cpum_ck", "cpum_tck_in", 15),
+-	GATE_ICG(CLK_INFRA_M4U, "m4u_ck", "mem_sel", 8),
++	GATE_ICG_AO(CLK_INFRA_M4U, "m4u_ck", "mem_sel", 8),
+ 	GATE_ICG(CLK_INFRA_MFGAXI, "mfgaxi_ck", "axi_sel", 7),
+ 	GATE_ICG(CLK_INFRA_DEVAPC, "devapc_ck", "axi_sel", 6),
+ 	GATE_ICG(CLK_INFRA_AUDIO, "audio_ck", "aud_intbus_sel", 5),
+@@ -438,23 +438,11 @@ static const struct mtk_gate_regs peri1_cg_regs = {
+ 	.sta_ofs = 0x001c,
+ };
+ 
+-#define GATE_PERI0(_id, _name, _parent, _shift) {	\
+-		.id = _id,					\
+-		.name = _name,					\
+-		.parent_name = _parent,				\
+-		.regs = &peri0_cg_regs,				\
+-		.shift = _shift,				\
+-		.ops = &mtk_clk_gate_ops_setclr,		\
+-	}
++#define GATE_PERI0(_id, _name, _parent, _shift)	\
++	GATE_MTK(_id, _name, _parent, &peri0_cg_regs, _shift, &mtk_clk_gate_ops_setclr)
+ 
+-#define GATE_PERI1(_id, _name, _parent, _shift) {	\
+-		.id = _id,					\
+-		.name = _name,					\
+-		.parent_name = _parent,				\
+-		.regs = &peri1_cg_regs,				\
+-		.shift = _shift,				\
+-		.ops = &mtk_clk_gate_ops_setclr,		\
+-	}
++#define GATE_PERI1(_id, _name, _parent, _shift)	\
++	GATE_MTK(_id, _name, _parent, &peri1_cg_regs, _shift, &mtk_clk_gate_ops_setclr)
+ 
+ static const struct mtk_gate peri_gates[] __initconst = {
+ 	/* PERI0 */
+@@ -552,8 +540,6 @@ static void __init mtk_topckgen_init(struct device_node *node)
+ 				    ARRAY_SIZE(top_muxes), base,
+ 				    &mt8135_clk_lock, clk_data);
+ 
+-	clk_prepare_enable(clk_data->hws[CLK_TOP_CCI_SEL]->clk);
+-
+ 	r = of_clk_add_hw_provider(node, of_clk_hw_onecell_get, clk_data);
+ 	if (r)
+ 		pr_err("%s(): could not register clock provider: %d\n",
+@@ -571,8 +557,6 @@ static void __init mtk_infrasys_init(struct device_node *node)
+ 	mtk_clk_register_gates(NULL, node, infra_clks,
+ 			       ARRAY_SIZE(infra_clks), clk_data);
+ 
+-	clk_prepare_enable(clk_data->hws[CLK_INFRA_M4U]->clk);
+-
+ 	r = of_clk_add_hw_provider(node, of_clk_hw_onecell_get, clk_data);
+ 	if (r)
+ 		pr_err("%s(): could not register clock provider: %d\n",
+diff --git a/drivers/clk/mediatek/clk-mt8167-aud.c b/drivers/clk/mediatek/clk-mt8167-aud.c
+index f6bea6e9e6a4e..47a7d89d5777c 100644
+--- a/drivers/clk/mediatek/clk-mt8167-aud.c
++++ b/drivers/clk/mediatek/clk-mt8167-aud.c
+@@ -23,14 +23,9 @@ static const struct mtk_gate_regs aud_cg_regs = {
+ 	.sta_ofs = 0x0,
+ };
+ 
+-#define GATE_AUD(_id, _name, _parent, _shift) {	\
+-		.id = _id,			\
+-		.name = _name,			\
+-		.parent_name = _parent,		\
+-		.regs = &aud_cg_regs,		\
+-		.shift = _shift,		\
+-		.ops = &mtk_clk_gate_ops_no_setclr,		\
+-	}
++#define GATE_AUD(_id, _name, _parent, _shift)			\
++	GATE_MTK(_id, _name, _parent, &aud_cg_regs, _shift, &mtk_clk_gate_ops_no_setclr)
++
+ 
+ static const struct mtk_gate aud_clks[] __initconst = {
+ 	GATE_AUD(CLK_AUD_AFE, "aud_afe", "clk26m_ck", 2),
+diff --git a/drivers/clk/mediatek/clk-mt8167-img.c b/drivers/clk/mediatek/clk-mt8167-img.c
+index 77db13b177fcc..e196b3b894a16 100644
+--- a/drivers/clk/mediatek/clk-mt8167-img.c
++++ b/drivers/clk/mediatek/clk-mt8167-img.c
+@@ -23,14 +23,8 @@ static const struct mtk_gate_regs img_cg_regs = {
+ 	.sta_ofs = 0x0,
+ };
+ 
+-#define GATE_IMG(_id, _name, _parent, _shift) {		\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &img_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_setclr,	\
+-	}
++#define GATE_IMG(_id, _name, _parent, _shift)			\
++	GATE_MTK(_id, _name, _parent, &img_cg_regs, _shift, &mtk_clk_gate_ops_setclr)
+ 
+ static const struct mtk_gate img_clks[] __initconst = {
+ 	GATE_IMG(CLK_IMG_LARB1_SMI, "img_larb1_smi", "smi_mm", 0),
+diff --git a/drivers/clk/mediatek/clk-mt8167-mfgcfg.c b/drivers/clk/mediatek/clk-mt8167-mfgcfg.c
+index 3c23591b02f7f..602d25f4cb2e2 100644
+--- a/drivers/clk/mediatek/clk-mt8167-mfgcfg.c
++++ b/drivers/clk/mediatek/clk-mt8167-mfgcfg.c
+@@ -23,14 +23,8 @@ static const struct mtk_gate_regs mfg_cg_regs = {
+ 	.sta_ofs = 0x0,
+ };
+ 
+-#define GATE_MFG(_id, _name, _parent, _shift) {		\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &mfg_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_setclr,	\
+-	}
++#define GATE_MFG(_id, _name, _parent, _shift)			\
++	GATE_MTK(_id, _name, _parent, &mfg_cg_regs, _shift, &mtk_clk_gate_ops_setclr)
+ 
+ static const struct mtk_gate mfg_clks[] __initconst = {
+ 	GATE_MFG(CLK_MFG_BAXI, "mfg_baxi", "ahb_infra_sel", 0),
+diff --git a/drivers/clk/mediatek/clk-mt8167-mm.c b/drivers/clk/mediatek/clk-mt8167-mm.c
+index c0b44104c765a..abc70e1221bf9 100644
+--- a/drivers/clk/mediatek/clk-mt8167-mm.c
++++ b/drivers/clk/mediatek/clk-mt8167-mm.c
+@@ -29,23 +29,11 @@ static const struct mtk_gate_regs mm1_cg_regs = {
+ 	.sta_ofs = 0x110,
+ };
+ 
+-#define GATE_MM0(_id, _name, _parent, _shift) {		\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &mm0_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_setclr,	\
+-	}
+-
+-#define GATE_MM1(_id, _name, _parent, _shift) {		\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &mm1_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_setclr,	\
+-	}
++#define GATE_MM0(_id, _name, _parent, _shift)	\
++	GATE_MTK(_id, _name, _parent, &mm0_cg_regs, _shift, &mtk_clk_gate_ops_setclr)
++
++#define GATE_MM1(_id, _name, _parent, _shift)	\
++	GATE_MTK(_id, _name, _parent, &mm1_cg_regs, _shift, &mtk_clk_gate_ops_setclr)
+ 
+ static const struct mtk_gate mm_clks[] = {
+ 	/* MM0 */
+diff --git a/drivers/clk/mediatek/clk-mt8167-vdec.c b/drivers/clk/mediatek/clk-mt8167-vdec.c
+index 759e5791599f0..92bc05d997985 100644
+--- a/drivers/clk/mediatek/clk-mt8167-vdec.c
++++ b/drivers/clk/mediatek/clk-mt8167-vdec.c
+@@ -29,23 +29,11 @@ static const struct mtk_gate_regs vdec1_cg_regs = {
+ 	.sta_ofs = 0x8,
+ };
+ 
+-#define GATE_VDEC0_I(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &vdec0_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_setclr_inv,	\
+-	}
++#define GATE_VDEC0_I(_id, _name, _parent, _shift)			\
++	GATE_MTK(_id, _name, _parent, &vdec0_cg_regs, _shift, &mtk_clk_gate_ops_setclr_inv)
+ 
+-#define GATE_VDEC1_I(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &vdec1_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_setclr_inv,	\
+-	}
++#define GATE_VDEC1_I(_id, _name, _parent, _shift)			\
++	GATE_MTK(_id, _name, _parent, &vdec1_cg_regs, _shift, &mtk_clk_gate_ops_setclr_inv)
+ 
+ static const struct mtk_gate vdec_clks[] __initconst = {
+ 	/* VDEC0 */
+diff --git a/drivers/clk/mediatek/clk-mt8173-mm.c b/drivers/clk/mediatek/clk-mt8173-mm.c
+index 315430ad15814..c11ce453a88aa 100644
+--- a/drivers/clk/mediatek/clk-mt8173-mm.c
++++ b/drivers/clk/mediatek/clk-mt8173-mm.c
+@@ -25,23 +25,11 @@ static const struct mtk_gate_regs mm1_cg_regs = {
+ 	.sta_ofs = 0x0110,
+ };
+ 
+-#define GATE_MM0(_id, _name, _parent, _shift) {			\
+-		.id = _id,					\
+-		.name = _name,					\
+-		.parent_name = _parent,				\
+-		.regs = &mm0_cg_regs,				\
+-		.shift = _shift,				\
+-		.ops = &mtk_clk_gate_ops_setclr,		\
+-	}
+-
+-#define GATE_MM1(_id, _name, _parent, _shift) {			\
+-		.id = _id,					\
+-		.name = _name,					\
+-		.parent_name = _parent,				\
+-		.regs = &mm1_cg_regs,				\
+-		.shift = _shift,				\
+-		.ops = &mtk_clk_gate_ops_setclr,		\
+-	}
++#define GATE_MM0(_id, _name, _parent, _shift)	\
++	GATE_MTK(_id, _name, _parent, &mm0_cg_regs, _shift, &mtk_clk_gate_ops_setclr)
++
++#define GATE_MM1(_id, _name, _parent, _shift)	\
++	GATE_MTK(_id, _name, _parent, &mm1_cg_regs, _shift, &mtk_clk_gate_ops_setclr)
+ 
+ static const struct mtk_gate mt8173_mm_clks[] = {
+ 	/* MM0 */
+diff --git a/drivers/clk/mediatek/clk-mt8516-aud.c b/drivers/clk/mediatek/clk-mt8516-aud.c
+index 00f356fe7c7a6..a6ae8003b9ff6 100644
+--- a/drivers/clk/mediatek/clk-mt8516-aud.c
++++ b/drivers/clk/mediatek/clk-mt8516-aud.c
+@@ -22,14 +22,8 @@ static const struct mtk_gate_regs aud_cg_regs = {
+ 	.sta_ofs = 0x0,
+ };
+ 
+-#define GATE_AUD(_id, _name, _parent, _shift) {	\
+-		.id = _id,			\
+-		.name = _name,			\
+-		.parent_name = _parent,		\
+-		.regs = &aud_cg_regs,		\
+-		.shift = _shift,		\
+-		.ops = &mtk_clk_gate_ops_no_setclr,		\
+-	}
++#define GATE_AUD(_id, _name, _parent, _shift)			\
++	GATE_MTK(_id, _name, _parent, &aud_cg_regs, _shift, &mtk_clk_gate_ops_no_setclr)
+ 
+ static const struct mtk_gate aud_clks[] __initconst = {
+ 	GATE_AUD(CLK_AUD_AFE, "aud_afe", "clk26m_ck", 2),
+diff --git a/drivers/clk/mediatek/clk-mt8516.c b/drivers/clk/mediatek/clk-mt8516.c
+index 2c0cae7b3bcfe..6983d3a48dc9a 100644
+--- a/drivers/clk/mediatek/clk-mt8516.c
++++ b/drivers/clk/mediatek/clk-mt8516.c
+@@ -525,59 +525,23 @@ static const struct mtk_gate_regs top5_cg_regs = {
+ 	.sta_ofs = 0x44,
+ };
+ 
+-#define GATE_TOP1(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &top1_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_setclr,	\
+-	}
++#define GATE_TOP1(_id, _name, _parent, _shift)				\
++	GATE_MTK(_id, _name, _parent, &top1_cg_regs, _shift, &mtk_clk_gate_ops_setclr)
+ 
+-#define GATE_TOP2(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &top2_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_setclr,	\
+-	}
++#define GATE_TOP2(_id, _name, _parent, _shift)				\
++	GATE_MTK(_id, _name, _parent, &top2_cg_regs, _shift, &mtk_clk_gate_ops_setclr)
+ 
+-#define GATE_TOP2_I(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &top2_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_setclr_inv,	\
+-	}
++#define GATE_TOP2_I(_id, _name, _parent, _shift)			\
++	GATE_MTK(_id, _name, _parent, &top2_cg_regs, _shift, &mtk_clk_gate_ops_setclr_inv)
+ 
+-#define GATE_TOP3(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &top3_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_setclr,	\
+-	}
++#define GATE_TOP3(_id, _name, _parent, _shift)				\
++	GATE_MTK(_id, _name, _parent, &top3_cg_regs, _shift, &mtk_clk_gate_ops_setclr)
+ 
+-#define GATE_TOP4_I(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &top4_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_setclr_inv,	\
+-	}
++#define GATE_TOP4_I(_id, _name, _parent, _shift)			\
++	GATE_MTK(_id, _name, _parent, &top4_cg_regs, _shift, &mtk_clk_gate_ops_setclr_inv)
+ 
+-#define GATE_TOP5(_id, _name, _parent, _shift) {	\
+-		.id = _id,				\
+-		.name = _name,				\
+-		.parent_name = _parent,			\
+-		.regs = &top5_cg_regs,			\
+-		.shift = _shift,			\
+-		.ops = &mtk_clk_gate_ops_no_setclr,	\
+-	}
++#define GATE_TOP5(_id, _name, _parent, _shift)				\
++	GATE_MTK(_id, _name, _parent, &top5_cg_regs, _shift, &mtk_clk_gate_ops_setclr)
+ 
+ static const struct mtk_gate top_clks[] __initconst = {
+ 	/* TOP1 */
+diff --git a/drivers/clk/mediatek/clk-pllfh.c b/drivers/clk/mediatek/clk-pllfh.c
+index f48780bec5077..f135b32c6dbed 100644
+--- a/drivers/clk/mediatek/clk-pllfh.c
++++ b/drivers/clk/mediatek/clk-pllfh.c
+@@ -75,13 +75,13 @@ void fhctl_parse_dt(const u8 *compatible_node, struct mtk_pllfh_data *pllfhs,
+ 	base = of_iomap(node, 0);
+ 	if (!base) {
+ 		pr_err("%s(): ioremap failed\n", __func__);
+-		return;
++		goto out_node_put;
+ 	}
+ 
+ 	num_clocks = of_clk_get_parent_count(node);
+ 	if (!num_clocks) {
+ 		pr_err("%s(): failed to get clocks property\n", __func__);
+-		return;
++		goto err;
+ 	}
+ 
+ 	for (i = 0; i < num_clocks; i++) {
+@@ -102,6 +102,13 @@ void fhctl_parse_dt(const u8 *compatible_node, struct mtk_pllfh_data *pllfhs,
+ 		pllfh->state.ssc_rate = ssc_rate;
+ 		pllfh->state.base = base;
+ 	}
++
++out_node_put:
++	of_node_put(node);
++	return;
++err:
++	iounmap(base);
++	goto out_node_put;
+ }
+ 
+ static void pllfh_init(struct mtk_fh *fh, struct mtk_pllfh_data *pllfh_data)
+diff --git a/drivers/clk/microchip/clk-mpfs.c b/drivers/clk/microchip/clk-mpfs.c
+index 4f0a19db7ed74..cc5d7dee59f06 100644
+--- a/drivers/clk/microchip/clk-mpfs.c
++++ b/drivers/clk/microchip/clk-mpfs.c
+@@ -374,14 +374,13 @@ static void mpfs_reset_unregister_adev(void *_adev)
+ 	struct auxiliary_device *adev = _adev;
+ 
+ 	auxiliary_device_delete(adev);
++	auxiliary_device_uninit(adev);
+ }
+ 
+ static void mpfs_reset_adev_release(struct device *dev)
+ {
+ 	struct auxiliary_device *adev = to_auxiliary_dev(dev);
+ 
+-	auxiliary_device_uninit(adev);
+-
+ 	kfree(adev);
+ }
+ 
+diff --git a/drivers/clk/qcom/dispcc-qcm2290.c b/drivers/clk/qcom/dispcc-qcm2290.c
+index 2ebd9a02b8950..24755dc841f9d 100644
+--- a/drivers/clk/qcom/dispcc-qcm2290.c
++++ b/drivers/clk/qcom/dispcc-qcm2290.c
+@@ -26,7 +26,6 @@ enum {
+ 	P_DISP_CC_PLL0_OUT_MAIN,
+ 	P_DSI0_PHY_PLL_OUT_BYTECLK,
+ 	P_DSI0_PHY_PLL_OUT_DSICLK,
+-	P_DSI1_PHY_PLL_OUT_DSICLK,
+ 	P_GPLL0_OUT_MAIN,
+ 	P_SLEEP_CLK,
+ };
+@@ -106,13 +105,11 @@ static const struct clk_parent_data disp_cc_parent_data_3[] = {
+ static const struct parent_map disp_cc_parent_map_4[] = {
+ 	{ P_BI_TCXO, 0 },
+ 	{ P_DSI0_PHY_PLL_OUT_DSICLK, 1 },
+-	{ P_DSI1_PHY_PLL_OUT_DSICLK, 2 },
+ };
+ 
+ static const struct clk_parent_data disp_cc_parent_data_4[] = {
+ 	{ .fw_name = "bi_tcxo" },
+ 	{ .fw_name = "dsi0_phy_pll_out_dsiclk" },
+-	{ .fw_name = "dsi1_phy_pll_out_dsiclk" },
+ };
+ 
+ static const struct parent_map disp_cc_parent_map_5[] = {
+diff --git a/drivers/clk/qcom/gcc-qcm2290.c b/drivers/clk/qcom/gcc-qcm2290.c
+index 7792b8f237047..096deff2ba257 100644
+--- a/drivers/clk/qcom/gcc-qcm2290.c
++++ b/drivers/clk/qcom/gcc-qcm2290.c
+@@ -1243,7 +1243,8 @@ static struct clk_rcg2 gcc_sdcc2_apps_clk_src = {
+ 		.name = "gcc_sdcc2_apps_clk_src",
+ 		.parent_data = gcc_parents_12,
+ 		.num_parents = ARRAY_SIZE(gcc_parents_12),
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_floor_ops,
++		.flags = CLK_OPS_PARENT_ENABLE,
+ 	},
+ };
+ 
+diff --git a/drivers/clk/qcom/gcc-sm6115.c b/drivers/clk/qcom/gcc-sm6115.c
+index 5b8222fea2f71..5f09aefa7fb92 100644
+--- a/drivers/clk/qcom/gcc-sm6115.c
++++ b/drivers/clk/qcom/gcc-sm6115.c
+@@ -694,7 +694,7 @@ static struct clk_rcg2 gcc_camss_axi_clk_src = {
+ 		.parent_data = gcc_parents_7,
+ 		.num_parents = ARRAY_SIZE(gcc_parents_7),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -715,7 +715,7 @@ static struct clk_rcg2 gcc_camss_cci_clk_src = {
+ 		.parent_data = gcc_parents_9,
+ 		.num_parents = ARRAY_SIZE(gcc_parents_9),
+ 		.flags = CLK_SET_RATE_PARENT | CLK_OPS_PARENT_ENABLE,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -738,7 +738,7 @@ static struct clk_rcg2 gcc_camss_csi0phytimer_clk_src = {
+ 		.parent_data = gcc_parents_4,
+ 		.num_parents = ARRAY_SIZE(gcc_parents_4),
+ 		.flags = CLK_SET_RATE_PARENT | CLK_OPS_PARENT_ENABLE,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -753,7 +753,7 @@ static struct clk_rcg2 gcc_camss_csi1phytimer_clk_src = {
+ 		.parent_data = gcc_parents_4,
+ 		.num_parents = ARRAY_SIZE(gcc_parents_4),
+ 		.flags = CLK_SET_RATE_PARENT | CLK_OPS_PARENT_ENABLE,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -768,7 +768,7 @@ static struct clk_rcg2 gcc_camss_csi2phytimer_clk_src = {
+ 		.parent_data = gcc_parents_4,
+ 		.num_parents = ARRAY_SIZE(gcc_parents_4),
+ 		.flags = CLK_SET_RATE_PARENT | CLK_OPS_PARENT_ENABLE,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -790,7 +790,7 @@ static struct clk_rcg2 gcc_camss_mclk0_clk_src = {
+ 		.parent_data = gcc_parents_3,
+ 		.num_parents = ARRAY_SIZE(gcc_parents_3),
+ 		.flags = CLK_SET_RATE_PARENT | CLK_OPS_PARENT_ENABLE,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -805,7 +805,7 @@ static struct clk_rcg2 gcc_camss_mclk1_clk_src = {
+ 		.parent_data = gcc_parents_3,
+ 		.num_parents = ARRAY_SIZE(gcc_parents_3),
+ 		.flags = CLK_SET_RATE_PARENT | CLK_OPS_PARENT_ENABLE,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -820,7 +820,7 @@ static struct clk_rcg2 gcc_camss_mclk2_clk_src = {
+ 		.parent_data = gcc_parents_3,
+ 		.num_parents = ARRAY_SIZE(gcc_parents_3),
+ 		.flags = CLK_SET_RATE_PARENT | CLK_OPS_PARENT_ENABLE,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -835,7 +835,7 @@ static struct clk_rcg2 gcc_camss_mclk3_clk_src = {
+ 		.parent_data = gcc_parents_3,
+ 		.num_parents = ARRAY_SIZE(gcc_parents_3),
+ 		.flags = CLK_SET_RATE_PARENT | CLK_OPS_PARENT_ENABLE,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -857,7 +857,7 @@ static struct clk_rcg2 gcc_camss_ope_ahb_clk_src = {
+ 		.parent_data = gcc_parents_8,
+ 		.num_parents = ARRAY_SIZE(gcc_parents_8),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -881,7 +881,7 @@ static struct clk_rcg2 gcc_camss_ope_clk_src = {
+ 		.parent_data = gcc_parents_8,
+ 		.num_parents = ARRAY_SIZE(gcc_parents_8),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -916,7 +916,7 @@ static struct clk_rcg2 gcc_camss_tfe_0_clk_src = {
+ 		.parent_data = gcc_parents_5,
+ 		.num_parents = ARRAY_SIZE(gcc_parents_5),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -941,7 +941,7 @@ static struct clk_rcg2 gcc_camss_tfe_0_csid_clk_src = {
+ 		.parent_data = gcc_parents_6,
+ 		.num_parents = ARRAY_SIZE(gcc_parents_6),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -956,7 +956,7 @@ static struct clk_rcg2 gcc_camss_tfe_1_clk_src = {
+ 		.parent_data = gcc_parents_5,
+ 		.num_parents = ARRAY_SIZE(gcc_parents_5),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -971,7 +971,7 @@ static struct clk_rcg2 gcc_camss_tfe_1_csid_clk_src = {
+ 		.parent_data = gcc_parents_6,
+ 		.num_parents = ARRAY_SIZE(gcc_parents_6),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -986,7 +986,7 @@ static struct clk_rcg2 gcc_camss_tfe_2_clk_src = {
+ 		.parent_data = gcc_parents_5,
+ 		.num_parents = ARRAY_SIZE(gcc_parents_5),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -1001,7 +1001,7 @@ static struct clk_rcg2 gcc_camss_tfe_2_csid_clk_src = {
+ 		.parent_data = gcc_parents_6,
+ 		.num_parents = ARRAY_SIZE(gcc_parents_6),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -1024,7 +1024,7 @@ static struct clk_rcg2 gcc_camss_tfe_cphy_rx_clk_src = {
+ 		.parent_data = gcc_parents_10,
+ 		.num_parents = ARRAY_SIZE(gcc_parents_10),
+ 		.flags = CLK_SET_RATE_PARENT | CLK_OPS_PARENT_ENABLE,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -1046,7 +1046,7 @@ static struct clk_rcg2 gcc_camss_top_ahb_clk_src = {
+ 		.parent_data = gcc_parents_7,
+ 		.num_parents = ARRAY_SIZE(gcc_parents_7),
+ 		.flags = CLK_SET_RATE_PARENT | CLK_OPS_PARENT_ENABLE,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -1116,7 +1116,7 @@ static struct clk_rcg2 gcc_pdm2_clk_src = {
+ 		.name = "gcc_pdm2_clk_src",
+ 		.parent_data = gcc_parents_0,
+ 		.num_parents = ARRAY_SIZE(gcc_parents_0),
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -1329,7 +1329,7 @@ static struct clk_rcg2 gcc_ufs_phy_axi_clk_src = {
+ 		.name = "gcc_ufs_phy_axi_clk_src",
+ 		.parent_data = gcc_parents_0,
+ 		.num_parents = ARRAY_SIZE(gcc_parents_0),
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -1351,7 +1351,7 @@ static struct clk_rcg2 gcc_ufs_phy_ice_core_clk_src = {
+ 		.name = "gcc_ufs_phy_ice_core_clk_src",
+ 		.parent_data = gcc_parents_0,
+ 		.num_parents = ARRAY_SIZE(gcc_parents_0),
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -1392,7 +1392,7 @@ static struct clk_rcg2 gcc_ufs_phy_unipro_core_clk_src = {
+ 		.name = "gcc_ufs_phy_unipro_core_clk_src",
+ 		.parent_data = gcc_parents_0,
+ 		.num_parents = ARRAY_SIZE(gcc_parents_0),
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -1414,7 +1414,7 @@ static struct clk_rcg2 gcc_usb30_prim_master_clk_src = {
+ 		.name = "gcc_usb30_prim_master_clk_src",
+ 		.parent_data = gcc_parents_0,
+ 		.num_parents = ARRAY_SIZE(gcc_parents_0),
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -1483,7 +1483,7 @@ static struct clk_rcg2 gcc_video_venus_clk_src = {
+ 		.parent_data = gcc_parents_13,
+ 		.num_parents = ARRAY_SIZE(gcc_parents_13),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+diff --git a/drivers/clk/qcom/gcc-sm8350.c b/drivers/clk/qcom/gcc-sm8350.c
+index af4a1ea284215..1385a98eb3bbe 100644
+--- a/drivers/clk/qcom/gcc-sm8350.c
++++ b/drivers/clk/qcom/gcc-sm8350.c
+@@ -17,6 +17,7 @@
+ #include "clk-regmap.h"
+ #include "clk-regmap-divider.h"
+ #include "clk-regmap-mux.h"
++#include "clk-regmap-phy-mux.h"
+ #include "gdsc.h"
+ #include "reset.h"
+ 
+@@ -158,26 +159,6 @@ static const struct clk_parent_data gcc_parent_data_3[] = {
+ 	{ .fw_name = "bi_tcxo" },
+ };
+ 
+-static const struct parent_map gcc_parent_map_4[] = {
+-	{ P_PCIE_0_PIPE_CLK, 0 },
+-	{ P_BI_TCXO, 2 },
+-};
+-
+-static const struct clk_parent_data gcc_parent_data_4[] = {
+-	{ .fw_name = "pcie_0_pipe_clk", },
+-	{ .fw_name = "bi_tcxo" },
+-};
+-
+-static const struct parent_map gcc_parent_map_5[] = {
+-	{ P_PCIE_1_PIPE_CLK, 0 },
+-	{ P_BI_TCXO, 2 },
+-};
+-
+-static const struct clk_parent_data gcc_parent_data_5[] = {
+-	{ .fw_name = "pcie_1_pipe_clk" },
+-	{ .fw_name = "bi_tcxo" },
+-};
+-
+ static const struct parent_map gcc_parent_map_6[] = {
+ 	{ P_BI_TCXO, 0 },
+ 	{ P_GCC_GPLL0_OUT_MAIN, 1 },
+@@ -274,32 +255,30 @@ static const struct clk_parent_data gcc_parent_data_14[] = {
+ 	{ .fw_name = "bi_tcxo" },
+ };
+ 
+-static struct clk_regmap_mux gcc_pcie_0_pipe_clk_src = {
++static struct clk_regmap_phy_mux gcc_pcie_0_pipe_clk_src = {
+ 	.reg = 0x6b054,
+-	.shift = 0,
+-	.width = 2,
+-	.parent_map = gcc_parent_map_4,
+ 	.clkr = {
+ 		.hw.init = &(struct clk_init_data){
+ 			.name = "gcc_pcie_0_pipe_clk_src",
+-			.parent_data = gcc_parent_data_4,
+-			.num_parents = ARRAY_SIZE(gcc_parent_data_4),
+-			.ops = &clk_regmap_mux_closest_ops,
++			.parent_data = &(const struct clk_parent_data){
++				.fw_name = "pcie_0_pipe_clk",
++			},
++			.num_parents = 1,
++			.ops = &clk_regmap_phy_mux_ops,
+ 		},
+ 	},
+ };
+ 
+-static struct clk_regmap_mux gcc_pcie_1_pipe_clk_src = {
++static struct clk_regmap_phy_mux gcc_pcie_1_pipe_clk_src = {
+ 	.reg = 0x8d054,
+-	.shift = 0,
+-	.width = 2,
+-	.parent_map = gcc_parent_map_5,
+ 	.clkr = {
+ 		.hw.init = &(struct clk_init_data){
+ 			.name = "gcc_pcie_1_pipe_clk_src",
+-			.parent_data = gcc_parent_data_5,
+-			.num_parents = ARRAY_SIZE(gcc_parent_data_5),
+-			.ops = &clk_regmap_mux_closest_ops,
++			.parent_data = &(const struct clk_parent_data){
++				.fw_name = "pcie_1_pipe_clk",
++			},
++			.num_parents = 1,
++			.ops = &clk_regmap_phy_mux_ops,
+ 		},
+ 	},
+ };
+diff --git a/drivers/clk/qcom/lpassaudiocc-sc7280.c b/drivers/clk/qcom/lpassaudiocc-sc7280.c
+index 1339f9211a149..134eb1529ede2 100644
+--- a/drivers/clk/qcom/lpassaudiocc-sc7280.c
++++ b/drivers/clk/qcom/lpassaudiocc-sc7280.c
+@@ -696,6 +696,8 @@ static const struct qcom_cc_desc lpass_cc_sc7280_desc = {
+ 	.config = &lpass_audio_cc_sc7280_regmap_config,
+ 	.clks = lpass_cc_sc7280_clocks,
+ 	.num_clks = ARRAY_SIZE(lpass_cc_sc7280_clocks),
++	.gdscs = lpass_aon_cc_sc7280_gdscs,
++	.num_gdscs = ARRAY_SIZE(lpass_aon_cc_sc7280_gdscs),
+ };
+ 
+ static const struct qcom_cc_desc lpass_audio_cc_sc7280_desc = {
+diff --git a/drivers/clk/qcom/lpasscc-sc7280.c b/drivers/clk/qcom/lpasscc-sc7280.c
+index 48432010ce247..0df2b29e95e31 100644
+--- a/drivers/clk/qcom/lpasscc-sc7280.c
++++ b/drivers/clk/qcom/lpasscc-sc7280.c
+@@ -121,14 +121,18 @@ static int lpass_cc_sc7280_probe(struct platform_device *pdev)
+ 		goto destroy_pm_clk;
+ 	}
+ 
+-	lpass_regmap_config.name = "qdsp6ss";
+-	desc = &lpass_qdsp6ss_sc7280_desc;
+-
+-	ret = qcom_cc_probe_by_index(pdev, 0, desc);
+-	if (ret)
+-		goto destroy_pm_clk;
++	if (!of_property_read_bool(pdev->dev.of_node, "qcom,adsp-pil-mode")) {
++		lpass_regmap_config.name = "qdsp6ss";
++		lpass_regmap_config.max_register = 0x3f;
++		desc = &lpass_qdsp6ss_sc7280_desc;
++
++		ret = qcom_cc_probe_by_index(pdev, 0, desc);
++		if (ret)
++			goto destroy_pm_clk;
++	}
+ 
+ 	lpass_regmap_config.name = "top_cc";
++	lpass_regmap_config.max_register = 0x4;
+ 	desc = &lpass_cc_top_sc7280_desc;
+ 
+ 	ret = qcom_cc_probe_by_index(pdev, 1, desc);
+diff --git a/drivers/clk/rockchip/clk-rk3399.c b/drivers/clk/rockchip/clk-rk3399.c
+index 306910a3a0d38..9ebd6c451b3db 100644
+--- a/drivers/clk/rockchip/clk-rk3399.c
++++ b/drivers/clk/rockchip/clk-rk3399.c
+@@ -1263,7 +1263,7 @@ static struct rockchip_clk_branch rk3399_clk_branches[] __initdata = {
+ 			RK3399_CLKSEL_CON(56), 6, 2, MFLAGS,
+ 			RK3399_CLKGATE_CON(10), 7, GFLAGS),
+ 
+-	COMPOSITE_NOGATE(SCLK_CIF_OUT, "clk_cifout", mux_clk_cif_p, 0,
++	COMPOSITE_NOGATE(SCLK_CIF_OUT, "clk_cifout", mux_clk_cif_p, CLK_SET_RATE_PARENT,
+ 			 RK3399_CLKSEL_CON(56), 5, 1, MFLAGS, 0, 5, DFLAGS),
+ 
+ 	/* gic */
+diff --git a/drivers/clocksource/timer-davinci.c b/drivers/clocksource/timer-davinci.c
+index 9996c05425200..b1c248498be46 100644
+--- a/drivers/clocksource/timer-davinci.c
++++ b/drivers/clocksource/timer-davinci.c
+@@ -257,21 +257,25 @@ int __init davinci_timer_register(struct clk *clk,
+ 				resource_size(&timer_cfg->reg),
+ 				"davinci-timer")) {
+ 		pr_err("Unable to request memory region\n");
+-		return -EBUSY;
++		rv = -EBUSY;
++		goto exit_clk_disable;
+ 	}
+ 
+ 	base = ioremap(timer_cfg->reg.start, resource_size(&timer_cfg->reg));
+ 	if (!base) {
+ 		pr_err("Unable to map the register range\n");
+-		return -ENOMEM;
++		rv = -ENOMEM;
++		goto exit_mem_region;
+ 	}
+ 
+ 	davinci_timer_init(base);
+ 	tick_rate = clk_get_rate(clk);
+ 
+ 	clockevent = kzalloc(sizeof(*clockevent), GFP_KERNEL);
+-	if (!clockevent)
+-		return -ENOMEM;
++	if (!clockevent) {
++		rv = -ENOMEM;
++		goto exit_iounmap_base;
++	}
+ 
+ 	clockevent->dev.name = "tim12";
+ 	clockevent->dev.features = CLOCK_EVT_FEAT_ONESHOT;
+@@ -296,7 +300,7 @@ int __init davinci_timer_register(struct clk *clk,
+ 			 "clockevent/tim12", clockevent);
+ 	if (rv) {
+ 		pr_err("Unable to request the clockevent interrupt\n");
+-		return rv;
++		goto exit_free_clockevent;
+ 	}
+ 
+ 	davinci_clocksource.dev.rating = 300;
+@@ -323,13 +327,27 @@ int __init davinci_timer_register(struct clk *clk,
+ 	rv = clocksource_register_hz(&davinci_clocksource.dev, tick_rate);
+ 	if (rv) {
+ 		pr_err("Unable to register clocksource\n");
+-		return rv;
++		goto exit_free_irq;
+ 	}
+ 
+ 	sched_clock_register(davinci_timer_read_sched_clock,
+ 			     DAVINCI_TIMER_CLKSRC_BITS, tick_rate);
+ 
+ 	return 0;
++
++exit_free_irq:
++	free_irq(timer_cfg->irq[DAVINCI_TIMER_CLOCKEVENT_IRQ].start,
++			clockevent);
++exit_free_clockevent:
++	kfree(clockevent);
++exit_iounmap_base:
++	iounmap(base);
++exit_mem_region:
++	release_mem_region(timer_cfg->reg.start,
++			   resource_size(&timer_cfg->reg));
++exit_clk_disable:
++	clk_disable_unprepare(clk);
++	return rv;
+ }
+ 
+ static int __init of_davinci_timer_register(struct device_node *np)
+diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
+index 6d8fd3b8dcb52..a0742c787014d 100644
+--- a/drivers/cpufreq/cpufreq.c
++++ b/drivers/cpufreq/cpufreq.c
+@@ -1727,7 +1727,7 @@ static unsigned int cpufreq_verify_current_freq(struct cpufreq_policy *policy, b
+ 		 * MHz. In such cases it is better to avoid getting into
+ 		 * unnecessary frequency updates.
+ 		 */
+-		if (abs(policy->cur - new_freq) < HZ_PER_MHZ)
++		if (abs(policy->cur - new_freq) < KHZ_PER_MHZ)
+ 			return policy->cur;
+ 
+ 		cpufreq_out_of_sync(policy, new_freq);
+diff --git a/drivers/cpufreq/mediatek-cpufreq.c b/drivers/cpufreq/mediatek-cpufreq.c
+index 7f2680bc9a0f4..9a39a7ccfae96 100644
+--- a/drivers/cpufreq/mediatek-cpufreq.c
++++ b/drivers/cpufreq/mediatek-cpufreq.c
+@@ -373,13 +373,13 @@ static struct device *of_get_cci(struct device *cpu_dev)
+ 	struct platform_device *pdev;
+ 
+ 	np = of_parse_phandle(cpu_dev->of_node, "mediatek,cci", 0);
+-	if (IS_ERR_OR_NULL(np))
+-		return NULL;
++	if (!np)
++		return ERR_PTR(-ENODEV);
+ 
+ 	pdev = of_find_device_by_node(np);
+ 	of_node_put(np);
+-	if (IS_ERR_OR_NULL(pdev))
+-		return NULL;
++	if (!pdev)
++		return ERR_PTR(-ENODEV);
+ 
+ 	return &pdev->dev;
+ }
+@@ -401,7 +401,7 @@ static int mtk_cpu_dvfs_info_init(struct mtk_cpu_dvfs_info *info, int cpu)
+ 	info->ccifreq_bound = false;
+ 	if (info->soc_data->ccifreq_supported) {
+ 		info->cci_dev = of_get_cci(info->cpu_dev);
+-		if (IS_ERR_OR_NULL(info->cci_dev)) {
++		if (IS_ERR(info->cci_dev)) {
+ 			ret = PTR_ERR(info->cci_dev);
+ 			dev_err(cpu_dev, "cpu%d: failed to get cci device\n", cpu);
+ 			return -ENODEV;
+@@ -420,7 +420,7 @@ static int mtk_cpu_dvfs_info_init(struct mtk_cpu_dvfs_info *info, int cpu)
+ 		ret = PTR_ERR(info->inter_clk);
+ 		dev_err_probe(cpu_dev, ret,
+ 			      "cpu%d: failed to get intermediate clk\n", cpu);
+-		goto out_free_resources;
++		goto out_free_mux_clock;
+ 	}
+ 
+ 	info->proc_reg = regulator_get_optional(cpu_dev, "proc");
+@@ -428,13 +428,13 @@ static int mtk_cpu_dvfs_info_init(struct mtk_cpu_dvfs_info *info, int cpu)
+ 		ret = PTR_ERR(info->proc_reg);
+ 		dev_err_probe(cpu_dev, ret,
+ 			      "cpu%d: failed to get proc regulator\n", cpu);
+-		goto out_free_resources;
++		goto out_free_inter_clock;
+ 	}
+ 
+ 	ret = regulator_enable(info->proc_reg);
+ 	if (ret) {
+ 		dev_warn(cpu_dev, "cpu%d: failed to enable vproc\n", cpu);
+-		goto out_free_resources;
++		goto out_free_proc_reg;
+ 	}
+ 
+ 	/* Both presence and absence of sram regulator are valid cases. */
+@@ -442,14 +442,14 @@ static int mtk_cpu_dvfs_info_init(struct mtk_cpu_dvfs_info *info, int cpu)
+ 	if (IS_ERR(info->sram_reg)) {
+ 		ret = PTR_ERR(info->sram_reg);
+ 		if (ret == -EPROBE_DEFER)
+-			goto out_free_resources;
++			goto out_disable_proc_reg;
+ 
+ 		info->sram_reg = NULL;
+ 	} else {
+ 		ret = regulator_enable(info->sram_reg);
+ 		if (ret) {
+ 			dev_warn(cpu_dev, "cpu%d: failed to enable vsram\n", cpu);
+-			goto out_free_resources;
++			goto out_free_sram_reg;
+ 		}
+ 	}
+ 
+@@ -458,13 +458,13 @@ static int mtk_cpu_dvfs_info_init(struct mtk_cpu_dvfs_info *info, int cpu)
+ 	if (ret) {
+ 		dev_err(cpu_dev,
+ 			"cpu%d: failed to get OPP-sharing information\n", cpu);
+-		goto out_free_resources;
++		goto out_disable_sram_reg;
+ 	}
+ 
+ 	ret = dev_pm_opp_of_cpumask_add_table(&info->cpus);
+ 	if (ret) {
+ 		dev_warn(cpu_dev, "cpu%d: no OPP table\n", cpu);
+-		goto out_free_resources;
++		goto out_disable_sram_reg;
+ 	}
+ 
+ 	ret = clk_prepare_enable(info->cpu_clk);
+@@ -533,43 +533,41 @@ out_disable_mux_clock:
+ out_free_opp_table:
+ 	dev_pm_opp_of_cpumask_remove_table(&info->cpus);
+ 
+-out_free_resources:
+-	if (regulator_is_enabled(info->proc_reg))
+-		regulator_disable(info->proc_reg);
+-	if (info->sram_reg && regulator_is_enabled(info->sram_reg))
++out_disable_sram_reg:
++	if (info->sram_reg)
+ 		regulator_disable(info->sram_reg);
+ 
+-	if (!IS_ERR(info->proc_reg))
+-		regulator_put(info->proc_reg);
+-	if (!IS_ERR(info->sram_reg))
++out_free_sram_reg:
++	if (info->sram_reg)
+ 		regulator_put(info->sram_reg);
+-	if (!IS_ERR(info->cpu_clk))
+-		clk_put(info->cpu_clk);
+-	if (!IS_ERR(info->inter_clk))
+-		clk_put(info->inter_clk);
++
++out_disable_proc_reg:
++	regulator_disable(info->proc_reg);
++
++out_free_proc_reg:
++	regulator_put(info->proc_reg);
++
++out_free_inter_clock:
++	clk_put(info->inter_clk);
++
++out_free_mux_clock:
++	clk_put(info->cpu_clk);
+ 
+ 	return ret;
+ }
+ 
+ static void mtk_cpu_dvfs_info_release(struct mtk_cpu_dvfs_info *info)
+ {
+-	if (!IS_ERR(info->proc_reg)) {
+-		regulator_disable(info->proc_reg);
+-		regulator_put(info->proc_reg);
+-	}
+-	if (!IS_ERR(info->sram_reg)) {
++	regulator_disable(info->proc_reg);
++	regulator_put(info->proc_reg);
++	if (info->sram_reg) {
+ 		regulator_disable(info->sram_reg);
+ 		regulator_put(info->sram_reg);
+ 	}
+-	if (!IS_ERR(info->cpu_clk)) {
+-		clk_disable_unprepare(info->cpu_clk);
+-		clk_put(info->cpu_clk);
+-	}
+-	if (!IS_ERR(info->inter_clk)) {
+-		clk_disable_unprepare(info->inter_clk);
+-		clk_put(info->inter_clk);
+-	}
+-
++	clk_disable_unprepare(info->cpu_clk);
++	clk_put(info->cpu_clk);
++	clk_disable_unprepare(info->inter_clk);
++	clk_put(info->inter_clk);
+ 	dev_pm_opp_of_cpumask_remove_table(&info->cpus);
+ 	dev_pm_opp_unregister_notifier(info->cpu_dev, &info->opp_nb);
+ }
+@@ -695,6 +693,15 @@ static const struct mtk_cpufreq_platform_data mt2701_platform_data = {
+ 	.ccifreq_supported = false,
+ };
+ 
++static const struct mtk_cpufreq_platform_data mt7622_platform_data = {
++	.min_volt_shift = 100000,
++	.max_volt_shift = 200000,
++	.proc_max_volt = 1360000,
++	.sram_min_volt = 0,
++	.sram_max_volt = 1360000,
++	.ccifreq_supported = false,
++};
++
+ static const struct mtk_cpufreq_platform_data mt8183_platform_data = {
+ 	.min_volt_shift = 100000,
+ 	.max_volt_shift = 200000,
+@@ -713,20 +720,29 @@ static const struct mtk_cpufreq_platform_data mt8186_platform_data = {
+ 	.ccifreq_supported = true,
+ };
+ 
++static const struct mtk_cpufreq_platform_data mt8516_platform_data = {
++	.min_volt_shift = 100000,
++	.max_volt_shift = 200000,
++	.proc_max_volt = 1310000,
++	.sram_min_volt = 0,
++	.sram_max_volt = 1310000,
++	.ccifreq_supported = false,
++};
++
+ /* List of machines supported by this driver */
+ static const struct of_device_id mtk_cpufreq_machines[] __initconst = {
+ 	{ .compatible = "mediatek,mt2701", .data = &mt2701_platform_data },
+ 	{ .compatible = "mediatek,mt2712", .data = &mt2701_platform_data },
+-	{ .compatible = "mediatek,mt7622", .data = &mt2701_platform_data },
+-	{ .compatible = "mediatek,mt7623", .data = &mt2701_platform_data },
+-	{ .compatible = "mediatek,mt8167", .data = &mt2701_platform_data },
++	{ .compatible = "mediatek,mt7622", .data = &mt7622_platform_data },
++	{ .compatible = "mediatek,mt7623", .data = &mt7622_platform_data },
++	{ .compatible = "mediatek,mt8167", .data = &mt8516_platform_data },
+ 	{ .compatible = "mediatek,mt817x", .data = &mt2701_platform_data },
+ 	{ .compatible = "mediatek,mt8173", .data = &mt2701_platform_data },
+ 	{ .compatible = "mediatek,mt8176", .data = &mt2701_platform_data },
+ 	{ .compatible = "mediatek,mt8183", .data = &mt8183_platform_data },
+ 	{ .compatible = "mediatek,mt8186", .data = &mt8186_platform_data },
+ 	{ .compatible = "mediatek,mt8365", .data = &mt2701_platform_data },
+-	{ .compatible = "mediatek,mt8516", .data = &mt2701_platform_data },
++	{ .compatible = "mediatek,mt8516", .data = &mt8516_platform_data },
+ 	{ }
+ };
+ MODULE_DEVICE_TABLE(of, mtk_cpufreq_machines);
+diff --git a/drivers/cpufreq/qcom-cpufreq-hw.c b/drivers/cpufreq/qcom-cpufreq-hw.c
+index 2f581d2d617de..9e78640096383 100644
+--- a/drivers/cpufreq/qcom-cpufreq-hw.c
++++ b/drivers/cpufreq/qcom-cpufreq-hw.c
+@@ -14,7 +14,6 @@
+ #include <linux/of_address.h>
+ #include <linux/of_platform.h>
+ #include <linux/pm_opp.h>
+-#include <linux/pm_qos.h>
+ #include <linux/slab.h>
+ #include <linux/spinlock.h>
+ #include <linux/units.h>
+@@ -43,7 +42,6 @@ struct qcom_cpufreq_soc_data {
+ 
+ struct qcom_cpufreq_data {
+ 	void __iomem *base;
+-	struct resource *res;
+ 
+ 	/*
+ 	 * Mutex to synchronize between de-init sequence and re-starting LMh
+@@ -58,8 +56,6 @@ struct qcom_cpufreq_data {
+ 	struct clk_hw cpu_clk;
+ 
+ 	bool per_core_dcvs;
+-
+-	struct freq_qos_request throttle_freq_req;
+ };
+ 
+ static struct {
+@@ -349,8 +345,6 @@ static void qcom_lmh_dcvs_notify(struct qcom_cpufreq_data *data)
+ 
+ 	throttled_freq = freq_hz / HZ_PER_KHZ;
+ 
+-	freq_qos_update_request(&data->throttle_freq_req, throttled_freq);
+-
+ 	/* Update thermal pressure (the boost frequencies are accepted) */
+ 	arch_update_thermal_pressure(policy->related_cpus, throttled_freq);
+ 
+@@ -443,14 +437,6 @@ static int qcom_cpufreq_hw_lmh_init(struct cpufreq_policy *policy, int index)
+ 	if (data->throttle_irq < 0)
+ 		return data->throttle_irq;
+ 
+-	ret = freq_qos_add_request(&policy->constraints,
+-				   &data->throttle_freq_req, FREQ_QOS_MAX,
+-				   FREQ_QOS_MAX_DEFAULT_VALUE);
+-	if (ret < 0) {
+-		dev_err(&pdev->dev, "Failed to add freq constraint (%d)\n", ret);
+-		return ret;
+-	}
+-
+ 	data->cancel_throttle = false;
+ 	data->policy = policy;
+ 
+@@ -517,7 +503,6 @@ static void qcom_cpufreq_hw_lmh_exit(struct qcom_cpufreq_data *data)
+ 	if (data->throttle_irq <= 0)
+ 		return;
+ 
+-	freq_qos_remove_request(&data->throttle_freq_req);
+ 	free_irq(data->throttle_irq, data);
+ }
+ 
+@@ -590,16 +575,12 @@ static int qcom_cpufreq_hw_cpu_exit(struct cpufreq_policy *policy)
+ {
+ 	struct device *cpu_dev = get_cpu_device(policy->cpu);
+ 	struct qcom_cpufreq_data *data = policy->driver_data;
+-	struct resource *res = data->res;
+-	void __iomem *base = data->base;
+ 
+ 	dev_pm_opp_remove_all_dynamic(cpu_dev);
+ 	dev_pm_opp_of_cpumask_remove_table(policy->related_cpus);
+ 	qcom_cpufreq_hw_lmh_exit(data);
+ 	kfree(policy->freq_table);
+ 	kfree(data);
+-	iounmap(base);
+-	release_mem_region(res->start, resource_size(res));
+ 
+ 	return 0;
+ }
+@@ -718,17 +699,15 @@ static int qcom_cpufreq_hw_driver_probe(struct platform_device *pdev)
+ 	for (i = 0; i < num_domains; i++) {
+ 		struct qcom_cpufreq_data *data = &qcom_cpufreq.data[i];
+ 		struct clk_init_data clk_init = {};
+-		struct resource *res;
+ 		void __iomem *base;
+ 
+-		base = devm_platform_get_and_ioremap_resource(pdev, i, &res);
++		base = devm_platform_ioremap_resource(pdev, i);
+ 		if (IS_ERR(base)) {
+-			dev_err(dev, "Failed to map resource %pR\n", res);
++			dev_err(dev, "Failed to map resource index %d\n", i);
+ 			return PTR_ERR(base);
+ 		}
+ 
+ 		data->base = base;
+-		data->res = res;
+ 
+ 		/* Register CPU clock for each frequency domain */
+ 		clk_init.name = kasprintf(GFP_KERNEL, "qcom_cpufreq%d", i);
+diff --git a/drivers/cpuidle/cpuidle-riscv-sbi.c b/drivers/cpuidle/cpuidle-riscv-sbi.c
+index be383f4b68556..c6b5991670362 100644
+--- a/drivers/cpuidle/cpuidle-riscv-sbi.c
++++ b/drivers/cpuidle/cpuidle-riscv-sbi.c
+@@ -613,7 +613,7 @@ static int __init sbi_cpuidle_init(void)
+ 	 * 2) SBI HSM extension is available
+ 	 */
+ 	if ((sbi_spec_version < sbi_mk_version(0, 3)) ||
+-	    sbi_probe_extension(SBI_EXT_HSM) <= 0) {
++	    !sbi_probe_extension(SBI_EXT_HSM)) {
+ 		pr_info("HSM suspend not available\n");
+ 		return 0;
+ 	}
+diff --git a/drivers/crypto/Kconfig b/drivers/crypto/Kconfig
+index 3b2516d1433f7..20222e321a436 100644
+--- a/drivers/crypto/Kconfig
++++ b/drivers/crypto/Kconfig
+@@ -810,6 +810,7 @@ config CRYPTO_DEV_SA2UL
+ 	select CRYPTO_AES
+ 	select CRYPTO_ALGAPI
+ 	select CRYPTO_AUTHENC
++	select CRYPTO_DES
+ 	select CRYPTO_SHA1
+ 	select CRYPTO_SHA256
+ 	select CRYPTO_SHA512
+diff --git a/drivers/crypto/caam/ctrl.c b/drivers/crypto/caam/ctrl.c
+index 6278afb951c30..71b14269a9979 100644
+--- a/drivers/crypto/caam/ctrl.c
++++ b/drivers/crypto/caam/ctrl.c
+@@ -284,6 +284,10 @@ static int instantiate_rng(struct device *ctrldev, int state_handle_mask,
+ 		const u32 rdsta_if = RDSTA_IF0 << sh_idx;
+ 		const u32 rdsta_pr = RDSTA_PR0 << sh_idx;
+ 		const u32 rdsta_mask = rdsta_if | rdsta_pr;
++
++		/* Clear the contents before using the descriptor */
++		memset(desc, 0x00, CAAM_CMD_SZ * 7);
++
+ 		/*
+ 		 * If the corresponding bit is set, this state handle
+ 		 * was initialized by somebody else, so it's left alone.
+@@ -327,8 +331,6 @@ static int instantiate_rng(struct device *ctrldev, int state_handle_mask,
+ 		}
+ 
+ 		dev_info(ctrldev, "Instantiated RNG4 SH%d\n", sh_idx);
+-		/* Clear the contents before recreating the descriptor */
+-		memset(desc, 0x00, CAAM_CMD_SZ * 7);
+ 	}
+ 
+ 	kfree(desc);
+diff --git a/drivers/crypto/ccp/sp-pci.c b/drivers/crypto/ccp/sp-pci.c
+index cde33b2ac71b2..62936eaa7d4c9 100644
+--- a/drivers/crypto/ccp/sp-pci.c
++++ b/drivers/crypto/ccp/sp-pci.c
+@@ -451,9 +451,9 @@ static const struct pci_device_id sp_pci_table[] = {
+ 	{ PCI_VDEVICE(AMD, 0x1468), (kernel_ulong_t)&dev_vdata[2] },
+ 	{ PCI_VDEVICE(AMD, 0x1486), (kernel_ulong_t)&dev_vdata[3] },
+ 	{ PCI_VDEVICE(AMD, 0x15DF), (kernel_ulong_t)&dev_vdata[4] },
+-	{ PCI_VDEVICE(AMD, 0x1649), (kernel_ulong_t)&dev_vdata[4] },
+ 	{ PCI_VDEVICE(AMD, 0x14CA), (kernel_ulong_t)&dev_vdata[5] },
+ 	{ PCI_VDEVICE(AMD, 0x15C7), (kernel_ulong_t)&dev_vdata[6] },
++	{ PCI_VDEVICE(AMD, 0x1649), (kernel_ulong_t)&dev_vdata[6] },
+ 	/* Last entry must be zero */
+ 	{ 0, }
+ };
+diff --git a/drivers/crypto/inside-secure/safexcel.c b/drivers/crypto/inside-secure/safexcel.c
+index 6858753af6b32..f8645e10ef4b5 100644
+--- a/drivers/crypto/inside-secure/safexcel.c
++++ b/drivers/crypto/inside-secure/safexcel.c
+@@ -1628,19 +1628,23 @@ static int safexcel_probe_generic(void *pdev,
+ 						     &priv->ring[i].rdr);
+ 		if (ret) {
+ 			dev_err(dev, "Failed to initialize rings\n");
+-			return ret;
++			goto err_cleanup_rings;
+ 		}
+ 
+ 		priv->ring[i].rdr_req = devm_kcalloc(dev,
+ 			EIP197_DEFAULT_RING_SIZE,
+ 			sizeof(*priv->ring[i].rdr_req),
+ 			GFP_KERNEL);
+-		if (!priv->ring[i].rdr_req)
+-			return -ENOMEM;
++		if (!priv->ring[i].rdr_req) {
++			ret = -ENOMEM;
++			goto err_cleanup_rings;
++		}
+ 
+ 		ring_irq = devm_kzalloc(dev, sizeof(*ring_irq), GFP_KERNEL);
+-		if (!ring_irq)
+-			return -ENOMEM;
++		if (!ring_irq) {
++			ret = -ENOMEM;
++			goto err_cleanup_rings;
++		}
+ 
+ 		ring_irq->priv = priv;
+ 		ring_irq->ring = i;
+@@ -1654,7 +1658,8 @@ static int safexcel_probe_generic(void *pdev,
+ 						ring_irq);
+ 		if (irq < 0) {
+ 			dev_err(dev, "Failed to get IRQ ID for ring %d\n", i);
+-			return irq;
++			ret = irq;
++			goto err_cleanup_rings;
+ 		}
+ 
+ 		priv->ring[i].irq = irq;
+@@ -1666,8 +1671,10 @@ static int safexcel_probe_generic(void *pdev,
+ 		snprintf(wq_name, 9, "wq_ring%d", i);
+ 		priv->ring[i].workqueue =
+ 			create_singlethread_workqueue(wq_name);
+-		if (!priv->ring[i].workqueue)
+-			return -ENOMEM;
++		if (!priv->ring[i].workqueue) {
++			ret = -ENOMEM;
++			goto err_cleanup_rings;
++		}
+ 
+ 		priv->ring[i].requests = 0;
+ 		priv->ring[i].busy = false;
+@@ -1684,16 +1691,26 @@ static int safexcel_probe_generic(void *pdev,
+ 	ret = safexcel_hw_init(priv);
+ 	if (ret) {
+ 		dev_err(dev, "HW init failed (%d)\n", ret);
+-		return ret;
++		goto err_cleanup_rings;
+ 	}
+ 
+ 	ret = safexcel_register_algorithms(priv);
+ 	if (ret) {
+ 		dev_err(dev, "Failed to register algorithms (%d)\n", ret);
+-		return ret;
++		goto err_cleanup_rings;
+ 	}
+ 
+ 	return 0;
++
++err_cleanup_rings:
++	for (i = 0; i < priv->config.rings; i++) {
++		if (priv->ring[i].irq)
++			irq_set_affinity_hint(priv->ring[i].irq, NULL);
++		if (priv->ring[i].workqueue)
++			destroy_workqueue(priv->ring[i].workqueue);
++	}
++
++	return ret;
+ }
+ 
+ static void safexcel_hw_reset_rings(struct safexcel_crypto_priv *priv)
+diff --git a/drivers/crypto/qat/qat_common/adf_accel_devices.h b/drivers/crypto/qat/qat_common/adf_accel_devices.h
+index 284f5aad3ee0b..7be933d6f0ffa 100644
+--- a/drivers/crypto/qat/qat_common/adf_accel_devices.h
++++ b/drivers/crypto/qat/qat_common/adf_accel_devices.h
+@@ -310,6 +310,7 @@ struct adf_accel_dev {
+ 			u8 pf_compat_ver;
+ 		} vf;
+ 	};
++	struct mutex state_lock; /* protect state of the device */
+ 	bool is_vf;
+ 	u32 accel_id;
+ };
+diff --git a/drivers/crypto/qat/qat_common/adf_common_drv.h b/drivers/crypto/qat/qat_common/adf_common_drv.h
+index 7189265573c08..4bf1fceb7052b 100644
+--- a/drivers/crypto/qat/qat_common/adf_common_drv.h
++++ b/drivers/crypto/qat/qat_common/adf_common_drv.h
+@@ -58,6 +58,9 @@ void adf_dev_stop(struct adf_accel_dev *accel_dev);
+ void adf_dev_shutdown(struct adf_accel_dev *accel_dev);
+ int adf_dev_shutdown_cache_cfg(struct adf_accel_dev *accel_dev);
+ 
++int adf_dev_up(struct adf_accel_dev *accel_dev, bool init_config);
++int adf_dev_down(struct adf_accel_dev *accel_dev, bool cache_config);
++
+ void adf_devmgr_update_class_index(struct adf_hw_device_data *hw_data);
+ void adf_clean_vf_map(bool);
+ 
+diff --git a/drivers/crypto/qat/qat_common/adf_dev_mgr.c b/drivers/crypto/qat/qat_common/adf_dev_mgr.c
+index 4c752eed10fea..86ee36feefad3 100644
+--- a/drivers/crypto/qat/qat_common/adf_dev_mgr.c
++++ b/drivers/crypto/qat/qat_common/adf_dev_mgr.c
+@@ -223,6 +223,7 @@ int adf_devmgr_add_dev(struct adf_accel_dev *accel_dev,
+ 		map->attached = true;
+ 		list_add_tail(&map->list, &vfs_table);
+ 	}
++	mutex_init(&accel_dev->state_lock);
+ unlock:
+ 	mutex_unlock(&table_lock);
+ 	return ret;
+@@ -269,6 +270,7 @@ void adf_devmgr_rm_dev(struct adf_accel_dev *accel_dev,
+ 		}
+ 	}
+ unlock:
++	mutex_destroy(&accel_dev->state_lock);
+ 	list_del(&accel_dev->list);
+ 	mutex_unlock(&table_lock);
+ }
+diff --git a/drivers/crypto/qat/qat_common/adf_init.c b/drivers/crypto/qat/qat_common/adf_init.c
+index cef7bb8ec0073..988cffd0b8338 100644
+--- a/drivers/crypto/qat/qat_common/adf_init.c
++++ b/drivers/crypto/qat/qat_common/adf_init.c
+@@ -400,3 +400,67 @@ int adf_dev_shutdown_cache_cfg(struct adf_accel_dev *accel_dev)
+ 
+ 	return 0;
+ }
++
++int adf_dev_down(struct adf_accel_dev *accel_dev, bool reconfig)
++{
++	int ret = 0;
++
++	if (!accel_dev)
++		return -EINVAL;
++
++	mutex_lock(&accel_dev->state_lock);
++
++	if (!adf_dev_started(accel_dev)) {
++		dev_info(&GET_DEV(accel_dev), "Device qat_dev%d already down\n",
++			 accel_dev->accel_id);
++		ret = -EINVAL;
++		goto out;
++	}
++
++	if (reconfig) {
++		ret = adf_dev_shutdown_cache_cfg(accel_dev);
++		goto out;
++	}
++
++	adf_dev_stop(accel_dev);
++	adf_dev_shutdown(accel_dev);
++
++out:
++	mutex_unlock(&accel_dev->state_lock);
++	return ret;
++}
++EXPORT_SYMBOL_GPL(adf_dev_down);
++
++int adf_dev_up(struct adf_accel_dev *accel_dev, bool config)
++{
++	int ret = 0;
++
++	if (!accel_dev)
++		return -EINVAL;
++
++	mutex_lock(&accel_dev->state_lock);
++
++	if (adf_dev_started(accel_dev)) {
++		dev_info(&GET_DEV(accel_dev), "Device qat_dev%d already up\n",
++			 accel_dev->accel_id);
++		ret = -EALREADY;
++		goto out;
++	}
++
++	if (config && GET_HW_DATA(accel_dev)->dev_config) {
++		ret = GET_HW_DATA(accel_dev)->dev_config(accel_dev);
++		if (unlikely(ret))
++			goto out;
++	}
++
++	ret = adf_dev_init(accel_dev);
++	if (unlikely(ret))
++		goto out;
++
++	ret = adf_dev_start(accel_dev);
++
++out:
++	mutex_unlock(&accel_dev->state_lock);
++	return ret;
++}
++EXPORT_SYMBOL_GPL(adf_dev_up);
+diff --git a/drivers/crypto/qat/qat_common/adf_sysfs.c b/drivers/crypto/qat/qat_common/adf_sysfs.c
+index e8b078e719c20..3eb6611ab1b11 100644
+--- a/drivers/crypto/qat/qat_common/adf_sysfs.c
++++ b/drivers/crypto/qat/qat_common/adf_sysfs.c
+@@ -50,38 +50,21 @@ static ssize_t state_store(struct device *dev, struct device_attribute *attr,
+ 
+ 	switch (ret) {
+ 	case DEV_DOWN:
+-		if (!adf_dev_started(accel_dev)) {
+-			dev_info(dev, "Device qat_dev%d already down\n",
+-				 accel_id);
+-			return -EINVAL;
+-		}
+-
+ 		dev_info(dev, "Stopping device qat_dev%d\n", accel_id);
+ 
+-		ret = adf_dev_shutdown_cache_cfg(accel_dev);
++		ret = adf_dev_down(accel_dev, true);
+ 		if (ret < 0)
+ 			return -EINVAL;
+ 
+ 		break;
+ 	case DEV_UP:
+-		if (adf_dev_started(accel_dev)) {
+-			dev_info(dev, "Device qat_dev%d already up\n",
+-				 accel_id);
+-			return -EINVAL;
+-		}
+-
+ 		dev_info(dev, "Starting device qat_dev%d\n", accel_id);
+ 
+-		ret = GET_HW_DATA(accel_dev)->dev_config(accel_dev);
+-		if (!ret)
+-			ret = adf_dev_init(accel_dev);
+-		if (!ret)
+-			ret = adf_dev_start(accel_dev);
+-
++		ret = adf_dev_up(accel_dev, true);
+ 		if (ret < 0) {
+ 			dev_err(dev, "Failed to start device qat_dev%d\n",
+ 				accel_id);
+-			adf_dev_shutdown_cache_cfg(accel_dev);
++			adf_dev_down(accel_dev, true);
+ 			return ret;
+ 		}
+ 		break;
+diff --git a/drivers/cxl/core/hdm.c b/drivers/cxl/core/hdm.c
+index 02cc2c38b44ba..abe3877cfa637 100644
+--- a/drivers/cxl/core/hdm.c
++++ b/drivers/cxl/core/hdm.c
+@@ -1,6 +1,5 @@
+ // SPDX-License-Identifier: GPL-2.0-only
+ /* Copyright(c) 2022 Intel Corporation. All rights reserved. */
+-#include <linux/io-64-nonatomic-hi-lo.h>
+ #include <linux/seq_file.h>
+ #include <linux/device.h>
+ #include <linux/delay.h>
+@@ -93,8 +92,9 @@ static int map_hdm_decoder_regs(struct cxl_port *port, void __iomem *crb,
+ 
+ 	cxl_probe_component_regs(&port->dev, crb, &map.component_map);
+ 	if (!map.component_map.hdm_decoder.valid) {
+-		dev_err(&port->dev, "HDM decoder registers invalid\n");
+-		return -ENXIO;
++		dev_dbg(&port->dev, "HDM decoder registers not implemented\n");
++		/* unique error code to indicate no HDM decoder capability */
++		return -ENODEV;
+ 	}
+ 
+ 	return cxl_map_component_regs(&port->dev, regs, &map,
+@@ -269,8 +269,11 @@ static int __cxl_dpa_reserve(struct cxl_endpoint_decoder *cxled,
+ 
+ 	lockdep_assert_held_write(&cxl_dpa_rwsem);
+ 
+-	if (!len)
+-		goto success;
++	if (!len) {
++		dev_warn(dev, "decoder%d.%d: empty reservation attempted\n",
++			 port->id, cxled->cxld.id);
++		return -EINVAL;
++	}
+ 
+ 	if (cxled->dpa_res) {
+ 		dev_dbg(dev, "decoder%d.%d: existing allocation %pr assigned\n",
+@@ -323,7 +326,6 @@ static int __cxl_dpa_reserve(struct cxl_endpoint_decoder *cxled,
+ 		cxled->mode = CXL_DECODER_MIXED;
+ 	}
+ 
+-success:
+ 	port->hdm_end++;
+ 	get_device(&cxled->cxld.dev);
+ 	return 0;
+@@ -783,8 +785,8 @@ static int init_hdm_decoder(struct cxl_port *port, struct cxl_decoder *cxld,
+ 			    int *target_map, void __iomem *hdm, int which,
+ 			    u64 *dpa_base, struct cxl_endpoint_dvsec_info *info)
+ {
++	u64 size, base, skip, dpa_size, lo, hi;
+ 	struct cxl_endpoint_decoder *cxled;
+-	u64 size, base, skip, dpa_size;
+ 	bool committed;
+ 	u32 remainder;
+ 	int i, rc;
+@@ -799,8 +801,12 @@ static int init_hdm_decoder(struct cxl_port *port, struct cxl_decoder *cxld,
+ 							which, info);
+ 
+ 	ctrl = readl(hdm + CXL_HDM_DECODER0_CTRL_OFFSET(which));
+-	base = ioread64_hi_lo(hdm + CXL_HDM_DECODER0_BASE_LOW_OFFSET(which));
+-	size = ioread64_hi_lo(hdm + CXL_HDM_DECODER0_SIZE_LOW_OFFSET(which));
++	lo = readl(hdm + CXL_HDM_DECODER0_BASE_LOW_OFFSET(which));
++	hi = readl(hdm + CXL_HDM_DECODER0_BASE_HIGH_OFFSET(which));
++	base = (hi << 32) + lo;
++	lo = readl(hdm + CXL_HDM_DECODER0_SIZE_LOW_OFFSET(which));
++	hi = readl(hdm + CXL_HDM_DECODER0_SIZE_HIGH_OFFSET(which));
++	size = (hi << 32) + lo;
+ 	committed = !!(ctrl & CXL_HDM_DECODER0_CTRL_COMMITTED);
+ 	cxld->commit = cxl_decoder_commit;
+ 	cxld->reset = cxl_decoder_reset;
+@@ -833,6 +839,13 @@ static int init_hdm_decoder(struct cxl_port *port, struct cxl_decoder *cxld,
+ 				 port->id, cxld->id);
+ 			return -ENXIO;
+ 		}
++
++		if (size == 0) {
++			dev_warn(&port->dev,
++				 "decoder%d.%d: Committed with zero size\n",
++				 port->id, cxld->id);
++			return -ENXIO;
++		}
+ 		port->commit_end = cxld->id;
+ 	} else {
+ 		/* unless / until type-2 drivers arrive, assume type-3 */
+@@ -856,8 +869,9 @@ static int init_hdm_decoder(struct cxl_port *port, struct cxl_decoder *cxld,
+ 		return rc;
+ 
+ 	if (!info) {
+-		target_list.value =
+-			ioread64_hi_lo(hdm + CXL_HDM_DECODER0_TL_LOW(which));
++		lo = readl(hdm + CXL_HDM_DECODER0_TL_LOW(which));
++		hi = readl(hdm + CXL_HDM_DECODER0_TL_HIGH(which));
++		target_list.value = (hi << 32) + lo;
+ 		for (i = 0; i < cxld->interleave_ways; i++)
+ 			target_map[i] = target_list.target_id[i];
+ 
+@@ -874,7 +888,9 @@ static int init_hdm_decoder(struct cxl_port *port, struct cxl_decoder *cxld,
+ 			port->id, cxld->id, size, cxld->interleave_ways);
+ 		return -ENXIO;
+ 	}
+-	skip = ioread64_hi_lo(hdm + CXL_HDM_DECODER0_SKIP_LOW(which));
++	lo = readl(hdm + CXL_HDM_DECODER0_SKIP_LOW(which));
++	hi = readl(hdm + CXL_HDM_DECODER0_SKIP_HIGH(which));
++	skip = (hi << 32) + lo;
+ 	cxled = to_cxl_endpoint_decoder(&cxld->dev);
+ 	rc = devm_cxl_dpa_reserve(cxled, *dpa_base + skip, dpa_size, skip);
+ 	if (rc) {
+diff --git a/drivers/cxl/port.c b/drivers/cxl/port.c
+index 22a7ab2bae7c7..eb57324c4ad4a 100644
+--- a/drivers/cxl/port.c
++++ b/drivers/cxl/port.c
+@@ -66,14 +66,22 @@ static int cxl_switch_port_probe(struct cxl_port *port)
+ 	if (rc < 0)
+ 		return rc;
+ 
+-	if (rc == 1)
+-		return devm_cxl_add_passthrough_decoder(port);
+-
+ 	cxlhdm = devm_cxl_setup_hdm(port, NULL);
+-	if (IS_ERR(cxlhdm))
++	if (!IS_ERR(cxlhdm))
++		return devm_cxl_enumerate_decoders(cxlhdm, NULL);
++
++	if (PTR_ERR(cxlhdm) != -ENODEV) {
++		dev_err(&port->dev, "Failed to map HDM decoder capability\n");
+ 		return PTR_ERR(cxlhdm);
++	}
++
++	if (rc == 1) {
++		dev_dbg(&port->dev, "Fallback to passthrough decoder\n");
++		return devm_cxl_add_passthrough_decoder(port);
++	}
+ 
+-	return devm_cxl_enumerate_decoders(cxlhdm, NULL);
++	dev_err(&port->dev, "HDM decoder capability not found\n");
++	return -ENXIO;
+ }
+ 
+ static int cxl_endpoint_port_probe(struct cxl_port *port)
+diff --git a/drivers/dma/at_xdmac.c b/drivers/dma/at_xdmac.c
+index 1f0fab180f8f1..96f1b69f8a75e 100644
+--- a/drivers/dma/at_xdmac.c
++++ b/drivers/dma/at_xdmac.c
+@@ -187,6 +187,7 @@
+ enum atc_status {
+ 	AT_XDMAC_CHAN_IS_CYCLIC = 0,
+ 	AT_XDMAC_CHAN_IS_PAUSED,
++	AT_XDMAC_CHAN_IS_PAUSED_INTERNAL,
+ };
+ 
+ struct at_xdmac_layout {
+@@ -245,6 +246,7 @@ struct at_xdmac {
+ 	int			irq;
+ 	struct clk		*clk;
+ 	u32			save_gim;
++	u32			save_gs;
+ 	struct dma_pool		*at_xdmac_desc_pool;
+ 	const struct at_xdmac_layout	*layout;
+ 	struct at_xdmac_chan	chan[];
+@@ -347,6 +349,11 @@ static inline int at_xdmac_chan_is_paused(struct at_xdmac_chan *atchan)
+ 	return test_bit(AT_XDMAC_CHAN_IS_PAUSED, &atchan->status);
+ }
+ 
++static inline int at_xdmac_chan_is_paused_internal(struct at_xdmac_chan *atchan)
++{
++	return test_bit(AT_XDMAC_CHAN_IS_PAUSED_INTERNAL, &atchan->status);
++}
++
+ static inline bool at_xdmac_chan_is_peripheral_xfer(u32 cfg)
+ {
+ 	return cfg & AT_XDMAC_CC_TYPE_PER_TRAN;
+@@ -412,7 +419,7 @@ static bool at_xdmac_chan_is_enabled(struct at_xdmac_chan *atchan)
+ 	return ret;
+ }
+ 
+-static void at_xdmac_off(struct at_xdmac *atxdmac)
++static void at_xdmac_off(struct at_xdmac *atxdmac, bool suspend_descriptors)
+ {
+ 	struct dma_chan		*chan, *_chan;
+ 	struct at_xdmac_chan	*atchan;
+@@ -431,7 +438,7 @@ static void at_xdmac_off(struct at_xdmac *atxdmac)
+ 	at_xdmac_write(atxdmac, AT_XDMAC_GID, -1L);
+ 
+ 	/* Decrement runtime PM ref counter for each active descriptor. */
+-	if (!list_empty(&atxdmac->dma.channels)) {
++	if (!list_empty(&atxdmac->dma.channels) && suspend_descriptors) {
+ 		list_for_each_entry_safe(chan, _chan, &atxdmac->dma.channels,
+ 					 device_node) {
+ 			atchan = to_at_xdmac_chan(chan);
+@@ -1898,6 +1905,26 @@ static int at_xdmac_device_config(struct dma_chan *chan,
+ 	return ret;
+ }
+ 
++static void at_xdmac_device_pause_set(struct at_xdmac *atxdmac,
++				      struct at_xdmac_chan *atchan)
++{
++	at_xdmac_write(atxdmac, atxdmac->layout->grws, atchan->mask);
++	while (at_xdmac_chan_read(atchan, AT_XDMAC_CC) &
++	       (AT_XDMAC_CC_WRIP | AT_XDMAC_CC_RDIP))
++		cpu_relax();
++}
++
++static void at_xdmac_device_pause_internal(struct at_xdmac_chan *atchan)
++{
++	struct at_xdmac		*atxdmac = to_at_xdmac(atchan->chan.device);
++	unsigned long		flags;
++
++	spin_lock_irqsave(&atchan->lock, flags);
++	set_bit(AT_XDMAC_CHAN_IS_PAUSED_INTERNAL, &atchan->status);
++	at_xdmac_device_pause_set(atxdmac, atchan);
++	spin_unlock_irqrestore(&atchan->lock, flags);
++}
++
+ static int at_xdmac_device_pause(struct dma_chan *chan)
+ {
+ 	struct at_xdmac_chan	*atchan = to_at_xdmac_chan(chan);
+@@ -1915,11 +1942,8 @@ static int at_xdmac_device_pause(struct dma_chan *chan)
+ 		return ret;
+ 
+ 	spin_lock_irqsave(&atchan->lock, flags);
+-	at_xdmac_write(atxdmac, atxdmac->layout->grws, atchan->mask);
+-	while (at_xdmac_chan_read(atchan, AT_XDMAC_CC)
+-	       & (AT_XDMAC_CC_WRIP | AT_XDMAC_CC_RDIP))
+-		cpu_relax();
+ 
++	at_xdmac_device_pause_set(atxdmac, atchan);
+ 	/* Decrement runtime PM ref counter for each active descriptor. */
+ 	at_xdmac_runtime_suspend_descriptors(atchan);
+ 
+@@ -1931,6 +1955,17 @@ static int at_xdmac_device_pause(struct dma_chan *chan)
+ 	return 0;
+ }
+ 
++static void at_xdmac_device_resume_internal(struct at_xdmac_chan *atchan)
++{
++	struct at_xdmac		*atxdmac = to_at_xdmac(atchan->chan.device);
++	unsigned long		flags;
++
++	spin_lock_irqsave(&atchan->lock, flags);
++	at_xdmac_write(atxdmac, atxdmac->layout->grwr, atchan->mask);
++	clear_bit(AT_XDMAC_CHAN_IS_PAUSED_INTERNAL, &atchan->status);
++	spin_unlock_irqrestore(&atchan->lock, flags);
++}
++
+ static int at_xdmac_device_resume(struct dma_chan *chan)
+ {
+ 	struct at_xdmac_chan	*atchan = to_at_xdmac_chan(chan);
+@@ -2118,19 +2153,24 @@ static int __maybe_unused atmel_xdmac_suspend(struct device *dev)
+ 
+ 		atchan->save_cc = at_xdmac_chan_read(atchan, AT_XDMAC_CC);
+ 		if (at_xdmac_chan_is_cyclic(atchan)) {
+-			if (!at_xdmac_chan_is_paused(atchan))
+-				at_xdmac_device_pause(chan);
++			if (!at_xdmac_chan_is_paused(atchan)) {
++				at_xdmac_device_pause_internal(atchan);
++				at_xdmac_runtime_suspend_descriptors(atchan);
++			}
+ 			atchan->save_cim = at_xdmac_chan_read(atchan, AT_XDMAC_CIM);
+ 			atchan->save_cnda = at_xdmac_chan_read(atchan, AT_XDMAC_CNDA);
+ 			atchan->save_cndc = at_xdmac_chan_read(atchan, AT_XDMAC_CNDC);
+ 		}
+-
+-		at_xdmac_runtime_suspend_descriptors(atchan);
+ 	}
+ 	atxdmac->save_gim = at_xdmac_read(atxdmac, AT_XDMAC_GIM);
++	atxdmac->save_gs = at_xdmac_read(atxdmac, AT_XDMAC_GS);
+ 
+-	at_xdmac_off(atxdmac);
+-	return pm_runtime_force_suspend(atxdmac->dev);
++	at_xdmac_off(atxdmac, false);
++	pm_runtime_mark_last_busy(atxdmac->dev);
++	pm_runtime_put_noidle(atxdmac->dev);
++	clk_disable_unprepare(atxdmac->clk);
++
++	return 0;
+ }
+ 
+ static int __maybe_unused atmel_xdmac_resume(struct device *dev)
+@@ -2142,10 +2182,12 @@ static int __maybe_unused atmel_xdmac_resume(struct device *dev)
+ 	int			i;
+ 	int ret;
+ 
+-	ret = pm_runtime_force_resume(atxdmac->dev);
+-	if (ret < 0)
++	ret = clk_prepare_enable(atxdmac->clk);
++	if (ret)
+ 		return ret;
+ 
++	pm_runtime_get_noresume(atxdmac->dev);
++
+ 	at_xdmac_axi_config(pdev);
+ 
+ 	/* Clear pending interrupts. */
+@@ -2159,19 +2201,33 @@ static int __maybe_unused atmel_xdmac_resume(struct device *dev)
+ 	list_for_each_entry_safe(chan, _chan, &atxdmac->dma.channels, device_node) {
+ 		atchan = to_at_xdmac_chan(chan);
+ 
+-		ret = at_xdmac_runtime_resume_descriptors(atchan);
+-		if (ret < 0)
+-			return ret;
+-
+ 		at_xdmac_chan_write(atchan, AT_XDMAC_CC, atchan->save_cc);
+ 		if (at_xdmac_chan_is_cyclic(atchan)) {
+-			if (at_xdmac_chan_is_paused(atchan))
+-				at_xdmac_device_resume(chan);
++			/*
++			 * Resume only channels not explicitly paused by
++			 * consumers.
++			 */
++			if (at_xdmac_chan_is_paused_internal(atchan)) {
++				ret = at_xdmac_runtime_resume_descriptors(atchan);
++				if (ret < 0)
++					return ret;
++				at_xdmac_device_resume_internal(atchan);
++			}
++
++			/*
++			 * We may resume from a deep sleep state where power
++			 * to DMA controller is cut-off. Thus, restore the
++			 * suspend state of channels set though dmaengine API.
++			 */
++			else if (at_xdmac_chan_is_paused(atchan))
++				at_xdmac_device_pause_set(atxdmac, atchan);
++
+ 			at_xdmac_chan_write(atchan, AT_XDMAC_CNDA, atchan->save_cnda);
+ 			at_xdmac_chan_write(atchan, AT_XDMAC_CNDC, atchan->save_cndc);
+ 			at_xdmac_chan_write(atchan, AT_XDMAC_CIE, atchan->save_cim);
+ 			wmb();
+-			at_xdmac_write(atxdmac, AT_XDMAC_GE, atchan->mask);
++			if (atxdmac->save_gs & atchan->mask)
++				at_xdmac_write(atxdmac, AT_XDMAC_GE, atchan->mask);
+ 		}
+ 	}
+ 
+@@ -2312,7 +2368,7 @@ static int at_xdmac_probe(struct platform_device *pdev)
+ 	INIT_LIST_HEAD(&atxdmac->dma.channels);
+ 
+ 	/* Disable all chans and interrupts. */
+-	at_xdmac_off(atxdmac);
++	at_xdmac_off(atxdmac, true);
+ 
+ 	for (i = 0; i < nr_channels; i++) {
+ 		struct at_xdmac_chan *atchan = &atxdmac->chan[i];
+@@ -2376,7 +2432,7 @@ static int at_xdmac_remove(struct platform_device *pdev)
+ 	struct at_xdmac	*atxdmac = (struct at_xdmac *)platform_get_drvdata(pdev);
+ 	int		i;
+ 
+-	at_xdmac_off(atxdmac);
++	at_xdmac_off(atxdmac, true);
+ 	of_dma_controller_free(pdev->dev.of_node);
+ 	dma_async_device_unregister(&atxdmac->dma);
+ 	pm_runtime_disable(atxdmac->dev);
+diff --git a/drivers/dma/dw-edma/dw-edma-core.c b/drivers/dma/dw-edma/dw-edma-core.c
+index 1906a836f0aab..7d2b73ef08727 100644
+--- a/drivers/dma/dw-edma/dw-edma-core.c
++++ b/drivers/dma/dw-edma/dw-edma-core.c
+@@ -181,7 +181,7 @@ static void vchan_free_desc(struct virt_dma_desc *vdesc)
+ 	dw_edma_free_desc(vd2dw_edma_desc(vdesc));
+ }
+ 
+-static void dw_edma_start_transfer(struct dw_edma_chan *chan)
++static int dw_edma_start_transfer(struct dw_edma_chan *chan)
+ {
+ 	struct dw_edma_chunk *child;
+ 	struct dw_edma_desc *desc;
+@@ -189,16 +189,16 @@ static void dw_edma_start_transfer(struct dw_edma_chan *chan)
+ 
+ 	vd = vchan_next_desc(&chan->vc);
+ 	if (!vd)
+-		return;
++		return 0;
+ 
+ 	desc = vd2dw_edma_desc(vd);
+ 	if (!desc)
+-		return;
++		return 0;
+ 
+ 	child = list_first_entry_or_null(&desc->chunk->list,
+ 					 struct dw_edma_chunk, list);
+ 	if (!child)
+-		return;
++		return 0;
+ 
+ 	dw_edma_v0_core_start(child, !desc->xfer_sz);
+ 	desc->xfer_sz += child->ll_region.sz;
+@@ -206,6 +206,8 @@ static void dw_edma_start_transfer(struct dw_edma_chan *chan)
+ 	list_del(&child->list);
+ 	kfree(child);
+ 	desc->chunks_alloc--;
++
++	return 1;
+ }
+ 
+ static void dw_edma_device_caps(struct dma_chan *dchan,
+@@ -306,9 +308,12 @@ static void dw_edma_device_issue_pending(struct dma_chan *dchan)
+ 	struct dw_edma_chan *chan = dchan2dw_edma_chan(dchan);
+ 	unsigned long flags;
+ 
++	if (!chan->configured)
++		return;
++
+ 	spin_lock_irqsave(&chan->vc.lock, flags);
+-	if (chan->configured && chan->request == EDMA_REQ_NONE &&
+-	    chan->status == EDMA_ST_IDLE && vchan_issue_pending(&chan->vc)) {
++	if (vchan_issue_pending(&chan->vc) && chan->request == EDMA_REQ_NONE &&
++	    chan->status == EDMA_ST_IDLE) {
+ 		chan->status = EDMA_ST_BUSY;
+ 		dw_edma_start_transfer(chan);
+ 	}
+@@ -602,14 +607,14 @@ static void dw_edma_done_interrupt(struct dw_edma_chan *chan)
+ 		switch (chan->request) {
+ 		case EDMA_REQ_NONE:
+ 			desc = vd2dw_edma_desc(vd);
+-			if (desc->chunks_alloc) {
+-				chan->status = EDMA_ST_BUSY;
+-				dw_edma_start_transfer(chan);
+-			} else {
++			if (!desc->chunks_alloc) {
+ 				list_del(&vd->node);
+ 				vchan_cookie_complete(vd);
+-				chan->status = EDMA_ST_IDLE;
+ 			}
++
++			/* Continue transferring if there are remaining chunks or issued requests.
++			 */
++			chan->status = dw_edma_start_transfer(chan) ? EDMA_ST_BUSY : EDMA_ST_IDLE;
+ 			break;
+ 
+ 		case EDMA_REQ_STOP:
+diff --git a/drivers/dma/mv_xor_v2.c b/drivers/dma/mv_xor_v2.c
+index 89790beba3052..0991b82658296 100644
+--- a/drivers/dma/mv_xor_v2.c
++++ b/drivers/dma/mv_xor_v2.c
+@@ -752,7 +752,7 @@ static int mv_xor_v2_probe(struct platform_device *pdev)
+ 
+ 	xor_dev->clk = devm_clk_get(&pdev->dev, NULL);
+ 	if (PTR_ERR(xor_dev->clk) == -EPROBE_DEFER) {
+-		ret = EPROBE_DEFER;
++		ret = -EPROBE_DEFER;
+ 		goto disable_reg_clk;
+ 	}
+ 	if (!IS_ERR(xor_dev->clk)) {
+diff --git a/drivers/dma/qcom/gpi.c b/drivers/dma/qcom/gpi.c
+index 59a36cbf9b5f7..932628b319c81 100644
+--- a/drivers/dma/qcom/gpi.c
++++ b/drivers/dma/qcom/gpi.c
+@@ -1966,7 +1966,6 @@ error_alloc_ev_ring:
+ error_config_int:
+ 	gpi_free_ring(&gpii->ev_ring, gpii);
+ exit_gpi_init:
+-	mutex_unlock(&gpii->ctrl_lock);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/edac/skx_base.c b/drivers/edac/skx_base.c
+index 9397abb42c498..0a862336a7ce8 100644
+--- a/drivers/edac/skx_base.c
++++ b/drivers/edac/skx_base.c
+@@ -510,7 +510,7 @@ rir_found:
+ }
+ 
+ static u8 skx_close_row[] = {
+-	15, 16, 17, 18, 20, 21, 22, 28, 10, 11, 12, 13, 29, 30, 31, 32, 33
++	15, 16, 17, 18, 20, 21, 22, 28, 10, 11, 12, 13, 29, 30, 31, 32, 33, 34
+ };
+ 
+ static u8 skx_close_column[] = {
+@@ -518,7 +518,7 @@ static u8 skx_close_column[] = {
+ };
+ 
+ static u8 skx_open_row[] = {
+-	14, 15, 16, 20, 28, 21, 22, 23, 24, 25, 26, 27, 29, 30, 31, 32, 33
++	14, 15, 16, 20, 28, 21, 22, 23, 24, 25, 26, 27, 29, 30, 31, 32, 33, 34
+ };
+ 
+ static u8 skx_open_column[] = {
+diff --git a/drivers/firmware/arm_scmi/driver.c b/drivers/firmware/arm_scmi/driver.c
+index dbc474ff62b71..e7d97b59963b0 100644
+--- a/drivers/firmware/arm_scmi/driver.c
++++ b/drivers/firmware/arm_scmi/driver.c
+@@ -2289,7 +2289,7 @@ static int scmi_xfer_info_init(struct scmi_info *sinfo)
+ 		return ret;
+ 
+ 	ret = __scmi_xfer_info_init(sinfo, &sinfo->tx_minfo);
+-	if (!ret && idr_find(&sinfo->rx_idr, SCMI_PROTOCOL_BASE))
++	if (!ret && !idr_is_empty(&sinfo->rx_idr))
+ 		ret = __scmi_xfer_info_init(sinfo, &sinfo->rx_minfo);
+ 
+ 	return ret;
+diff --git a/drivers/firmware/qcom_scm.c b/drivers/firmware/qcom_scm.c
+index b1e11f85b8054..5f281cb1ae6ba 100644
+--- a/drivers/firmware/qcom_scm.c
++++ b/drivers/firmware/qcom_scm.c
+@@ -1506,8 +1506,7 @@ static int qcom_scm_probe(struct platform_device *pdev)
+ static void qcom_scm_shutdown(struct platform_device *pdev)
+ {
+ 	/* Clean shutdown, disable download mode to allow normal restart */
+-	if (download_mode)
+-		qcom_scm_set_download_mode(false);
++	qcom_scm_set_download_mode(false);
+ }
+ 
+ static const struct of_device_id qcom_scm_dt_match[] = {
+diff --git a/drivers/firmware/stratix10-svc.c b/drivers/firmware/stratix10-svc.c
+index bde1f543f5298..80f4e2d14e046 100644
+--- a/drivers/firmware/stratix10-svc.c
++++ b/drivers/firmware/stratix10-svc.c
+@@ -1133,8 +1133,8 @@ static int stratix10_svc_drv_probe(struct platform_device *pdev)
+ 		return ret;
+ 
+ 	genpool = svc_create_memory_pool(pdev, sh_memory);
+-	if (!genpool)
+-		return -ENOMEM;
++	if (IS_ERR(genpool))
++		return PTR_ERR(genpool);
+ 
+ 	/* allocate service controller and supporting channel */
+ 	controller = devm_kzalloc(dev, sizeof(*controller), GFP_KERNEL);
+diff --git a/drivers/fpga/fpga-bridge.c b/drivers/fpga/fpga-bridge.c
+index 0953e6e4db041..c07fcec5ab1b7 100644
+--- a/drivers/fpga/fpga-bridge.c
++++ b/drivers/fpga/fpga-bridge.c
+@@ -115,7 +115,7 @@ static int fpga_bridge_dev_match(struct device *dev, const void *data)
+ /**
+  * fpga_bridge_get - get an exclusive reference to an fpga bridge
+  * @dev:	parent device that fpga bridge was registered with
+- * @info:	fpga manager info
++ * @info:	fpga image specific information
+  *
+  * Given a device, get an exclusive reference to an fpga bridge.
+  *
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index 3d98fc2ad36b0..6f715fb930bb4 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -35,6 +35,7 @@
+ #include <linux/devcoredump.h>
+ #include <generated/utsrelease.h>
+ #include <linux/pci-p2pdma.h>
++#include <linux/apple-gmux.h>
+ 
+ #include <drm/drm_aperture.h>
+ #include <drm/drm_atomic_helper.h>
+@@ -3945,12 +3946,15 @@ fence_driver_init:
+ 	if ((adev->pdev->class >> 8) == PCI_CLASS_DISPLAY_VGA)
+ 		vga_client_register(adev->pdev, amdgpu_device_vga_set_decode);
+ 
+-	if (amdgpu_device_supports_px(ddev)) {
+-		px = true;
++	px = amdgpu_device_supports_px(ddev);
++
++	if (px || (!pci_is_thunderbolt_attached(adev->pdev) &&
++				apple_gmux_detect(NULL, NULL)))
+ 		vga_switcheroo_register_client(adev->pdev,
+ 					       &amdgpu_switcheroo_ops, px);
++
++	if (px)
+ 		vga_switcheroo_init_domain_pm_ops(adev->dev, &adev->vga_pm_domain);
+-	}
+ 
+ 	if (adev->gmc.xgmi.pending_reset)
+ 		queue_delayed_work(system_wq, &mgpu_info.delayed_reset_work,
+@@ -4054,6 +4058,7 @@ void amdgpu_device_fini_hw(struct amdgpu_device *adev)
+ void amdgpu_device_fini_sw(struct amdgpu_device *adev)
+ {
+ 	int idx;
++	bool px;
+ 
+ 	amdgpu_fence_driver_sw_fini(adev);
+ 	amdgpu_device_ip_fini(adev);
+@@ -4072,10 +4077,16 @@ void amdgpu_device_fini_sw(struct amdgpu_device *adev)
+ 
+ 	kfree(adev->bios);
+ 	adev->bios = NULL;
+-	if (amdgpu_device_supports_px(adev_to_drm(adev))) {
++
++	px = amdgpu_device_supports_px(adev_to_drm(adev));
++
++	if (px || (!pci_is_thunderbolt_attached(adev->pdev) &&
++				apple_gmux_detect(NULL, NULL)))
+ 		vga_switcheroo_unregister_client(adev->pdev);
++
++	if (px)
+ 		vga_switcheroo_fini_domain_pm_ops(adev->dev);
+-	}
++
+ 	if ((adev->pdev->class >> 8) == PCI_CLASS_DISPLAY_VGA)
+ 		vga_client_unregister(adev->pdev);
+ 
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index a01fd41643fc2..62af874f26e01 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -1854,7 +1854,8 @@ static void amdgpu_dm_fini(struct amdgpu_device *adev)
+ 		dc_deinit_callbacks(adev->dm.dc);
+ #endif
+ 
+-	dc_dmub_srv_destroy(&adev->dm.dc->ctx->dmub_srv);
++	if (adev->dm.dc)
++		dc_dmub_srv_destroy(&adev->dm.dc->ctx->dmub_srv);
+ 
+ 	if (dc_enable_dmub_notifications(adev->dm.dc)) {
+ 		kfree(adev->dm.dmub_notify);
+diff --git a/drivers/gpu/drm/amd/display/dc/dce60/Makefile b/drivers/gpu/drm/amd/display/dc/dce60/Makefile
+index dda596fa1cd76..fee331accc0e7 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce60/Makefile
++++ b/drivers/gpu/drm/amd/display/dc/dce60/Makefile
+@@ -23,7 +23,7 @@
+ # Makefile for the 'controller' sub-component of DAL.
+ # It provides the control and status of HW CRTC block.
+ 
+-CFLAGS_AMDDALPATH)/dc/dce60/dce60_resource.o = $(call cc-disable-warning, override-init)
++CFLAGS_$(AMDDALPATH)/dc/dce60/dce60_resource.o = $(call cc-disable-warning, override-init)
+ 
+ DCE60 = dce60_timing_generator.o dce60_hw_sequencer.o \
+ 	dce60_resource.o
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+index 0652b001ad549..62ea57114a856 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+@@ -161,10 +161,15 @@ int smu_get_dpm_freq_range(struct smu_context *smu,
+ 
+ int smu_set_gfx_power_up_by_imu(struct smu_context *smu)
+ {
+-	if (!smu->ppt_funcs || !smu->ppt_funcs->set_gfx_power_up_by_imu)
+-		return -EOPNOTSUPP;
++	int ret = 0;
++	struct amdgpu_device *adev = smu->adev;
+ 
+-	return smu->ppt_funcs->set_gfx_power_up_by_imu(smu);
++	if (smu->ppt_funcs->set_gfx_power_up_by_imu) {
++		ret = smu->ppt_funcs->set_gfx_power_up_by_imu(smu);
++		if (ret)
++			dev_err(adev->dev, "Failed to enable gfx imu!\n");
++	}
++	return ret;
+ }
+ 
+ static u32 smu_get_mclk(void *handle, bool low)
+@@ -195,6 +200,19 @@ static u32 smu_get_sclk(void *handle, bool low)
+ 	return clk_freq * 100;
+ }
+ 
++static int smu_set_gfx_imu_enable(struct smu_context *smu)
++{
++	struct amdgpu_device *adev = smu->adev;
++
++	if (adev->firmware.load_type != AMDGPU_FW_LOAD_PSP)
++		return 0;
++
++	if (amdgpu_in_reset(smu->adev) || adev->in_s0ix)
++		return 0;
++
++	return smu_set_gfx_power_up_by_imu(smu);
++}
++
+ static int smu_dpm_set_vcn_enable(struct smu_context *smu,
+ 				  bool enable)
+ {
+@@ -1390,15 +1408,9 @@ static int smu_hw_init(void *handle)
+ 	}
+ 
+ 	if (smu->is_apu) {
+-		if ((smu->ppt_funcs->set_gfx_power_up_by_imu) &&
+-				likely(adev->firmware.load_type == AMDGPU_FW_LOAD_PSP)) {
+-			ret = smu->ppt_funcs->set_gfx_power_up_by_imu(smu);
+-			if (ret) {
+-				dev_err(adev->dev, "Failed to Enable gfx imu!\n");
+-				return ret;
+-			}
+-		}
+-
++		ret = smu_set_gfx_imu_enable(smu);
++		if (ret)
++			return ret;
+ 		smu_dpm_set_vcn_enable(smu, true);
+ 		smu_dpm_set_jpeg_enable(smu, true);
+ 		smu_set_gfx_cgpg(smu, true);
+@@ -1675,6 +1687,10 @@ static int smu_resume(void *handle)
+ 		return ret;
+ 	}
+ 
++	ret = smu_set_gfx_imu_enable(smu);
++	if (ret)
++		return ret;
++
+ 	smu_set_gfx_cgpg(smu, true);
+ 
+ 	smu->disable_uclk_switch = 0;
+diff --git a/drivers/gpu/drm/bridge/adv7511/adv7533.c b/drivers/gpu/drm/bridge/adv7511/adv7533.c
+index fdfeadcefe805..7e3e56441aedc 100644
+--- a/drivers/gpu/drm/bridge/adv7511/adv7533.c
++++ b/drivers/gpu/drm/bridge/adv7511/adv7533.c
+@@ -103,22 +103,19 @@ void adv7533_dsi_power_off(struct adv7511 *adv)
+ enum drm_mode_status adv7533_mode_valid(struct adv7511 *adv,
+ 					const struct drm_display_mode *mode)
+ {
+-	int lanes;
++	unsigned long max_lane_freq;
+ 	struct mipi_dsi_device *dsi = adv->dsi;
++	u8 bpp = mipi_dsi_pixel_format_to_bpp(dsi->format);
+ 
+-	if (mode->clock > 80000)
+-		lanes = 4;
+-	else
+-		lanes = 3;
+-
+-	/*
+-	 * TODO: add support for dynamic switching of lanes
+-	 * by using the bridge pre_enable() op . Till then filter
+-	 * out the modes which shall need different number of lanes
+-	 * than what was configured in the device tree.
+-	 */
+-	if (lanes != dsi->lanes)
+-		return MODE_BAD;
++	/* Check max clock for either 7533 or 7535 */
++	if (mode->clock > (adv->type == ADV7533 ? 80000 : 148500))
++		return MODE_CLOCK_HIGH;
++
++	/* Check max clock for each lane */
++	max_lane_freq = (adv->type == ADV7533 ? 800000 : 891000);
++
++	if (mode->clock * bpp > max_lane_freq * adv->num_dsi_lanes)
++		return MODE_CLOCK_HIGH;
+ 
+ 	return MODE_OK;
+ }
+diff --git a/drivers/gpu/drm/drm_probe_helper.c b/drivers/gpu/drm/drm_probe_helper.c
+index 8127be134c39e..2fb9bf901a2cc 100644
+--- a/drivers/gpu/drm/drm_probe_helper.c
++++ b/drivers/gpu/drm/drm_probe_helper.c
+@@ -590,8 +590,9 @@ retry:
+ 		 */
+ 		dev->mode_config.delayed_event = true;
+ 		if (dev->mode_config.poll_enabled)
+-			schedule_delayed_work(&dev->mode_config.output_poll_work,
+-					      0);
++			mod_delayed_work(system_wq,
++					 &dev->mode_config.output_poll_work,
++					 0);
+ 	}
+ 
+ 	/* Re-enable polling in case the global poll config changed. */
+diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
+index 63b4b73f47c6a..2bef50ab0ad19 100644
+--- a/drivers/gpu/drm/i915/display/intel_display.c
++++ b/drivers/gpu/drm/i915/display/intel_display.c
+@@ -1053,7 +1053,7 @@ intel_get_crtc_new_encoder(const struct intel_atomic_state *state,
+ 		num_encoders++;
+ 	}
+ 
+-	drm_WARN(encoder->base.dev, num_encoders != 1,
++	drm_WARN(state->base.dev, num_encoders != 1,
+ 		 "%d encoders for pipe %c\n",
+ 		 num_encoders, pipe_name(master_crtc->pipe));
+ 
+diff --git a/drivers/gpu/drm/i915/pxp/intel_pxp.h b/drivers/gpu/drm/i915/pxp/intel_pxp.h
+index 04440fada711a..9658d30052224 100644
+--- a/drivers/gpu/drm/i915/pxp/intel_pxp.h
++++ b/drivers/gpu/drm/i915/pxp/intel_pxp.h
+@@ -24,6 +24,7 @@ void intel_pxp_init_hw(struct intel_pxp *pxp);
+ void intel_pxp_fini_hw(struct intel_pxp *pxp);
+ 
+ void intel_pxp_mark_termination_in_progress(struct intel_pxp *pxp);
++void intel_pxp_tee_end_arb_fw_session(struct intel_pxp *pxp, u32 arb_session_id);
+ 
+ int intel_pxp_start(struct intel_pxp *pxp);
+ 
+diff --git a/drivers/gpu/drm/i915/pxp/intel_pxp_cmd_interface_42.h b/drivers/gpu/drm/i915/pxp/intel_pxp_cmd_interface_42.h
+index 739f9072fa5fb..26f7d9f01bf3f 100644
+--- a/drivers/gpu/drm/i915/pxp/intel_pxp_cmd_interface_42.h
++++ b/drivers/gpu/drm/i915/pxp/intel_pxp_cmd_interface_42.h
+@@ -12,6 +12,9 @@
+ /* PXP-Opcode for Init Session */
+ #define PXP42_CMDID_INIT_SESSION 0x1e
+ 
++/* PXP-Opcode for Invalidate Stream Key */
++#define PXP42_CMDID_INVALIDATE_STREAM_KEY 0x00000007
++
+ /* PXP-Input-Packet: Init Session (Arb-Session) */
+ struct pxp42_create_arb_in {
+ 	struct pxp_cmd_header header;
+@@ -25,4 +28,16 @@ struct pxp42_create_arb_out {
+ 	struct pxp_cmd_header header;
+ } __packed;
+ 
++/* PXP-Input-Packet: Invalidate Stream Key */
++struct pxp42_inv_stream_key_in {
++	struct pxp_cmd_header header;
++	u32 rsvd[3];
++} __packed;
++
++/* PXP-Output-Packet: Invalidate Stream Key */
++struct pxp42_inv_stream_key_out {
++	struct pxp_cmd_header header;
++	u32 rsvd;
++} __packed;
++
+ #endif /* __INTEL_PXP_FW_INTERFACE_42_H__ */
+diff --git a/drivers/gpu/drm/i915/pxp/intel_pxp_cmd_interface_cmn.h b/drivers/gpu/drm/i915/pxp/intel_pxp_cmd_interface_cmn.h
+index aaa8187a0afbc..6f6541d5e49a6 100644
+--- a/drivers/gpu/drm/i915/pxp/intel_pxp_cmd_interface_cmn.h
++++ b/drivers/gpu/drm/i915/pxp/intel_pxp_cmd_interface_cmn.h
+@@ -18,6 +18,9 @@
+ enum pxp_status {
+ 	PXP_STATUS_SUCCESS = 0x0,
+ 	PXP_STATUS_ERROR_API_VERSION = 0x1002,
++	PXP_STATUS_NOT_READY = 0x100e,
++	PXP_STATUS_PLATFCONFIG_KF1_NOVERIF = 0x101a,
++	PXP_STATUS_PLATFCONFIG_KF1_BAD = 0x101f,
+ 	PXP_STATUS_OP_NOT_PERMITTED = 0x4013
+ };
+ 
+@@ -28,6 +31,9 @@ struct pxp_cmd_header {
+ 	union {
+ 		u32 status; /* out */
+ 		u32 stream_id; /* in */
++#define PXP_CMDHDR_EXTDATA_SESSION_VALID GENMASK(0, 0)
++#define PXP_CMDHDR_EXTDATA_APP_TYPE GENMASK(1, 1)
++#define PXP_CMDHDR_EXTDATA_SESSION_ID GENMASK(17, 2)
+ 	};
+ 	/* Length of the message (excluding the header) */
+ 	u32 buffer_len;
+diff --git a/drivers/gpu/drm/i915/pxp/intel_pxp_session.c b/drivers/gpu/drm/i915/pxp/intel_pxp_session.c
+index ae413580b81ac..48fa46091984b 100644
+--- a/drivers/gpu/drm/i915/pxp/intel_pxp_session.c
++++ b/drivers/gpu/drm/i915/pxp/intel_pxp_session.c
+@@ -74,7 +74,7 @@ static int pxp_create_arb_session(struct intel_pxp *pxp)
+ 
+ 	ret = pxp_wait_for_session_state(pxp, ARB_SESSION, true);
+ 	if (ret) {
+-		drm_err(&gt->i915->drm, "arb session failed to go in play\n");
++		drm_dbg(&gt->i915->drm, "arb session failed to go in play\n");
+ 		return ret;
+ 	}
+ 	drm_dbg(&gt->i915->drm, "PXP ARB session is alive\n");
+@@ -110,6 +110,8 @@ static int pxp_terminate_arb_session_and_global(struct intel_pxp *pxp)
+ 
+ 	intel_uncore_write(gt->uncore, PXP_GLOBAL_TERMINATE, 1);
+ 
++	intel_pxp_tee_end_arb_fw_session(pxp, ARB_SESSION);
++
+ 	return ret;
+ }
+ 
+diff --git a/drivers/gpu/drm/i915/pxp/intel_pxp_tee.c b/drivers/gpu/drm/i915/pxp/intel_pxp_tee.c
+index 73aa8015f828f..e9322ec1e0027 100644
+--- a/drivers/gpu/drm/i915/pxp/intel_pxp_tee.c
++++ b/drivers/gpu/drm/i915/pxp/intel_pxp_tee.c
+@@ -19,6 +19,37 @@
+ #include "intel_pxp_tee.h"
+ #include "intel_pxp_types.h"
+ 
++static bool
++is_fw_err_platform_config(u32 type)
++{
++	switch (type) {
++	case PXP_STATUS_ERROR_API_VERSION:
++	case PXP_STATUS_PLATFCONFIG_KF1_NOVERIF:
++	case PXP_STATUS_PLATFCONFIG_KF1_BAD:
++		return true;
++	default:
++		break;
++	}
++	return false;
++}
++
++static const char *
++fw_err_to_string(u32 type)
++{
++	switch (type) {
++	case PXP_STATUS_ERROR_API_VERSION:
++		return "ERR_API_VERSION";
++	case PXP_STATUS_NOT_READY:
++		return "ERR_NOT_READY";
++	case PXP_STATUS_PLATFCONFIG_KF1_NOVERIF:
++	case PXP_STATUS_PLATFCONFIG_KF1_BAD:
++		return "ERR_PLATFORM_CONFIG";
++	default:
++		break;
++	}
++	return NULL;
++}
++
+ static int intel_pxp_tee_io_message(struct intel_pxp *pxp,
+ 				    void *msg_in, u32 msg_in_size,
+ 				    void *msg_out, u32 msg_out_max_size,
+@@ -296,15 +327,68 @@ int intel_pxp_tee_cmd_create_arb_session(struct intel_pxp *pxp,
+ 				       &msg_out, sizeof(msg_out),
+ 				       NULL);
+ 
+-	if (ret)
+-		drm_err(&i915->drm, "Failed to send tee msg ret=[%d]\n", ret);
+-	else if (msg_out.header.status == PXP_STATUS_ERROR_API_VERSION)
+-		drm_dbg(&i915->drm, "PXP firmware version unsupported, requested: "
+-			"CMD-ID-[0x%08x] on API-Ver-[0x%08x]\n",
+-			msg_in.header.command_id, msg_in.header.api_version);
+-	else if (msg_out.header.status != 0x0)
+-		drm_warn(&i915->drm, "PXP firmware failed arb session init request ret=[0x%08x]\n",
+-			 msg_out.header.status);
++	if (ret) {
++		drm_err(&i915->drm, "Failed to send tee msg init arb session, ret=[%d]\n", ret);
++	} else if (msg_out.header.status != 0) {
++		if (is_fw_err_platform_config(msg_out.header.status)) {
++			drm_info_once(&i915->drm,
++				      "PXP init-arb-session-%d failed due to BIOS/SOC:0x%08x:%s\n",
++				      arb_session_id, msg_out.header.status,
++				      fw_err_to_string(msg_out.header.status));
++		} else {
++			drm_dbg(&i915->drm, "PXP init-arb-session--%d failed 0x%08x:%st:\n",
++				arb_session_id, msg_out.header.status,
++				fw_err_to_string(msg_out.header.status));
++			drm_dbg(&i915->drm, "     cmd-detail: ID=[0x%08x],API-Ver-[0x%08x]\n",
++				msg_in.header.command_id, msg_in.header.api_version);
++		}
++	}
+ 
+ 	return ret;
+ }
++
++void intel_pxp_tee_end_arb_fw_session(struct intel_pxp *pxp, u32 session_id)
++{
++	struct drm_i915_private *i915 = pxp->ctrl_gt->i915;
++	struct pxp42_inv_stream_key_in msg_in = {0};
++	struct pxp42_inv_stream_key_out msg_out = {0};
++	int ret, trials = 0;
++
++try_again:
++	memset(&msg_in, 0, sizeof(msg_in));
++	memset(&msg_out, 0, sizeof(msg_out));
++	msg_in.header.api_version = PXP_APIVER(4, 2);
++	msg_in.header.command_id = PXP42_CMDID_INVALIDATE_STREAM_KEY;
++	msg_in.header.buffer_len = sizeof(msg_in) - sizeof(msg_in.header);
++
++	msg_in.header.stream_id = FIELD_PREP(PXP_CMDHDR_EXTDATA_SESSION_VALID, 1);
++	msg_in.header.stream_id |= FIELD_PREP(PXP_CMDHDR_EXTDATA_APP_TYPE, 0);
++	msg_in.header.stream_id |= FIELD_PREP(PXP_CMDHDR_EXTDATA_SESSION_ID, session_id);
++
++	ret = intel_pxp_tee_io_message(pxp,
++				       &msg_in, sizeof(msg_in),
++				       &msg_out, sizeof(msg_out),
++				       NULL);
++
++	/* Cleanup coherency between GT and Firmware is critical, so try again if it fails */
++	if ((ret || msg_out.header.status != 0x0) && ++trials < 3)
++		goto try_again;
++
++	if (ret) {
++		drm_err(&i915->drm, "Failed to send tee msg for inv-stream-key-%u, ret=[%d]\n",
++			session_id, ret);
++	} else if (msg_out.header.status != 0) {
++		if (is_fw_err_platform_config(msg_out.header.status)) {
++			drm_info_once(&i915->drm,
++				      "PXP inv-stream-key-%u failed due to BIOS/SOC :0x%08x:%s\n",
++				      session_id, msg_out.header.status,
++				      fw_err_to_string(msg_out.header.status));
++		} else {
++			drm_dbg(&i915->drm, "PXP inv-stream-key-%u failed 0x%08x:%s:\n",
++				session_id, msg_out.header.status,
++				fw_err_to_string(msg_out.header.status));
++			drm_dbg(&i915->drm, "     cmd-detail: ID=[0x%08x],API-Ver-[0x%08x]\n",
++				msg_in.header.command_id, msg_in.header.api_version);
++		}
++	}
++}
+diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
+index 01e75160a84ab..22890acd47b78 100644
+--- a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
++++ b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
+@@ -69,8 +69,10 @@ static int fake_get_pages(struct drm_i915_gem_object *obj)
+ 
+ 	rem = round_up(obj->base.size, BIT(31)) >> 31;
+ 	/* restricted by sg_alloc_table */
+-	if (overflows_type(rem, unsigned int))
++	if (overflows_type(rem, unsigned int)) {
++		kfree(pages);
+ 		return -E2BIG;
++	}
+ 
+ 	if (sg_alloc_table(pages, rem, GFP)) {
+ 		kfree(pages);
+diff --git a/drivers/gpu/drm/lima/lima_drv.c b/drivers/gpu/drm/lima/lima_drv.c
+index 7b8d7178d09aa..39cab4a55f572 100644
+--- a/drivers/gpu/drm/lima/lima_drv.c
++++ b/drivers/gpu/drm/lima/lima_drv.c
+@@ -392,8 +392,10 @@ static int lima_pdev_probe(struct platform_device *pdev)
+ 
+ 	/* Allocate and initialize the DRM device. */
+ 	ddev = drm_dev_alloc(&lima_drm_driver, &pdev->dev);
+-	if (IS_ERR(ddev))
+-		return PTR_ERR(ddev);
++	if (IS_ERR(ddev)) {
++		err = PTR_ERR(ddev);
++		goto err_out0;
++	}
+ 
+ 	ddev->dev_private = ldev;
+ 	ldev->ddev = ddev;
+diff --git a/drivers/gpu/drm/mediatek/mtk_dp.c b/drivers/gpu/drm/mediatek/mtk_dp.c
+index 1f94fcc144d3a..64eee77452c04 100644
+--- a/drivers/gpu/drm/mediatek/mtk_dp.c
++++ b/drivers/gpu/drm/mediatek/mtk_dp.c
+@@ -806,10 +806,9 @@ static int mtk_dp_aux_wait_for_completion(struct mtk_dp *mtk_dp, bool is_read)
+ }
+ 
+ static int mtk_dp_aux_do_transfer(struct mtk_dp *mtk_dp, bool is_read, u8 cmd,
+-				  u32 addr, u8 *buf, size_t length)
++				  u32 addr, u8 *buf, size_t length, u8 *reply_cmd)
+ {
+ 	int ret;
+-	u32 reply_cmd;
+ 
+ 	if (is_read && (length > DP_AUX_MAX_PAYLOAD_BYTES ||
+ 			(cmd == DP_AUX_NATIVE_READ && !length)))
+@@ -841,10 +840,10 @@ static int mtk_dp_aux_do_transfer(struct mtk_dp *mtk_dp, bool is_read, u8 cmd,
+ 	/* Wait for feedback from sink device. */
+ 	ret = mtk_dp_aux_wait_for_completion(mtk_dp, is_read);
+ 
+-	reply_cmd = mtk_dp_read(mtk_dp, MTK_DP_AUX_P0_3624) &
+-		    AUX_RX_REPLY_COMMAND_AUX_TX_P0_MASK;
++	*reply_cmd = mtk_dp_read(mtk_dp, MTK_DP_AUX_P0_3624) &
++		     AUX_RX_REPLY_COMMAND_AUX_TX_P0_MASK;
+ 
+-	if (ret || reply_cmd) {
++	if (ret) {
+ 		u32 phy_status = mtk_dp_read(mtk_dp, MTK_DP_AUX_P0_3628) &
+ 				 AUX_RX_PHY_STATE_AUX_TX_P0_MASK;
+ 		if (phy_status != AUX_RX_PHY_STATE_AUX_TX_P0_RX_IDLE) {
+@@ -1823,7 +1822,8 @@ static irqreturn_t mtk_dp_hpd_event_thread(int hpd, void *dev)
+ 	spin_unlock_irqrestore(&mtk_dp->irq_thread_lock, flags);
+ 
+ 	if (status & MTK_DP_THREAD_CABLE_STATE_CHG) {
+-		drm_helper_hpd_irq_event(mtk_dp->bridge.dev);
++		if (mtk_dp->bridge.dev)
++			drm_helper_hpd_irq_event(mtk_dp->bridge.dev);
+ 
+ 		if (!mtk_dp->train_info.cable_plugged_in) {
+ 			mtk_dp_disable_sdp_aui(mtk_dp);
+@@ -2070,7 +2070,7 @@ static ssize_t mtk_dp_aux_transfer(struct drm_dp_aux *mtk_aux,
+ 		ret = mtk_dp_aux_do_transfer(mtk_dp, is_read, request,
+ 					     msg->address + accessed_bytes,
+ 					     msg->buffer + accessed_bytes,
+-					     to_access);
++					     to_access, &msg->reply);
+ 
+ 		if (ret) {
+ 			drm_info(mtk_dp->drm_dev,
+@@ -2080,7 +2080,6 @@ static ssize_t mtk_dp_aux_transfer(struct drm_dp_aux *mtk_aux,
+ 		accessed_bytes += to_access;
+ 	} while (accessed_bytes < msg->size);
+ 
+-	msg->reply = DP_AUX_NATIVE_REPLY_ACK | DP_AUX_I2C_REPLY_ACK;
+ 	return msg->size;
+ err:
+ 	msg->reply = DP_AUX_NATIVE_REPLY_NACK | DP_AUX_I2C_REPLY_NACK;
+diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+index a1e006ec5dcec..0372f89082022 100644
+--- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+@@ -1743,6 +1743,7 @@ struct msm_gpu *a5xx_gpu_init(struct drm_device *dev)
+ 	struct a5xx_gpu *a5xx_gpu = NULL;
+ 	struct adreno_gpu *adreno_gpu;
+ 	struct msm_gpu *gpu;
++	unsigned int nr_rings;
+ 	int ret;
+ 
+ 	if (!pdev) {
+@@ -1763,7 +1764,12 @@ struct msm_gpu *a5xx_gpu_init(struct drm_device *dev)
+ 
+ 	check_speed_bin(&pdev->dev);
+ 
+-	ret = adreno_gpu_init(dev, pdev, adreno_gpu, &funcs, 4);
++	nr_rings = 4;
++
++	if (adreno_is_a510(adreno_gpu))
++		nr_rings = 1;
++
++	ret = adreno_gpu_init(dev, pdev, adreno_gpu, &funcs, nr_rings);
+ 	if (ret) {
+ 		a5xx_destroy(&(a5xx_gpu->base.base));
+ 		return ERR_PTR(ret);
+diff --git a/drivers/gpu/drm/msm/adreno/adreno_device.c b/drivers/gpu/drm/msm/adreno/adreno_device.c
+index c5c4c93b3689c..cd009d56d35d5 100644
+--- a/drivers/gpu/drm/msm/adreno/adreno_device.c
++++ b/drivers/gpu/drm/msm/adreno/adreno_device.c
+@@ -438,9 +438,6 @@ struct msm_gpu *adreno_load_gpu(struct drm_device *dev)
+ 	 */
+ 	pm_runtime_enable(&pdev->dev);
+ 
+-	/* Make sure pm runtime is active and reset any previous errors */
+-	pm_runtime_set_active(&pdev->dev);
+-
+ 	ret = pm_runtime_get_sync(&pdev->dev);
+ 	if (ret < 0) {
+ 		pm_runtime_put_sync(&pdev->dev);
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+index 758261e8ac739..c237003670137 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+@@ -652,7 +652,7 @@ static int dpu_encoder_virt_atomic_check(
+ 		if (drm_atomic_crtc_needs_modeset(crtc_state)) {
+ 			dpu_rm_release(global_state, drm_enc);
+ 
+-			if (!crtc_state->active_changed || crtc_state->active)
++			if (!crtc_state->active_changed || crtc_state->enable)
+ 				ret = dpu_rm_reserve(&dpu_kms->rm, global_state,
+ 						drm_enc, crtc_state, topology);
+ 		}
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.h
+index e6590302b3bfc..2c5bafacd609c 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.h
+@@ -19,8 +19,9 @@
+  */
+ #define MAX_BLOCKS    12
+ 
+-#define DPU_HW_VER(MAJOR, MINOR, STEP) (((MAJOR & 0xF) << 28)    |\
+-		((MINOR & 0xFFF) << 16)  |\
++#define DPU_HW_VER(MAJOR, MINOR, STEP)			\
++		((((unsigned int)MAJOR & 0xF) << 28) |	\
++		((MINOR & 0xFFF) << 16) |		\
+ 		(STEP & 0xFFFF))
+ 
+ #define DPU_HW_MAJOR(rev)		((rev) >> 28)
+diff --git a/drivers/gpu/drm/msm/disp/msm_disp_snapshot.c b/drivers/gpu/drm/msm/disp/msm_disp_snapshot.c
+index b73031cd48e48..e75b97127c0d1 100644
+--- a/drivers/gpu/drm/msm/disp/msm_disp_snapshot.c
++++ b/drivers/gpu/drm/msm/disp/msm_disp_snapshot.c
+@@ -129,9 +129,6 @@ void msm_disp_snapshot_destroy(struct drm_device *drm_dev)
+ 	}
+ 
+ 	priv = drm_dev->dev_private;
+-	if (!priv->kms)
+-		return;
+-
+ 	kms = priv->kms;
+ 
+ 	if (kms->dump_worker)
+diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
+index aca48c868c14d..9ded384acba46 100644
+--- a/drivers/gpu/drm/msm/msm_drv.c
++++ b/drivers/gpu/drm/msm/msm_drv.c
+@@ -150,9 +150,6 @@ static void msm_irq_uninstall(struct drm_device *dev)
+ 	struct msm_drm_private *priv = dev->dev_private;
+ 	struct msm_kms *kms = priv->kms;
+ 
+-	if (!priv->kms)
+-		return;
+-
+ 	kms->funcs->irq_uninstall(kms);
+ 	if (kms->irq_requested)
+ 		free_irq(kms->irq, dev);
+@@ -270,6 +267,8 @@ static int msm_drm_uninit(struct device *dev)
+ 	component_unbind_all(dev, ddev);
+ 
+ 	ddev->dev_private = NULL;
++	drm_dev_put(ddev);
++
+ 	destroy_workqueue(priv->wq);
+ 
+ 	return 0;
+@@ -420,8 +419,6 @@ static int msm_drm_init(struct device *dev, const struct drm_driver *drv)
+ 	priv->dev = ddev;
+ 
+ 	priv->wq = alloc_ordered_workqueue("msm", 0);
+-	if (!priv->wq)
+-		return -ENOMEM;
+ 
+ 	INIT_LIST_HEAD(&priv->objects);
+ 	mutex_init(&priv->obj_lock);
+@@ -444,12 +441,12 @@ static int msm_drm_init(struct device *dev, const struct drm_driver *drv)
+ 
+ 	ret = msm_init_vram(ddev);
+ 	if (ret)
+-		goto err_drm_dev_put;
++		return ret;
+ 
+ 	/* Bind all our sub-components: */
+ 	ret = component_bind_all(dev, ddev);
+ 	if (ret)
+-		goto err_drm_dev_put;
++		return ret;
+ 
+ 	dma_set_max_seg_size(dev, UINT_MAX);
+ 
+@@ -544,8 +541,6 @@ static int msm_drm_init(struct device *dev, const struct drm_driver *drv)
+ 
+ err_msm_uninit:
+ 	msm_drm_uninit(dev);
+-err_drm_dev_put:
+-	drm_dev_put(ddev);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/gpu/drm/panel/panel-novatek-nt35950.c b/drivers/gpu/drm/panel/panel-novatek-nt35950.c
+index abf752b36a523..8b108ac80b556 100644
+--- a/drivers/gpu/drm/panel/panel-novatek-nt35950.c
++++ b/drivers/gpu/drm/panel/panel-novatek-nt35950.c
+@@ -585,8 +585,12 @@ static int nt35950_probe(struct mipi_dsi_device *dsi)
+ 		       DRM_MODE_CONNECTOR_DSI);
+ 
+ 	ret = drm_panel_of_backlight(&nt->panel);
+-	if (ret)
++	if (ret) {
++		if (num_dsis == 2)
++			mipi_dsi_device_unregister(nt->dsi[1]);
++
+ 		return dev_err_probe(dev, ret, "Failed to get backlight\n");
++	}
+ 
+ 	drm_panel_add(&nt->panel);
+ 
+@@ -602,6 +606,10 @@ static int nt35950_probe(struct mipi_dsi_device *dsi)
+ 
+ 		ret = mipi_dsi_attach(nt->dsi[i]);
+ 		if (ret < 0) {
++			/* If we fail to attach to either host, we're done */
++			if (num_dsis == 2)
++				mipi_dsi_device_unregister(nt->dsi[1]);
++
+ 			return dev_err_probe(dev, ret,
+ 					     "Cannot attach to DSI%d host.\n", i);
+ 		}
+diff --git a/drivers/gpu/drm/rcar-du/rcar_du_encoder.c b/drivers/gpu/drm/rcar-du/rcar_du_encoder.c
+index b1787be31e92c..7ecec7b04a8d0 100644
+--- a/drivers/gpu/drm/rcar-du/rcar_du_encoder.c
++++ b/drivers/gpu/drm/rcar-du/rcar_du_encoder.c
+@@ -109,8 +109,8 @@ int rcar_du_encoder_init(struct rcar_du_device *rcdu,
+ 	renc = drmm_encoder_alloc(&rcdu->ddev, struct rcar_du_encoder, base,
+ 				  &rcar_du_encoder_funcs, DRM_MODE_ENCODER_NONE,
+ 				  NULL);
+-	if (!renc)
+-		return -ENOMEM;
++	if (IS_ERR(renc))
++		return PTR_ERR(renc);
+ 
+ 	renc->output = output;
+ 
+diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
+index 8ea09d915c3ca..6c0800083aad8 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
++++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
+@@ -261,9 +261,6 @@ static int rockchip_drm_gem_object_mmap(struct drm_gem_object *obj,
+ 	else
+ 		ret = rockchip_drm_gem_object_mmap_dma(obj, vma);
+ 
+-	if (ret)
+-		drm_gem_vm_close(vma);
+-
+ 	return ret;
+ }
+ 
+diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
+index aa116a7bbae3a..dfce896c4baeb 100644
+--- a/drivers/gpu/drm/ttm/ttm_pool.c
++++ b/drivers/gpu/drm/ttm/ttm_pool.c
+@@ -367,6 +367,43 @@ static int ttm_pool_page_allocated(struct ttm_pool *pool, unsigned int order,
+ 	return 0;
+ }
+ 
++/**
++ * ttm_pool_free_range() - Free a range of TTM pages
++ * @pool: The pool used for allocating.
++ * @tt: The struct ttm_tt holding the page pointers.
++ * @caching: The page caching mode used by the range.
++ * @start_page: index for first page to free.
++ * @end_page: index for last page to free + 1.
++ *
++ * During allocation the ttm_tt page-vector may be populated with ranges of
++ * pages with different attributes if allocation hit an error without being
++ * able to completely fulfill the allocation. This function can be used
++ * to free these individual ranges.
++ */
++static void ttm_pool_free_range(struct ttm_pool *pool, struct ttm_tt *tt,
++				enum ttm_caching caching,
++				pgoff_t start_page, pgoff_t end_page)
++{
++	struct page **pages = tt->pages;
++	unsigned int order;
++	pgoff_t i, nr;
++
++	for (i = start_page; i < end_page; i += nr, pages += nr) {
++		struct ttm_pool_type *pt = NULL;
++
++		order = ttm_pool_page_order(pool, *pages);
++		nr = (1UL << order);
++		if (tt->dma_address)
++			ttm_pool_unmap(pool, tt->dma_address[i], nr);
++
++		pt = ttm_pool_select_type(pool, caching, order);
++		if (pt)
++			ttm_pool_type_give(pt, *pages);
++		else
++			ttm_pool_free_page(pool, caching, order, *pages);
++	}
++}
++
+ /**
+  * ttm_pool_alloc - Fill a ttm_tt object
+  *
+@@ -382,12 +419,14 @@ static int ttm_pool_page_allocated(struct ttm_pool *pool, unsigned int order,
+ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
+ 		   struct ttm_operation_ctx *ctx)
+ {
+-	unsigned long num_pages = tt->num_pages;
++	pgoff_t num_pages = tt->num_pages;
+ 	dma_addr_t *dma_addr = tt->dma_address;
+ 	struct page **caching = tt->pages;
+ 	struct page **pages = tt->pages;
++	enum ttm_caching page_caching;
+ 	gfp_t gfp_flags = GFP_USER;
+-	unsigned int i, order;
++	pgoff_t caching_divide;
++	unsigned int order;
+ 	struct page *p;
+ 	int r;
+ 
+@@ -410,6 +449,7 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
+ 	     order = min_t(unsigned int, order, __fls(num_pages))) {
+ 		struct ttm_pool_type *pt;
+ 
++		page_caching = tt->caching;
+ 		pt = ttm_pool_select_type(pool, tt->caching, order);
+ 		p = pt ? ttm_pool_type_take(pt) : NULL;
+ 		if (p) {
+@@ -418,6 +458,7 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
+ 			if (r)
+ 				goto error_free_page;
+ 
++			caching = pages;
+ 			do {
+ 				r = ttm_pool_page_allocated(pool, order, p,
+ 							    &dma_addr,
+@@ -426,14 +467,15 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
+ 				if (r)
+ 					goto error_free_page;
+ 
++				caching = pages;
+ 				if (num_pages < (1 << order))
+ 					break;
+ 
+ 				p = ttm_pool_type_take(pt);
+ 			} while (p);
+-			caching = pages;
+ 		}
+ 
++		page_caching = ttm_cached;
+ 		while (num_pages >= (1 << order) &&
+ 		       (p = ttm_pool_alloc_page(pool, gfp_flags, order))) {
+ 
+@@ -442,6 +484,7 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
+ 							   tt->caching);
+ 				if (r)
+ 					goto error_free_page;
++				caching = pages;
+ 			}
+ 			r = ttm_pool_page_allocated(pool, order, p, &dma_addr,
+ 						    &num_pages, &pages);
+@@ -468,15 +511,13 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
+ 	return 0;
+ 
+ error_free_page:
+-	ttm_pool_free_page(pool, tt->caching, order, p);
++	ttm_pool_free_page(pool, page_caching, order, p);
+ 
+ error_free_all:
+ 	num_pages = tt->num_pages - num_pages;
+-	for (i = 0; i < num_pages; ) {
+-		order = ttm_pool_page_order(pool, tt->pages[i]);
+-		ttm_pool_free_page(pool, tt->caching, order, tt->pages[i]);
+-		i += 1 << order;
+-	}
++	caching_divide = caching - tt->pages;
++	ttm_pool_free_range(pool, tt, tt->caching, 0, caching_divide);
++	ttm_pool_free_range(pool, tt, ttm_cached, caching_divide, num_pages);
+ 
+ 	return r;
+ }
+@@ -492,27 +533,7 @@ EXPORT_SYMBOL(ttm_pool_alloc);
+  */
+ void ttm_pool_free(struct ttm_pool *pool, struct ttm_tt *tt)
+ {
+-	unsigned int i;
+-
+-	for (i = 0; i < tt->num_pages; ) {
+-		struct page *p = tt->pages[i];
+-		unsigned int order, num_pages;
+-		struct ttm_pool_type *pt;
+-
+-		order = ttm_pool_page_order(pool, p);
+-		num_pages = 1ULL << order;
+-		if (tt->dma_address)
+-			ttm_pool_unmap(pool, tt->dma_address[i], num_pages);
+-
+-		pt = ttm_pool_select_type(pool, tt->caching, order);
+-		if (pt)
+-			ttm_pool_type_give(pt, tt->pages[i]);
+-		else
+-			ttm_pool_free_page(pool, tt->caching, order,
+-					   tt->pages[i]);
+-
+-		i += num_pages;
+-	}
++	ttm_pool_free_range(pool, tt, tt->caching, 0, tt->num_pages);
+ 
+ 	while (atomic_long_read(&allocated_pages) > page_pool_size)
+ 		ttm_pool_shrink();
+diff --git a/drivers/gpu/drm/vgem/vgem_fence.c b/drivers/gpu/drm/vgem/vgem_fence.c
+index c2a879734d407..e157541783959 100644
+--- a/drivers/gpu/drm/vgem/vgem_fence.c
++++ b/drivers/gpu/drm/vgem/vgem_fence.c
+@@ -249,4 +249,5 @@ void vgem_fence_close(struct vgem_file *vfile)
+ {
+ 	idr_for_each(&vfile->fence_idr, __vgem_fence_idr_fini, vfile);
+ 	idr_destroy(&vfile->fence_idr);
++	mutex_destroy(&vfile->fence_mutex);
+ }
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
+index 445d619e1fdc8..9fec194cdbf16 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
+@@ -1420,70 +1420,10 @@ static void vmw_framebuffer_bo_destroy(struct drm_framebuffer *framebuffer)
+ 	kfree(vfbd);
+ }
+ 
+-static int vmw_framebuffer_bo_dirty(struct drm_framebuffer *framebuffer,
+-				    struct drm_file *file_priv,
+-				    unsigned int flags, unsigned int color,
+-				    struct drm_clip_rect *clips,
+-				    unsigned int num_clips)
+-{
+-	struct vmw_private *dev_priv = vmw_priv(framebuffer->dev);
+-	struct vmw_framebuffer_bo *vfbd =
+-		vmw_framebuffer_to_vfbd(framebuffer);
+-	struct drm_clip_rect norect;
+-	int ret, increment = 1;
+-
+-	drm_modeset_lock_all(&dev_priv->drm);
+-
+-	if (!num_clips) {
+-		num_clips = 1;
+-		clips = &norect;
+-		norect.x1 = norect.y1 = 0;
+-		norect.x2 = framebuffer->width;
+-		norect.y2 = framebuffer->height;
+-	} else if (flags & DRM_MODE_FB_DIRTY_ANNOTATE_COPY) {
+-		num_clips /= 2;
+-		increment = 2;
+-	}
+-
+-	switch (dev_priv->active_display_unit) {
+-	case vmw_du_legacy:
+-		ret = vmw_kms_ldu_do_bo_dirty(dev_priv, &vfbd->base, 0, 0,
+-					      clips, num_clips, increment);
+-		break;
+-	default:
+-		ret = -EINVAL;
+-		WARN_ONCE(true, "Dirty called with invalid display system.\n");
+-		break;
+-	}
+-
+-	vmw_cmd_flush(dev_priv, false);
+-
+-	drm_modeset_unlock_all(&dev_priv->drm);
+-
+-	return ret;
+-}
+-
+-static int vmw_framebuffer_bo_dirty_ext(struct drm_framebuffer *framebuffer,
+-					struct drm_file *file_priv,
+-					unsigned int flags, unsigned int color,
+-					struct drm_clip_rect *clips,
+-					unsigned int num_clips)
+-{
+-	struct vmw_private *dev_priv = vmw_priv(framebuffer->dev);
+-
+-	if (dev_priv->active_display_unit == vmw_du_legacy &&
+-	    vmw_cmd_supported(dev_priv))
+-		return vmw_framebuffer_bo_dirty(framebuffer, file_priv, flags,
+-						color, clips, num_clips);
+-
+-	return drm_atomic_helper_dirtyfb(framebuffer, file_priv, flags, color,
+-					 clips, num_clips);
+-}
+-
+ static const struct drm_framebuffer_funcs vmw_framebuffer_bo_funcs = {
+ 	.create_handle = vmw_framebuffer_bo_create_handle,
+ 	.destroy = vmw_framebuffer_bo_destroy,
+-	.dirty = vmw_framebuffer_bo_dirty_ext,
++	.dirty = drm_atomic_helper_dirtyfb,
+ };
+ 
+ /*
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.h b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.h
+index 4d6e7b555db79..83595325cc186 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.h
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.h
+@@ -512,11 +512,6 @@ void vmw_du_connector_destroy_state(struct drm_connector *connector,
+  */
+ int vmw_kms_ldu_init_display(struct vmw_private *dev_priv);
+ int vmw_kms_ldu_close_display(struct vmw_private *dev_priv);
+-int vmw_kms_ldu_do_bo_dirty(struct vmw_private *dev_priv,
+-			    struct vmw_framebuffer *framebuffer,
+-			    unsigned int flags, unsigned int color,
+-			    struct drm_clip_rect *clips,
+-			    unsigned int num_clips, int increment);
+ int vmw_kms_update_proxy(struct vmw_resource *res,
+ 			 const struct drm_clip_rect *clips,
+ 			 unsigned num_clips,
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c b/drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c
+index a56e5d0ca3c65..ac72c20715f32 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c
+@@ -234,6 +234,7 @@ static const struct drm_crtc_funcs vmw_legacy_crtc_funcs = {
+ 	.atomic_duplicate_state = vmw_du_crtc_duplicate_state,
+ 	.atomic_destroy_state = vmw_du_crtc_destroy_state,
+ 	.set_config = drm_atomic_helper_set_config,
++	.page_flip = drm_atomic_helper_page_flip,
+ };
+ 
+ 
+@@ -273,6 +274,12 @@ static const struct
+ drm_connector_helper_funcs vmw_ldu_connector_helper_funcs = {
+ };
+ 
++static int vmw_kms_ldu_do_bo_dirty(struct vmw_private *dev_priv,
++				   struct vmw_framebuffer *framebuffer,
++				   unsigned int flags, unsigned int color,
++				   struct drm_mode_rect *clips,
++				   unsigned int num_clips);
++
+ /*
+  * Legacy Display Plane Functions
+  */
+@@ -291,7 +298,6 @@ vmw_ldu_primary_plane_atomic_update(struct drm_plane *plane,
+ 	struct drm_framebuffer *fb;
+ 	struct drm_crtc *crtc = new_state->crtc ?: old_state->crtc;
+ 
+-
+ 	ldu = vmw_crtc_to_ldu(crtc);
+ 	dev_priv = vmw_priv(plane->dev);
+ 	fb       = new_state->fb;
+@@ -304,8 +310,31 @@ vmw_ldu_primary_plane_atomic_update(struct drm_plane *plane,
+ 		vmw_ldu_del_active(dev_priv, ldu);
+ 
+ 	vmw_ldu_commit_list(dev_priv);
+-}
+ 
++	if (vfb && vmw_cmd_supported(dev_priv)) {
++		struct drm_mode_rect fb_rect = {
++			.x1 = 0,
++			.y1 = 0,
++			.x2 = vfb->base.width,
++			.y2 = vfb->base.height
++		};
++		struct drm_mode_rect *damage_rects = drm_plane_get_damage_clips(new_state);
++		u32 rect_count = drm_plane_get_damage_clips_count(new_state);
++		int ret;
++
++		if (!damage_rects) {
++			damage_rects = &fb_rect;
++			rect_count = 1;
++		}
++
++		ret = vmw_kms_ldu_do_bo_dirty(dev_priv, vfb, 0, 0, damage_rects, rect_count);
++
++		drm_WARN_ONCE(plane->dev, ret,
++			"vmw_kms_ldu_do_bo_dirty failed with: ret=%d\n", ret);
++
++		vmw_cmd_flush(dev_priv, false);
++	}
++}
+ 
+ static const struct drm_plane_funcs vmw_ldu_plane_funcs = {
+ 	.update_plane = drm_atomic_helper_update_plane,
+@@ -536,11 +565,11 @@ int vmw_kms_ldu_close_display(struct vmw_private *dev_priv)
+ }
+ 
+ 
+-int vmw_kms_ldu_do_bo_dirty(struct vmw_private *dev_priv,
+-			    struct vmw_framebuffer *framebuffer,
+-			    unsigned int flags, unsigned int color,
+-			    struct drm_clip_rect *clips,
+-			    unsigned int num_clips, int increment)
++static int vmw_kms_ldu_do_bo_dirty(struct vmw_private *dev_priv,
++				   struct vmw_framebuffer *framebuffer,
++				   unsigned int flags, unsigned int color,
++				   struct drm_mode_rect *clips,
++				   unsigned int num_clips)
+ {
+ 	size_t fifo_size;
+ 	int i;
+@@ -556,7 +585,7 @@ int vmw_kms_ldu_do_bo_dirty(struct vmw_private *dev_priv,
+ 		return -ENOMEM;
+ 
+ 	memset(cmd, 0, fifo_size);
+-	for (i = 0; i < num_clips; i++, clips += increment) {
++	for (i = 0; i < num_clips; i++, clips++) {
+ 		cmd[i].header = SVGA_CMD_UPDATE;
+ 		cmd[i].body.x = clips->x1;
+ 		cmd[i].body.y = clips->y1;
+diff --git a/drivers/gpu/host1x/context.c b/drivers/gpu/host1x/context.c
+index 8beedcf080abd..9ad89d22c0ca7 100644
+--- a/drivers/gpu/host1x/context.c
++++ b/drivers/gpu/host1x/context.c
+@@ -13,6 +13,11 @@
+ #include "context.h"
+ #include "dev.h"
+ 
++static void host1x_memory_context_release(struct device *dev)
++{
++	/* context device is freed in host1x_memory_context_list_free() */
++}
++
+ int host1x_memory_context_list_init(struct host1x *host1x)
+ {
+ 	struct host1x_memory_context_list *cdl = &host1x->context_list;
+@@ -51,38 +56,41 @@ int host1x_memory_context_list_init(struct host1x *host1x)
+ 		dev_set_name(&ctx->dev, "host1x-ctx.%d", i);
+ 		ctx->dev.bus = &host1x_context_device_bus_type;
+ 		ctx->dev.parent = host1x->dev;
++		ctx->dev.release = host1x_memory_context_release;
+ 
+ 		dma_set_max_seg_size(&ctx->dev, UINT_MAX);
+ 
+ 		err = device_add(&ctx->dev);
+ 		if (err) {
+ 			dev_err(host1x->dev, "could not add context device %d: %d\n", i, err);
+-			goto del_devices;
++			put_device(&ctx->dev);
++			goto unreg_devices;
+ 		}
+ 
+ 		err = of_dma_configure_id(&ctx->dev, node, true, &i);
+ 		if (err) {
+ 			dev_err(host1x->dev, "IOMMU configuration failed for context device %d: %d\n",
+ 				i, err);
+-			device_del(&ctx->dev);
+-			goto del_devices;
++			device_unregister(&ctx->dev);
++			goto unreg_devices;
+ 		}
+ 
+ 		if (!tegra_dev_iommu_get_stream_id(&ctx->dev, &ctx->stream_id) ||
+ 		    !device_iommu_mapped(&ctx->dev)) {
+ 			dev_err(host1x->dev, "Context device %d has no IOMMU!\n", i);
+-			device_del(&ctx->dev);
+-			goto del_devices;
++			device_unregister(&ctx->dev);
++			goto unreg_devices;
+ 		}
+ 	}
+ 
+ 	return 0;
+ 
+-del_devices:
++unreg_devices:
+ 	while (i--)
+-		device_del(&cdl->devs[i].dev);
++		device_unregister(&cdl->devs[i].dev);
+ 
+ 	kfree(cdl->devs);
++	cdl->devs = NULL;
+ 	cdl->len = 0;
+ 
+ 	return err;
+@@ -93,7 +101,7 @@ void host1x_memory_context_list_free(struct host1x_memory_context_list *cdl)
+ 	unsigned int i;
+ 
+ 	for (i = 0; i < cdl->len; i++)
+-		device_del(&cdl->devs[i].dev);
++		device_unregister(&cdl->devs[i].dev);
+ 
+ 	kfree(cdl->devs);
+ 	cdl->len = 0;
+diff --git a/drivers/hid/amd-sfh-hid/amd_sfh_pcie.c b/drivers/hid/amd-sfh-hid/amd_sfh_pcie.c
+index 47774b9ab3de0..c936d6a51c0cd 100644
+--- a/drivers/hid/amd-sfh-hid/amd_sfh_pcie.c
++++ b/drivers/hid/amd-sfh-hid/amd_sfh_pcie.c
+@@ -367,6 +367,14 @@ init_done:
+ 	return devm_add_action_or_reset(&pdev->dev, privdata->mp2_ops->remove, privdata);
+ }
+ 
++static void amd_sfh_shutdown(struct pci_dev *pdev)
++{
++	struct amd_mp2_dev *mp2 = pci_get_drvdata(pdev);
++
++	if (mp2 && mp2->mp2_ops)
++		mp2->mp2_ops->stop_all(mp2);
++}
++
+ static int __maybe_unused amd_mp2_pci_resume(struct device *dev)
+ {
+ 	struct amd_mp2_dev *mp2 = dev_get_drvdata(dev);
+@@ -401,6 +409,7 @@ static struct pci_driver amd_mp2_pci_driver = {
+ 	.id_table	= amd_mp2_pci_tbl,
+ 	.probe		= amd_mp2_pci_probe,
+ 	.driver.pm	= &amd_mp2_pm_ops,
++	.shutdown	= amd_sfh_shutdown,
+ };
+ module_pci_driver(amd_mp2_pci_driver);
+ 
+diff --git a/drivers/hid/amd-sfh-hid/sfh1_1/amd_sfh_desc.c b/drivers/hid/amd-sfh-hid/sfh1_1/amd_sfh_desc.c
+index 0609fea581c96..6f0d332ccf51c 100644
+--- a/drivers/hid/amd-sfh-hid/sfh1_1/amd_sfh_desc.c
++++ b/drivers/hid/amd-sfh-hid/sfh1_1/amd_sfh_desc.c
+@@ -218,7 +218,7 @@ static u8 get_input_rep(u8 current_index, int sensor_idx, int report_id,
+ 			     OFFSET_SENSOR_DATA_DEFAULT;
+ 		memcpy_fromio(&als_data, sensoraddr, sizeof(struct sfh_als_data));
+ 		get_common_inputs(&als_input.common_property, report_id);
+-		als_input.illuminance_value = als_data.lux;
++		als_input.illuminance_value = float_to_int(als_data.lux);
+ 		report_size = sizeof(als_input);
+ 		memcpy(input_report, &als_input, sizeof(als_input));
+ 		break;
+diff --git a/drivers/hid/amd-sfh-hid/sfh1_1/amd_sfh_init.c b/drivers/hid/amd-sfh-hid/sfh1_1/amd_sfh_init.c
+index a1d6e08fab7d4..bb8bd7892b674 100644
+--- a/drivers/hid/amd-sfh-hid/sfh1_1/amd_sfh_init.c
++++ b/drivers/hid/amd-sfh-hid/sfh1_1/amd_sfh_init.c
+@@ -112,6 +112,7 @@ static int amd_sfh1_1_hid_client_init(struct amd_mp2_dev *privdata)
+ 	cl_data->num_hid_devices = amd_sfh_get_sensor_num(privdata, &cl_data->sensor_idx[0]);
+ 	if (cl_data->num_hid_devices == 0)
+ 		return -ENODEV;
++	cl_data->is_any_sensor_enabled = false;
+ 
+ 	INIT_DELAYED_WORK(&cl_data->work, amd_sfh_work);
+ 	INIT_DELAYED_WORK(&cl_data->work_buffer, amd_sfh_work_buffer);
+@@ -170,6 +171,7 @@ static int amd_sfh1_1_hid_client_init(struct amd_mp2_dev *privdata)
+ 		status = (status == 0) ? SENSOR_ENABLED : SENSOR_DISABLED;
+ 
+ 		if (status == SENSOR_ENABLED) {
++			cl_data->is_any_sensor_enabled = true;
+ 			cl_data->sensor_sts[i] = SENSOR_ENABLED;
+ 			rc = amdtp_hid_probe(i, cl_data);
+ 			if (rc) {
+@@ -186,12 +188,21 @@ static int amd_sfh1_1_hid_client_init(struct amd_mp2_dev *privdata)
+ 					cl_data->sensor_sts[i]);
+ 				goto cleanup;
+ 			}
++		} else {
++			cl_data->sensor_sts[i] = SENSOR_DISABLED;
+ 		}
+ 		dev_dbg(dev, "sid 0x%x (%s) status 0x%x\n",
+ 			cl_data->sensor_idx[i], get_sensor_name(cl_data->sensor_idx[i]),
+ 			cl_data->sensor_sts[i]);
+ 	}
+ 
++	if (!cl_data->is_any_sensor_enabled) {
++		dev_warn(dev, "Failed to discover, sensors not enabled is %d\n",
++			 cl_data->is_any_sensor_enabled);
++		rc = -EOPNOTSUPP;
++		goto cleanup;
++	}
++
+ 	schedule_delayed_work(&cl_data->work_buffer, msecs_to_jiffies(AMD_SFH_IDLE_LOOP));
+ 	return 0;
+ 
+diff --git a/drivers/hid/amd-sfh-hid/sfh1_1/amd_sfh_interface.c b/drivers/hid/amd-sfh-hid/sfh1_1/amd_sfh_interface.c
+index c6df959ec7252..4f81ef2d4f56e 100644
+--- a/drivers/hid/amd-sfh-hid/sfh1_1/amd_sfh_interface.c
++++ b/drivers/hid/amd-sfh-hid/sfh1_1/amd_sfh_interface.c
+@@ -16,11 +16,11 @@ static int amd_sfh_wait_response(struct amd_mp2_dev *mp2, u8 sid, u32 cmd_id)
+ {
+ 	struct sfh_cmd_response cmd_resp;
+ 
+-	/* Get response with status within a max of 1600 ms timeout */
++	/* Get response with status within a max of 10000 ms timeout */
+ 	if (!readl_poll_timeout(mp2->mmio + AMD_P2C_MSG(0), cmd_resp.resp,
+ 				(cmd_resp.response.response == 0 &&
+ 				cmd_resp.response.cmd_id == cmd_id && (sid == 0xff ||
+-				cmd_resp.response.sensor_id == sid)), 500, 1600000))
++				cmd_resp.response.sensor_id == sid)), 500, 10000000))
+ 		return cmd_resp.response.response;
+ 
+ 	return -1;
+@@ -33,6 +33,7 @@ static void amd_start_sensor(struct amd_mp2_dev *privdata, struct amd_mp2_sensor
+ 	cmd_base.ul = 0;
+ 	cmd_base.cmd.cmd_id = ENABLE_SENSOR;
+ 	cmd_base.cmd.intr_disable = 0;
++	cmd_base.cmd.sub_cmd_value = 1;
+ 	cmd_base.cmd.sensor_id = info.sensor_idx;
+ 
+ 	writel(cmd_base.ul, privdata->mmio + AMD_C2P_MSG(0));
+@@ -45,6 +46,7 @@ static void amd_stop_sensor(struct amd_mp2_dev *privdata, u16 sensor_idx)
+ 	cmd_base.ul = 0;
+ 	cmd_base.cmd.cmd_id = DISABLE_SENSOR;
+ 	cmd_base.cmd.intr_disable = 0;
++	cmd_base.cmd.sub_cmd_value = 1;
+ 	cmd_base.cmd.sensor_id = sensor_idx;
+ 
+ 	writeq(0x0, privdata->mmio + AMD_C2P_MSG(1));
+@@ -56,8 +58,10 @@ static void amd_stop_all_sensor(struct amd_mp2_dev *privdata)
+ 	struct sfh_cmd_base cmd_base;
+ 
+ 	cmd_base.ul = 0;
+-	cmd_base.cmd.cmd_id = STOP_ALL_SENSORS;
++	cmd_base.cmd.cmd_id = DISABLE_SENSOR;
+ 	cmd_base.cmd.intr_disable = 0;
++	/* 0xf indicates all sensors */
++	cmd_base.cmd.sensor_id = 0xf;
+ 
+ 	writel(cmd_base.ul, privdata->mmio + AMD_C2P_MSG(0));
+ }
+diff --git a/drivers/hid/amd-sfh-hid/sfh1_1/amd_sfh_interface.h b/drivers/hid/amd-sfh-hid/sfh1_1/amd_sfh_interface.h
+index ae47a369dc05a..9d31d5b510eb4 100644
+--- a/drivers/hid/amd-sfh-hid/sfh1_1/amd_sfh_interface.h
++++ b/drivers/hid/amd-sfh-hid/sfh1_1/amd_sfh_interface.h
+@@ -33,9 +33,9 @@ struct sfh_cmd_base {
+ 		struct {
+ 			u32 sensor_id		: 4;
+ 			u32 cmd_id		: 4;
+-			u32 sub_cmd_id		: 6;
+-			u32 length		: 12;
+-			u32 rsvd		: 5;
++			u32 sub_cmd_id		: 8;
++			u32 sub_cmd_value	: 12;
++			u32 rsvd		: 3;
+ 			u32 intr_disable	: 1;
+ 		} cmd;
+ 	};
+@@ -133,7 +133,7 @@ struct sfh_mag_data {
+ 
+ struct sfh_als_data {
+ 	struct sfh_common_data commondata;
+-	u16 lux;
++	u32 lux;
+ };
+ 
+ struct hpd_status {
+diff --git a/drivers/hte/hte-tegra194-test.c b/drivers/hte/hte-tegra194-test.c
+index 5d776a185bd62..ce8c44e792213 100644
+--- a/drivers/hte/hte-tegra194-test.c
++++ b/drivers/hte/hte-tegra194-test.c
+@@ -6,6 +6,7 @@
+  */
+ 
+ #include <linux/err.h>
++#include <linux/mod_devicetable.h>
+ #include <linux/module.h>
+ #include <linux/moduleparam.h>
+ #include <linux/interrupt.h>
+diff --git a/drivers/hte/hte-tegra194.c b/drivers/hte/hte-tegra194.c
+index 49a27af22742b..d1b579c822797 100644
+--- a/drivers/hte/hte-tegra194.c
++++ b/drivers/hte/hte-tegra194.c
+@@ -251,7 +251,7 @@ static int tegra_hte_map_to_line_id(u32 eid,
+ {
+ 
+ 	if (m) {
+-		if (eid > map_sz)
++		if (eid >= map_sz)
+ 			return -EINVAL;
+ 		if (m[eid].slice == NV_AON_SLICE_INVALID)
+ 			return -EINVAL;
+diff --git a/drivers/hwmon/adt7475.c b/drivers/hwmon/adt7475.c
+index 6e4c92b500b8e..6a6ebcc896b1d 100644
+--- a/drivers/hwmon/adt7475.c
++++ b/drivers/hwmon/adt7475.c
+@@ -1604,9 +1604,9 @@ static int adt7475_set_pwm_polarity(struct i2c_client *client)
+ 	int ret, i;
+ 	u8 val;
+ 
+-	ret = of_property_read_u32_array(client->dev.of_node,
+-					 "adi,pwm-active-state", states,
+-					 ARRAY_SIZE(states));
++	ret = device_property_read_u32_array(&client->dev,
++					     "adi,pwm-active-state", states,
++					     ARRAY_SIZE(states));
+ 	if (ret)
+ 		return ret;
+ 
+diff --git a/drivers/hwmon/k10temp.c b/drivers/hwmon/k10temp.c
+index 5a9d47a229e40..be8bbb1c3a02d 100644
+--- a/drivers/hwmon/k10temp.c
++++ b/drivers/hwmon/k10temp.c
+@@ -75,6 +75,7 @@ static DEFINE_MUTEX(nb_smu_ind_mutex);
+ 
+ #define ZEN_CUR_TEMP_SHIFT			21
+ #define ZEN_CUR_TEMP_RANGE_SEL_MASK		BIT(19)
++#define ZEN_CUR_TEMP_TJ_SEL_MASK		GENMASK(17, 16)
+ 
+ struct k10temp_data {
+ 	struct pci_dev *pdev;
+@@ -155,7 +156,8 @@ static long get_raw_temp(struct k10temp_data *data)
+ 
+ 	data->read_tempreg(data->pdev, &regval);
+ 	temp = (regval >> ZEN_CUR_TEMP_SHIFT) * 125;
+-	if (regval & data->temp_adjust_mask)
++	if ((regval & data->temp_adjust_mask) ||
++	    (regval & ZEN_CUR_TEMP_TJ_SEL_MASK) == ZEN_CUR_TEMP_TJ_SEL_MASK)
+ 		temp -= 49000;
+ 	return temp;
+ }
+diff --git a/drivers/hwmon/pmbus/fsp-3y.c b/drivers/hwmon/pmbus/fsp-3y.c
+index aec294cc72d1f..c7469d2cdedcf 100644
+--- a/drivers/hwmon/pmbus/fsp-3y.c
++++ b/drivers/hwmon/pmbus/fsp-3y.c
+@@ -180,7 +180,6 @@ static struct pmbus_driver_info fsp3y_info[] = {
+ 			PMBUS_HAVE_FAN12,
+ 		.func[YM2151_PAGE_5VSB_LOG] =
+ 			PMBUS_HAVE_VOUT | PMBUS_HAVE_IOUT,
+-			PMBUS_HAVE_IIN,
+ 		.read_word_data = fsp3y_read_word_data,
+ 		.read_byte_data = fsp3y_read_byte_data,
+ 	},
+diff --git a/drivers/hwtracing/coresight/coresight-etm-perf.c b/drivers/hwtracing/coresight/coresight-etm-perf.c
+index a48c97da81658..711f451b69469 100644
+--- a/drivers/hwtracing/coresight/coresight-etm-perf.c
++++ b/drivers/hwtracing/coresight/coresight-etm-perf.c
+@@ -901,6 +901,7 @@ int __init etm_perf_init(void)
+ 	etm_pmu.addr_filters_sync	= etm_addr_filters_sync;
+ 	etm_pmu.addr_filters_validate	= etm_addr_filters_validate;
+ 	etm_pmu.nr_addr_filters		= ETM_ADDR_CMP_MAX;
++	etm_pmu.module			= THIS_MODULE;
+ 
+ 	ret = perf_pmu_register(&etm_pmu, CORESIGHT_ETM_PMU_NAME, -1);
+ 	if (ret == 0)
+diff --git a/drivers/i2c/busses/i2c-cadence.c b/drivers/i2c/busses/i2c-cadence.c
+index b5d22e7282c22..982c207d473b7 100644
+--- a/drivers/i2c/busses/i2c-cadence.c
++++ b/drivers/i2c/busses/i2c-cadence.c
+@@ -827,8 +827,10 @@ static int cdns_i2c_master_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs,
+ #if IS_ENABLED(CONFIG_I2C_SLAVE)
+ 	/* Check i2c operating mode and switch if possible */
+ 	if (id->dev_mode == CDNS_I2C_MODE_SLAVE) {
+-		if (id->slave_state != CDNS_I2C_SLAVE_STATE_IDLE)
+-			return -EAGAIN;
++		if (id->slave_state != CDNS_I2C_SLAVE_STATE_IDLE) {
++			ret = -EAGAIN;
++			goto out;
++		}
+ 
+ 		/* Set mode to master */
+ 		cdns_i2c_set_mode(CDNS_I2C_MODE_MASTER, id);
+diff --git a/drivers/i2c/busses/i2c-omap.c b/drivers/i2c/busses/i2c-omap.c
+index f9ae520aed228..7ec2521997061 100644
+--- a/drivers/i2c/busses/i2c-omap.c
++++ b/drivers/i2c/busses/i2c-omap.c
+@@ -1058,7 +1058,7 @@ omap_i2c_isr(int irq, void *dev_id)
+ 	u16 stat;
+ 
+ 	stat = omap_i2c_read_reg(omap, OMAP_I2C_STAT_REG);
+-	mask = omap_i2c_read_reg(omap, OMAP_I2C_IE_REG);
++	mask = omap_i2c_read_reg(omap, OMAP_I2C_IE_REG) & ~OMAP_I2C_STAT_NACK;
+ 
+ 	if (stat & mask)
+ 		ret = IRQ_WAKE_THREAD;
+diff --git a/drivers/i2c/busses/i2c-xiic.c b/drivers/i2c/busses/i2c-xiic.c
+index dbb792fc197ec..3b94b07cb37a6 100644
+--- a/drivers/i2c/busses/i2c-xiic.c
++++ b/drivers/i2c/busses/i2c-xiic.c
+@@ -1164,7 +1164,7 @@ static int xiic_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs, int num)
+ 	err = xiic_start_xfer(i2c, msgs, num);
+ 	if (err < 0) {
+ 		dev_err(adap->dev.parent, "Error xiic_start_xfer\n");
+-		return err;
++		goto out;
+ 	}
+ 
+ 	err = wait_for_completion_timeout(&i2c->completion, XIIC_XFER_TIMEOUT);
+@@ -1178,6 +1178,8 @@ static int xiic_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs, int num)
+ 		err = (i2c->state == STATE_DONE) ? num : -EIO;
+ 	}
+ 	mutex_unlock(&i2c->lock);
++
++out:
+ 	pm_runtime_mark_last_busy(i2c->dev);
+ 	pm_runtime_put_autosuspend(i2c->dev);
+ 	return err;
+diff --git a/drivers/iio/addac/stx104.c b/drivers/iio/addac/stx104.c
+index 48a91a95e597b..b658a75d4e3a8 100644
+--- a/drivers/iio/addac/stx104.c
++++ b/drivers/iio/addac/stx104.c
+@@ -15,6 +15,7 @@
+ #include <linux/kernel.h>
+ #include <linux/module.h>
+ #include <linux/moduleparam.h>
++#include <linux/mutex.h>
+ #include <linux/spinlock.h>
+ #include <linux/types.h>
+ 
+@@ -69,10 +70,12 @@ struct stx104_reg {
+ 
+ /**
+  * struct stx104_iio - IIO device private data structure
++ * @lock: synchronization lock to prevent I/O race conditions
+  * @chan_out_states:	channels' output states
+  * @reg:		I/O address offset for the device registers
+  */
+ struct stx104_iio {
++	struct mutex lock;
+ 	unsigned int chan_out_states[STX104_NUM_OUT_CHAN];
+ 	struct stx104_reg __iomem *reg;
+ };
+@@ -114,6 +117,8 @@ static int stx104_read_raw(struct iio_dev *indio_dev,
+ 			return IIO_VAL_INT;
+ 		}
+ 
++		mutex_lock(&priv->lock);
++
+ 		/* select ADC channel */
+ 		iowrite8(chan->channel | (chan->channel << 4), &reg->achan);
+ 
+@@ -124,6 +129,8 @@ static int stx104_read_raw(struct iio_dev *indio_dev,
+ 		while (ioread8(&reg->cir_asr) & BIT(7));
+ 
+ 		*val = ioread16(&reg->ssr_ad);
++
++		mutex_unlock(&priv->lock);
+ 		return IIO_VAL_INT;
+ 	case IIO_CHAN_INFO_OFFSET:
+ 		/* get ADC bipolar/unipolar configuration */
+@@ -178,9 +185,12 @@ static int stx104_write_raw(struct iio_dev *indio_dev,
+ 			if ((unsigned int)val > 65535)
+ 				return -EINVAL;
+ 
++			mutex_lock(&priv->lock);
++
+ 			priv->chan_out_states[chan->channel] = val;
+ 			iowrite16(val, &priv->reg->dac[chan->channel]);
+ 
++			mutex_unlock(&priv->lock);
+ 			return 0;
+ 		}
+ 		return -EINVAL;
+@@ -351,6 +361,8 @@ static int stx104_probe(struct device *dev, unsigned int id)
+ 
+ 	indio_dev->name = dev_name(dev);
+ 
++	mutex_init(&priv->lock);
++
+ 	/* configure device for software trigger operation */
+ 	iowrite8(0, &priv->reg->acr);
+ 
+diff --git a/drivers/iio/light/max44009.c b/drivers/iio/light/max44009.c
+index 3dadace09fe2e..176dcad6e8e8a 100644
+--- a/drivers/iio/light/max44009.c
++++ b/drivers/iio/light/max44009.c
+@@ -527,6 +527,12 @@ static int max44009_probe(struct i2c_client *client)
+ 	return devm_iio_device_register(&client->dev, indio_dev);
+ }
+ 
++static const struct of_device_id max44009_of_match[] = {
++	{ .compatible = "maxim,max44009" },
++	{ }
++};
++MODULE_DEVICE_TABLE(of, max44009_of_match);
++
+ static const struct i2c_device_id max44009_id[] = {
+ 	{ "max44009", 0 },
+ 	{ }
+@@ -536,18 +542,13 @@ MODULE_DEVICE_TABLE(i2c, max44009_id);
+ static struct i2c_driver max44009_driver = {
+ 	.driver = {
+ 		.name = MAX44009_DRV_NAME,
++		.of_match_table = max44009_of_match,
+ 	},
+ 	.probe_new = max44009_probe,
+ 	.id_table = max44009_id,
+ };
+ module_i2c_driver(max44009_driver);
+ 
+-static const struct of_device_id max44009_of_match[] = {
+-	{ .compatible = "maxim,max44009" },
+-	{ }
+-};
+-MODULE_DEVICE_TABLE(of, max44009_of_match);
+-
+ MODULE_AUTHOR("Robert Eshleman <bobbyeshleman@gmail.com>");
+ MODULE_LICENSE("GPL v2");
+ MODULE_DESCRIPTION("MAX44009 ambient light sensor driver");
+diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
+index 603c0aecc3614..ff58058aeadca 100644
+--- a/drivers/infiniband/core/cm.c
++++ b/drivers/infiniband/core/cm.c
+@@ -2912,6 +2912,8 @@ static int cm_send_rej_locked(struct cm_id_private *cm_id_priv,
+ 	    (ari && ari_length > IB_CM_REJ_ARI_LENGTH))
+ 		return -EINVAL;
+ 
++	trace_icm_send_rej(&cm_id_priv->id, reason);
++
+ 	switch (state) {
+ 	case IB_CM_REQ_SENT:
+ 	case IB_CM_MRA_REQ_RCVD:
+@@ -2942,7 +2944,6 @@ static int cm_send_rej_locked(struct cm_id_private *cm_id_priv,
+ 		return -EINVAL;
+ 	}
+ 
+-	trace_icm_send_rej(&cm_id_priv->id, reason);
+ 	ret = ib_post_send_mad(msg, NULL);
+ 	if (ret) {
+ 		cm_free_msg(msg);
+diff --git a/drivers/infiniband/hw/erdma/erdma_hw.h b/drivers/infiniband/hw/erdma/erdma_hw.h
+index 37ad1bb1917c4..76ce2856be28a 100644
+--- a/drivers/infiniband/hw/erdma/erdma_hw.h
++++ b/drivers/infiniband/hw/erdma/erdma_hw.h
+@@ -112,6 +112,10 @@
+ 
+ #define ERDMA_PAGE_SIZE_SUPPORT 0x7FFFF000
+ 
++/* Hardware page size definition */
++#define ERDMA_HW_PAGE_SHIFT 12
++#define ERDMA_HW_PAGE_SIZE 4096
++
+ /* WQE related. */
+ #define EQE_SIZE 16
+ #define EQE_SHIFT 4
+diff --git a/drivers/infiniband/hw/erdma/erdma_verbs.c b/drivers/infiniband/hw/erdma/erdma_verbs.c
+index 9c30d78730aa1..83e1b0d559771 100644
+--- a/drivers/infiniband/hw/erdma/erdma_verbs.c
++++ b/drivers/infiniband/hw/erdma/erdma_verbs.c
+@@ -38,7 +38,7 @@ static int create_qp_cmd(struct erdma_dev *dev, struct erdma_qp *qp)
+ 		   FIELD_PREP(ERDMA_CMD_CREATE_QP_PD_MASK, pd->pdn);
+ 
+ 	if (rdma_is_kernel_res(&qp->ibqp.res)) {
+-		u32 pgsz_range = ilog2(SZ_1M) - PAGE_SHIFT;
++		u32 pgsz_range = ilog2(SZ_1M) - ERDMA_HW_PAGE_SHIFT;
+ 
+ 		req.sq_cqn_mtt_cfg =
+ 			FIELD_PREP(ERDMA_CMD_CREATE_QP_PAGE_SIZE_MASK,
+@@ -66,13 +66,13 @@ static int create_qp_cmd(struct erdma_dev *dev, struct erdma_qp *qp)
+ 		user_qp = &qp->user_qp;
+ 		req.sq_cqn_mtt_cfg = FIELD_PREP(
+ 			ERDMA_CMD_CREATE_QP_PAGE_SIZE_MASK,
+-			ilog2(user_qp->sq_mtt.page_size) - PAGE_SHIFT);
++			ilog2(user_qp->sq_mtt.page_size) - ERDMA_HW_PAGE_SHIFT);
+ 		req.sq_cqn_mtt_cfg |=
+ 			FIELD_PREP(ERDMA_CMD_CREATE_QP_CQN_MASK, qp->scq->cqn);
+ 
+ 		req.rq_cqn_mtt_cfg = FIELD_PREP(
+ 			ERDMA_CMD_CREATE_QP_PAGE_SIZE_MASK,
+-			ilog2(user_qp->rq_mtt.page_size) - PAGE_SHIFT);
++			ilog2(user_qp->rq_mtt.page_size) - ERDMA_HW_PAGE_SHIFT);
+ 		req.rq_cqn_mtt_cfg |=
+ 			FIELD_PREP(ERDMA_CMD_CREATE_QP_CQN_MASK, qp->rcq->cqn);
+ 
+@@ -162,7 +162,7 @@ static int create_cq_cmd(struct erdma_dev *dev, struct erdma_cq *cq)
+ 	if (rdma_is_kernel_res(&cq->ibcq.res)) {
+ 		page_size = SZ_32M;
+ 		req.cfg0 |= FIELD_PREP(ERDMA_CMD_CREATE_CQ_PAGESIZE_MASK,
+-				       ilog2(page_size) - PAGE_SHIFT);
++				       ilog2(page_size) - ERDMA_HW_PAGE_SHIFT);
+ 		req.qbuf_addr_l = lower_32_bits(cq->kern_cq.qbuf_dma_addr);
+ 		req.qbuf_addr_h = upper_32_bits(cq->kern_cq.qbuf_dma_addr);
+ 
+@@ -175,8 +175,9 @@ static int create_cq_cmd(struct erdma_dev *dev, struct erdma_cq *cq)
+ 			cq->kern_cq.qbuf_dma_addr + (cq->depth << CQE_SHIFT);
+ 	} else {
+ 		mtt = &cq->user_cq.qbuf_mtt;
+-		req.cfg0 |= FIELD_PREP(ERDMA_CMD_CREATE_CQ_PAGESIZE_MASK,
+-				       ilog2(mtt->page_size) - PAGE_SHIFT);
++		req.cfg0 |=
++			FIELD_PREP(ERDMA_CMD_CREATE_CQ_PAGESIZE_MASK,
++				   ilog2(mtt->page_size) - ERDMA_HW_PAGE_SHIFT);
+ 		if (mtt->mtt_nents == 1) {
+ 			req.qbuf_addr_l = lower_32_bits(*(u64 *)mtt->mtt_buf);
+ 			req.qbuf_addr_h = upper_32_bits(*(u64 *)mtt->mtt_buf);
+@@ -636,7 +637,7 @@ static int init_user_qp(struct erdma_qp *qp, struct erdma_ucontext *uctx,
+ 	u32 rq_offset;
+ 	int ret;
+ 
+-	if (len < (PAGE_ALIGN(qp->attrs.sq_size * SQEBB_SIZE) +
++	if (len < (ALIGN(qp->attrs.sq_size * SQEBB_SIZE, ERDMA_HW_PAGE_SIZE) +
+ 		   qp->attrs.rq_size * RQE_SIZE))
+ 		return -EINVAL;
+ 
+@@ -646,7 +647,7 @@ static int init_user_qp(struct erdma_qp *qp, struct erdma_ucontext *uctx,
+ 	if (ret)
+ 		return ret;
+ 
+-	rq_offset = PAGE_ALIGN(qp->attrs.sq_size << SQEBB_SHIFT);
++	rq_offset = ALIGN(qp->attrs.sq_size << SQEBB_SHIFT, ERDMA_HW_PAGE_SIZE);
+ 	qp->user_qp.rq_offset = rq_offset;
+ 
+ 	ret = get_mtt_entries(qp->dev, &qp->user_qp.rq_mtt, va + rq_offset,
+diff --git a/drivers/infiniband/hw/hfi1/ipoib_tx.c b/drivers/infiniband/hw/hfi1/ipoib_tx.c
+index 5d9a7b09ca37e..8973a081d641e 100644
+--- a/drivers/infiniband/hw/hfi1/ipoib_tx.c
++++ b/drivers/infiniband/hw/hfi1/ipoib_tx.c
+@@ -215,6 +215,7 @@ static int hfi1_ipoib_build_ulp_payload(struct ipoib_txreq *tx,
+ 		const skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
+ 
+ 		ret = sdma_txadd_page(dd,
++				      NULL,
+ 				      txreq,
+ 				      skb_frag_page(frag),
+ 				      frag->bv_offset,
+@@ -737,10 +738,13 @@ int hfi1_ipoib_txreq_init(struct hfi1_ipoib_dev_priv *priv)
+ 		txq->tx_ring.shift = ilog2(tx_item_size);
+ 		txq->tx_ring.avail = hfi1_ipoib_ring_hwat(txq);
+ 		tx_ring = &txq->tx_ring;
+-		for (j = 0; j < tx_ring_size; j++)
++		for (j = 0; j < tx_ring_size; j++) {
+ 			hfi1_txreq_from_idx(tx_ring, j)->sdma_hdr =
+ 				kzalloc_node(sizeof(*tx->sdma_hdr),
+ 					     GFP_KERNEL, priv->dd->node);
++			if (!hfi1_txreq_from_idx(tx_ring, j)->sdma_hdr)
++				goto free_txqs;
++		}
+ 
+ 		netif_napi_add_tx(dev, &txq->napi, hfi1_ipoib_poll_tx_ring);
+ 	}
+diff --git a/drivers/infiniband/hw/hfi1/mmu_rb.c b/drivers/infiniband/hw/hfi1/mmu_rb.c
+index 7333646021bb8..71b9ac0188875 100644
+--- a/drivers/infiniband/hw/hfi1/mmu_rb.c
++++ b/drivers/infiniband/hw/hfi1/mmu_rb.c
+@@ -126,11 +126,11 @@ int hfi1_mmu_rb_insert(struct mmu_rb_handler *handler,
+ 	spin_lock_irqsave(&handler->lock, flags);
+ 	node = __mmu_rb_search(handler, mnode->addr, mnode->len);
+ 	if (node) {
+-		ret = -EINVAL;
++		ret = -EEXIST;
+ 		goto unlock;
+ 	}
+ 	__mmu_int_rb_insert(mnode, &handler->root);
+-	list_add(&mnode->list, &handler->lru_list);
++	list_add_tail(&mnode->list, &handler->lru_list);
+ 
+ 	ret = handler->ops->insert(handler->ops_arg, mnode);
+ 	if (ret) {
+@@ -143,6 +143,19 @@ unlock:
+ 	return ret;
+ }
+ 
++/* Caller must hold handler lock */
++struct mmu_rb_node *hfi1_mmu_rb_get_first(struct mmu_rb_handler *handler,
++					  unsigned long addr, unsigned long len)
++{
++	struct mmu_rb_node *node;
++
++	trace_hfi1_mmu_rb_search(addr, len);
++	node = __mmu_int_rb_iter_first(&handler->root, addr, (addr + len) - 1);
++	if (node)
++		list_move_tail(&node->list, &handler->lru_list);
++	return node;
++}
++
+ /* Caller must hold handler lock */
+ static struct mmu_rb_node *__mmu_rb_search(struct mmu_rb_handler *handler,
+ 					   unsigned long addr,
+@@ -167,32 +180,6 @@ static struct mmu_rb_node *__mmu_rb_search(struct mmu_rb_handler *handler,
+ 	return node;
+ }
+ 
+-bool hfi1_mmu_rb_remove_unless_exact(struct mmu_rb_handler *handler,
+-				     unsigned long addr, unsigned long len,
+-				     struct mmu_rb_node **rb_node)
+-{
+-	struct mmu_rb_node *node;
+-	unsigned long flags;
+-	bool ret = false;
+-
+-	if (current->mm != handler->mn.mm)
+-		return ret;
+-
+-	spin_lock_irqsave(&handler->lock, flags);
+-	node = __mmu_rb_search(handler, addr, len);
+-	if (node) {
+-		if (node->addr == addr && node->len == len)
+-			goto unlock;
+-		__mmu_int_rb_remove(node, &handler->root);
+-		list_del(&node->list); /* remove from LRU list */
+-		ret = true;
+-	}
+-unlock:
+-	spin_unlock_irqrestore(&handler->lock, flags);
+-	*rb_node = node;
+-	return ret;
+-}
+-
+ void hfi1_mmu_rb_evict(struct mmu_rb_handler *handler, void *evict_arg)
+ {
+ 	struct mmu_rb_node *rbnode, *ptr;
+@@ -206,8 +193,7 @@ void hfi1_mmu_rb_evict(struct mmu_rb_handler *handler, void *evict_arg)
+ 	INIT_LIST_HEAD(&del_list);
+ 
+ 	spin_lock_irqsave(&handler->lock, flags);
+-	list_for_each_entry_safe_reverse(rbnode, ptr, &handler->lru_list,
+-					 list) {
++	list_for_each_entry_safe(rbnode, ptr, &handler->lru_list, list) {
+ 		if (handler->ops->evict(handler->ops_arg, rbnode, evict_arg,
+ 					&stop)) {
+ 			__mmu_int_rb_remove(rbnode, &handler->root);
+@@ -219,36 +205,11 @@ void hfi1_mmu_rb_evict(struct mmu_rb_handler *handler, void *evict_arg)
+ 	}
+ 	spin_unlock_irqrestore(&handler->lock, flags);
+ 
+-	while (!list_empty(&del_list)) {
+-		rbnode = list_first_entry(&del_list, struct mmu_rb_node, list);
+-		list_del(&rbnode->list);
++	list_for_each_entry_safe(rbnode, ptr, &del_list, list) {
+ 		handler->ops->remove(handler->ops_arg, rbnode);
+ 	}
+ }
+ 
+-/*
+- * It is up to the caller to ensure that this function does not race with the
+- * mmu invalidate notifier which may be calling the users remove callback on
+- * 'node'.
+- */
+-void hfi1_mmu_rb_remove(struct mmu_rb_handler *handler,
+-			struct mmu_rb_node *node)
+-{
+-	unsigned long flags;
+-
+-	if (current->mm != handler->mn.mm)
+-		return;
+-
+-	/* Validity of handler and node pointers has been checked by caller. */
+-	trace_hfi1_mmu_rb_remove(node->addr, node->len);
+-	spin_lock_irqsave(&handler->lock, flags);
+-	__mmu_int_rb_remove(node, &handler->root);
+-	list_del(&node->list); /* remove from LRU list */
+-	spin_unlock_irqrestore(&handler->lock, flags);
+-
+-	handler->ops->remove(handler->ops_arg, node);
+-}
+-
+ static int mmu_notifier_range_start(struct mmu_notifier *mn,
+ 		const struct mmu_notifier_range *range)
+ {
+diff --git a/drivers/infiniband/hw/hfi1/mmu_rb.h b/drivers/infiniband/hw/hfi1/mmu_rb.h
+index 7417be2b9dc8a..ed75acdb7b839 100644
+--- a/drivers/infiniband/hw/hfi1/mmu_rb.h
++++ b/drivers/infiniband/hw/hfi1/mmu_rb.h
+@@ -52,10 +52,8 @@ void hfi1_mmu_rb_unregister(struct mmu_rb_handler *handler);
+ int hfi1_mmu_rb_insert(struct mmu_rb_handler *handler,
+ 		       struct mmu_rb_node *mnode);
+ void hfi1_mmu_rb_evict(struct mmu_rb_handler *handler, void *evict_arg);
+-void hfi1_mmu_rb_remove(struct mmu_rb_handler *handler,
+-			struct mmu_rb_node *mnode);
+-bool hfi1_mmu_rb_remove_unless_exact(struct mmu_rb_handler *handler,
+-				     unsigned long addr, unsigned long len,
+-				     struct mmu_rb_node **rb_node);
++struct mmu_rb_node *hfi1_mmu_rb_get_first(struct mmu_rb_handler *handler,
++					  unsigned long addr,
++					  unsigned long len);
+ 
+ #endif /* _HFI1_MMU_RB_H */
+diff --git a/drivers/infiniband/hw/hfi1/sdma.c b/drivers/infiniband/hw/hfi1/sdma.c
+index 8ed20392e9f0d..bb2552dd29c1e 100644
+--- a/drivers/infiniband/hw/hfi1/sdma.c
++++ b/drivers/infiniband/hw/hfi1/sdma.c
+@@ -1593,22 +1593,7 @@ static inline void sdma_unmap_desc(
+ 	struct hfi1_devdata *dd,
+ 	struct sdma_desc *descp)
+ {
+-	switch (sdma_mapping_type(descp)) {
+-	case SDMA_MAP_SINGLE:
+-		dma_unmap_single(
+-			&dd->pcidev->dev,
+-			sdma_mapping_addr(descp),
+-			sdma_mapping_len(descp),
+-			DMA_TO_DEVICE);
+-		break;
+-	case SDMA_MAP_PAGE:
+-		dma_unmap_page(
+-			&dd->pcidev->dev,
+-			sdma_mapping_addr(descp),
+-			sdma_mapping_len(descp),
+-			DMA_TO_DEVICE);
+-		break;
+-	}
++	system_descriptor_complete(dd, descp);
+ }
+ 
+ /*
+@@ -3128,7 +3113,7 @@ int ext_coal_sdma_tx_descs(struct hfi1_devdata *dd, struct sdma_txreq *tx,
+ 
+ 		/* Add descriptor for coalesce buffer */
+ 		tx->desc_limit = MAX_DESC;
+-		return _sdma_txadd_daddr(dd, SDMA_MAP_SINGLE, tx,
++		return _sdma_txadd_daddr(dd, SDMA_MAP_SINGLE, NULL, tx,
+ 					 addr, tx->tlen);
+ 	}
+ 
+@@ -3167,10 +3152,12 @@ int _pad_sdma_tx_descs(struct hfi1_devdata *dd, struct sdma_txreq *tx)
+ 			return rval;
+ 		}
+ 	}
++
+ 	/* finish the one just added */
+ 	make_tx_sdma_desc(
+ 		tx,
+ 		SDMA_MAP_NONE,
++		NULL,
+ 		dd->sdma_pad_phys,
+ 		sizeof(u32) - (tx->packet_len & (sizeof(u32) - 1)));
+ 	tx->num_desc++;
+diff --git a/drivers/infiniband/hw/hfi1/sdma.h b/drivers/infiniband/hw/hfi1/sdma.h
+index b023fc461bd51..95aaec14c6c28 100644
+--- a/drivers/infiniband/hw/hfi1/sdma.h
++++ b/drivers/infiniband/hw/hfi1/sdma.h
+@@ -594,6 +594,7 @@ static inline dma_addr_t sdma_mapping_addr(struct sdma_desc *d)
+ static inline void make_tx_sdma_desc(
+ 	struct sdma_txreq *tx,
+ 	int type,
++	void *pinning_ctx,
+ 	dma_addr_t addr,
+ 	size_t len)
+ {
+@@ -612,6 +613,7 @@ static inline void make_tx_sdma_desc(
+ 				<< SDMA_DESC0_PHY_ADDR_SHIFT) |
+ 			(((u64)len & SDMA_DESC0_BYTE_COUNT_MASK)
+ 				<< SDMA_DESC0_BYTE_COUNT_SHIFT);
++	desc->pinning_ctx = pinning_ctx;
+ }
+ 
+ /* helper to extend txreq */
+@@ -643,6 +645,7 @@ static inline void _sdma_close_tx(struct hfi1_devdata *dd,
+ static inline int _sdma_txadd_daddr(
+ 	struct hfi1_devdata *dd,
+ 	int type,
++	void *pinning_ctx,
+ 	struct sdma_txreq *tx,
+ 	dma_addr_t addr,
+ 	u16 len)
+@@ -652,6 +655,7 @@ static inline int _sdma_txadd_daddr(
+ 	make_tx_sdma_desc(
+ 		tx,
+ 		type,
++		pinning_ctx,
+ 		addr, len);
+ 	WARN_ON(len > tx->tlen);
+ 	tx->num_desc++;
+@@ -672,6 +676,7 @@ static inline int _sdma_txadd_daddr(
+ /**
+  * sdma_txadd_page() - add a page to the sdma_txreq
+  * @dd: the device to use for mapping
++ * @pinning_ctx: context to be released at descriptor retirement
+  * @tx: tx request to which the page is added
+  * @page: page to map
+  * @offset: offset within the page
+@@ -687,6 +692,7 @@ static inline int _sdma_txadd_daddr(
+  */
+ static inline int sdma_txadd_page(
+ 	struct hfi1_devdata *dd,
++	void *pinning_ctx,
+ 	struct sdma_txreq *tx,
+ 	struct page *page,
+ 	unsigned long offset,
+@@ -714,8 +720,7 @@ static inline int sdma_txadd_page(
+ 		return -ENOSPC;
+ 	}
+ 
+-	return _sdma_txadd_daddr(
+-			dd, SDMA_MAP_PAGE, tx, addr, len);
++	return _sdma_txadd_daddr(dd, SDMA_MAP_PAGE, pinning_ctx, tx, addr, len);
+ }
+ 
+ /**
+@@ -749,7 +754,8 @@ static inline int sdma_txadd_daddr(
+ 			return rval;
+ 	}
+ 
+-	return _sdma_txadd_daddr(dd, SDMA_MAP_NONE, tx, addr, len);
++	return _sdma_txadd_daddr(dd, SDMA_MAP_NONE, NULL, tx,
++				 addr, len);
+ }
+ 
+ /**
+@@ -795,8 +801,7 @@ static inline int sdma_txadd_kvaddr(
+ 		return -ENOSPC;
+ 	}
+ 
+-	return _sdma_txadd_daddr(
+-			dd, SDMA_MAP_SINGLE, tx, addr, len);
++	return _sdma_txadd_daddr(dd, SDMA_MAP_SINGLE, NULL, tx, addr, len);
+ }
+ 
+ struct iowait_work;
+@@ -1030,4 +1035,5 @@ extern uint mod_num_sdma;
+ 
+ void sdma_update_lmc(struct hfi1_devdata *dd, u64 mask, u32 lid);
+ 
++void system_descriptor_complete(struct hfi1_devdata *dd, struct sdma_desc *descp);
+ #endif
+diff --git a/drivers/infiniband/hw/hfi1/sdma_txreq.h b/drivers/infiniband/hw/hfi1/sdma_txreq.h
+index e262fb5c5ec61..fad946cb5e0d8 100644
+--- a/drivers/infiniband/hw/hfi1/sdma_txreq.h
++++ b/drivers/infiniband/hw/hfi1/sdma_txreq.h
+@@ -19,6 +19,7 @@
+ struct sdma_desc {
+ 	/* private:  don't use directly */
+ 	u64 qw[2];
++	void *pinning_ctx;
+ };
+ 
+ /**
+diff --git a/drivers/infiniband/hw/hfi1/trace_mmu.h b/drivers/infiniband/hw/hfi1/trace_mmu.h
+index 187e9244fe5ed..57900ebb7702e 100644
+--- a/drivers/infiniband/hw/hfi1/trace_mmu.h
++++ b/drivers/infiniband/hw/hfi1/trace_mmu.h
+@@ -37,10 +37,6 @@ DEFINE_EVENT(hfi1_mmu_rb_template, hfi1_mmu_rb_search,
+ 	     TP_PROTO(unsigned long addr, unsigned long len),
+ 	     TP_ARGS(addr, len));
+ 
+-DEFINE_EVENT(hfi1_mmu_rb_template, hfi1_mmu_rb_remove,
+-	     TP_PROTO(unsigned long addr, unsigned long len),
+-	     TP_ARGS(addr, len));
+-
+ DEFINE_EVENT(hfi1_mmu_rb_template, hfi1_mmu_mem_invalidate,
+ 	     TP_PROTO(unsigned long addr, unsigned long len),
+ 	     TP_ARGS(addr, len));
+diff --git a/drivers/infiniband/hw/hfi1/user_sdma.c b/drivers/infiniband/hw/hfi1/user_sdma.c
+index a71c5a36cebab..ae58b48afe074 100644
+--- a/drivers/infiniband/hw/hfi1/user_sdma.c
++++ b/drivers/infiniband/hw/hfi1/user_sdma.c
+@@ -24,7 +24,6 @@
+ 
+ #include "hfi.h"
+ #include "sdma.h"
+-#include "mmu_rb.h"
+ #include "user_sdma.h"
+ #include "verbs.h"  /* for the headers */
+ #include "common.h" /* for struct hfi1_tid_info */
+@@ -39,11 +38,7 @@ static unsigned initial_pkt_count = 8;
+ static int user_sdma_send_pkts(struct user_sdma_request *req, u16 maxpkts);
+ static void user_sdma_txreq_cb(struct sdma_txreq *txreq, int status);
+ static inline void pq_update(struct hfi1_user_sdma_pkt_q *pq);
+-static void user_sdma_free_request(struct user_sdma_request *req, bool unpin);
+-static int pin_vector_pages(struct user_sdma_request *req,
+-			    struct user_sdma_iovec *iovec);
+-static void unpin_vector_pages(struct mm_struct *mm, struct page **pages,
+-			       unsigned start, unsigned npages);
++static void user_sdma_free_request(struct user_sdma_request *req);
+ static int check_header_template(struct user_sdma_request *req,
+ 				 struct hfi1_pkt_header *hdr, u32 lrhlen,
+ 				 u32 datalen);
+@@ -81,6 +76,11 @@ static struct mmu_rb_ops sdma_rb_ops = {
+ 	.invalidate = sdma_rb_invalidate
+ };
+ 
++static int add_system_pages_to_sdma_packet(struct user_sdma_request *req,
++					   struct user_sdma_txreq *tx,
++					   struct user_sdma_iovec *iovec,
++					   u32 *pkt_remaining);
++
+ static int defer_packet_queue(
+ 	struct sdma_engine *sde,
+ 	struct iowait_work *wait,
+@@ -410,6 +410,7 @@ int hfi1_user_sdma_process_request(struct hfi1_filedata *fd,
+ 		ret = -EINVAL;
+ 		goto free_req;
+ 	}
++
+ 	/* Copy the header from the user buffer */
+ 	ret = copy_from_user(&req->hdr, iovec[idx].iov_base + sizeof(info),
+ 			     sizeof(req->hdr));
+@@ -484,9 +485,8 @@ int hfi1_user_sdma_process_request(struct hfi1_filedata *fd,
+ 		memcpy(&req->iovs[i].iov,
+ 		       iovec + idx++,
+ 		       sizeof(req->iovs[i].iov));
+-		ret = pin_vector_pages(req, &req->iovs[i]);
+-		if (ret) {
+-			req->data_iovs = i;
++		if (req->iovs[i].iov.iov_len == 0) {
++			ret = -EINVAL;
+ 			goto free_req;
+ 		}
+ 		req->data_len += req->iovs[i].iov.iov_len;
+@@ -584,7 +584,7 @@ free_req:
+ 		if (req->seqsubmitted)
+ 			wait_event(pq->busy.wait_dma,
+ 				   (req->seqcomp == req->seqsubmitted - 1));
+-		user_sdma_free_request(req, true);
++		user_sdma_free_request(req);
+ 		pq_update(pq);
+ 		set_comp_state(pq, cq, info.comp_idx, ERROR, ret);
+ 	}
+@@ -696,48 +696,6 @@ static int user_sdma_txadd_ahg(struct user_sdma_request *req,
+ 	return ret;
+ }
+ 
+-static int user_sdma_txadd(struct user_sdma_request *req,
+-			   struct user_sdma_txreq *tx,
+-			   struct user_sdma_iovec *iovec, u32 datalen,
+-			   u32 *queued_ptr, u32 *data_sent_ptr,
+-			   u64 *iov_offset_ptr)
+-{
+-	int ret;
+-	unsigned int pageidx, len;
+-	unsigned long base, offset;
+-	u64 iov_offset = *iov_offset_ptr;
+-	u32 queued = *queued_ptr, data_sent = *data_sent_ptr;
+-	struct hfi1_user_sdma_pkt_q *pq = req->pq;
+-
+-	base = (unsigned long)iovec->iov.iov_base;
+-	offset = offset_in_page(base + iovec->offset + iov_offset);
+-	pageidx = (((iovec->offset + iov_offset + base) - (base & PAGE_MASK)) >>
+-		   PAGE_SHIFT);
+-	len = offset + req->info.fragsize > PAGE_SIZE ?
+-		PAGE_SIZE - offset : req->info.fragsize;
+-	len = min((datalen - queued), len);
+-	ret = sdma_txadd_page(pq->dd, &tx->txreq, iovec->pages[pageidx],
+-			      offset, len);
+-	if (ret) {
+-		SDMA_DBG(req, "SDMA txreq add page failed %d\n", ret);
+-		return ret;
+-	}
+-	iov_offset += len;
+-	queued += len;
+-	data_sent += len;
+-	if (unlikely(queued < datalen && pageidx == iovec->npages &&
+-		     req->iov_idx < req->data_iovs - 1)) {
+-		iovec->offset += iov_offset;
+-		iovec = &req->iovs[++req->iov_idx];
+-		iov_offset = 0;
+-	}
+-
+-	*queued_ptr = queued;
+-	*data_sent_ptr = data_sent;
+-	*iov_offset_ptr = iov_offset;
+-	return ret;
+-}
+-
+ static int user_sdma_send_pkts(struct user_sdma_request *req, u16 maxpkts)
+ {
+ 	int ret = 0;
+@@ -769,8 +727,7 @@ static int user_sdma_send_pkts(struct user_sdma_request *req, u16 maxpkts)
+ 		maxpkts = req->info.npkts - req->seqnum;
+ 
+ 	while (npkts < maxpkts) {
+-		u32 datalen = 0, queued = 0, data_sent = 0;
+-		u64 iov_offset = 0;
++		u32 datalen = 0;
+ 
+ 		/*
+ 		 * Check whether any of the completions have come back
+@@ -863,27 +820,17 @@ static int user_sdma_send_pkts(struct user_sdma_request *req, u16 maxpkts)
+ 				goto free_txreq;
+ 		}
+ 
+-		/*
+-		 * If the request contains any data vectors, add up to
+-		 * fragsize bytes to the descriptor.
+-		 */
+-		while (queued < datalen &&
+-		       (req->sent + data_sent) < req->data_len) {
+-			ret = user_sdma_txadd(req, tx, iovec, datalen,
+-					      &queued, &data_sent, &iov_offset);
+-			if (ret)
+-				goto free_txreq;
+-		}
+-		/*
+-		 * The txreq was submitted successfully so we can update
+-		 * the counters.
+-		 */
+ 		req->koffset += datalen;
+ 		if (req_opcode(req->info.ctrl) == EXPECTED)
+ 			req->tidoffset += datalen;
+-		req->sent += data_sent;
+-		if (req->data_len)
+-			iovec->offset += iov_offset;
++		req->sent += datalen;
++		while (datalen) {
++			ret = add_system_pages_to_sdma_packet(req, tx, iovec,
++							      &datalen);
++			if (ret)
++				goto free_txreq;
++			iovec = &req->iovs[req->iov_idx];
++		}
+ 		list_add_tail(&tx->txreq.list, &req->txps);
+ 		/*
+ 		 * It is important to increment this here as it is used to
+@@ -920,133 +867,14 @@ free_tx:
+ static u32 sdma_cache_evict(struct hfi1_user_sdma_pkt_q *pq, u32 npages)
+ {
+ 	struct evict_data evict_data;
++	struct mmu_rb_handler *handler = pq->handler;
+ 
+ 	evict_data.cleared = 0;
+ 	evict_data.target = npages;
+-	hfi1_mmu_rb_evict(pq->handler, &evict_data);
++	hfi1_mmu_rb_evict(handler, &evict_data);
+ 	return evict_data.cleared;
+ }
+ 
+-static int pin_sdma_pages(struct user_sdma_request *req,
+-			  struct user_sdma_iovec *iovec,
+-			  struct sdma_mmu_node *node,
+-			  int npages)
+-{
+-	int pinned, cleared;
+-	struct page **pages;
+-	struct hfi1_user_sdma_pkt_q *pq = req->pq;
+-
+-	pages = kcalloc(npages, sizeof(*pages), GFP_KERNEL);
+-	if (!pages)
+-		return -ENOMEM;
+-	memcpy(pages, node->pages, node->npages * sizeof(*pages));
+-
+-	npages -= node->npages;
+-retry:
+-	if (!hfi1_can_pin_pages(pq->dd, current->mm,
+-				atomic_read(&pq->n_locked), npages)) {
+-		cleared = sdma_cache_evict(pq, npages);
+-		if (cleared >= npages)
+-			goto retry;
+-	}
+-	pinned = hfi1_acquire_user_pages(current->mm,
+-					 ((unsigned long)iovec->iov.iov_base +
+-					 (node->npages * PAGE_SIZE)), npages, 0,
+-					 pages + node->npages);
+-	if (pinned < 0) {
+-		kfree(pages);
+-		return pinned;
+-	}
+-	if (pinned != npages) {
+-		unpin_vector_pages(current->mm, pages, node->npages, pinned);
+-		return -EFAULT;
+-	}
+-	kfree(node->pages);
+-	node->rb.len = iovec->iov.iov_len;
+-	node->pages = pages;
+-	atomic_add(pinned, &pq->n_locked);
+-	return pinned;
+-}
+-
+-static void unpin_sdma_pages(struct sdma_mmu_node *node)
+-{
+-	if (node->npages) {
+-		unpin_vector_pages(mm_from_sdma_node(node), node->pages, 0,
+-				   node->npages);
+-		atomic_sub(node->npages, &node->pq->n_locked);
+-	}
+-}
+-
+-static int pin_vector_pages(struct user_sdma_request *req,
+-			    struct user_sdma_iovec *iovec)
+-{
+-	int ret = 0, pinned, npages;
+-	struct hfi1_user_sdma_pkt_q *pq = req->pq;
+-	struct sdma_mmu_node *node = NULL;
+-	struct mmu_rb_node *rb_node;
+-	struct iovec *iov;
+-	bool extracted;
+-
+-	extracted =
+-		hfi1_mmu_rb_remove_unless_exact(pq->handler,
+-						(unsigned long)
+-						iovec->iov.iov_base,
+-						iovec->iov.iov_len, &rb_node);
+-	if (rb_node) {
+-		node = container_of(rb_node, struct sdma_mmu_node, rb);
+-		if (!extracted) {
+-			atomic_inc(&node->refcount);
+-			iovec->pages = node->pages;
+-			iovec->npages = node->npages;
+-			iovec->node = node;
+-			return 0;
+-		}
+-	}
+-
+-	if (!node) {
+-		node = kzalloc(sizeof(*node), GFP_KERNEL);
+-		if (!node)
+-			return -ENOMEM;
+-
+-		node->rb.addr = (unsigned long)iovec->iov.iov_base;
+-		node->pq = pq;
+-		atomic_set(&node->refcount, 0);
+-	}
+-
+-	iov = &iovec->iov;
+-	npages = num_user_pages((unsigned long)iov->iov_base, iov->iov_len);
+-	if (node->npages < npages) {
+-		pinned = pin_sdma_pages(req, iovec, node, npages);
+-		if (pinned < 0) {
+-			ret = pinned;
+-			goto bail;
+-		}
+-		node->npages += pinned;
+-		npages = node->npages;
+-	}
+-	iovec->pages = node->pages;
+-	iovec->npages = npages;
+-	iovec->node = node;
+-
+-	ret = hfi1_mmu_rb_insert(req->pq->handler, &node->rb);
+-	if (ret) {
+-		iovec->node = NULL;
+-		goto bail;
+-	}
+-	return 0;
+-bail:
+-	unpin_sdma_pages(node);
+-	kfree(node);
+-	return ret;
+-}
+-
+-static void unpin_vector_pages(struct mm_struct *mm, struct page **pages,
+-			       unsigned start, unsigned npages)
+-{
+-	hfi1_release_user_pages(mm, pages + start, npages, false);
+-	kfree(pages);
+-}
+-
+ static int check_header_template(struct user_sdma_request *req,
+ 				 struct hfi1_pkt_header *hdr, u32 lrhlen,
+ 				 u32 datalen)
+@@ -1388,7 +1216,7 @@ static void user_sdma_txreq_cb(struct sdma_txreq *txreq, int status)
+ 	if (req->seqcomp != req->info.npkts - 1)
+ 		return;
+ 
+-	user_sdma_free_request(req, false);
++	user_sdma_free_request(req);
+ 	set_comp_state(pq, cq, req->info.comp_idx, state, status);
+ 	pq_update(pq);
+ }
+@@ -1399,10 +1227,8 @@ static inline void pq_update(struct hfi1_user_sdma_pkt_q *pq)
+ 		wake_up(&pq->wait);
+ }
+ 
+-static void user_sdma_free_request(struct user_sdma_request *req, bool unpin)
++static void user_sdma_free_request(struct user_sdma_request *req)
+ {
+-	int i;
+-
+ 	if (!list_empty(&req->txps)) {
+ 		struct sdma_txreq *t, *p;
+ 
+@@ -1415,21 +1241,6 @@ static void user_sdma_free_request(struct user_sdma_request *req, bool unpin)
+ 		}
+ 	}
+ 
+-	for (i = 0; i < req->data_iovs; i++) {
+-		struct sdma_mmu_node *node = req->iovs[i].node;
+-
+-		if (!node)
+-			continue;
+-
+-		req->iovs[i].node = NULL;
+-
+-		if (unpin)
+-			hfi1_mmu_rb_remove(req->pq->handler,
+-					   &node->rb);
+-		else
+-			atomic_dec(&node->refcount);
+-	}
+-
+ 	kfree(req->tids);
+ 	clear_bit(req->info.comp_idx, req->pq->req_in_use);
+ }
+@@ -1447,6 +1258,368 @@ static inline void set_comp_state(struct hfi1_user_sdma_pkt_q *pq,
+ 					idx, state, ret);
+ }
+ 
++static void unpin_vector_pages(struct mm_struct *mm, struct page **pages,
++			       unsigned int start, unsigned int npages)
++{
++	hfi1_release_user_pages(mm, pages + start, npages, false);
++	kfree(pages);
++}
++
++static void free_system_node(struct sdma_mmu_node *node)
++{
++	if (node->npages) {
++		unpin_vector_pages(mm_from_sdma_node(node), node->pages, 0,
++				   node->npages);
++		atomic_sub(node->npages, &node->pq->n_locked);
++	}
++	kfree(node);
++}
++
++static inline void acquire_node(struct sdma_mmu_node *node)
++{
++	atomic_inc(&node->refcount);
++	WARN_ON(atomic_read(&node->refcount) < 0);
++}
++
++static inline void release_node(struct mmu_rb_handler *handler,
++				struct sdma_mmu_node *node)
++{
++	atomic_dec(&node->refcount);
++	WARN_ON(atomic_read(&node->refcount) < 0);
++}
++
++static struct sdma_mmu_node *find_system_node(struct mmu_rb_handler *handler,
++					      unsigned long start,
++					      unsigned long end)
++{
++	struct mmu_rb_node *rb_node;
++	struct sdma_mmu_node *node;
++	unsigned long flags;
++
++	spin_lock_irqsave(&handler->lock, flags);
++	rb_node = hfi1_mmu_rb_get_first(handler, start, (end - start));
++	if (!rb_node) {
++		spin_unlock_irqrestore(&handler->lock, flags);
++		return NULL;
++	}
++	node = container_of(rb_node, struct sdma_mmu_node, rb);
++	acquire_node(node);
++	spin_unlock_irqrestore(&handler->lock, flags);
++
++	return node;
++}
++
++static int pin_system_pages(struct user_sdma_request *req,
++			    uintptr_t start_address, size_t length,
++			    struct sdma_mmu_node *node, int npages)
++{
++	struct hfi1_user_sdma_pkt_q *pq = req->pq;
++	int pinned, cleared;
++	struct page **pages;
++
++	pages = kcalloc(npages, sizeof(*pages), GFP_KERNEL);
++	if (!pages)
++		return -ENOMEM;
++
++retry:
++	if (!hfi1_can_pin_pages(pq->dd, current->mm, atomic_read(&pq->n_locked),
++				npages)) {
++		SDMA_DBG(req, "Evicting: nlocked %u npages %u",
++			 atomic_read(&pq->n_locked), npages);
++		cleared = sdma_cache_evict(pq, npages);
++		if (cleared >= npages)
++			goto retry;
++	}
++
++	SDMA_DBG(req, "Acquire user pages start_address %lx node->npages %u npages %u",
++		 start_address, node->npages, npages);
++	pinned = hfi1_acquire_user_pages(current->mm, start_address, npages, 0,
++					 pages);
++
++	if (pinned < 0) {
++		kfree(pages);
++		SDMA_DBG(req, "pinned %d", pinned);
++		return pinned;
++	}
++	if (pinned != npages) {
++		unpin_vector_pages(current->mm, pages, node->npages, pinned);
++		SDMA_DBG(req, "npages %u pinned %d", npages, pinned);
++		return -EFAULT;
++	}
++	node->rb.addr = start_address;
++	node->rb.len = length;
++	node->pages = pages;
++	node->npages = npages;
++	atomic_add(pinned, &pq->n_locked);
++	SDMA_DBG(req, "done. pinned %d", pinned);
++	return 0;
++}
++
++static int add_system_pinning(struct user_sdma_request *req,
++			      struct sdma_mmu_node **node_p,
++			      unsigned long start, unsigned long len)
++
++{
++	struct hfi1_user_sdma_pkt_q *pq = req->pq;
++	struct sdma_mmu_node *node;
++	int ret;
++
++	node = kzalloc(sizeof(*node), GFP_KERNEL);
++	if (!node)
++		return -ENOMEM;
++
++	node->pq = pq;
++	ret = pin_system_pages(req, start, len, node, PFN_DOWN(len));
++	if (ret == 0) {
++		ret = hfi1_mmu_rb_insert(pq->handler, &node->rb);
++		if (ret)
++			free_system_node(node);
++		else
++			*node_p = node;
++
++		return ret;
++	}
++
++	kfree(node);
++	return ret;
++}
++
++static int get_system_cache_entry(struct user_sdma_request *req,
++				  struct sdma_mmu_node **node_p,
++				  size_t req_start, size_t req_len)
++{
++	struct hfi1_user_sdma_pkt_q *pq = req->pq;
++	u64 start = ALIGN_DOWN(req_start, PAGE_SIZE);
++	u64 end = PFN_ALIGN(req_start + req_len);
++	struct mmu_rb_handler *handler = pq->handler;
++	int ret;
++
++	if ((end - start) == 0) {
++		SDMA_DBG(req,
++			 "Request for empty cache entry req_start %lx req_len %lx start %llx end %llx",
++			 req_start, req_len, start, end);
++		return -EINVAL;
++	}
++
++	SDMA_DBG(req, "req_start %lx req_len %lu", req_start, req_len);
++
++	while (1) {
++		struct sdma_mmu_node *node =
++			find_system_node(handler, start, end);
++		u64 prepend_len = 0;
++
++		SDMA_DBG(req, "node %p start %llx end %llu", node, start, end);
++		if (!node) {
++			ret = add_system_pinning(req, node_p, start,
++						 end - start);
++			if (ret == -EEXIST) {
++				/*
++				 * Another execution context has inserted a
++				 * conficting entry first.
++				 */
++				continue;
++			}
++			return ret;
++		}
++
++		if (node->rb.addr <= start) {
++			/*
++			 * This entry covers at least part of the region. If it doesn't extend
++			 * to the end, then this will be called again for the next segment.
++			 */
++			*node_p = node;
++			return 0;
++		}
++
++		SDMA_DBG(req, "prepend: node->rb.addr %lx, node->refcount %d",
++			 node->rb.addr, atomic_read(&node->refcount));
++		prepend_len = node->rb.addr - start;
++
++		/*
++		 * This node will not be returned, instead a new node
++		 * will be. So release the reference.
++		 */
++		release_node(handler, node);
++
++		/* Prepend a node to cover the beginning of the allocation */
++		ret = add_system_pinning(req, node_p, start, prepend_len);
++		if (ret == -EEXIST) {
++			/* Another execution context has inserted a conficting entry first. */
++			continue;
++		}
++		return ret;
++	}
++}
++
++static int add_mapping_to_sdma_packet(struct user_sdma_request *req,
++				      struct user_sdma_txreq *tx,
++				      struct sdma_mmu_node *cache_entry,
++				      size_t start,
++				      size_t from_this_cache_entry)
++{
++	struct hfi1_user_sdma_pkt_q *pq = req->pq;
++	unsigned int page_offset;
++	unsigned int from_this_page;
++	size_t page_index;
++	void *ctx;
++	int ret;
++
++	/*
++	 * Because the cache may be more fragmented than the memory that is being accessed,
++	 * it's not strictly necessary to have a descriptor per cache entry.
++	 */
++
++	while (from_this_cache_entry) {
++		page_index = PFN_DOWN(start - cache_entry->rb.addr);
++
++		if (page_index >= cache_entry->npages) {
++			SDMA_DBG(req,
++				 "Request for page_index %zu >= cache_entry->npages %u",
++				 page_index, cache_entry->npages);
++			return -EINVAL;
++		}
++
++		page_offset = start - ALIGN_DOWN(start, PAGE_SIZE);
++		from_this_page = PAGE_SIZE - page_offset;
++
++		if (from_this_page < from_this_cache_entry) {
++			ctx = NULL;
++		} else {
++			/*
++			 * In the case they are equal the next line has no practical effect,
++			 * but it's better to do a register to register copy than a conditional
++			 * branch.
++			 */
++			from_this_page = from_this_cache_entry;
++			ctx = cache_entry;
++		}
++
++		ret = sdma_txadd_page(pq->dd, ctx, &tx->txreq,
++				      cache_entry->pages[page_index],
++				      page_offset, from_this_page);
++		if (ret) {
++			/*
++			 * When there's a failure, the entire request is freed by
++			 * user_sdma_send_pkts().
++			 */
++			SDMA_DBG(req,
++				 "sdma_txadd_page failed %d page_index %lu page_offset %u from_this_page %u",
++				 ret, page_index, page_offset, from_this_page);
++			return ret;
++		}
++		start += from_this_page;
++		from_this_cache_entry -= from_this_page;
++	}
++	return 0;
++}
++
++static int add_system_iovec_to_sdma_packet(struct user_sdma_request *req,
++					   struct user_sdma_txreq *tx,
++					   struct user_sdma_iovec *iovec,
++					   size_t from_this_iovec)
++{
++	struct mmu_rb_handler *handler = req->pq->handler;
++
++	while (from_this_iovec > 0) {
++		struct sdma_mmu_node *cache_entry;
++		size_t from_this_cache_entry;
++		size_t start;
++		int ret;
++
++		start = (uintptr_t)iovec->iov.iov_base + iovec->offset;
++		ret = get_system_cache_entry(req, &cache_entry, start,
++					     from_this_iovec);
++		if (ret) {
++			SDMA_DBG(req, "pin system segment failed %d", ret);
++			return ret;
++		}
++
++		from_this_cache_entry = cache_entry->rb.len - (start - cache_entry->rb.addr);
++		if (from_this_cache_entry > from_this_iovec)
++			from_this_cache_entry = from_this_iovec;
++
++		ret = add_mapping_to_sdma_packet(req, tx, cache_entry, start,
++						 from_this_cache_entry);
++		if (ret) {
++			/*
++			 * We're guaranteed that there will be no descriptor
++			 * completion callback that releases this node
++			 * because only the last descriptor referencing it
++			 * has a context attached, and a failure means the
++			 * last descriptor was never added.
++			 */
++			release_node(handler, cache_entry);
++			SDMA_DBG(req, "add system segment failed %d", ret);
++			return ret;
++		}
++
++		iovec->offset += from_this_cache_entry;
++		from_this_iovec -= from_this_cache_entry;
++	}
++
++	return 0;
++}
++
++static int add_system_pages_to_sdma_packet(struct user_sdma_request *req,
++					   struct user_sdma_txreq *tx,
++					   struct user_sdma_iovec *iovec,
++					   u32 *pkt_data_remaining)
++{
++	size_t remaining_to_add = *pkt_data_remaining;
++	/*
++	 * Walk through iovec entries, ensure the associated pages
++	 * are pinned and mapped, add data to the packet until no more
++	 * data remains to be added.
++	 */
++	while (remaining_to_add > 0) {
++		struct user_sdma_iovec *cur_iovec;
++		size_t from_this_iovec;
++		int ret;
++
++		cur_iovec = iovec;
++		from_this_iovec = iovec->iov.iov_len - iovec->offset;
++
++		if (from_this_iovec > remaining_to_add) {
++			from_this_iovec = remaining_to_add;
++		} else {
++			/* The current iovec entry will be consumed by this pass. */
++			req->iov_idx++;
++			iovec++;
++		}
++
++		ret = add_system_iovec_to_sdma_packet(req, tx, cur_iovec,
++						      from_this_iovec);
++		if (ret)
++			return ret;
++
++		remaining_to_add -= from_this_iovec;
++	}
++	*pkt_data_remaining = remaining_to_add;
++
++	return 0;
++}
++
++void system_descriptor_complete(struct hfi1_devdata *dd,
++				struct sdma_desc *descp)
++{
++	switch (sdma_mapping_type(descp)) {
++	case SDMA_MAP_SINGLE:
++		dma_unmap_single(&dd->pcidev->dev, sdma_mapping_addr(descp),
++				 sdma_mapping_len(descp), DMA_TO_DEVICE);
++		break;
++	case SDMA_MAP_PAGE:
++		dma_unmap_page(&dd->pcidev->dev, sdma_mapping_addr(descp),
++			       sdma_mapping_len(descp), DMA_TO_DEVICE);
++		break;
++	}
++
++	if (descp->pinning_ctx) {
++		struct sdma_mmu_node *node = descp->pinning_ctx;
++
++		release_node(node->rb.handler, node);
++	}
++}
++
+ static bool sdma_rb_filter(struct mmu_rb_node *node, unsigned long addr,
+ 			   unsigned long len)
+ {
+@@ -1493,8 +1666,7 @@ static void sdma_rb_remove(void *arg, struct mmu_rb_node *mnode)
+ 	struct sdma_mmu_node *node =
+ 		container_of(mnode, struct sdma_mmu_node, rb);
+ 
+-	unpin_sdma_pages(node);
+-	kfree(node);
++	free_system_node(node);
+ }
+ 
+ static int sdma_rb_invalidate(void *arg, struct mmu_rb_node *mnode)
+diff --git a/drivers/infiniband/hw/hfi1/user_sdma.h b/drivers/infiniband/hw/hfi1/user_sdma.h
+index ea56eb57e6568..a241836371dc1 100644
+--- a/drivers/infiniband/hw/hfi1/user_sdma.h
++++ b/drivers/infiniband/hw/hfi1/user_sdma.h
+@@ -112,16 +112,11 @@ struct sdma_mmu_node {
+ struct user_sdma_iovec {
+ 	struct list_head list;
+ 	struct iovec iov;
+-	/* number of pages in this vector */
+-	unsigned int npages;
+-	/* array of pinned pages for this vector */
+-	struct page **pages;
+ 	/*
+ 	 * offset into the virtual address space of the vector at
+ 	 * which we last left off.
+ 	 */
+ 	u64 offset;
+-	struct sdma_mmu_node *node;
+ };
+ 
+ /* evict operation argument */
+diff --git a/drivers/infiniband/hw/hfi1/verbs.c b/drivers/infiniband/hw/hfi1/verbs.c
+index 7f6d7fc7951df..fbdcfecb1768c 100644
+--- a/drivers/infiniband/hw/hfi1/verbs.c
++++ b/drivers/infiniband/hw/hfi1/verbs.c
+@@ -778,8 +778,8 @@ static int build_verbs_tx_desc(
+ 
+ 	/* add icrc, lt byte, and padding to flit */
+ 	if (extra_bytes)
+-		ret = sdma_txadd_daddr(sde->dd, &tx->txreq,
+-				       sde->dd->sdma_pad_phys, extra_bytes);
++		ret = sdma_txadd_daddr(sde->dd, &tx->txreq, sde->dd->sdma_pad_phys,
++				       extra_bytes);
+ 
+ bail_txadd:
+ 	return ret;
+diff --git a/drivers/infiniband/hw/hfi1/vnic_sdma.c b/drivers/infiniband/hw/hfi1/vnic_sdma.c
+index c3f0f8d877c37..727eedfba332a 100644
+--- a/drivers/infiniband/hw/hfi1/vnic_sdma.c
++++ b/drivers/infiniband/hw/hfi1/vnic_sdma.c
+@@ -64,6 +64,7 @@ static noinline int build_vnic_ulp_payload(struct sdma_engine *sde,
+ 
+ 		/* combine physically continuous fragments later? */
+ 		ret = sdma_txadd_page(sde->dd,
++				      NULL,
+ 				      &tx->txreq,
+ 				      skb_frag_page(frag),
+ 				      skb_frag_off(frag),
+diff --git a/drivers/infiniband/hw/mlx4/qp.c b/drivers/infiniband/hw/mlx4/qp.c
+index 884825b2e5f77..456656617c33f 100644
+--- a/drivers/infiniband/hw/mlx4/qp.c
++++ b/drivers/infiniband/hw/mlx4/qp.c
+@@ -447,9 +447,13 @@ static int set_user_sq_size(struct mlx4_ib_dev *dev,
+ 			    struct mlx4_ib_qp *qp,
+ 			    struct mlx4_ib_create_qp *ucmd)
+ {
++	u32 cnt;
++
+ 	/* Sanity check SQ size before proceeding */
+-	if ((1 << ucmd->log_sq_bb_count) > dev->dev->caps.max_wqes	 ||
+-	    ucmd->log_sq_stride >
++	if (check_shl_overflow(1, ucmd->log_sq_bb_count, &cnt) ||
++	    cnt > dev->dev->caps.max_wqes)
++		return -EINVAL;
++	if (ucmd->log_sq_stride >
+ 		ilog2(roundup_pow_of_two(dev->dev->caps.max_sq_desc_sz)) ||
+ 	    ucmd->log_sq_stride < MLX4_IB_MIN_SQ_STRIDE)
+ 		return -EINVAL;
+diff --git a/drivers/infiniband/hw/mlx5/devx.c b/drivers/infiniband/hw/mlx5/devx.c
+index 2211a0be16f36..f8e2baed27a5c 100644
+--- a/drivers/infiniband/hw/mlx5/devx.c
++++ b/drivers/infiniband/hw/mlx5/devx.c
+@@ -666,7 +666,21 @@ static bool devx_is_valid_obj_id(struct uverbs_attr_bundle *attrs,
+ 				      obj_id;
+ 
+ 	case MLX5_IB_OBJECT_DEVX_OBJ:
+-		return ((struct devx_obj *)uobj->object)->obj_id == obj_id;
++	{
++		u16 opcode = MLX5_GET(general_obj_in_cmd_hdr, in, opcode);
++		struct devx_obj *devx_uobj = uobj->object;
++
++		if (opcode == MLX5_CMD_OP_QUERY_FLOW_COUNTER &&
++		    devx_uobj->flow_counter_bulk_size) {
++			u64 end;
++
++			end = devx_uobj->obj_id +
++				devx_uobj->flow_counter_bulk_size;
++			return devx_uobj->obj_id <= obj_id && end > obj_id;
++		}
++
++		return devx_uobj->obj_id == obj_id;
++	}
+ 
+ 	default:
+ 		return false;
+@@ -1517,10 +1531,17 @@ static int UVERBS_HANDLER(MLX5_IB_METHOD_DEVX_OBJ_CREATE)(
+ 		goto obj_free;
+ 
+ 	if (opcode == MLX5_CMD_OP_ALLOC_FLOW_COUNTER) {
+-		u8 bulk = MLX5_GET(alloc_flow_counter_in,
+-				   cmd_in,
+-				   flow_counter_bulk);
+-		obj->flow_counter_bulk_size = 128UL * bulk;
++		u32 bulk = MLX5_GET(alloc_flow_counter_in,
++				    cmd_in,
++				    flow_counter_bulk_log_size);
++
++		if (bulk)
++			bulk = 1 << bulk;
++		else
++			bulk = 128UL * MLX5_GET(alloc_flow_counter_in,
++						cmd_in,
++						flow_counter_bulk);
++		obj->flow_counter_bulk_size = bulk;
+ 	}
+ 
+ 	uobj->object = obj;
+diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
+index 7cc3b973dec7b..86284aba3470d 100644
+--- a/drivers/infiniband/hw/mlx5/qp.c
++++ b/drivers/infiniband/hw/mlx5/qp.c
+@@ -4485,7 +4485,7 @@ static int mlx5_ib_modify_dct(struct ib_qp *ibqp, struct ib_qp_attr *attr,
+ 			return -EINVAL;
+ 
+ 		if (attr->port_num == 0 ||
+-		    attr->port_num > MLX5_CAP_GEN(dev->mdev, num_ports)) {
++		    attr->port_num > dev->num_ports) {
+ 			mlx5_ib_dbg(dev, "invalid port number %d. number of ports is %d\n",
+ 				    attr->port_num, dev->num_ports);
+ 			return -EINVAL;
+diff --git a/drivers/infiniband/hw/mlx5/umr.c b/drivers/infiniband/hw/mlx5/umr.c
+index 55f4e048d9474..c9e176e8ced46 100644
+--- a/drivers/infiniband/hw/mlx5/umr.c
++++ b/drivers/infiniband/hw/mlx5/umr.c
+@@ -380,6 +380,9 @@ static void mlx5r_umr_set_access_flags(struct mlx5_ib_dev *dev,
+ 				       struct mlx5_mkey_seg *seg,
+ 				       unsigned int access_flags)
+ {
++	bool ro_read = (access_flags & IB_ACCESS_RELAXED_ORDERING) &&
++		       pcie_relaxed_ordering_enabled(dev->mdev->pdev);
++
+ 	MLX5_SET(mkc, seg, a, !!(access_flags & IB_ACCESS_REMOTE_ATOMIC));
+ 	MLX5_SET(mkc, seg, rw, !!(access_flags & IB_ACCESS_REMOTE_WRITE));
+ 	MLX5_SET(mkc, seg, rr, !!(access_flags & IB_ACCESS_REMOTE_READ));
+@@ -387,8 +390,7 @@ static void mlx5r_umr_set_access_flags(struct mlx5_ib_dev *dev,
+ 	MLX5_SET(mkc, seg, lr, 1);
+ 	MLX5_SET(mkc, seg, relaxed_ordering_write,
+ 		 !!(access_flags & IB_ACCESS_RELAXED_ORDERING));
+-	MLX5_SET(mkc, seg, relaxed_ordering_read,
+-		 !!(access_flags & IB_ACCESS_RELAXED_ORDERING));
++	MLX5_SET(mkc, seg, relaxed_ordering_read, ro_read);
+ }
+ 
+ int mlx5r_umr_rereg_pd_access(struct mlx5_ib_mr *mr, struct ib_pd *pd,
+diff --git a/drivers/infiniband/sw/rdmavt/qp.c b/drivers/infiniband/sw/rdmavt/qp.c
+index 3acab569fbb94..2bdc4486c3daa 100644
+--- a/drivers/infiniband/sw/rdmavt/qp.c
++++ b/drivers/infiniband/sw/rdmavt/qp.c
+@@ -464,8 +464,6 @@ void rvt_qp_exit(struct rvt_dev_info *rdi)
+ 	if (qps_inuse)
+ 		rvt_pr_err(rdi, "QP memory leak! %u still in use\n",
+ 			   qps_inuse);
+-	if (!rdi->qp_dev)
+-		return;
+ 
+ 	kfree(rdi->qp_dev->qp_table);
+ 	free_qpn_table(&rdi->qp_dev->qpn_table);
+diff --git a/drivers/infiniband/sw/rxe/rxe.c b/drivers/infiniband/sw/rxe/rxe.c
+index 136c2efe34660..a3f05fdd9fac2 100644
+--- a/drivers/infiniband/sw/rxe/rxe.c
++++ b/drivers/infiniband/sw/rxe/rxe.c
+@@ -175,7 +175,7 @@ int rxe_add(struct rxe_dev *rxe, unsigned int mtu, const char *ibdev_name)
+ 
+ static int rxe_newlink(const char *ibdev_name, struct net_device *ndev)
+ {
+-	struct rxe_dev *exists;
++	struct rxe_dev *rxe;
+ 	int err = 0;
+ 
+ 	if (is_vlan_dev(ndev)) {
+@@ -184,17 +184,17 @@ static int rxe_newlink(const char *ibdev_name, struct net_device *ndev)
+ 		goto err;
+ 	}
+ 
+-	exists = rxe_get_dev_from_net(ndev);
+-	if (exists) {
+-		ib_device_put(&exists->ib_dev);
+-		rxe_dbg(exists, "already configured on %s\n", ndev->name);
++	rxe = rxe_get_dev_from_net(ndev);
++	if (rxe) {
++		ib_device_put(&rxe->ib_dev);
++		rxe_dbg(rxe, "already configured on %s\n", ndev->name);
+ 		err = -EEXIST;
+ 		goto err;
+ 	}
+ 
+ 	err = rxe_net_add(ibdev_name, ndev);
+ 	if (err) {
+-		rxe_dbg(exists, "failed to add %s\n", ndev->name);
++		pr_debug("failed to add %s\n", ndev->name);
+ 		goto err;
+ 	}
+ err:
+diff --git a/drivers/infiniband/sw/rxe/rxe_comp.c b/drivers/infiniband/sw/rxe/rxe_comp.c
+index 20737fec392bf..9a9d3ec2bf2fd 100644
+--- a/drivers/infiniband/sw/rxe/rxe_comp.c
++++ b/drivers/infiniband/sw/rxe/rxe_comp.c
+@@ -571,9 +571,8 @@ static void free_pkt(struct rxe_pkt_info *pkt)
+ 	ib_device_put(dev);
+ }
+ 
+-int rxe_completer(void *arg)
++int rxe_completer(struct rxe_qp *qp)
+ {
+-	struct rxe_qp *qp = (struct rxe_qp *)arg;
+ 	struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
+ 	struct rxe_send_wqe *wqe = NULL;
+ 	struct sk_buff *skb = NULL;
+diff --git a/drivers/infiniband/sw/rxe/rxe_cq.c b/drivers/infiniband/sw/rxe/rxe_cq.c
+index 1df186534639a..faf49c50bbaba 100644
+--- a/drivers/infiniband/sw/rxe/rxe_cq.c
++++ b/drivers/infiniband/sw/rxe/rxe_cq.c
+@@ -39,21 +39,6 @@ err1:
+ 	return -EINVAL;
+ }
+ 
+-static void rxe_send_complete(struct tasklet_struct *t)
+-{
+-	struct rxe_cq *cq = from_tasklet(cq, t, comp_task);
+-	unsigned long flags;
+-
+-	spin_lock_irqsave(&cq->cq_lock, flags);
+-	if (cq->is_dying) {
+-		spin_unlock_irqrestore(&cq->cq_lock, flags);
+-		return;
+-	}
+-	spin_unlock_irqrestore(&cq->cq_lock, flags);
+-
+-	cq->ibcq.comp_handler(&cq->ibcq, cq->ibcq.cq_context);
+-}
+-
+ int rxe_cq_from_init(struct rxe_dev *rxe, struct rxe_cq *cq, int cqe,
+ 		     int comp_vector, struct ib_udata *udata,
+ 		     struct rxe_create_cq_resp __user *uresp)
+@@ -79,10 +64,6 @@ int rxe_cq_from_init(struct rxe_dev *rxe, struct rxe_cq *cq, int cqe,
+ 
+ 	cq->is_user = uresp;
+ 
+-	cq->is_dying = false;
+-
+-	tasklet_setup(&cq->comp_task, rxe_send_complete);
+-
+ 	spin_lock_init(&cq->cq_lock);
+ 	cq->ibcq.cqe = cqe;
+ 	return 0;
+@@ -103,6 +84,7 @@ int rxe_cq_resize_queue(struct rxe_cq *cq, int cqe,
+ 	return err;
+ }
+ 
++/* caller holds reference to cq */
+ int rxe_cq_post(struct rxe_cq *cq, struct rxe_cqe *cqe, int solicited)
+ {
+ 	struct ib_event ev;
+@@ -135,21 +117,13 @@ int rxe_cq_post(struct rxe_cq *cq, struct rxe_cqe *cqe, int solicited)
+ 	if ((cq->notify == IB_CQ_NEXT_COMP) ||
+ 	    (cq->notify == IB_CQ_SOLICITED && solicited)) {
+ 		cq->notify = 0;
+-		tasklet_schedule(&cq->comp_task);
++
++		cq->ibcq.comp_handler(&cq->ibcq, cq->ibcq.cq_context);
+ 	}
+ 
+ 	return 0;
+ }
+ 
+-void rxe_cq_disable(struct rxe_cq *cq)
+-{
+-	unsigned long flags;
+-
+-	spin_lock_irqsave(&cq->cq_lock, flags);
+-	cq->is_dying = true;
+-	spin_unlock_irqrestore(&cq->cq_lock, flags);
+-}
+-
+ void rxe_cq_cleanup(struct rxe_pool_elem *elem)
+ {
+ 	struct rxe_cq *cq = container_of(elem, typeof(*cq), elem);
+diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
+index 1bb0cb479eb12..d131aaa94d059 100644
+--- a/drivers/infiniband/sw/rxe/rxe_loc.h
++++ b/drivers/infiniband/sw/rxe/rxe_loc.h
+@@ -171,9 +171,9 @@ void rxe_srq_cleanup(struct rxe_pool_elem *elem);
+ 
+ void rxe_dealloc(struct ib_device *ib_dev);
+ 
+-int rxe_completer(void *arg);
+-int rxe_requester(void *arg);
+-int rxe_responder(void *arg);
++int rxe_completer(struct rxe_qp *qp);
++int rxe_requester(struct rxe_qp *qp);
++int rxe_responder(struct rxe_qp *qp);
+ 
+ /* rxe_icrc.c */
+ int rxe_icrc_init(struct rxe_dev *rxe);
+diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c
+index ab72db68b58f6..13283ec06f95e 100644
+--- a/drivers/infiniband/sw/rxe/rxe_qp.c
++++ b/drivers/infiniband/sw/rxe/rxe_qp.c
+@@ -473,29 +473,23 @@ static void rxe_qp_reset(struct rxe_qp *qp)
+ {
+ 	/* stop tasks from running */
+ 	rxe_disable_task(&qp->resp.task);
+-
+-	/* stop request/comp */
+-	if (qp->sq.queue) {
+-		if (qp_type(qp) == IB_QPT_RC)
+-			rxe_disable_task(&qp->comp.task);
+-		rxe_disable_task(&qp->req.task);
+-	}
++	rxe_disable_task(&qp->comp.task);
++	rxe_disable_task(&qp->req.task);
+ 
+ 	/* move qp to the reset state */
+ 	qp->req.state = QP_STATE_RESET;
+ 	qp->comp.state = QP_STATE_RESET;
+ 	qp->resp.state = QP_STATE_RESET;
+ 
+-	/* let state machines reset themselves drain work and packet queues
+-	 * etc.
+-	 */
+-	__rxe_do_task(&qp->resp.task);
++	/* drain work and packet queuesc */
++	rxe_requester(qp);
++	rxe_completer(qp);
++	rxe_responder(qp);
+ 
+-	if (qp->sq.queue) {
+-		__rxe_do_task(&qp->comp.task);
+-		__rxe_do_task(&qp->req.task);
++	if (qp->rq.queue)
++		rxe_queue_reset(qp->rq.queue);
++	if (qp->sq.queue)
+ 		rxe_queue_reset(qp->sq.queue);
+-	}
+ 
+ 	/* cleanup attributes */
+ 	atomic_set(&qp->ssn, 0);
+@@ -518,13 +512,8 @@ static void rxe_qp_reset(struct rxe_qp *qp)
+ 
+ 	/* reenable tasks */
+ 	rxe_enable_task(&qp->resp.task);
+-
+-	if (qp->sq.queue) {
+-		if (qp_type(qp) == IB_QPT_RC)
+-			rxe_enable_task(&qp->comp.task);
+-
+-		rxe_enable_task(&qp->req.task);
+-	}
++	rxe_enable_task(&qp->comp.task);
++	rxe_enable_task(&qp->req.task);
+ }
+ 
+ /* drain the send queue */
+@@ -533,10 +522,7 @@ static void rxe_qp_drain(struct rxe_qp *qp)
+ 	if (qp->sq.queue) {
+ 		if (qp->req.state != QP_STATE_DRAINED) {
+ 			qp->req.state = QP_STATE_DRAIN;
+-			if (qp_type(qp) == IB_QPT_RC)
+-				rxe_sched_task(&qp->comp.task);
+-			else
+-				__rxe_do_task(&qp->comp.task);
++			rxe_sched_task(&qp->comp.task);
+ 			rxe_sched_task(&qp->req.task);
+ 		}
+ 	}
+@@ -552,11 +538,7 @@ void rxe_qp_error(struct rxe_qp *qp)
+ 
+ 	/* drain work and packet queues */
+ 	rxe_sched_task(&qp->resp.task);
+-
+-	if (qp_type(qp) == IB_QPT_RC)
+-		rxe_sched_task(&qp->comp.task);
+-	else
+-		__rxe_do_task(&qp->comp.task);
++	rxe_sched_task(&qp->comp.task);
+ 	rxe_sched_task(&qp->req.task);
+ }
+ 
+@@ -773,24 +755,25 @@ static void rxe_qp_do_cleanup(struct work_struct *work)
+ 
+ 	qp->valid = 0;
+ 	qp->qp_timeout_jiffies = 0;
+-	rxe_cleanup_task(&qp->resp.task);
+ 
+ 	if (qp_type(qp) == IB_QPT_RC) {
+ 		del_timer_sync(&qp->retrans_timer);
+ 		del_timer_sync(&qp->rnr_nak_timer);
+ 	}
+ 
+-	rxe_cleanup_task(&qp->req.task);
+-	rxe_cleanup_task(&qp->comp.task);
++	if (qp->resp.task.func)
++		rxe_cleanup_task(&qp->resp.task);
+ 
+-	/* flush out any receive wr's or pending requests */
+ 	if (qp->req.task.func)
+-		__rxe_do_task(&qp->req.task);
++		rxe_cleanup_task(&qp->req.task);
+ 
+-	if (qp->sq.queue) {
+-		__rxe_do_task(&qp->comp.task);
+-		__rxe_do_task(&qp->req.task);
+-	}
++	if (qp->comp.task.func)
++		rxe_cleanup_task(&qp->comp.task);
++
++	/* flush out any receive wr's or pending requests */
++	rxe_requester(qp);
++	rxe_completer(qp);
++	rxe_responder(qp);
+ 
+ 	if (qp->sq.queue)
+ 		rxe_queue_cleanup(qp->sq.queue);
+diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
+index 899c8779f8001..f2dc2d191e16f 100644
+--- a/drivers/infiniband/sw/rxe/rxe_req.c
++++ b/drivers/infiniband/sw/rxe/rxe_req.c
+@@ -635,9 +635,8 @@ static int rxe_do_local_ops(struct rxe_qp *qp, struct rxe_send_wqe *wqe)
+ 	return 0;
+ }
+ 
+-int rxe_requester(void *arg)
++int rxe_requester(struct rxe_qp *qp)
+ {
+-	struct rxe_qp *qp = (struct rxe_qp *)arg;
+ 	struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
+ 	struct rxe_pkt_info pkt;
+ 	struct sk_buff *skb;
+diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
+index 0cc1ba91d48cc..8c68340502769 100644
+--- a/drivers/infiniband/sw/rxe/rxe_resp.c
++++ b/drivers/infiniband/sw/rxe/rxe_resp.c
+@@ -1439,9 +1439,8 @@ static void rxe_drain_req_pkts(struct rxe_qp *qp, bool notify)
+ 		queue_advance_consumer(q, q->type);
+ }
+ 
+-int rxe_responder(void *arg)
++int rxe_responder(struct rxe_qp *qp)
+ {
+-	struct rxe_qp *qp = (struct rxe_qp *)arg;
+ 	struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
+ 	enum resp_states state;
+ 	struct rxe_pkt_info *pkt = NULL;
+diff --git a/drivers/infiniband/sw/rxe/rxe_task.c b/drivers/infiniband/sw/rxe/rxe_task.c
+index 60b90e33a8849..a67f485454436 100644
+--- a/drivers/infiniband/sw/rxe/rxe_task.c
++++ b/drivers/infiniband/sw/rxe/rxe_task.c
+@@ -6,19 +6,6 @@
+ 
+ #include "rxe.h"
+ 
+-int __rxe_do_task(struct rxe_task *task)
+-
+-{
+-	int ret;
+-
+-	while ((ret = task->func(task->arg)) == 0)
+-		;
+-
+-	task->ret = ret;
+-
+-	return ret;
+-}
+-
+ /*
+  * this locking is due to a potential race where
+  * a second caller finds the task already running
+@@ -29,7 +16,7 @@ static void do_task(struct tasklet_struct *t)
+ 	int cont;
+ 	int ret;
+ 	struct rxe_task *task = from_tasklet(task, t, tasklet);
+-	struct rxe_qp *qp = (struct rxe_qp *)task->arg;
++	struct rxe_qp *qp = (struct rxe_qp *)task->qp;
+ 	unsigned int iterations = RXE_MAX_ITERATIONS;
+ 
+ 	spin_lock_bh(&task->lock);
+@@ -54,7 +41,7 @@ static void do_task(struct tasklet_struct *t)
+ 
+ 	do {
+ 		cont = 0;
+-		ret = task->func(task->arg);
++		ret = task->func(task->qp);
+ 
+ 		spin_lock_bh(&task->lock);
+ 		switch (task->state) {
+@@ -91,9 +78,10 @@ static void do_task(struct tasklet_struct *t)
+ 	task->ret = ret;
+ }
+ 
+-int rxe_init_task(struct rxe_task *task, void *arg, int (*func)(void *))
++int rxe_init_task(struct rxe_task *task, struct rxe_qp *qp,
++		  int (*func)(struct rxe_qp *))
+ {
+-	task->arg	= arg;
++	task->qp	= qp;
+ 	task->func	= func;
+ 	task->destroyed	= false;
+ 
+diff --git a/drivers/infiniband/sw/rxe/rxe_task.h b/drivers/infiniband/sw/rxe/rxe_task.h
+index 7b88129702ac6..99585e40cef92 100644
+--- a/drivers/infiniband/sw/rxe/rxe_task.h
++++ b/drivers/infiniband/sw/rxe/rxe_task.h
+@@ -22,28 +22,23 @@ struct rxe_task {
+ 	struct tasklet_struct	tasklet;
+ 	int			state;
+ 	spinlock_t		lock;
+-	void			*arg;
+-	int			(*func)(void *arg);
++	struct rxe_qp		*qp;
++	int			(*func)(struct rxe_qp *qp);
+ 	int			ret;
+ 	bool			destroyed;
+ };
+ 
+ /*
+  * init rxe_task structure
+- *	arg  => parameter to pass to fcn
++ *	qp  => parameter to pass to func
+  *	func => function to call until it returns != 0
+  */
+-int rxe_init_task(struct rxe_task *task, void *arg, int (*func)(void *));
++int rxe_init_task(struct rxe_task *task, struct rxe_qp *qp,
++		  int (*func)(struct rxe_qp *));
+ 
+ /* cleanup task */
+ void rxe_cleanup_task(struct rxe_task *task);
+ 
+-/*
+- * raw call to func in loop without any checking
+- * can call when tasklets are disabled
+- */
+-int __rxe_do_task(struct rxe_task *task);
+-
+ void rxe_run_task(struct rxe_task *task);
+ 
+ void rxe_sched_task(struct rxe_task *task);
+diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
+index e14050a692766..6803ac76ae572 100644
+--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
++++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
+@@ -786,8 +786,6 @@ static int rxe_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata)
+ 	if (atomic_read(&cq->num_wq))
+ 		return -EINVAL;
+ 
+-	rxe_cq_disable(cq);
+-
+ 	rxe_cleanup(cq);
+ 	return 0;
+ }
+diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h
+index c269ae2a32243..d812093a39166 100644
+--- a/drivers/infiniband/sw/rxe/rxe_verbs.h
++++ b/drivers/infiniband/sw/rxe/rxe_verbs.h
+@@ -63,9 +63,7 @@ struct rxe_cq {
+ 	struct rxe_queue	*queue;
+ 	spinlock_t		cq_lock;
+ 	u8			notify;
+-	bool			is_dying;
+ 	bool			is_user;
+-	struct tasklet_struct	comp_task;
+ 	atomic_t		num_wq;
+ };
+ 
+diff --git a/drivers/infiniband/sw/siw/siw_main.c b/drivers/infiniband/sw/siw/siw_main.c
+index dacc174604bf2..65b5cda5457ba 100644
+--- a/drivers/infiniband/sw/siw/siw_main.c
++++ b/drivers/infiniband/sw/siw/siw_main.c
+@@ -437,9 +437,6 @@ static int siw_netdev_event(struct notifier_block *nb, unsigned long event,
+ 
+ 	dev_dbg(&netdev->dev, "siw: event %lu\n", event);
+ 
+-	if (dev_net(netdev) != &init_net)
+-		return NOTIFY_OK;
+-
+ 	base_dev = ib_device_get_by_netdev(netdev, RDMA_DRIVER_SIW);
+ 	if (!base_dev)
+ 		return NOTIFY_OK;
+diff --git a/drivers/infiniband/sw/siw/siw_qp_tx.c b/drivers/infiniband/sw/siw/siw_qp_tx.c
+index 05052b49107f2..6bb9e9e81ff4c 100644
+--- a/drivers/infiniband/sw/siw/siw_qp_tx.c
++++ b/drivers/infiniband/sw/siw/siw_qp_tx.c
+@@ -558,7 +558,7 @@ static int siw_tx_hdt(struct siw_iwarp_tx *c_tx, struct socket *s)
+ 			data_len -= plen;
+ 			fp_off = 0;
+ 
+-			if (++seg > (int)MAX_ARRAY) {
++			if (++seg >= (int)MAX_ARRAY) {
+ 				siw_dbg_qp(tx_qp(c_tx), "to many fragments\n");
+ 				siw_unmap_pages(iov, kmap_mask, seg-1);
+ 				wqe->processed -= c_tx->bytes_unsent;
+diff --git a/drivers/infiniband/ulp/isert/ib_isert.c b/drivers/infiniband/ulp/isert/ib_isert.c
+index 75404885cf981..f290cd49698ea 100644
+--- a/drivers/infiniband/ulp/isert/ib_isert.c
++++ b/drivers/infiniband/ulp/isert/ib_isert.c
+@@ -2506,8 +2506,8 @@ isert_wait4cmds(struct iscsit_conn *conn)
+ 	isert_info("iscsit_conn %p\n", conn);
+ 
+ 	if (conn->sess) {
+-		target_stop_session(conn->sess->se_sess);
+-		target_wait_for_sess_cmds(conn->sess->se_sess);
++		target_stop_cmd_counter(conn->cmd_cnt);
++		target_wait_for_cmds(conn->cmd_cnt);
+ 	}
+ }
+ 
+diff --git a/drivers/infiniband/ulp/srpt/ib_srpt.c b/drivers/infiniband/ulp/srpt/ib_srpt.c
+index 3c3fae738c3ed..25e799dba999e 100644
+--- a/drivers/infiniband/ulp/srpt/ib_srpt.c
++++ b/drivers/infiniband/ulp/srpt/ib_srpt.c
+@@ -549,6 +549,7 @@ static int srpt_format_guid(char *buf, unsigned int size, const __be64 *guid)
+  */
+ static int srpt_refresh_port(struct srpt_port *sport)
+ {
++	struct ib_mad_agent *mad_agent;
+ 	struct ib_mad_reg_req reg_req;
+ 	struct ib_port_modify port_modify;
+ 	struct ib_port_attr port_attr;
+@@ -593,24 +594,26 @@ static int srpt_refresh_port(struct srpt_port *sport)
+ 		set_bit(IB_MGMT_METHOD_GET, reg_req.method_mask);
+ 		set_bit(IB_MGMT_METHOD_SET, reg_req.method_mask);
+ 
+-		sport->mad_agent = ib_register_mad_agent(sport->sdev->device,
+-							 sport->port,
+-							 IB_QPT_GSI,
+-							 &reg_req, 0,
+-							 srpt_mad_send_handler,
+-							 srpt_mad_recv_handler,
+-							 sport, 0);
+-		if (IS_ERR(sport->mad_agent)) {
++		mad_agent = ib_register_mad_agent(sport->sdev->device,
++						  sport->port,
++						  IB_QPT_GSI,
++						  &reg_req, 0,
++						  srpt_mad_send_handler,
++						  srpt_mad_recv_handler,
++						  sport, 0);
++		if (IS_ERR(mad_agent)) {
+ 			pr_err("%s-%d: MAD agent registration failed (%ld). Note: this is expected if SR-IOV is enabled.\n",
+ 			       dev_name(&sport->sdev->device->dev), sport->port,
+-			       PTR_ERR(sport->mad_agent));
++			       PTR_ERR(mad_agent));
+ 			sport->mad_agent = NULL;
+ 			memset(&port_modify, 0, sizeof(port_modify));
+ 			port_modify.clr_port_cap_mask = IB_PORT_DEVICE_MGMT_SUP;
+ 			ib_modify_port(sport->sdev->device, sport->port, 0,
+ 				       &port_modify);
+-
++			return 0;
+ 		}
++
++		sport->mad_agent = mad_agent;
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/input/touchscreen/raspberrypi-ts.c b/drivers/input/touchscreen/raspberrypi-ts.c
+index 5000f5fd9ec38..45c575df994e0 100644
+--- a/drivers/input/touchscreen/raspberrypi-ts.c
++++ b/drivers/input/touchscreen/raspberrypi-ts.c
+@@ -134,7 +134,7 @@ static int rpi_ts_probe(struct platform_device *pdev)
+ 		return -ENOENT;
+ 	}
+ 
+-	fw = rpi_firmware_get(fw_node);
++	fw = devm_rpi_firmware_get(&pdev->dev, fw_node);
+ 	of_node_put(fw_node);
+ 	if (!fw)
+ 		return -EPROBE_DEFER;
+@@ -160,7 +160,6 @@ static int rpi_ts_probe(struct platform_device *pdev)
+ 	touchbuf = (u32)ts->fw_regs_phys;
+ 	error = rpi_firmware_property(fw, RPI_FIRMWARE_FRAMEBUFFER_SET_TOUCHBUF,
+ 				      &touchbuf, sizeof(touchbuf));
+-	rpi_firmware_put(fw);
+ 	if (error || touchbuf != 0) {
+ 		dev_warn(dev, "Failed to set touchbuf, %d\n", error);
+ 		return error;
+diff --git a/drivers/interconnect/qcom/icc-rpm.c b/drivers/interconnect/qcom/icc-rpm.c
+index 4180a06681b2b..c80819557923e 100644
+--- a/drivers/interconnect/qcom/icc-rpm.c
++++ b/drivers/interconnect/qcom/icc-rpm.c
+@@ -11,7 +11,6 @@
+ #include <linux/of_device.h>
+ #include <linux/of_platform.h>
+ #include <linux/platform_device.h>
+-#include <linux/pm_domain.h>
+ #include <linux/regmap.h>
+ #include <linux/slab.h>
+ 
+@@ -496,12 +495,6 @@ regmap_done:
+ 	if (ret)
+ 		return ret;
+ 
+-	if (desc->has_bus_pd) {
+-		ret = dev_pm_domain_attach(dev, true);
+-		if (ret)
+-			return ret;
+-	}
+-
+ 	provider = &qp->provider;
+ 	provider->dev = dev;
+ 	provider->set = qcom_icc_set;
+diff --git a/drivers/interconnect/qcom/icc-rpm.h b/drivers/interconnect/qcom/icc-rpm.h
+index a49af844ab13e..02257b0d3d5c6 100644
+--- a/drivers/interconnect/qcom/icc-rpm.h
++++ b/drivers/interconnect/qcom/icc-rpm.h
+@@ -91,7 +91,6 @@ struct qcom_icc_desc {
+ 	size_t num_nodes;
+ 	const char * const *clocks;
+ 	size_t num_clocks;
+-	bool has_bus_pd;
+ 	enum qcom_icc_type type;
+ 	const struct regmap_config *regmap_cfg;
+ 	unsigned int qos_offset;
+diff --git a/drivers/interconnect/qcom/msm8996.c b/drivers/interconnect/qcom/msm8996.c
+index 25a1a32bc611f..14efd2761b7ab 100644
+--- a/drivers/interconnect/qcom/msm8996.c
++++ b/drivers/interconnect/qcom/msm8996.c
+@@ -1823,7 +1823,6 @@ static const struct qcom_icc_desc msm8996_a0noc = {
+ 	.num_nodes = ARRAY_SIZE(a0noc_nodes),
+ 	.clocks = bus_a0noc_clocks,
+ 	.num_clocks = ARRAY_SIZE(bus_a0noc_clocks),
+-	.has_bus_pd = true,
+ 	.regmap_cfg = &msm8996_a0noc_regmap_config
+ };
+ 
+diff --git a/drivers/interconnect/qcom/osm-l3.c b/drivers/interconnect/qcom/osm-l3.c
+index 1bafb54f14329..a1f4f918b9116 100644
+--- a/drivers/interconnect/qcom/osm-l3.c
++++ b/drivers/interconnect/qcom/osm-l3.c
+@@ -14,13 +14,6 @@
+ 
+ #include <dt-bindings/interconnect/qcom,osm-l3.h>
+ 
+-#include "sc7180.h"
+-#include "sc7280.h"
+-#include "sc8180x.h"
+-#include "sdm845.h"
+-#include "sm8150.h"
+-#include "sm8250.h"
+-
+ #define LUT_MAX_ENTRIES			40U
+ #define LUT_SRC				GENMASK(31, 30)
+ #define LUT_L_VAL			GENMASK(7, 0)
+diff --git a/drivers/interconnect/qcom/sc7180.h b/drivers/interconnect/qcom/sc7180.h
+index 7a2b3eb00923c..2b718922c1090 100644
+--- a/drivers/interconnect/qcom/sc7180.h
++++ b/drivers/interconnect/qcom/sc7180.h
+@@ -145,7 +145,5 @@
+ #define SC7180_SLAVE_SERVICE_SNOC			134
+ #define SC7180_SLAVE_QDSS_STM				135
+ #define SC7180_SLAVE_TCU				136
+-#define SC7180_MASTER_OSM_L3_APPS			137
+-#define SC7180_SLAVE_OSM_L3				138
+ 
+ #endif
+diff --git a/drivers/interconnect/qcom/sc7280.h b/drivers/interconnect/qcom/sc7280.h
+index 1fb9839b2c14b..175e400305c51 100644
+--- a/drivers/interconnect/qcom/sc7280.h
++++ b/drivers/interconnect/qcom/sc7280.h
+@@ -150,7 +150,5 @@
+ #define SC7280_SLAVE_PCIE_1			139
+ #define SC7280_SLAVE_QDSS_STM			140
+ #define SC7280_SLAVE_TCU			141
+-#define SC7280_MASTER_EPSS_L3_APPS		142
+-#define SC7280_SLAVE_EPSS_L3			143
+ 
+ #endif
+diff --git a/drivers/interconnect/qcom/sc8180x.h b/drivers/interconnect/qcom/sc8180x.h
+index c138dcd350f1d..f8d90598335a1 100644
+--- a/drivers/interconnect/qcom/sc8180x.h
++++ b/drivers/interconnect/qcom/sc8180x.h
+@@ -168,8 +168,6 @@
+ #define SC8180X_SLAVE_EBI_CH0_DISPLAY		158
+ #define SC8180X_SLAVE_MNOC_SF_MEM_NOC_DISPLAY	159
+ #define SC8180X_SLAVE_MNOC_HF_MEM_NOC_DISPLAY	160
+-#define SC8180X_MASTER_OSM_L3_APPS		161
+-#define SC8180X_SLAVE_OSM_L3			162
+ 
+ #define SC8180X_MASTER_QUP_CORE_0		163
+ #define SC8180X_MASTER_QUP_CORE_1		164
+diff --git a/drivers/interconnect/qcom/sdm845.h b/drivers/interconnect/qcom/sdm845.h
+index 776e9c2acb278..bc7e425ce9852 100644
+--- a/drivers/interconnect/qcom/sdm845.h
++++ b/drivers/interconnect/qcom/sdm845.h
+@@ -136,7 +136,5 @@
+ #define SDM845_SLAVE_SERVICE_SNOC			128
+ #define SDM845_SLAVE_QDSS_STM				129
+ #define SDM845_SLAVE_TCU				130
+-#define SDM845_MASTER_OSM_L3_APPS			131
+-#define SDM845_SLAVE_OSM_L3				132
+ 
+ #endif /* __DRIVERS_INTERCONNECT_QCOM_SDM845_H__ */
+diff --git a/drivers/interconnect/qcom/sm8150.h b/drivers/interconnect/qcom/sm8150.h
+index 023161681fb87..1d587c94eb06e 100644
+--- a/drivers/interconnect/qcom/sm8150.h
++++ b/drivers/interconnect/qcom/sm8150.h
+@@ -148,7 +148,5 @@
+ #define SM8150_SLAVE_VSENSE_CTRL_CFG		137
+ #define SM8150_SNOC_CNOC_MAS			138
+ #define SM8150_SNOC_CNOC_SLV			139
+-#define SM8150_MASTER_OSM_L3_APPS		140
+-#define SM8150_SLAVE_OSM_L3			141
+ 
+ #endif
+diff --git a/drivers/interconnect/qcom/sm8250.h b/drivers/interconnect/qcom/sm8250.h
+index e3fc56bc7ca0e..209ab195f21fb 100644
+--- a/drivers/interconnect/qcom/sm8250.h
++++ b/drivers/interconnect/qcom/sm8250.h
+@@ -158,7 +158,5 @@
+ #define SM8250_SLAVE_VSENSE_CTRL_CFG		147
+ #define SM8250_SNOC_CNOC_MAS			148
+ #define SM8250_SNOC_CNOC_SLV			149
+-#define SM8250_MASTER_EPSS_L3_APPS		150
+-#define SM8250_SLAVE_EPSS_L3			151
+ 
+ #endif
+diff --git a/drivers/iommu/amd/amd_iommu_types.h b/drivers/iommu/amd/amd_iommu_types.h
+index 3d684190b4d53..f7cb1ce0f9bbc 100644
+--- a/drivers/iommu/amd/amd_iommu_types.h
++++ b/drivers/iommu/amd/amd_iommu_types.h
+@@ -1001,8 +1001,8 @@ struct amd_ir_data {
+ 	 */
+ 	struct irq_cfg *cfg;
+ 	int ga_vector;
+-	int ga_root_ptr;
+-	int ga_tag;
++	u64 ga_root_ptr;
++	u32 ga_tag;
+ };
+ 
+ struct amd_irte_ops {
+diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
+index 5a505ba5467e1..167da5b1a5e31 100644
+--- a/drivers/iommu/amd/iommu.c
++++ b/drivers/iommu/amd/iommu.c
+@@ -1666,10 +1666,6 @@ static void do_attach(struct iommu_dev_data *dev_data,
+ 	domain->dev_iommu[iommu->index] += 1;
+ 	domain->dev_cnt                 += 1;
+ 
+-	/* Override supported page sizes */
+-	if (domain->flags & PD_GIOV_MASK)
+-		domain->domain.pgsize_bitmap = AMD_IOMMU_PGSIZES_V2;
+-
+ 	/* Update device table */
+ 	set_dte_entry(iommu, dev_data->devid, domain,
+ 		      ats, dev_data->iommu_v2);
+@@ -2048,6 +2044,8 @@ static int protection_domain_init_v2(struct protection_domain *domain)
+ 
+ 	domain->flags |= PD_GIOV_MASK;
+ 
++	domain->domain.pgsize_bitmap = AMD_IOMMU_PGSIZES_V2;
++
+ 	if (domain_enable_v2(domain, 1)) {
+ 		domain_id_free(domain->id);
+ 		return -ENOMEM;
+diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
+index 10db680acaed5..256a38371120e 100644
+--- a/drivers/iommu/iommu.c
++++ b/drivers/iommu/iommu.c
+@@ -1964,8 +1964,13 @@ static struct iommu_domain *__iommu_domain_alloc(struct bus_type *bus,
+ 		return NULL;
+ 
+ 	domain->type = type;
+-	/* Assume all sizes by default; the driver may override this later */
+-	domain->pgsize_bitmap = bus->iommu_ops->pgsize_bitmap;
++	/*
++	 * If not already set, assume all sizes by default; the driver
++	 * may override this later
++	 */
++	if (!domain->pgsize_bitmap)
++		domain->pgsize_bitmap = bus->iommu_ops->pgsize_bitmap;
++
+ 	if (!domain->ops)
+ 		domain->ops = bus->iommu_ops->default_domain_ops;
+ 
+diff --git a/drivers/iommu/iommufd/selftest.c b/drivers/iommu/iommufd/selftest.c
+index cfb5fe9a5e0ee..76c46847dc494 100644
+--- a/drivers/iommu/iommufd/selftest.c
++++ b/drivers/iommu/iommufd/selftest.c
+@@ -339,10 +339,12 @@ static int iommufd_test_md_check_pa(struct iommufd_ucmd *ucmd,
+ {
+ 	struct iommufd_hw_pagetable *hwpt;
+ 	struct mock_iommu_domain *mock;
++	uintptr_t end;
+ 	int rc;
+ 
+ 	if (iova % MOCK_IO_PAGE_SIZE || length % MOCK_IO_PAGE_SIZE ||
+-	    (uintptr_t)uptr % MOCK_IO_PAGE_SIZE)
++	    (uintptr_t)uptr % MOCK_IO_PAGE_SIZE ||
++	    check_add_overflow((uintptr_t)uptr, (uintptr_t)length, &end))
+ 		return -EINVAL;
+ 
+ 	hwpt = get_md_pagetable(ucmd, mockpt_id, &mock);
+@@ -390,7 +392,10 @@ static int iommufd_test_md_check_refs(struct iommufd_ucmd *ucmd,
+ 				      void __user *uptr, size_t length,
+ 				      unsigned int refs)
+ {
+-	if (length % PAGE_SIZE || (uintptr_t)uptr % PAGE_SIZE)
++	uintptr_t end;
++
++	if (length % PAGE_SIZE || (uintptr_t)uptr % PAGE_SIZE ||
++	    check_add_overflow((uintptr_t)uptr, (uintptr_t)length, &end))
+ 		return -EINVAL;
+ 
+ 	for (; length; length -= PAGE_SIZE) {
+diff --git a/drivers/iommu/mtk_iommu.c b/drivers/iommu/mtk_iommu.c
+index d5a4955910ff5..6a00ce208dc2b 100644
+--- a/drivers/iommu/mtk_iommu.c
++++ b/drivers/iommu/mtk_iommu.c
+@@ -1258,6 +1258,14 @@ static int mtk_iommu_probe(struct platform_device *pdev)
+ 			return PTR_ERR(data->bclk);
+ 	}
+ 
++	if (MTK_IOMMU_HAS_FLAG(data->plat_data, PGTABLE_PA_35_EN)) {
++		ret = dma_set_mask(dev, DMA_BIT_MASK(35));
++		if (ret) {
++			dev_err(dev, "Failed to set dma_mask 35.\n");
++			return ret;
++		}
++	}
++
+ 	pm_runtime_enable(dev);
+ 
+ 	if (MTK_IOMMU_IS_TYPE(data->plat_data, MTK_IOMMU_TYPE_MM)) {
+diff --git a/drivers/leds/Kconfig b/drivers/leds/Kconfig
+index 9dbce09eabacf..aaa9140bc3514 100644
+--- a/drivers/leds/Kconfig
++++ b/drivers/leds/Kconfig
+@@ -795,7 +795,7 @@ config LEDS_SPI_BYTE
+ config LEDS_TI_LMU_COMMON
+ 	tristate "LED driver for TI LMU"
+ 	depends on LEDS_CLASS
+-	depends on REGMAP
++	select REGMAP
+ 	help
+ 	  Say Y to enable the LED driver for TI LMU devices.
+ 	  This supports common features between the TI LM3532, LM3631, LM3632,
+diff --git a/drivers/leds/leds-tca6507.c b/drivers/leds/leds-tca6507.c
+index 07dd12686a696..634cabd5bb796 100644
+--- a/drivers/leds/leds-tca6507.c
++++ b/drivers/leds/leds-tca6507.c
+@@ -691,8 +691,9 @@ tca6507_led_dt_init(struct device *dev)
+ 		if (fwnode_property_read_string(child, "label", &led.name))
+ 			led.name = fwnode_get_name(child);
+ 
+-		fwnode_property_read_string(child, "linux,default-trigger",
+-					    &led.default_trigger);
++		if (fwnode_property_read_string(child, "linux,default-trigger",
++						&led.default_trigger))
++			led.default_trigger = NULL;
+ 
+ 		led.flags = 0;
+ 		if (fwnode_device_is_compatible(child, "gpio"))
+diff --git a/drivers/macintosh/Kconfig b/drivers/macintosh/Kconfig
+index 539a2ed4e13dc..a0e717a986dcb 100644
+--- a/drivers/macintosh/Kconfig
++++ b/drivers/macintosh/Kconfig
+@@ -86,6 +86,7 @@ config ADB_PMU_LED
+ 
+ config ADB_PMU_LED_DISK
+ 	bool "Use front LED as DISK LED by default"
++	depends on ATA
+ 	depends on ADB_PMU_LED
+ 	depends on LEDS_CLASS
+ 	select LEDS_TRIGGERS
+diff --git a/drivers/macintosh/windfarm_smu_sat.c b/drivers/macintosh/windfarm_smu_sat.c
+index ebc4256a9e4a0..089f2743a070d 100644
+--- a/drivers/macintosh/windfarm_smu_sat.c
++++ b/drivers/macintosh/windfarm_smu_sat.c
+@@ -171,6 +171,7 @@ static void wf_sat_release(struct kref *ref)
+ 
+ 	if (sat->nr >= 0)
+ 		sats[sat->nr] = NULL;
++	of_node_put(sat->node);
+ 	kfree(sat);
+ }
+ 
+diff --git a/drivers/mailbox/mailbox-mpfs.c b/drivers/mailbox/mailbox-mpfs.c
+index 853901acaeec2..08aa840cccaca 100644
+--- a/drivers/mailbox/mailbox-mpfs.c
++++ b/drivers/mailbox/mailbox-mpfs.c
+@@ -79,6 +79,13 @@ static bool mpfs_mbox_busy(struct mpfs_mbox *mbox)
+ 	return status & SCB_STATUS_BUSY_MASK;
+ }
+ 
++static bool mpfs_mbox_last_tx_done(struct mbox_chan *chan)
++{
++	struct mpfs_mbox *mbox = (struct mpfs_mbox *)chan->con_priv;
++
++	return !mpfs_mbox_busy(mbox);
++}
++
+ static int mpfs_mbox_send_data(struct mbox_chan *chan, void *data)
+ {
+ 	struct mpfs_mbox *mbox = (struct mpfs_mbox *)chan->con_priv;
+@@ -182,7 +189,6 @@ static irqreturn_t mpfs_mbox_inbox_isr(int irq, void *data)
+ 
+ 	mpfs_mbox_rx_data(chan);
+ 
+-	mbox_chan_txdone(chan, 0);
+ 	return IRQ_HANDLED;
+ }
+ 
+@@ -212,6 +218,7 @@ static const struct mbox_chan_ops mpfs_mbox_ops = {
+ 	.send_data = mpfs_mbox_send_data,
+ 	.startup = mpfs_mbox_startup,
+ 	.shutdown = mpfs_mbox_shutdown,
++	.last_tx_done = mpfs_mbox_last_tx_done,
+ };
+ 
+ static int mpfs_mbox_probe(struct platform_device *pdev)
+@@ -247,7 +254,8 @@ static int mpfs_mbox_probe(struct platform_device *pdev)
+ 	mbox->controller.num_chans = 1;
+ 	mbox->controller.chans = mbox->chans;
+ 	mbox->controller.ops = &mpfs_mbox_ops;
+-	mbox->controller.txdone_irq = true;
++	mbox->controller.txdone_poll = true;
++	mbox->controller.txpoll_period = 10u;
+ 
+ 	ret = devm_mbox_controller_register(&pdev->dev, &mbox->controller);
+ 	if (ret) {
+diff --git a/drivers/mailbox/zynqmp-ipi-mailbox.c b/drivers/mailbox/zynqmp-ipi-mailbox.c
+index a4c8d23c76e21..d097f45b0e5f5 100644
+--- a/drivers/mailbox/zynqmp-ipi-mailbox.c
++++ b/drivers/mailbox/zynqmp-ipi-mailbox.c
+@@ -152,7 +152,7 @@ static irqreturn_t zynqmp_ipi_interrupt(int irq, void *data)
+ 	struct zynqmp_ipi_message *msg;
+ 	u64 arg0, arg3;
+ 	struct arm_smccc_res res;
+-	int ret, i;
++	int ret, i, status = IRQ_NONE;
+ 
+ 	(void)irq;
+ 	arg0 = SMC_IPI_MAILBOX_STATUS_ENQUIRY;
+@@ -170,11 +170,11 @@ static irqreturn_t zynqmp_ipi_interrupt(int irq, void *data)
+ 				memcpy_fromio(msg->data, mchan->req_buf,
+ 					      msg->len);
+ 				mbox_chan_received_data(chan, (void *)msg);
+-				return IRQ_HANDLED;
++				status = IRQ_HANDLED;
+ 			}
+ 		}
+ 	}
+-	return IRQ_NONE;
++	return status;
+ }
+ 
+ /**
+@@ -634,7 +634,12 @@ static int zynqmp_ipi_probe(struct platform_device *pdev)
+ 	struct zynqmp_ipi_mbox *mbox;
+ 	int num_mboxes, ret = -EINVAL;
+ 
+-	num_mboxes = of_get_child_count(np);
++	num_mboxes = of_get_available_child_count(np);
++	if (num_mboxes == 0) {
++		dev_err(dev, "mailbox nodes not available\n");
++		return -EINVAL;
++	}
++
+ 	pdata = devm_kzalloc(dev, struct_size(pdata, ipi_mboxes, num_mboxes),
+ 			     GFP_KERNEL);
+ 	if (!pdata)
+diff --git a/drivers/md/dm-clone-target.c b/drivers/md/dm-clone-target.c
+index f38a27604c7ab..fc30ebd67622c 100644
+--- a/drivers/md/dm-clone-target.c
++++ b/drivers/md/dm-clone-target.c
+@@ -2205,6 +2205,7 @@ static int __init dm_clone_init(void)
+ 	r = dm_register_target(&clone_target);
+ 	if (r < 0) {
+ 		DMERR("Failed to register clone target");
++		kmem_cache_destroy(_hydration_cache);
+ 		return r;
+ 	}
+ 
+diff --git a/drivers/md/dm-flakey.c b/drivers/md/dm-flakey.c
+index 5b7556d2a9d9f..468d025392cb0 100644
+--- a/drivers/md/dm-flakey.c
++++ b/drivers/md/dm-flakey.c
+@@ -125,9 +125,9 @@ static int parse_features(struct dm_arg_set *as, struct flakey_c *fc,
+ 			 * Direction r or w?
+ 			 */
+ 			arg_name = dm_shift_arg(as);
+-			if (!strcasecmp(arg_name, "w"))
++			if (arg_name && !strcasecmp(arg_name, "w"))
+ 				fc->corrupt_bio_rw = WRITE;
+-			else if (!strcasecmp(arg_name, "r"))
++			else if (arg_name && !strcasecmp(arg_name, "r"))
+ 				fc->corrupt_bio_rw = READ;
+ 			else {
+ 				ti->error = "Invalid corrupt bio direction (r or w)";
+diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
+index b0d5057fbdd98..54830b07b829a 100644
+--- a/drivers/md/dm-integrity.c
++++ b/drivers/md/dm-integrity.c
+@@ -4703,11 +4703,13 @@ static int __init dm_integrity_init(void)
+ 	}
+ 
+ 	r = dm_register_target(&integrity_target);
+-
+-	if (r < 0)
++	if (r < 0) {
+ 		DMERR("register failed %d", r);
++		kmem_cache_destroy(journal_io_cache);
++		return r;
++	}
+ 
+-	return r;
++	return 0;
+ }
+ 
+ static void __exit dm_integrity_exit(void)
+diff --git a/drivers/md/dm-ioctl.c b/drivers/md/dm-ioctl.c
+index 50a1259294d14..cc77cf3d41092 100644
+--- a/drivers/md/dm-ioctl.c
++++ b/drivers/md/dm-ioctl.c
+@@ -1168,10 +1168,13 @@ static int do_resume(struct dm_ioctl *param)
+ 	/* Do we need to load a new map ? */
+ 	if (new_map) {
+ 		sector_t old_size, new_size;
++		int srcu_idx;
+ 
+ 		/* Suspend if it isn't already suspended */
+-		if (param->flags & DM_SKIP_LOCKFS_FLAG)
++		old_map = dm_get_live_table(md, &srcu_idx);
++		if ((param->flags & DM_SKIP_LOCKFS_FLAG) || !old_map)
+ 			suspend_flags &= ~DM_SUSPEND_LOCKFS_FLAG;
++		dm_put_live_table(md, srcu_idx);
+ 		if (param->flags & DM_NOFLUSH_FLAG)
+ 			suspend_flags |= DM_SUSPEND_NOFLUSH_FLAG;
+ 		if (!dm_suspended_md(md))
+@@ -1556,11 +1559,12 @@ static int table_clear(struct file *filp, struct dm_ioctl *param, size_t param_s
+ 		has_new_map = true;
+ 	}
+ 
+-	param->flags &= ~DM_INACTIVE_PRESENT_FLAG;
+-
+-	__dev_status(hc->md, param);
+ 	md = hc->md;
+ 	up_write(&_hash_lock);
++
++	param->flags &= ~DM_INACTIVE_PRESENT_FLAG;
++	__dev_status(md, param);
++
+ 	if (old_map) {
+ 		dm_sync_table(md);
+ 		dm_table_destroy(old_map);
+diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
+index 2055a758541de..7899f5fb4c133 100644
+--- a/drivers/md/dm-table.c
++++ b/drivers/md/dm-table.c
+@@ -1202,21 +1202,12 @@ struct dm_crypto_profile {
+ 	struct mapped_device *md;
+ };
+ 
+-struct dm_keyslot_evict_args {
+-	const struct blk_crypto_key *key;
+-	int err;
+-};
+-
+ static int dm_keyslot_evict_callback(struct dm_target *ti, struct dm_dev *dev,
+ 				     sector_t start, sector_t len, void *data)
+ {
+-	struct dm_keyslot_evict_args *args = data;
+-	int err;
++	const struct blk_crypto_key *key = data;
+ 
+-	err = blk_crypto_evict_key(dev->bdev, args->key);
+-	if (!args->err)
+-		args->err = err;
+-	/* Always try to evict the key from all devices. */
++	blk_crypto_evict_key(dev->bdev, key);
+ 	return 0;
+ }
+ 
+@@ -1229,7 +1220,6 @@ static int dm_keyslot_evict(struct blk_crypto_profile *profile,
+ {
+ 	struct mapped_device *md =
+ 		container_of(profile, struct dm_crypto_profile, profile)->md;
+-	struct dm_keyslot_evict_args args = { key };
+ 	struct dm_table *t;
+ 	int srcu_idx;
+ 
+@@ -1242,11 +1232,12 @@ static int dm_keyslot_evict(struct blk_crypto_profile *profile,
+ 
+ 		if (!ti->type->iterate_devices)
+ 			continue;
+-		ti->type->iterate_devices(ti, dm_keyslot_evict_callback, &args);
++		ti->type->iterate_devices(ti, dm_keyslot_evict_callback,
++					  (void *)key);
+ 	}
+ 
+ 	dm_put_live_table(md, srcu_idx);
+-	return args.err;
++	return 0;
+ }
+ 
+ static int
+diff --git a/drivers/md/dm-verity-target.c b/drivers/md/dm-verity-target.c
+index ade83ef3b4392..9316399b920ee 100644
+--- a/drivers/md/dm-verity-target.c
++++ b/drivers/md/dm-verity-target.c
+@@ -523,7 +523,7 @@ static int verity_verify_io(struct dm_verity_io *io)
+ 		sector_t cur_block = io->block + b;
+ 		struct ahash_request *req = verity_io_hash_req(v, io);
+ 
+-		if (v->validated_blocks &&
++		if (v->validated_blocks && bio->bi_status == BLK_STS_OK &&
+ 		    likely(test_bit(cur_block, v->validated_blocks))) {
+ 			verity_bv_skip_block(v, io, iter);
+ 			continue;
+diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
+index 6c66357f92f55..ea6967aeaa02a 100644
+--- a/drivers/md/raid10.c
++++ b/drivers/md/raid10.c
+@@ -995,11 +995,15 @@ static bool stop_waiting_barrier(struct r10conf *conf)
+ 	    (!bio_list_empty(&bio_list[0]) || !bio_list_empty(&bio_list[1])))
+ 		return true;
+ 
+-	/* move on if recovery thread is blocked by us */
+-	if (conf->mddev->thread->tsk == current &&
+-	    test_bit(MD_RECOVERY_RUNNING, &conf->mddev->recovery) &&
+-	    conf->nr_queued > 0)
++	/*
++	 * move on if io is issued from raid10d(), nr_pending is not released
++	 * from original io(see handle_read_error()). All raise barrier is
++	 * blocked until this io is done.
++	 */
++	if (conf->mddev->thread->tsk == current) {
++		WARN_ON_ONCE(atomic_read(&conf->nr_pending) == 0);
+ 		return true;
++	}
+ 
+ 	return false;
+ }
+@@ -1244,7 +1248,8 @@ static void raid10_read_request(struct mddev *mddev, struct bio *bio,
+ 	}
+ 	slot = r10_bio->read_slot;
+ 
+-	if (blk_queue_io_stat(bio->bi_bdev->bd_disk->queue))
++	if (!r10_bio->start_time &&
++	    blk_queue_io_stat(bio->bi_bdev->bd_disk->queue))
+ 		r10_bio->start_time = bio_start_io_acct(bio);
+ 	read_bio = bio_alloc_clone(rdev->bdev, bio, gfp, &mddev->bio_set);
+ 
+@@ -1574,6 +1579,7 @@ static void __make_request(struct mddev *mddev, struct bio *bio, int sectors)
+ 	r10_bio->sector = bio->bi_iter.bi_sector;
+ 	r10_bio->state = 0;
+ 	r10_bio->read_slot = -1;
++	r10_bio->start_time = 0;
+ 	memset(r10_bio->devs, 0, sizeof(r10_bio->devs[0]) *
+ 			conf->geo.raid_disks);
+ 
+@@ -2609,11 +2615,22 @@ static void recovery_request_write(struct mddev *mddev, struct r10bio *r10_bio)
+ {
+ 	struct r10conf *conf = mddev->private;
+ 	int d;
+-	struct bio *wbio, *wbio2;
++	struct bio *wbio = r10_bio->devs[1].bio;
++	struct bio *wbio2 = r10_bio->devs[1].repl_bio;
++
++	/* Need to test wbio2->bi_end_io before we call
++	 * submit_bio_noacct as if the former is NULL,
++	 * the latter is free to free wbio2.
++	 */
++	if (wbio2 && !wbio2->bi_end_io)
++		wbio2 = NULL;
+ 
+ 	if (!test_bit(R10BIO_Uptodate, &r10_bio->state)) {
+ 		fix_recovery_read_error(r10_bio);
+-		end_sync_request(r10_bio);
++		if (wbio->bi_end_io)
++			end_sync_request(r10_bio);
++		if (wbio2)
++			end_sync_request(r10_bio);
+ 		return;
+ 	}
+ 
+@@ -2622,14 +2639,6 @@ static void recovery_request_write(struct mddev *mddev, struct r10bio *r10_bio)
+ 	 * and submit the write request
+ 	 */
+ 	d = r10_bio->devs[1].devnum;
+-	wbio = r10_bio->devs[1].bio;
+-	wbio2 = r10_bio->devs[1].repl_bio;
+-	/* Need to test wbio2->bi_end_io before we call
+-	 * submit_bio_noacct as if the former is NULL,
+-	 * the latter is free to free wbio2.
+-	 */
+-	if (wbio2 && !wbio2->bi_end_io)
+-		wbio2 = NULL;
+ 	if (wbio->bi_end_io) {
+ 		atomic_inc(&conf->mirrors[d].rdev->nr_pending);
+ 		md_sync_acct(conf->mirrors[d].rdev->bdev, bio_sectors(wbio));
+@@ -2978,9 +2987,13 @@ static void handle_read_error(struct mddev *mddev, struct r10bio *r10_bio)
+ 		md_error(mddev, rdev);
+ 
+ 	rdev_dec_pending(rdev, mddev);
+-	allow_barrier(conf);
+ 	r10_bio->state = 0;
+ 	raid10_read_request(mddev, r10_bio->master_bio, r10_bio);
++	/*
++	 * allow_barrier after re-submit to ensure no sync io
++	 * can be issued while regular io pending.
++	 */
++	allow_barrier(conf);
+ }
+ 
+ static void handle_write_completed(struct r10conf *conf, struct r10bio *r10_bio)
+@@ -3289,10 +3302,6 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr,
+ 	sector_t chunk_mask = conf->geo.chunk_mask;
+ 	int page_idx = 0;
+ 
+-	if (!mempool_initialized(&conf->r10buf_pool))
+-		if (init_resync(conf))
+-			return 0;
+-
+ 	/*
+ 	 * Allow skipping a full rebuild for incremental assembly
+ 	 * of a clean array, like RAID1 does.
+@@ -3308,6 +3317,10 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr,
+ 		return mddev->dev_sectors - sector_nr;
+ 	}
+ 
++	if (!mempool_initialized(&conf->r10buf_pool))
++		if (init_resync(conf))
++			return 0;
++
+  skipped:
+ 	max_sector = mddev->dev_sectors;
+ 	if (test_bit(MD_RECOVERY_SYNC, &mddev->recovery) ||
+@@ -4004,6 +4017,20 @@ static int setup_geo(struct geom *geo, struct mddev *mddev, enum geo_type new)
+ 	return nc*fc;
+ }
+ 
++static void raid10_free_conf(struct r10conf *conf)
++{
++	if (!conf)
++		return;
++
++	mempool_exit(&conf->r10bio_pool);
++	kfree(conf->mirrors);
++	kfree(conf->mirrors_old);
++	kfree(conf->mirrors_new);
++	safe_put_page(conf->tmppage);
++	bioset_exit(&conf->bio_split);
++	kfree(conf);
++}
++
+ static struct r10conf *setup_conf(struct mddev *mddev)
+ {
+ 	struct r10conf *conf = NULL;
+@@ -4086,13 +4113,7 @@ static struct r10conf *setup_conf(struct mddev *mddev)
+ 	return conf;
+ 
+  out:
+-	if (conf) {
+-		mempool_exit(&conf->r10bio_pool);
+-		kfree(conf->mirrors);
+-		safe_put_page(conf->tmppage);
+-		bioset_exit(&conf->bio_split);
+-		kfree(conf);
+-	}
++	raid10_free_conf(conf);
+ 	return ERR_PTR(err);
+ }
+ 
+@@ -4129,6 +4150,9 @@ static int raid10_run(struct mddev *mddev)
+ 	if (!conf)
+ 		goto out;
+ 
++	mddev->thread = conf->thread;
++	conf->thread = NULL;
++
+ 	if (mddev_is_clustered(conf->mddev)) {
+ 		int fc, fo;
+ 
+@@ -4141,9 +4165,6 @@ static int raid10_run(struct mddev *mddev)
+ 		}
+ 	}
+ 
+-	mddev->thread = conf->thread;
+-	conf->thread = NULL;
+-
+ 	if (mddev->queue) {
+ 		blk_queue_max_write_zeroes_sectors(mddev->queue, 0);
+ 		blk_queue_io_min(mddev->queue, mddev->chunk_sectors << 9);
+@@ -4283,10 +4304,7 @@ static int raid10_run(struct mddev *mddev)
+ 
+ out_free_conf:
+ 	md_unregister_thread(&mddev->thread);
+-	mempool_exit(&conf->r10bio_pool);
+-	safe_put_page(conf->tmppage);
+-	kfree(conf->mirrors);
+-	kfree(conf);
++	raid10_free_conf(conf);
+ 	mddev->private = NULL;
+ out:
+ 	return -EIO;
+@@ -4294,15 +4312,7 @@ out:
+ 
+ static void raid10_free(struct mddev *mddev, void *priv)
+ {
+-	struct r10conf *conf = priv;
+-
+-	mempool_exit(&conf->r10bio_pool);
+-	safe_put_page(conf->tmppage);
+-	kfree(conf->mirrors);
+-	kfree(conf->mirrors_old);
+-	kfree(conf->mirrors_new);
+-	bioset_exit(&conf->bio_split);
+-	kfree(conf);
++	raid10_free_conf(priv);
+ }
+ 
+ static void raid10_quiesce(struct mddev *mddev, int quiesce)
+diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
+index 7b820b81d8c2b..f787c9e5b10e7 100644
+--- a/drivers/md/raid5.c
++++ b/drivers/md/raid5.c
+@@ -6079,6 +6079,38 @@ out_release:
+ 	return ret;
+ }
+ 
++/*
++ * If the bio covers multiple data disks, find sector within the bio that has
++ * the lowest chunk offset in the first chunk.
++ */
++static sector_t raid5_bio_lowest_chunk_sector(struct r5conf *conf,
++					      struct bio *bi)
++{
++	int sectors_per_chunk = conf->chunk_sectors;
++	int raid_disks = conf->raid_disks;
++	int dd_idx;
++	struct stripe_head sh;
++	unsigned int chunk_offset;
++	sector_t r_sector = bi->bi_iter.bi_sector & ~((sector_t)RAID5_STRIPE_SECTORS(conf)-1);
++	sector_t sector;
++
++	/* We pass in fake stripe_head to get back parity disk numbers */
++	sector = raid5_compute_sector(conf, r_sector, 0, &dd_idx, &sh);
++	chunk_offset = sector_div(sector, sectors_per_chunk);
++	if (sectors_per_chunk - chunk_offset >= bio_sectors(bi))
++		return r_sector;
++	/*
++	 * Bio crosses to the next data disk. Check whether it's in the same
++	 * chunk.
++	 */
++	dd_idx++;
++	while (dd_idx == sh.pd_idx || dd_idx == sh.qd_idx)
++		dd_idx++;
++	if (dd_idx >= raid_disks)
++		return r_sector;
++	return r_sector + sectors_per_chunk - chunk_offset;
++}
++
+ static bool raid5_make_request(struct mddev *mddev, struct bio * bi)
+ {
+ 	DEFINE_WAIT_FUNC(wait, woken_wake_function);
+@@ -6150,6 +6182,17 @@ static bool raid5_make_request(struct mddev *mddev, struct bio * bi)
+ 	}
+ 	md_account_bio(mddev, &bi);
+ 
++	/*
++	 * Lets start with the stripe with the lowest chunk offset in the first
++	 * chunk. That has the best chances of creating IOs adjacent to
++	 * previous IOs in case of sequential IO and thus creates the most
++	 * sequential IO pattern. We don't bother with the optimization when
++	 * reshaping as the performance benefit is not worth the complexity.
++	 */
++	if (likely(conf->reshape_progress == MaxSector))
++		logical_sector = raid5_bio_lowest_chunk_sector(conf, bi);
++	s = (logical_sector - ctx.first_sector) >> RAID5_STRIPE_SHIFT(conf);
++
+ 	add_wait_queue(&conf->wait_for_overlap, &wait);
+ 	while (1) {
+ 		res = make_stripe_request(mddev, conf, &ctx, logical_sector,
+@@ -6178,7 +6221,7 @@ static bool raid5_make_request(struct mddev *mddev, struct bio * bi)
+ 			continue;
+ 		}
+ 
+-		s = find_first_bit(ctx.sectors_to_do, stripe_cnt);
++		s = find_next_bit_wrap(ctx.sectors_to_do, stripe_cnt, s);
+ 		if (s == stripe_cnt)
+ 			break;
+ 
+diff --git a/drivers/media/i2c/hi846.c b/drivers/media/i2c/hi846.c
+index 7c61873b71981..306dc35e925fd 100644
+--- a/drivers/media/i2c/hi846.c
++++ b/drivers/media/i2c/hi846.c
+@@ -1472,21 +1472,26 @@ static int hi846_init_controls(struct hi846 *hi846)
+ 	if (ctrl_hdlr->error) {
+ 		dev_err(&client->dev, "v4l ctrl handler error: %d\n",
+ 			ctrl_hdlr->error);
+-		return ctrl_hdlr->error;
++		ret = ctrl_hdlr->error;
++		goto error;
+ 	}
+ 
+ 	ret = v4l2_fwnode_device_parse(&client->dev, &props);
+ 	if (ret)
+-		return ret;
++		goto error;
+ 
+ 	ret = v4l2_ctrl_new_fwnode_properties(ctrl_hdlr, &hi846_ctrl_ops,
+ 					      &props);
+ 	if (ret)
+-		return ret;
++		goto error;
+ 
+ 	hi846->sd.ctrl_handler = ctrl_hdlr;
+ 
+ 	return 0;
++
++error:
++	v4l2_ctrl_handler_free(ctrl_hdlr);
++	return ret;
+ }
+ 
+ static int hi846_set_video_mode(struct hi846 *hi846, int fps)
+diff --git a/drivers/media/i2c/max9286.c b/drivers/media/i2c/max9286.c
+index 701038d6d19b1..13a986b885889 100644
+--- a/drivers/media/i2c/max9286.c
++++ b/drivers/media/i2c/max9286.c
+@@ -1122,6 +1122,7 @@ err_async:
+ static void max9286_v4l2_unregister(struct max9286_priv *priv)
+ {
+ 	fwnode_handle_put(priv->sd.fwnode);
++	v4l2_ctrl_handler_free(&priv->ctrls);
+ 	v4l2_async_unregister_subdev(&priv->sd);
+ 	max9286_v4l2_notifier_unregister(priv);
+ }
+diff --git a/drivers/media/i2c/ov5670.c b/drivers/media/i2c/ov5670.c
+index f79d908f4531b..c5e783a06f06c 100644
+--- a/drivers/media/i2c/ov5670.c
++++ b/drivers/media/i2c/ov5670.c
+@@ -2660,7 +2660,7 @@ static int ov5670_probe(struct i2c_client *client)
+ 		goto error_print;
+ 	}
+ 
+-	ov5670->xvclk = devm_clk_get(&client->dev, NULL);
++	ov5670->xvclk = devm_clk_get_optional(&client->dev, NULL);
+ 	if (!IS_ERR_OR_NULL(ov5670->xvclk))
+ 		input_clk = clk_get_rate(ov5670->xvclk);
+ 	else if (PTR_ERR(ov5670->xvclk) == -ENOENT)
+diff --git a/drivers/media/i2c/ov8856.c b/drivers/media/i2c/ov8856.c
+index cf8384e09413b..b5c7881383ca7 100644
+--- a/drivers/media/i2c/ov8856.c
++++ b/drivers/media/i2c/ov8856.c
+@@ -1709,46 +1709,6 @@ static int ov8856_identify_module(struct ov8856 *ov8856)
+ 		return -ENXIO;
+ 	}
+ 
+-	ret = ov8856_write_reg(ov8856, OV8856_REG_MODE_SELECT,
+-			       OV8856_REG_VALUE_08BIT, OV8856_MODE_STREAMING);
+-	if (ret)
+-		return ret;
+-
+-	ret = ov8856_write_reg(ov8856, OV8856_OTP_MODE_CTRL,
+-			       OV8856_REG_VALUE_08BIT, OV8856_OTP_MODE_AUTO);
+-	if (ret) {
+-		dev_err(&client->dev, "failed to set otp mode");
+-		return ret;
+-	}
+-
+-	ret = ov8856_write_reg(ov8856, OV8856_OTP_LOAD_CTRL,
+-			       OV8856_REG_VALUE_08BIT,
+-			       OV8856_OTP_LOAD_CTRL_ENABLE);
+-	if (ret) {
+-		dev_err(&client->dev, "failed to enable load control");
+-		return ret;
+-	}
+-
+-	ret = ov8856_read_reg(ov8856, OV8856_MODULE_REVISION,
+-			      OV8856_REG_VALUE_08BIT, &val);
+-	if (ret) {
+-		dev_err(&client->dev, "failed to read module revision");
+-		return ret;
+-	}
+-
+-	dev_info(&client->dev, "OV8856 revision %x (%s) at address 0x%02x\n",
+-		 val,
+-		 val == OV8856_2A_MODULE ? "2A" :
+-		 val == OV8856_1B_MODULE ? "1B" : "unknown revision",
+-		 client->addr);
+-
+-	ret = ov8856_write_reg(ov8856, OV8856_REG_MODE_SELECT,
+-			       OV8856_REG_VALUE_08BIT, OV8856_MODE_STANDBY);
+-	if (ret) {
+-		dev_err(&client->dev, "failed to exit streaming mode");
+-		return ret;
+-	}
+-
+ 	ov8856->identified = true;
+ 
+ 	return 0;
+diff --git a/drivers/media/pci/dm1105/dm1105.c b/drivers/media/pci/dm1105/dm1105.c
+index 4ac645a56c14e..9e9c7c071accc 100644
+--- a/drivers/media/pci/dm1105/dm1105.c
++++ b/drivers/media/pci/dm1105/dm1105.c
+@@ -1176,6 +1176,7 @@ static void dm1105_remove(struct pci_dev *pdev)
+ 	struct dvb_demux *dvbdemux = &dev->demux;
+ 	struct dmx_demux *dmx = &dvbdemux->dmx;
+ 
++	cancel_work_sync(&dev->ir.work);
+ 	dm1105_ir_exit(dev);
+ 	dmx->close(dmx);
+ 	dvb_net_release(&dev->dvbnet);
+diff --git a/drivers/media/pci/saa7134/saa7134-ts.c b/drivers/media/pci/saa7134/saa7134-ts.c
+index 6a5053126237f..437dbe5e75e29 100644
+--- a/drivers/media/pci/saa7134/saa7134-ts.c
++++ b/drivers/media/pci/saa7134/saa7134-ts.c
+@@ -300,6 +300,7 @@ int saa7134_ts_start(struct saa7134_dev *dev)
+ 
+ int saa7134_ts_fini(struct saa7134_dev *dev)
+ {
++	del_timer_sync(&dev->ts_q.timeout);
+ 	saa7134_pgtable_free(dev->pci, &dev->ts_q.pt);
+ 	return 0;
+ }
+diff --git a/drivers/media/pci/saa7134/saa7134-vbi.c b/drivers/media/pci/saa7134/saa7134-vbi.c
+index 3f0b0933eed69..3e773690468bd 100644
+--- a/drivers/media/pci/saa7134/saa7134-vbi.c
++++ b/drivers/media/pci/saa7134/saa7134-vbi.c
+@@ -185,6 +185,7 @@ int saa7134_vbi_init1(struct saa7134_dev *dev)
+ int saa7134_vbi_fini(struct saa7134_dev *dev)
+ {
+ 	/* nothing */
++	del_timer_sync(&dev->vbi_q.timeout);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/media/pci/saa7134/saa7134-video.c b/drivers/media/pci/saa7134/saa7134-video.c
+index 4d8974c9fcc98..29124756a62bc 100644
+--- a/drivers/media/pci/saa7134/saa7134-video.c
++++ b/drivers/media/pci/saa7134/saa7134-video.c
+@@ -2146,6 +2146,7 @@ int saa7134_video_init1(struct saa7134_dev *dev)
+ 
+ void saa7134_video_fini(struct saa7134_dev *dev)
+ {
++	del_timer_sync(&dev->video_q.timeout);
+ 	/* free stuff */
+ 	saa7134_pgtable_free(dev->pci, &dev->video_q.pt);
+ 	saa7134_pgtable_free(dev->pci, &dev->vbi_q.pt);
+diff --git a/drivers/media/platform/amphion/vdec.c b/drivers/media/platform/amphion/vdec.c
+index 87f9f8e90ab13..70633530d23a1 100644
+--- a/drivers/media/platform/amphion/vdec.c
++++ b/drivers/media/platform/amphion/vdec.c
+@@ -168,7 +168,31 @@ static const struct vpu_format vdec_formats[] = {
+ 	{0, 0, 0, 0},
+ };
+ 
++static int vdec_op_s_ctrl(struct v4l2_ctrl *ctrl)
++{
++	struct vpu_inst *inst = ctrl_to_inst(ctrl);
++	struct vdec_t *vdec = inst->priv;
++	int ret = 0;
++
++	vpu_inst_lock(inst);
++	switch (ctrl->id) {
++	case V4L2_CID_MPEG_VIDEO_DEC_DISPLAY_DELAY_ENABLE:
++		vdec->params.display_delay_enable = ctrl->val;
++		break;
++	case V4L2_CID_MPEG_VIDEO_DEC_DISPLAY_DELAY:
++		vdec->params.display_delay = ctrl->val;
++		break;
++	default:
++		ret = -EINVAL;
++		break;
++	}
++	vpu_inst_unlock(inst);
++
++	return ret;
++}
++
+ static const struct v4l2_ctrl_ops vdec_ctrl_ops = {
++	.s_ctrl = vdec_op_s_ctrl,
+ 	.g_volatile_ctrl = vpu_helper_g_volatile_ctrl,
+ };
+ 
+@@ -181,6 +205,14 @@ static int vdec_ctrl_init(struct vpu_inst *inst)
+ 	if (ret)
+ 		return ret;
+ 
++	v4l2_ctrl_new_std(&inst->ctrl_handler, &vdec_ctrl_ops,
++			  V4L2_CID_MPEG_VIDEO_DEC_DISPLAY_DELAY,
++			  0, 0, 1, 0);
++
++	v4l2_ctrl_new_std(&inst->ctrl_handler, &vdec_ctrl_ops,
++			  V4L2_CID_MPEG_VIDEO_DEC_DISPLAY_DELAY_ENABLE,
++			  0, 1, 1, 0);
++
+ 	ctrl = v4l2_ctrl_new_std(&inst->ctrl_handler, &vdec_ctrl_ops,
+ 				 V4L2_CID_MIN_BUFFERS_FOR_CAPTURE, 1, 32, 1, 2);
+ 	if (ctrl)
+diff --git a/drivers/media/platform/amphion/vpu_codec.h b/drivers/media/platform/amphion/vpu_codec.h
+index 528a93f08ecd4..bac6d0d94f8a5 100644
+--- a/drivers/media/platform/amphion/vpu_codec.h
++++ b/drivers/media/platform/amphion/vpu_codec.h
+@@ -55,7 +55,8 @@ struct vpu_encode_params {
+ struct vpu_decode_params {
+ 	u32 codec_format;
+ 	u32 output_format;
+-	u32 b_dis_reorder;
++	u32 display_delay_enable;
++	u32 display_delay;
+ 	u32 b_non_frame;
+ 	u32 frame_count;
+ 	u32 end_flag;
+diff --git a/drivers/media/platform/amphion/vpu_malone.c b/drivers/media/platform/amphion/vpu_malone.c
+index 2c9bfc6a5a72e..feb5c25e31044 100644
+--- a/drivers/media/platform/amphion/vpu_malone.c
++++ b/drivers/media/platform/amphion/vpu_malone.c
+@@ -641,7 +641,9 @@ static int vpu_malone_set_params(struct vpu_shared_addr *shared,
+ 		hc->jpg[instance].jpg_mjpeg_interlaced = 0;
+ 	}
+ 
+-	hc->codec_param[instance].disp_imm = params->b_dis_reorder ? 1 : 0;
++	hc->codec_param[instance].disp_imm = params->display_delay_enable ? 1 : 0;
++	if (malone_format != MALONE_FMT_AVC)
++		hc->codec_param[instance].disp_imm = 0;
+ 	hc->codec_param[instance].dbglog_enable = 0;
+ 	iface->dbglog_desc.level = 0;
+ 
+diff --git a/drivers/media/platform/mediatek/jpeg/mtk_jpeg_core.c b/drivers/media/platform/mediatek/jpeg/mtk_jpeg_core.c
+index 969516a940ba7..d9584fe5033eb 100644
+--- a/drivers/media/platform/mediatek/jpeg/mtk_jpeg_core.c
++++ b/drivers/media/platform/mediatek/jpeg/mtk_jpeg_core.c
+@@ -1025,9 +1025,6 @@ retry_select:
+ 	if (!dst_buf)
+ 		goto getbuf_fail;
+ 
+-	v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx);
+-	v4l2_m2m_dst_buf_remove(ctx->fh.m2m_ctx);
+-
+ 	v4l2_m2m_buf_copy_metadata(src_buf, dst_buf, true);
+ 
+ 	mtk_jpegenc_set_hw_param(ctx, hw_id, src_buf, dst_buf);
+@@ -1045,6 +1042,9 @@ retry_select:
+ 		goto enc_end;
+ 	}
+ 
++	v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx);
++	v4l2_m2m_dst_buf_remove(ctx->fh.m2m_ctx);
++
+ 	schedule_delayed_work(&comp_jpeg[hw_id]->job_timeout_work,
+ 			      msecs_to_jiffies(MTK_JPEG_HW_TIMEOUT_MSEC));
+ 
+@@ -1220,9 +1220,6 @@ retry_select:
+ 	if (!dst_buf)
+ 		goto getbuf_fail;
+ 
+-	v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx);
+-	v4l2_m2m_dst_buf_remove(ctx->fh.m2m_ctx);
+-
+ 	v4l2_m2m_buf_copy_metadata(src_buf, dst_buf, true);
+ 	jpeg_src_buf = mtk_jpeg_vb2_to_srcbuf(&src_buf->vb2_buf);
+ 	jpeg_dst_buf = mtk_jpeg_vb2_to_srcbuf(&dst_buf->vb2_buf);
+@@ -1231,7 +1228,7 @@ retry_select:
+ 					     &jpeg_src_buf->dec_param)) {
+ 		mtk_jpeg_queue_src_chg_event(ctx);
+ 		ctx->state = MTK_JPEG_SOURCE_CHANGE;
+-		goto dec_end;
++		goto getbuf_fail;
+ 	}
+ 
+ 	jpeg_src_buf->curr_ctx = ctx;
+@@ -1254,6 +1251,9 @@ retry_select:
+ 		goto clk_end;
+ 	}
+ 
++	v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx);
++	v4l2_m2m_dst_buf_remove(ctx->fh.m2m_ctx);
++
+ 	schedule_delayed_work(&comp_jpeg[hw_id]->job_timeout_work,
+ 			      msecs_to_jiffies(MTK_JPEG_HW_TIMEOUT_MSEC));
+ 
+@@ -1692,7 +1692,7 @@ static int mtk_jpeg_probe(struct platform_device *pdev)
+ 		return -EINVAL;
+ 	}
+ 
+-	if (list_empty(&pdev->dev.devres_head)) {
++	if (!jpeg->variant->multi_core) {
+ 		INIT_DELAYED_WORK(&jpeg->job_timeout_work,
+ 				  mtk_jpeg_job_timeout_work);
+ 
+@@ -1874,6 +1874,7 @@ static const struct mtk_jpeg_variant mtk_jpeg_drvdata = {
+ 	.ioctl_ops = &mtk_jpeg_enc_ioctl_ops,
+ 	.out_q_default_fourcc = V4L2_PIX_FMT_YUYV,
+ 	.cap_q_default_fourcc = V4L2_PIX_FMT_JPEG,
++	.multi_core = false,
+ };
+ 
+ static struct mtk_jpeg_variant mtk8195_jpegenc_drvdata = {
+@@ -1885,6 +1886,7 @@ static struct mtk_jpeg_variant mtk8195_jpegenc_drvdata = {
+ 	.ioctl_ops = &mtk_jpeg_enc_ioctl_ops,
+ 	.out_q_default_fourcc = V4L2_PIX_FMT_YUYV,
+ 	.cap_q_default_fourcc = V4L2_PIX_FMT_JPEG,
++	.multi_core = true,
+ };
+ 
+ static const struct mtk_jpeg_variant mtk8195_jpegdec_drvdata = {
+@@ -1896,6 +1898,7 @@ static const struct mtk_jpeg_variant mtk8195_jpegdec_drvdata = {
+ 	.ioctl_ops = &mtk_jpeg_dec_ioctl_ops,
+ 	.out_q_default_fourcc = V4L2_PIX_FMT_JPEG,
+ 	.cap_q_default_fourcc = V4L2_PIX_FMT_YUV420M,
++	.multi_core = true,
+ };
+ 
+ #if defined(CONFIG_OF)
+diff --git a/drivers/media/platform/mediatek/jpeg/mtk_jpeg_core.h b/drivers/media/platform/mediatek/jpeg/mtk_jpeg_core.h
+index b9126476be8fa..f87358cc9f47f 100644
+--- a/drivers/media/platform/mediatek/jpeg/mtk_jpeg_core.h
++++ b/drivers/media/platform/mediatek/jpeg/mtk_jpeg_core.h
+@@ -60,6 +60,7 @@ enum mtk_jpeg_ctx_state {
+  * @ioctl_ops:			the callback of jpeg v4l2_ioctl_ops
+  * @out_q_default_fourcc:	output queue default fourcc
+  * @cap_q_default_fourcc:	capture queue default fourcc
++ * @multi_core:		mark jpeg hw is multi_core or not
+  */
+ struct mtk_jpeg_variant {
+ 	struct clk_bulk_data *clks;
+@@ -74,6 +75,7 @@ struct mtk_jpeg_variant {
+ 	const struct v4l2_ioctl_ops *ioctl_ops;
+ 	u32 out_q_default_fourcc;
+ 	u32 cap_q_default_fourcc;
++	bool multi_core;
+ };
+ 
+ struct mtk_jpeg_src_buf {
+diff --git a/drivers/media/platform/mediatek/jpeg/mtk_jpeg_enc_hw.c b/drivers/media/platform/mediatek/jpeg/mtk_jpeg_enc_hw.c
+index 1bbb712d78d0e..867f4c1a09fa6 100644
+--- a/drivers/media/platform/mediatek/jpeg/mtk_jpeg_enc_hw.c
++++ b/drivers/media/platform/mediatek/jpeg/mtk_jpeg_enc_hw.c
+@@ -286,10 +286,6 @@ static irqreturn_t mtk_jpegenc_hw_irq_handler(int irq, void *priv)
+ 	mtk_jpegenc_put_buf(jpeg);
+ 	pm_runtime_put(ctx->jpeg->dev);
+ 	clk_disable_unprepare(jpeg->venc_clk.clks->clk);
+-	if (!list_empty(&ctx->fh.m2m_ctx->out_q_ctx.rdy_queue) ||
+-	    !list_empty(&ctx->fh.m2m_ctx->cap_q_ctx.rdy_queue)) {
+-		queue_work(master_jpeg->workqueue, &ctx->jpeg_work);
+-	}
+ 
+ 	jpeg->hw_state = MTK_JPEG_HW_IDLE;
+ 	wake_up(&master_jpeg->enc_hw_wq);
+diff --git a/drivers/media/platform/mediatek/mdp3/mtk-mdp3-m2m.c b/drivers/media/platform/mediatek/mdp3/mtk-mdp3-m2m.c
+index 5f74ea3b7a524..8612a48bde10f 100644
+--- a/drivers/media/platform/mediatek/mdp3/mtk-mdp3-m2m.c
++++ b/drivers/media/platform/mediatek/mdp3/mtk-mdp3-m2m.c
+@@ -566,7 +566,11 @@ static int mdp_m2m_open(struct file *file)
+ 		goto err_free_ctx;
+ 	}
+ 
+-	ctx->id = ida_alloc(&mdp->mdp_ida, GFP_KERNEL);
++	ret = ida_alloc(&mdp->mdp_ida, GFP_KERNEL);
++	if (ret < 0)
++		goto err_unlock_mutex;
++	ctx->id = ret;
++
+ 	ctx->mdp_dev = mdp;
+ 
+ 	v4l2_fh_init(&ctx->fh, vdev);
+@@ -617,6 +621,8 @@ err_release_handler:
+ 	v4l2_fh_del(&ctx->fh);
+ err_exit_fh:
+ 	v4l2_fh_exit(&ctx->fh);
++	ida_free(&mdp->mdp_ida, ctx->id);
++err_unlock_mutex:
+ 	mutex_unlock(&mdp->m2m_lock);
+ err_free_ctx:
+ 	kfree(ctx);
+diff --git a/drivers/media/platform/mediatek/mdp3/mtk-mdp3-regs.c b/drivers/media/platform/mediatek/mdp3/mtk-mdp3-regs.c
+index 4e84a37ecdfc1..36336d169bd91 100644
+--- a/drivers/media/platform/mediatek/mdp3/mtk-mdp3-regs.c
++++ b/drivers/media/platform/mediatek/mdp3/mtk-mdp3-regs.c
+@@ -4,6 +4,7 @@
+  * Author: Ping-Hsun Wu <ping-hsun.wu@mediatek.com>
+  */
+ 
++#include <linux/math64.h>
+ #include <media/v4l2-common.h>
+ #include <media/videobuf2-v4l2.h>
+ #include <media/videobuf2-dma-contig.h>
+@@ -428,14 +429,15 @@ const struct mdp_format *mdp_try_fmt_mplane(struct v4l2_format *f,
+ 		u32 bpl = pix_mp->plane_fmt[i].bytesperline;
+ 		u32 min_si, max_si;
+ 		u32 si = pix_mp->plane_fmt[i].sizeimage;
++		u64 di;
+ 
+ 		bpl = clamp(bpl, min_bpl, max_bpl);
+ 		pix_mp->plane_fmt[i].bytesperline = bpl;
+ 
+-		min_si = (bpl * pix_mp->height * fmt->depth[i]) /
+-			 fmt->row_depth[i];
+-		max_si = (bpl * s.max_height * fmt->depth[i]) /
+-			 fmt->row_depth[i];
++		di = (u64)bpl * pix_mp->height * fmt->depth[i];
++		min_si = (u32)div_u64(di, fmt->row_depth[i]);
++		di = (u64)bpl * s.max_height * fmt->depth[i];
++		max_si = (u32)div_u64(di, fmt->row_depth[i]);
+ 
+ 		si = clamp(si, min_si, max_si);
+ 		pix_mp->plane_fmt[i].sizeimage = si;
+diff --git a/drivers/media/platform/mediatek/vcodec/mtk_vcodec_dec.c b/drivers/media/platform/mediatek/vcodec/mtk_vcodec_dec.c
+index 641f533c417fd..c99705681a03e 100644
+--- a/drivers/media/platform/mediatek/vcodec/mtk_vcodec_dec.c
++++ b/drivers/media/platform/mediatek/vcodec/mtk_vcodec_dec.c
+@@ -39,10 +39,9 @@ static bool mtk_vdec_get_cap_fmt(struct mtk_vcodec_ctx *ctx, int format_index)
+ {
+ 	const struct mtk_vcodec_dec_pdata *dec_pdata = ctx->dev->vdec_pdata;
+ 	const struct mtk_video_fmt *fmt;
+-	struct mtk_q_data *q_data;
+ 	int num_frame_count = 0, i;
+-	bool ret = true;
+ 
++	fmt = &dec_pdata->vdec_formats[format_index];
+ 	for (i = 0; i < *dec_pdata->num_formats; i++) {
+ 		if (dec_pdata->vdec_formats[i].type != MTK_FMT_FRAME)
+ 			continue;
+@@ -50,27 +49,10 @@ static bool mtk_vdec_get_cap_fmt(struct mtk_vcodec_ctx *ctx, int format_index)
+ 		num_frame_count++;
+ 	}
+ 
+-	if (num_frame_count == 1)
++	if (num_frame_count == 1 || fmt->fourcc == V4L2_PIX_FMT_MM21)
+ 		return true;
+ 
+-	fmt = &dec_pdata->vdec_formats[format_index];
+-	q_data = &ctx->q_data[MTK_Q_DATA_SRC];
+-	switch (q_data->fmt->fourcc) {
+-	case V4L2_PIX_FMT_VP8_FRAME:
+-		if (fmt->fourcc == V4L2_PIX_FMT_MM21)
+-			ret = true;
+-		break;
+-	case V4L2_PIX_FMT_H264_SLICE:
+-	case V4L2_PIX_FMT_VP9_FRAME:
+-		if (fmt->fourcc == V4L2_PIX_FMT_MM21)
+-			ret = false;
+-		break;
+-	default:
+-		ret = true;
+-		break;
+-	}
+-
+-	return ret;
++	return false;
+ }
+ 
+ static struct mtk_q_data *mtk_vdec_get_q_data(struct mtk_vcodec_ctx *ctx,
+diff --git a/drivers/media/platform/mediatek/vcodec/mtk_vcodec_dec_drv.c b/drivers/media/platform/mediatek/vcodec/mtk_vcodec_dec_drv.c
+index 174a6eec2f549..42df901e8beb4 100644
+--- a/drivers/media/platform/mediatek/vcodec/mtk_vcodec_dec_drv.c
++++ b/drivers/media/platform/mediatek/vcodec/mtk_vcodec_dec_drv.c
+@@ -451,7 +451,8 @@ err_core_workq:
+ 	if (IS_VDEC_LAT_ARCH(dev->vdec_pdata->hw_arch))
+ 		destroy_workqueue(dev->core_workqueue);
+ err_res:
+-	pm_runtime_disable(dev->pm.dev);
++	if (!dev->vdec_pdata->is_subdev_supported)
++		pm_runtime_disable(dev->pm.dev);
+ err_dec_pm:
+ 	mtk_vcodec_fw_release(dev->fw_handler);
+ 	return ret;
+diff --git a/drivers/media/platform/mediatek/vcodec/mtk_vcodec_dec_hw.c b/drivers/media/platform/mediatek/vcodec/mtk_vcodec_dec_hw.c
+index 376db0e433d75..b753bf54ebd90 100644
+--- a/drivers/media/platform/mediatek/vcodec/mtk_vcodec_dec_hw.c
++++ b/drivers/media/platform/mediatek/vcodec/mtk_vcodec_dec_hw.c
+@@ -193,8 +193,16 @@ err:
+ 	return ret;
+ }
+ 
++static int mtk_vdec_hw_remove(struct platform_device *pdev)
++{
++	pm_runtime_disable(&pdev->dev);
++
++	return 0;
++}
++
+ static struct platform_driver mtk_vdec_driver = {
+ 	.probe	= mtk_vdec_hw_probe,
++	.remove = mtk_vdec_hw_remove,
+ 	.driver	= {
+ 		.name	= "mtk-vdec-comp",
+ 		.of_match_table = mtk_vdec_hw_match,
+diff --git a/drivers/media/platform/mediatek/vcodec/mtk_vcodec_dec_stateful.c b/drivers/media/platform/mediatek/vcodec/mtk_vcodec_dec_stateful.c
+index 035c86e7809fd..29991551cf614 100644
+--- a/drivers/media/platform/mediatek/vcodec/mtk_vcodec_dec_stateful.c
++++ b/drivers/media/platform/mediatek/vcodec/mtk_vcodec_dec_stateful.c
+@@ -11,7 +11,7 @@
+ #include "mtk_vcodec_dec_pm.h"
+ #include "vdec_drv_if.h"
+ 
+-static const struct mtk_video_fmt mtk_video_formats[] = {
++static struct mtk_video_fmt mtk_video_formats[] = {
+ 	{
+ 		.fourcc = V4L2_PIX_FMT_H264,
+ 		.type = MTK_FMT_DEC,
+@@ -580,6 +580,16 @@ static int mtk_vcodec_dec_ctrls_setup(struct mtk_vcodec_ctx *ctx)
+ 
+ static void mtk_init_vdec_params(struct mtk_vcodec_ctx *ctx)
+ {
++	unsigned int i;
++
++	if (!(ctx->dev->dec_capability & VCODEC_CAPABILITY_4K_DISABLED)) {
++		for (i = 0; i < num_supported_formats; i++) {
++			mtk_video_formats[i].frmsize.max_width =
++				VCODEC_DEC_4K_CODED_WIDTH;
++			mtk_video_formats[i].frmsize.max_height =
++				VCODEC_DEC_4K_CODED_HEIGHT;
++		}
++	}
+ }
+ 
+ static struct vb2_ops mtk_vdec_frame_vb2_ops = {
+diff --git a/drivers/media/platform/mediatek/vcodec/mtk_vcodec_dec_stateless.c b/drivers/media/platform/mediatek/vcodec/mtk_vcodec_dec_stateless.c
+index ffbcee04dc26f..3000db975e5f5 100644
+--- a/drivers/media/platform/mediatek/vcodec/mtk_vcodec_dec_stateless.c
++++ b/drivers/media/platform/mediatek/vcodec/mtk_vcodec_dec_stateless.c
+@@ -258,8 +258,10 @@ static void mtk_vdec_worker(struct work_struct *work)
+ 		if (src_buf_req)
+ 			v4l2_ctrl_request_complete(src_buf_req, &ctx->ctrl_hdl);
+ 	} else {
+-		v4l2_m2m_src_buf_remove(ctx->m2m_ctx);
+-		v4l2_m2m_buf_done(vb2_v4l2_src, state);
++		if (ret != -EAGAIN) {
++			v4l2_m2m_src_buf_remove(ctx->m2m_ctx);
++			v4l2_m2m_buf_done(vb2_v4l2_src, state);
++		}
+ 		v4l2_m2m_job_finish(dev->m2m_dev_dec, ctx->m2m_ctx);
+ 	}
+ }
+@@ -390,14 +392,14 @@ static void mtk_vcodec_get_supported_formats(struct mtk_vcodec_ctx *ctx)
+ 	if (num_formats)
+ 		return;
+ 
+-	if (ctx->dev->dec_capability & MTK_VDEC_FORMAT_MM21) {
+-		mtk_vcodec_add_formats(V4L2_PIX_FMT_MM21, ctx);
+-		cap_format_count++;
+-	}
+ 	if (ctx->dev->dec_capability & MTK_VDEC_FORMAT_MT21C) {
+ 		mtk_vcodec_add_formats(V4L2_PIX_FMT_MT21C, ctx);
+ 		cap_format_count++;
+ 	}
++	if (ctx->dev->dec_capability & MTK_VDEC_FORMAT_MM21) {
++		mtk_vcodec_add_formats(V4L2_PIX_FMT_MM21, ctx);
++		cap_format_count++;
++	}
+ 	if (ctx->dev->dec_capability & MTK_VDEC_FORMAT_H264_SLICE) {
+ 		mtk_vcodec_add_formats(V4L2_PIX_FMT_H264_SLICE, ctx);
+ 		out_format_count++;
+diff --git a/drivers/media/platform/mediatek/vcodec/vdec/vdec_h264_req_multi_if.c b/drivers/media/platform/mediatek/vcodec/vdec/vdec_h264_req_multi_if.c
+index 955b2d0c8f53f..999ce7ee5fdc2 100644
+--- a/drivers/media/platform/mediatek/vcodec/vdec/vdec_h264_req_multi_if.c
++++ b/drivers/media/platform/mediatek/vcodec/vdec/vdec_h264_req_multi_if.c
+@@ -597,7 +597,7 @@ static int vdec_h264_slice_lat_decode(void *h_vdec, struct mtk_vcodec_mem *bs,
+ 	lat_buf = vdec_msg_queue_dqbuf(&inst->ctx->msg_queue.lat_ctx);
+ 	if (!lat_buf) {
+ 		mtk_vcodec_err(inst, "failed to get lat buffer");
+-		return -EINVAL;
++		return -EAGAIN;
+ 	}
+ 	share_info = lat_buf->private_data;
+ 	src_buf_info = container_of(bs, struct mtk_video_dec_buf, bs_buffer);
+diff --git a/drivers/media/platform/mediatek/vcodec/vdec/vdec_vp9_req_lat_if.c b/drivers/media/platform/mediatek/vcodec/vdec/vdec_vp9_req_lat_if.c
+index cbb6728b8a40b..cf16cf2807f07 100644
+--- a/drivers/media/platform/mediatek/vcodec/vdec/vdec_vp9_req_lat_if.c
++++ b/drivers/media/platform/mediatek/vcodec/vdec/vdec_vp9_req_lat_if.c
+@@ -2070,7 +2070,7 @@ static int vdec_vp9_slice_lat_decode(void *h_vdec, struct mtk_vcodec_mem *bs,
+ 	lat_buf = vdec_msg_queue_dqbuf(&instance->ctx->msg_queue.lat_ctx);
+ 	if (!lat_buf) {
+ 		mtk_vcodec_err(instance, "Failed to get VP9 lat buf\n");
+-		return -EBUSY;
++		return -EAGAIN;
+ 	}
+ 	pfc = (struct vdec_vp9_slice_pfc *)lat_buf->private_data;
+ 	if (!pfc) {
+diff --git a/drivers/media/platform/mediatek/vcodec/vdec_msg_queue.c b/drivers/media/platform/mediatek/vcodec/vdec_msg_queue.c
+index dc2004790a472..f3073d1e7f420 100644
+--- a/drivers/media/platform/mediatek/vcodec/vdec_msg_queue.c
++++ b/drivers/media/platform/mediatek/vcodec/vdec_msg_queue.c
+@@ -52,9 +52,26 @@ static struct list_head *vdec_get_buf_list(int hardware_index, struct vdec_lat_b
+ 	}
+ }
+ 
++static void vdec_msg_queue_inc(struct vdec_msg_queue *msg_queue, int hardware_index)
++{
++	if (hardware_index == MTK_VDEC_CORE)
++		atomic_inc(&msg_queue->core_list_cnt);
++	else
++		atomic_inc(&msg_queue->lat_list_cnt);
++}
++
++static void vdec_msg_queue_dec(struct vdec_msg_queue *msg_queue, int hardware_index)
++{
++	if (hardware_index == MTK_VDEC_CORE)
++		atomic_dec(&msg_queue->core_list_cnt);
++	else
++		atomic_dec(&msg_queue->lat_list_cnt);
++}
++
+ int vdec_msg_queue_qbuf(struct vdec_msg_queue_ctx *msg_ctx, struct vdec_lat_buf *buf)
+ {
+ 	struct list_head *head;
++	int status;
+ 
+ 	head = vdec_get_buf_list(msg_ctx->hardware_index, buf);
+ 	if (!head) {
+@@ -66,11 +83,18 @@ int vdec_msg_queue_qbuf(struct vdec_msg_queue_ctx *msg_ctx, struct vdec_lat_buf
+ 	list_add_tail(head, &msg_ctx->ready_queue);
+ 	msg_ctx->ready_num++;
+ 
+-	if (msg_ctx->hardware_index != MTK_VDEC_CORE)
++	vdec_msg_queue_inc(&buf->ctx->msg_queue, msg_ctx->hardware_index);
++	if (msg_ctx->hardware_index != MTK_VDEC_CORE) {
+ 		wake_up_all(&msg_ctx->ready_to_use);
+-	else
+-		queue_work(buf->ctx->dev->core_workqueue,
+-			   &buf->ctx->msg_queue.core_work);
++	} else {
++		if (buf->ctx->msg_queue.core_work_cnt <
++			atomic_read(&buf->ctx->msg_queue.core_list_cnt)) {
++			status = queue_work(buf->ctx->dev->core_workqueue,
++					    &buf->ctx->msg_queue.core_work);
++			if (status)
++				buf->ctx->msg_queue.core_work_cnt++;
++		}
++	}
+ 
+ 	mtk_v4l2_debug(3, "enqueue buf type: %d addr: 0x%p num: %d",
+ 		       msg_ctx->hardware_index, buf, msg_ctx->ready_num);
+@@ -127,6 +151,7 @@ struct vdec_lat_buf *vdec_msg_queue_dqbuf(struct vdec_msg_queue_ctx *msg_ctx)
+ 		return NULL;
+ 	}
+ 	list_del(head);
++	vdec_msg_queue_dec(&buf->ctx->msg_queue, msg_ctx->hardware_index);
+ 
+ 	msg_ctx->ready_num--;
+ 	mtk_v4l2_debug(3, "dqueue buf type:%d addr: 0x%p num: %d",
+@@ -156,11 +181,29 @@ void vdec_msg_queue_update_ube_wptr(struct vdec_msg_queue *msg_queue, uint64_t u
+ 
+ bool vdec_msg_queue_wait_lat_buf_full(struct vdec_msg_queue *msg_queue)
+ {
++	struct vdec_lat_buf *buf, *tmp;
++	struct list_head *list_core[3];
++	struct vdec_msg_queue_ctx *core_ctx;
++	int ret, i, in_core_count = 0, count = 0;
+ 	long timeout_jiff;
+-	int ret;
++
++	core_ctx = &msg_queue->ctx->dev->msg_queue_core_ctx;
++	spin_lock(&core_ctx->ready_lock);
++	list_for_each_entry_safe(buf, tmp, &core_ctx->ready_queue, core_list) {
++		if (buf && buf->ctx == msg_queue->ctx) {
++			list_core[in_core_count++] = &buf->core_list;
++			list_del(&buf->core_list);
++		}
++	}
++
++	for (i = 0; i < in_core_count; i++) {
++		list_add(list_core[in_core_count - (1 + i)], &core_ctx->ready_queue);
++		queue_work(msg_queue->ctx->dev->core_workqueue, &msg_queue->core_work);
++	}
++	spin_unlock(&core_ctx->ready_lock);
+ 
+ 	timeout_jiff = msecs_to_jiffies(1000 * (NUM_BUFFER_COUNT + 2));
+-	ret = wait_event_timeout(msg_queue->lat_ctx.ready_to_use,
++	ret = wait_event_timeout(msg_queue->ctx->msg_queue.core_dec_done,
+ 				 msg_queue->lat_ctx.ready_num == NUM_BUFFER_COUNT,
+ 				 timeout_jiff);
+ 	if (ret) {
+@@ -168,8 +211,20 @@ bool vdec_msg_queue_wait_lat_buf_full(struct vdec_msg_queue *msg_queue)
+ 			       msg_queue->lat_ctx.ready_num);
+ 		return true;
+ 	}
+-	mtk_v4l2_err("failed with lat buf isn't full: %d",
+-		     msg_queue->lat_ctx.ready_num);
++
++	spin_lock(&core_ctx->ready_lock);
++	list_for_each_entry_safe(buf, tmp, &core_ctx->ready_queue, core_list) {
++		if (buf && buf->ctx == msg_queue->ctx) {
++			count++;
++			list_del(&buf->core_list);
++		}
++	}
++	spin_unlock(&core_ctx->ready_lock);
++
++	mtk_v4l2_err("failed with lat buf isn't full: list(%d %d) count:%d",
++		     atomic_read(&msg_queue->lat_list_cnt),
++		     atomic_read(&msg_queue->core_list_cnt), count);
++
+ 	return false;
+ }
+ 
+@@ -206,6 +261,7 @@ static void vdec_msg_queue_core_work(struct work_struct *work)
+ 		container_of(msg_queue, struct mtk_vcodec_ctx, msg_queue);
+ 	struct mtk_vcodec_dev *dev = ctx->dev;
+ 	struct vdec_lat_buf *lat_buf;
++	int status;
+ 
+ 	lat_buf = vdec_msg_queue_dqbuf(&dev->msg_queue_core_ctx);
+ 	if (!lat_buf)
+@@ -221,11 +277,18 @@ static void vdec_msg_queue_core_work(struct work_struct *work)
+ 	mtk_vcodec_dec_disable_hardware(ctx, MTK_VDEC_CORE);
+ 	vdec_msg_queue_qbuf(&ctx->msg_queue.lat_ctx, lat_buf);
+ 
+-	if (!list_empty(&dev->msg_queue_core_ctx.ready_queue)) {
+-		mtk_v4l2_debug(3, "re-schedule to decode for core: %d",
+-			       dev->msg_queue_core_ctx.ready_num);
+-		queue_work(dev->core_workqueue, &msg_queue->core_work);
++	wake_up_all(&ctx->msg_queue.core_dec_done);
++	spin_lock(&dev->msg_queue_core_ctx.ready_lock);
++	lat_buf->ctx->msg_queue.core_work_cnt--;
++
++	if (lat_buf->ctx->msg_queue.core_work_cnt <
++		atomic_read(&lat_buf->ctx->msg_queue.core_list_cnt)) {
++		status = queue_work(lat_buf->ctx->dev->core_workqueue,
++				    &lat_buf->ctx->msg_queue.core_work);
++		if (status)
++			lat_buf->ctx->msg_queue.core_work_cnt++;
+ 	}
++	spin_unlock(&dev->msg_queue_core_ctx.ready_lock);
+ }
+ 
+ int vdec_msg_queue_init(struct vdec_msg_queue *msg_queue,
+@@ -239,12 +302,18 @@ int vdec_msg_queue_init(struct vdec_msg_queue *msg_queue,
+ 	if (msg_queue->wdma_addr.size)
+ 		return 0;
+ 
++	msg_queue->ctx = ctx;
++	msg_queue->core_work_cnt = 0;
+ 	vdec_msg_queue_init_ctx(&msg_queue->lat_ctx, MTK_VDEC_LAT0);
+ 	INIT_WORK(&msg_queue->core_work, vdec_msg_queue_core_work);
++
++	atomic_set(&msg_queue->lat_list_cnt, 0);
++	atomic_set(&msg_queue->core_list_cnt, 0);
++	init_waitqueue_head(&msg_queue->core_dec_done);
++
+ 	msg_queue->wdma_addr.size =
+ 		vde_msg_queue_get_trans_size(ctx->picinfo.buf_w,
+ 					     ctx->picinfo.buf_h);
+-
+ 	err = mtk_vcodec_mem_alloc(ctx, &msg_queue->wdma_addr);
+ 	if (err) {
+ 		mtk_v4l2_err("failed to allocate wdma_addr buf");
+diff --git a/drivers/media/platform/mediatek/vcodec/vdec_msg_queue.h b/drivers/media/platform/mediatek/vcodec/vdec_msg_queue.h
+index c43d427f5f544..a5d44bc97c16b 100644
+--- a/drivers/media/platform/mediatek/vcodec/vdec_msg_queue.h
++++ b/drivers/media/platform/mediatek/vcodec/vdec_msg_queue.h
+@@ -72,6 +72,12 @@ struct vdec_lat_buf {
+  * @wdma_wptr_addr: ube write point
+  * @core_work: core hardware work
+  * @lat_ctx: used to store lat buffer list
++ * @ctx: point to mtk_vcodec_ctx
++ *
++ * @lat_list_cnt: used to record each instance lat list count
++ * @core_list_cnt: used to record each instance core list count
++ * @core_dec_done: core work queue decode done event
++ * @core_work_cnt: the number of core work in work queue
+  */
+ struct vdec_msg_queue {
+ 	struct vdec_lat_buf lat_buf[NUM_BUFFER_COUNT];
+@@ -82,6 +88,12 @@ struct vdec_msg_queue {
+ 
+ 	struct work_struct core_work;
+ 	struct vdec_msg_queue_ctx lat_ctx;
++	struct mtk_vcodec_ctx *ctx;
++
++	atomic_t lat_list_cnt;
++	atomic_t core_list_cnt;
++	wait_queue_head_t core_dec_done;
++	int core_work_cnt;
+ };
+ 
+ /**
+diff --git a/drivers/media/platform/qcom/venus/vdec.c b/drivers/media/platform/qcom/venus/vdec.c
+index 4ceaba37e2e57..1a52c2ea2da5b 100644
+--- a/drivers/media/platform/qcom/venus/vdec.c
++++ b/drivers/media/platform/qcom/venus/vdec.c
+@@ -31,15 +31,15 @@
+  */
+ static const struct venus_format vdec_formats[] = {
+ 	{
+-		.pixfmt = V4L2_PIX_FMT_QC08C,
++		.pixfmt = V4L2_PIX_FMT_NV12,
+ 		.num_planes = 1,
+ 		.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE,
+ 	}, {
+-		.pixfmt = V4L2_PIX_FMT_QC10C,
++		.pixfmt = V4L2_PIX_FMT_QC08C,
+ 		.num_planes = 1,
+ 		.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE,
+-	},{
+-		.pixfmt = V4L2_PIX_FMT_NV12,
++	}, {
++		.pixfmt = V4L2_PIX_FMT_QC10C,
+ 		.num_planes = 1,
+ 		.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE,
+ 	}, {
+@@ -526,6 +526,7 @@ static int
+ vdec_decoder_cmd(struct file *file, void *fh, struct v4l2_decoder_cmd *cmd)
+ {
+ 	struct venus_inst *inst = to_inst(file);
++	struct vb2_queue *dst_vq;
+ 	struct hfi_frame_data fdata = {0};
+ 	int ret;
+ 
+@@ -556,6 +557,13 @@ vdec_decoder_cmd(struct file *file, void *fh, struct v4l2_decoder_cmd *cmd)
+ 			inst->codec_state = VENUS_DEC_STATE_DRAIN;
+ 			inst->drain_active = true;
+ 		}
++	} else if (cmd->cmd == V4L2_DEC_CMD_START &&
++		   inst->codec_state == VENUS_DEC_STATE_STOPPED) {
++		dst_vq = v4l2_m2m_get_vq(inst->fh.m2m_ctx,
++					 V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE);
++		vb2_clear_last_buffer_dequeued(dst_vq);
++
++		inst->codec_state = VENUS_DEC_STATE_DECODING;
+ 	}
+ 
+ unlock:
+diff --git a/drivers/media/platform/renesas/rcar_fdp1.c b/drivers/media/platform/renesas/rcar_fdp1.c
+index 37ecf489d112e..c548cb01957b0 100644
+--- a/drivers/media/platform/renesas/rcar_fdp1.c
++++ b/drivers/media/platform/renesas/rcar_fdp1.c
+@@ -2313,8 +2313,10 @@ static int fdp1_probe(struct platform_device *pdev)
+ 
+ 	/* Determine our clock rate */
+ 	clk = clk_get(&pdev->dev, NULL);
+-	if (IS_ERR(clk))
+-		return PTR_ERR(clk);
++	if (IS_ERR(clk)) {
++		ret = PTR_ERR(clk);
++		goto put_dev;
++	}
+ 
+ 	fdp1->clk_rate = clk_get_rate(clk);
+ 	clk_put(clk);
+@@ -2323,7 +2325,7 @@ static int fdp1_probe(struct platform_device *pdev)
+ 	ret = v4l2_device_register(&pdev->dev, &fdp1->v4l2_dev);
+ 	if (ret) {
+ 		v4l2_err(&fdp1->v4l2_dev, "Failed to register video device\n");
+-		return ret;
++		goto put_dev;
+ 	}
+ 
+ 	/* M2M registration */
+@@ -2393,10 +2395,12 @@ release_m2m:
+ unreg_dev:
+ 	v4l2_device_unregister(&fdp1->v4l2_dev);
+ 
++put_dev:
++	rcar_fcp_put(fdp1->fcp);
+ 	return ret;
+ }
+ 
+-static int fdp1_remove(struct platform_device *pdev)
++static void fdp1_remove(struct platform_device *pdev)
+ {
+ 	struct fdp1_dev *fdp1 = platform_get_drvdata(pdev);
+ 
+@@ -2404,8 +2408,7 @@ static int fdp1_remove(struct platform_device *pdev)
+ 	video_unregister_device(&fdp1->vfd);
+ 	v4l2_device_unregister(&fdp1->v4l2_dev);
+ 	pm_runtime_disable(&pdev->dev);
+-
+-	return 0;
++	rcar_fcp_put(fdp1->fcp);
+ }
+ 
+ static int __maybe_unused fdp1_pm_runtime_suspend(struct device *dev)
+@@ -2441,7 +2444,7 @@ MODULE_DEVICE_TABLE(of, fdp1_dt_ids);
+ 
+ static struct platform_driver fdp1_pdrv = {
+ 	.probe		= fdp1_probe,
+-	.remove		= fdp1_remove,
++	.remove_new	= fdp1_remove,
+ 	.driver		= {
+ 		.name	= DRIVER_NAME,
+ 		.of_match_table = fdp1_dt_ids,
+diff --git a/drivers/media/platform/renesas/vsp1/vsp1_video.c b/drivers/media/platform/renesas/vsp1/vsp1_video.c
+index 544012fd1fe93..3664c87e4afb5 100644
+--- a/drivers/media/platform/renesas/vsp1/vsp1_video.c
++++ b/drivers/media/platform/renesas/vsp1/vsp1_video.c
+@@ -776,7 +776,7 @@ static void vsp1_video_buffer_queue(struct vb2_buffer *vb)
+ 	video->rwpf->mem = buf->mem;
+ 	pipe->buffers_ready |= 1 << video->pipe_index;
+ 
+-	if (vb2_is_streaming(&video->queue) &&
++	if (vb2_start_streaming_called(&video->queue) &&
+ 	    vsp1_pipeline_ready(pipe))
+ 		vsp1_video_pipeline_run(pipe);
+ 
+diff --git a/drivers/media/platform/st/sti/bdisp/bdisp-v4l2.c b/drivers/media/platform/st/sti/bdisp/bdisp-v4l2.c
+index dd74cc43920d3..080da254b9109 100644
+--- a/drivers/media/platform/st/sti/bdisp/bdisp-v4l2.c
++++ b/drivers/media/platform/st/sti/bdisp/bdisp-v4l2.c
+@@ -1309,6 +1309,8 @@ static int bdisp_probe(struct platform_device *pdev)
+ 	init_waitqueue_head(&bdisp->irq_queue);
+ 	INIT_DELAYED_WORK(&bdisp->timeout_work, bdisp_irq_timeout);
+ 	bdisp->work_queue = create_workqueue(BDISP_NAME);
++	if (!bdisp->work_queue)
++		return -ENOMEM;
+ 
+ 	spin_lock_init(&bdisp->slock);
+ 	mutex_init(&bdisp->lock);
+diff --git a/drivers/media/rc/gpio-ir-recv.c b/drivers/media/rc/gpio-ir-recv.c
+index 8dbe780dae4e7..41ef8cdba28c4 100644
+--- a/drivers/media/rc/gpio-ir-recv.c
++++ b/drivers/media/rc/gpio-ir-recv.c
+@@ -103,6 +103,8 @@ static int gpio_ir_recv_probe(struct platform_device *pdev)
+ 		rcdev->map_name = RC_MAP_EMPTY;
+ 
+ 	gpio_dev->rcdev = rcdev;
++	if (of_property_read_bool(np, "wakeup-source"))
++		device_init_wakeup(dev, true);
+ 
+ 	rc = devm_rc_register_device(dev, rcdev);
+ 	if (rc < 0) {
+diff --git a/drivers/media/v4l2-core/v4l2-async.c b/drivers/media/v4l2-core/v4l2-async.c
+index d7e9ffc7aa237..b16b5f4cb91e2 100644
+--- a/drivers/media/v4l2-core/v4l2-async.c
++++ b/drivers/media/v4l2-core/v4l2-async.c
+@@ -416,7 +416,8 @@ static void v4l2_async_cleanup(struct v4l2_subdev *sd)
+ 
+ /* Unbind all sub-devices in the notifier tree. */
+ static void
+-v4l2_async_nf_unbind_all_subdevs(struct v4l2_async_notifier *notifier)
++v4l2_async_nf_unbind_all_subdevs(struct v4l2_async_notifier *notifier,
++				 bool readd)
+ {
+ 	struct v4l2_subdev *sd, *tmp;
+ 
+@@ -425,9 +426,11 @@ v4l2_async_nf_unbind_all_subdevs(struct v4l2_async_notifier *notifier)
+ 			v4l2_async_find_subdev_notifier(sd);
+ 
+ 		if (subdev_notifier)
+-			v4l2_async_nf_unbind_all_subdevs(subdev_notifier);
++			v4l2_async_nf_unbind_all_subdevs(subdev_notifier, true);
+ 
+ 		v4l2_async_nf_call_unbind(notifier, sd, sd->asd);
++		if (readd)
++			list_add_tail(&sd->asd->list, &notifier->waiting);
+ 		v4l2_async_cleanup(sd);
+ 
+ 		list_move(&sd->async_list, &subdev_list);
+@@ -559,7 +562,7 @@ err_unbind:
+ 	/*
+ 	 * On failure, unbind all sub-devices registered through this notifier.
+ 	 */
+-	v4l2_async_nf_unbind_all_subdevs(notifier);
++	v4l2_async_nf_unbind_all_subdevs(notifier, false);
+ 
+ err_unlock:
+ 	mutex_unlock(&list_lock);
+@@ -609,7 +612,7 @@ __v4l2_async_nf_unregister(struct v4l2_async_notifier *notifier)
+ 	if (!notifier || (!notifier->v4l2_dev && !notifier->sd))
+ 		return;
+ 
+-	v4l2_async_nf_unbind_all_subdevs(notifier);
++	v4l2_async_nf_unbind_all_subdevs(notifier, false);
+ 
+ 	notifier->sd = NULL;
+ 	notifier->v4l2_dev = NULL;
+@@ -807,7 +810,7 @@ err_unbind:
+ 	 */
+ 	subdev_notifier = v4l2_async_find_subdev_notifier(sd);
+ 	if (subdev_notifier)
+-		v4l2_async_nf_unbind_all_subdevs(subdev_notifier);
++		v4l2_async_nf_unbind_all_subdevs(subdev_notifier, false);
+ 
+ 	if (sd->asd)
+ 		v4l2_async_nf_call_unbind(notifier, sd, sd->asd);
+diff --git a/drivers/media/v4l2-core/v4l2-subdev.c b/drivers/media/v4l2-core/v4l2-subdev.c
+index b10045c02f434..9a0c23267626d 100644
+--- a/drivers/media/v4l2-core/v4l2-subdev.c
++++ b/drivers/media/v4l2-core/v4l2-subdev.c
+@@ -1236,6 +1236,16 @@ int v4l2_subdev_link_validate(struct media_link *link)
+ 	struct v4l2_subdev_state *source_state, *sink_state;
+ 	int ret;
+ 
++	if (!is_media_entity_v4l2_subdev(link->sink->entity) ||
++	    !is_media_entity_v4l2_subdev(link->source->entity)) {
++		pr_warn_once("%s of link '%s':%u->'%s':%u is not a V4L2 sub-device, driver bug!\n",
++			     !is_media_entity_v4l2_subdev(link->sink->entity) ?
++			     "sink" : "source",
++			     link->source->entity->name, link->source->index,
++			     link->sink->entity->name, link->sink->index);
++		return 0;
++	}
++
+ 	sink_sd = media_entity_to_v4l2_subdev(link->sink->entity);
+ 	source_sd = media_entity_to_v4l2_subdev(link->source->entity);
+ 
+diff --git a/drivers/mfd/arizona-spi.c b/drivers/mfd/arizona-spi.c
+index da05b966d48c6..02cf4f3e91d76 100644
+--- a/drivers/mfd/arizona-spi.c
++++ b/drivers/mfd/arizona-spi.c
+@@ -277,6 +277,7 @@ static const struct of_device_id arizona_spi_of_match[] = {
+ 	{ .compatible = "cirrus,cs47l24", .data = (void *)CS47L24 },
+ 	{},
+ };
++MODULE_DEVICE_TABLE(of, arizona_spi_of_match);
+ #endif
+ 
+ static struct spi_driver arizona_spi_driver = {
+diff --git a/drivers/mfd/ocelot-spi.c b/drivers/mfd/ocelot-spi.c
+index 2ecd271de2fb9..85021f94e5874 100644
+--- a/drivers/mfd/ocelot-spi.c
++++ b/drivers/mfd/ocelot-spi.c
+@@ -130,6 +130,7 @@ static const struct regmap_config ocelot_spi_regmap_config = {
+ 
+ 	.write_flag_mask = 0x80,
+ 
++	.use_single_read = true,
+ 	.use_single_write = true,
+ 	.can_multi_write = false,
+ 
+diff --git a/drivers/mfd/tqmx86.c b/drivers/mfd/tqmx86.c
+index 7ae906ff8e353..fac02875fe7d9 100644
+--- a/drivers/mfd/tqmx86.c
++++ b/drivers/mfd/tqmx86.c
+@@ -16,8 +16,8 @@
+ #include <linux/platform_data/i2c-ocores.h>
+ #include <linux/platform_device.h>
+ 
+-#define TQMX86_IOBASE	0x160
+-#define TQMX86_IOSIZE	0x3f
++#define TQMX86_IOBASE	0x180
++#define TQMX86_IOSIZE	0x20
+ #define TQMX86_IOBASE_I2C	0x1a0
+ #define TQMX86_IOSIZE_I2C	0xa
+ #define TQMX86_IOBASE_WATCHDOG	0x18b
+@@ -25,14 +25,14 @@
+ #define TQMX86_IOBASE_GPIO	0x18d
+ #define TQMX86_IOSIZE_GPIO	0x4
+ 
+-#define TQMX86_REG_BOARD_ID	0x20
++#define TQMX86_REG_BOARD_ID	0x00
+ #define TQMX86_REG_BOARD_ID_E38M	1
+ #define TQMX86_REG_BOARD_ID_50UC	2
+ #define TQMX86_REG_BOARD_ID_E38C	3
+ #define TQMX86_REG_BOARD_ID_60EB	4
+-#define TQMX86_REG_BOARD_ID_E39M	5
+-#define TQMX86_REG_BOARD_ID_E39C	6
+-#define TQMX86_REG_BOARD_ID_E39x	7
++#define TQMX86_REG_BOARD_ID_E39MS	5
++#define TQMX86_REG_BOARD_ID_E39C1	6
++#define TQMX86_REG_BOARD_ID_E39C2	7
+ #define TQMX86_REG_BOARD_ID_70EB	8
+ #define TQMX86_REG_BOARD_ID_80UC	9
+ #define TQMX86_REG_BOARD_ID_110EB	11
+@@ -40,18 +40,18 @@
+ #define TQMX86_REG_BOARD_ID_E40S	13
+ #define TQMX86_REG_BOARD_ID_E40C1	14
+ #define TQMX86_REG_BOARD_ID_E40C2	15
+-#define TQMX86_REG_BOARD_REV	0x21
+-#define TQMX86_REG_IO_EXT_INT	0x26
++#define TQMX86_REG_BOARD_REV	0x01
++#define TQMX86_REG_IO_EXT_INT	0x06
+ #define TQMX86_REG_IO_EXT_INT_NONE		0
+ #define TQMX86_REG_IO_EXT_INT_7			1
+ #define TQMX86_REG_IO_EXT_INT_9			2
+ #define TQMX86_REG_IO_EXT_INT_12		3
+ #define TQMX86_REG_IO_EXT_INT_MASK		0x3
+ #define TQMX86_REG_IO_EXT_INT_GPIO_SHIFT	4
++#define TQMX86_REG_SAUC		0x17
+ 
+-#define TQMX86_REG_I2C_DETECT	0x47
++#define TQMX86_REG_I2C_DETECT	0x1a7
+ #define TQMX86_REG_I2C_DETECT_SOFT		0xa5
+-#define TQMX86_REG_I2C_INT_EN	0x49
+ 
+ static uint gpio_irq;
+ module_param(gpio_irq, uint, 0);
+@@ -111,7 +111,7 @@ static const struct mfd_cell tqmx86_devs[] = {
+ 	},
+ };
+ 
+-static const char *tqmx86_board_id_to_name(u8 board_id)
++static const char *tqmx86_board_id_to_name(u8 board_id, u8 sauc)
+ {
+ 	switch (board_id) {
+ 	case TQMX86_REG_BOARD_ID_E38M:
+@@ -122,12 +122,12 @@ static const char *tqmx86_board_id_to_name(u8 board_id)
+ 		return "TQMxE38C";
+ 	case TQMX86_REG_BOARD_ID_60EB:
+ 		return "TQMx60EB";
+-	case TQMX86_REG_BOARD_ID_E39M:
+-		return "TQMxE39M";
+-	case TQMX86_REG_BOARD_ID_E39C:
+-		return "TQMxE39C";
+-	case TQMX86_REG_BOARD_ID_E39x:
+-		return "TQMxE39x";
++	case TQMX86_REG_BOARD_ID_E39MS:
++		return (sauc == 0xff) ? "TQMxE39M" : "TQMxE39S";
++	case TQMX86_REG_BOARD_ID_E39C1:
++		return "TQMxE39C1";
++	case TQMX86_REG_BOARD_ID_E39C2:
++		return "TQMxE39C2";
+ 	case TQMX86_REG_BOARD_ID_70EB:
+ 		return "TQMx70EB";
+ 	case TQMX86_REG_BOARD_ID_80UC:
+@@ -160,9 +160,9 @@ static int tqmx86_board_id_to_clk_rate(struct device *dev, u8 board_id)
+ 	case TQMX86_REG_BOARD_ID_E40C1:
+ 	case TQMX86_REG_BOARD_ID_E40C2:
+ 		return 24000;
+-	case TQMX86_REG_BOARD_ID_E39M:
+-	case TQMX86_REG_BOARD_ID_E39C:
+-	case TQMX86_REG_BOARD_ID_E39x:
++	case TQMX86_REG_BOARD_ID_E39MS:
++	case TQMX86_REG_BOARD_ID_E39C1:
++	case TQMX86_REG_BOARD_ID_E39C2:
+ 		return 25000;
+ 	case TQMX86_REG_BOARD_ID_E38M:
+ 	case TQMX86_REG_BOARD_ID_E38C:
+@@ -176,7 +176,7 @@ static int tqmx86_board_id_to_clk_rate(struct device *dev, u8 board_id)
+ 
+ static int tqmx86_probe(struct platform_device *pdev)
+ {
+-	u8 board_id, rev, i2c_det, io_ext_int_val;
++	u8 board_id, sauc, rev, i2c_det, io_ext_int_val;
+ 	struct device *dev = &pdev->dev;
+ 	u8 gpio_irq_cfg, readback;
+ 	const char *board_name;
+@@ -206,14 +206,20 @@ static int tqmx86_probe(struct platform_device *pdev)
+ 		return -ENOMEM;
+ 
+ 	board_id = ioread8(io_base + TQMX86_REG_BOARD_ID);
+-	board_name = tqmx86_board_id_to_name(board_id);
++	sauc = ioread8(io_base + TQMX86_REG_SAUC);
++	board_name = tqmx86_board_id_to_name(board_id, sauc);
+ 	rev = ioread8(io_base + TQMX86_REG_BOARD_REV);
+ 
+ 	dev_info(dev,
+ 		 "Found %s - Board ID %d, PCB Revision %d, PLD Revision %d\n",
+ 		 board_name, board_id, rev >> 4, rev & 0xf);
+ 
+-	i2c_det = ioread8(io_base + TQMX86_REG_I2C_DETECT);
++	/*
++	 * The I2C_DETECT register is in the range assigned to the I2C driver
++	 * later, so we don't extend TQMX86_IOSIZE. Use inb() for this one-off
++	 * access instead of ioport_map + unmap.
++	 */
++	i2c_det = inb(TQMX86_REG_I2C_DETECT);
+ 
+ 	if (gpio_irq_cfg) {
+ 		io_ext_int_val =
+diff --git a/drivers/misc/vmw_vmci/vmci_host.c b/drivers/misc/vmw_vmci/vmci_host.c
+index 857b9851402a6..abe79f6fd2a79 100644
+--- a/drivers/misc/vmw_vmci/vmci_host.c
++++ b/drivers/misc/vmw_vmci/vmci_host.c
+@@ -165,10 +165,16 @@ static int vmci_host_close(struct inode *inode, struct file *filp)
+ static __poll_t vmci_host_poll(struct file *filp, poll_table *wait)
+ {
+ 	struct vmci_host_dev *vmci_host_dev = filp->private_data;
+-	struct vmci_ctx *context = vmci_host_dev->context;
++	struct vmci_ctx *context;
+ 	__poll_t mask = 0;
+ 
+ 	if (vmci_host_dev->ct_type == VMCIOBJ_CONTEXT) {
++		/*
++		 * Read context only if ct_type == VMCIOBJ_CONTEXT to make
++		 * sure that context is initialized
++		 */
++		context = vmci_host_dev->context;
++
+ 		/* Check for VMCI calls to this VM context. */
+ 		if (wait)
+ 			poll_wait(filp, &context->host_context.wait_queue,
+diff --git a/drivers/mmc/host/sdhci-of-esdhc.c b/drivers/mmc/host/sdhci-of-esdhc.c
+index 4712adac7f7c0..48ca1cf15b199 100644
+--- a/drivers/mmc/host/sdhci-of-esdhc.c
++++ b/drivers/mmc/host/sdhci-of-esdhc.c
+@@ -133,6 +133,7 @@ static u32 esdhc_readl_fixup(struct sdhci_host *host,
+ 			return ret;
+ 		}
+ 	}
++
+ 	/*
+ 	 * The DAT[3:0] line signal levels and the CMD line signal level are
+ 	 * not compatible with standard SDHC register. The line signal levels
+@@ -144,6 +145,16 @@ static u32 esdhc_readl_fixup(struct sdhci_host *host,
+ 		ret = value & 0x000fffff;
+ 		ret |= (value >> 4) & SDHCI_DATA_LVL_MASK;
+ 		ret |= (value << 1) & SDHCI_CMD_LVL;
++
++		/*
++		 * Some controllers have unreliable Data Line Active
++		 * bit for commands with busy signal. This affects
++		 * Command Inhibit (data) bit. Just ignore it since
++		 * MMC core driver has already polled card status
++		 * with CMD13 after any command with busy siganl.
++		 */
++		if (esdhc->quirk_ignore_data_inhibit)
++			ret &= ~SDHCI_DATA_INHIBIT;
+ 		return ret;
+ 	}
+ 
+@@ -158,19 +169,6 @@ static u32 esdhc_readl_fixup(struct sdhci_host *host,
+ 		return ret;
+ 	}
+ 
+-	/*
+-	 * Some controllers have unreliable Data Line Active
+-	 * bit for commands with busy signal. This affects
+-	 * Command Inhibit (data) bit. Just ignore it since
+-	 * MMC core driver has already polled card status
+-	 * with CMD13 after any command with busy siganl.
+-	 */
+-	if ((spec_reg == SDHCI_PRESENT_STATE) &&
+-	(esdhc->quirk_ignore_data_inhibit == true)) {
+-		ret = value & ~SDHCI_DATA_INHIBIT;
+-		return ret;
+-	}
+-
+ 	ret = value;
+ 	return ret;
+ }
+diff --git a/drivers/mtd/mtdcore.c b/drivers/mtd/mtdcore.c
+index 0feacb9fbdac5..0bc9676fe0299 100644
+--- a/drivers/mtd/mtdcore.c
++++ b/drivers/mtd/mtdcore.c
+@@ -888,8 +888,8 @@ static struct nvmem_device *mtd_otp_nvmem_register(struct mtd_info *mtd,
+ 
+ 	/* OTP nvmem will be registered on the physical device */
+ 	config.dev = mtd->dev.parent;
+-	config.name = kasprintf(GFP_KERNEL, "%s-%s", dev_name(&mtd->dev), compatible);
+-	config.id = NVMEM_DEVID_NONE;
++	config.name = compatible;
++	config.id = NVMEM_DEVID_AUTO;
+ 	config.owner = THIS_MODULE;
+ 	config.type = NVMEM_TYPE_OTP;
+ 	config.root_only = true;
+@@ -905,7 +905,6 @@ static struct nvmem_device *mtd_otp_nvmem_register(struct mtd_info *mtd,
+ 		nvmem = NULL;
+ 
+ 	of_node_put(np);
+-	kfree(config.name);
+ 
+ 	return nvmem;
+ }
+@@ -940,6 +939,7 @@ static int mtd_nvmem_fact_otp_reg_read(void *priv, unsigned int offset,
+ 
+ static int mtd_otp_nvmem_add(struct mtd_info *mtd)
+ {
++	struct device *dev = mtd->dev.parent;
+ 	struct nvmem_device *nvmem;
+ 	ssize_t size;
+ 	int err;
+@@ -953,7 +953,7 @@ static int mtd_otp_nvmem_add(struct mtd_info *mtd)
+ 			nvmem = mtd_otp_nvmem_register(mtd, "user-otp", size,
+ 						       mtd_nvmem_user_otp_reg_read);
+ 			if (IS_ERR(nvmem)) {
+-				dev_err(&mtd->dev, "Failed to register OTP NVMEM device\n");
++				dev_err(dev, "Failed to register OTP NVMEM device\n");
+ 				return PTR_ERR(nvmem);
+ 			}
+ 			mtd->otp_user_nvmem = nvmem;
+@@ -971,7 +971,7 @@ static int mtd_otp_nvmem_add(struct mtd_info *mtd)
+ 			nvmem = mtd_otp_nvmem_register(mtd, "factory-otp", size,
+ 						       mtd_nvmem_fact_otp_reg_read);
+ 			if (IS_ERR(nvmem)) {
+-				dev_err(&mtd->dev, "Failed to register OTP NVMEM device\n");
++				dev_err(dev, "Failed to register OTP NVMEM device\n");
+ 				err = PTR_ERR(nvmem);
+ 				goto err;
+ 			}
+@@ -1023,10 +1023,14 @@ int mtd_device_parse_register(struct mtd_info *mtd, const char * const *types,
+ 
+ 	mtd_set_dev_defaults(mtd);
+ 
++	ret = mtd_otp_nvmem_add(mtd);
++	if (ret)
++		goto out;
++
+ 	if (IS_ENABLED(CONFIG_MTD_PARTITIONED_MASTER)) {
+ 		ret = add_mtd_device(mtd);
+ 		if (ret)
+-			return ret;
++			goto out;
+ 	}
+ 
+ 	/* Prefer parsed partitions over driver-provided fallback */
+@@ -1061,9 +1065,12 @@ int mtd_device_parse_register(struct mtd_info *mtd, const char * const *types,
+ 		register_reboot_notifier(&mtd->reboot_notifier);
+ 	}
+ 
+-	ret = mtd_otp_nvmem_add(mtd);
+-
+ out:
++	if (ret) {
++		nvmem_unregister(mtd->otp_user_nvmem);
++		nvmem_unregister(mtd->otp_factory_nvmem);
++	}
++
+ 	if (ret && device_is_registered(&mtd->dev))
+ 		del_mtd_device(mtd);
+ 
+diff --git a/drivers/mtd/spi-nor/core.c b/drivers/mtd/spi-nor/core.c
+index 522d375aeccff..71ea5b2e10140 100644
+--- a/drivers/mtd/spi-nor/core.c
++++ b/drivers/mtd/spi-nor/core.c
+@@ -2732,6 +2732,7 @@ static int spi_nor_quad_enable(struct spi_nor *nor)
+ 
+ static int spi_nor_init(struct spi_nor *nor)
+ {
++	struct spi_nor_flash_parameter *params = nor->params;
+ 	int err;
+ 
+ 	err = spi_nor_octal_dtr_enable(nor, true);
+@@ -2773,9 +2774,10 @@ static int spi_nor_init(struct spi_nor *nor)
+ 		 */
+ 		WARN_ONCE(nor->flags & SNOR_F_BROKEN_RESET,
+ 			  "enabling reset hack; may not recover from unexpected reboots\n");
+-		err = nor->params->set_4byte_addr_mode(nor, true);
++		err = params->set_4byte_addr_mode(nor, true);
+ 		if (err && err != -ENOTSUPP)
+ 			return err;
++		params->addr_mode_nbytes = 4;
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/mtd/ubi/eba.c b/drivers/mtd/ubi/eba.c
+index 403b79d6efd5a..655ff41863e2b 100644
+--- a/drivers/mtd/ubi/eba.c
++++ b/drivers/mtd/ubi/eba.c
+@@ -946,7 +946,7 @@ static int try_write_vid_and_data(struct ubi_volume *vol, int lnum,
+ 				  int offset, int len)
+ {
+ 	struct ubi_device *ubi = vol->ubi;
+-	int pnum, opnum, err, vol_id = vol->vol_id;
++	int pnum, opnum, err, err2, vol_id = vol->vol_id;
+ 
+ 	pnum = ubi_wl_get_peb(ubi);
+ 	if (pnum < 0) {
+@@ -981,10 +981,19 @@ static int try_write_vid_and_data(struct ubi_volume *vol, int lnum,
+ out_put:
+ 	up_read(&ubi->fm_eba_sem);
+ 
+-	if (err && pnum >= 0)
+-		err = ubi_wl_put_peb(ubi, vol_id, lnum, pnum, 1);
+-	else if (!err && opnum >= 0)
+-		err = ubi_wl_put_peb(ubi, vol_id, lnum, opnum, 0);
++	if (err && pnum >= 0) {
++		err2 = ubi_wl_put_peb(ubi, vol_id, lnum, pnum, 1);
++		if (err2) {
++			ubi_warn(ubi, "failed to return physical eraseblock %d, error %d",
++				 pnum, err2);
++		}
++	} else if (!err && opnum >= 0) {
++		err2 = ubi_wl_put_peb(ubi, vol_id, lnum, opnum, 0);
++		if (err2) {
++			ubi_warn(ubi, "failed to return physical eraseblock %d, error %d",
++				 opnum, err2);
++		}
++	}
+ 
+ 	return err;
+ }
+diff --git a/drivers/net/dsa/qca/qca8k-8xxx.c b/drivers/net/dsa/qca/qca8k-8xxx.c
+index 55df4479ea30b..62810903f1b31 100644
+--- a/drivers/net/dsa/qca/qca8k-8xxx.c
++++ b/drivers/net/dsa/qca/qca8k-8xxx.c
+@@ -1483,7 +1483,6 @@ static void qca8k_pcs_get_state(struct phylink_pcs *pcs,
+ 
+ 	state->link = !!(reg & QCA8K_PORT_STATUS_LINK_UP);
+ 	state->an_complete = state->link;
+-	state->an_enabled = !!(reg & QCA8K_PORT_STATUS_LINK_AUTO);
+ 	state->duplex = (reg & QCA8K_PORT_STATUS_DUPLEX) ? DUPLEX_FULL :
+ 							   DUPLEX_HALF;
+ 
+diff --git a/drivers/net/ethernet/amd/nmclan_cs.c b/drivers/net/ethernet/amd/nmclan_cs.c
+index 823a329a921f4..0dd391c84c138 100644
+--- a/drivers/net/ethernet/amd/nmclan_cs.c
++++ b/drivers/net/ethernet/amd/nmclan_cs.c
+@@ -651,7 +651,7 @@ static int nmclan_config(struct pcmcia_device *link)
+     } else {
+       pr_notice("mace id not found: %x %x should be 0x40 0x?9\n",
+ 		sig[0], sig[1]);
+-      return -ENODEV;
++      goto failed;
+     }
+   }
+ 
+diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
+index 9318a2554056d..f961966171210 100644
+--- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
++++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
+@@ -299,7 +299,8 @@ static int dpaa_stop(struct net_device *net_dev)
+ {
+ 	struct mac_device *mac_dev;
+ 	struct dpaa_priv *priv;
+-	int i, err, error;
++	int i, error;
++	int err = 0;
+ 
+ 	priv = netdev_priv(net_dev);
+ 	mac_dev = priv->mac_dev;
+diff --git a/drivers/net/ethernet/intel/igc/igc_base.h b/drivers/net/ethernet/intel/igc/igc_base.h
+index 7a992befca249..9f3827eda157c 100644
+--- a/drivers/net/ethernet/intel/igc/igc_base.h
++++ b/drivers/net/ethernet/intel/igc/igc_base.h
+@@ -87,8 +87,13 @@ union igc_adv_rx_desc {
+ #define IGC_RXDCTL_SWFLUSH		0x04000000 /* Receive Software Flush */
+ 
+ /* SRRCTL bit definitions */
+-#define IGC_SRRCTL_BSIZEPKT_SHIFT		10 /* Shift _right_ */
+-#define IGC_SRRCTL_BSIZEHDRSIZE_SHIFT		2  /* Shift _left_ */
+-#define IGC_SRRCTL_DESCTYPE_ADV_ONEBUF	0x02000000
++#define IGC_SRRCTL_BSIZEPKT_MASK	GENMASK(6, 0)
++#define IGC_SRRCTL_BSIZEPKT(x)		FIELD_PREP(IGC_SRRCTL_BSIZEPKT_MASK, \
++					(x) / 1024) /* in 1 KB resolution */
++#define IGC_SRRCTL_BSIZEHDR_MASK	GENMASK(13, 8)
++#define IGC_SRRCTL_BSIZEHDR(x)		FIELD_PREP(IGC_SRRCTL_BSIZEHDR_MASK, \
++					(x) / 64) /* in 64 bytes resolution */
++#define IGC_SRRCTL_DESCTYPE_MASK	GENMASK(27, 25)
++#define IGC_SRRCTL_DESCTYPE_ADV_ONEBUF	FIELD_PREP(IGC_SRRCTL_DESCTYPE_MASK, 1)
+ 
+ #endif /* _IGC_BASE_H */
+diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
+index 25fc6c65209bf..a2d823e646095 100644
+--- a/drivers/net/ethernet/intel/igc/igc_main.c
++++ b/drivers/net/ethernet/intel/igc/igc_main.c
+@@ -641,8 +641,11 @@ static void igc_configure_rx_ring(struct igc_adapter *adapter,
+ 	else
+ 		buf_size = IGC_RXBUFFER_2048;
+ 
+-	srrctl = IGC_RX_HDR_LEN << IGC_SRRCTL_BSIZEHDRSIZE_SHIFT;
+-	srrctl |= buf_size >> IGC_SRRCTL_BSIZEPKT_SHIFT;
++	srrctl = rd32(IGC_SRRCTL(reg_idx));
++	srrctl &= ~(IGC_SRRCTL_BSIZEPKT_MASK | IGC_SRRCTL_BSIZEHDR_MASK |
++		    IGC_SRRCTL_DESCTYPE_MASK);
++	srrctl |= IGC_SRRCTL_BSIZEHDR(IGC_RX_HDR_LEN);
++	srrctl |= IGC_SRRCTL_BSIZEPKT(buf_size);
+ 	srrctl |= IGC_SRRCTL_DESCTYPE_ADV_ONEBUF;
+ 
+ 	wr32(IGC_SRRCTL(reg_idx), srrctl);
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
+index 6cfc9dc165378..0bbad4a5cc2f5 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
+@@ -2665,6 +2665,14 @@ static int ixgbe_get_rss_hash_opts(struct ixgbe_adapter *adapter,
+ 	return 0;
+ }
+ 
++static int ixgbe_rss_indir_tbl_max(struct ixgbe_adapter *adapter)
++{
++	if (adapter->hw.mac.type < ixgbe_mac_X550)
++		return 16;
++	else
++		return 64;
++}
++
+ static int ixgbe_get_rxnfc(struct net_device *dev, struct ethtool_rxnfc *cmd,
+ 			   u32 *rule_locs)
+ {
+@@ -2673,7 +2681,8 @@ static int ixgbe_get_rxnfc(struct net_device *dev, struct ethtool_rxnfc *cmd,
+ 
+ 	switch (cmd->cmd) {
+ 	case ETHTOOL_GRXRINGS:
+-		cmd->data = adapter->num_rx_queues;
++		cmd->data = min_t(int, adapter->num_rx_queues,
++				  ixgbe_rss_indir_tbl_max(adapter));
+ 		ret = 0;
+ 		break;
+ 	case ETHTOOL_GRXCLSRLCNT:
+@@ -3075,14 +3084,6 @@ static int ixgbe_set_rxnfc(struct net_device *dev, struct ethtool_rxnfc *cmd)
+ 	return ret;
+ }
+ 
+-static int ixgbe_rss_indir_tbl_max(struct ixgbe_adapter *adapter)
+-{
+-	if (adapter->hw.mac.type < ixgbe_mac_X550)
+-		return 16;
+-	else
+-		return 64;
+-}
+-
+ static u32 ixgbe_get_rxfh_key_size(struct net_device *netdev)
+ {
+ 	return IXGBE_RSS_KEY_SIZE;
+@@ -3131,8 +3132,8 @@ static int ixgbe_set_rxfh(struct net_device *netdev, const u32 *indir,
+ 	int i;
+ 	u32 reta_entries = ixgbe_rss_indir_tbl_entries(adapter);
+ 
+-	if (hfunc)
+-		return -EINVAL;
++	if (hfunc != ETH_RSS_HASH_NO_CHANGE && hfunc != ETH_RSS_HASH_TOP)
++		return -EOPNOTSUPP;
+ 
+ 	/* Fill out the redirection table */
+ 	if (indir) {
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/devlink.c b/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
+index c5d2fdcabd566..e5f03d071a37a 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
+@@ -202,7 +202,7 @@ static int mlx5_devlink_reload_up(struct devlink *devlink, enum devlink_reload_a
+ 			break;
+ 		/* On fw_activate action, also driver is reloaded and reinit performed */
+ 		*actions_performed |= BIT(DEVLINK_RELOAD_ACTION_DRIVER_REINIT);
+-		ret = mlx5_load_one_devl_locked(dev, false);
++		ret = mlx5_load_one_devl_locked(dev, true);
+ 		break;
+ 	default:
+ 		/* Unsupported action should not get to this function */
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.c
+index 8f7452dc00ee3..668fdee9cf057 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.c
+@@ -715,5 +715,6 @@ forward:
+ 	return;
+ 
+ free_skb:
++	dev_put(tc_priv.fwd_dev);
+ 	dev_kfree_skb_any(skb);
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/post_act.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/post_act.c
+index 4e48946c4c2ac..0290e0dea5390 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/post_act.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/post_act.c
+@@ -106,22 +106,17 @@ err_rule:
+ }
+ 
+ struct mlx5e_post_act_handle *
+-mlx5e_tc_post_act_add(struct mlx5e_post_act *post_act, struct mlx5_flow_attr *attr)
++mlx5e_tc_post_act_add(struct mlx5e_post_act *post_act, struct mlx5_flow_attr *post_attr)
+ {
+-	u32 attr_sz = ns_to_attr_sz(post_act->ns_type);
+ 	struct mlx5e_post_act_handle *handle;
+-	struct mlx5_flow_attr *post_attr;
+ 	int err;
+ 
+ 	handle = kzalloc(sizeof(*handle), GFP_KERNEL);
+-	post_attr = mlx5_alloc_flow_attr(post_act->ns_type);
+-	if (!handle || !post_attr) {
+-		kfree(post_attr);
++	if (!handle) {
+ 		kfree(handle);
+ 		return ERR_PTR(-ENOMEM);
+ 	}
+ 
+-	memcpy(post_attr, attr, attr_sz);
+ 	post_attr->chain = 0;
+ 	post_attr->prio = 0;
+ 	post_attr->ft = post_act->ft;
+@@ -145,7 +140,6 @@ mlx5e_tc_post_act_add(struct mlx5e_post_act *post_act, struct mlx5_flow_attr *at
+ 	return handle;
+ 
+ err_xarray:
+-	kfree(post_attr);
+ 	kfree(handle);
+ 	return ERR_PTR(err);
+ }
+@@ -164,7 +158,6 @@ mlx5e_tc_post_act_del(struct mlx5e_post_act *post_act, struct mlx5e_post_act_han
+ 	if (!IS_ERR_OR_NULL(handle->rule))
+ 		mlx5e_tc_post_act_unoffload(post_act, handle);
+ 	xa_erase(&post_act->ids, handle->id);
+-	kfree(handle->attr);
+ 	kfree(handle);
+ }
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/post_act.h b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/post_act.h
+index f476774c0b75d..40b8df184af51 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/post_act.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/post_act.h
+@@ -19,7 +19,7 @@ void
+ mlx5e_tc_post_act_destroy(struct mlx5e_post_act *post_act);
+ 
+ struct mlx5e_post_act_handle *
+-mlx5e_tc_post_act_add(struct mlx5e_post_act *post_act, struct mlx5_flow_attr *attr);
++mlx5e_tc_post_act_add(struct mlx5e_post_act *post_act, struct mlx5_flow_attr *post_attr);
+ 
+ void
+ mlx5e_tc_post_act_del(struct mlx5e_post_act *post_act, struct mlx5e_post_act_handle *handle);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/sample.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/sample.c
+index 558a776359af6..5db239cae8145 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/sample.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/sample.c
+@@ -14,10 +14,10 @@
+ 
+ #define MLX5_ESW_VPORT_TBL_SIZE_SAMPLE (64 * 1024)
+ 
+-static const struct esw_vport_tbl_namespace mlx5_esw_vport_tbl_sample_ns = {
++static struct esw_vport_tbl_namespace mlx5_esw_vport_tbl_sample_ns = {
+ 	.max_fte = MLX5_ESW_VPORT_TBL_SIZE_SAMPLE,
+ 	.max_num_groups = 0,    /* default num of groups */
+-	.flags = MLX5_FLOW_TABLE_TUNNEL_EN_REFORMAT | MLX5_FLOW_TABLE_TUNNEL_EN_DECAP,
++	.flags = 0,
+ };
+ 
+ struct mlx5e_tc_psample {
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
+index 314983bc6f085..ee49bd2461e46 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
+@@ -920,6 +920,7 @@ mlx5_tc_ct_entry_replace_rule(struct mlx5_tc_ct_priv *ct_priv,
+ 	zone_rule->rule = rule;
+ 	mlx5_tc_ct_entry_destroy_mod_hdr(ct_priv, old_attr, zone_rule->mh);
+ 	zone_rule->mh = mh;
++	mlx5_put_label_mapping(ct_priv, old_attr->ct_attr.ct_labels_id);
+ 
+ 	kfree(old_attr);
+ 	kvfree(spec);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c b/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c
+index 05796f8b1d7cf..33bfe4d7338be 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c
+@@ -783,6 +783,7 @@ static int mlx5e_create_promisc_table(struct mlx5e_flow_steering *fs)
+ 	ft->t = mlx5_create_auto_grouped_flow_table(fs->ns, &ft_attr);
+ 	if (IS_ERR(ft->t)) {
+ 		err = PTR_ERR(ft->t);
++		ft->t = NULL;
+ 		fs_err(fs, "fail to create promisc table err=%d\n", err);
+ 		return err;
+ 	}
+@@ -810,7 +811,7 @@ static void mlx5e_del_promisc_rule(struct mlx5e_flow_steering *fs)
+ 
+ static void mlx5e_destroy_promisc_table(struct mlx5e_flow_steering *fs)
+ {
+-	if (WARN(!fs->promisc.ft.t, "Trying to remove non-existing promiscuous table"))
++	if (!fs->promisc.ft.t)
+ 		return;
+ 	mlx5e_del_promisc_rule(fs);
+ 	mlx5_destroy_flow_table(fs->promisc.ft.t);
+@@ -1490,6 +1491,8 @@ err:
+ 
+ void mlx5e_fs_cleanup(struct mlx5e_flow_steering *fs)
+ {
++	if (!fs)
++		return;
+ 	debugfs_remove_recursive(fs->dfs_root);
+ 	mlx5e_fs_ethtool_free(fs);
+ 	mlx5e_fs_tc_free(fs);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index 7ca7e9b57607f..579c2d217fdc6 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -5270,6 +5270,7 @@ static void mlx5e_nic_cleanup(struct mlx5e_priv *priv)
+ 	mlx5e_health_destroy_reporters(priv);
+ 	mlx5e_ktls_cleanup(priv);
+ 	mlx5e_fs_cleanup(priv->fs);
++	priv->fs = NULL;
+ }
+ 
+ static int mlx5e_init_nic_rx(struct mlx5e_priv *priv)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+index 8ff654b4e9e14..6e18d91c3d766 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+@@ -828,6 +828,7 @@ static int mlx5e_init_ul_rep(struct mlx5_core_dev *mdev,
+ static void mlx5e_cleanup_rep(struct mlx5e_priv *priv)
+ {
+ 	mlx5e_fs_cleanup(priv->fs);
++	priv->fs = NULL;
+ }
+ 
+ static int mlx5e_create_rep_ttc_table(struct mlx5e_priv *priv)
+@@ -994,6 +995,7 @@ err_close_drop_rq:
+ 	priv->rx_res = NULL;
+ err_free_fs:
+ 	mlx5e_fs_cleanup(priv->fs);
++	priv->fs = NULL;
+ 	return err;
+ }
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/vporttbl.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/vporttbl.c
+index 9e72118f2e4c0..749c3957a1280 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/vporttbl.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/vporttbl.c
+@@ -11,7 +11,7 @@ struct mlx5_vport_key {
+ 	u16 prio;
+ 	u16 vport;
+ 	u16 vhca_id;
+-	const struct esw_vport_tbl_namespace *vport_ns;
++	struct esw_vport_tbl_namespace *vport_ns;
+ } __packed;
+ 
+ struct mlx5_vport_table {
+@@ -21,6 +21,14 @@ struct mlx5_vport_table {
+ 	struct mlx5_vport_key key;
+ };
+ 
++static void
++esw_vport_tbl_init(struct mlx5_eswitch *esw, struct esw_vport_tbl_namespace *ns)
++{
++	if (esw->offloads.encap != DEVLINK_ESWITCH_ENCAP_MODE_NONE)
++		ns->flags |= (MLX5_FLOW_TABLE_TUNNEL_EN_REFORMAT |
++			      MLX5_FLOW_TABLE_TUNNEL_EN_DECAP);
++}
++
+ static struct mlx5_flow_table *
+ esw_vport_tbl_create(struct mlx5_eswitch *esw, struct mlx5_flow_namespace *ns,
+ 		     const struct esw_vport_tbl_namespace *vport_ns)
+@@ -80,6 +88,7 @@ mlx5_esw_vporttbl_get(struct mlx5_eswitch *esw, struct mlx5_vport_tbl_attr *attr
+ 	u32 hkey;
+ 
+ 	mutex_lock(&esw->fdb_table.offloads.vports.lock);
++	esw_vport_tbl_init(esw, attr->vport_ns);
+ 	hkey = flow_attr_to_vport_key(esw, attr, &skey);
+ 	e = esw_vport_tbl_lookup(esw, &skey, hkey);
+ 	if (e) {
+@@ -127,6 +136,7 @@ mlx5_esw_vporttbl_put(struct mlx5_eswitch *esw, struct mlx5_vport_tbl_attr *attr
+ 	u32 hkey;
+ 
+ 	mutex_lock(&esw->fdb_table.offloads.vports.lock);
++	esw_vport_tbl_init(esw, attr->vport_ns);
+ 	hkey = flow_attr_to_vport_key(esw, attr, &key);
+ 	e = esw_vport_tbl_lookup(esw, &key, hkey);
+ 	if (!e || --e->num_rules)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+index 19e9a77c46336..9d5a5756a15a9 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+@@ -674,7 +674,7 @@ struct mlx5_vport_tbl_attr {
+ 	u32 chain;
+ 	u16 prio;
+ 	u16 vport;
+-	const struct esw_vport_tbl_namespace *vport_ns;
++	struct esw_vport_tbl_namespace *vport_ns;
+ };
+ 
+ struct mlx5_flow_table *
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+index 25a8076a77bff..c99d208722f58 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+@@ -73,7 +73,7 @@
+ 
+ #define MLX5_ESW_FT_OFFLOADS_DROP_RULE (1)
+ 
+-static const struct esw_vport_tbl_namespace mlx5_esw_vport_tbl_mirror_ns = {
++static struct esw_vport_tbl_namespace mlx5_esw_vport_tbl_mirror_ns = {
+ 	.max_fte = MLX5_ESW_VPORT_TBL_SIZE,
+ 	.max_num_groups = MLX5_ESW_VPORT_TBL_NUM_GROUPS,
+ 	.flags = 0,
+@@ -760,7 +760,6 @@ mlx5_eswitch_add_fwd_rule(struct mlx5_eswitch *esw,
+ 	kfree(dest);
+ 	return rule;
+ err_chain_src_rewrite:
+-	esw_put_dest_tables_loop(esw, attr, 0, i);
+ 	mlx5_esw_vporttbl_put(esw, &fwd_attr);
+ err_get_fwd:
+ 	mlx5_chains_put_table(chains, attr->chain, attr->prio, 0);
+@@ -803,7 +802,6 @@ __mlx5_eswitch_del_rule(struct mlx5_eswitch *esw,
+ 	if (fwd_rule)  {
+ 		mlx5_esw_vporttbl_put(esw, &fwd_attr);
+ 		mlx5_chains_put_table(chains, attr->chain, attr->prio, 0);
+-		esw_put_dest_tables_loop(esw, attr, 0, esw_attr->split_count);
+ 	} else {
+ 		if (split)
+ 			mlx5_esw_vporttbl_put(esw, &fwd_attr);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads_termtbl.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads_termtbl.c
+index 3a9a6bb9158de..edd9102583144 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads_termtbl.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads_termtbl.c
+@@ -210,18 +210,6 @@ static bool mlx5_eswitch_offload_is_uplink_port(const struct mlx5_eswitch *esw,
+ 	return (port_mask & port_value) == MLX5_VPORT_UPLINK;
+ }
+ 
+-static bool
+-mlx5_eswitch_is_push_vlan_no_cap(struct mlx5_eswitch *esw,
+-				 struct mlx5_flow_act *flow_act)
+-{
+-	if (flow_act->action & MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH &&
+-	    !(mlx5_fs_get_capabilities(esw->dev, MLX5_FLOW_NAMESPACE_FDB) &
+-	      MLX5_FLOW_STEERING_CAP_VLAN_PUSH_ON_RX))
+-		return true;
+-
+-	return false;
+-}
+-
+ bool
+ mlx5_eswitch_termtbl_required(struct mlx5_eswitch *esw,
+ 			      struct mlx5_flow_attr *attr,
+@@ -237,7 +225,10 @@ mlx5_eswitch_termtbl_required(struct mlx5_eswitch *esw,
+ 	    (!mlx5_eswitch_offload_is_uplink_port(esw, spec) && !esw_attr->int_port))
+ 		return false;
+ 
+-	if (mlx5_eswitch_is_push_vlan_no_cap(esw, flow_act))
++	/* push vlan on RX */
++	if (flow_act->action & MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH &&
++	    !(mlx5_fs_get_capabilities(esw->dev, MLX5_FLOW_NAMESPACE_FDB) &
++	      MLX5_FLOW_STEERING_CAP_VLAN_PUSH_ON_RX))
+ 		return true;
+ 
+ 	/* hairpin */
+@@ -261,31 +252,19 @@ mlx5_eswitch_add_termtbl_rule(struct mlx5_eswitch *esw,
+ 	struct mlx5_flow_act term_tbl_act = {};
+ 	struct mlx5_flow_handle *rule = NULL;
+ 	bool term_table_created = false;
+-	bool is_push_vlan_on_rx;
+ 	int num_vport_dests = 0;
+ 	int i, curr_dest;
+ 
+-	is_push_vlan_on_rx = mlx5_eswitch_is_push_vlan_no_cap(esw, flow_act);
+ 	mlx5_eswitch_termtbl_actions_move(flow_act, &term_tbl_act);
+ 	term_tbl_act.action |= MLX5_FLOW_CONTEXT_ACTION_FWD_DEST;
+ 
+ 	for (i = 0; i < num_dest; i++) {
+ 		struct mlx5_termtbl_handle *tt;
+-		bool hairpin = false;
+ 
+ 		/* only vport destinations can be terminated */
+ 		if (dest[i].type != MLX5_FLOW_DESTINATION_TYPE_VPORT)
+ 			continue;
+ 
+-		if (attr->dests[num_vport_dests].rep &&
+-		    attr->dests[num_vport_dests].rep->vport == MLX5_VPORT_UPLINK)
+-			hairpin = true;
+-
+-		if (!is_push_vlan_on_rx && !hairpin) {
+-			num_vport_dests++;
+-			continue;
+-		}
+-
+ 		if (attr->dests[num_vport_dests].flags & MLX5_ESW_DEST_ENCAP) {
+ 			term_tbl_act.action |= MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT;
+ 			term_tbl_act.pkt_reformat = attr->dests[num_vport_dests].pkt_reformat;
+@@ -333,9 +312,6 @@ revert_changes:
+ 	for (curr_dest = 0; curr_dest < num_vport_dests; curr_dest++) {
+ 		struct mlx5_termtbl_handle *tt = attr->dests[curr_dest].termtbl;
+ 
+-		if (!tt)
+-			continue;
+-
+ 		attr->dests[curr_dest].termtbl = NULL;
+ 
+ 		/* search for the destination associated with the
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c b/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
+index 4c2dad9d7cfb8..50022e7565f14 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
+@@ -167,7 +167,7 @@ static void mlx5_fw_reset_complete_reload(struct mlx5_core_dev *dev)
+ 		if (mlx5_health_wait_pci_up(dev))
+ 			mlx5_core_err(dev, "reset reload flow aborted, PCI reads still not working\n");
+ 		else
+-			mlx5_load_one(dev);
++			mlx5_load_one(dev, true);
+ 		devlink_remote_reload_actions_performed(priv_to_devlink(dev), 0,
+ 							BIT(DEVLINK_RELOAD_ACTION_DRIVER_REINIT) |
+ 							BIT(DEVLINK_RELOAD_ACTION_FW_ACTIVATE));
+@@ -499,7 +499,7 @@ int mlx5_fw_reset_wait_reset_done(struct mlx5_core_dev *dev)
+ 	err = fw_reset->ret;
+ 	if (test_and_clear_bit(MLX5_FW_RESET_FLAGS_RELOAD_REQUIRED, &fw_reset->reset_flags)) {
+ 		mlx5_unload_one_devl_locked(dev, false);
+-		mlx5_load_one_devl_locked(dev, false);
++		mlx5_load_one_devl_locked(dev, true);
+ 	}
+ out:
+ 	clear_bit(MLX5_FW_RESET_FLAGS_PENDING_COMP, &fw_reset->reset_flags);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+index f1de152a61135..ad90bf125e94f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+@@ -1509,13 +1509,13 @@ out:
+ 	return err;
+ }
+ 
+-int mlx5_load_one(struct mlx5_core_dev *dev)
++int mlx5_load_one(struct mlx5_core_dev *dev, bool recovery)
+ {
+ 	struct devlink *devlink = priv_to_devlink(dev);
+ 	int ret;
+ 
+ 	devl_lock(devlink);
+-	ret = mlx5_load_one_devl_locked(dev, false);
++	ret = mlx5_load_one_devl_locked(dev, recovery);
+ 	devl_unlock(devlink);
+ 	return ret;
+ }
+@@ -1912,7 +1912,8 @@ static void mlx5_pci_resume(struct pci_dev *pdev)
+ 
+ 	mlx5_pci_trace(dev, "Enter, loading driver..\n");
+ 
+-	err = mlx5_load_one(dev);
++	err = mlx5_load_one(dev, false);
++
+ 	if (!err)
+ 		devlink_health_reporter_state_update(dev->priv.health.fw_fatal_reporter,
+ 						     DEVLINK_HEALTH_REPORTER_STATE_HEALTHY);
+@@ -2003,7 +2004,7 @@ static int mlx5_resume(struct pci_dev *pdev)
+ {
+ 	struct mlx5_core_dev *dev = pci_get_drvdata(pdev);
+ 
+-	return mlx5_load_one(dev);
++	return mlx5_load_one(dev, false);
+ }
+ 
+ static const struct pci_device_id mlx5_core_pci_table[] = {
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h b/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
+index be0785f83083a..a3c5c2dab5fd7 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
+@@ -321,7 +321,7 @@ int mlx5_init_one(struct mlx5_core_dev *dev);
+ void mlx5_uninit_one(struct mlx5_core_dev *dev);
+ void mlx5_unload_one(struct mlx5_core_dev *dev, bool suspend);
+ void mlx5_unload_one_devl_locked(struct mlx5_core_dev *dev, bool suspend);
+-int mlx5_load_one(struct mlx5_core_dev *dev);
++int mlx5_load_one(struct mlx5_core_dev *dev, bool recovery);
+ int mlx5_load_one_devl_locked(struct mlx5_core_dev *dev, bool recovery);
+ 
+ int mlx5_vport_set_other_func_cap(struct mlx5_core_dev *dev, const void *hca_cap, u16 function_id,
+diff --git a/drivers/net/ethernet/netronome/nfp/crypto/ipsec.c b/drivers/net/ethernet/netronome/nfp/crypto/ipsec.c
+index c0dcce8ae4375..b1f026b81dea1 100644
+--- a/drivers/net/ethernet/netronome/nfp/crypto/ipsec.c
++++ b/drivers/net/ethernet/netronome/nfp/crypto/ipsec.c
+@@ -269,7 +269,7 @@ static void set_sha2_512hmac(struct nfp_ipsec_cfg_add_sa *cfg, int *trunc_len)
+ static int nfp_net_xfrm_add_state(struct xfrm_state *x,
+ 				  struct netlink_ext_ack *extack)
+ {
+-	struct net_device *netdev = x->xso.dev;
++	struct net_device *netdev = x->xso.real_dev;
+ 	struct nfp_ipsec_cfg_mssg msg = {};
+ 	int i, key_len, trunc_len, err = 0;
+ 	struct nfp_ipsec_cfg_add_sa *cfg;
+@@ -513,7 +513,7 @@ static void nfp_net_xfrm_del_state(struct xfrm_state *x)
+ 		.cmd = NFP_IPSEC_CFG_MSSG_INV_SA,
+ 		.sa_idx = x->xso.offload_handle - 1,
+ 	};
+-	struct net_device *netdev = x->xso.dev;
++	struct net_device *netdev = x->xso.real_dev;
+ 	struct nfp_net *nn;
+ 	int err;
+ 
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c
+index 4b8fd11563e49..4ea31ccf24d09 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c
+@@ -39,6 +39,24 @@ struct rk_gmac_ops {
+ 	u32 regs[];
+ };
+ 
++static const char * const rk_clocks[] = {
++	"aclk_mac", "pclk_mac", "mac_clk_tx", "clk_mac_speed",
++};
++
++static const char * const rk_rmii_clocks[] = {
++	"mac_clk_rx", "clk_mac_ref", "clk_mac_refout",
++};
++
++enum rk_clocks_index {
++	RK_ACLK_MAC = 0,
++	RK_PCLK_MAC,
++	RK_MAC_CLK_TX,
++	RK_CLK_MAC_SPEED,
++	RK_MAC_CLK_RX,
++	RK_CLK_MAC_REF,
++	RK_CLK_MAC_REFOUT,
++};
++
+ struct rk_priv_data {
+ 	struct platform_device *pdev;
+ 	phy_interface_t phy_iface;
+@@ -51,15 +69,9 @@ struct rk_priv_data {
+ 	bool clock_input;
+ 	bool integrated_phy;
+ 
++	struct clk_bulk_data *clks;
++	int num_clks;
+ 	struct clk *clk_mac;
+-	struct clk *gmac_clkin;
+-	struct clk *mac_clk_rx;
+-	struct clk *mac_clk_tx;
+-	struct clk *clk_mac_ref;
+-	struct clk *clk_mac_refout;
+-	struct clk *clk_mac_speed;
+-	struct clk *aclk_mac;
+-	struct clk *pclk_mac;
+ 	struct clk *clk_phy;
+ 
+ 	struct reset_control *phy_reset;
+@@ -104,10 +116,11 @@ static void px30_set_to_rmii(struct rk_priv_data *bsp_priv)
+ 
+ static void px30_set_rmii_speed(struct rk_priv_data *bsp_priv, int speed)
+ {
++	struct clk *clk_mac_speed = bsp_priv->clks[RK_CLK_MAC_SPEED].clk;
+ 	struct device *dev = &bsp_priv->pdev->dev;
+ 	int ret;
+ 
+-	if (IS_ERR(bsp_priv->clk_mac_speed)) {
++	if (!clk_mac_speed) {
+ 		dev_err(dev, "%s: Missing clk_mac_speed clock\n", __func__);
+ 		return;
+ 	}
+@@ -116,7 +129,7 @@ static void px30_set_rmii_speed(struct rk_priv_data *bsp_priv, int speed)
+ 		regmap_write(bsp_priv->grf, PX30_GRF_GMAC_CON1,
+ 			     PX30_GMAC_SPEED_10M);
+ 
+-		ret = clk_set_rate(bsp_priv->clk_mac_speed, 2500000);
++		ret = clk_set_rate(clk_mac_speed, 2500000);
+ 		if (ret)
+ 			dev_err(dev, "%s: set clk_mac_speed rate 2500000 failed: %d\n",
+ 				__func__, ret);
+@@ -124,7 +137,7 @@ static void px30_set_rmii_speed(struct rk_priv_data *bsp_priv, int speed)
+ 		regmap_write(bsp_priv->grf, PX30_GRF_GMAC_CON1,
+ 			     PX30_GMAC_SPEED_100M);
+ 
+-		ret = clk_set_rate(bsp_priv->clk_mac_speed, 25000000);
++		ret = clk_set_rate(clk_mac_speed, 25000000);
+ 		if (ret)
+ 			dev_err(dev, "%s: set clk_mac_speed rate 25000000 failed: %d\n",
+ 				__func__, ret);
+@@ -1066,6 +1079,7 @@ static void rk3568_set_to_rmii(struct rk_priv_data *bsp_priv)
+ 
+ static void rk3568_set_gmac_speed(struct rk_priv_data *bsp_priv, int speed)
+ {
++	struct clk *clk_mac_speed = bsp_priv->clks[RK_CLK_MAC_SPEED].clk;
+ 	struct device *dev = &bsp_priv->pdev->dev;
+ 	unsigned long rate;
+ 	int ret;
+@@ -1085,7 +1099,7 @@ static void rk3568_set_gmac_speed(struct rk_priv_data *bsp_priv, int speed)
+ 		return;
+ 	}
+ 
+-	ret = clk_set_rate(bsp_priv->clk_mac_speed, rate);
++	ret = clk_set_rate(clk_mac_speed, rate);
+ 	if (ret)
+ 		dev_err(dev, "%s: set clk_mac_speed rate %ld failed %d\n",
+ 			__func__, rate, ret);
+@@ -1371,6 +1385,7 @@ static void rv1126_set_to_rmii(struct rk_priv_data *bsp_priv)
+ 
+ static void rv1126_set_rgmii_speed(struct rk_priv_data *bsp_priv, int speed)
+ {
++	struct clk *clk_mac_speed = bsp_priv->clks[RK_CLK_MAC_SPEED].clk;
+ 	struct device *dev = &bsp_priv->pdev->dev;
+ 	unsigned long rate;
+ 	int ret;
+@@ -1390,7 +1405,7 @@ static void rv1126_set_rgmii_speed(struct rk_priv_data *bsp_priv, int speed)
+ 		return;
+ 	}
+ 
+-	ret = clk_set_rate(bsp_priv->clk_mac_speed, rate);
++	ret = clk_set_rate(clk_mac_speed, rate);
+ 	if (ret)
+ 		dev_err(dev, "%s: set clk_mac_speed rate %ld failed %d\n",
+ 			__func__, rate, ret);
+@@ -1398,6 +1413,7 @@ static void rv1126_set_rgmii_speed(struct rk_priv_data *bsp_priv, int speed)
+ 
+ static void rv1126_set_rmii_speed(struct rk_priv_data *bsp_priv, int speed)
+ {
++	struct clk *clk_mac_speed = bsp_priv->clks[RK_CLK_MAC_SPEED].clk;
+ 	struct device *dev = &bsp_priv->pdev->dev;
+ 	unsigned long rate;
+ 	int ret;
+@@ -1414,7 +1430,7 @@ static void rv1126_set_rmii_speed(struct rk_priv_data *bsp_priv, int speed)
+ 		return;
+ 	}
+ 
+-	ret = clk_set_rate(bsp_priv->clk_mac_speed, rate);
++	ret = clk_set_rate(clk_mac_speed, rate);
+ 	if (ret)
+ 		dev_err(dev, "%s: set clk_mac_speed rate %ld failed %d\n",
+ 			__func__, rate, ret);
+@@ -1475,68 +1491,50 @@ static int rk_gmac_clk_init(struct plat_stmmacenet_data *plat)
+ {
+ 	struct rk_priv_data *bsp_priv = plat->bsp_priv;
+ 	struct device *dev = &bsp_priv->pdev->dev;
+-	int ret;
++	int phy_iface = bsp_priv->phy_iface;
++	int i, j, ret;
+ 
+ 	bsp_priv->clk_enabled = false;
+ 
+-	bsp_priv->mac_clk_rx = devm_clk_get(dev, "mac_clk_rx");
+-	if (IS_ERR(bsp_priv->mac_clk_rx))
+-		dev_err(dev, "cannot get clock %s\n",
+-			"mac_clk_rx");
++	bsp_priv->num_clks = ARRAY_SIZE(rk_clocks);
++	if (phy_iface == PHY_INTERFACE_MODE_RMII)
++		bsp_priv->num_clks += ARRAY_SIZE(rk_rmii_clocks);
+ 
+-	bsp_priv->mac_clk_tx = devm_clk_get(dev, "mac_clk_tx");
+-	if (IS_ERR(bsp_priv->mac_clk_tx))
+-		dev_err(dev, "cannot get clock %s\n",
+-			"mac_clk_tx");
++	bsp_priv->clks = devm_kcalloc(dev, bsp_priv->num_clks,
++				      sizeof(*bsp_priv->clks), GFP_KERNEL);
++	if (!bsp_priv->clks)
++		return -ENOMEM;
+ 
+-	bsp_priv->aclk_mac = devm_clk_get(dev, "aclk_mac");
+-	if (IS_ERR(bsp_priv->aclk_mac))
+-		dev_err(dev, "cannot get clock %s\n",
+-			"aclk_mac");
++	for (i = 0; i < ARRAY_SIZE(rk_clocks); i++)
++		bsp_priv->clks[i].id = rk_clocks[i];
+ 
+-	bsp_priv->pclk_mac = devm_clk_get(dev, "pclk_mac");
+-	if (IS_ERR(bsp_priv->pclk_mac))
+-		dev_err(dev, "cannot get clock %s\n",
+-			"pclk_mac");
+-
+-	bsp_priv->clk_mac = devm_clk_get(dev, "stmmaceth");
+-	if (IS_ERR(bsp_priv->clk_mac))
+-		dev_err(dev, "cannot get clock %s\n",
+-			"stmmaceth");
+-
+-	if (bsp_priv->phy_iface == PHY_INTERFACE_MODE_RMII) {
+-		bsp_priv->clk_mac_ref = devm_clk_get(dev, "clk_mac_ref");
+-		if (IS_ERR(bsp_priv->clk_mac_ref))
+-			dev_err(dev, "cannot get clock %s\n",
+-				"clk_mac_ref");
+-
+-		if (!bsp_priv->clock_input) {
+-			bsp_priv->clk_mac_refout =
+-				devm_clk_get(dev, "clk_mac_refout");
+-			if (IS_ERR(bsp_priv->clk_mac_refout))
+-				dev_err(dev, "cannot get clock %s\n",
+-					"clk_mac_refout");
+-		}
++	if (phy_iface == PHY_INTERFACE_MODE_RMII) {
++		for (j = 0; j < ARRAY_SIZE(rk_rmii_clocks); j++)
++			bsp_priv->clks[i++].id = rk_rmii_clocks[j];
+ 	}
+ 
+-	bsp_priv->clk_mac_speed = devm_clk_get(dev, "clk_mac_speed");
+-	if (IS_ERR(bsp_priv->clk_mac_speed))
+-		dev_err(dev, "cannot get clock %s\n", "clk_mac_speed");
++	ret = devm_clk_bulk_get_optional(dev, bsp_priv->num_clks,
++					 bsp_priv->clks);
++	if (ret)
++		return dev_err_probe(dev, ret, "Failed to get clocks\n");
++
++	/* "stmmaceth" will be enabled by the core */
++	bsp_priv->clk_mac = devm_clk_get(dev, "stmmaceth");
++	ret = PTR_ERR_OR_ZERO(bsp_priv->clk_mac);
++	if (ret)
++		return dev_err_probe(dev, ret, "Cannot get stmmaceth clock\n");
+ 
+ 	if (bsp_priv->clock_input) {
+ 		dev_info(dev, "clock input from PHY\n");
+-	} else {
+-		if (bsp_priv->phy_iface == PHY_INTERFACE_MODE_RMII)
+-			clk_set_rate(bsp_priv->clk_mac, 50000000);
++	} else if (phy_iface == PHY_INTERFACE_MODE_RMII) {
++		clk_set_rate(bsp_priv->clk_mac, 50000000);
+ 	}
+ 
+ 	if (plat->phy_node && bsp_priv->integrated_phy) {
+ 		bsp_priv->clk_phy = of_clk_get(plat->phy_node, 0);
+-		if (IS_ERR(bsp_priv->clk_phy)) {
+-			ret = PTR_ERR(bsp_priv->clk_phy);
+-			dev_err(dev, "Cannot get PHY clock: %d\n", ret);
+-			return -EINVAL;
+-		}
++		ret = PTR_ERR_OR_ZERO(bsp_priv->clk_phy);
++		if (ret)
++			return dev_err_probe(dev, ret, "Cannot get PHY clock\n");
+ 		clk_set_rate(bsp_priv->clk_phy, 50000000);
+ 	}
+ 
+@@ -1545,77 +1543,36 @@ static int rk_gmac_clk_init(struct plat_stmmacenet_data *plat)
+ 
+ static int gmac_clk_enable(struct rk_priv_data *bsp_priv, bool enable)
+ {
+-	int phy_iface = bsp_priv->phy_iface;
++	int ret;
+ 
+ 	if (enable) {
+ 		if (!bsp_priv->clk_enabled) {
+-			if (phy_iface == PHY_INTERFACE_MODE_RMII) {
+-				if (!IS_ERR(bsp_priv->mac_clk_rx))
+-					clk_prepare_enable(
+-						bsp_priv->mac_clk_rx);
+-
+-				if (!IS_ERR(bsp_priv->clk_mac_ref))
+-					clk_prepare_enable(
+-						bsp_priv->clk_mac_ref);
+-
+-				if (!IS_ERR(bsp_priv->clk_mac_refout))
+-					clk_prepare_enable(
+-						bsp_priv->clk_mac_refout);
+-			}
+-
+-			if (!IS_ERR(bsp_priv->clk_phy))
+-				clk_prepare_enable(bsp_priv->clk_phy);
++			ret = clk_bulk_prepare_enable(bsp_priv->num_clks,
++						      bsp_priv->clks);
++			if (ret)
++				return ret;
+ 
+-			if (!IS_ERR(bsp_priv->aclk_mac))
+-				clk_prepare_enable(bsp_priv->aclk_mac);
+-
+-			if (!IS_ERR(bsp_priv->pclk_mac))
+-				clk_prepare_enable(bsp_priv->pclk_mac);
+-
+-			if (!IS_ERR(bsp_priv->mac_clk_tx))
+-				clk_prepare_enable(bsp_priv->mac_clk_tx);
+-
+-			if (!IS_ERR(bsp_priv->clk_mac_speed))
+-				clk_prepare_enable(bsp_priv->clk_mac_speed);
++			ret = clk_prepare_enable(bsp_priv->clk_phy);
++			if (ret)
++				return ret;
+ 
+ 			if (bsp_priv->ops && bsp_priv->ops->set_clock_selection)
+ 				bsp_priv->ops->set_clock_selection(bsp_priv,
+ 					       bsp_priv->clock_input, true);
+ 
+-			/**
+-			 * if (!IS_ERR(bsp_priv->clk_mac))
+-			 *	clk_prepare_enable(bsp_priv->clk_mac);
+-			 */
+ 			mdelay(5);
+ 			bsp_priv->clk_enabled = true;
+ 		}
+ 	} else {
+ 		if (bsp_priv->clk_enabled) {
+-			if (phy_iface == PHY_INTERFACE_MODE_RMII) {
+-				clk_disable_unprepare(bsp_priv->mac_clk_rx);
+-
+-				clk_disable_unprepare(bsp_priv->clk_mac_ref);
+-
+-				clk_disable_unprepare(bsp_priv->clk_mac_refout);
+-			}
+-
++			clk_bulk_disable_unprepare(bsp_priv->num_clks,
++						   bsp_priv->clks);
+ 			clk_disable_unprepare(bsp_priv->clk_phy);
+ 
+-			clk_disable_unprepare(bsp_priv->aclk_mac);
+-
+-			clk_disable_unprepare(bsp_priv->pclk_mac);
+-
+-			clk_disable_unprepare(bsp_priv->mac_clk_tx);
+-
+-			clk_disable_unprepare(bsp_priv->clk_mac_speed);
+-
+ 			if (bsp_priv->ops && bsp_priv->ops->set_clock_selection)
+ 				bsp_priv->ops->set_clock_selection(bsp_priv,
+ 					      bsp_priv->clock_input, false);
+-			/**
+-			 * if (!IS_ERR(bsp_priv->clk_mac))
+-			 *	clk_disable_unprepare(bsp_priv->clk_mac);
+-			 */
++
+ 			bsp_priv->clk_enabled = false;
+ 		}
+ 	}
+@@ -1629,9 +1586,6 @@ static int phy_power_on(struct rk_priv_data *bsp_priv, bool enable)
+ 	int ret;
+ 	struct device *dev = &bsp_priv->pdev->dev;
+ 
+-	if (!ldo)
+-		return 0;
+-
+ 	if (enable) {
+ 		ret = regulator_enable(ldo);
+ 		if (ret)
+@@ -1679,14 +1633,11 @@ static struct rk_priv_data *rk_gmac_setup(struct platform_device *pdev,
+ 		}
+ 	}
+ 
+-	bsp_priv->regulator = devm_regulator_get_optional(dev, "phy");
++	bsp_priv->regulator = devm_regulator_get(dev, "phy");
+ 	if (IS_ERR(bsp_priv->regulator)) {
+-		if (PTR_ERR(bsp_priv->regulator) == -EPROBE_DEFER) {
+-			dev_err(dev, "phy regulator is not available yet, deferred probing\n");
+-			return ERR_PTR(-EPROBE_DEFER);
+-		}
+-		dev_err(dev, "no regulator found\n");
+-		bsp_priv->regulator = NULL;
++		ret = PTR_ERR(bsp_priv->regulator);
++		dev_err_probe(dev, ret, "failed to get phy regulator\n");
++		return ERR_PTR(ret);
+ 	}
+ 
+ 	ret = of_property_read_string(dev->of_node, "clock_in_out", &strings);
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index d7fcab0570322..f9cd063f1fe30 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -6350,6 +6350,10 @@ static int stmmac_vlan_rx_add_vid(struct net_device *ndev, __be16 proto, u16 vid
+ 	bool is_double = false;
+ 	int ret;
+ 
++	ret = pm_runtime_resume_and_get(priv->device);
++	if (ret < 0)
++		return ret;
++
+ 	if (be16_to_cpu(proto) == ETH_P_8021AD)
+ 		is_double = true;
+ 
+@@ -6357,16 +6361,18 @@ static int stmmac_vlan_rx_add_vid(struct net_device *ndev, __be16 proto, u16 vid
+ 	ret = stmmac_vlan_update(priv, is_double);
+ 	if (ret) {
+ 		clear_bit(vid, priv->active_vlans);
+-		return ret;
++		goto err_pm_put;
+ 	}
+ 
+ 	if (priv->hw->num_vlan) {
+ 		ret = stmmac_add_hw_vlan_rx_fltr(priv, ndev, priv->hw, proto, vid);
+ 		if (ret)
+-			return ret;
++			goto err_pm_put;
+ 	}
++err_pm_put:
++	pm_runtime_put(priv->device);
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ static int stmmac_vlan_rx_kill_vid(struct net_device *ndev, __be16 proto, u16 vid)
+diff --git a/drivers/net/ethernet/sun/sunhme.c b/drivers/net/ethernet/sun/sunhme.c
+index b0c7ab74a82ed..7cf8210ebbec3 100644
+--- a/drivers/net/ethernet/sun/sunhme.c
++++ b/drivers/net/ethernet/sun/sunhme.c
+@@ -2834,7 +2834,7 @@ static int happy_meal_pci_probe(struct pci_dev *pdev,
+ 	int i, qfe_slot = -1;
+ 	char prom_name[64];
+ 	u8 addr[ETH_ALEN];
+-	int err;
++	int err = -ENODEV;
+ 
+ 	/* Now make sure pci_dev cookie is there. */
+ #ifdef CONFIG_SPARC
+diff --git a/drivers/net/ethernet/wangxun/libwx/wx_lib.c b/drivers/net/ethernet/wangxun/libwx/wx_lib.c
+index eb89a274083e7..1e8d8b7b0c62e 100644
+--- a/drivers/net/ethernet/wangxun/libwx/wx_lib.c
++++ b/drivers/net/ethernet/wangxun/libwx/wx_lib.c
+@@ -1798,10 +1798,13 @@ static int wx_setup_rx_resources(struct wx_ring *rx_ring)
+ 	ret = wx_alloc_page_pool(rx_ring);
+ 	if (ret < 0) {
+ 		dev_err(rx_ring->dev, "Page pool creation failed: %d\n", ret);
+-		goto err;
++		goto err_desc;
+ 	}
+ 
+ 	return 0;
++
++err_desc:
++	dma_free_coherent(dev, rx_ring->size, rx_ring->desc, rx_ring->dma);
+ err:
+ 	kvfree(rx_ring->rx_buffer_info);
+ 	rx_ring->rx_buffer_info = NULL;
+diff --git a/drivers/net/pcs/pcs-xpcs.c b/drivers/net/pcs/pcs-xpcs.c
+index bc428a816719d..04a6853530418 100644
+--- a/drivers/net/pcs/pcs-xpcs.c
++++ b/drivers/net/pcs/pcs-xpcs.c
+@@ -321,7 +321,7 @@ static int xpcs_read_fault_c73(struct dw_xpcs *xpcs,
+ 	return 0;
+ }
+ 
+-static int xpcs_read_link_c73(struct dw_xpcs *xpcs, bool an)
++static int xpcs_read_link_c73(struct dw_xpcs *xpcs)
+ {
+ 	bool link = true;
+ 	int ret;
+@@ -333,15 +333,6 @@ static int xpcs_read_link_c73(struct dw_xpcs *xpcs, bool an)
+ 	if (!(ret & MDIO_STAT1_LSTATUS))
+ 		link = false;
+ 
+-	if (an) {
+-		ret = xpcs_read(xpcs, MDIO_MMD_AN, MDIO_STAT1);
+-		if (ret < 0)
+-			return ret;
+-
+-		if (!(ret & MDIO_STAT1_LSTATUS))
+-			link = false;
+-	}
+-
+ 	return link;
+ }
+ 
+@@ -935,7 +926,7 @@ static int xpcs_get_state_c73(struct dw_xpcs *xpcs,
+ 	int ret;
+ 
+ 	/* Link needs to be read first ... */
+-	state->link = xpcs_read_link_c73(xpcs, state->an_enabled) > 0 ? 1 : 0;
++	state->link = xpcs_read_link_c73(xpcs) > 0 ? 1 : 0;
+ 
+ 	/* ... and then we check the faults. */
+ 	ret = xpcs_read_fault_c73(xpcs, state);
+diff --git a/drivers/net/wireless/ath/ath11k/ahb.c b/drivers/net/wireless/ath/ath11k/ahb.c
+index 920abce9053a5..5cbba9a8b6ba9 100644
+--- a/drivers/net/wireless/ath/ath11k/ahb.c
++++ b/drivers/net/wireless/ath/ath11k/ahb.c
+@@ -874,11 +874,11 @@ static int ath11k_ahb_setup_msi_resources(struct ath11k_base *ab)
+ 	ab->pci.msi.ep_base_data = int_prop + 32;
+ 
+ 	for (i = 0; i < ab->pci.msi.config->total_vectors; i++) {
+-		res = platform_get_resource(pdev, IORESOURCE_IRQ, i);
+-		if (!res)
+-			return -ENODEV;
++		ret = platform_get_irq(pdev, i);
++		if (ret < 0)
++			return ret;
+ 
+-		ab->pci.msi.irqs[i] = res->start;
++		ab->pci.msi.irqs[i] = ret;
+ 	}
+ 
+ 	set_bit(ATH11K_FLAG_MULTI_MSI_VECTORS, &ab->dev_flags);
+@@ -1078,6 +1078,12 @@ static int ath11k_ahb_fw_resource_deinit(struct ath11k_base *ab)
+ 	struct iommu_domain *iommu;
+ 	size_t unmapped_size;
+ 
++	/* Chipsets not requiring MSA would have not initialized
++	 * MSA resources, return success in such cases.
++	 */
++	if (!ab->hw_params.fixed_fw_mem)
++		return 0;
++
+ 	if (ab_ahb->fw.use_tz)
+ 		return 0;
+ 
+@@ -1174,7 +1180,7 @@ static int ath11k_ahb_probe(struct platform_device *pdev)
+ 		 * to a new space for accessing them.
+ 		 */
+ 		ab->mem_ce = ioremap(ce_remap->base, ce_remap->size);
+-		if (IS_ERR(ab->mem_ce)) {
++		if (!ab->mem_ce) {
+ 			dev_err(&pdev->dev, "ce ioremap error\n");
+ 			ret = -ENOMEM;
+ 			goto err_core_free;
+diff --git a/drivers/net/wireless/ath/ath11k/dbring.c b/drivers/net/wireless/ath/ath11k/dbring.c
+index 2107ec05d14fd..5536e86423312 100644
+--- a/drivers/net/wireless/ath/ath11k/dbring.c
++++ b/drivers/net/wireless/ath/ath11k/dbring.c
+@@ -26,13 +26,13 @@ int ath11k_dbring_validate_buffer(struct ath11k *ar, void *buffer, u32 size)
+ static void ath11k_dbring_fill_magic_value(struct ath11k *ar,
+ 					   void *buffer, u32 size)
+ {
+-	u32 *temp;
+-	int idx;
+-
+-	size = size >> 2;
++	/* memset32 function fills buffer payload with the ATH11K_DB_MAGIC_VALUE
++	 * and the variable size is expected to be the number of u32 values
++	 * to be stored, not the number of bytes.
++	 */
++	size = size / sizeof(u32);
+ 
+-	for (idx = 0, temp = buffer; idx < size; idx++, temp++)
+-		*temp++ = ATH11K_DB_MAGIC_VALUE;
++	memset32(buffer, ATH11K_DB_MAGIC_VALUE, size);
+ }
+ 
+ static int ath11k_dbring_bufs_replenish(struct ath11k *ar,
+diff --git a/drivers/net/wireless/ath/ath11k/peer.c b/drivers/net/wireless/ath/ath11k/peer.c
+index 1ae7af02c364e..1380811827a84 100644
+--- a/drivers/net/wireless/ath/ath11k/peer.c
++++ b/drivers/net/wireless/ath/ath11k/peer.c
+@@ -382,22 +382,23 @@ int ath11k_peer_create(struct ath11k *ar, struct ath11k_vif *arvif,
+ 		return -ENOBUFS;
+ 	}
+ 
++	mutex_lock(&ar->ab->tbl_mtx_lock);
+ 	spin_lock_bh(&ar->ab->base_lock);
+ 	peer = ath11k_peer_find_by_addr(ar->ab, param->peer_addr);
+ 	if (peer) {
+ 		if (peer->vdev_id == param->vdev_id) {
+ 			spin_unlock_bh(&ar->ab->base_lock);
++			mutex_unlock(&ar->ab->tbl_mtx_lock);
+ 			return -EINVAL;
+ 		}
+ 
+ 		/* Assume sta is transitioning to another band.
+ 		 * Remove here the peer from rhash.
+ 		 */
+-		mutex_lock(&ar->ab->tbl_mtx_lock);
+ 		ath11k_peer_rhash_delete(ar->ab, peer);
+-		mutex_unlock(&ar->ab->tbl_mtx_lock);
+ 	}
+ 	spin_unlock_bh(&ar->ab->base_lock);
++	mutex_unlock(&ar->ab->tbl_mtx_lock);
+ 
+ 	ret = ath11k_wmi_send_peer_create_cmd(ar, param);
+ 	if (ret) {
+diff --git a/drivers/net/wireless/ath/ath12k/dp_tx.c b/drivers/net/wireless/ath/ath12k/dp_tx.c
+index 95294f35155c4..fd8d850f9818f 100644
+--- a/drivers/net/wireless/ath/ath12k/dp_tx.c
++++ b/drivers/net/wireless/ath/ath12k/dp_tx.c
+@@ -270,7 +270,7 @@ tcl_ring_sel:
+ 					  skb_ext_desc->len, DMA_TO_DEVICE);
+ 		ret = dma_mapping_error(ab->dev, ti.paddr);
+ 		if (ret) {
+-			kfree(skb_ext_desc);
++			kfree_skb(skb_ext_desc);
+ 			goto fail_unmap_dma;
+ 		}
+ 
+diff --git a/drivers/net/wireless/ath/ath12k/pci.c b/drivers/net/wireless/ath/ath12k/pci.c
+index ae7f6083c9fc2..f523aa15885f6 100644
+--- a/drivers/net/wireless/ath/ath12k/pci.c
++++ b/drivers/net/wireless/ath/ath12k/pci.c
+@@ -1195,7 +1195,8 @@ static int ath12k_pci_probe(struct pci_dev *pdev,
+ 			dev_err(&pdev->dev,
+ 				"Unknown hardware version found for QCN9274: 0x%x\n",
+ 				soc_hw_version_major);
+-			return -EOPNOTSUPP;
++			ret = -EOPNOTSUPP;
++			goto err_pci_free_region;
+ 		}
+ 		break;
+ 	case WCN7850_DEVICE_ID:
+diff --git a/drivers/net/wireless/ath/ath5k/ahb.c b/drivers/net/wireless/ath/ath5k/ahb.c
+index 2c9cec8b53d9e..28a1e5eff204e 100644
+--- a/drivers/net/wireless/ath/ath5k/ahb.c
++++ b/drivers/net/wireless/ath/ath5k/ahb.c
+@@ -113,15 +113,13 @@ static int ath_ahb_probe(struct platform_device *pdev)
+ 		goto err_out;
+ 	}
+ 
+-	res = platform_get_resource(pdev, IORESOURCE_IRQ, 0);
+-	if (res == NULL) {
+-		dev_err(&pdev->dev, "no IRQ resource found\n");
+-		ret = -ENXIO;
++	irq = platform_get_irq(pdev, 0);
++	if (irq < 0) {
++		dev_err(&pdev->dev, "no IRQ resource found: %d\n", irq);
++		ret = irq;
+ 		goto err_iounmap;
+ 	}
+ 
+-	irq = res->start;
+-
+ 	hw = ieee80211_alloc_hw(sizeof(struct ath5k_hw), &ath5k_hw_ops);
+ 	if (hw == NULL) {
+ 		dev_err(&pdev->dev, "no memory for ieee80211_hw\n");
+diff --git a/drivers/net/wireless/ath/ath5k/eeprom.c b/drivers/net/wireless/ath/ath5k/eeprom.c
+index d444b3d70ba2e..58d3e86f6256d 100644
+--- a/drivers/net/wireless/ath/ath5k/eeprom.c
++++ b/drivers/net/wireless/ath/ath5k/eeprom.c
+@@ -529,7 +529,7 @@ ath5k_eeprom_read_freq_list(struct ath5k_hw *ah, int *offset, int max,
+ 		ee->ee_n_piers[mode]++;
+ 
+ 		freq2 = (val >> 8) & 0xff;
+-		if (!freq2)
++		if (!freq2 || i >= max)
+ 			break;
+ 
+ 		pc[i++].freq = ath5k_eeprom_bin2freq(ee,
+diff --git a/drivers/net/wireless/ath/ath6kl/bmi.c b/drivers/net/wireless/ath/ath6kl/bmi.c
+index bde5a10d470c8..af98e871199d3 100644
+--- a/drivers/net/wireless/ath/ath6kl/bmi.c
++++ b/drivers/net/wireless/ath/ath6kl/bmi.c
+@@ -246,7 +246,7 @@ int ath6kl_bmi_execute(struct ath6kl *ar, u32 addr, u32 *param)
+ 		return -EACCES;
+ 	}
+ 
+-	size = sizeof(cid) + sizeof(addr) + sizeof(param);
++	size = sizeof(cid) + sizeof(addr) + sizeof(*param);
+ 	if (size > ar->bmi.max_cmd_size) {
+ 		WARN_ON(1);
+ 		return -EINVAL;
+diff --git a/drivers/net/wireless/ath/ath6kl/htc_pipe.c b/drivers/net/wireless/ath/ath6kl/htc_pipe.c
+index c68848819a52d..9b88d96bfe96c 100644
+--- a/drivers/net/wireless/ath/ath6kl/htc_pipe.c
++++ b/drivers/net/wireless/ath/ath6kl/htc_pipe.c
+@@ -960,8 +960,8 @@ static int ath6kl_htc_pipe_rx_complete(struct ath6kl *ar, struct sk_buff *skb,
+ 	 * Thus the possibility of ar->htc_target being NULL
+ 	 * via ath6kl_recv_complete -> ath6kl_usb_io_comp_work.
+ 	 */
+-	if (WARN_ON_ONCE(!target)) {
+-		ath6kl_err("Target not yet initialized\n");
++	if (!target) {
++		ath6kl_dbg(ATH6KL_DBG_HTC, "Target not yet initialized\n");
+ 		status = -EINVAL;
+ 		goto free_skb;
+ 	}
+diff --git a/drivers/net/wireless/ath/ath9k/hif_usb.c b/drivers/net/wireless/ath/ath9k/hif_usb.c
+index f521dfa2f1945..e0130beb304df 100644
+--- a/drivers/net/wireless/ath/ath9k/hif_usb.c
++++ b/drivers/net/wireless/ath/ath9k/hif_usb.c
+@@ -534,6 +534,24 @@ static struct ath9k_htc_hif hif_usb = {
+ 	.send = hif_usb_send,
+ };
+ 
++/* Need to free remain_skb allocated in ath9k_hif_usb_rx_stream
++ * in case ath9k_hif_usb_rx_stream wasn't called next time to
++ * process the buffer and subsequently free it.
++ */
++static void ath9k_hif_usb_free_rx_remain_skb(struct hif_device_usb *hif_dev)
++{
++	unsigned long flags;
++
++	spin_lock_irqsave(&hif_dev->rx_lock, flags);
++	if (hif_dev->remain_skb) {
++		dev_kfree_skb_any(hif_dev->remain_skb);
++		hif_dev->remain_skb = NULL;
++		hif_dev->rx_remain_len = 0;
++		RX_STAT_INC(hif_dev, skb_dropped);
++	}
++	spin_unlock_irqrestore(&hif_dev->rx_lock, flags);
++}
++
+ static void ath9k_hif_usb_rx_stream(struct hif_device_usb *hif_dev,
+ 				    struct sk_buff *skb)
+ {
+@@ -868,6 +886,7 @@ err:
+ static void ath9k_hif_usb_dealloc_rx_urbs(struct hif_device_usb *hif_dev)
+ {
+ 	usb_kill_anchored_urbs(&hif_dev->rx_submitted);
++	ath9k_hif_usb_free_rx_remain_skb(hif_dev);
+ }
+ 
+ static int ath9k_hif_usb_alloc_rx_urbs(struct hif_device_usb *hif_dev)
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+index 5a9f713ea703e..3d33e687964ad 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+@@ -6494,18 +6494,20 @@ static s32 brcmf_notify_rssi(struct brcmf_if *ifp,
+ {
+ 	struct brcmf_cfg80211_vif *vif = ifp->vif;
+ 	struct brcmf_rssi_be *info = data;
+-	s32 rssi, snr, noise;
++	s32 rssi, snr = 0, noise = 0;
+ 	s32 low, high, last;
+ 
+-	if (e->datalen < sizeof(*info)) {
++	if (e->datalen >= sizeof(*info)) {
++		rssi = be32_to_cpu(info->rssi);
++		snr = be32_to_cpu(info->snr);
++		noise = be32_to_cpu(info->noise);
++	} else if (e->datalen >= sizeof(rssi)) {
++		rssi = be32_to_cpu(*(__be32 *)data);
++	} else {
+ 		brcmf_err("insufficient RSSI event data\n");
+ 		return 0;
+ 	}
+ 
+-	rssi = be32_to_cpu(info->rssi);
+-	snr = be32_to_cpu(info->snr);
+-	noise = be32_to_cpu(info->noise);
+-
+ 	low = vif->cqm_rssi_low;
+ 	high = vif->cqm_rssi_high;
+ 	last = vif->cqm_rssi_last;
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/d3.h b/drivers/net/wireless/intel/iwlwifi/fw/api/d3.h
+index df0833890e55a..8a613e150a024 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/api/d3.h
++++ b/drivers/net/wireless/intel/iwlwifi/fw/api/d3.h
+@@ -767,7 +767,7 @@ struct iwl_wowlan_status_v12 {
+ } __packed; /* WOWLAN_STATUSES_RSP_API_S_VER_12 */
+ 
+ /**
+- * struct iwl_wowlan_info_notif - WoWLAN information notification
++ * struct iwl_wowlan_info_notif_v1 - WoWLAN information notification
+  * @gtk: GTK data
+  * @igtk: IGTK data
+  * @replay_ctr: GTK rekey replay counter
+@@ -785,7 +785,7 @@ struct iwl_wowlan_status_v12 {
+  * @station_id: station id
+  * @reserved2: reserved
+  */
+-struct iwl_wowlan_info_notif {
++struct iwl_wowlan_info_notif_v1 {
+ 	struct iwl_wowlan_gtk_status_v3 gtk[WOWLAN_GTK_KEYS_NUM];
+ 	struct iwl_wowlan_igtk_status igtk[WOWLAN_IGTK_KEYS_NUM];
+ 	__le64 replay_ctr;
+@@ -803,6 +803,39 @@ struct iwl_wowlan_info_notif {
+ 	u8 reserved2[2];
+ } __packed; /* WOWLAN_INFO_NTFY_API_S_VER_1 */
+ 
++/**
++ * struct iwl_wowlan_info_notif - WoWLAN information notification
++ * @gtk: GTK data
++ * @igtk: IGTK data
++ * @replay_ctr: GTK rekey replay counter
++ * @pattern_number: number of the matched patterns
++ * @reserved1: reserved
++ * @qos_seq_ctr: QoS sequence counters to use next
++ * @wakeup_reasons: wakeup reasons, see &enum iwl_wowlan_wakeup_reason
++ * @num_of_gtk_rekeys: number of GTK rekeys
++ * @transmitted_ndps: number of transmitted neighbor discovery packets
++ * @received_beacons: number of received beacons
++ * @tid_tear_down: bit mask of tids whose BA sessions were closed
++ *	in suspend state
++ * @station_id: station id
++ * @reserved2: reserved
++ */
++struct iwl_wowlan_info_notif {
++	struct iwl_wowlan_gtk_status_v3 gtk[WOWLAN_GTK_KEYS_NUM];
++	struct iwl_wowlan_igtk_status igtk[WOWLAN_IGTK_KEYS_NUM];
++	__le64 replay_ctr;
++	__le16 pattern_number;
++	__le16 reserved1;
++	__le16 qos_seq_ctr[8];
++	__le32 wakeup_reasons;
++	__le32 num_of_gtk_rekeys;
++	__le32 transmitted_ndps;
++	__le32 received_beacons;
++	u8 tid_tear_down;
++	u8 station_id;
++	u8 reserved2[2];
++} __packed; /* WOWLAN_INFO_NTFY_API_S_VER_2 */
++
+ /**
+  * struct iwl_wowlan_wake_pkt_notif - WoWLAN wake packet notification
+  * @wake_packet_length: wakeup packet length
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/dbg.c b/drivers/net/wireless/intel/iwlwifi/fw/dbg.c
+index abf49022edbe4..027360e63b926 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/dbg.c
++++ b/drivers/net/wireless/intel/iwlwifi/fw/dbg.c
+@@ -1038,7 +1038,7 @@ iwl_dump_ini_prph_mac_iter(struct iwl_fw_runtime *fwrt,
+ 	range->range_data_size = reg->dev_addr.size;
+ 	for (i = 0; i < le32_to_cpu(reg->dev_addr.size); i += 4) {
+ 		prph_val = iwl_read_prph(fwrt->trans, addr + i);
+-		if (prph_val == 0x5a5a5a5a)
++		if ((prph_val & ~0xf) == 0xa5a5a5a0)
+ 			return -EBUSY;
+ 		*val++ = cpu_to_le32(prph_val);
+ 	}
+@@ -1388,13 +1388,13 @@ static void iwl_ini_get_rxf_data(struct iwl_fw_runtime *fwrt,
+ 	if (!data)
+ 		return;
+ 
++	memset(data, 0, sizeof(*data));
++
+ 	/* make sure only one bit is set in only one fid */
+ 	if (WARN_ONCE(hweight_long(fid1) + hweight_long(fid2) != 1,
+ 		      "fid1=%x, fid2=%x\n", fid1, fid2))
+ 		return;
+ 
+-	memset(data, 0, sizeof(*data));
+-
+ 	if (fid1) {
+ 		fifo_idx = ffs(fid1) - 1;
+ 		if (WARN_ONCE(fifo_idx >= MAX_NUM_LMAC, "fifo_idx=%d\n",
+@@ -1562,7 +1562,7 @@ iwl_dump_ini_dbgi_sram_iter(struct iwl_fw_runtime *fwrt,
+ 		prph_data = iwl_read_prph_no_grab(fwrt->trans, (i % 2) ?
+ 					  DBGI_SRAM_TARGET_ACCESS_RDATA_MSB :
+ 					  DBGI_SRAM_TARGET_ACCESS_RDATA_LSB);
+-		if (prph_data == 0x5a5a5a5a) {
++		if ((prph_data & ~0xf) == 0xa5a5a5a0) {
+ 			iwl_trans_release_nic_access(fwrt->trans);
+ 			return -EBUSY;
+ 		}
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/debugfs.c b/drivers/net/wireless/intel/iwlwifi/fw/debugfs.c
+index 43e997283db0f..607e07ed2477c 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/debugfs.c
++++ b/drivers/net/wireless/intel/iwlwifi/fw/debugfs.c
+@@ -317,8 +317,10 @@ static void *iwl_dbgfs_fw_info_seq_next(struct seq_file *seq,
+ 	const struct iwl_fw *fw = priv->fwrt->fw;
+ 
+ 	*pos = ++state->pos;
+-	if (*pos >= fw->ucode_capa.n_cmd_versions)
++	if (*pos >= fw->ucode_capa.n_cmd_versions) {
++		kfree(state);
+ 		return NULL;
++	}
+ 
+ 	return state;
+ }
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c b/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
+index 48e7376a5fea7..7ed4991f8dab9 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
+@@ -138,6 +138,12 @@ static int iwl_dbg_tlv_alloc_buf_alloc(struct iwl_trans *trans,
+ 	    alloc_id != IWL_FW_INI_ALLOCATION_ID_DBGC1)
+ 		goto err;
+ 
++	if (buf_location == IWL_FW_INI_LOCATION_DRAM_PATH &&
++	    alloc->req_size == 0) {
++		IWL_ERR(trans, "WRT: Invalid DRAM buffer allocation requested size (0)\n");
++		return -EINVAL;
++	}
++
+ 	trans->dbg.fw_mon_cfg[alloc_id] = *alloc;
+ 
+ 	return 0;
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-debug.c b/drivers/net/wireless/intel/iwlwifi/iwl-debug.c
+index ae4c2a3d63d5b..3a3c13a41fc61 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-debug.c
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-debug.c
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
+ /*
+- * Copyright (C) 2005-2011, 2021 Intel Corporation
++ * Copyright (C) 2005-2011, 2021-2022 Intel Corporation
+  */
+ #include <linux/device.h>
+ #include <linux/interrupt.h>
+@@ -57,6 +57,7 @@ void __iwl_err(struct device *dev, enum iwl_err_mode mode, const char *fmt, ...)
+ 	default:
+ 		break;
+ 	}
++	vaf.va = &args;
+ 	trace_iwlwifi_err(&vaf);
+ 	va_end(args);
+ }
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/d3.c b/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
+index c5ad34b063df6..d75fec8d0afd4 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
+@@ -563,6 +563,7 @@ static void iwl_mvm_wowlan_get_tkip_data(struct ieee80211_hw *hw,
+ 		}
+ 
+ 		for (i = 0; i < IWL_NUM_RSC; i++) {
++			ieee80211_get_key_rx_seq(key, i, &seq);
+ 			/* wrapping isn't allowed, AP must rekey */
+ 			if (seq.tkip.iv32 > cur_rx_iv32)
+ 				cur_rx_iv32 = seq.tkip.iv32;
+@@ -2015,6 +2016,12 @@ static void iwl_mvm_parse_wowlan_info_notif(struct iwl_mvm *mvm,
+ {
+ 	u32 i;
+ 
++	if (!data) {
++		IWL_ERR(mvm, "iwl_wowlan_info_notif data is NULL\n");
++		status = NULL;
++		return;
++	}
++
+ 	if (len < sizeof(*data)) {
+ 		IWL_ERR(mvm, "Invalid WoWLAN info notification!\n");
+ 		status = NULL;
+@@ -2702,10 +2709,15 @@ static bool iwl_mvm_wait_d3_notif(struct iwl_notif_wait_data *notif_wait,
+ 	struct iwl_d3_data *d3_data = data;
+ 	u32 len;
+ 	int ret;
++	int wowlan_info_ver = iwl_fw_lookup_notif_ver(mvm->fw,
++						      PROT_OFFLOAD_GROUP,
++						      WOWLAN_INFO_NOTIFICATION,
++						      IWL_FW_CMD_VER_UNKNOWN);
++
+ 
+ 	switch (WIDE_ID(pkt->hdr.group_id, pkt->hdr.cmd)) {
+ 	case WIDE_ID(PROT_OFFLOAD_GROUP, WOWLAN_INFO_NOTIFICATION): {
+-		struct iwl_wowlan_info_notif *notif = (void *)pkt->data;
++		struct iwl_wowlan_info_notif *notif;
+ 
+ 		if (d3_data->notif_received & IWL_D3_NOTIF_WOWLAN_INFO) {
+ 			/* We might get two notifications due to dual bss */
+@@ -2714,10 +2726,32 @@ static bool iwl_mvm_wait_d3_notif(struct iwl_notif_wait_data *notif_wait,
+ 			break;
+ 		}
+ 
++		if (wowlan_info_ver < 2) {
++			struct iwl_wowlan_info_notif_v1 *notif_v1 = (void *)pkt->data;
++
++			notif = kmemdup(notif_v1,
++					offsetofend(struct iwl_wowlan_info_notif,
++						    received_beacons),
++					GFP_ATOMIC);
++
++			if (!notif)
++				return false;
++
++			notif->tid_tear_down = notif_v1->tid_tear_down;
++			notif->station_id = notif_v1->station_id;
++
++		} else {
++			notif = (void *)pkt->data;
++		}
++
+ 		d3_data->notif_received |= IWL_D3_NOTIF_WOWLAN_INFO;
+ 		len = iwl_rx_packet_payload_len(pkt);
+ 		iwl_mvm_parse_wowlan_info_notif(mvm, notif, d3_data->status,
+ 						len);
++
++		if (wowlan_info_ver < 2)
++			kfree(notif);
++
+ 		if (d3_data->status &&
+ 		    d3_data->status->wakeup_reasons & IWL_WOWLAN_WAKEUP_REASON_HAS_WAKEUP_PKT)
+ 			/* We are supposed to get also wake packet notif */
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c b/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c
+index 85b99316d029a..e3fc2f4be0045 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c
+@@ -1745,6 +1745,11 @@ static ssize_t iwl_dbgfs_mem_read(struct file *file, char __user *user_buf,
+ 	if (ret < 0)
+ 		return ret;
+ 
++	if (iwl_rx_packet_payload_len(hcmd.resp_pkt) < sizeof(*rsp)) {
++		ret = -EIO;
++		goto out;
++	}
++
+ 	rsp = (void *)hcmd.resp_pkt->data;
+ 	if (le32_to_cpu(rsp->status) != DEBUG_MEM_STATUS_SUCCESS) {
+ 		ret = -ENXIO;
+@@ -1821,6 +1826,11 @@ static ssize_t iwl_dbgfs_mem_write(struct file *file,
+ 	if (ret < 0)
+ 		return ret;
+ 
++	if (iwl_rx_packet_payload_len(hcmd.resp_pkt) < sizeof(*rsp)) {
++		ret = -EIO;
++		goto out;
++	}
++
+ 	rsp = (void *)hcmd.resp_pkt->data;
+ 	if (rsp->status != DEBUG_MEM_STATUS_SUCCESS) {
+ 		ret = -ENXIO;
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+index b55b1b17f4d19..9fc2d5d8b7d75 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+@@ -5525,7 +5525,10 @@ static bool iwl_mvm_mac_can_aggregate(struct ieee80211_hw *hw,
+ {
+ 	struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
+ 
+-	if (mvm->trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_BZ)
++	if (mvm->trans->trans_cfg->device_family > IWL_DEVICE_FAMILY_BZ ||
++	    (mvm->trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_BZ &&
++	     !(CSR_HW_REV_TYPE(mvm->trans->hw_rev) == IWL_CFG_MAC_TYPE_GL &&
++	       mvm->trans->hw_rev_step == SILICON_A_STEP)))
+ 		return iwl_mvm_tx_csum_bz(mvm, head, true) ==
+ 		       iwl_mvm_tx_csum_bz(mvm, skb, true);
+ 
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/rx.c b/drivers/net/wireless/intel/iwlwifi/mvm/rx.c
+index 49ca1e168fc5b..eee98cebbb46a 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/rx.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/rx.c
+@@ -384,9 +384,10 @@ void iwl_mvm_rx_rx_mpdu(struct iwl_mvm *mvm, struct napi_struct *napi,
+ 		 * Don't even try to decrypt a MCAST frame that was received
+ 		 * before the managed vif is authorized, we'd fail anyway.
+ 		 */
+-		if (vif->type == NL80211_IFTYPE_STATION &&
++		if (is_multicast_ether_addr(hdr->addr1) &&
++		    vif->type == NL80211_IFTYPE_STATION &&
+ 		    !mvmvif->authorized &&
+-		    is_multicast_ether_addr(hdr->addr1)) {
++		    ieee80211_has_protected(hdr->frame_control)) {
+ 			IWL_DEBUG_DROP(mvm, "MCAST before the vif is authorized\n");
+ 			kfree_skb(skb);
+ 			rcu_read_unlock();
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c b/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
+index 549dbe0be223a..e685113172c52 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
+@@ -171,8 +171,7 @@ static int iwl_mvm_create_skb(struct iwl_mvm *mvm, struct sk_buff *skb,
+ 	 * Starting from Bz hardware, it calculates starting directly after
+ 	 * the MAC header, so that matches mac80211's expectation.
+ 	 */
+-	if (skb->ip_summed == CHECKSUM_COMPLETE &&
+-	    mvm->trans->trans_cfg->device_family < IWL_DEVICE_FAMILY_BZ) {
++	if (skb->ip_summed == CHECKSUM_COMPLETE) {
+ 		struct {
+ 			u8 hdr[6];
+ 			__be16 type;
+@@ -187,7 +186,7 @@ static int iwl_mvm_create_skb(struct iwl_mvm *mvm, struct sk_buff *skb,
+ 			      shdr->type != htons(ETH_P_PAE) &&
+ 			      shdr->type != htons(ETH_P_TDLS))))
+ 			skb->ip_summed = CHECKSUM_NONE;
+-		else
++		else if (mvm->trans->trans_cfg->device_family < IWL_DEVICE_FAMILY_BZ)
+ 			/* mac80211 assumes full CSUM including SNAP header */
+ 			skb_postpush_rcsum(skb, shdr, sizeof(*shdr));
+ 	}
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+index 99768d6a60322..a0bf19b18635c 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+@@ -565,7 +565,6 @@ static const struct iwl_dev_info iwl_dev_info_table[] = {
+ 	IWL_DEV_INFO(0x43F0, 0x1652, killer1650i_2ax_cfg_qu_b0_hr_b0, iwl_ax201_killer_1650i_name),
+ 	IWL_DEV_INFO(0x43F0, 0x2074, iwl_ax201_cfg_qu_hr, NULL),
+ 	IWL_DEV_INFO(0x43F0, 0x4070, iwl_ax201_cfg_qu_hr, NULL),
+-	IWL_DEV_INFO(0x43F0, 0x1651, killer1650s_2ax_cfg_qu_b0_hr_b0, iwl_ax201_killer_1650s_name),
+ 	IWL_DEV_INFO(0xA0F0, 0x0070, iwl_ax201_cfg_qu_hr, NULL),
+ 	IWL_DEV_INFO(0xA0F0, 0x0074, iwl_ax201_cfg_qu_hr, NULL),
+ 	IWL_DEV_INFO(0xA0F0, 0x0078, iwl_ax201_cfg_qu_hr, NULL),
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
+index 0a9af1ad1f206..171b6bf4a65a0 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
+@@ -599,7 +599,6 @@ static int iwl_pcie_set_hw_ready(struct iwl_trans *trans)
+ int iwl_pcie_prepare_card_hw(struct iwl_trans *trans)
+ {
+ 	int ret;
+-	int t = 0;
+ 	int iter;
+ 
+ 	IWL_DEBUG_INFO(trans, "iwl_trans_prepare_card_hw enter\n");
+@@ -616,6 +615,8 @@ int iwl_pcie_prepare_card_hw(struct iwl_trans *trans)
+ 	usleep_range(1000, 2000);
+ 
+ 	for (iter = 0; iter < 10; iter++) {
++		int t = 0;
++
+ 		/* If HW is not ready, prepare the conditions to check again */
+ 		iwl_set_bit(trans, CSR_HW_IF_CONFIG_REG,
+ 			    CSR_HW_IF_CONFIG_REG_PREPARE);
+@@ -1522,19 +1523,16 @@ static int iwl_pcie_d3_handshake(struct iwl_trans *trans, bool suspend)
+ 	struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
+ 	int ret;
+ 
+-	if (trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_AX210) {
++	if (trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_AX210)
+ 		iwl_write_umac_prph(trans, UREG_DOORBELL_TO_ISR6,
+ 				    suspend ? UREG_DOORBELL_TO_ISR6_SUSPEND :
+ 					      UREG_DOORBELL_TO_ISR6_RESUME);
+-	} else if (trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_BZ) {
++	else if (trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_BZ)
+ 		iwl_write32(trans, CSR_IPC_SLEEP_CONTROL,
+ 			    suspend ? CSR_IPC_SLEEP_CONTROL_SUSPEND :
+ 				      CSR_IPC_SLEEP_CONTROL_RESUME);
+-		iwl_write_umac_prph(trans, UREG_DOORBELL_TO_ISR6,
+-				    UREG_DOORBELL_TO_ISR6_SLEEP_CTRL);
+-	} else {
++	else
+ 		return 0;
+-	}
+ 
+ 	ret = wait_event_timeout(trans_pcie->sx_waitq,
+ 				 trans_pcie->sx_complete, 2 * HZ);
+diff --git a/drivers/net/wireless/mediatek/mt76/dma.c b/drivers/net/wireless/mediatek/mt76/dma.c
+index da281cd1d36fa..10c6d96dc1491 100644
+--- a/drivers/net/wireless/mediatek/mt76/dma.c
++++ b/drivers/net/wireless/mediatek/mt76/dma.c
+@@ -576,7 +576,9 @@ free:
+ free_skb:
+ 	status.skb = tx_info.skb;
+ 	hw = mt76_tx_status_get_hw(dev, tx_info.skb);
++	spin_lock_bh(&dev->rx_lock);
+ 	ieee80211_tx_status_ext(hw, &status);
++	spin_unlock_bh(&dev->rx_lock);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7603/mac.c b/drivers/net/wireless/mediatek/mt76/mt7603/mac.c
+index 70a7f84af0282..12e0af52082a6 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7603/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7603/mac.c
+@@ -1279,8 +1279,11 @@ void mt7603_mac_add_txs(struct mt7603_dev *dev, void *data)
+ 	if (wcidx >= MT7603_WTBL_STA || !sta)
+ 		goto out;
+ 
+-	if (mt7603_fill_txs(dev, msta, &info, txs_data))
++	if (mt7603_fill_txs(dev, msta, &info, txs_data)) {
++		spin_lock_bh(&dev->mt76.rx_lock);
+ 		ieee80211_tx_status_noskb(mt76_hw(dev), sta, &info);
++		spin_unlock_bh(&dev->mt76.rx_lock);
++	}
+ 
+ out:
+ 	rcu_read_unlock();
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/mac.c b/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
+index 51a968a6afdc9..eafa0f204c1f8 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
+@@ -1530,8 +1530,11 @@ static void mt7615_mac_add_txs(struct mt7615_dev *dev, void *data)
+ 	if (wcid->phy_idx && dev->mt76.phys[MT_BAND1])
+ 		mphy = dev->mt76.phys[MT_BAND1];
+ 
+-	if (mt7615_fill_txs(dev, msta, &info, txs_data))
++	if (mt7615_fill_txs(dev, msta, &info, txs_data)) {
++		spin_lock_bh(&dev->mt76.rx_lock);
+ 		ieee80211_tx_status_noskb(mphy->hw, sta, &info);
++		spin_unlock_bh(&dev->mt76.rx_lock);
++	}
+ 
+ out:
+ 	rcu_read_unlock();
+@@ -2352,7 +2355,7 @@ void mt7615_coredump_work(struct work_struct *work)
+ 			break;
+ 
+ 		skb_pull(skb, sizeof(struct mt7615_mcu_rxd));
+-		if (data + skb->len - dump > MT76_CONNAC_COREDUMP_SZ) {
++		if (!dump || data + skb->len - dump > MT76_CONNAC_COREDUMP_SZ) {
+ 			dev_kfree_skb(skb);
+ 			continue;
+ 		}
+@@ -2362,6 +2365,8 @@ void mt7615_coredump_work(struct work_struct *work)
+ 
+ 		dev_kfree_skb(skb);
+ 	}
+-	dev_coredumpv(dev->mt76.dev, dump, MT76_CONNAC_COREDUMP_SZ,
+-		      GFP_KERNEL);
++
++	if (dump)
++		dev_coredumpv(dev->mt76.dev, dump, MT76_CONNAC_COREDUMP_SZ,
++			      GFP_KERNEL);
+ }
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76_connac_mac.c b/drivers/net/wireless/mediatek/mt76/mt76_connac_mac.c
+index aed4ee95fb2ec..82aac0a04655f 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76_connac_mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76_connac_mac.c
+@@ -537,7 +537,8 @@ void mt76_connac2_mac_write_txwi(struct mt76_dev *dev, __le32 *txwi,
+ 	if (txwi[2] & cpu_to_le32(MT_TXD2_FIX_RATE)) {
+ 		/* Fixed rata is available just for 802.11 txd */
+ 		struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
+-		bool multicast = is_multicast_ether_addr(hdr->addr1);
++		bool multicast = ieee80211_is_data(hdr->frame_control) &&
++				 is_multicast_ether_addr(hdr->addr1);
+ 		u16 rate = mt76_connac2_mac_tx_rate_val(mphy, vif, beacon,
+ 							multicast);
+ 		u32 val = MT_TXD6_FIXED_BW;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
+index 008ece1b16f8e..cb27885b5da7c 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
+@@ -1678,8 +1678,16 @@ int mt76_connac_mcu_hw_scan(struct mt76_phy *phy, struct ieee80211_vif *vif,
+ 	req->channel_min_dwell_time = cpu_to_le16(duration);
+ 	req->channel_dwell_time = cpu_to_le16(duration);
+ 
+-	req->channels_num = min_t(u8, sreq->n_channels, 32);
+-	req->ext_channels_num = min_t(u8, ext_channels_num, 32);
++	if (sreq->n_channels == 0 || sreq->n_channels > 64) {
++		req->channel_type = 0;
++		req->channels_num = 0;
++		req->ext_channels_num = 0;
++	} else {
++		req->channel_type = 4;
++		req->channels_num = min_t(u8, sreq->n_channels, 32);
++		req->ext_channels_num = min_t(u8, ext_channels_num, 32);
++	}
++
+ 	for (i = 0; i < req->channels_num + req->ext_channels_num; i++) {
+ 		if (i >= 32)
+ 			chan = &req->ext_channels[i - 32];
+@@ -1699,7 +1707,6 @@ int mt76_connac_mcu_hw_scan(struct mt76_phy *phy, struct ieee80211_vif *vif,
+ 		}
+ 		chan->channel_num = scan_list[i]->hw_value;
+ 	}
+-	req->channel_type = sreq->n_channels ? 4 : 0;
+ 
+ 	if (sreq->ie_len > 0) {
+ 		memcpy(req->ies, sreq->ie, sreq->ie_len);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h
+index a5e6ee4daf92e..40a99e0caded8 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h
++++ b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h
+@@ -967,9 +967,6 @@ enum {
+ 	DEV_INFO_MAX_NUM
+ };
+ 
+-#define MCU_UNI_CMD_EVENT                       BIT(1)
+-#define MCU_UNI_CMD_UNSOLICITED_EVENT           BIT(2)
+-
+ /* event table */
+ enum {
+ 	MCU_EVENT_TARGET_ADDRESS_LEN = 0x01,
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02_mac.c b/drivers/net/wireless/mediatek/mt76/mt76x02_mac.c
+index d3f74473e6fb0..3e41d809ade39 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x02_mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76x02_mac.c
+@@ -631,8 +631,11 @@ void mt76x02_send_tx_status(struct mt76x02_dev *dev,
+ 
+ 	mt76_tx_status_unlock(mdev, &list);
+ 
+-	if (!status.skb)
++	if (!status.skb) {
++		spin_lock_bh(&dev->mt76.rx_lock);
+ 		ieee80211_tx_status_ext(mt76_hw(dev), &status);
++		spin_unlock_bh(&dev->mt76.rx_lock);
++	}
+ 
+ 	if (!len)
+ 		goto out;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/init.c b/drivers/net/wireless/mediatek/mt76/mt7915/init.c
+index 5e288116b1b01..71ccefd39cb20 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/init.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/init.c
+@@ -89,6 +89,7 @@ static ssize_t mt7915_thermal_temp_store(struct device *dev,
+ 	     val < phy->throttle_temp[MT7915_CRIT_TEMP_IDX])) {
+ 		dev_err(phy->dev->mt76.dev,
+ 			"temp1_max shall be greater than temp1_crit.");
++		mutex_unlock(&phy->dev->mt76.mutex);
+ 		return -EINVAL;
+ 	}
+ 
+@@ -202,6 +203,10 @@ static int mt7915_thermal_init(struct mt7915_phy *phy)
+ 			phy->cdev = cdev;
+ 	}
+ 
++	/* initialize critical/maximum high temperature */
++	phy->throttle_temp[MT7915_CRIT_TEMP_IDX] = MT7915_CRIT_TEMP;
++	phy->throttle_temp[MT7915_MAX_TEMP_IDX] = MT7915_MAX_TEMP;
++
+ 	if (!IS_REACHABLE(CONFIG_HWMON))
+ 		return 0;
+ 
+@@ -210,10 +215,6 @@ static int mt7915_thermal_init(struct mt7915_phy *phy)
+ 	if (IS_ERR(hwmon))
+ 		return PTR_ERR(hwmon);
+ 
+-	/* initialize critical/maximum high temperature */
+-	phy->throttle_temp[MT7915_CRIT_TEMP_IDX] = MT7915_CRIT_TEMP;
+-	phy->throttle_temp[MT7915_MAX_TEMP_IDX] = MT7915_MAX_TEMP;
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/soc.c b/drivers/net/wireless/mediatek/mt76/mt7915/soc.c
+index 2ac0a0f2859cb..32c137066e7f7 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/soc.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/soc.c
+@@ -1239,6 +1239,8 @@ static const struct of_device_id mt7986_wmac_of_match[] = {
+ 	{},
+ };
+ 
++MODULE_DEVICE_TABLE(of, mt7986_wmac_of_match);
++
+ struct platform_driver mt7986_wmac_driver = {
+ 	.driver = {
+ 		.name = "mt7986-wmac",
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/dma.c b/drivers/net/wireless/mediatek/mt76/mt7921/dma.c
+index d1f10f6d9adc3..fd57c87a29ae3 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/dma.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/dma.c
+@@ -66,6 +66,24 @@ static void mt7921_dma_prefetch(struct mt7921_dev *dev)
+ 
+ static int mt7921_dma_disable(struct mt7921_dev *dev, bool force)
+ {
++	/* disable WFDMA0 */
++	mt76_clear(dev, MT_WFDMA0_GLO_CFG,
++		   MT_WFDMA0_GLO_CFG_TX_DMA_EN | MT_WFDMA0_GLO_CFG_RX_DMA_EN |
++		   MT_WFDMA0_GLO_CFG_CSR_DISP_BASE_PTR_CHAIN_EN |
++		   MT_WFDMA0_GLO_CFG_OMIT_TX_INFO |
++		   MT_WFDMA0_GLO_CFG_OMIT_RX_INFO |
++		   MT_WFDMA0_GLO_CFG_OMIT_RX_INFO_PFET2);
++
++	if (!mt76_poll_msec_tick(dev, MT_WFDMA0_GLO_CFG,
++				 MT_WFDMA0_GLO_CFG_TX_DMA_BUSY |
++				 MT_WFDMA0_GLO_CFG_RX_DMA_BUSY, 0, 100, 1))
++		return -ETIMEDOUT;
++
++	/* disable dmashdl */
++	mt76_clear(dev, MT_WFDMA0_GLO_CFG_EXT0,
++		   MT_WFDMA0_CSR_TX_DMASHDL_ENABLE);
++	mt76_set(dev, MT_DMASHDL_SW_CONTROL, MT_DMASHDL_DMASHDL_BYPASS);
++
+ 	if (force) {
+ 		/* reset */
+ 		mt76_clear(dev, MT_WFDMA0_RST,
+@@ -77,24 +95,6 @@ static int mt7921_dma_disable(struct mt7921_dev *dev, bool force)
+ 			 MT_WFDMA0_RST_LOGIC_RST);
+ 	}
+ 
+-	/* disable dmashdl */
+-	mt76_clear(dev, MT_WFDMA0_GLO_CFG_EXT0,
+-		   MT_WFDMA0_CSR_TX_DMASHDL_ENABLE);
+-	mt76_set(dev, MT_DMASHDL_SW_CONTROL, MT_DMASHDL_DMASHDL_BYPASS);
+-
+-	/* disable WFDMA0 */
+-	mt76_clear(dev, MT_WFDMA0_GLO_CFG,
+-		   MT_WFDMA0_GLO_CFG_TX_DMA_EN | MT_WFDMA0_GLO_CFG_RX_DMA_EN |
+-		   MT_WFDMA0_GLO_CFG_CSR_DISP_BASE_PTR_CHAIN_EN |
+-		   MT_WFDMA0_GLO_CFG_OMIT_TX_INFO |
+-		   MT_WFDMA0_GLO_CFG_OMIT_RX_INFO |
+-		   MT_WFDMA0_GLO_CFG_OMIT_RX_INFO_PFET2);
+-
+-	if (!mt76_poll(dev, MT_WFDMA0_GLO_CFG,
+-		       MT_WFDMA0_GLO_CFG_TX_DMA_BUSY |
+-		       MT_WFDMA0_GLO_CFG_RX_DMA_BUSY, 0, 1000))
+-		return -ETIMEDOUT;
+-
+ 	return 0;
+ }
+ 
+@@ -301,6 +301,10 @@ void mt7921_dma_cleanup(struct mt7921_dev *dev)
+ 		   MT_WFDMA0_GLO_CFG_OMIT_RX_INFO |
+ 		   MT_WFDMA0_GLO_CFG_OMIT_RX_INFO_PFET2);
+ 
++	mt76_poll_msec_tick(dev, MT_WFDMA0_GLO_CFG,
++			    MT_WFDMA0_GLO_CFG_TX_DMA_BUSY |
++			    MT_WFDMA0_GLO_CFG_RX_DMA_BUSY, 0, 100, 1);
++
+ 	/* reset */
+ 	mt76_clear(dev, MT_WFDMA0_RST,
+ 		   MT_WFDMA0_RST_DMASHDL_ALL_RST |
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/main.c b/drivers/net/wireless/mediatek/mt76/mt7921/main.c
+index 42933a6b7334b..2c5e4ad3a0c69 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/main.c
+@@ -702,10 +702,25 @@ static void mt7921_configure_filter(struct ieee80211_hw *hw,
+ 				    unsigned int *total_flags,
+ 				    u64 multicast)
+ {
++#define MT7921_FILTER_FCSFAIL    BIT(2)
++#define MT7921_FILTER_CONTROL    BIT(5)
++#define MT7921_FILTER_OTHER_BSS  BIT(6)
++#define MT7921_FILTER_ENABLE     BIT(31)
++
+ 	struct mt7921_dev *dev = mt7921_hw_dev(hw);
++	u32 flags = MT7921_FILTER_ENABLE;
++
++#define MT7921_FILTER(_fif, _type) do {			\
++		if (*total_flags & (_fif))		\
++			flags |= MT7921_FILTER_##_type;	\
++	} while (0)
++
++	MT7921_FILTER(FIF_FCSFAIL, FCSFAIL);
++	MT7921_FILTER(FIF_CONTROL, CONTROL);
++	MT7921_FILTER(FIF_OTHER_BSS, OTHER_BSS);
+ 
+ 	mt7921_mutex_acquire(dev);
+-	mt7921_mcu_set_rxfilter(dev, *total_flags, 0, 0);
++	mt7921_mcu_set_rxfilter(dev, flags, 0, 0);
+ 	mt7921_mutex_release(dev);
+ 
+ 	*total_flags &= (FIF_OTHER_BSS | FIF_FCSFAIL | FIF_CONTROL);
+@@ -1694,7 +1709,7 @@ static void mt7921_ctx_iter(void *priv, u8 *mac,
+ 	if (ctx != mvif->ctx)
+ 		return;
+ 
+-	if (vif->type & NL80211_IFTYPE_MONITOR)
++	if (vif->type == NL80211_IFTYPE_MONITOR)
+ 		mt7921_mcu_config_sniffer(mvif, ctx);
+ 	else
+ 		mt76_connac_mcu_uni_set_chctx(mvif->phy->mt76, &mvif->mt76, ctx);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c
+index c5e7ad06f8777..48f38dbbb91da 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c
+@@ -16,24 +16,6 @@ static bool mt7921_disable_clc;
+ module_param_named(disable_clc, mt7921_disable_clc, bool, 0644);
+ MODULE_PARM_DESC(disable_clc, "disable CLC support");
+ 
+-static int
+-mt7921_mcu_parse_eeprom(struct mt76_dev *dev, struct sk_buff *skb)
+-{
+-	struct mt7921_mcu_eeprom_info *res;
+-	u8 *buf;
+-
+-	if (!skb)
+-		return -EINVAL;
+-
+-	skb_pull(skb, sizeof(struct mt76_connac2_mcu_rxd));
+-
+-	res = (struct mt7921_mcu_eeprom_info *)skb->data;
+-	buf = dev->eeprom.data + le32_to_cpu(res->addr);
+-	memcpy(buf, res->data, 16);
+-
+-	return 0;
+-}
+-
+ int mt7921_mcu_parse_response(struct mt76_dev *mdev, int cmd,
+ 			      struct sk_buff *skb, int seq)
+ {
+@@ -60,8 +42,6 @@ int mt7921_mcu_parse_response(struct mt76_dev *mdev, int cmd,
+ 	} else if (cmd == MCU_EXT_CMD(THERMAL_CTRL)) {
+ 		skb_pull(skb, sizeof(*rxd) + 4);
+ 		ret = le32_to_cpu(*(__le32 *)skb->data);
+-	} else if (cmd == MCU_EXT_CMD(EFUSE_ACCESS)) {
+-		ret = mt7921_mcu_parse_eeprom(mdev, skb);
+ 	} else if (cmd == MCU_UNI_CMD(DEV_INFO_UPDATE) ||
+ 		   cmd == MCU_UNI_CMD(BSS_INFO_UPDATE) ||
+ 		   cmd == MCU_UNI_CMD(STA_REC_UPDATE) ||
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/pci.c b/drivers/net/wireless/mediatek/mt76/mt7921/pci.c
+index 5c23c827abe47..aa4ecf008a3c9 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/pci.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/pci.c
+@@ -115,9 +115,10 @@ static void mt7921e_unregister_device(struct mt7921_dev *dev)
+ 		napi_disable(&dev->mt76.napi[i]);
+ 	cancel_delayed_work_sync(&pm->ps_work);
+ 	cancel_work_sync(&pm->wake_work);
++	cancel_work_sync(&dev->reset_work);
+ 
+ 	mt7921_tx_token_put(dev);
+-	mt7921_mcu_drv_pmctrl(dev);
++	__mt7921_mcu_drv_pmctrl(dev);
+ 	mt7921_dma_cleanup(dev);
+ 	mt7921_wfsys_reset(dev);
+ 	skb_queue_purge(&dev->mt76.mcu.res_q);
+@@ -263,6 +264,7 @@ static int mt7921_pci_probe(struct pci_dev *pdev,
+ 	struct mt76_dev *mdev;
+ 	u8 features;
+ 	int ret;
++	u16 cmd;
+ 
+ 	ret = pcim_enable_device(pdev);
+ 	if (ret)
+@@ -272,6 +274,11 @@ static int mt7921_pci_probe(struct pci_dev *pdev,
+ 	if (ret)
+ 		return ret;
+ 
++	pci_read_config_word(pdev, PCI_COMMAND, &cmd);
++	if (!(cmd & PCI_COMMAND_MEMORY)) {
++		cmd |= PCI_COMMAND_MEMORY;
++		pci_write_config_word(pdev, PCI_COMMAND, cmd);
++	}
+ 	pci_set_master(pdev);
+ 
+ 	ret = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_ALL_TYPES);
+@@ -509,17 +516,7 @@ failed:
+ 
+ static void mt7921_pci_shutdown(struct pci_dev *pdev)
+ {
+-	struct mt76_dev *mdev = pci_get_drvdata(pdev);
+-	struct mt7921_dev *dev = container_of(mdev, struct mt7921_dev, mt76);
+-	struct mt76_connac_pm *pm = &dev->pm;
+-
+-	cancel_delayed_work_sync(&pm->ps_work);
+-	cancel_work_sync(&pm->wake_work);
+-
+-	/* chip cleanup before reboot */
+-	mt7921_mcu_drv_pmctrl(dev);
+-	mt7921_dma_cleanup(dev);
+-	mt7921_wfsys_reset(dev);
++	mt7921_pci_remove(pdev);
+ }
+ 
+ static DEFINE_SIMPLE_DEV_PM_OPS(mt7921_pm_ops, mt7921_pci_suspend, mt7921_pci_resume);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/usb.c b/drivers/net/wireless/mediatek/mt76/mt7921/usb.c
+index 8fef09ed29c91..70c9bbdbf60e9 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/usb.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/usb.c
+@@ -272,7 +272,7 @@ static int mt7921u_probe(struct usb_interface *usb_intf,
+ 
+ 	ret = mt7921u_dma_init(dev, false);
+ 	if (ret)
+-		return ret;
++		goto error;
+ 
+ 	hw = mt76_hw(dev);
+ 	/* check hw sg support in order to enable AMSDU */
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/eeprom.h b/drivers/net/wireless/mediatek/mt76/mt7996/eeprom.h
+index 8da599e0abeac..cfc48698539b3 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/eeprom.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/eeprom.h
+@@ -31,11 +31,11 @@ enum mt7996_eeprom_field {
+ #define MT_EE_WIFI_CONF2_BAND_SEL		GENMASK(2, 0)
+ 
+ #define MT_EE_WIFI_CONF1_TX_PATH_BAND0		GENMASK(5, 3)
+-#define MT_EE_WIFI_CONF2_TX_PATH_BAND1		GENMASK(5, 3)
+-#define MT_EE_WIFI_CONF2_TX_PATH_BAND2		GENMASK(2, 0)
++#define MT_EE_WIFI_CONF2_TX_PATH_BAND1		GENMASK(2, 0)
++#define MT_EE_WIFI_CONF2_TX_PATH_BAND2		GENMASK(5, 3)
+ #define MT_EE_WIFI_CONF4_STREAM_NUM_BAND0	GENMASK(5, 3)
+-#define MT_EE_WIFI_CONF5_STREAM_NUM_BAND1	GENMASK(5, 3)
+-#define MT_EE_WIFI_CONF5_STREAM_NUM_BAND2	GENMASK(2, 0)
++#define MT_EE_WIFI_CONF5_STREAM_NUM_BAND1	GENMASK(2, 0)
++#define MT_EE_WIFI_CONF5_STREAM_NUM_BAND2	GENMASK(5, 3)
+ 
+ #define MT_EE_RATE_DELTA_MASK			GENMASK(5, 0)
+ #define MT_EE_RATE_DELTA_SIGN			BIT(6)
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/mac.c b/drivers/net/wireless/mediatek/mt76/mt7996/mac.c
+index c9a9f0e317716..1a715d01b0a3b 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/mac.c
+@@ -260,12 +260,9 @@ mt7996_mac_decode_he_radiotap_ru(struct mt76_rx_status *status,
+ 				 struct ieee80211_radiotap_he *he,
+ 				 __le32 *rxv)
+ {
+-	u32 ru_h, ru_l;
+-	u8 ru, offs = 0;
++	u32 ru, offs = 0;
+ 
+-	ru_l = le32_get_bits(rxv[0], MT_PRXV_HE_RU_ALLOC_L);
+-	ru_h = le32_get_bits(rxv[1], MT_PRXV_HE_RU_ALLOC_H);
+-	ru = (u8)(ru_l | ru_h << 4);
++	ru = le32_get_bits(rxv[0], MT_PRXV_HE_RU_ALLOC);
+ 
+ 	status->bw = RATE_INFO_BW_HE_RU;
+ 
+@@ -330,18 +327,23 @@ mt7996_mac_decode_he_mu_radiotap(struct sk_buff *skb, __le32 *rxv)
+ 
+ 	he_mu->flags2 |= MU_PREP(FLAGS2_BW_FROM_SIG_A_BW, status->bw) |
+ 			 MU_PREP(FLAGS2_SIG_B_SYMS_USERS,
+-				 le32_get_bits(rxv[2], MT_CRXV_HE_NUM_USER));
++				 le32_get_bits(rxv[4], MT_CRXV_HE_NUM_USER));
+ 
+-	he_mu->ru_ch1[0] = le32_get_bits(rxv[3], MT_CRXV_HE_RU0);
++	he_mu->ru_ch1[0] = le32_get_bits(rxv[16], MT_CRXV_HE_RU0) & 0xff;
+ 
+ 	if (status->bw >= RATE_INFO_BW_40) {
+ 		he_mu->flags1 |= HE_BITS(MU_FLAGS1_CH2_RU_KNOWN);
+-		he_mu->ru_ch2[0] = le32_get_bits(rxv[3], MT_CRXV_HE_RU1);
++		he_mu->ru_ch2[0] = le32_get_bits(rxv[16], MT_CRXV_HE_RU1) & 0xff;
+ 	}
+ 
+ 	if (status->bw >= RATE_INFO_BW_80) {
+-		he_mu->ru_ch1[1] = le32_get_bits(rxv[3], MT_CRXV_HE_RU2);
+-		he_mu->ru_ch2[1] = le32_get_bits(rxv[3], MT_CRXV_HE_RU3);
++		u32 ru_h, ru_l;
++
++		he_mu->ru_ch1[1] = le32_get_bits(rxv[16], MT_CRXV_HE_RU2) & 0xff;
++
++		ru_l = le32_get_bits(rxv[16], MT_CRXV_HE_RU3_L);
++		ru_h = le32_get_bits(rxv[17], MT_CRXV_HE_RU3_H) & 0x7;
++		he_mu->ru_ch2[1] = (u8)(ru_l | ru_h << 4);
+ 	}
+ }
+ 
+@@ -364,23 +366,23 @@ mt7996_mac_decode_he_radiotap(struct sk_buff *skb, __le32 *rxv, u8 mode)
+ 			 HE_BITS(DATA2_TXOP_KNOWN),
+ 	};
+ 	struct ieee80211_radiotap_he *he = NULL;
+-	u32 ltf_size = le32_get_bits(rxv[2], MT_CRXV_HE_LTF_SIZE) + 1;
++	u32 ltf_size = le32_get_bits(rxv[4], MT_CRXV_HE_LTF_SIZE) + 1;
+ 
+ 	status->flag |= RX_FLAG_RADIOTAP_HE;
+ 
+ 	he = skb_push(skb, sizeof(known));
+ 	memcpy(he, &known, sizeof(known));
+ 
+-	he->data3 = HE_PREP(DATA3_BSS_COLOR, BSS_COLOR, rxv[14]) |
+-		    HE_PREP(DATA3_LDPC_XSYMSEG, LDPC_EXT_SYM, rxv[2]);
+-	he->data4 = HE_PREP(DATA4_SU_MU_SPTL_REUSE, SR_MASK, rxv[11]);
+-	he->data5 = HE_PREP(DATA5_PE_DISAMBIG, PE_DISAMBIG, rxv[2]) |
++	he->data3 = HE_PREP(DATA3_BSS_COLOR, BSS_COLOR, rxv[9]) |
++		    HE_PREP(DATA3_LDPC_XSYMSEG, LDPC_EXT_SYM, rxv[4]);
++	he->data4 = HE_PREP(DATA4_SU_MU_SPTL_REUSE, SR_MASK, rxv[13]);
++	he->data5 = HE_PREP(DATA5_PE_DISAMBIG, PE_DISAMBIG, rxv[5]) |
+ 		    le16_encode_bits(ltf_size,
+ 				     IEEE80211_RADIOTAP_HE_DATA5_LTF_SIZE);
+ 	if (le32_to_cpu(rxv[0]) & MT_PRXV_TXBF)
+ 		he->data5 |= HE_BITS(DATA5_TXBF);
+-	he->data6 = HE_PREP(DATA6_TXOP, TXOP_DUR, rxv[14]) |
+-		    HE_PREP(DATA6_DOPPLER, DOPPLER, rxv[14]);
++	he->data6 = HE_PREP(DATA6_TXOP, TXOP_DUR, rxv[9]) |
++		    HE_PREP(DATA6_DOPPLER, DOPPLER, rxv[9]);
+ 
+ 	switch (mode) {
+ 	case MT_PHY_TYPE_HE_SU:
+@@ -389,22 +391,22 @@ mt7996_mac_decode_he_radiotap(struct sk_buff *skb, __le32 *rxv, u8 mode)
+ 			     HE_BITS(DATA1_BEAM_CHANGE_KNOWN) |
+ 			     HE_BITS(DATA1_BW_RU_ALLOC_KNOWN);
+ 
+-		he->data3 |= HE_PREP(DATA3_BEAM_CHANGE, BEAM_CHNG, rxv[14]) |
+-			     HE_PREP(DATA3_UL_DL, UPLINK, rxv[2]);
++		he->data3 |= HE_PREP(DATA3_BEAM_CHANGE, BEAM_CHNG, rxv[8]) |
++			     HE_PREP(DATA3_UL_DL, UPLINK, rxv[5]);
+ 		break;
+ 	case MT_PHY_TYPE_HE_EXT_SU:
+ 		he->data1 |= HE_BITS(DATA1_FORMAT_EXT_SU) |
+ 			     HE_BITS(DATA1_UL_DL_KNOWN) |
+ 			     HE_BITS(DATA1_BW_RU_ALLOC_KNOWN);
+ 
+-		he->data3 |= HE_PREP(DATA3_UL_DL, UPLINK, rxv[2]);
++		he->data3 |= HE_PREP(DATA3_UL_DL, UPLINK, rxv[5]);
+ 		break;
+ 	case MT_PHY_TYPE_HE_MU:
+ 		he->data1 |= HE_BITS(DATA1_FORMAT_MU) |
+ 			     HE_BITS(DATA1_UL_DL_KNOWN);
+ 
+-		he->data3 |= HE_PREP(DATA3_UL_DL, UPLINK, rxv[2]);
+-		he->data4 |= HE_PREP(DATA4_MU_STA_ID, MU_AID, rxv[7]);
++		he->data3 |= HE_PREP(DATA3_UL_DL, UPLINK, rxv[5]);
++		he->data4 |= HE_PREP(DATA4_MU_STA_ID, MU_AID, rxv[8]);
+ 
+ 		mt7996_mac_decode_he_radiotap_ru(status, he, rxv);
+ 		mt7996_mac_decode_he_mu_radiotap(skb, rxv);
+@@ -415,10 +417,10 @@ mt7996_mac_decode_he_radiotap(struct sk_buff *skb, __le32 *rxv, u8 mode)
+ 			     HE_BITS(DATA1_SPTL_REUSE3_KNOWN) |
+ 			     HE_BITS(DATA1_SPTL_REUSE4_KNOWN);
+ 
+-		he->data4 |= HE_PREP(DATA4_TB_SPTL_REUSE1, SR_MASK, rxv[11]) |
+-			     HE_PREP(DATA4_TB_SPTL_REUSE2, SR1_MASK, rxv[11]) |
+-			     HE_PREP(DATA4_TB_SPTL_REUSE3, SR2_MASK, rxv[11]) |
+-			     HE_PREP(DATA4_TB_SPTL_REUSE4, SR3_MASK, rxv[11]);
++		he->data4 |= HE_PREP(DATA4_TB_SPTL_REUSE1, SR_MASK, rxv[13]) |
++			     HE_PREP(DATA4_TB_SPTL_REUSE2, SR1_MASK, rxv[13]) |
++			     HE_PREP(DATA4_TB_SPTL_REUSE3, SR2_MASK, rxv[13]) |
++			     HE_PREP(DATA4_TB_SPTL_REUSE4, SR3_MASK, rxv[13]);
+ 
+ 		mt7996_mac_decode_he_radiotap_ru(status, he, rxv);
+ 		break;
+@@ -979,8 +981,9 @@ mt7996_mac_write_txwi_80211(struct mt7996_dev *dev, __le32 *txwi,
+ }
+ 
+ void mt7996_mac_write_txwi(struct mt7996_dev *dev, __le32 *txwi,
+-			   struct sk_buff *skb, struct mt76_wcid *wcid, int pid,
+-			   struct ieee80211_key_conf *key, u32 changed)
++			   struct sk_buff *skb, struct mt76_wcid *wcid,
++			   struct ieee80211_key_conf *key, int pid,
++			   enum mt76_txq_id qid, u32 changed)
+ {
+ 	struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
+ 	struct ieee80211_vif *vif = info->control.vif;
+@@ -1011,7 +1014,7 @@ void mt7996_mac_write_txwi(struct mt7996_dev *dev, __le32 *txwi,
+ 	} else if (beacon) {
+ 		p_fmt = MT_TX_TYPE_FW;
+ 		q_idx = MT_LMAC_BCN0;
+-	} else if (skb_get_queue_mapping(skb) >= MT_TXQ_PSD) {
++	} else if (qid >= MT_TXQ_PSD) {
+ 		p_fmt = MT_TX_TYPE_CT;
+ 		q_idx = MT_LMAC_ALTX0;
+ 	} else {
+@@ -1117,11 +1120,8 @@ int mt7996_tx_prepare_skb(struct mt76_dev *mdev, void *txwi_ptr,
+ 		return id;
+ 
+ 	pid = mt76_tx_status_skb_add(mdev, wcid, tx_info->skb);
+-	memset(txwi_ptr, 0, MT_TXD_SIZE);
+-	/* Transmit non qos data by 802.11 header and need to fill txd by host*/
+-	if (!is_8023 || pid >= MT_PACKET_ID_FIRST)
+-		mt7996_mac_write_txwi(dev, txwi_ptr, tx_info->skb, wcid, pid,
+-				      key, 0);
++	mt7996_mac_write_txwi(dev, txwi_ptr, tx_info->skb, wcid, key,
++			      pid, qid, 0);
+ 
+ 	txp = (struct mt76_connac_txp_common *)(txwi + MT_TXD_SIZE);
+ 	for (i = 0; i < nbuf; i++) {
+@@ -1130,10 +1130,8 @@ int mt7996_tx_prepare_skb(struct mt76_dev *mdev, void *txwi_ptr,
+ 	}
+ 	txp->fw.nbuf = nbuf;
+ 
+-	txp->fw.flags = cpu_to_le16(MT_CT_INFO_FROM_HOST);
+-
+-	if (!is_8023 || pid >= MT_PACKET_ID_FIRST)
+-		txp->fw.flags |= cpu_to_le16(MT_CT_INFO_APPLY_TXD);
++	txp->fw.flags =
++		cpu_to_le16(MT_CT_INFO_FROM_HOST | MT_CT_INFO_APPLY_TXD);
+ 
+ 	if (!key)
+ 		txp->fw.flags |= cpu_to_le16(MT_CT_INFO_NONE_CIPHER_FRAME);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/mac.h b/drivers/net/wireless/mediatek/mt76/mt7996/mac.h
+index 27184cbac619b..2cc218f735d88 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/mac.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/mac.h
+@@ -102,8 +102,7 @@ enum rx_pkt_type {
+ #define MT_PRXV_NSTS			GENMASK(10, 7)
+ #define MT_PRXV_TXBF			BIT(11)
+ #define MT_PRXV_HT_AD_CODE		BIT(12)
+-#define MT_PRXV_HE_RU_ALLOC_L		GENMASK(31, 28)
+-#define MT_PRXV_HE_RU_ALLOC_H		GENMASK(3, 0)
++#define MT_PRXV_HE_RU_ALLOC		GENMASK(30, 22)
+ #define MT_PRXV_RCPI3			GENMASK(31, 24)
+ #define MT_PRXV_RCPI2			GENMASK(23, 16)
+ #define MT_PRXV_RCPI1			GENMASK(15, 8)
+@@ -113,34 +112,32 @@ enum rx_pkt_type {
+ #define MT_PRXV_TX_MODE			GENMASK(14, 11)
+ #define MT_PRXV_FRAME_MODE		GENMASK(2, 0)
+ #define MT_PRXV_DCM			BIT(5)
+-#define MT_PRXV_NUM_RX			BIT(8, 6)
+ 
+ /* C-RXV */
+-#define MT_CRXV_HT_STBC			GENMASK(1, 0)
+-#define MT_CRXV_TX_MODE			GENMASK(7, 4)
+-#define MT_CRXV_FRAME_MODE		GENMASK(10, 8)
+-#define MT_CRXV_HT_SHORT_GI		GENMASK(14, 13)
+-#define MT_CRXV_HE_LTF_SIZE		GENMASK(18, 17)
+-#define MT_CRXV_HE_LDPC_EXT_SYM		BIT(20)
+-#define MT_CRXV_HE_PE_DISAMBIG		BIT(23)
+-#define MT_CRXV_HE_NUM_USER		GENMASK(30, 24)
+-#define MT_CRXV_HE_UPLINK		BIT(31)
+-#define MT_CRXV_HE_RU0			GENMASK(7, 0)
+-#define MT_CRXV_HE_RU1			GENMASK(15, 8)
+-#define MT_CRXV_HE_RU2			GENMASK(23, 16)
+-#define MT_CRXV_HE_RU3			GENMASK(31, 24)
+-
+-#define MT_CRXV_HE_MU_AID		GENMASK(30, 20)
++#define MT_CRXV_HE_NUM_USER		GENMASK(26, 20)
++#define MT_CRXV_HE_LTF_SIZE		GENMASK(28, 27)
++#define MT_CRXV_HE_LDPC_EXT_SYM		BIT(30)
++
++#define MT_CRXV_HE_PE_DISAMBIG		BIT(1)
++#define MT_CRXV_HE_UPLINK		BIT(2)
++
++#define MT_CRXV_HE_MU_AID		GENMASK(27, 17)
++#define MT_CRXV_HE_BEAM_CHNG		BIT(29)
++
++#define MT_CRXV_HE_DOPPLER		BIT(0)
++#define MT_CRXV_HE_BSS_COLOR		GENMASK(15, 10)
++#define MT_CRXV_HE_TXOP_DUR		GENMASK(19, 17)
+ 
+ #define MT_CRXV_HE_SR_MASK		GENMASK(11, 8)
+ #define MT_CRXV_HE_SR1_MASK		GENMASK(16, 12)
+ #define MT_CRXV_HE_SR2_MASK             GENMASK(20, 17)
+ #define MT_CRXV_HE_SR3_MASK             GENMASK(24, 21)
+ 
+-#define MT_CRXV_HE_BSS_COLOR		GENMASK(5, 0)
+-#define MT_CRXV_HE_TXOP_DUR		GENMASK(12, 6)
+-#define MT_CRXV_HE_BEAM_CHNG		BIT(13)
+-#define MT_CRXV_HE_DOPPLER		BIT(16)
++#define MT_CRXV_HE_RU0			GENMASK(8, 0)
++#define MT_CRXV_HE_RU1			GENMASK(17, 9)
++#define MT_CRXV_HE_RU2			GENMASK(26, 18)
++#define MT_CRXV_HE_RU3_L		GENMASK(31, 27)
++#define MT_CRXV_HE_RU3_H		GENMASK(3, 0)
+ 
+ enum tx_header_format {
+ 	MT_HDR_FORMAT_802_3,
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7996/mcu.c
+index dbe30832fd88b..9ad6c889549c5 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/mcu.c
+@@ -422,7 +422,8 @@ mt7996_mcu_ie_countdown(struct mt7996_dev *dev, struct sk_buff *skb)
+ 	if (hdr->band && dev->mt76.phys[hdr->band])
+ 		mphy = dev->mt76.phys[hdr->band];
+ 
+-	tail = skb->data + le16_to_cpu(rxd->len);
++	tail = skb->data + skb->len;
++	data += sizeof(struct header);
+ 	while (data + sizeof(struct tlv) < tail && le16_to_cpu(tlv->len)) {
+ 		switch (le16_to_cpu(tlv->tag)) {
+ 		case UNI_EVENT_IE_COUNTDOWN_CSA:
+@@ -1906,8 +1907,9 @@ mt7996_mcu_beacon_cont(struct mt7996_dev *dev, struct ieee80211_vif *vif,
+ 	}
+ 
+ 	buf = (u8 *)bcn + sizeof(*bcn) - MAX_BEACON_SIZE;
+-	mt7996_mac_write_txwi(dev, (__le32 *)buf, skb, wcid, 0, NULL,
++	mt7996_mac_write_txwi(dev, (__le32 *)buf, skb, wcid, NULL, 0, 0,
+ 			      BSS_CHANGED_BEACON);
++
+ 	memcpy(buf + MT_TXD_SIZE, skb->data, skb->len);
+ }
+ 
+@@ -2115,8 +2117,7 @@ int mt7996_mcu_beacon_inband_discov(struct mt7996_dev *dev,
+ 
+ 	buf = (u8 *)tlv + sizeof(*discov) - MAX_INBAND_FRAME_SIZE;
+ 
+-	mt7996_mac_write_txwi(dev, (__le32 *)buf, skb, wcid, 0, NULL,
+-			      changed);
++	mt7996_mac_write_txwi(dev, (__le32 *)buf, skb, wcid, NULL, 0, 0, changed);
+ 
+ 	memcpy(buf + MT_TXD_SIZE, skb->data, skb->len);
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/mt7996.h b/drivers/net/wireless/mediatek/mt76/mt7996/mt7996.h
+index 018dfd2b36b00..ae1210b0a82cd 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/mt7996.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/mt7996.h
+@@ -487,8 +487,9 @@ void mt7996_mac_enable_nf(struct mt7996_dev *dev, u8 band);
+ void mt7996_mac_enable_rtscts(struct mt7996_dev *dev,
+ 			      struct ieee80211_vif *vif, bool enable);
+ void mt7996_mac_write_txwi(struct mt7996_dev *dev, __le32 *txwi,
+-			   struct sk_buff *skb, struct mt76_wcid *wcid, int pid,
+-			   struct ieee80211_key_conf *key, u32 changed);
++			   struct sk_buff *skb, struct mt76_wcid *wcid,
++			   struct ieee80211_key_conf *key, int pid,
++			   enum mt76_txq_id qid, u32 changed);
+ void mt7996_mac_set_timing(struct mt7996_phy *phy);
+ int mt7996_mac_sta_add(struct mt76_dev *mdev, struct ieee80211_vif *vif,
+ 		       struct ieee80211_sta *sta);
+diff --git a/drivers/net/wireless/mediatek/mt76/tx.c b/drivers/net/wireless/mediatek/mt76/tx.c
+index 1f309d05380ad..b435aed6aff7d 100644
+--- a/drivers/net/wireless/mediatek/mt76/tx.c
++++ b/drivers/net/wireless/mediatek/mt76/tx.c
+@@ -77,7 +77,9 @@ mt76_tx_status_unlock(struct mt76_dev *dev, struct sk_buff_head *list)
+ 		}
+ 
+ 		hw = mt76_tx_status_get_hw(dev, skb);
++		spin_lock_bh(&dev->rx_lock);
+ 		ieee80211_tx_status_ext(hw, &status);
++		spin_unlock_bh(&dev->rx_lock);
+ 	}
+ 	rcu_read_unlock();
+ }
+@@ -263,7 +265,9 @@ void __mt76_tx_complete_skb(struct mt76_dev *dev, u16 wcid_idx, struct sk_buff *
+ 	if (cb->pktid < MT_PACKET_ID_FIRST) {
+ 		hw = mt76_tx_status_get_hw(dev, skb);
+ 		status.sta = wcid_to_sta(wcid);
++		spin_lock_bh(&dev->rx_lock);
+ 		ieee80211_tx_status_ext(hw, &status);
++		spin_unlock_bh(&dev->rx_lock);
+ 		goto out;
+ 	}
+ 
+diff --git a/drivers/net/wireless/ralink/rt2x00/rt2x00dev.c b/drivers/net/wireless/ralink/rt2x00/rt2x00dev.c
+index 3a035afcf7f99..9a9cfd0ce402d 100644
+--- a/drivers/net/wireless/ralink/rt2x00/rt2x00dev.c
++++ b/drivers/net/wireless/ralink/rt2x00/rt2x00dev.c
+@@ -1091,6 +1091,7 @@ static void rt2x00lib_remove_hw(struct rt2x00_dev *rt2x00dev)
+ 	}
+ 
+ 	kfree(rt2x00dev->spec.channels_info);
++	kfree(rt2x00dev->chan_survey);
+ }
+ 
+ static const struct ieee80211_tpt_blink rt2x00_tpt_blink[] = {
+diff --git a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8192e.c b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8192e.c
+index 5cfc00237f42d..b2f902bfae705 100644
+--- a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8192e.c
++++ b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8192e.c
+@@ -1817,6 +1817,7 @@ struct rtl8xxxu_fileops rtl8192eu_fops = {
+ 	.rx_desc_size = sizeof(struct rtl8xxxu_rxdesc24),
+ 	.has_s0s1 = 0,
+ 	.gen2_thermal_meter = 1,
++	.needs_full_init = 1,
+ 	.adda_1t_init = 0x0fc01616,
+ 	.adda_1t_path_on = 0x0fc01616,
+ 	.adda_2t_path_on_a = 0x0fc01616,
+diff --git a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
+index 620a5cc2bfdd1..54ca6f2ced3f3 100644
+--- a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
++++ b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
+@@ -1575,11 +1575,7 @@ rtl8xxxu_set_spec_sifs(struct rtl8xxxu_priv *priv, u16 cck, u16 ofdm)
+ static void rtl8xxxu_print_chipinfo(struct rtl8xxxu_priv *priv)
+ {
+ 	struct device *dev = &priv->udev->dev;
+-	char cut = '?';
+-
+-	/* Currently always true: chip_cut is 4 bits. */
+-	if (priv->chip_cut <= 15)
+-		cut = 'A' + priv->chip_cut;
++	char cut = 'A' + priv->chip_cut;
+ 
+ 	dev_info(dev,
+ 		 "RTL%s rev %c (%s) romver %d, %iT%iR, TX queues %i, WiFi=%i, BT=%i, GPS=%i, HI PA=%i\n",
+diff --git a/drivers/net/wireless/realtek/rtlwifi/debug.c b/drivers/net/wireless/realtek/rtlwifi/debug.c
+index 0b1bc04cb6adb..9eb26dfe4ca92 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/debug.c
++++ b/drivers/net/wireless/realtek/rtlwifi/debug.c
+@@ -278,8 +278,8 @@ static ssize_t rtl_debugfs_set_write_reg(struct file *filp,
+ 
+ 	tmp_len = (count > sizeof(tmp) - 1 ? sizeof(tmp) - 1 : count);
+ 
+-	if (!buffer || copy_from_user(tmp, buffer, tmp_len))
+-		return count;
++	if (copy_from_user(tmp, buffer, tmp_len))
++		return -EFAULT;
+ 
+ 	tmp[tmp_len] = '\0';
+ 
+@@ -287,7 +287,7 @@ static ssize_t rtl_debugfs_set_write_reg(struct file *filp,
+ 	num = sscanf(tmp, "%x %x %x", &addr, &val, &len);
+ 
+ 	if (num !=  3)
+-		return count;
++		return -EINVAL;
+ 
+ 	switch (len) {
+ 	case 1:
+@@ -375,8 +375,8 @@ static ssize_t rtl_debugfs_set_write_rfreg(struct file *filp,
+ 
+ 	tmp_len = (count > sizeof(tmp) - 1 ? sizeof(tmp) - 1 : count);
+ 
+-	if (!buffer || copy_from_user(tmp, buffer, tmp_len))
+-		return count;
++	if (copy_from_user(tmp, buffer, tmp_len))
++		return -EFAULT;
+ 
+ 	tmp[tmp_len] = '\0';
+ 
+@@ -386,7 +386,7 @@ static ssize_t rtl_debugfs_set_write_rfreg(struct file *filp,
+ 	if (num != 4) {
+ 		rtl_dbg(rtlpriv, COMP_ERR, DBG_DMESG,
+ 			"Format is <path> <addr> <mask> <data>\n");
+-		return count;
++		return -EINVAL;
+ 	}
+ 
+ 	rtl_set_rfreg(hw, path, addr, bitmask, data);
+diff --git a/drivers/net/wireless/realtek/rtw88/mac.c b/drivers/net/wireless/realtek/rtw88/mac.c
+index dae64901bac5a..eda05a5d9cffd 100644
+--- a/drivers/net/wireless/realtek/rtw88/mac.c
++++ b/drivers/net/wireless/realtek/rtw88/mac.c
+@@ -233,7 +233,7 @@ static int rtw_pwr_seq_parser(struct rtw_dev *rtwdev,
+ 
+ 		ret = rtw_sub_pwr_seq_parser(rtwdev, intf_mask, cut_mask, cmd);
+ 		if (ret)
+-			return -EBUSY;
++			return ret;
+ 
+ 		idx++;
+ 	} while (1);
+@@ -247,6 +247,7 @@ static int rtw_mac_power_switch(struct rtw_dev *rtwdev, bool pwr_on)
+ 	const struct rtw_pwr_seq_cmd **pwr_seq;
+ 	u8 rpwm;
+ 	bool cur_pwr;
++	int ret;
+ 
+ 	if (rtw_chip_wcpu_11ac(rtwdev)) {
+ 		rpwm = rtw_read8(rtwdev, rtwdev->hci.rpwm_addr);
+@@ -270,8 +271,9 @@ static int rtw_mac_power_switch(struct rtw_dev *rtwdev, bool pwr_on)
+ 		return -EALREADY;
+ 
+ 	pwr_seq = pwr_on ? chip->pwr_on_seq : chip->pwr_off_seq;
+-	if (rtw_pwr_seq_parser(rtwdev, pwr_seq))
+-		return -EINVAL;
++	ret = rtw_pwr_seq_parser(rtwdev, pwr_seq);
++	if (ret)
++		return ret;
+ 
+ 	if (pwr_on)
+ 		set_bit(RTW_FLAG_POWERON, rtwdev->flags);
+diff --git a/drivers/net/wireless/realtek/rtw88/rtw8821c.c b/drivers/net/wireless/realtek/rtw88/rtw8821c.c
+index 17f800f6efbd0..67efa58dd78ee 100644
+--- a/drivers/net/wireless/realtek/rtw88/rtw8821c.c
++++ b/drivers/net/wireless/realtek/rtw88/rtw8821c.c
+@@ -47,7 +47,7 @@ static int rtw8821c_read_efuse(struct rtw_dev *rtwdev, u8 *log_map)
+ 
+ 	map = (struct rtw8821c_efuse *)log_map;
+ 
+-	efuse->rfe_option = map->rfe_option;
++	efuse->rfe_option = map->rfe_option & 0x1f;
+ 	efuse->rf_board_option = map->rf_board_option;
+ 	efuse->crystal_cap = map->xtal_k;
+ 	efuse->pa_type_2g = map->pa_type;
+@@ -1537,7 +1537,6 @@ static const struct rtw_rfe_def rtw8821c_rfe_defs[] = {
+ 	[2] = RTW_DEF_RFE_EXT(8821c, 0, 0, 2),
+ 	[4] = RTW_DEF_RFE_EXT(8821c, 0, 0, 2),
+ 	[6] = RTW_DEF_RFE(8821c, 0, 0),
+-	[34] = RTW_DEF_RFE(8821c, 0, 0),
+ };
+ 
+ static struct rtw_hw_reg rtw8821c_dig[] = {
+diff --git a/drivers/net/wireless/realtek/rtw88/usb.c b/drivers/net/wireless/realtek/rtw88/usb.c
+index 2a8336b1847a5..a10d6fef4ffaf 100644
+--- a/drivers/net/wireless/realtek/rtw88/usb.c
++++ b/drivers/net/wireless/realtek/rtw88/usb.c
+@@ -118,6 +118,22 @@ static void rtw_usb_write32(struct rtw_dev *rtwdev, u32 addr, u32 val)
+ 	rtw_usb_write(rtwdev, addr, val, 4);
+ }
+ 
++static int dma_mapping_to_ep(enum rtw_dma_mapping dma_mapping)
++{
++	switch (dma_mapping) {
++	case RTW_DMA_MAPPING_HIGH:
++		return 0;
++	case RTW_DMA_MAPPING_NORMAL:
++		return 1;
++	case RTW_DMA_MAPPING_LOW:
++		return 2;
++	case RTW_DMA_MAPPING_EXTRA:
++		return 3;
++	default:
++		return -EINVAL;
++	}
++}
++
+ static int rtw_usb_parse(struct rtw_dev *rtwdev,
+ 			 struct usb_interface *interface)
+ {
+@@ -129,6 +145,8 @@ static int rtw_usb_parse(struct rtw_dev *rtwdev,
+ 	int num_out_pipes = 0;
+ 	int i;
+ 	u8 num;
++	const struct rtw_chip_info *chip = rtwdev->chip;
++	const struct rtw_rqpn *rqpn;
+ 
+ 	for (i = 0; i < interface_desc->bNumEndpoints; i++) {
+ 		endpoint = &host_interface->endpoint[i].desc;
+@@ -183,31 +201,34 @@ static int rtw_usb_parse(struct rtw_dev *rtwdev,
+ 
+ 	rtwdev->hci.bulkout_num = num_out_pipes;
+ 
+-	switch (num_out_pipes) {
+-	case 4:
+-	case 3:
+-		rtwusb->qsel_to_ep[TX_DESC_QSEL_TID0] = 2;
+-		rtwusb->qsel_to_ep[TX_DESC_QSEL_TID1] = 2;
+-		rtwusb->qsel_to_ep[TX_DESC_QSEL_TID2] = 2;
+-		rtwusb->qsel_to_ep[TX_DESC_QSEL_TID3] = 2;
+-		rtwusb->qsel_to_ep[TX_DESC_QSEL_TID4] = 1;
+-		rtwusb->qsel_to_ep[TX_DESC_QSEL_TID5] = 1;
+-		rtwusb->qsel_to_ep[TX_DESC_QSEL_TID6] = 0;
+-		rtwusb->qsel_to_ep[TX_DESC_QSEL_TID7] = 0;
+-		break;
+-	case 2:
+-		rtwusb->qsel_to_ep[TX_DESC_QSEL_TID0] = 1;
+-		rtwusb->qsel_to_ep[TX_DESC_QSEL_TID1] = 1;
+-		rtwusb->qsel_to_ep[TX_DESC_QSEL_TID2] = 1;
+-		rtwusb->qsel_to_ep[TX_DESC_QSEL_TID3] = 1;
+-		break;
+-	case 1:
+-		break;
+-	default:
+-		rtw_err(rtwdev, "failed to get out_pipes(%d)\n", num_out_pipes);
++	if (num_out_pipes < 1 || num_out_pipes > 4) {
++		rtw_err(rtwdev, "invalid number of endpoints %d\n", num_out_pipes);
+ 		return -EINVAL;
+ 	}
+ 
++	rqpn = &chip->rqpn_table[num_out_pipes];
++
++	rtwusb->qsel_to_ep[TX_DESC_QSEL_TID0] = dma_mapping_to_ep(rqpn->dma_map_be);
++	rtwusb->qsel_to_ep[TX_DESC_QSEL_TID1] = dma_mapping_to_ep(rqpn->dma_map_bk);
++	rtwusb->qsel_to_ep[TX_DESC_QSEL_TID2] = dma_mapping_to_ep(rqpn->dma_map_bk);
++	rtwusb->qsel_to_ep[TX_DESC_QSEL_TID3] = dma_mapping_to_ep(rqpn->dma_map_be);
++	rtwusb->qsel_to_ep[TX_DESC_QSEL_TID4] = dma_mapping_to_ep(rqpn->dma_map_vi);
++	rtwusb->qsel_to_ep[TX_DESC_QSEL_TID5] = dma_mapping_to_ep(rqpn->dma_map_vi);
++	rtwusb->qsel_to_ep[TX_DESC_QSEL_TID6] = dma_mapping_to_ep(rqpn->dma_map_vo);
++	rtwusb->qsel_to_ep[TX_DESC_QSEL_TID7] = dma_mapping_to_ep(rqpn->dma_map_vo);
++	rtwusb->qsel_to_ep[TX_DESC_QSEL_TID8] = -EINVAL;
++	rtwusb->qsel_to_ep[TX_DESC_QSEL_TID9] = -EINVAL;
++	rtwusb->qsel_to_ep[TX_DESC_QSEL_TID10] = -EINVAL;
++	rtwusb->qsel_to_ep[TX_DESC_QSEL_TID11] = -EINVAL;
++	rtwusb->qsel_to_ep[TX_DESC_QSEL_TID12] = -EINVAL;
++	rtwusb->qsel_to_ep[TX_DESC_QSEL_TID13] = -EINVAL;
++	rtwusb->qsel_to_ep[TX_DESC_QSEL_TID14] = -EINVAL;
++	rtwusb->qsel_to_ep[TX_DESC_QSEL_TID15] = -EINVAL;
++	rtwusb->qsel_to_ep[TX_DESC_QSEL_BEACON] = dma_mapping_to_ep(rqpn->dma_map_hi);
++	rtwusb->qsel_to_ep[TX_DESC_QSEL_HIGH] = dma_mapping_to_ep(rqpn->dma_map_hi);
++	rtwusb->qsel_to_ep[TX_DESC_QSEL_MGMT] = dma_mapping_to_ep(rqpn->dma_map_mg);
++	rtwusb->qsel_to_ep[TX_DESC_QSEL_H2C] = dma_mapping_to_ep(rqpn->dma_map_hi);
++
+ 	return 0;
+ }
+ 
+@@ -250,7 +271,7 @@ static void rtw_usb_write_port_tx_complete(struct urb *urb)
+ static int qsel_to_ep(struct rtw_usb *rtwusb, unsigned int qsel)
+ {
+ 	if (qsel >= ARRAY_SIZE(rtwusb->qsel_to_ep))
+-		return 0;
++		return -EINVAL;
+ 
+ 	return rtwusb->qsel_to_ep[qsel];
+ }
+@@ -265,6 +286,9 @@ static int rtw_usb_write_port(struct rtw_dev *rtwdev, u8 qsel, struct sk_buff *s
+ 	int ret;
+ 	int ep = qsel_to_ep(rtwusb, qsel);
+ 
++	if (ep < 0)
++		return ep;
++
+ 	pipe = usb_sndbulkpipe(usbd, rtwusb->out_ep[ep]);
+ 	urb = usb_alloc_urb(0, GFP_ATOMIC);
+ 	if (!urb)
+diff --git a/drivers/net/wireless/realtek/rtw89/core.c b/drivers/net/wireless/realtek/rtw89/core.c
+index f09361bc4a4d1..2487e45da82f5 100644
+--- a/drivers/net/wireless/realtek/rtw89/core.c
++++ b/drivers/net/wireless/realtek/rtw89/core.c
+@@ -3401,18 +3401,22 @@ static int rtw89_core_register_hw(struct rtw89_dev *rtwdev)
+ 	ret = ieee80211_register_hw(hw);
+ 	if (ret) {
+ 		rtw89_err(rtwdev, "failed to register hw\n");
+-		goto err;
++		goto err_free_supported_band;
+ 	}
+ 
+ 	ret = rtw89_regd_init(rtwdev, rtw89_regd_notifier);
+ 	if (ret) {
+ 		rtw89_err(rtwdev, "failed to init regd\n");
+-		goto err;
++		goto err_unregister_hw;
+ 	}
+ 
+ 	return 0;
+ 
+-err:
++err_unregister_hw:
++	ieee80211_unregister_hw(hw);
++err_free_supported_band:
++	rtw89_core_clr_supported_band(rtwdev);
++
+ 	return ret;
+ }
+ 
+diff --git a/drivers/net/wireless/realtek/rtw89/pci.c b/drivers/net/wireless/realtek/rtw89/pci.c
+index ec8bb5f10482e..4025aabe06042 100644
+--- a/drivers/net/wireless/realtek/rtw89/pci.c
++++ b/drivers/net/wireless/realtek/rtw89/pci.c
+@@ -3874,25 +3874,26 @@ int rtw89_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 	rtw89_pci_link_cfg(rtwdev);
+ 	rtw89_pci_l1ss_cfg(rtwdev);
+ 
+-	ret = rtw89_core_register(rtwdev);
+-	if (ret) {
+-		rtw89_err(rtwdev, "failed to register core\n");
+-		goto err_clear_resource;
+-	}
+-
+ 	rtw89_core_napi_init(rtwdev);
+ 
+ 	ret = rtw89_pci_request_irq(rtwdev, pdev);
+ 	if (ret) {
+ 		rtw89_err(rtwdev, "failed to request pci irq\n");
+-		goto err_unregister;
++		goto err_deinit_napi;
++	}
++
++	ret = rtw89_core_register(rtwdev);
++	if (ret) {
++		rtw89_err(rtwdev, "failed to register core\n");
++		goto err_free_irq;
+ 	}
+ 
+ 	return 0;
+ 
+-err_unregister:
++err_free_irq:
++	rtw89_pci_free_irq(rtwdev, pdev);
++err_deinit_napi:
+ 	rtw89_core_napi_deinit(rtwdev);
+-	rtw89_core_unregister(rtwdev);
+ err_clear_resource:
+ 	rtw89_pci_clear_resource(rtwdev, pdev);
+ err_declaim_pci:
+diff --git a/drivers/net/wireless/realtek/rtw89/rtw8852b.c b/drivers/net/wireless/realtek/rtw89/rtw8852b.c
+index ee8dba7e0074a..45c374d025cbd 100644
+--- a/drivers/net/wireless/realtek/rtw89/rtw8852b.c
++++ b/drivers/net/wireless/realtek/rtw89/rtw8852b.c
+@@ -1284,7 +1284,7 @@ static void rtw8852b_ctrl_cck_en(struct rtw89_dev *rtwdev, bool cck_en)
+ static void rtw8852b_5m_mask(struct rtw89_dev *rtwdev, const struct rtw89_chan *chan,
+ 			     enum rtw89_phy_idx phy_idx)
+ {
+-	u8 pri_ch = chan->primary_channel;
++	u8 pri_ch = chan->pri_ch_idx;
+ 	bool mask_5m_low;
+ 	bool mask_5m_en;
+ 
+@@ -1292,12 +1292,13 @@ static void rtw8852b_5m_mask(struct rtw89_dev *rtwdev, const struct rtw89_chan *
+ 	case RTW89_CHANNEL_WIDTH_40:
+ 		/* Prich=1: Mask 5M High, Prich=2: Mask 5M Low */
+ 		mask_5m_en = true;
+-		mask_5m_low = pri_ch == 2;
++		mask_5m_low = pri_ch == RTW89_SC_20_LOWER;
+ 		break;
+ 	case RTW89_CHANNEL_WIDTH_80:
+ 		/* Prich=3: Mask 5M High, Prich=4: Mask 5M Low, Else: Disable */
+-		mask_5m_en = pri_ch == 3 || pri_ch == 4;
+-		mask_5m_low = pri_ch == 4;
++		mask_5m_en = pri_ch == RTW89_SC_20_UPMOST ||
++			     pri_ch == RTW89_SC_20_LOWEST;
++		mask_5m_low = pri_ch == RTW89_SC_20_LOWEST;
+ 		break;
+ 	default:
+ 		mask_5m_en = false;
+diff --git a/drivers/net/wireless/realtek/rtw89/rtw8852c.c b/drivers/net/wireless/realtek/rtw89/rtw8852c.c
+index d2dde21d3daf5..272e72bd1468e 100644
+--- a/drivers/net/wireless/realtek/rtw89/rtw8852c.c
++++ b/drivers/net/wireless/realtek/rtw89/rtw8852c.c
+@@ -1445,18 +1445,19 @@ static void rtw8852c_5m_mask(struct rtw89_dev *rtwdev,
+ 			     const struct rtw89_chan *chan,
+ 			     enum rtw89_phy_idx phy_idx)
+ {
+-	u8 pri_ch = chan->primary_channel;
++	u8 pri_ch = chan->pri_ch_idx;
+ 	bool mask_5m_low;
+ 	bool mask_5m_en;
+ 
+ 	switch (chan->band_width) {
+ 	case RTW89_CHANNEL_WIDTH_40:
+ 		mask_5m_en = true;
+-		mask_5m_low = pri_ch == 2;
++		mask_5m_low = pri_ch == RTW89_SC_20_LOWER;
+ 		break;
+ 	case RTW89_CHANNEL_WIDTH_80:
+-		mask_5m_en = ((pri_ch == 3) || (pri_ch == 4));
+-		mask_5m_low = pri_ch == 4;
++		mask_5m_en = pri_ch == RTW89_SC_20_UPMOST ||
++			     pri_ch == RTW89_SC_20_LOWEST;
++		mask_5m_low = pri_ch == RTW89_SC_20_LOWEST;
+ 		break;
+ 	default:
+ 		mask_5m_en = false;
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index d6a9bac91a4cd..bdf1601219fc4 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -4819,8 +4819,6 @@ static bool nvme_handle_aen_notice(struct nvme_ctrl *ctrl, u32 result)
+ 	u32 aer_notice_type = nvme_aer_subtype(result);
+ 	bool requeue = true;
+ 
+-	trace_nvme_async_event(ctrl, aer_notice_type);
+-
+ 	switch (aer_notice_type) {
+ 	case NVME_AER_NOTICE_NS_CHANGED:
+ 		set_bit(NVME_AER_NOTICE_NS_CHANGED, &ctrl->events);
+@@ -4856,7 +4854,6 @@ static bool nvme_handle_aen_notice(struct nvme_ctrl *ctrl, u32 result)
+ 
+ static void nvme_handle_aer_persistent_error(struct nvme_ctrl *ctrl)
+ {
+-	trace_nvme_async_event(ctrl, NVME_AER_ERROR);
+ 	dev_warn(ctrl->device, "resetting controller due to AER\n");
+ 	nvme_reset_ctrl(ctrl);
+ }
+@@ -4872,6 +4869,7 @@ void nvme_complete_async_event(struct nvme_ctrl *ctrl, __le16 status,
+ 	if (le16_to_cpu(status) >> 1 != NVME_SC_SUCCESS)
+ 		return;
+ 
++	trace_nvme_async_event(ctrl, result);
+ 	switch (aer_type) {
+ 	case NVME_AER_NOTICE:
+ 		requeue = nvme_handle_aen_notice(ctrl, result);
+@@ -4889,7 +4887,6 @@ void nvme_complete_async_event(struct nvme_ctrl *ctrl, __le16 status,
+ 	case NVME_AER_SMART:
+ 	case NVME_AER_CSS:
+ 	case NVME_AER_VS:
+-		trace_nvme_async_event(ctrl, aer_type);
+ 		ctrl->aen_result = result;
+ 		break;
+ 	default:
+diff --git a/drivers/nvme/host/trace.h b/drivers/nvme/host/trace.h
+index 6f0eaf6a15282..4fb5922ffdac5 100644
+--- a/drivers/nvme/host/trace.h
++++ b/drivers/nvme/host/trace.h
+@@ -127,15 +127,12 @@ TRACE_EVENT(nvme_async_event,
+ 	),
+ 	TP_printk("nvme%d: NVME_AEN=%#08x [%s]",
+ 		__entry->ctrl_id, __entry->result,
+-		__print_symbolic(__entry->result,
+-		aer_name(NVME_AER_NOTICE_NS_CHANGED),
+-		aer_name(NVME_AER_NOTICE_ANA),
+-		aer_name(NVME_AER_NOTICE_FW_ACT_STARTING),
+-		aer_name(NVME_AER_NOTICE_DISC_CHANGED),
+-		aer_name(NVME_AER_ERROR),
+-		aer_name(NVME_AER_SMART),
+-		aer_name(NVME_AER_CSS),
+-		aer_name(NVME_AER_VS))
++		__print_symbolic(__entry->result & 0x7,
++			aer_name(NVME_AER_ERROR),
++			aer_name(NVME_AER_SMART),
++			aer_name(NVME_AER_NOTICE),
++			aer_name(NVME_AER_CSS),
++			aer_name(NVME_AER_VS))
+ 	)
+ );
+ 
+diff --git a/drivers/nvme/target/admin-cmd.c b/drivers/nvme/target/admin-cmd.c
+index 80099df37314a..5bfbf5c651db0 100644
+--- a/drivers/nvme/target/admin-cmd.c
++++ b/drivers/nvme/target/admin-cmd.c
+@@ -685,6 +685,13 @@ static bool nvmet_handle_identify_desclist(struct nvmet_req *req)
+ 	}
+ }
+ 
++static void nvmet_execute_identify_ctrl_nvm(struct nvmet_req *req)
++{
++	/* Not supported: return zeroes */
++	nvmet_req_complete(req,
++		   nvmet_zero_sgl(req, 0, sizeof(struct nvme_id_ctrl_nvm)));
++}
++
+ static void nvmet_execute_identify(struct nvmet_req *req)
+ {
+ 	if (!nvmet_check_transfer_len(req, NVME_IDENTIFY_DATA_SIZE))
+@@ -692,13 +699,8 @@ static void nvmet_execute_identify(struct nvmet_req *req)
+ 
+ 	switch (req->cmd->identify.cns) {
+ 	case NVME_ID_CNS_NS:
+-		switch (req->cmd->identify.csi) {
+-		case NVME_CSI_NVM:
+-			return nvmet_execute_identify_ns(req);
+-		default:
+-			break;
+-		}
+-		break;
++		nvmet_execute_identify_ns(req);
++		return;
+ 	case NVME_ID_CNS_CS_NS:
+ 		if (IS_ENABLED(CONFIG_BLK_DEV_ZONED)) {
+ 			switch (req->cmd->identify.csi) {
+@@ -710,29 +712,24 @@ static void nvmet_execute_identify(struct nvmet_req *req)
+ 		}
+ 		break;
+ 	case NVME_ID_CNS_CTRL:
+-		switch (req->cmd->identify.csi) {
+-		case NVME_CSI_NVM:
+-			return nvmet_execute_identify_ctrl(req);
+-		}
+-		break;
++		nvmet_execute_identify_ctrl(req);
++		return;
+ 	case NVME_ID_CNS_CS_CTRL:
+-		if (IS_ENABLED(CONFIG_BLK_DEV_ZONED)) {
+-			switch (req->cmd->identify.csi) {
+-			case NVME_CSI_ZNS:
+-				return nvmet_execute_identify_cns_cs_ctrl(req);
+-			default:
+-				break;
+-			}
+-		}
+-		break;
+-	case NVME_ID_CNS_NS_ACTIVE_LIST:
+ 		switch (req->cmd->identify.csi) {
+ 		case NVME_CSI_NVM:
+-			return nvmet_execute_identify_nslist(req);
+-		default:
++			nvmet_execute_identify_ctrl_nvm(req);
++			return;
++		case NVME_CSI_ZNS:
++			if (IS_ENABLED(CONFIG_BLK_DEV_ZONED)) {
++				nvmet_execute_identify_ctrl_zns(req);
++				return;
++			}
+ 			break;
+ 		}
+ 		break;
++	case NVME_ID_CNS_NS_ACTIVE_LIST:
++		nvmet_execute_identify_nslist(req);
++		return;
+ 	case NVME_ID_CNS_NS_DESC_LIST:
+ 		if (nvmet_handle_identify_desclist(req) == true)
+ 			return;
+diff --git a/drivers/nvme/target/fcloop.c b/drivers/nvme/target/fcloop.c
+index 5c16372f3b533..c780af36c1d4a 100644
+--- a/drivers/nvme/target/fcloop.c
++++ b/drivers/nvme/target/fcloop.c
+@@ -614,10 +614,11 @@ fcloop_fcp_recv_work(struct work_struct *work)
+ 	struct fcloop_fcpreq *tfcp_req =
+ 		container_of(work, struct fcloop_fcpreq, fcp_rcv_work);
+ 	struct nvmefc_fcp_req *fcpreq = tfcp_req->fcpreq;
++	unsigned long flags;
+ 	int ret = 0;
+ 	bool aborted = false;
+ 
+-	spin_lock_irq(&tfcp_req->reqlock);
++	spin_lock_irqsave(&tfcp_req->reqlock, flags);
+ 	switch (tfcp_req->inistate) {
+ 	case INI_IO_START:
+ 		tfcp_req->inistate = INI_IO_ACTIVE;
+@@ -626,11 +627,11 @@ fcloop_fcp_recv_work(struct work_struct *work)
+ 		aborted = true;
+ 		break;
+ 	default:
+-		spin_unlock_irq(&tfcp_req->reqlock);
++		spin_unlock_irqrestore(&tfcp_req->reqlock, flags);
+ 		WARN_ON(1);
+ 		return;
+ 	}
+-	spin_unlock_irq(&tfcp_req->reqlock);
++	spin_unlock_irqrestore(&tfcp_req->reqlock, flags);
+ 
+ 	if (unlikely(aborted))
+ 		ret = -ECANCELED;
+@@ -655,8 +656,9 @@ fcloop_fcp_abort_recv_work(struct work_struct *work)
+ 		container_of(work, struct fcloop_fcpreq, abort_rcv_work);
+ 	struct nvmefc_fcp_req *fcpreq;
+ 	bool completed = false;
++	unsigned long flags;
+ 
+-	spin_lock_irq(&tfcp_req->reqlock);
++	spin_lock_irqsave(&tfcp_req->reqlock, flags);
+ 	fcpreq = tfcp_req->fcpreq;
+ 	switch (tfcp_req->inistate) {
+ 	case INI_IO_ABORTED:
+@@ -665,11 +667,11 @@ fcloop_fcp_abort_recv_work(struct work_struct *work)
+ 		completed = true;
+ 		break;
+ 	default:
+-		spin_unlock_irq(&tfcp_req->reqlock);
++		spin_unlock_irqrestore(&tfcp_req->reqlock, flags);
+ 		WARN_ON(1);
+ 		return;
+ 	}
+-	spin_unlock_irq(&tfcp_req->reqlock);
++	spin_unlock_irqrestore(&tfcp_req->reqlock, flags);
+ 
+ 	if (unlikely(completed)) {
+ 		/* remove reference taken in original abort downcall */
+@@ -681,9 +683,9 @@ fcloop_fcp_abort_recv_work(struct work_struct *work)
+ 		nvmet_fc_rcv_fcp_abort(tfcp_req->tport->targetport,
+ 					&tfcp_req->tgt_fcp_req);
+ 
+-	spin_lock_irq(&tfcp_req->reqlock);
++	spin_lock_irqsave(&tfcp_req->reqlock, flags);
+ 	tfcp_req->fcpreq = NULL;
+-	spin_unlock_irq(&tfcp_req->reqlock);
++	spin_unlock_irqrestore(&tfcp_req->reqlock, flags);
+ 
+ 	fcloop_call_host_done(fcpreq, tfcp_req, -ECANCELED);
+ 	/* call_host_done releases reference for abort downcall */
+@@ -699,11 +701,12 @@ fcloop_tgt_fcprqst_done_work(struct work_struct *work)
+ 	struct fcloop_fcpreq *tfcp_req =
+ 		container_of(work, struct fcloop_fcpreq, tio_done_work);
+ 	struct nvmefc_fcp_req *fcpreq;
++	unsigned long flags;
+ 
+-	spin_lock_irq(&tfcp_req->reqlock);
++	spin_lock_irqsave(&tfcp_req->reqlock, flags);
+ 	fcpreq = tfcp_req->fcpreq;
+ 	tfcp_req->inistate = INI_IO_COMPLETED;
+-	spin_unlock_irq(&tfcp_req->reqlock);
++	spin_unlock_irqrestore(&tfcp_req->reqlock, flags);
+ 
+ 	fcloop_call_host_done(fcpreq, tfcp_req, tfcp_req->status);
+ }
+@@ -807,13 +810,14 @@ fcloop_fcp_op(struct nvmet_fc_target_port *tgtport,
+ 	u32 rsplen = 0, xfrlen = 0;
+ 	int fcp_err = 0, active, aborted;
+ 	u8 op = tgt_fcpreq->op;
++	unsigned long flags;
+ 
+-	spin_lock_irq(&tfcp_req->reqlock);
++	spin_lock_irqsave(&tfcp_req->reqlock, flags);
+ 	fcpreq = tfcp_req->fcpreq;
+ 	active = tfcp_req->active;
+ 	aborted = tfcp_req->aborted;
+ 	tfcp_req->active = true;
+-	spin_unlock_irq(&tfcp_req->reqlock);
++	spin_unlock_irqrestore(&tfcp_req->reqlock, flags);
+ 
+ 	if (unlikely(active))
+ 		/* illegal - call while i/o active */
+@@ -821,9 +825,9 @@ fcloop_fcp_op(struct nvmet_fc_target_port *tgtport,
+ 
+ 	if (unlikely(aborted)) {
+ 		/* target transport has aborted i/o prior */
+-		spin_lock_irq(&tfcp_req->reqlock);
++		spin_lock_irqsave(&tfcp_req->reqlock, flags);
+ 		tfcp_req->active = false;
+-		spin_unlock_irq(&tfcp_req->reqlock);
++		spin_unlock_irqrestore(&tfcp_req->reqlock, flags);
+ 		tgt_fcpreq->transferred_length = 0;
+ 		tgt_fcpreq->fcp_error = -ECANCELED;
+ 		tgt_fcpreq->done(tgt_fcpreq);
+@@ -880,9 +884,9 @@ fcloop_fcp_op(struct nvmet_fc_target_port *tgtport,
+ 		break;
+ 	}
+ 
+-	spin_lock_irq(&tfcp_req->reqlock);
++	spin_lock_irqsave(&tfcp_req->reqlock, flags);
+ 	tfcp_req->active = false;
+-	spin_unlock_irq(&tfcp_req->reqlock);
++	spin_unlock_irqrestore(&tfcp_req->reqlock, flags);
+ 
+ 	tgt_fcpreq->transferred_length = xfrlen;
+ 	tgt_fcpreq->fcp_error = fcp_err;
+@@ -896,15 +900,16 @@ fcloop_tgt_fcp_abort(struct nvmet_fc_target_port *tgtport,
+ 			struct nvmefc_tgt_fcp_req *tgt_fcpreq)
+ {
+ 	struct fcloop_fcpreq *tfcp_req = tgt_fcp_req_to_fcpreq(tgt_fcpreq);
++	unsigned long flags;
+ 
+ 	/*
+ 	 * mark aborted only in case there were 2 threads in transport
+ 	 * (one doing io, other doing abort) and only kills ops posted
+ 	 * after the abort request
+ 	 */
+-	spin_lock_irq(&tfcp_req->reqlock);
++	spin_lock_irqsave(&tfcp_req->reqlock, flags);
+ 	tfcp_req->aborted = true;
+-	spin_unlock_irq(&tfcp_req->reqlock);
++	spin_unlock_irqrestore(&tfcp_req->reqlock, flags);
+ 
+ 	tfcp_req->status = NVME_SC_INTERNAL;
+ 
+@@ -946,6 +951,7 @@ fcloop_fcp_abort(struct nvme_fc_local_port *localport,
+ 	struct fcloop_ini_fcpreq *inireq = fcpreq->private;
+ 	struct fcloop_fcpreq *tfcp_req;
+ 	bool abortio = true;
++	unsigned long flags;
+ 
+ 	spin_lock(&inireq->inilock);
+ 	tfcp_req = inireq->tfcp_req;
+@@ -958,7 +964,7 @@ fcloop_fcp_abort(struct nvme_fc_local_port *localport,
+ 		return;
+ 
+ 	/* break initiator/target relationship for io */
+-	spin_lock_irq(&tfcp_req->reqlock);
++	spin_lock_irqsave(&tfcp_req->reqlock, flags);
+ 	switch (tfcp_req->inistate) {
+ 	case INI_IO_START:
+ 	case INI_IO_ACTIVE:
+@@ -968,11 +974,11 @@ fcloop_fcp_abort(struct nvme_fc_local_port *localport,
+ 		abortio = false;
+ 		break;
+ 	default:
+-		spin_unlock_irq(&tfcp_req->reqlock);
++		spin_unlock_irqrestore(&tfcp_req->reqlock, flags);
+ 		WARN_ON(1);
+ 		return;
+ 	}
+-	spin_unlock_irq(&tfcp_req->reqlock);
++	spin_unlock_irqrestore(&tfcp_req->reqlock, flags);
+ 
+ 	if (abortio)
+ 		/* leave the reference while the work item is scheduled */
+diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
+index 89bedfcd974c4..ed3786140965f 100644
+--- a/drivers/nvme/target/nvmet.h
++++ b/drivers/nvme/target/nvmet.h
+@@ -581,7 +581,7 @@ bool nvmet_ns_revalidate(struct nvmet_ns *ns);
+ u16 blk_to_nvme_status(struct nvmet_req *req, blk_status_t blk_sts);
+ 
+ bool nvmet_bdev_zns_enable(struct nvmet_ns *ns);
+-void nvmet_execute_identify_cns_cs_ctrl(struct nvmet_req *req);
++void nvmet_execute_identify_ctrl_zns(struct nvmet_req *req);
+ void nvmet_execute_identify_cns_cs_ns(struct nvmet_req *req);
+ void nvmet_bdev_execute_zone_mgmt_recv(struct nvmet_req *req);
+ void nvmet_bdev_execute_zone_mgmt_send(struct nvmet_req *req);
+diff --git a/drivers/nvme/target/zns.c b/drivers/nvme/target/zns.c
+index 7e4292d88016c..ae4d9d35e46ae 100644
+--- a/drivers/nvme/target/zns.c
++++ b/drivers/nvme/target/zns.c
+@@ -70,7 +70,7 @@ bool nvmet_bdev_zns_enable(struct nvmet_ns *ns)
+ 	return true;
+ }
+ 
+-void nvmet_execute_identify_cns_cs_ctrl(struct nvmet_req *req)
++void nvmet_execute_identify_ctrl_zns(struct nvmet_req *req)
+ {
+ 	u8 zasl = req->sq->ctrl->subsys->zasl;
+ 	struct nvmet_ctrl *ctrl = req->sq->ctrl;
+@@ -97,7 +97,7 @@ out:
+ 
+ void nvmet_execute_identify_cns_cs_ns(struct nvmet_req *req)
+ {
+-	struct nvme_id_ns_zns *id_zns;
++	struct nvme_id_ns_zns *id_zns = NULL;
+ 	u64 zsze;
+ 	u16 status;
+ 	u32 mar, mor;
+@@ -118,16 +118,18 @@ void nvmet_execute_identify_cns_cs_ns(struct nvmet_req *req)
+ 	if (status)
+ 		goto done;
+ 
+-	if (!bdev_is_zoned(req->ns->bdev)) {
+-		req->error_loc = offsetof(struct nvme_identify, nsid);
+-		goto done;
+-	}
+-
+ 	if (nvmet_ns_revalidate(req->ns)) {
+ 		mutex_lock(&req->ns->subsys->lock);
+ 		nvmet_ns_changed(req->ns->subsys, req->ns->nsid);
+ 		mutex_unlock(&req->ns->subsys->lock);
+ 	}
++
++	if (!bdev_is_zoned(req->ns->bdev)) {
++		status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
++		req->error_loc = offsetof(struct nvme_identify, nsid);
++		goto out;
++	}
++
+ 	zsze = (bdev_zone_sectors(req->ns->bdev) << 9) >>
+ 					req->ns->blksize_shift;
+ 	id_zns->lbafe[0].zsze = cpu_to_le64(zsze);
+@@ -148,8 +150,8 @@ void nvmet_execute_identify_cns_cs_ns(struct nvmet_req *req)
+ 
+ done:
+ 	status = nvmet_copy_to_sgl(req, 0, id_zns, sizeof(*id_zns));
+-	kfree(id_zns);
+ out:
++	kfree(id_zns);
+ 	nvmet_req_complete(req, status);
+ }
+ 
+diff --git a/drivers/of/device.c b/drivers/of/device.c
+index 955bfb3d1a834..c91bb58992567 100644
+--- a/drivers/of/device.c
++++ b/drivers/of/device.c
+@@ -297,12 +297,15 @@ int of_device_request_module(struct device *dev)
+ 	if (size < 0)
+ 		return size;
+ 
+-	str = kmalloc(size + 1, GFP_KERNEL);
++	/* Reserve an additional byte for the trailing '\0' */
++	size++;
++
++	str = kmalloc(size, GFP_KERNEL);
+ 	if (!str)
+ 		return -ENOMEM;
+ 
+ 	of_device_get_modalias(dev, str, size);
+-	str[size] = '\0';
++	str[size - 1] = '\0';
+ 	ret = request_module(str);
+ 	kfree(str);
+ 
+diff --git a/drivers/pci/controller/dwc/Kconfig b/drivers/pci/controller/dwc/Kconfig
+index 434f6a4f40411..d29551261e80f 100644
+--- a/drivers/pci/controller/dwc/Kconfig
++++ b/drivers/pci/controller/dwc/Kconfig
+@@ -307,6 +307,7 @@ config PCIE_KIRIN
+ 	tristate "HiSilicon Kirin series SoCs PCIe controllers"
+ 	depends on PCI_MSI
+ 	select PCIE_DW_HOST
++	select REGMAP_MMIO
+ 	help
+ 	  Say Y here if you want PCIe controller support
+ 	  on HiSilicon Kirin series SoCs.
+diff --git a/drivers/pci/controller/dwc/pci-imx6.c b/drivers/pci/controller/dwc/pci-imx6.c
+index 55a0405b921d6..52906f999f2bb 100644
+--- a/drivers/pci/controller/dwc/pci-imx6.c
++++ b/drivers/pci/controller/dwc/pci-imx6.c
+@@ -1566,6 +1566,13 @@ DECLARE_PCI_FIXUP_CLASS_HEADER(PCI_VENDOR_ID_SYNOPSYS, 0xabcd,
+ static int __init imx6_pcie_init(void)
+ {
+ #ifdef CONFIG_ARM
++	struct device_node *np;
++
++	np = of_find_matching_node(NULL, imx6_pcie_of_match);
++	if (!np)
++		return -ENODEV;
++	of_node_put(np);
++
+ 	/*
+ 	 * Since probe() can be deferred we need to make sure that
+ 	 * hook_fault_code is not called after __init memory is freed
+diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
+index a232b04af048c..89d748cc4b8a3 100644
+--- a/drivers/pci/controller/dwc/pcie-qcom.c
++++ b/drivers/pci/controller/dwc/pcie-qcom.c
+@@ -1279,11 +1279,9 @@ static int qcom_pcie_init_2_7_0(struct qcom_pcie *pcie)
+ 	val &= ~REQ_NOT_ENTR_L1;
+ 	writel(val, pcie->parf + PCIE20_PARF_PM_CTRL);
+ 
+-	if (IS_ENABLED(CONFIG_PCI_MSI)) {
+-		val = readl(pcie->parf + PCIE20_PARF_AXI_MSTR_WR_ADDR_HALT);
+-		val |= BIT(31);
+-		writel(val, pcie->parf + PCIE20_PARF_AXI_MSTR_WR_ADDR_HALT);
+-	}
++	val = readl(pcie->parf + PCIE20_PARF_AXI_MSTR_WR_ADDR_HALT_V2);
++	val |= BIT(31);
++	writel(val, pcie->parf + PCIE20_PARF_AXI_MSTR_WR_ADDR_HALT_V2);
+ 
+ 	return 0;
+ err_disable_clocks:
+diff --git a/drivers/pci/hotplug/pciehp_pci.c b/drivers/pci/hotplug/pciehp_pci.c
+index d17f3bf36f709..ad12515a4a121 100644
+--- a/drivers/pci/hotplug/pciehp_pci.c
++++ b/drivers/pci/hotplug/pciehp_pci.c
+@@ -63,7 +63,14 @@ int pciehp_configure_device(struct controller *ctrl)
+ 
+ 	pci_assign_unassigned_bridge_resources(bridge);
+ 	pcie_bus_configure_settings(parent);
++
++	/*
++	 * Release reset_lock during driver binding
++	 * to avoid AB-BA deadlock with device_lock.
++	 */
++	up_read(&ctrl->reset_lock);
+ 	pci_bus_add_devices(parent);
++	down_read_nested(&ctrl->reset_lock, ctrl->depth);
+ 
+  out:
+ 	pci_unlock_rescan_remove();
+@@ -104,7 +111,15 @@ void pciehp_unconfigure_device(struct controller *ctrl, bool presence)
+ 	list_for_each_entry_safe_reverse(dev, temp, &parent->devices,
+ 					 bus_list) {
+ 		pci_dev_get(dev);
++
++		/*
++		 * Release reset_lock during driver unbinding
++		 * to avoid AB-BA deadlock with device_lock.
++		 */
++		up_read(&ctrl->reset_lock);
+ 		pci_stop_and_remove_bus_device(dev);
++		down_read_nested(&ctrl->reset_lock, ctrl->depth);
++
+ 		/*
+ 		 * Ensure that no new Requests will be generated from
+ 		 * the device.
+diff --git a/drivers/pci/pcie/edr.c b/drivers/pci/pcie/edr.c
+index a6b9b479b97ad..87734e4c3c204 100644
+--- a/drivers/pci/pcie/edr.c
++++ b/drivers/pci/pcie/edr.c
+@@ -193,6 +193,7 @@ send_ost:
+ 	 */
+ 	if (estate == PCI_ERS_RESULT_RECOVERED) {
+ 		pci_dbg(edev, "DPC port successfully recovered\n");
++		pcie_clear_device_status(edev);
+ 		acpi_send_edr_status(pdev, edev, EDR_OST_SUCCESS);
+ 	} else {
+ 		pci_dbg(edev, "DPC port recovery failed\n");
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index 44cab813bf951..f4e2a88729fd1 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -1939,6 +1939,19 @@ static void quirk_radeon_pm(struct pci_dev *dev)
+ }
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x6741, quirk_radeon_pm);
+ 
++/*
++ * NVIDIA Ampere-based HDA controllers can wedge the whole device if a bus
++ * reset is performed too soon after transition to D0, extend d3hot_delay
++ * to previous effective default for all NVIDIA HDA controllers.
++ */
++static void quirk_nvidia_hda_pm(struct pci_dev *dev)
++{
++	quirk_d3hot_delay(dev, 20);
++}
++DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_NVIDIA, PCI_ANY_ID,
++			      PCI_CLASS_MULTIMEDIA_HD_AUDIO, 8,
++			      quirk_nvidia_hda_pm);
++
+ /*
+  * Ryzen5/7 XHCI controllers fail upon resume from runtime suspend or s2idle.
+  * https://bugzilla.kernel.org/show_bug.cgi?id=205587
+diff --git a/drivers/perf/amlogic/meson_ddr_pmu_core.c b/drivers/perf/amlogic/meson_ddr_pmu_core.c
+index b84346dbac2ce..0b24dee1ed3cf 100644
+--- a/drivers/perf/amlogic/meson_ddr_pmu_core.c
++++ b/drivers/perf/amlogic/meson_ddr_pmu_core.c
+@@ -156,10 +156,14 @@ static int meson_ddr_perf_event_add(struct perf_event *event, int flags)
+ 	u64 config2 = event->attr.config2;
+ 	int i;
+ 
+-	for_each_set_bit(i, (const unsigned long *)&config1, sizeof(config1))
++	for_each_set_bit(i,
++			 (const unsigned long *)&config1,
++			 BITS_PER_TYPE(config1))
+ 		meson_ddr_set_axi_filter(event, i);
+ 
+-	for_each_set_bit(i, (const unsigned long *)&config2, sizeof(config2))
++	for_each_set_bit(i,
++			 (const unsigned long *)&config2,
++			 BITS_PER_TYPE(config2))
+ 		meson_ddr_set_axi_filter(event, i + 64);
+ 
+ 	if (flags & PERF_EF_START)
+diff --git a/drivers/perf/arm-cmn.c b/drivers/perf/arm-cmn.c
+index c9689861be3fa..44b719f39c3b3 100644
+--- a/drivers/perf/arm-cmn.c
++++ b/drivers/perf/arm-cmn.c
+@@ -57,14 +57,12 @@
+ #define CMN_INFO_REQ_VC_NUM		GENMASK_ULL(1, 0)
+ 
+ /* XPs also have some local topology info which has uses too */
+-#define CMN_MXP__CONNECT_INFO_P0	0x0008
+-#define CMN_MXP__CONNECT_INFO_P1	0x0010
+-#define CMN_MXP__CONNECT_INFO_P2	0x0028
+-#define CMN_MXP__CONNECT_INFO_P3	0x0030
+-#define CMN_MXP__CONNECT_INFO_P4	0x0038
+-#define CMN_MXP__CONNECT_INFO_P5	0x0040
++#define CMN_MXP__CONNECT_INFO(p)	(0x0008 + 8 * (p))
+ #define CMN__CONNECT_INFO_DEVICE_TYPE	GENMASK_ULL(4, 0)
+ 
++#define CMN_MAX_PORTS			6
++#define CI700_CONNECT_INFO_P2_5_OFFSET	0x10
++
+ /* PMU registers occupy the 3rd 4KB page of each node's region */
+ #define CMN_PMU_OFFSET			0x2000
+ 
+@@ -166,7 +164,7 @@
+ #define CMN_EVENT_BYNODEID(event)	FIELD_GET(CMN_CONFIG_BYNODEID, (event)->attr.config)
+ #define CMN_EVENT_NODEID(event)		FIELD_GET(CMN_CONFIG_NODEID, (event)->attr.config)
+ 
+-#define CMN_CONFIG_WP_COMBINE		GENMASK_ULL(27, 24)
++#define CMN_CONFIG_WP_COMBINE		GENMASK_ULL(30, 27)
+ #define CMN_CONFIG_WP_DEV_SEL		GENMASK_ULL(50, 48)
+ #define CMN_CONFIG_WP_CHN_SEL		GENMASK_ULL(55, 51)
+ /* Note that we don't yet support the tertiary match group on newer IPs */
+@@ -396,6 +394,25 @@ static struct arm_cmn_node *arm_cmn_node(const struct arm_cmn *cmn,
+ 	return NULL;
+ }
+ 
++static u32 arm_cmn_device_connect_info(const struct arm_cmn *cmn,
++				       const struct arm_cmn_node *xp, int port)
++{
++	int offset = CMN_MXP__CONNECT_INFO(port);
++
++	if (port >= 2) {
++		if (cmn->model & (CMN600 | CMN650))
++			return 0;
++		/*
++		 * CI-700 may have extra ports, but still has the
++		 * mesh_port_connect_info registers in the way.
++		 */
++		if (cmn->model == CI700)
++			offset += CI700_CONNECT_INFO_P2_5_OFFSET;
++	}
++
++	return readl_relaxed(xp->pmu_base - CMN_PMU_OFFSET + offset);
++}
++
+ static struct dentry *arm_cmn_debugfs;
+ 
+ #ifdef CONFIG_DEBUG_FS
+@@ -469,7 +486,7 @@ static int arm_cmn_map_show(struct seq_file *s, void *data)
+ 	y = cmn->mesh_y;
+ 	while (y--) {
+ 		int xp_base = cmn->mesh_x * y;
+-		u8 port[6][CMN_MAX_DIMENSION];
++		u8 port[CMN_MAX_PORTS][CMN_MAX_DIMENSION];
+ 
+ 		for (x = 0; x < cmn->mesh_x; x++)
+ 			seq_puts(s, "--------+");
+@@ -477,14 +494,9 @@ static int arm_cmn_map_show(struct seq_file *s, void *data)
+ 		seq_printf(s, "\n%d    |", y);
+ 		for (x = 0; x < cmn->mesh_x; x++) {
+ 			struct arm_cmn_node *xp = cmn->xps + xp_base + x;
+-			void __iomem *base = xp->pmu_base - CMN_PMU_OFFSET;
+-
+-			port[0][x] = readl_relaxed(base + CMN_MXP__CONNECT_INFO_P0);
+-			port[1][x] = readl_relaxed(base + CMN_MXP__CONNECT_INFO_P1);
+-			port[2][x] = readl_relaxed(base + CMN_MXP__CONNECT_INFO_P2);
+-			port[3][x] = readl_relaxed(base + CMN_MXP__CONNECT_INFO_P3);
+-			port[4][x] = readl_relaxed(base + CMN_MXP__CONNECT_INFO_P4);
+-			port[5][x] = readl_relaxed(base + CMN_MXP__CONNECT_INFO_P5);
++
++			for (p = 0; p < CMN_MAX_PORTS; p++)
++				port[p][x] = arm_cmn_device_connect_info(cmn, xp, p);
+ 			seq_printf(s, " XP #%-2d |", xp_base + x);
+ 		}
+ 
+@@ -2083,18 +2095,9 @@ static int arm_cmn_discover(struct arm_cmn *cmn, unsigned int rgn_offset)
+ 		 * from this, since in that case we will see at least one XP
+ 		 * with port 2 connected, for the HN-D.
+ 		 */
+-		if (readq_relaxed(xp_region + CMN_MXP__CONNECT_INFO_P0))
+-			xp_ports |= BIT(0);
+-		if (readq_relaxed(xp_region + CMN_MXP__CONNECT_INFO_P1))
+-			xp_ports |= BIT(1);
+-		if (readq_relaxed(xp_region + CMN_MXP__CONNECT_INFO_P2))
+-			xp_ports |= BIT(2);
+-		if (readq_relaxed(xp_region + CMN_MXP__CONNECT_INFO_P3))
+-			xp_ports |= BIT(3);
+-		if (readq_relaxed(xp_region + CMN_MXP__CONNECT_INFO_P4))
+-			xp_ports |= BIT(4);
+-		if (readq_relaxed(xp_region + CMN_MXP__CONNECT_INFO_P5))
+-			xp_ports |= BIT(5);
++		for (int p = 0; p < CMN_MAX_PORTS; p++)
++			if (arm_cmn_device_connect_info(cmn, xp, p))
++				xp_ports |= BIT(p);
+ 
+ 		if (cmn->multi_dtm && (xp_ports & 0xc))
+ 			arm_cmn_init_dtm(dtm++, xp, 1);
+diff --git a/drivers/perf/riscv_pmu_sbi.c b/drivers/perf/riscv_pmu_sbi.c
+index 70cb50fd41c29..4f3ac296b3e25 100644
+--- a/drivers/perf/riscv_pmu_sbi.c
++++ b/drivers/perf/riscv_pmu_sbi.c
+@@ -924,7 +924,7 @@ static int __init pmu_sbi_devinit(void)
+ 	struct platform_device *pdev;
+ 
+ 	if (sbi_spec_version < sbi_mk_version(0, 3) ||
+-	    sbi_probe_extension(SBI_EXT_PMU) <= 0) {
++	    !sbi_probe_extension(SBI_EXT_PMU)) {
+ 		return 0;
+ 	}
+ 
+diff --git a/drivers/phy/qualcomm/phy-qcom-qmp-pcie.c b/drivers/phy/qualcomm/phy-qcom-qmp-pcie.c
+index 5182aeac43ee6..184d702f92d23 100644
+--- a/drivers/phy/qualcomm/phy-qcom-qmp-pcie.c
++++ b/drivers/phy/qualcomm/phy-qcom-qmp-pcie.c
+@@ -2152,7 +2152,7 @@ static const struct qmp_phy_cfg msm8998_pciephy_cfg = {
+ };
+ 
+ static const struct qmp_phy_cfg sc8180x_pciephy_cfg = {
+-	.lanes			= 1,
++	.lanes			= 2,
+ 
+ 	.tbls = {
+ 		.serdes		= sc8180x_qmp_pcie_serdes_tbl,
+diff --git a/drivers/phy/tegra/xusb.c b/drivers/phy/tegra/xusb.c
+index 78045bd6c2140..a90aea33e06ac 100644
+--- a/drivers/phy/tegra/xusb.c
++++ b/drivers/phy/tegra/xusb.c
+@@ -805,6 +805,7 @@ static int tegra_xusb_add_usb2_port(struct tegra_xusb_padctl *padctl,
+ 	usb2->base.lane = usb2->base.ops->map(&usb2->base);
+ 	if (IS_ERR(usb2->base.lane)) {
+ 		err = PTR_ERR(usb2->base.lane);
++		tegra_xusb_port_unregister(&usb2->base);
+ 		goto out;
+ 	}
+ 
+@@ -871,6 +872,7 @@ static int tegra_xusb_add_ulpi_port(struct tegra_xusb_padctl *padctl,
+ 	ulpi->base.lane = ulpi->base.ops->map(&ulpi->base);
+ 	if (IS_ERR(ulpi->base.lane)) {
+ 		err = PTR_ERR(ulpi->base.lane);
++		tegra_xusb_port_unregister(&ulpi->base);
+ 		goto out;
+ 	}
+ 
+diff --git a/drivers/phy/ti/phy-j721e-wiz.c b/drivers/phy/ti/phy-j721e-wiz.c
+index 1b83c98a78f0f..1b5f1a5e2b3ba 100644
+--- a/drivers/phy/ti/phy-j721e-wiz.c
++++ b/drivers/phy/ti/phy-j721e-wiz.c
+@@ -443,18 +443,17 @@ static int wiz_mode_select(struct wiz *wiz)
+ 	int i;
+ 
+ 	for (i = 0; i < num_lanes; i++) {
+-		if (wiz->lane_phy_type[i] == PHY_TYPE_DP)
++		if (wiz->lane_phy_type[i] == PHY_TYPE_DP) {
+ 			mode = LANE_MODE_GEN1;
+-		else if (wiz->lane_phy_type[i] == PHY_TYPE_QSGMII)
++		} else if (wiz->lane_phy_type[i] == PHY_TYPE_QSGMII) {
+ 			mode = LANE_MODE_GEN2;
+-		else
+-			continue;
+-
+-		if (wiz->lane_phy_type[i] == PHY_TYPE_USXGMII) {
++		} else if (wiz->lane_phy_type[i] == PHY_TYPE_USXGMII) {
+ 			ret = regmap_field_write(wiz->p0_mac_src_sel[i], 0x3);
+ 			ret = regmap_field_write(wiz->p0_rxfclk_sel[i], 0x3);
+ 			ret = regmap_field_write(wiz->p0_refclk_sel[i], 0x3);
+ 			mode = LANE_MODE_GEN1;
++		} else {
++			continue;
+ 		}
+ 
+ 		ret = regmap_field_write(wiz->p_standard_mode[i], mode);
+diff --git a/drivers/pinctrl/bcm/pinctrl-bcm2835.c b/drivers/pinctrl/bcm/pinctrl-bcm2835.c
+index 8e2551a08c372..7435173e10f43 100644
+--- a/drivers/pinctrl/bcm/pinctrl-bcm2835.c
++++ b/drivers/pinctrl/bcm/pinctrl-bcm2835.c
+@@ -90,6 +90,8 @@ struct bcm2835_pinctrl {
+ 	struct pinctrl_gpio_range gpio_range;
+ 
+ 	raw_spinlock_t irq_lock[BCM2835_NUM_BANKS];
++	/* Protect FSEL registers */
++	spinlock_t fsel_lock;
+ };
+ 
+ /* pins are just named GPIO0..GPIO53 */
+@@ -284,14 +286,19 @@ static inline void bcm2835_pinctrl_fsel_set(
+ 		struct bcm2835_pinctrl *pc, unsigned pin,
+ 		enum bcm2835_fsel fsel)
+ {
+-	u32 val = bcm2835_gpio_rd(pc, FSEL_REG(pin));
+-	enum bcm2835_fsel cur = (val >> FSEL_SHIFT(pin)) & BCM2835_FSEL_MASK;
++	u32 val;
++	enum bcm2835_fsel cur;
++	unsigned long flags;
++
++	spin_lock_irqsave(&pc->fsel_lock, flags);
++	val = bcm2835_gpio_rd(pc, FSEL_REG(pin));
++	cur = (val >> FSEL_SHIFT(pin)) & BCM2835_FSEL_MASK;
+ 
+ 	dev_dbg(pc->dev, "read %08x (%u => %s)\n", val, pin,
+-			bcm2835_functions[cur]);
++		bcm2835_functions[cur]);
+ 
+ 	if (cur == fsel)
+-		return;
++		goto unlock;
+ 
+ 	if (cur != BCM2835_FSEL_GPIO_IN && fsel != BCM2835_FSEL_GPIO_IN) {
+ 		/* always transition through GPIO_IN */
+@@ -309,6 +316,9 @@ static inline void bcm2835_pinctrl_fsel_set(
+ 	dev_dbg(pc->dev, "write %08x (%u <= %s)\n", val, pin,
+ 			bcm2835_functions[fsel]);
+ 	bcm2835_gpio_wr(pc, FSEL_REG(pin), val);
++
++unlock:
++	spin_unlock_irqrestore(&pc->fsel_lock, flags);
+ }
+ 
+ static int bcm2835_gpio_direction_input(struct gpio_chip *chip, unsigned offset)
+@@ -1248,6 +1258,7 @@ static int bcm2835_pinctrl_probe(struct platform_device *pdev)
+ 	pc->gpio_chip = *pdata->gpio_chip;
+ 	pc->gpio_chip.parent = dev;
+ 
++	spin_lock_init(&pc->fsel_lock);
+ 	for (i = 0; i < BCM2835_NUM_BANKS; i++) {
+ 		unsigned long events;
+ 		unsigned offset;
+diff --git a/drivers/pinctrl/qcom/pinctrl-lpass-lpi.c b/drivers/pinctrl/qcom/pinctrl-lpass-lpi.c
+index 87920257bb732..27fc8b6719544 100644
+--- a/drivers/pinctrl/qcom/pinctrl-lpass-lpi.c
++++ b/drivers/pinctrl/qcom/pinctrl-lpass-lpi.c
+@@ -221,6 +221,15 @@ static int lpi_config_set(struct pinctrl_dev *pctldev, unsigned int group,
+ 		}
+ 	}
+ 
++	/*
++	 * As per Hardware Programming Guide, when configuring pin as output,
++	 * set the pin value before setting output-enable (OE).
++	 */
++	if (output_enabled) {
++		val = u32_encode_bits(value ? 1 : 0, LPI_GPIO_VALUE_OUT_MASK);
++		lpi_gpio_write(pctrl, group, LPI_GPIO_VALUE_REG, val);
++	}
++
+ 	val = lpi_gpio_read(pctrl, group, LPI_GPIO_CFG_REG);
+ 
+ 	u32p_replace_bits(&val, pullup, LPI_GPIO_PULL_MASK);
+@@ -230,11 +239,6 @@ static int lpi_config_set(struct pinctrl_dev *pctldev, unsigned int group,
+ 
+ 	lpi_gpio_write(pctrl, group, LPI_GPIO_CFG_REG, val);
+ 
+-	if (output_enabled) {
+-		val = u32_encode_bits(value ? 1 : 0, LPI_GPIO_VALUE_OUT_MASK);
+-		lpi_gpio_write(pctrl, group, LPI_GPIO_VALUE_REG, val);
+-	}
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/pinctrl/ralink/pinctrl-mt7620.c b/drivers/pinctrl/ralink/pinctrl-mt7620.c
+index 4e8d26bb34302..06b86c7268392 100644
+--- a/drivers/pinctrl/ralink/pinctrl-mt7620.c
++++ b/drivers/pinctrl/ralink/pinctrl-mt7620.c
+@@ -372,6 +372,7 @@ static int mt7620_pinctrl_probe(struct platform_device *pdev)
+ 
+ static const struct of_device_id mt7620_pinctrl_match[] = {
+ 	{ .compatible = "ralink,mt7620-pinctrl" },
++	{ .compatible = "ralink,rt2880-pinmux" },
+ 	{}
+ };
+ MODULE_DEVICE_TABLE(of, mt7620_pinctrl_match);
+diff --git a/drivers/pinctrl/ralink/pinctrl-mt7621.c b/drivers/pinctrl/ralink/pinctrl-mt7621.c
+index eddc0ba6d468c..fb5824922e788 100644
+--- a/drivers/pinctrl/ralink/pinctrl-mt7621.c
++++ b/drivers/pinctrl/ralink/pinctrl-mt7621.c
+@@ -97,6 +97,7 @@ static int mt7621_pinctrl_probe(struct platform_device *pdev)
+ 
+ static const struct of_device_id mt7621_pinctrl_match[] = {
+ 	{ .compatible = "ralink,mt7621-pinctrl" },
++	{ .compatible = "ralink,rt2880-pinmux" },
+ 	{}
+ };
+ MODULE_DEVICE_TABLE(of, mt7621_pinctrl_match);
+diff --git a/drivers/pinctrl/ralink/pinctrl-rt2880.c b/drivers/pinctrl/ralink/pinctrl-rt2880.c
+index 3e2f1aaaf0957..d7a65fcc7755a 100644
+--- a/drivers/pinctrl/ralink/pinctrl-rt2880.c
++++ b/drivers/pinctrl/ralink/pinctrl-rt2880.c
+@@ -41,6 +41,7 @@ static int rt2880_pinctrl_probe(struct platform_device *pdev)
+ 
+ static const struct of_device_id rt2880_pinctrl_match[] = {
+ 	{ .compatible = "ralink,rt2880-pinctrl" },
++	{ .compatible = "ralink,rt2880-pinmux" },
+ 	{}
+ };
+ MODULE_DEVICE_TABLE(of, rt2880_pinctrl_match);
+diff --git a/drivers/pinctrl/ralink/pinctrl-rt305x.c b/drivers/pinctrl/ralink/pinctrl-rt305x.c
+index bdaee5ce1ee08..f6092c64383e5 100644
+--- a/drivers/pinctrl/ralink/pinctrl-rt305x.c
++++ b/drivers/pinctrl/ralink/pinctrl-rt305x.c
+@@ -118,6 +118,7 @@ static int rt305x_pinctrl_probe(struct platform_device *pdev)
+ 
+ static const struct of_device_id rt305x_pinctrl_match[] = {
+ 	{ .compatible = "ralink,rt305x-pinctrl" },
++	{ .compatible = "ralink,rt2880-pinmux" },
+ 	{}
+ };
+ MODULE_DEVICE_TABLE(of, rt305x_pinctrl_match);
+diff --git a/drivers/pinctrl/ralink/pinctrl-rt3883.c b/drivers/pinctrl/ralink/pinctrl-rt3883.c
+index 392208662355d..5f766d76bafa6 100644
+--- a/drivers/pinctrl/ralink/pinctrl-rt3883.c
++++ b/drivers/pinctrl/ralink/pinctrl-rt3883.c
+@@ -88,6 +88,7 @@ static int rt3883_pinctrl_probe(struct platform_device *pdev)
+ 
+ static const struct of_device_id rt3883_pinctrl_match[] = {
+ 	{ .compatible = "ralink,rt3883-pinctrl" },
++	{ .compatible = "ralink,rt2880-pinmux" },
+ 	{}
+ };
+ MODULE_DEVICE_TABLE(of, rt3883_pinctrl_match);
+diff --git a/drivers/pinctrl/renesas/pfc-r8a779a0.c b/drivers/pinctrl/renesas/pfc-r8a779a0.c
+index 760c83a8740bd..6069869353bb4 100644
+--- a/drivers/pinctrl/renesas/pfc-r8a779a0.c
++++ b/drivers/pinctrl/renesas/pfc-r8a779a0.c
+@@ -696,16 +696,8 @@ static const u16 pinmux_data[] = {
+ 	PINMUX_SINGLE(PCIE0_CLKREQ_N),
+ 
+ 	PINMUX_SINGLE(AVB0_PHY_INT),
+-	PINMUX_SINGLE(AVB0_MAGIC),
+-	PINMUX_SINGLE(AVB0_MDC),
+-	PINMUX_SINGLE(AVB0_MDIO),
+-	PINMUX_SINGLE(AVB0_TXCREFCLK),
+ 
+ 	PINMUX_SINGLE(AVB1_PHY_INT),
+-	PINMUX_SINGLE(AVB1_MAGIC),
+-	PINMUX_SINGLE(AVB1_MDC),
+-	PINMUX_SINGLE(AVB1_MDIO),
+-	PINMUX_SINGLE(AVB1_TXCREFCLK),
+ 
+ 	PINMUX_SINGLE(AVB2_AVTP_PPS),
+ 	PINMUX_SINGLE(AVB2_AVTP_CAPTURE),
+diff --git a/drivers/pinctrl/renesas/pfc-r8a779f0.c b/drivers/pinctrl/renesas/pfc-r8a779f0.c
+index 417c357f16b19..65c141ce909ac 100644
+--- a/drivers/pinctrl/renesas/pfc-r8a779f0.c
++++ b/drivers/pinctrl/renesas/pfc-r8a779f0.c
+@@ -1213,7 +1213,7 @@ static const unsigned int tsn1_avtp_pps_pins[] = {
+ 	RCAR_GP_PIN(3, 13),
+ };
+ static const unsigned int tsn1_avtp_pps_mux[] = {
+-	TSN0_AVTP_PPS_MARK,
++	TSN1_AVTP_PPS_MARK,
+ };
+ static const unsigned int tsn1_avtp_capture_a_pins[] = {
+ 	/* TSN1_AVTP_CAPTURE_A */
+diff --git a/drivers/pinctrl/renesas/pfc-r8a779g0.c b/drivers/pinctrl/renesas/pfc-r8a779g0.c
+index bf7fcce2d9c6b..a47b02bf46863 100644
+--- a/drivers/pinctrl/renesas/pfc-r8a779g0.c
++++ b/drivers/pinctrl/renesas/pfc-r8a779g0.c
+@@ -156,54 +156,54 @@
+ #define GPSR3_0		F_(MMC_SD_D1,		IP0SR3_3_0)
+ 
+ /* GPSR4 */
+-#define GPSR4_24	FM(AVS1)
+-#define GPSR4_23	FM(AVS0)
+-#define GPSR4_22	FM(PCIE1_CLKREQ_N)
+-#define GPSR4_21	FM(PCIE0_CLKREQ_N)
+-#define GPSR4_20	FM(TSN0_TXCREFCLK)
+-#define GPSR4_19	FM(TSN0_TD2)
+-#define GPSR4_18	FM(TSN0_TD3)
+-#define GPSR4_17	FM(TSN0_RD2)
+-#define GPSR4_16	FM(TSN0_RD3)
+-#define GPSR4_15	FM(TSN0_TD0)
+-#define GPSR4_14	FM(TSN0_TD1)
+-#define GPSR4_13	FM(TSN0_RD1)
+-#define GPSR4_12	FM(TSN0_TXC)
+-#define GPSR4_11	FM(TSN0_RXC)
+-#define GPSR4_10	FM(TSN0_RD0)
+-#define GPSR4_9		FM(TSN0_TX_CTL)
+-#define GPSR4_8		FM(TSN0_AVTP_PPS0)
+-#define GPSR4_7		FM(TSN0_RX_CTL)
+-#define GPSR4_6		FM(TSN0_AVTP_CAPTURE)
+-#define GPSR4_5		FM(TSN0_AVTP_MATCH)
+-#define GPSR4_4		FM(TSN0_LINK)
+-#define GPSR4_3		FM(TSN0_PHY_INT)
+-#define GPSR4_2		FM(TSN0_AVTP_PPS1)
+-#define GPSR4_1		FM(TSN0_MDC)
+-#define GPSR4_0		FM(TSN0_MDIO)
++#define GPSR4_24	F_(AVS1,		IP3SR4_3_0)
++#define GPSR4_23	F_(AVS0,		IP2SR4_31_28)
++#define GPSR4_22	F_(PCIE1_CLKREQ_N,	IP2SR4_27_24)
++#define GPSR4_21	F_(PCIE0_CLKREQ_N,	IP2SR4_23_20)
++#define GPSR4_20	F_(TSN0_TXCREFCLK,	IP2SR4_19_16)
++#define GPSR4_19	F_(TSN0_TD2,		IP2SR4_15_12)
++#define GPSR4_18	F_(TSN0_TD3,		IP2SR4_11_8)
++#define GPSR4_17	F_(TSN0_RD2,		IP2SR4_7_4)
++#define GPSR4_16	F_(TSN0_RD3,		IP2SR4_3_0)
++#define GPSR4_15	F_(TSN0_TD0,		IP1SR4_31_28)
++#define GPSR4_14	F_(TSN0_TD1,		IP1SR4_27_24)
++#define GPSR4_13	F_(TSN0_RD1,		IP1SR4_23_20)
++#define GPSR4_12	F_(TSN0_TXC,		IP1SR4_19_16)
++#define GPSR4_11	F_(TSN0_RXC,		IP1SR4_15_12)
++#define GPSR4_10	F_(TSN0_RD0,		IP1SR4_11_8)
++#define GPSR4_9		F_(TSN0_TX_CTL,		IP1SR4_7_4)
++#define GPSR4_8		F_(TSN0_AVTP_PPS0,	IP1SR4_3_0)
++#define GPSR4_7		F_(TSN0_RX_CTL,		IP0SR4_31_28)
++#define GPSR4_6		F_(TSN0_AVTP_CAPTURE,	IP0SR4_27_24)
++#define GPSR4_5		F_(TSN0_AVTP_MATCH,	IP0SR4_23_20)
++#define GPSR4_4		F_(TSN0_LINK,		IP0SR4_19_16)
++#define GPSR4_3		F_(TSN0_PHY_INT,	IP0SR4_15_12)
++#define GPSR4_2		F_(TSN0_AVTP_PPS1,	IP0SR4_11_8)
++#define GPSR4_1		F_(TSN0_MDC,		IP0SR4_7_4)
++#define GPSR4_0		F_(TSN0_MDIO,		IP0SR4_3_0)
+ 
+ /* GPSR 5 */
+-#define GPSR5_20	FM(AVB2_RX_CTL)
+-#define GPSR5_19	FM(AVB2_TX_CTL)
+-#define GPSR5_18	FM(AVB2_RXC)
+-#define GPSR5_17	FM(AVB2_RD0)
+-#define GPSR5_16	FM(AVB2_TXC)
+-#define GPSR5_15	FM(AVB2_TD0)
+-#define GPSR5_14	FM(AVB2_RD1)
+-#define GPSR5_13	FM(AVB2_RD2)
+-#define GPSR5_12	FM(AVB2_TD1)
+-#define GPSR5_11	FM(AVB2_TD2)
+-#define GPSR5_10	FM(AVB2_MDIO)
+-#define GPSR5_9		FM(AVB2_RD3)
+-#define GPSR5_8		FM(AVB2_TD3)
+-#define GPSR5_7		FM(AVB2_TXCREFCLK)
+-#define GPSR5_6		FM(AVB2_MDC)
+-#define GPSR5_5		FM(AVB2_MAGIC)
+-#define GPSR5_4		FM(AVB2_PHY_INT)
+-#define GPSR5_3		FM(AVB2_LINK)
+-#define GPSR5_2		FM(AVB2_AVTP_MATCH)
+-#define GPSR5_1		FM(AVB2_AVTP_CAPTURE)
+-#define GPSR5_0		FM(AVB2_AVTP_PPS)
++#define GPSR5_20	F_(AVB2_RX_CTL,		IP2SR5_19_16)
++#define GPSR5_19	F_(AVB2_TX_CTL,		IP2SR5_15_12)
++#define GPSR5_18	F_(AVB2_RXC,		IP2SR5_11_8)
++#define GPSR5_17	F_(AVB2_RD0,		IP2SR5_7_4)
++#define GPSR5_16	F_(AVB2_TXC,		IP2SR5_3_0)
++#define GPSR5_15	F_(AVB2_TD0,		IP1SR5_31_28)
++#define GPSR5_14	F_(AVB2_RD1,		IP1SR5_27_24)
++#define GPSR5_13	F_(AVB2_RD2,		IP1SR5_23_20)
++#define GPSR5_12	F_(AVB2_TD1,		IP1SR5_19_16)
++#define GPSR5_11	F_(AVB2_TD2,		IP1SR5_15_12)
++#define GPSR5_10	F_(AVB2_MDIO,		IP1SR5_11_8)
++#define GPSR5_9		F_(AVB2_RD3,		IP1SR5_7_4)
++#define GPSR5_8		F_(AVB2_TD3,		IP1SR5_3_0)
++#define GPSR5_7		F_(AVB2_TXCREFCLK,	IP0SR5_31_28)
++#define GPSR5_6		F_(AVB2_MDC,		IP0SR5_27_24)
++#define GPSR5_5		F_(AVB2_MAGIC,		IP0SR5_23_20)
++#define GPSR5_4		F_(AVB2_PHY_INT,	IP0SR5_19_16)
++#define GPSR5_3		F_(AVB2_LINK,		IP0SR5_15_12)
++#define GPSR5_2		F_(AVB2_AVTP_MATCH,	IP0SR5_11_8)
++#define GPSR5_1		F_(AVB2_AVTP_CAPTURE,	IP0SR5_7_4)
++#define GPSR5_0		F_(AVB2_AVTP_PPS,	IP0SR5_3_0)
+ 
+ /* GPSR 6 */
+ #define GPSR6_20	F_(AVB1_TXCREFCLK,	IP2SR6_19_16)
+@@ -268,209 +268,271 @@
+ #define GPSR8_0		F_(SCL0,		IP0SR8_3_0)
+ 
+ /* SR0 */
+-/* IP0SR0 */		/* 0 */			/* 1 */			/* 2 */		/* 3		4	 5	  6	   7	    8	     9	      A	       B	C	 D	  E	   F */
+-#define IP0SR0_3_0	F_(0, 0)		FM(ERROROUTC_B)		FM(TCLK2_A)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR0_7_4	F_(0, 0)		FM(MSIOF3_SS1)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR0_11_8	F_(0, 0)		FM(MSIOF3_SS2)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR0_15_12	FM(IRQ3)		FM(MSIOF3_SCK)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR0_19_16	FM(IRQ2)		FM(MSIOF3_TXD)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR0_23_20	FM(IRQ1)		FM(MSIOF3_RXD)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR0_27_24	FM(IRQ0)		FM(MSIOF3_SYNC)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR0_31_28	FM(MSIOF5_SS2)		F_(0, 0)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-
+-/* IP1SR0 */		/* 0 */			/* 1 */			/* 2 */		/* 3		4	 5	  6	   7	    8	     9	      A	       B	C	 D	  E	   F */
+-#define IP1SR0_3_0	FM(MSIOF5_SS1)		F_(0, 0)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP1SR0_7_4	FM(MSIOF5_SYNC)		F_(0, 0)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP1SR0_11_8	FM(MSIOF5_TXD)		F_(0, 0)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP1SR0_15_12	FM(MSIOF5_SCK)		F_(0, 0)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP1SR0_19_16	FM(MSIOF5_RXD)		F_(0, 0)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP1SR0_23_20	FM(MSIOF2_SS2)		FM(TCLK1)		FM(IRQ2_A)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP1SR0_27_24	FM(MSIOF2_SS1)		FM(HTX1)		FM(TX1)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP1SR0_31_28	FM(MSIOF2_SYNC)		FM(HRX1)		FM(RX1)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-
+-/* IP2SR0 */		/* 0 */			/* 1 */			/* 2 */		/* 3		4	 5	  6	   7	    8	     9	      A	       B	C	 D	  E	   F */
+-#define IP2SR0_3_0	FM(MSIOF2_TXD)		FM(HCTS1_N)		FM(CTS1_N)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP2SR0_7_4	FM(MSIOF2_SCK)		FM(HRTS1_N)		FM(RTS1_N)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP2SR0_11_8	FM(MSIOF2_RXD)		FM(HSCK1)		FM(SCK1)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++/* IP0SR0 */		/* 0 */			/* 1 */			/* 2 */			/* 3		4	 5	  6	   7	    8	     9	      A	       B	C	 D	  E	   F */
++#define IP0SR0_3_0	F_(0, 0)		FM(ERROROUTC_N_B)	FM(TCLK2_A)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR0_7_4	F_(0, 0)		FM(MSIOF3_SS1)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR0_11_8	F_(0, 0)		FM(MSIOF3_SS2)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR0_15_12	FM(IRQ3)		FM(MSIOF3_SCK)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR0_19_16	FM(IRQ2)		FM(MSIOF3_TXD)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR0_23_20	FM(IRQ1)		FM(MSIOF3_RXD)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR0_27_24	FM(IRQ0)		FM(MSIOF3_SYNC)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR0_31_28	FM(MSIOF5_SS2)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++
++/* IP1SR0 */		/* 0 */			/* 1 */			/* 2 */			/* 3		4	 5	  6	   7	    8	     9	      A	       B	C	 D	  E	   F */
++#define IP1SR0_3_0	FM(MSIOF5_SS1)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR0_7_4	FM(MSIOF5_SYNC)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR0_11_8	FM(MSIOF5_TXD)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR0_15_12	FM(MSIOF5_SCK)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR0_19_16	FM(MSIOF5_RXD)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR0_23_20	FM(MSIOF2_SS2)		FM(TCLK1)		FM(IRQ2_A)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR0_27_24	FM(MSIOF2_SS1)		FM(HTX1)		FM(TX1)			F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR0_31_28	FM(MSIOF2_SYNC)		FM(HRX1)		FM(RX1)			F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++
++/* IP2SR0 */		/* 0 */			/* 1 */			/* 2 */			/* 3		4	 5	  6	   7	    8	     9	      A	       B	C	 D	  E	   F */
++#define IP2SR0_3_0	FM(MSIOF2_TXD)		FM(HCTS1_N)		FM(CTS1_N)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP2SR0_7_4	FM(MSIOF2_SCK)		FM(HRTS1_N)		FM(RTS1_N)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP2SR0_11_8	FM(MSIOF2_RXD)		FM(HSCK1)		FM(SCK1)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+ 
+ /* SR1 */
+-/* IP0SR1 */		/* 0 */			/* 1 */			/* 2 */		/* 3		4	 5	  6	   7	    8	     9	      A	       B	C	 D	  E	   F */
+-#define IP0SR1_3_0	FM(MSIOF1_SS2)		FM(HTX3_A)		FM(TX3)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR1_7_4	FM(MSIOF1_SS1)		FM(HCTS3_N_A)		FM(RX3)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR1_11_8	FM(MSIOF1_SYNC)		FM(HRTS3_N_A)		FM(RTS3_N)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR1_15_12	FM(MSIOF1_SCK)		FM(HSCK3_A)		FM(CTS3_N)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR1_19_16	FM(MSIOF1_TXD)		FM(HRX3_A)		FM(SCK3)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR1_23_20	FM(MSIOF1_RXD)		F_(0, 0)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR1_27_24	FM(MSIOF0_SS2)		FM(HTX1_X)		FM(TX1_X)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR1_31_28	FM(MSIOF0_SS1)		FM(HRX1_X)		FM(RX1_X)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-
+-/* IP1SR1 */		/* 0 */			/* 1 */			/* 2 */		/* 3		4	 5	  6	   7	    8	     9	      A	       B	C	 D	  E	   F */
+-#define IP1SR1_3_0	FM(MSIOF0_SYNC)		FM(HCTS1_N_X)		FM(CTS1_N_X)	FM(CANFD5_TX_B)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP1SR1_7_4	FM(MSIOF0_TXD)		FM(HRTS1_N_X)		FM(RTS1_N_X)	FM(CANFD5_RX_B)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP1SR1_11_8	FM(MSIOF0_SCK)		FM(HSCK1_X)		FM(SCK1_X)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP1SR1_15_12	FM(MSIOF0_RXD)		F_(0, 0)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP1SR1_19_16	FM(HTX0)		FM(TX0)			F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP1SR1_23_20	FM(HCTS0_N)		FM(CTS0_N)		FM(PWM8_A)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP1SR1_27_24	FM(HRTS0_N)		FM(RTS0_N)		FM(PWM9_A)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP1SR1_31_28	FM(HSCK0)		FM(SCK0)		FM(PWM0_A)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-
+-/* IP2SR1 */		/* 0 */			/* 1 */			/* 2 */		/* 3		4	 5	  6	   7	    8	     9	      A	       B	C	 D	  E	   F */
+-#define IP2SR1_3_0	FM(HRX0)		FM(RX0)			F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP2SR1_7_4	FM(SCIF_CLK)		FM(IRQ4_A)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP2SR1_11_8	FM(SSI_SCK)		FM(TCLK3)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP2SR1_15_12	FM(SSI_WS)		FM(TCLK4)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP2SR1_19_16	FM(SSI_SD)		FM(IRQ0_A)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP2SR1_23_20	FM(AUDIO_CLKOUT)	FM(IRQ1_A)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP2SR1_27_24	FM(AUDIO_CLKIN)		FM(PWM3_A)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP2SR1_31_28	F_(0, 0)		FM(TCLK2)		FM(MSIOF4_SS1)	FM(IRQ3_B)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-
+-/* IP3SR1 */		/* 0 */			/* 1 */			/* 2 */		/* 3		4	 5	  6	   7	    8	     9	      A	       B	C	 D	  E	   F */
+-#define IP3SR1_3_0	FM(HRX3)		FM(SCK3_A)		FM(MSIOF4_SS2)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP3SR1_7_4	FM(HSCK3)		FM(CTS3_N_A)		FM(MSIOF4_SCK)	FM(TPU0TO0_A)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP3SR1_11_8	FM(HRTS3_N)		FM(RTS3_N_A)		FM(MSIOF4_TXD)	FM(TPU0TO1_A)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP3SR1_15_12	FM(HCTS3_N)		FM(RX3_A)		FM(MSIOF4_RXD)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP3SR1_19_16	FM(HTX3)		FM(TX3_A)		FM(MSIOF4_SYNC)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++/* IP0SR1 */		/* 0 */			/* 1 */			/* 2 */			/* 3		4	 5	  6	   7	    8	     9	      A	       B	C	 D	  E	   F */
++#define IP0SR1_3_0	FM(MSIOF1_SS2)		FM(HTX3_A)		FM(TX3)			F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR1_7_4	FM(MSIOF1_SS1)		FM(HCTS3_N_A)		FM(RX3)			F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR1_11_8	FM(MSIOF1_SYNC)		FM(HRTS3_N_A)		FM(RTS3_N)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR1_15_12	FM(MSIOF1_SCK)		FM(HSCK3_A)		FM(CTS3_N)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR1_19_16	FM(MSIOF1_TXD)		FM(HRX3_A)		FM(SCK3)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR1_23_20	FM(MSIOF1_RXD)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR1_27_24	FM(MSIOF0_SS2)		FM(HTX1_X)		FM(TX1_X)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR1_31_28	FM(MSIOF0_SS1)		FM(HRX1_X)		FM(RX1_X)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++
++/* IP1SR1 */		/* 0 */			/* 1 */			/* 2 */			/* 3		4	 5	  6	   7	    8	     9	      A	       B	C	 D	  E	   F */
++#define IP1SR1_3_0	FM(MSIOF0_SYNC)		FM(HCTS1_N_X)		FM(CTS1_N_X)		FM(CANFD5_TX_B)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR1_7_4	FM(MSIOF0_TXD)		FM(HRTS1_N_X)		FM(RTS1_N_X)		FM(CANFD5_RX_B)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR1_11_8	FM(MSIOF0_SCK)		FM(HSCK1_X)		FM(SCK1_X)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR1_15_12	FM(MSIOF0_RXD)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR1_19_16	FM(HTX0)		FM(TX0)			F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR1_23_20	FM(HCTS0_N)		FM(CTS0_N)		FM(PWM8_A)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR1_27_24	FM(HRTS0_N)		FM(RTS0_N)		FM(PWM9_A)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR1_31_28	FM(HSCK0)		FM(SCK0)		FM(PWM0_A)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++
++/* IP2SR1 */		/* 0 */			/* 1 */			/* 2 */			/* 3		4	 5	  6	   7	    8	     9	      A	       B	C	 D	  E	   F */
++#define IP2SR1_3_0	FM(HRX0)		FM(RX0)			F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP2SR1_7_4	FM(SCIF_CLK)		FM(IRQ4_A)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP2SR1_11_8	FM(SSI_SCK)		FM(TCLK3)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP2SR1_15_12	FM(SSI_WS)		FM(TCLK4)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP2SR1_19_16	FM(SSI_SD)		FM(IRQ0_A)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP2SR1_23_20	FM(AUDIO_CLKOUT)	FM(IRQ1_A)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP2SR1_27_24	FM(AUDIO_CLKIN)		FM(PWM3_A)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP2SR1_31_28	F_(0, 0)		FM(TCLK2)		FM(MSIOF4_SS1)		FM(IRQ3_B)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++
++/* IP3SR1 */		/* 0 */			/* 1 */			/* 2 */			/* 3		4	 5	  6	   7	    8	     9	      A	       B	C	 D	  E	   F */
++#define IP3SR1_3_0	FM(HRX3)		FM(SCK3_A)		FM(MSIOF4_SS2)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP3SR1_7_4	FM(HSCK3)		FM(CTS3_N_A)		FM(MSIOF4_SCK)		FM(TPU0TO0_A)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP3SR1_11_8	FM(HRTS3_N)		FM(RTS3_N_A)		FM(MSIOF4_TXD)		FM(TPU0TO1_A)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP3SR1_15_12	FM(HCTS3_N)		FM(RX3_A)		FM(MSIOF4_RXD)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP3SR1_19_16	FM(HTX3)		FM(TX3_A)		FM(MSIOF4_SYNC)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+ 
+ /* SR2 */
+-/* IP0SR2 */		/* 0 */			/* 1 */			/* 2 */		/* 3		4	 5	  6	   7	    8	     9	      A	       B	C	 D	  E	   F */
+-#define IP0SR2_3_0	FM(FXR_TXDA)		FM(CANFD1_TX)		FM(TPU0TO2_A)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR2_7_4	FM(FXR_TXENA_N)		FM(CANFD1_RX)		FM(TPU0TO3_A)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR2_11_8	FM(RXDA_EXTFXR)		FM(CANFD5_TX)		FM(IRQ5)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR2_15_12	FM(CLK_EXTFXR)		FM(CANFD5_RX)		FM(IRQ4_B)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR2_19_16	FM(RXDB_EXTFXR)		F_(0, 0)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR2_23_20	FM(FXR_TXENB_N)		F_(0, 0)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR2_27_24	FM(FXR_TXDB)		F_(0, 0)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR2_31_28	FM(TPU0TO1)		FM(CANFD6_TX)		F_(0, 0)	FM(TCLK2_B)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-
+-/* IP1SR2 */		/* 0 */			/* 1 */			/* 2 */		/* 3		4	 5	  6	   7	    8	     9	      A	       B	C	 D	  E	   F */
+-#define IP1SR2_3_0	FM(TPU0TO0)		FM(CANFD6_RX)		F_(0, 0)	FM(TCLK1_A)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP1SR2_7_4	FM(CAN_CLK)		FM(FXR_TXENA_N_X)	F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP1SR2_11_8	FM(CANFD0_TX)		FM(FXR_TXENB_N_X)	F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP1SR2_15_12	FM(CANFD0_RX)		FM(STPWT_EXTFXR)	F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP1SR2_19_16	FM(CANFD2_TX)		FM(TPU0TO2)		F_(0, 0)	FM(TCLK3_A)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP1SR2_23_20	FM(CANFD2_RX)		FM(TPU0TO3)		FM(PWM1_B)	FM(TCLK4_A)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP1SR2_27_24	FM(CANFD3_TX)		F_(0, 0)		FM(PWM2_B)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP1SR2_31_28	FM(CANFD3_RX)		F_(0, 0)		FM(PWM3_B)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-
+-/* IP2SR2 */		/* 0 */			/* 1 */			/* 2 */		/* 3		4	 5	  6	   7	    8	     9	      A	       B	C	 D	  E	   F */
+-#define IP2SR2_3_0	FM(CANFD4_TX)		F_(0, 0)		FM(PWM4)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP2SR2_7_4	FM(CANFD4_RX)		F_(0, 0)		FM(PWM5)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP2SR2_11_8	FM(CANFD7_TX)		F_(0, 0)		FM(PWM6)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP2SR2_15_12	FM(CANFD7_RX)		F_(0, 0)		FM(PWM7)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++/* IP0SR2 */		/* 0 */			/* 1 */			/* 2 */			/* 3		4	 5	  6	   7	    8	     9	      A	       B	C	 D	  E	   F */
++#define IP0SR2_3_0	FM(FXR_TXDA)		FM(CANFD1_TX)		FM(TPU0TO2_A)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR2_7_4	FM(FXR_TXENA_N)		FM(CANFD1_RX)		FM(TPU0TO3_A)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR2_11_8	FM(RXDA_EXTFXR)		FM(CANFD5_TX)		FM(IRQ5)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR2_15_12	FM(CLK_EXTFXR)		FM(CANFD5_RX)		FM(IRQ4_B)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR2_19_16	FM(RXDB_EXTFXR)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR2_23_20	FM(FXR_TXENB_N)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR2_27_24	FM(FXR_TXDB)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR2_31_28	FM(TPU0TO1)		FM(CANFD6_TX)		F_(0, 0)		FM(TCLK2_B)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++
++/* IP1SR2 */		/* 0 */			/* 1 */			/* 2 */			/* 3		4	 5	  6	   7	    8	     9	      A	       B	C	 D	  E	   F */
++#define IP1SR2_3_0	FM(TPU0TO0)		FM(CANFD6_RX)		F_(0, 0)		FM(TCLK1_A)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR2_7_4	FM(CAN_CLK)		FM(FXR_TXENA_N_X)	F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR2_11_8	FM(CANFD0_TX)		FM(FXR_TXENB_N_X)	F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR2_15_12	FM(CANFD0_RX)		FM(STPWT_EXTFXR)	F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR2_19_16	FM(CANFD2_TX)		FM(TPU0TO2)		F_(0, 0)		FM(TCLK3_A)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR2_23_20	FM(CANFD2_RX)		FM(TPU0TO3)		FM(PWM1_B)		FM(TCLK4_A)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR2_27_24	FM(CANFD3_TX)		F_(0, 0)		FM(PWM2_B)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR2_31_28	FM(CANFD3_RX)		F_(0, 0)		FM(PWM3_B)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++
++/* IP2SR2 */		/* 0 */			/* 1 */			/* 2 */			/* 3		4	 5	  6	   7	    8	     9	      A	       B	C	 D	  E	   F */
++#define IP2SR2_3_0	FM(CANFD4_TX)		F_(0, 0)		FM(PWM4)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP2SR2_7_4	FM(CANFD4_RX)		F_(0, 0)		FM(PWM5)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP2SR2_11_8	FM(CANFD7_TX)		F_(0, 0)		FM(PWM6)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP2SR2_15_12	FM(CANFD7_RX)		F_(0, 0)		FM(PWM7)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+ 
+ /* SR3 */
+-/* IP0SR3 */		/* 0 */			/* 1 */			/* 2 */		/* 3		4	 5	  6	   7	    8	     9	      A	       B	C	 D	  E	   F */
+-#define IP0SR3_3_0	FM(MMC_SD_D1)		F_(0, 0)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR3_7_4	FM(MMC_SD_D0)		F_(0, 0)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR3_11_8	FM(MMC_SD_D2)		F_(0, 0)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR3_15_12	FM(MMC_SD_CLK)		F_(0, 0)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR3_19_16	FM(MMC_DS)		F_(0, 0)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR3_23_20	FM(MMC_SD_D3)		F_(0, 0)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR3_27_24	FM(MMC_D5)		F_(0, 0)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR3_31_28	FM(MMC_D4)		F_(0, 0)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-
+-/* IP1SR3 */		/* 0 */			/* 1 */			/* 2 */		/* 3		4	 5	  6	   7	    8	     9	      A	       B	C	 D	  E	   F */
+-#define IP1SR3_3_0	FM(MMC_D7)		F_(0, 0)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP1SR3_7_4	FM(MMC_D6)		F_(0, 0)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP1SR3_11_8	FM(MMC_SD_CMD)		F_(0, 0)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP1SR3_15_12	FM(SD_CD)		F_(0, 0)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP1SR3_19_16	FM(SD_WP)		F_(0, 0)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP1SR3_23_20	FM(IPC_CLKIN)		FM(IPC_CLKEN_IN)	FM(PWM1_A)	FM(TCLK3_X)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP1SR3_27_24	FM(IPC_CLKOUT)		FM(IPC_CLKEN_OUT)	FM(ERROROUTC_A)	FM(TCLK4_X)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP1SR3_31_28	FM(QSPI0_SSL)		F_(0, 0)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-
+-/* IP2SR3 */		/* 0 */			/* 1 */			/* 2 */		/* 3		4	 5	  6	   7	    8	     9	      A	       B	C	 D	  E	   F */
+-#define IP2SR3_3_0	FM(QSPI0_IO3)		F_(0, 0)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP2SR3_7_4	FM(QSPI0_IO2)		F_(0, 0)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP2SR3_11_8	FM(QSPI0_MISO_IO1)	F_(0, 0)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP2SR3_15_12	FM(QSPI0_MOSI_IO0)	F_(0, 0)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP2SR3_19_16	FM(QSPI0_SPCLK)		F_(0, 0)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP2SR3_23_20	FM(QSPI1_MOSI_IO0)	F_(0, 0)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP2SR3_27_24	FM(QSPI1_SPCLK)		F_(0, 0)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP2SR3_31_28	FM(QSPI1_MISO_IO1)	F_(0, 0)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-
+-/* IP3SR3 */		/* 0 */			/* 1 */			/* 2 */		/* 3		4	 5	  6	   7	    8	     9	      A	       B	C	 D	  E	   F */
+-#define IP3SR3_3_0	FM(QSPI1_IO2)		F_(0, 0)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP3SR3_7_4	FM(QSPI1_SSL)		F_(0, 0)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP3SR3_11_8	FM(QSPI1_IO3)		F_(0, 0)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP3SR3_15_12	FM(RPC_RESET_N)		F_(0, 0)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP3SR3_19_16	FM(RPC_WP_N)		F_(0, 0)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP3SR3_23_20	FM(RPC_INT_N)		F_(0, 0)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++/* IP0SR3 */		/* 0 */			/* 1 */			/* 2 */			/* 3		4	 5	  6	   7	    8	     9	      A	       B	C	 D	  E	   F */
++#define IP0SR3_3_0	FM(MMC_SD_D1)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR3_7_4	FM(MMC_SD_D0)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR3_11_8	FM(MMC_SD_D2)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR3_15_12	FM(MMC_SD_CLK)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR3_19_16	FM(MMC_DS)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR3_23_20	FM(MMC_SD_D3)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR3_27_24	FM(MMC_D5)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR3_31_28	FM(MMC_D4)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++
++/* IP1SR3 */		/* 0 */			/* 1 */			/* 2 */			/* 3		4	 5	  6	   7	    8	     9	      A	       B	C	 D	  E	   F */
++#define IP1SR3_3_0	FM(MMC_D7)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR3_7_4	FM(MMC_D6)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR3_11_8	FM(MMC_SD_CMD)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR3_15_12	FM(SD_CD)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR3_19_16	FM(SD_WP)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR3_23_20	FM(IPC_CLKIN)		FM(IPC_CLKEN_IN)	FM(PWM1_A)		FM(TCLK3_X)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR3_27_24	FM(IPC_CLKOUT)		FM(IPC_CLKEN_OUT)	FM(ERROROUTC_N_A)	FM(TCLK4_X)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR3_31_28	FM(QSPI0_SSL)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++
++/* IP2SR3 */		/* 0 */			/* 1 */			/* 2 */			/* 3		4	 5	  6	   7	    8	     9	      A	       B	C	 D	  E	   F */
++#define IP2SR3_3_0	FM(QSPI0_IO3)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP2SR3_7_4	FM(QSPI0_IO2)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP2SR3_11_8	FM(QSPI0_MISO_IO1)	F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP2SR3_15_12	FM(QSPI0_MOSI_IO0)	F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP2SR3_19_16	FM(QSPI0_SPCLK)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP2SR3_23_20	FM(QSPI1_MOSI_IO0)	F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP2SR3_27_24	FM(QSPI1_SPCLK)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP2SR3_31_28	FM(QSPI1_MISO_IO1)	F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++
++/* IP3SR3 */		/* 0 */			/* 1 */			/* 2 */			/* 3		4	 5	  6	   7	    8	     9	      A	       B	C	 D	  E	   F */
++#define IP3SR3_3_0	FM(QSPI1_IO2)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP3SR3_7_4	FM(QSPI1_SSL)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP3SR3_11_8	FM(QSPI1_IO3)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP3SR3_15_12	FM(RPC_RESET_N)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP3SR3_19_16	FM(RPC_WP_N)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP3SR3_23_20	FM(RPC_INT_N)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++
++/* SR4 */
++/* IP0SR4 */		/* 0 */			/* 1 */			/* 2 */			/* 3		4	 5	  6	   7	    8	     9	      A	       B	C	 D	  E	   F */
++#define IP0SR4_3_0	FM(TSN0_MDIO)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR4_7_4	FM(TSN0_MDC)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR4_11_8	FM(TSN0_AVTP_PPS1)	F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR4_15_12	FM(TSN0_PHY_INT)	F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR4_19_16	FM(TSN0_LINK)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR4_23_20	FM(TSN0_AVTP_MATCH)	F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR4_27_24	FM(TSN0_AVTP_CAPTURE)	F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR4_31_28	FM(TSN0_RX_CTL)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++
++/* IP1SR4 */		/* 0 */			/* 1 */			/* 2 */			/* 3		4	 5	  6	   7	    8	     9	      A	       B	C	 D	  E	   F */
++#define IP1SR4_3_0	FM(TSN0_AVTP_PPS0)	F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR4_7_4	FM(TSN0_TX_CTL)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR4_11_8	FM(TSN0_RD0)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR4_15_12	FM(TSN0_RXC)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR4_19_16	FM(TSN0_TXC)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR4_23_20	FM(TSN0_RD1)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR4_27_24	FM(TSN0_TD1)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR4_31_28	FM(TSN0_TD0)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++
++/* IP2SR4 */		/* 0 */			/* 1 */			/* 2 */			/* 3		4	 5	  6	   7	    8	     9	      A	       B	C	 D	  E	   F */
++#define IP2SR4_3_0	FM(TSN0_RD3)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP2SR4_7_4	FM(TSN0_RD2)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP2SR4_11_8	FM(TSN0_TD3)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP2SR4_15_12	FM(TSN0_TD2)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP2SR4_19_16	FM(TSN0_TXCREFCLK)	F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP2SR4_23_20	FM(PCIE0_CLKREQ_N)	F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP2SR4_27_24	FM(PCIE1_CLKREQ_N)	F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP2SR4_31_28	FM(AVS0)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++
++/* IP3SR4 */		/* 0 */			/* 1 */			/* 2 */			/* 3		4	 5	  6	   7	    8	     9	      A	       B	C	 D	  E	   F */
++#define IP3SR4_3_0	FM(AVS1)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++
++/* SR5 */
++/* IP0SR5 */		/* 0 */			/* 1 */			/* 2 */			/* 3		4	 5	  6	   7	    8	     9	      A	       B	C	 D	  E	   F */
++#define IP0SR5_3_0	FM(AVB2_AVTP_PPS)	F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR5_7_4	FM(AVB2_AVTP_CAPTURE)	F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR5_11_8	FM(AVB2_AVTP_MATCH)	F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR5_15_12	FM(AVB2_LINK)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR5_19_16	FM(AVB2_PHY_INT)	F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR5_23_20	FM(AVB2_MAGIC)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR5_27_24	FM(AVB2_MDC)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR5_31_28	FM(AVB2_TXCREFCLK)	F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++
++/* IP1SR5 */		/* 0 */			/* 1 */			/* 2 */			/* 3		4	 5	  6	   7	    8	     9	      A	       B	C	 D	  E	   F */
++#define IP1SR5_3_0	FM(AVB2_TD3)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR5_7_4	FM(AVB2_RD3)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR5_11_8	FM(AVB2_MDIO)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR5_15_12	FM(AVB2_TD2)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR5_19_16	FM(AVB2_TD1)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR5_23_20	FM(AVB2_RD2)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR5_27_24	FM(AVB2_RD1)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR5_31_28	FM(AVB2_TD0)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++
++/* IP2SR5 */		/* 0 */			/* 1 */			/* 2 */			/* 3		4	 5	  6	   7	    8	     9	      A	       B	C	 D	  E	   F */
++#define IP2SR5_3_0	FM(AVB2_TXC)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP2SR5_7_4	FM(AVB2_RD0)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP2SR5_11_8	FM(AVB2_RXC)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP2SR5_15_12	FM(AVB2_TX_CTL)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP2SR5_19_16	FM(AVB2_RX_CTL)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+ 
+ /* SR6 */
+-/* IP0SR6 */		/* 0 */			/* 1 */			/* 2 */		/* 3		4	 5	  6	   7	    8	     9	      A	       B	C	 D	  E	   F */
+-#define IP0SR6_3_0	FM(AVB1_MDIO)		F_(0, 0)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR6_7_4	FM(AVB1_MAGIC)		F_(0, 0)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR6_11_8	FM(AVB1_MDC)		F_(0, 0)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR6_15_12	FM(AVB1_PHY_INT)	F_(0, 0)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR6_19_16	FM(AVB1_LINK)		FM(AVB1_MII_TX_ER)	F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR6_23_20	FM(AVB1_AVTP_MATCH)	FM(AVB1_MII_RX_ER)	F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR6_27_24	FM(AVB1_TXC)		FM(AVB1_MII_TXC)	F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR6_31_28	FM(AVB1_TX_CTL)		FM(AVB1_MII_TX_EN)	F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-
+-/* IP1SR6 */		/* 0 */			/* 1 */			/* 2 */		/* 3		4	 5	  6	   7	    8	     9	      A	       B	C	 D	  E	   F */
+-#define IP1SR6_3_0	FM(AVB1_RXC)		FM(AVB1_MII_RXC)	F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP1SR6_7_4	FM(AVB1_RX_CTL)		FM(AVB1_MII_RX_DV)	F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP1SR6_11_8	FM(AVB1_AVTP_PPS)	FM(AVB1_MII_COL)	F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP1SR6_15_12	FM(AVB1_AVTP_CAPTURE)	FM(AVB1_MII_CRS)	F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP1SR6_19_16	FM(AVB1_TD1)		FM(AVB1_MII_TD1)	F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP1SR6_23_20	FM(AVB1_TD0)		FM(AVB1_MII_TD0)	F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP1SR6_27_24	FM(AVB1_RD1)		FM(AVB1_MII_RD1)	F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP1SR6_31_28	FM(AVB1_RD0)		FM(AVB1_MII_RD0)	F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-
+-/* IP2SR6 */		/* 0 */			/* 1 */			/* 2 */		/* 3		4	 5	  6	   7	    8	     9	      A	       B	C	 D	  E	   F */
+-#define IP2SR6_3_0	FM(AVB1_TD2)		FM(AVB1_MII_TD2)	F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP2SR6_7_4	FM(AVB1_RD2)		FM(AVB1_MII_RD2)	F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP2SR6_11_8	FM(AVB1_TD3)		FM(AVB1_MII_TD3)	F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP2SR6_15_12	FM(AVB1_RD3)		FM(AVB1_MII_RD3)	F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP2SR6_19_16	FM(AVB1_TXCREFCLK)	F_(0, 0)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++/* IP0SR6 */		/* 0 */			/* 1 */			/* 2 */			/* 3		4	 5	  6	   7	    8	     9	      A	       B	C	 D	  E	   F */
++#define IP0SR6_3_0	FM(AVB1_MDIO)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR6_7_4	FM(AVB1_MAGIC)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR6_11_8	FM(AVB1_MDC)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR6_15_12	FM(AVB1_PHY_INT)	F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR6_19_16	FM(AVB1_LINK)		FM(AVB1_MII_TX_ER)	F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR6_23_20	FM(AVB1_AVTP_MATCH)	FM(AVB1_MII_RX_ER)	F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR6_27_24	FM(AVB1_TXC)		FM(AVB1_MII_TXC)	F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR6_31_28	FM(AVB1_TX_CTL)		FM(AVB1_MII_TX_EN)	F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++
++/* IP1SR6 */		/* 0 */			/* 1 */			/* 2 */			/* 3		4	 5	  6	   7	    8	     9	      A	       B	C	 D	  E	   F */
++#define IP1SR6_3_0	FM(AVB1_RXC)		FM(AVB1_MII_RXC)	F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR6_7_4	FM(AVB1_RX_CTL)		FM(AVB1_MII_RX_DV)	F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR6_11_8	FM(AVB1_AVTP_PPS)	FM(AVB1_MII_COL)	F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR6_15_12	FM(AVB1_AVTP_CAPTURE)	FM(AVB1_MII_CRS)	F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR6_19_16	FM(AVB1_TD1)		FM(AVB1_MII_TD1)	F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR6_23_20	FM(AVB1_TD0)		FM(AVB1_MII_TD0)	F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR6_27_24	FM(AVB1_RD1)		FM(AVB1_MII_RD1)	F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR6_31_28	FM(AVB1_RD0)		FM(AVB1_MII_RD0)	F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++
++/* IP2SR6 */		/* 0 */			/* 1 */			/* 2 */			/* 3		4	 5	  6	   7	    8	     9	      A	       B	C	 D	  E	   F */
++#define IP2SR6_3_0	FM(AVB1_TD2)		FM(AVB1_MII_TD2)	F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP2SR6_7_4	FM(AVB1_RD2)		FM(AVB1_MII_RD2)	F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP2SR6_11_8	FM(AVB1_TD3)		FM(AVB1_MII_TD3)	F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP2SR6_15_12	FM(AVB1_RD3)		FM(AVB1_MII_RD3)	F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP2SR6_19_16	FM(AVB1_TXCREFCLK)	F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+ 
+ /* SR7 */
+-/* IP0SR7 */		/* 0 */			/* 1 */			/* 2 */		/* 3		4	 5	  6	   7	    8	     9	      A	       B	C	 D	  E	   F */
+-#define IP0SR7_3_0	FM(AVB0_AVTP_PPS)	FM(AVB0_MII_COL)	F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR7_7_4	FM(AVB0_AVTP_CAPTURE)	FM(AVB0_MII_CRS)	F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR7_11_8	FM(AVB0_AVTP_MATCH)	FM(AVB0_MII_RX_ER)	FM(CC5_OSCOUT)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR7_15_12	FM(AVB0_TD3)		FM(AVB0_MII_TD3)	F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR7_19_16	FM(AVB0_LINK)		FM(AVB0_MII_TX_ER)	F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR7_23_20	FM(AVB0_PHY_INT)	F_(0, 0)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR7_27_24	FM(AVB0_TD2)		FM(AVB0_MII_TD2)	F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR7_31_28	FM(AVB0_TD1)		FM(AVB0_MII_TD1)	F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-
+-/* IP1SR7 */		/* 0 */			/* 1 */			/* 2 */		/* 3		4	 5	  6	   7	    8	     9	      A	       B	C	 D	  E	   F */
+-#define IP1SR7_3_0	FM(AVB0_RD3)		FM(AVB0_MII_RD3)	F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP1SR7_7_4	FM(AVB0_TXCREFCLK)	F_(0, 0)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP1SR7_11_8	FM(AVB0_MAGIC)		F_(0, 0)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP1SR7_15_12	FM(AVB0_TD0)		FM(AVB0_MII_TD0)	F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP1SR7_19_16	FM(AVB0_RD2)		FM(AVB0_MII_RD2)	F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP1SR7_23_20	FM(AVB0_MDC)		F_(0, 0)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP1SR7_27_24	FM(AVB0_MDIO)		F_(0, 0)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP1SR7_31_28	FM(AVB0_TXC)		FM(AVB0_MII_TXC)	F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-
+-/* IP2SR7 */		/* 0 */			/* 1 */			/* 2 */		/* 3		4	 5	  6	   7	    8	     9	      A	       B	C	 D	  E	   F */
+-#define IP2SR7_3_0	FM(AVB0_TX_CTL)		FM(AVB0_MII_TX_EN)	F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP2SR7_7_4	FM(AVB0_RD1)		FM(AVB0_MII_RD1)	F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP2SR7_11_8	FM(AVB0_RD0)		FM(AVB0_MII_RD0)	F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP2SR7_15_12	FM(AVB0_RXC)		FM(AVB0_MII_RXC)	F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP2SR7_19_16	FM(AVB0_RX_CTL)		FM(AVB0_MII_RX_DV)	F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++/* IP0SR7 */		/* 0 */			/* 1 */			/* 2 */			/* 3		4	 5	  6	   7	    8	     9	      A	       B	C	 D	  E	   F */
++#define IP0SR7_3_0	FM(AVB0_AVTP_PPS)	FM(AVB0_MII_COL)	F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR7_7_4	FM(AVB0_AVTP_CAPTURE)	FM(AVB0_MII_CRS)	F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR7_11_8	FM(AVB0_AVTP_MATCH)	FM(AVB0_MII_RX_ER)	FM(CC5_OSCOUT)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR7_15_12	FM(AVB0_TD3)		FM(AVB0_MII_TD3)	F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR7_19_16	FM(AVB0_LINK)		FM(AVB0_MII_TX_ER)	F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR7_23_20	FM(AVB0_PHY_INT)	F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR7_27_24	FM(AVB0_TD2)		FM(AVB0_MII_TD2)	F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR7_31_28	FM(AVB0_TD1)		FM(AVB0_MII_TD1)	F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++
++/* IP1SR7 */		/* 0 */			/* 1 */			/* 2 */			/* 3		4	 5	  6	   7	    8	     9	      A	       B	C	 D	  E	   F */
++#define IP1SR7_3_0	FM(AVB0_RD3)		FM(AVB0_MII_RD3)	F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR7_7_4	FM(AVB0_TXCREFCLK)	F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR7_11_8	FM(AVB0_MAGIC)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR7_15_12	FM(AVB0_TD0)		FM(AVB0_MII_TD0)	F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR7_19_16	FM(AVB0_RD2)		FM(AVB0_MII_RD2)	F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR7_23_20	FM(AVB0_MDC)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR7_27_24	FM(AVB0_MDIO)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR7_31_28	FM(AVB0_TXC)		FM(AVB0_MII_TXC)	F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++
++/* IP2SR7 */		/* 0 */			/* 1 */			/* 2 */			/* 3		4	 5	  6	   7	    8	     9	      A	       B	C	 D	  E	   F */
++#define IP2SR7_3_0	FM(AVB0_TX_CTL)		FM(AVB0_MII_TX_EN)	F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP2SR7_7_4	FM(AVB0_RD1)		FM(AVB0_MII_RD1)	F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP2SR7_11_8	FM(AVB0_RD0)		FM(AVB0_MII_RD0)	F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP2SR7_15_12	FM(AVB0_RXC)		FM(AVB0_MII_RXC)	F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP2SR7_19_16	FM(AVB0_RX_CTL)		FM(AVB0_MII_RX_DV)	F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+ 
+ /* SR8 */
+-/* IP0SR8 */		/* 0 */			/* 1 */			/* 2 */		/* 3		4	 5	  6	   7	    8	     9	      A	       B	C	 D	  E	   F */
+-#define IP0SR8_3_0	FM(SCL0)		F_(0, 0)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR8_7_4	FM(SDA0)		F_(0, 0)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR8_11_8	FM(SCL1)		F_(0, 0)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR8_15_12	FM(SDA1)		F_(0, 0)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR8_19_16	FM(SCL2)		F_(0, 0)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR8_23_20	FM(SDA2)		F_(0, 0)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR8_27_24	FM(SCL3)		F_(0, 0)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR8_31_28	FM(SDA3)		F_(0, 0)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-
+-/* IP1SR8 */		/* 0 */			/* 1 */			/* 2 */		/* 3		4	 5	  6	   7	    8	     9	      A	       B	C	 D	  E	   F */
+-#define IP1SR8_3_0	FM(SCL4)		FM(HRX2)		FM(SCK4)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP1SR8_7_4	FM(SDA4)		FM(HTX2)		FM(CTS4_N)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP1SR8_11_8	FM(SCL5)		FM(HRTS2_N)		FM(RTS4_N)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP1SR8_15_12	FM(SDA5)		FM(SCIF_CLK2)		F_(0, 0)	F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP1SR8_19_16	F_(0, 0)		FM(HCTS2_N)		FM(TX4)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP1SR8_23_20	F_(0, 0)		FM(HSCK2)		FM(RX4)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++/* IP0SR8 */		/* 0 */			/* 1 */			/* 2 */			/* 3		4	 5	  6	   7	    8	     9	      A	       B	C	 D	  E	   F */
++#define IP0SR8_3_0	FM(SCL0)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR8_7_4	FM(SDA0)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR8_11_8	FM(SCL1)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR8_15_12	FM(SDA1)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR8_19_16	FM(SCL2)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR8_23_20	FM(SDA2)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR8_27_24	FM(SCL3)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR8_31_28	FM(SDA3)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++
++/* IP1SR8 */		/* 0 */			/* 1 */			/* 2 */			/* 3		4	 5	  6	   7	    8	     9	      A	       B	C	 D	  E	   F */
++#define IP1SR8_3_0	FM(SCL4)		FM(HRX2)		FM(SCK4)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR8_7_4	FM(SDA4)		FM(HTX2)		FM(CTS4_N)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR8_11_8	FM(SCL5)		FM(HRTS2_N)		FM(RTS4_N)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR8_15_12	FM(SDA5)		FM(SCIF_CLK2)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR8_19_16	F_(0, 0)		FM(HCTS2_N)		FM(TX4)			F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR8_23_20	F_(0, 0)		FM(HSCK2)		FM(RX4)			F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+ 
+ #define PINMUX_GPSR	\
+ 						GPSR3_29											\
+@@ -542,6 +604,24 @@ FM(IP0SR3_23_20)	IP0SR3_23_20	FM(IP1SR3_23_20)	IP1SR3_23_20	FM(IP2SR3_23_20)	IP2
+ FM(IP0SR3_27_24)	IP0SR3_27_24	FM(IP1SR3_27_24)	IP1SR3_27_24	FM(IP2SR3_27_24)	IP2SR3_27_24						\
+ FM(IP0SR3_31_28)	IP0SR3_31_28	FM(IP1SR3_31_28)	IP1SR3_31_28	FM(IP2SR3_31_28)	IP2SR3_31_28						\
+ \
++FM(IP0SR4_3_0)		IP0SR4_3_0	FM(IP1SR4_3_0)		IP1SR4_3_0	FM(IP2SR4_3_0)		IP2SR4_3_0	FM(IP3SR4_3_0)		IP3SR4_3_0	\
++FM(IP0SR4_7_4)		IP0SR4_7_4	FM(IP1SR4_7_4)		IP1SR4_7_4	FM(IP2SR4_7_4)		IP2SR4_7_4	\
++FM(IP0SR4_11_8)		IP0SR4_11_8	FM(IP1SR4_11_8)		IP1SR4_11_8	FM(IP2SR4_11_8)		IP2SR4_11_8	\
++FM(IP0SR4_15_12)	IP0SR4_15_12	FM(IP1SR4_15_12)	IP1SR4_15_12	FM(IP2SR4_15_12)	IP2SR4_15_12	\
++FM(IP0SR4_19_16)	IP0SR4_19_16	FM(IP1SR4_19_16)	IP1SR4_19_16	FM(IP2SR4_19_16)	IP2SR4_19_16	\
++FM(IP0SR4_23_20)	IP0SR4_23_20	FM(IP1SR4_23_20)	IP1SR4_23_20	FM(IP2SR4_23_20)	IP2SR4_23_20	\
++FM(IP0SR4_27_24)	IP0SR4_27_24	FM(IP1SR4_27_24)	IP1SR4_27_24	FM(IP2SR4_27_24)	IP2SR4_27_24	\
++FM(IP0SR4_31_28)	IP0SR4_31_28	FM(IP1SR4_31_28)	IP1SR4_31_28	FM(IP2SR4_31_28)	IP2SR4_31_28	\
++\
++FM(IP0SR5_3_0)		IP0SR5_3_0	FM(IP1SR5_3_0)		IP1SR5_3_0	FM(IP2SR5_3_0)		IP2SR5_3_0	\
++FM(IP0SR5_7_4)		IP0SR5_7_4	FM(IP1SR5_7_4)		IP1SR5_7_4	FM(IP2SR5_7_4)		IP2SR5_7_4	\
++FM(IP0SR5_11_8)		IP0SR5_11_8	FM(IP1SR5_11_8)		IP1SR5_11_8	FM(IP2SR5_11_8)		IP2SR5_11_8	\
++FM(IP0SR5_15_12)	IP0SR5_15_12	FM(IP1SR5_15_12)	IP1SR5_15_12	FM(IP2SR5_15_12)	IP2SR5_15_12	\
++FM(IP0SR5_19_16)	IP0SR5_19_16	FM(IP1SR5_19_16)	IP1SR5_19_16	FM(IP2SR5_19_16)	IP2SR5_19_16	\
++FM(IP0SR5_23_20)	IP0SR5_23_20	FM(IP1SR5_23_20)	IP1SR5_23_20	\
++FM(IP0SR5_27_24)	IP0SR5_27_24	FM(IP1SR5_27_24)	IP1SR5_27_24	\
++FM(IP0SR5_31_28)	IP0SR5_31_28	FM(IP1SR5_31_28)	IP1SR5_31_28	\
++\
+ FM(IP0SR6_3_0)		IP0SR6_3_0	FM(IP1SR6_3_0)		IP1SR6_3_0	FM(IP2SR6_3_0)		IP2SR6_3_0	\
+ FM(IP0SR6_7_4)		IP0SR6_7_4	FM(IP1SR6_7_4)		IP1SR6_7_4	FM(IP2SR6_7_4)		IP2SR6_7_4	\
+ FM(IP0SR6_11_8)		IP0SR6_11_8	FM(IP1SR6_11_8)		IP1SR6_11_8	FM(IP2SR6_11_8)		IP2SR6_11_8	\
+@@ -569,54 +649,6 @@ FM(IP0SR8_23_20)	IP0SR8_23_20	FM(IP1SR8_23_20)	IP1SR8_23_20	\
+ FM(IP0SR8_27_24)	IP0SR8_27_24	\
+ FM(IP0SR8_31_28)	IP0SR8_31_28
+ 
+-/* MOD_SEL4 */			/* 0 */				/* 1 */
+-#define MOD_SEL4_19		FM(SEL_TSN0_TD2_0)		FM(SEL_TSN0_TD2_1)
+-#define MOD_SEL4_18		FM(SEL_TSN0_TD3_0)		FM(SEL_TSN0_TD3_1)
+-#define MOD_SEL4_15		FM(SEL_TSN0_TD0_0)		FM(SEL_TSN0_TD0_1)
+-#define MOD_SEL4_14		FM(SEL_TSN0_TD1_0)		FM(SEL_TSN0_TD1_1)
+-#define MOD_SEL4_12		FM(SEL_TSN0_TXC_0)		FM(SEL_TSN0_TXC_1)
+-#define MOD_SEL4_9		FM(SEL_TSN0_TX_CTL_0)		FM(SEL_TSN0_TX_CTL_1)
+-#define MOD_SEL4_8		FM(SEL_TSN0_AVTP_PPS0_0)	FM(SEL_TSN0_AVTP_PPS0_1)
+-#define MOD_SEL4_5		FM(SEL_TSN0_AVTP_MATCH_0)	FM(SEL_TSN0_AVTP_MATCH_1)
+-#define MOD_SEL4_2		FM(SEL_TSN0_AVTP_PPS1_0)	FM(SEL_TSN0_AVTP_PPS1_1)
+-#define MOD_SEL4_1		FM(SEL_TSN0_MDC_0)		FM(SEL_TSN0_MDC_1)
+-
+-/* MOD_SEL5 */			/* 0 */				/* 1 */
+-#define MOD_SEL5_19		FM(SEL_AVB2_TX_CTL_0)		FM(SEL_AVB2_TX_CTL_1)
+-#define MOD_SEL5_16		FM(SEL_AVB2_TXC_0)		FM(SEL_AVB2_TXC_1)
+-#define MOD_SEL5_15		FM(SEL_AVB2_TD0_0)		FM(SEL_AVB2_TD0_1)
+-#define MOD_SEL5_12		FM(SEL_AVB2_TD1_0)		FM(SEL_AVB2_TD1_1)
+-#define MOD_SEL5_11		FM(SEL_AVB2_TD2_0)		FM(SEL_AVB2_TD2_1)
+-#define MOD_SEL5_8		FM(SEL_AVB2_TD3_0)		FM(SEL_AVB2_TD3_1)
+-#define MOD_SEL5_6		FM(SEL_AVB2_MDC_0)		FM(SEL_AVB2_MDC_1)
+-#define MOD_SEL5_5		FM(SEL_AVB2_MAGIC_0)		FM(SEL_AVB2_MAGIC_1)
+-#define MOD_SEL5_2		FM(SEL_AVB2_AVTP_MATCH_0)	FM(SEL_AVB2_AVTP_MATCH_1)
+-#define MOD_SEL5_0		FM(SEL_AVB2_AVTP_PPS_0)		FM(SEL_AVB2_AVTP_PPS_1)
+-
+-/* MOD_SEL6 */			/* 0 */				/* 1 */
+-#define MOD_SEL6_18		FM(SEL_AVB1_TD3_0)		FM(SEL_AVB1_TD3_1)
+-#define MOD_SEL6_16		FM(SEL_AVB1_TD2_0)		FM(SEL_AVB1_TD2_1)
+-#define MOD_SEL6_13		FM(SEL_AVB1_TD0_0)		FM(SEL_AVB1_TD0_1)
+-#define MOD_SEL6_12		FM(SEL_AVB1_TD1_0)		FM(SEL_AVB1_TD1_1)
+-#define MOD_SEL6_10		FM(SEL_AVB1_AVTP_PPS_0)		FM(SEL_AVB1_AVTP_PPS_1)
+-#define MOD_SEL6_7		FM(SEL_AVB1_TX_CTL_0)		FM(SEL_AVB1_TX_CTL_1)
+-#define MOD_SEL6_6		FM(SEL_AVB1_TXC_0)		FM(SEL_AVB1_TXC_1)
+-#define MOD_SEL6_5		FM(SEL_AVB1_AVTP_MATCH_0)	FM(SEL_AVB1_AVTP_MATCH_1)
+-#define MOD_SEL6_2		FM(SEL_AVB1_MDC_0)		FM(SEL_AVB1_MDC_1)
+-#define MOD_SEL6_1		FM(SEL_AVB1_MAGIC_0)		FM(SEL_AVB1_MAGIC_1)
+-
+-/* MOD_SEL7 */			/* 0 */				/* 1 */
+-#define MOD_SEL7_16		FM(SEL_AVB0_TX_CTL_0)		FM(SEL_AVB0_TX_CTL_1)
+-#define MOD_SEL7_15		FM(SEL_AVB0_TXC_0)		FM(SEL_AVB0_TXC_1)
+-#define MOD_SEL7_13		FM(SEL_AVB0_MDC_0)		FM(SEL_AVB0_MDC_1)
+-#define MOD_SEL7_11		FM(SEL_AVB0_TD0_0)		FM(SEL_AVB0_TD0_1)
+-#define MOD_SEL7_10		FM(SEL_AVB0_MAGIC_0)		FM(SEL_AVB0_MAGIC_1)
+-#define MOD_SEL7_7		FM(SEL_AVB0_TD1_0)		FM(SEL_AVB0_TD1_1)
+-#define MOD_SEL7_6		FM(SEL_AVB0_TD2_0)		FM(SEL_AVB0_TD2_1)
+-#define MOD_SEL7_3		FM(SEL_AVB0_TD3_0)		FM(SEL_AVB0_TD3_1)
+-#define MOD_SEL7_2		FM(SEL_AVB0_AVTP_MATCH_0)	FM(SEL_AVB0_AVTP_MATCH_1)
+-#define MOD_SEL7_0		FM(SEL_AVB0_AVTP_PPS_0)		FM(SEL_AVB0_AVTP_PPS_1)
+-
+ /* MOD_SEL8 */			/* 0 */				/* 1 */
+ #define MOD_SEL8_11		FM(SEL_SDA5_0)			FM(SEL_SDA5_1)
+ #define MOD_SEL8_10		FM(SEL_SCL5_0)			FM(SEL_SCL5_1)
+@@ -633,26 +665,18 @@ FM(IP0SR8_31_28)	IP0SR8_31_28
+ 
+ #define PINMUX_MOD_SELS \
+ \
+-MOD_SEL4_19		MOD_SEL5_19										\
+-MOD_SEL4_18					MOD_SEL6_18							\
+-														\
+-			MOD_SEL5_16		MOD_SEL6_16		MOD_SEL7_16				\
+-MOD_SEL4_15		MOD_SEL5_15					MOD_SEL7_15				\
+-MOD_SEL4_14													\
+-						MOD_SEL6_13		MOD_SEL7_13				\
+-MOD_SEL4_12		MOD_SEL5_12		MOD_SEL6_12							\
+-			MOD_SEL5_11					MOD_SEL7_11		MOD_SEL8_11	\
+-						MOD_SEL6_10		MOD_SEL7_10		MOD_SEL8_10	\
+-MOD_SEL4_9											MOD_SEL8_9	\
+-MOD_SEL4_8		MOD_SEL5_8								MOD_SEL8_8	\
+-						MOD_SEL6_7		MOD_SEL7_7		MOD_SEL8_7	\
+-			MOD_SEL5_6		MOD_SEL6_6		MOD_SEL7_6		MOD_SEL8_6	\
+-MOD_SEL4_5		MOD_SEL5_5		MOD_SEL6_5					MOD_SEL8_5	\
+-												MOD_SEL8_4	\
+-									MOD_SEL7_3		MOD_SEL8_3	\
+-MOD_SEL4_2		MOD_SEL5_2		MOD_SEL6_2		MOD_SEL7_2		MOD_SEL8_2	\
+-MOD_SEL4_1					MOD_SEL6_1					MOD_SEL8_1	\
+-			MOD_SEL5_0					MOD_SEL7_0		MOD_SEL8_0
++MOD_SEL8_11	\
++MOD_SEL8_10	\
++MOD_SEL8_9	\
++MOD_SEL8_8	\
++MOD_SEL8_7	\
++MOD_SEL8_6	\
++MOD_SEL8_5	\
++MOD_SEL8_4	\
++MOD_SEL8_3	\
++MOD_SEL8_2	\
++MOD_SEL8_1	\
++MOD_SEL8_0
+ 
+ enum {
+ 	PINMUX_RESERVED = 0,
+@@ -686,61 +710,8 @@ enum {
+ static const u16 pinmux_data[] = {
+ 	PINMUX_DATA_GP_ALL(),
+ 
+-	PINMUX_SINGLE(AVS1),
+-	PINMUX_SINGLE(AVS0),
+-	PINMUX_SINGLE(PCIE1_CLKREQ_N),
+-	PINMUX_SINGLE(PCIE0_CLKREQ_N),
+-
+-	/* TSN0 without MODSEL4 */
+-	PINMUX_SINGLE(TSN0_TXCREFCLK),
+-	PINMUX_SINGLE(TSN0_RD2),
+-	PINMUX_SINGLE(TSN0_RD3),
+-	PINMUX_SINGLE(TSN0_RD1),
+-	PINMUX_SINGLE(TSN0_RXC),
+-	PINMUX_SINGLE(TSN0_RD0),
+-	PINMUX_SINGLE(TSN0_RX_CTL),
+-	PINMUX_SINGLE(TSN0_AVTP_CAPTURE),
+-	PINMUX_SINGLE(TSN0_LINK),
+-	PINMUX_SINGLE(TSN0_PHY_INT),
+-	PINMUX_SINGLE(TSN0_MDIO),
+-	/* TSN0 with MODSEL4 */
+-	PINMUX_IPSR_NOGM(0, TSN0_TD2,		SEL_TSN0_TD2_1),
+-	PINMUX_IPSR_NOGM(0, TSN0_TD3,		SEL_TSN0_TD3_1),
+-	PINMUX_IPSR_NOGM(0, TSN0_TD0,		SEL_TSN0_TD0_1),
+-	PINMUX_IPSR_NOGM(0, TSN0_TD1,		SEL_TSN0_TD1_1),
+-	PINMUX_IPSR_NOGM(0, TSN0_TXC,		SEL_TSN0_TXC_1),
+-	PINMUX_IPSR_NOGM(0, TSN0_TX_CTL,	SEL_TSN0_TX_CTL_1),
+-	PINMUX_IPSR_NOGM(0, TSN0_AVTP_PPS0,	SEL_TSN0_AVTP_PPS0_1),
+-	PINMUX_IPSR_NOGM(0, TSN0_AVTP_MATCH,	SEL_TSN0_AVTP_MATCH_1),
+-	PINMUX_IPSR_NOGM(0, TSN0_AVTP_PPS1,	SEL_TSN0_AVTP_PPS1_1),
+-	PINMUX_IPSR_NOGM(0, TSN0_MDC,		SEL_TSN0_MDC_1),
+-
+-	/* TSN0 without MODSEL5 */
+-	PINMUX_SINGLE(AVB2_RX_CTL),
+-	PINMUX_SINGLE(AVB2_RXC),
+-	PINMUX_SINGLE(AVB2_RD0),
+-	PINMUX_SINGLE(AVB2_RD1),
+-	PINMUX_SINGLE(AVB2_RD2),
+-	PINMUX_SINGLE(AVB2_MDIO),
+-	PINMUX_SINGLE(AVB2_RD3),
+-	PINMUX_SINGLE(AVB2_TXCREFCLK),
+-	PINMUX_SINGLE(AVB2_PHY_INT),
+-	PINMUX_SINGLE(AVB2_LINK),
+-	PINMUX_SINGLE(AVB2_AVTP_CAPTURE),
+-	/* TSN0 with MODSEL5 */
+-	PINMUX_IPSR_NOGM(0, AVB2_TX_CTL,	SEL_AVB2_TX_CTL_1),
+-	PINMUX_IPSR_NOGM(0, AVB2_TXC,		SEL_AVB2_TXC_1),
+-	PINMUX_IPSR_NOGM(0, AVB2_TD0,		SEL_AVB2_TD0_1),
+-	PINMUX_IPSR_NOGM(0, AVB2_TD1,		SEL_AVB2_TD1_1),
+-	PINMUX_IPSR_NOGM(0, AVB2_TD2,		SEL_AVB2_TD2_1),
+-	PINMUX_IPSR_NOGM(0, AVB2_TD3,		SEL_AVB2_TD3_1),
+-	PINMUX_IPSR_NOGM(0, AVB2_MDC,		SEL_AVB2_MDC_1),
+-	PINMUX_IPSR_NOGM(0, AVB2_MAGIC,		SEL_AVB2_MAGIC_1),
+-	PINMUX_IPSR_NOGM(0, AVB2_AVTP_MATCH,	SEL_AVB2_AVTP_MATCH_1),
+-	PINMUX_IPSR_NOGM(0, AVB2_AVTP_PPS,	SEL_AVB2_AVTP_PPS_1),
+-
+ 	/* IP0SR0 */
+-	PINMUX_IPSR_GPSR(IP0SR0_3_0,	ERROROUTC_B),
++	PINMUX_IPSR_GPSR(IP0SR0_3_0,	ERROROUTC_N_B),
+ 	PINMUX_IPSR_GPSR(IP0SR0_3_0,	TCLK2_A),
+ 
+ 	PINMUX_IPSR_GPSR(IP0SR0_7_4,	MSIOF3_SS1),
+@@ -1006,7 +977,7 @@ static const u16 pinmux_data[] = {
+ 
+ 	PINMUX_IPSR_GPSR(IP1SR3_27_24,	IPC_CLKOUT),
+ 	PINMUX_IPSR_GPSR(IP1SR3_27_24,	IPC_CLKEN_OUT),
+-	PINMUX_IPSR_GPSR(IP1SR3_27_24,	ERROROUTC_A),
++	PINMUX_IPSR_GPSR(IP1SR3_27_24,	ERROROUTC_N_A),
+ 	PINMUX_IPSR_GPSR(IP1SR3_27_24,	TCLK4_X),
+ 
+ 	PINMUX_IPSR_GPSR(IP1SR3_31_28,	QSPI0_SSL),
+@@ -1029,26 +1000,86 @@ static const u16 pinmux_data[] = {
+ 	PINMUX_IPSR_GPSR(IP3SR3_19_16,	RPC_WP_N),
+ 	PINMUX_IPSR_GPSR(IP3SR3_23_20,	RPC_INT_N),
+ 
++	/* IP0SR4 */
++	PINMUX_IPSR_GPSR(IP0SR4_3_0,	TSN0_MDIO),
++	PINMUX_IPSR_GPSR(IP0SR4_7_4,	TSN0_MDC),
++	PINMUX_IPSR_GPSR(IP0SR4_11_8,	TSN0_AVTP_PPS1),
++	PINMUX_IPSR_GPSR(IP0SR4_15_12,	TSN0_PHY_INT),
++	PINMUX_IPSR_GPSR(IP0SR4_19_16,	TSN0_LINK),
++	PINMUX_IPSR_GPSR(IP0SR4_23_20,	TSN0_AVTP_MATCH),
++	PINMUX_IPSR_GPSR(IP0SR4_27_24,	TSN0_AVTP_CAPTURE),
++	PINMUX_IPSR_GPSR(IP0SR4_31_28,	TSN0_RX_CTL),
++
++	/* IP1SR4 */
++	PINMUX_IPSR_GPSR(IP1SR4_3_0,	TSN0_AVTP_PPS0),
++	PINMUX_IPSR_GPSR(IP1SR4_7_4,	TSN0_TX_CTL),
++	PINMUX_IPSR_GPSR(IP1SR4_11_8,	TSN0_RD0),
++	PINMUX_IPSR_GPSR(IP1SR4_15_12,	TSN0_RXC),
++	PINMUX_IPSR_GPSR(IP1SR4_19_16,	TSN0_TXC),
++	PINMUX_IPSR_GPSR(IP1SR4_23_20,	TSN0_RD1),
++	PINMUX_IPSR_GPSR(IP1SR4_27_24,	TSN0_TD1),
++	PINMUX_IPSR_GPSR(IP1SR4_31_28,	TSN0_TD0),
++
++	/* IP2SR4 */
++	PINMUX_IPSR_GPSR(IP2SR4_3_0,	TSN0_RD3),
++	PINMUX_IPSR_GPSR(IP2SR4_7_4,	TSN0_RD2),
++	PINMUX_IPSR_GPSR(IP2SR4_11_8,	TSN0_TD3),
++	PINMUX_IPSR_GPSR(IP2SR4_15_12,	TSN0_TD2),
++	PINMUX_IPSR_GPSR(IP2SR4_19_16,	TSN0_TXCREFCLK),
++	PINMUX_IPSR_GPSR(IP2SR4_23_20,	PCIE0_CLKREQ_N),
++	PINMUX_IPSR_GPSR(IP2SR4_27_24,	PCIE1_CLKREQ_N),
++	PINMUX_IPSR_GPSR(IP2SR4_31_28,	AVS0),
++
++	/* IP3SR4 */
++	PINMUX_IPSR_GPSR(IP3SR4_3_0,	AVS1),
++
++	/* IP0SR5 */
++	PINMUX_IPSR_GPSR(IP0SR5_3_0,	AVB2_AVTP_PPS),
++	PINMUX_IPSR_GPSR(IP0SR5_7_4,	AVB2_AVTP_CAPTURE),
++	PINMUX_IPSR_GPSR(IP0SR5_11_8,	AVB2_AVTP_MATCH),
++	PINMUX_IPSR_GPSR(IP0SR5_15_12,	AVB2_LINK),
++	PINMUX_IPSR_GPSR(IP0SR5_19_16,	AVB2_PHY_INT),
++	PINMUX_IPSR_GPSR(IP0SR5_23_20,	AVB2_MAGIC),
++	PINMUX_IPSR_GPSR(IP0SR5_27_24,	AVB2_MDC),
++	PINMUX_IPSR_GPSR(IP0SR5_31_28,	AVB2_TXCREFCLK),
++
++	/* IP1SR5 */
++	PINMUX_IPSR_GPSR(IP1SR5_3_0,	AVB2_TD3),
++	PINMUX_IPSR_GPSR(IP1SR5_7_4,	AVB2_RD3),
++	PINMUX_IPSR_GPSR(IP1SR5_11_8,	AVB2_MDIO),
++	PINMUX_IPSR_GPSR(IP1SR5_15_12,	AVB2_TD2),
++	PINMUX_IPSR_GPSR(IP1SR5_19_16,	AVB2_TD1),
++	PINMUX_IPSR_GPSR(IP1SR5_23_20,	AVB2_RD2),
++	PINMUX_IPSR_GPSR(IP1SR5_27_24,	AVB2_RD1),
++	PINMUX_IPSR_GPSR(IP1SR5_31_28,	AVB2_TD0),
++
++	/* IP2SR5 */
++	PINMUX_IPSR_GPSR(IP2SR5_3_0,	AVB2_TXC),
++	PINMUX_IPSR_GPSR(IP2SR5_7_4,	AVB2_RD0),
++	PINMUX_IPSR_GPSR(IP2SR5_11_8,	AVB2_RXC),
++	PINMUX_IPSR_GPSR(IP2SR5_15_12,	AVB2_TX_CTL),
++	PINMUX_IPSR_GPSR(IP2SR5_19_16,	AVB2_RX_CTL),
++
+ 	/* IP0SR6 */
+ 	PINMUX_IPSR_GPSR(IP0SR6_3_0,	AVB1_MDIO),
+ 
+-	PINMUX_IPSR_MSEL(IP0SR6_7_4,	AVB1_MAGIC,		SEL_AVB1_MAGIC_1),
++	PINMUX_IPSR_GPSR(IP0SR6_7_4,	AVB1_MAGIC),
+ 
+-	PINMUX_IPSR_MSEL(IP0SR6_11_8,	AVB1_MDC,		SEL_AVB1_MDC_1),
++	PINMUX_IPSR_GPSR(IP0SR6_11_8,	AVB1_MDC),
+ 
+ 	PINMUX_IPSR_GPSR(IP0SR6_15_12,	AVB1_PHY_INT),
+ 
+ 	PINMUX_IPSR_GPSR(IP0SR6_19_16,	AVB1_LINK),
+ 	PINMUX_IPSR_GPSR(IP0SR6_19_16,	AVB1_MII_TX_ER),
+ 
+-	PINMUX_IPSR_MSEL(IP0SR6_23_20,	AVB1_AVTP_MATCH,	SEL_AVB1_AVTP_MATCH_1),
+-	PINMUX_IPSR_MSEL(IP0SR6_23_20,	AVB1_MII_RX_ER,		SEL_AVB1_AVTP_MATCH_0),
++	PINMUX_IPSR_GPSR(IP0SR6_23_20,	AVB1_AVTP_MATCH),
++	PINMUX_IPSR_GPSR(IP0SR6_23_20,	AVB1_MII_RX_ER),
+ 
+-	PINMUX_IPSR_MSEL(IP0SR6_27_24,	AVB1_TXC,		SEL_AVB1_TXC_1),
+-	PINMUX_IPSR_MSEL(IP0SR6_27_24,	AVB1_MII_TXC,		SEL_AVB1_TXC_0),
++	PINMUX_IPSR_GPSR(IP0SR6_27_24,	AVB1_TXC),
++	PINMUX_IPSR_GPSR(IP0SR6_27_24,	AVB1_MII_TXC),
+ 
+-	PINMUX_IPSR_MSEL(IP0SR6_31_28,	AVB1_TX_CTL,		SEL_AVB1_TX_CTL_1),
+-	PINMUX_IPSR_MSEL(IP0SR6_31_28,	AVB1_MII_TX_EN,		SEL_AVB1_TX_CTL_0),
++	PINMUX_IPSR_GPSR(IP0SR6_31_28,	AVB1_TX_CTL),
++	PINMUX_IPSR_GPSR(IP0SR6_31_28,	AVB1_MII_TX_EN),
+ 
+ 	/* IP1SR6 */
+ 	PINMUX_IPSR_GPSR(IP1SR6_3_0,	AVB1_RXC),
+@@ -1057,17 +1088,17 @@ static const u16 pinmux_data[] = {
+ 	PINMUX_IPSR_GPSR(IP1SR6_7_4,	AVB1_RX_CTL),
+ 	PINMUX_IPSR_GPSR(IP1SR6_7_4,	AVB1_MII_RX_DV),
+ 
+-	PINMUX_IPSR_MSEL(IP1SR6_11_8,	AVB1_AVTP_PPS,		SEL_AVB1_AVTP_PPS_1),
+-	PINMUX_IPSR_MSEL(IP1SR6_11_8,	AVB1_MII_COL,		SEL_AVB1_AVTP_PPS_0),
++	PINMUX_IPSR_GPSR(IP1SR6_11_8,	AVB1_AVTP_PPS),
++	PINMUX_IPSR_GPSR(IP1SR6_11_8,	AVB1_MII_COL),
+ 
+ 	PINMUX_IPSR_GPSR(IP1SR6_15_12,	AVB1_AVTP_CAPTURE),
+ 	PINMUX_IPSR_GPSR(IP1SR6_15_12,	AVB1_MII_CRS),
+ 
+-	PINMUX_IPSR_MSEL(IP1SR6_19_16,	AVB1_TD1,		SEL_AVB1_TD1_1),
+-	PINMUX_IPSR_MSEL(IP1SR6_19_16,	AVB1_MII_TD1,		SEL_AVB1_TD1_0),
++	PINMUX_IPSR_GPSR(IP1SR6_19_16,	AVB1_TD1),
++	PINMUX_IPSR_GPSR(IP1SR6_19_16,	AVB1_MII_TD1),
+ 
+-	PINMUX_IPSR_MSEL(IP1SR6_23_20,	AVB1_TD0,		SEL_AVB1_TD0_1),
+-	PINMUX_IPSR_MSEL(IP1SR6_23_20,	AVB1_MII_TD0,		SEL_AVB1_TD0_0),
++	PINMUX_IPSR_GPSR(IP1SR6_23_20,	AVB1_TD0),
++	PINMUX_IPSR_GPSR(IP1SR6_23_20,	AVB1_MII_TD0),
+ 
+ 	PINMUX_IPSR_GPSR(IP1SR6_27_24,	AVB1_RD1),
+ 	PINMUX_IPSR_GPSR(IP1SR6_27_24,	AVB1_MII_RD1),
+@@ -1076,14 +1107,14 @@ static const u16 pinmux_data[] = {
+ 	PINMUX_IPSR_GPSR(IP1SR6_31_28,	AVB1_MII_RD0),
+ 
+ 	/* IP2SR6 */
+-	PINMUX_IPSR_MSEL(IP2SR6_3_0,	AVB1_TD2,		SEL_AVB1_TD2_1),
+-	PINMUX_IPSR_MSEL(IP2SR6_3_0,	AVB1_MII_TD2,		SEL_AVB1_TD2_0),
++	PINMUX_IPSR_GPSR(IP2SR6_3_0,	AVB1_TD2),
++	PINMUX_IPSR_GPSR(IP2SR6_3_0,	AVB1_MII_TD2),
+ 
+ 	PINMUX_IPSR_GPSR(IP2SR6_7_4,	AVB1_RD2),
+ 	PINMUX_IPSR_GPSR(IP2SR6_7_4,	AVB1_MII_RD2),
+ 
+-	PINMUX_IPSR_MSEL(IP2SR6_11_8,	AVB1_TD3,		SEL_AVB1_TD3_1),
+-	PINMUX_IPSR_MSEL(IP2SR6_11_8,	AVB1_MII_TD3,		SEL_AVB1_TD3_0),
++	PINMUX_IPSR_GPSR(IP2SR6_11_8,	AVB1_TD3),
++	PINMUX_IPSR_GPSR(IP2SR6_11_8,	AVB1_MII_TD3),
+ 
+ 	PINMUX_IPSR_GPSR(IP2SR6_15_12,	AVB1_RD3),
+ 	PINMUX_IPSR_GPSR(IP2SR6_15_12,	AVB1_MII_RD3),
+@@ -1091,29 +1122,29 @@ static const u16 pinmux_data[] = {
+ 	PINMUX_IPSR_GPSR(IP2SR6_19_16,	AVB1_TXCREFCLK),
+ 
+ 	/* IP0SR7 */
+-	PINMUX_IPSR_MSEL(IP0SR7_3_0,	AVB0_AVTP_PPS,		SEL_AVB0_AVTP_PPS_1),
+-	PINMUX_IPSR_MSEL(IP0SR7_3_0,	AVB0_MII_COL,		SEL_AVB0_AVTP_PPS_0),
++	PINMUX_IPSR_GPSR(IP0SR7_3_0,	AVB0_AVTP_PPS),
++	PINMUX_IPSR_GPSR(IP0SR7_3_0,	AVB0_MII_COL),
+ 
+ 	PINMUX_IPSR_GPSR(IP0SR7_7_4,	AVB0_AVTP_CAPTURE),
+ 	PINMUX_IPSR_GPSR(IP0SR7_7_4,	AVB0_MII_CRS),
+ 
+-	PINMUX_IPSR_MSEL(IP0SR7_11_8,	AVB0_AVTP_MATCH,	SEL_AVB0_AVTP_MATCH_1),
+-	PINMUX_IPSR_MSEL(IP0SR7_11_8,	AVB0_MII_RX_ER,		SEL_AVB0_AVTP_MATCH_0),
+-	PINMUX_IPSR_MSEL(IP0SR7_11_8,	CC5_OSCOUT,		SEL_AVB0_AVTP_MATCH_0),
++	PINMUX_IPSR_GPSR(IP0SR7_11_8,	AVB0_AVTP_MATCH),
++	PINMUX_IPSR_GPSR(IP0SR7_11_8,	AVB0_MII_RX_ER),
++	PINMUX_IPSR_GPSR(IP0SR7_11_8,	CC5_OSCOUT),
+ 
+-	PINMUX_IPSR_MSEL(IP0SR7_15_12,	AVB0_TD3,		SEL_AVB0_TD3_1),
+-	PINMUX_IPSR_MSEL(IP0SR7_15_12,	AVB0_MII_TD3,		SEL_AVB0_TD3_0),
++	PINMUX_IPSR_GPSR(IP0SR7_15_12,	AVB0_TD3),
++	PINMUX_IPSR_GPSR(IP0SR7_15_12,	AVB0_MII_TD3),
+ 
+ 	PINMUX_IPSR_GPSR(IP0SR7_19_16,	AVB0_LINK),
+ 	PINMUX_IPSR_GPSR(IP0SR7_19_16,	AVB0_MII_TX_ER),
+ 
+ 	PINMUX_IPSR_GPSR(IP0SR7_23_20,	AVB0_PHY_INT),
+ 
+-	PINMUX_IPSR_MSEL(IP0SR7_27_24,	AVB0_TD2,		SEL_AVB0_TD2_1),
+-	PINMUX_IPSR_MSEL(IP0SR7_27_24,	AVB0_MII_TD2,		SEL_AVB0_TD2_0),
++	PINMUX_IPSR_GPSR(IP0SR7_27_24,	AVB0_TD2),
++	PINMUX_IPSR_GPSR(IP0SR7_27_24,	AVB0_MII_TD2),
+ 
+-	PINMUX_IPSR_MSEL(IP0SR7_31_28,	AVB0_TD1,		SEL_AVB0_TD1_1),
+-	PINMUX_IPSR_MSEL(IP0SR7_31_28,	AVB0_MII_TD1,		SEL_AVB0_TD1_0),
++	PINMUX_IPSR_GPSR(IP0SR7_31_28,	AVB0_TD1),
++	PINMUX_IPSR_GPSR(IP0SR7_31_28,	AVB0_MII_TD1),
+ 
+ 	/* IP1SR7 */
+ 	PINMUX_IPSR_GPSR(IP1SR7_3_0,	AVB0_RD3),
+@@ -1121,24 +1152,24 @@ static const u16 pinmux_data[] = {
+ 
+ 	PINMUX_IPSR_GPSR(IP1SR7_7_4,	AVB0_TXCREFCLK),
+ 
+-	PINMUX_IPSR_MSEL(IP1SR7_11_8,	AVB0_MAGIC,		SEL_AVB0_MAGIC_1),
++	PINMUX_IPSR_GPSR(IP1SR7_11_8,	AVB0_MAGIC),
+ 
+-	PINMUX_IPSR_MSEL(IP1SR7_15_12,	AVB0_TD0,		SEL_AVB0_TD0_1),
+-	PINMUX_IPSR_MSEL(IP1SR7_15_12,	AVB0_MII_TD0,		SEL_AVB0_TD0_0),
++	PINMUX_IPSR_GPSR(IP1SR7_15_12,	AVB0_TD0),
++	PINMUX_IPSR_GPSR(IP1SR7_15_12,	AVB0_MII_TD0),
+ 
+ 	PINMUX_IPSR_GPSR(IP1SR7_19_16,	AVB0_RD2),
+ 	PINMUX_IPSR_GPSR(IP1SR7_19_16,	AVB0_MII_RD2),
+ 
+-	PINMUX_IPSR_MSEL(IP1SR7_23_20,	AVB0_MDC,		SEL_AVB0_MDC_1),
++	PINMUX_IPSR_GPSR(IP1SR7_23_20,	AVB0_MDC),
+ 
+ 	PINMUX_IPSR_GPSR(IP1SR7_27_24,	AVB0_MDIO),
+ 
+-	PINMUX_IPSR_MSEL(IP1SR7_31_28,	AVB0_TXC,		SEL_AVB0_TXC_1),
+-	PINMUX_IPSR_MSEL(IP1SR7_31_28,	AVB0_MII_TXC,		SEL_AVB0_TXC_0),
++	PINMUX_IPSR_GPSR(IP1SR7_31_28,	AVB0_TXC),
++	PINMUX_IPSR_GPSR(IP1SR7_31_28,	AVB0_MII_TXC),
+ 
+ 	/* IP2SR7 */
+-	PINMUX_IPSR_MSEL(IP2SR7_3_0,	AVB0_TX_CTL,		SEL_AVB0_TX_CTL_1),
+-	PINMUX_IPSR_MSEL(IP2SR7_3_0,	AVB0_MII_TX_EN,		SEL_AVB0_TX_CTL_0),
++	PINMUX_IPSR_GPSR(IP2SR7_3_0,	AVB0_TX_CTL),
++	PINMUX_IPSR_GPSR(IP2SR7_3_0,	AVB0_MII_TX_EN),
+ 
+ 	PINMUX_IPSR_GPSR(IP2SR7_7_4,	AVB0_RD1),
+ 	PINMUX_IPSR_GPSR(IP2SR7_7_4,	AVB0_MII_RD1),
+@@ -3419,6 +3450,82 @@ static const struct pinmux_cfg_reg pinmux_config_regs[] = {
+ 		IP3SR3_7_4
+ 		IP3SR3_3_0))
+ 	},
++	{ PINMUX_CFG_REG_VAR("IP0SR4", 0xE6060060, 32,
++			     GROUP(4, 4, 4, 4, 4, 4, 4, 4),
++			     GROUP(
++		IP0SR4_31_28
++		IP0SR4_27_24
++		IP0SR4_23_20
++		IP0SR4_19_16
++		IP0SR4_15_12
++		IP0SR4_11_8
++		IP0SR4_7_4
++		IP0SR4_3_0))
++	},
++	{ PINMUX_CFG_REG_VAR("IP1SR4", 0xE6060064, 32,
++			     GROUP(4, 4, 4, 4, 4, 4, 4, 4),
++			     GROUP(
++		IP1SR4_31_28
++		IP1SR4_27_24
++		IP1SR4_23_20
++		IP1SR4_19_16
++		IP1SR4_15_12
++		IP1SR4_11_8
++		IP1SR4_7_4
++		IP1SR4_3_0))
++	},
++	{ PINMUX_CFG_REG_VAR("IP2SR4", 0xE6060068, 32,
++			     GROUP(4, 4, 4, 4, 4, 4, 4, 4),
++			     GROUP(
++		IP2SR4_31_28
++		IP2SR4_27_24
++		IP2SR4_23_20
++		IP2SR4_19_16
++		IP2SR4_15_12
++		IP2SR4_11_8
++		IP2SR4_7_4
++		IP2SR4_3_0))
++	},
++	{ PINMUX_CFG_REG_VAR("IP3SR4", 0xE606006C, 32,
++			     GROUP(-28, 4),
++			     GROUP(
++		/* IP3SR4_31_4 RESERVED */
++		IP3SR4_3_0))
++	},
++	{ PINMUX_CFG_REG_VAR("IP0SR5", 0xE6060860, 32,
++			     GROUP(4, 4, 4, 4, 4, 4, 4, 4),
++			     GROUP(
++		IP0SR5_31_28
++		IP0SR5_27_24
++		IP0SR5_23_20
++		IP0SR5_19_16
++		IP0SR5_15_12
++		IP0SR5_11_8
++		IP0SR5_7_4
++		IP0SR5_3_0))
++	},
++	{ PINMUX_CFG_REG_VAR("IP1SR5", 0xE6060864, 32,
++			     GROUP(4, 4, 4, 4, 4, 4, 4, 4),
++			     GROUP(
++		IP1SR5_31_28
++		IP1SR5_27_24
++		IP1SR5_23_20
++		IP1SR5_19_16
++		IP1SR5_15_12
++		IP1SR5_11_8
++		IP1SR5_7_4
++		IP1SR5_3_0))
++	},
++	{ PINMUX_CFG_REG_VAR("IP2SR5", 0xE6060868, 32,
++			     GROUP(-12, 4, 4, 4, 4, 4),
++			     GROUP(
++		/* IP2SR5_31_20 RESERVED */
++		IP2SR5_19_16
++		IP2SR5_15_12
++		IP2SR5_11_8
++		IP2SR5_7_4
++		IP2SR5_3_0))
++	},
+ 	{ PINMUX_CFG_REG("IP0SR6", 0xE6061060, 32, 4, GROUP(
+ 		IP0SR6_31_28
+ 		IP0SR6_27_24
+@@ -3505,95 +3612,6 @@ static const struct pinmux_cfg_reg pinmux_config_regs[] = {
+ 
+ #define F_(x, y)	x,
+ #define FM(x)		FN_##x,
+-	{ PINMUX_CFG_REG_VAR("MOD_SEL4", 0xE6060100, 32,
+-			     GROUP(-12, 1, 1, -2, 1, 1, -1, 1, -2, 1, 1, -2, 1,
+-				   -2, 1, 1, -1),
+-			     GROUP(
+-		/* RESERVED 31-20 */
+-		MOD_SEL4_19
+-		MOD_SEL4_18
+-		/* RESERVED 17-16 */
+-		MOD_SEL4_15
+-		MOD_SEL4_14
+-		/* RESERVED 13 */
+-		MOD_SEL4_12
+-		/* RESERVED 11-10 */
+-		MOD_SEL4_9
+-		MOD_SEL4_8
+-		/* RESERVED 7-6 */
+-		MOD_SEL4_5
+-		/* RESERVED 4-3 */
+-		MOD_SEL4_2
+-		MOD_SEL4_1
+-		/* RESERVED 0 */
+-		))
+-	},
+-	{ PINMUX_CFG_REG_VAR("MOD_SEL5", 0xE6060900, 32,
+-			     GROUP(-12, 1, -2, 1, 1, -2, 1, 1, -2, 1, -1,
+-				   1, 1, -2, 1, -1, 1),
+-			     GROUP(
+-		/* RESERVED 31-20 */
+-		MOD_SEL5_19
+-		/* RESERVED 18-17 */
+-		MOD_SEL5_16
+-		MOD_SEL5_15
+-		/* RESERVED 14-13 */
+-		MOD_SEL5_12
+-		MOD_SEL5_11
+-		/* RESERVED 10-9 */
+-		MOD_SEL5_8
+-		/* RESERVED 7 */
+-		MOD_SEL5_6
+-		MOD_SEL5_5
+-		/* RESERVED 4-3 */
+-		MOD_SEL5_2
+-		/* RESERVED 1 */
+-		MOD_SEL5_0))
+-	},
+-	{ PINMUX_CFG_REG_VAR("MOD_SEL6", 0xE6061100, 32,
+-			     GROUP(-13, 1, -1, 1, -2, 1, 1,
+-				   -1, 1, -2, 1, 1, 1, -2, 1, 1, -1),
+-			     GROUP(
+-		/* RESERVED 31-19 */
+-		MOD_SEL6_18
+-		/* RESERVED 17 */
+-		MOD_SEL6_16
+-		/* RESERVED 15-14 */
+-		MOD_SEL6_13
+-		MOD_SEL6_12
+-		/* RESERVED 11 */
+-		MOD_SEL6_10
+-		/* RESERVED 9-8 */
+-		MOD_SEL6_7
+-		MOD_SEL6_6
+-		MOD_SEL6_5
+-		/* RESERVED 4-3 */
+-		MOD_SEL6_2
+-		MOD_SEL6_1
+-		/* RESERVED 0 */
+-		))
+-	},
+-	{ PINMUX_CFG_REG_VAR("MOD_SEL7", 0xE6061900, 32,
+-			     GROUP(-15, 1, 1, -1, 1, -1, 1, 1, -2, 1, 1,
+-				   -2, 1, 1, -1, 1),
+-			     GROUP(
+-		/* RESERVED 31-17 */
+-		MOD_SEL7_16
+-		MOD_SEL7_15
+-		/* RESERVED 14 */
+-		MOD_SEL7_13
+-		/* RESERVED 12 */
+-		MOD_SEL7_11
+-		MOD_SEL7_10
+-		/* RESERVED 9-8 */
+-		MOD_SEL7_7
+-		MOD_SEL7_6
+-		/* RESERVED 5-4 */
+-		MOD_SEL7_3
+-		MOD_SEL7_2
+-		/* RESERVED 1 */
+-		MOD_SEL7_0))
+-	},
+ 	{ PINMUX_CFG_REG_VAR("MOD_SEL8", 0xE6068100, 32,
+ 			     GROUP(-20, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1),
+ 			     GROUP(
+diff --git a/drivers/platform/chrome/cros_typec_switch.c b/drivers/platform/chrome/cros_typec_switch.c
+index 9ed1605f4071d..752720483753d 100644
+--- a/drivers/platform/chrome/cros_typec_switch.c
++++ b/drivers/platform/chrome/cros_typec_switch.c
+@@ -270,6 +270,7 @@ static int cros_typec_register_switches(struct cros_typec_switch_data *sdata)
+ 
+ 	return 0;
+ err_switch:
++	fwnode_handle_put(fwnode);
+ 	cros_typec_unregister_switches(sdata);
+ 	return ret;
+ }
+diff --git a/drivers/platform/x86/amd/Kconfig b/drivers/platform/x86/amd/Kconfig
+index 2ce8cb2170dfc..d9685aef0887d 100644
+--- a/drivers/platform/x86/amd/Kconfig
++++ b/drivers/platform/x86/amd/Kconfig
+@@ -7,7 +7,7 @@ source "drivers/platform/x86/amd/pmf/Kconfig"
+ 
+ config AMD_PMC
+ 	tristate "AMD SoC PMC driver"
+-	depends on ACPI && PCI && RTC_CLASS
++	depends on ACPI && PCI && RTC_CLASS && AMD_NB
+ 	select SERIO
+ 	help
+ 	  The driver provides support for AMD Power Management Controller
+diff --git a/drivers/platform/x86/amd/pmc.c b/drivers/platform/x86/amd/pmc.c
+index 2edaae04a6912..69f305496643f 100644
+--- a/drivers/platform/x86/amd/pmc.c
++++ b/drivers/platform/x86/amd/pmc.c
+@@ -10,6 +10,7 @@
+ 
+ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+ 
++#include <asm/amd_nb.h>
+ #include <linux/acpi.h>
+ #include <linux/bitfield.h>
+ #include <linux/bits.h>
+@@ -37,8 +38,6 @@
+ #define AMD_PMC_SCRATCH_REG_YC		0xD14
+ 
+ /* STB Registers */
+-#define AMD_PMC_STB_INDEX_ADDRESS	0xF8
+-#define AMD_PMC_STB_INDEX_DATA		0xFC
+ #define AMD_PMC_STB_PMI_0		0x03E30600
+ #define AMD_PMC_STB_S2IDLE_PREPARE	0xC6000001
+ #define AMD_PMC_STB_S2IDLE_RESTORE	0xC6000002
+@@ -56,8 +55,6 @@
+ #define S2D_TELEMETRY_DRAMBYTES_MAX	0x1000000
+ 
+ /* Base address of SMU for mapping physical address to virtual address */
+-#define AMD_PMC_SMU_INDEX_ADDRESS	0xB8
+-#define AMD_PMC_SMU_INDEX_DATA		0xBC
+ #define AMD_PMC_MAPPING_SIZE		0x01000
+ #define AMD_PMC_BASE_ADDR_OFFSET	0x10000
+ #define AMD_PMC_BASE_ADDR_LO		0x13B102E8
+@@ -342,33 +339,6 @@ static int amd_pmc_setup_smu_logging(struct amd_pmc_dev *dev)
+ 	return 0;
+ }
+ 
+-static int amd_pmc_idlemask_read(struct amd_pmc_dev *pdev, struct device *dev,
+-				 struct seq_file *s)
+-{
+-	u32 val;
+-
+-	switch (pdev->cpu_id) {
+-	case AMD_CPU_ID_CZN:
+-		val = amd_pmc_reg_read(pdev, AMD_PMC_SCRATCH_REG_CZN);
+-		break;
+-	case AMD_CPU_ID_YC:
+-	case AMD_CPU_ID_CB:
+-	case AMD_CPU_ID_PS:
+-		val = amd_pmc_reg_read(pdev, AMD_PMC_SCRATCH_REG_YC);
+-		break;
+-	default:
+-		return -EINVAL;
+-	}
+-
+-	if (dev)
+-		dev_dbg(pdev->dev, "SMU idlemask s0i3: 0x%x\n", val);
+-
+-	if (s)
+-		seq_printf(s, "SMU idlemask : 0x%x\n", val);
+-
+-	return 0;
+-}
+-
+ static int get_metrics_table(struct amd_pmc_dev *pdev, struct smu_metrics *table)
+ {
+ 	if (!pdev->smu_virt_addr) {
+@@ -403,6 +373,9 @@ static int amd_pmc_get_smu_version(struct amd_pmc_dev *dev)
+ 	int rc;
+ 	u32 val;
+ 
++	if (dev->cpu_id == AMD_CPU_ID_PCO)
++		return -ENODEV;
++
+ 	rc = amd_pmc_send_cmd(dev, 0, &val, SMU_MSG_GETSMUVERSION, 1);
+ 	if (rc)
+ 		return rc;
+@@ -449,12 +422,31 @@ static ssize_t smu_program_show(struct device *d, struct device_attribute *attr,
+ static DEVICE_ATTR_RO(smu_fw_version);
+ static DEVICE_ATTR_RO(smu_program);
+ 
++static umode_t pmc_attr_is_visible(struct kobject *kobj, struct attribute *attr, int idx)
++{
++	struct device *dev = kobj_to_dev(kobj);
++	struct amd_pmc_dev *pdev = dev_get_drvdata(dev);
++
++	if (pdev->cpu_id == AMD_CPU_ID_PCO)
++		return 0;
++	return 0444;
++}
++
+ static struct attribute *pmc_attrs[] = {
+ 	&dev_attr_smu_fw_version.attr,
+ 	&dev_attr_smu_program.attr,
+ 	NULL,
+ };
+-ATTRIBUTE_GROUPS(pmc);
++
++static struct attribute_group pmc_attr_group = {
++	.attrs = pmc_attrs,
++	.is_visible = pmc_attr_is_visible,
++};
++
++static const struct attribute_group *pmc_groups[] = {
++	&pmc_attr_group,
++	NULL,
++};
+ 
+ static int smu_fw_info_show(struct seq_file *s, void *unused)
+ {
+@@ -521,28 +513,47 @@ static int s0ix_stats_show(struct seq_file *s, void *unused)
+ }
+ DEFINE_SHOW_ATTRIBUTE(s0ix_stats);
+ 
+-static int amd_pmc_idlemask_show(struct seq_file *s, void *unused)
++static int amd_pmc_idlemask_read(struct amd_pmc_dev *pdev, struct device *dev,
++				 struct seq_file *s)
+ {
+-	struct amd_pmc_dev *dev = s->private;
++	u32 val;
+ 	int rc;
+ 
+-	/* we haven't yet read SMU version */
+-	if (!dev->major) {
+-		rc = amd_pmc_get_smu_version(dev);
+-		if (rc)
+-			return rc;
++	switch (pdev->cpu_id) {
++	case AMD_CPU_ID_CZN:
++		/* we haven't yet read SMU version */
++		if (!pdev->major) {
++			rc = amd_pmc_get_smu_version(pdev);
++			if (rc)
++				return rc;
++		}
++		if (pdev->major > 56 || (pdev->major >= 55 && pdev->minor >= 37))
++			val = amd_pmc_reg_read(pdev, AMD_PMC_SCRATCH_REG_CZN);
++		else
++			return -EINVAL;
++		break;
++	case AMD_CPU_ID_YC:
++	case AMD_CPU_ID_CB:
++	case AMD_CPU_ID_PS:
++		val = amd_pmc_reg_read(pdev, AMD_PMC_SCRATCH_REG_YC);
++		break;
++	default:
++		return -EINVAL;
+ 	}
+ 
+-	if (dev->major > 56 || (dev->major >= 55 && dev->minor >= 37)) {
+-		rc = amd_pmc_idlemask_read(dev, NULL, s);
+-		if (rc)
+-			return rc;
+-	} else {
+-		seq_puts(s, "Unsupported SMU version for Idlemask\n");
+-	}
++	if (dev)
++		dev_dbg(pdev->dev, "SMU idlemask s0i3: 0x%x\n", val);
++
++	if (s)
++		seq_printf(s, "SMU idlemask : 0x%x\n", val);
+ 
+ 	return 0;
+ }
++
++static int amd_pmc_idlemask_show(struct seq_file *s, void *unused)
++{
++	return amd_pmc_idlemask_read(s->private, NULL, s);
++}
+ DEFINE_SHOW_ATTRIBUTE(amd_pmc_idlemask);
+ 
+ static void amd_pmc_dbgfs_unregister(struct amd_pmc_dev *dev)
+@@ -812,6 +823,14 @@ static void amd_pmc_s2idle_check(void)
+ 		dev_err(pdev->dev, "error writing to STB: %d\n", rc);
+ }
+ 
++static int amd_pmc_dump_data(struct amd_pmc_dev *pdev)
++{
++	if (pdev->cpu_id == AMD_CPU_ID_PCO)
++		return -ENODEV;
++
++	return amd_pmc_send_cmd(pdev, 0, NULL, SMU_MSG_LOG_DUMP_DATA, 0);
++}
++
+ static void amd_pmc_s2idle_restore(void)
+ {
+ 	struct amd_pmc_dev *pdev = &pmc;
+@@ -824,7 +843,7 @@ static void amd_pmc_s2idle_restore(void)
+ 		dev_err(pdev->dev, "resume failed: %d\n", rc);
+ 
+ 	/* Let SMU know that we are looking for stats */
+-	amd_pmc_send_cmd(pdev, 0, NULL, SMU_MSG_LOG_DUMP_DATA, 0);
++	amd_pmc_dump_data(pdev);
+ 
+ 	rc = amd_pmc_write_stb(pdev, AMD_PMC_STB_S2IDLE_RESTORE);
+ 	if (rc)
+@@ -902,17 +921,9 @@ static int amd_pmc_write_stb(struct amd_pmc_dev *dev, u32 data)
+ {
+ 	int err;
+ 
+-	err = pci_write_config_dword(dev->rdev, AMD_PMC_STB_INDEX_ADDRESS, AMD_PMC_STB_PMI_0);
++	err = amd_smn_write(0, AMD_PMC_STB_PMI_0, data);
+ 	if (err) {
+-		dev_err(dev->dev, "failed to write addr in stb: 0x%X\n",
+-			AMD_PMC_STB_INDEX_ADDRESS);
+-		return pcibios_err_to_errno(err);
+-	}
+-
+-	err = pci_write_config_dword(dev->rdev, AMD_PMC_STB_INDEX_DATA, data);
+-	if (err) {
+-		dev_err(dev->dev, "failed to write data in stb: 0x%X\n",
+-			AMD_PMC_STB_INDEX_DATA);
++		dev_err(dev->dev, "failed to write data in stb: 0x%X\n", AMD_PMC_STB_PMI_0);
+ 		return pcibios_err_to_errno(err);
+ 	}
+ 
+@@ -923,18 +934,10 @@ static int amd_pmc_read_stb(struct amd_pmc_dev *dev, u32 *buf)
+ {
+ 	int i, err;
+ 
+-	err = pci_write_config_dword(dev->rdev, AMD_PMC_STB_INDEX_ADDRESS, AMD_PMC_STB_PMI_0);
+-	if (err) {
+-		dev_err(dev->dev, "error writing addr to stb: 0x%X\n",
+-			AMD_PMC_STB_INDEX_ADDRESS);
+-		return pcibios_err_to_errno(err);
+-	}
+-
+ 	for (i = 0; i < FIFO_SIZE; i++) {
+-		err = pci_read_config_dword(dev->rdev, AMD_PMC_STB_INDEX_DATA, buf++);
++		err = amd_smn_read(0, AMD_PMC_STB_PMI_0, buf++);
+ 		if (err) {
+-			dev_err(dev->dev, "error reading data from stb: 0x%X\n",
+-				AMD_PMC_STB_INDEX_DATA);
++			dev_err(dev->dev, "error reading data from stb: 0x%X\n", AMD_PMC_STB_PMI_0);
+ 			return pcibios_err_to_errno(err);
+ 		}
+ 	}
+@@ -961,30 +964,18 @@ static int amd_pmc_probe(struct platform_device *pdev)
+ 
+ 	dev->cpu_id = rdev->device;
+ 	dev->rdev = rdev;
+-	err = pci_write_config_dword(rdev, AMD_PMC_SMU_INDEX_ADDRESS, AMD_PMC_BASE_ADDR_LO);
+-	if (err) {
+-		dev_err(dev->dev, "error writing to 0x%x\n", AMD_PMC_SMU_INDEX_ADDRESS);
+-		err = pcibios_err_to_errno(err);
+-		goto err_pci_dev_put;
+-	}
+-
+-	err = pci_read_config_dword(rdev, AMD_PMC_SMU_INDEX_DATA, &val);
++	err = amd_smn_read(0, AMD_PMC_BASE_ADDR_LO, &val);
+ 	if (err) {
++		dev_err(dev->dev, "error reading 0x%x\n", AMD_PMC_BASE_ADDR_LO);
+ 		err = pcibios_err_to_errno(err);
+ 		goto err_pci_dev_put;
+ 	}
+ 
+ 	base_addr_lo = val & AMD_PMC_BASE_ADDR_HI_MASK;
+ 
+-	err = pci_write_config_dword(rdev, AMD_PMC_SMU_INDEX_ADDRESS, AMD_PMC_BASE_ADDR_HI);
+-	if (err) {
+-		dev_err(dev->dev, "error writing to 0x%x\n", AMD_PMC_SMU_INDEX_ADDRESS);
+-		err = pcibios_err_to_errno(err);
+-		goto err_pci_dev_put;
+-	}
+-
+-	err = pci_read_config_dword(rdev, AMD_PMC_SMU_INDEX_DATA, &val);
++	err = amd_smn_read(0, AMD_PMC_BASE_ADDR_HI, &val);
+ 	if (err) {
++		dev_err(dev->dev, "error reading 0x%x\n", AMD_PMC_BASE_ADDR_HI);
+ 		err = pcibios_err_to_errno(err);
+ 		goto err_pci_dev_put;
+ 	}
+diff --git a/drivers/platform/x86/amd/pmf/Kconfig b/drivers/platform/x86/amd/pmf/Kconfig
+index 6d89528c31779..d87986adf91e1 100644
+--- a/drivers/platform/x86/amd/pmf/Kconfig
++++ b/drivers/platform/x86/amd/pmf/Kconfig
+@@ -7,6 +7,7 @@ config AMD_PMF
+ 	tristate "AMD Platform Management Framework"
+ 	depends on ACPI && PCI
+ 	depends on POWER_SUPPLY
++	depends on AMD_NB
+ 	select ACPI_PLATFORM_PROFILE
+ 	help
+ 	  This driver provides support for the AMD Platform Management Framework.
+diff --git a/drivers/platform/x86/amd/pmf/core.c b/drivers/platform/x86/amd/pmf/core.c
+index da23639071d79..0acc0b6221290 100644
+--- a/drivers/platform/x86/amd/pmf/core.c
++++ b/drivers/platform/x86/amd/pmf/core.c
+@@ -8,6 +8,7 @@
+  * Author: Shyam Sundar S K <Shyam-sundar.S-k@amd.com>
+  */
+ 
++#include <asm/amd_nb.h>
+ #include <linux/debugfs.h>
+ #include <linux/iopoll.h>
+ #include <linux/module.h>
+@@ -22,8 +23,6 @@
+ #define AMD_PMF_REGISTER_ARGUMENT	0xA58
+ 
+ /* Base address of SMU for mapping physical address to virtual address */
+-#define AMD_PMF_SMU_INDEX_ADDRESS	0xB8
+-#define AMD_PMF_SMU_INDEX_DATA		0xBC
+ #define AMD_PMF_MAPPING_SIZE		0x01000
+ #define AMD_PMF_BASE_ADDR_OFFSET	0x10000
+ #define AMD_PMF_BASE_ADDR_LO		0x13B102E8
+@@ -348,30 +347,19 @@ static int amd_pmf_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	dev->cpu_id = rdev->device;
+-	err = pci_write_config_dword(rdev, AMD_PMF_SMU_INDEX_ADDRESS, AMD_PMF_BASE_ADDR_LO);
+-	if (err) {
+-		dev_err(dev->dev, "error writing to 0x%x\n", AMD_PMF_SMU_INDEX_ADDRESS);
+-		pci_dev_put(rdev);
+-		return pcibios_err_to_errno(err);
+-	}
+ 
+-	err = pci_read_config_dword(rdev, AMD_PMF_SMU_INDEX_DATA, &val);
++	err = amd_smn_read(0, AMD_PMF_BASE_ADDR_LO, &val);
+ 	if (err) {
++		dev_err(dev->dev, "error in reading from 0x%x\n", AMD_PMF_BASE_ADDR_LO);
+ 		pci_dev_put(rdev);
+ 		return pcibios_err_to_errno(err);
+ 	}
+ 
+ 	base_addr_lo = val & AMD_PMF_BASE_ADDR_HI_MASK;
+ 
+-	err = pci_write_config_dword(rdev, AMD_PMF_SMU_INDEX_ADDRESS, AMD_PMF_BASE_ADDR_HI);
+-	if (err) {
+-		dev_err(dev->dev, "error writing to 0x%x\n", AMD_PMF_SMU_INDEX_ADDRESS);
+-		pci_dev_put(rdev);
+-		return pcibios_err_to_errno(err);
+-	}
+-
+-	err = pci_read_config_dword(rdev, AMD_PMF_SMU_INDEX_DATA, &val);
++	err = amd_smn_read(0, AMD_PMF_BASE_ADDR_HI, &val);
+ 	if (err) {
++		dev_err(dev->dev, "error in reading from 0x%x\n", AMD_PMF_BASE_ADDR_HI);
+ 		pci_dev_put(rdev);
+ 		return pcibios_err_to_errno(err);
+ 	}
+diff --git a/drivers/power/supply/generic-adc-battery.c b/drivers/power/supply/generic-adc-battery.c
+index 66039c665dd1e..0af536f4932f1 100644
+--- a/drivers/power/supply/generic-adc-battery.c
++++ b/drivers/power/supply/generic-adc-battery.c
+@@ -135,6 +135,9 @@ static int read_channel(struct gab *adc_bat, enum power_supply_property psp,
+ 			result);
+ 	if (ret < 0)
+ 		pr_err("read channel error\n");
++	else
++		*result *= 1000;
++
+ 	return ret;
+ }
+ 
+diff --git a/drivers/power/supply/rk817_charger.c b/drivers/power/supply/rk817_charger.c
+index 36f807b5ec442..f1b431aa0e4f2 100644
+--- a/drivers/power/supply/rk817_charger.c
++++ b/drivers/power/supply/rk817_charger.c
+@@ -335,6 +335,20 @@ static int rk817_bat_calib_cap(struct rk817_charger *charger)
+ 			charger->fcc_mah * 1000);
+ 	}
+ 
++	/*
++	 * Set the SOC to 0 if we are below the minimum system voltage.
++	 */
++	if (volt_avg <= charger->bat_voltage_min_design_uv) {
++		charger->soc = 0;
++		charge_now_adc = CHARGE_TO_ADC(0, charger->res_div);
++		put_unaligned_be32(charge_now_adc, bulk_reg);
++		regmap_bulk_write(rk808->regmap,
++				  RK817_GAS_GAUGE_Q_INIT_H3, bulk_reg, 4);
++		dev_warn(charger->dev,
++			 "Battery voltage %d below minimum voltage %d\n",
++			 volt_avg, charger->bat_voltage_min_design_uv);
++		}
++
+ 	rk817_record_battery_nvram_values(charger);
+ 
+ 	return 0;
+@@ -710,9 +724,10 @@ static int rk817_read_battery_nvram_values(struct rk817_charger *charger)
+ 
+ 	/*
+ 	 * Read the nvram for state of charge. Sanity check for values greater
+-	 * than 100 (10000). If the value is off it should get corrected
+-	 * automatically when the voltage drops to the min (soc is 0) or when
+-	 * the battery is full (soc is 100).
++	 * than 100 (10000) or less than 0, because other things (BSP kernels,
++	 * U-Boot, or even i2cset) can write to this register. If the value is
++	 * off it should get corrected automatically when the voltage drops to
++	 * the min (soc is 0) or when the battery is full (soc is 100).
+ 	 */
+ 	ret = regmap_bulk_read(charger->rk808->regmap,
+ 			       RK817_GAS_GAUGE_BAT_R1, bulk_reg, 3);
+@@ -721,6 +736,8 @@ static int rk817_read_battery_nvram_values(struct rk817_charger *charger)
+ 	charger->soc = get_unaligned_le24(bulk_reg);
+ 	if (charger->soc > 10000)
+ 		charger->soc = 10000;
++	if (charger->soc < 0)
++		charger->soc = 0;
+ 
+ 	return 0;
+ }
+@@ -731,8 +748,8 @@ rk817_read_or_set_full_charge_on_boot(struct rk817_charger *charger,
+ {
+ 	struct rk808 *rk808 = charger->rk808;
+ 	u8 bulk_reg[4];
+-	u32 boot_voltage, boot_charge_mah, tmp;
+-	int ret, reg, off_time;
++	u32 boot_voltage, boot_charge_mah;
++	int ret, reg, off_time, tmp;
+ 	bool first_boot;
+ 
+ 	/*
+@@ -785,10 +802,12 @@ rk817_read_or_set_full_charge_on_boot(struct rk817_charger *charger,
+ 		regmap_bulk_read(rk808->regmap, RK817_GAS_GAUGE_Q_PRES_H3,
+ 				 bulk_reg, 4);
+ 		tmp = get_unaligned_be32(bulk_reg);
++		if (tmp < 0)
++			tmp = 0;
+ 		boot_charge_mah = ADC_TO_CHARGE_UAH(tmp,
+ 						    charger->res_div) / 1000;
+ 		/*
+-		 * Check if the columb counter has been off for more than 300
++		 * Check if the columb counter has been off for more than 30
+ 		 * minutes as it tends to drift downward. If so, re-init soc
+ 		 * with the boot voltage instead. Note the unit values for the
+ 		 * OFF_CNT register appear to be in decaminutes and stops
+@@ -799,7 +818,7 @@ rk817_read_or_set_full_charge_on_boot(struct rk817_charger *charger,
+ 		 * than 0 on a reboot anyway.
+ 		 */
+ 		regmap_read(rk808->regmap, RK817_GAS_GAUGE_OFF_CNT, &off_time);
+-		if (off_time >= 30) {
++		if (off_time >= 3) {
+ 			regmap_bulk_read(rk808->regmap,
+ 					 RK817_GAS_GAUGE_PWRON_VOL_H,
+ 					 bulk_reg, 2);
+diff --git a/drivers/pwm/pwm-meson.c b/drivers/pwm/pwm-meson.c
+index 5cd7b90872c62..5732300eb0046 100644
+--- a/drivers/pwm/pwm-meson.c
++++ b/drivers/pwm/pwm-meson.c
+@@ -418,7 +418,7 @@ static const struct meson_pwm_data pwm_axg_ee_data = {
+ };
+ 
+ static const char * const pwm_axg_ao_parent_names[] = {
+-	"aoclk81", "xtal", "fclk_div4", "fclk_div5"
++	"xtal", "axg_ao_clk81", "fclk_div4", "fclk_div5"
+ };
+ 
+ static const struct meson_pwm_data pwm_axg_ao_data = {
+@@ -427,7 +427,7 @@ static const struct meson_pwm_data pwm_axg_ao_data = {
+ };
+ 
+ static const char * const pwm_g12a_ao_ab_parent_names[] = {
+-	"xtal", "aoclk81", "fclk_div4", "fclk_div5"
++	"xtal", "g12a_ao_clk81", "fclk_div4", "fclk_div5"
+ };
+ 
+ static const struct meson_pwm_data pwm_g12a_ao_ab_data = {
+@@ -436,7 +436,7 @@ static const struct meson_pwm_data pwm_g12a_ao_ab_data = {
+ };
+ 
+ static const char * const pwm_g12a_ao_cd_parent_names[] = {
+-	"xtal", "aoclk81",
++	"xtal", "g12a_ao_clk81",
+ };
+ 
+ static const struct meson_pwm_data pwm_g12a_ao_cd_data = {
+diff --git a/drivers/pwm/pwm-mtk-disp.c b/drivers/pwm/pwm-mtk-disp.c
+index 692a06121b286..fe9593f968eeb 100644
+--- a/drivers/pwm/pwm-mtk-disp.c
++++ b/drivers/pwm/pwm-mtk-disp.c
+@@ -138,6 +138,19 @@ static int mtk_disp_pwm_apply(struct pwm_chip *chip, struct pwm_device *pwm,
+ 	high_width = mul_u64_u64_div_u64(state->duty_cycle, rate, div);
+ 	value = period | (high_width << PWM_HIGH_WIDTH_SHIFT);
+ 
++	if (mdp->data->bls_debug && !mdp->data->has_commit) {
++		/*
++		 * For MT2701, disable double buffer before writing register
++		 * and select manual mode and use PWM_PERIOD/PWM_HIGH_WIDTH.
++		 */
++		mtk_disp_pwm_update_bits(mdp, mdp->data->bls_debug,
++					 mdp->data->bls_debug_mask,
++					 mdp->data->bls_debug_mask);
++		mtk_disp_pwm_update_bits(mdp, mdp->data->con0,
++					 mdp->data->con0_sel,
++					 mdp->data->con0_sel);
++	}
++
+ 	mtk_disp_pwm_update_bits(mdp, mdp->data->con0,
+ 				 PWM_CLKDIV_MASK,
+ 				 clk_div << PWM_CLKDIV_SHIFT);
+@@ -152,17 +165,6 @@ static int mtk_disp_pwm_apply(struct pwm_chip *chip, struct pwm_device *pwm,
+ 		mtk_disp_pwm_update_bits(mdp, mdp->data->commit,
+ 					 mdp->data->commit_mask,
+ 					 0x0);
+-	} else {
+-		/*
+-		 * For MT2701, disable double buffer before writing register
+-		 * and select manual mode and use PWM_PERIOD/PWM_HIGH_WIDTH.
+-		 */
+-		mtk_disp_pwm_update_bits(mdp, mdp->data->bls_debug,
+-					 mdp->data->bls_debug_mask,
+-					 mdp->data->bls_debug_mask);
+-		mtk_disp_pwm_update_bits(mdp, mdp->data->con0,
+-					 mdp->data->con0_sel,
+-					 mdp->data->con0_sel);
+ 	}
+ 
+ 	mtk_disp_pwm_update_bits(mdp, DISP_PWM_EN, mdp->data->enable_mask,
+@@ -194,6 +196,16 @@ static int mtk_disp_pwm_get_state(struct pwm_chip *chip,
+ 		return err;
+ 	}
+ 
++	/*
++	 * Apply DISP_PWM_DEBUG settings to choose whether to enable or disable
++	 * registers double buffer and manual commit to working register before
++	 * performing any read/write operation
++	 */
++	if (mdp->data->bls_debug)
++		mtk_disp_pwm_update_bits(mdp, mdp->data->bls_debug,
++					 mdp->data->bls_debug_mask,
++					 mdp->data->bls_debug_mask);
++
+ 	rate = clk_get_rate(mdp->clk_main);
+ 	con0 = readl(mdp->base + mdp->data->con0);
+ 	con1 = readl(mdp->base + mdp->data->con1);
+diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
+index 4fcd36055b025..08726bc0da9d7 100644
+--- a/drivers/regulator/core.c
++++ b/drivers/regulator/core.c
+@@ -207,6 +207,78 @@ static void regulator_unlock(struct regulator_dev *rdev)
+ 	mutex_unlock(&regulator_nesting_mutex);
+ }
+ 
++/**
++ * regulator_lock_two - lock two regulators
++ * @rdev1:		first regulator
++ * @rdev2:		second regulator
++ * @ww_ctx:		w/w mutex acquire context
++ *
++ * Locks both rdevs using the regulator_ww_class.
++ */
++static void regulator_lock_two(struct regulator_dev *rdev1,
++			       struct regulator_dev *rdev2,
++			       struct ww_acquire_ctx *ww_ctx)
++{
++	struct regulator_dev *tmp;
++	int ret;
++
++	ww_acquire_init(ww_ctx, &regulator_ww_class);
++
++	/* Try to just grab both of them */
++	ret = regulator_lock_nested(rdev1, ww_ctx);
++	WARN_ON(ret);
++	ret = regulator_lock_nested(rdev2, ww_ctx);
++	if (ret != -EDEADLOCK) {
++		WARN_ON(ret);
++		goto exit;
++	}
++
++	while (true) {
++		/*
++		 * Start of loop: rdev1 was locked and rdev2 was contended.
++		 * Need to unlock rdev1, slowly lock rdev2, then try rdev1
++		 * again.
++		 */
++		regulator_unlock(rdev1);
++
++		ww_mutex_lock_slow(&rdev2->mutex, ww_ctx);
++		rdev2->ref_cnt++;
++		rdev2->mutex_owner = current;
++		ret = regulator_lock_nested(rdev1, ww_ctx);
++
++		if (ret == -EDEADLOCK) {
++			/* More contention; swap which needs to be slow */
++			tmp = rdev1;
++			rdev1 = rdev2;
++			rdev2 = tmp;
++		} else {
++			WARN_ON(ret);
++			break;
++		}
++	}
++
++exit:
++	ww_acquire_done(ww_ctx);
++}
++
++/**
++ * regulator_unlock_two - unlock two regulators
++ * @rdev1:		first regulator
++ * @rdev2:		second regulator
++ * @ww_ctx:		w/w mutex acquire context
++ *
++ * The inverse of regulator_lock_two().
++ */
++
++static void regulator_unlock_two(struct regulator_dev *rdev1,
++				 struct regulator_dev *rdev2,
++				 struct ww_acquire_ctx *ww_ctx)
++{
++	regulator_unlock(rdev2);
++	regulator_unlock(rdev1);
++	ww_acquire_fini(ww_ctx);
++}
++
+ static bool regulator_supply_is_couple(struct regulator_dev *rdev)
+ {
+ 	struct regulator_dev *c_rdev;
+@@ -334,6 +406,7 @@ static void regulator_lock_dependent(struct regulator_dev *rdev,
+ 			ww_mutex_lock_slow(&new_contended_rdev->mutex, ww_ctx);
+ 			old_contended_rdev = new_contended_rdev;
+ 			old_contended_rdev->ref_cnt++;
++			old_contended_rdev->mutex_owner = current;
+ 		}
+ 
+ 		err = regulator_lock_recursive(rdev,
+@@ -1583,9 +1656,6 @@ static int set_machine_constraints(struct regulator_dev *rdev)
+ 			rdev->constraints->always_on = true;
+ 	}
+ 
+-	if (rdev->desc->off_on_delay)
+-		rdev->last_off = ktime_get_boottime();
+-
+ 	/* If the constraints say the regulator should be on at this point
+ 	 * and we have control then make sure it is enabled.
+ 	 */
+@@ -1619,6 +1689,8 @@ static int set_machine_constraints(struct regulator_dev *rdev)
+ 
+ 		if (rdev->constraints->always_on)
+ 			rdev->use_count++;
++	} else if (rdev->desc->off_on_delay) {
++		rdev->last_off = ktime_get();
+ 	}
+ 
+ 	print_constraints(rdev);
+@@ -1627,8 +1699,8 @@ static int set_machine_constraints(struct regulator_dev *rdev)
+ 
+ /**
+  * set_supply - set regulator supply regulator
+- * @rdev: regulator name
+- * @supply_rdev: supply regulator name
++ * @rdev: regulator (locked)
++ * @supply_rdev: supply regulator (locked))
+  *
+  * Called by platform initialisation code to set the supply regulator for this
+  * regulator. This ensures that a regulators supply will also be enabled by the
+@@ -1800,6 +1872,8 @@ static struct regulator *create_regulator(struct regulator_dev *rdev,
+ 	struct regulator *regulator;
+ 	int err = 0;
+ 
++	lockdep_assert_held_once(&rdev->mutex.base);
++
+ 	if (dev) {
+ 		char buf[REG_STR_SIZE];
+ 		int size;
+@@ -1827,9 +1901,7 @@ static struct regulator *create_regulator(struct regulator_dev *rdev,
+ 	regulator->rdev = rdev;
+ 	regulator->supply_name = supply_name;
+ 
+-	regulator_lock(rdev);
+ 	list_add(&regulator->list, &rdev->consumer_list);
+-	regulator_unlock(rdev);
+ 
+ 	if (dev) {
+ 		regulator->dev = dev;
+@@ -1995,6 +2067,7 @@ static int regulator_resolve_supply(struct regulator_dev *rdev)
+ {
+ 	struct regulator_dev *r;
+ 	struct device *dev = rdev->dev.parent;
++	struct ww_acquire_ctx ww_ctx;
+ 	int ret = 0;
+ 
+ 	/* No supply to resolve? */
+@@ -2061,23 +2134,23 @@ static int regulator_resolve_supply(struct regulator_dev *rdev)
+ 	 * between rdev->supply null check and setting rdev->supply in
+ 	 * set_supply() from concurrent tasks.
+ 	 */
+-	regulator_lock(rdev);
++	regulator_lock_two(rdev, r, &ww_ctx);
+ 
+ 	/* Supply just resolved by a concurrent task? */
+ 	if (rdev->supply) {
+-		regulator_unlock(rdev);
++		regulator_unlock_two(rdev, r, &ww_ctx);
+ 		put_device(&r->dev);
+ 		goto out;
+ 	}
+ 
+ 	ret = set_supply(rdev, r);
+ 	if (ret < 0) {
+-		regulator_unlock(rdev);
++		regulator_unlock_two(rdev, r, &ww_ctx);
+ 		put_device(&r->dev);
+ 		goto out;
+ 	}
+ 
+-	regulator_unlock(rdev);
++	regulator_unlock_two(rdev, r, &ww_ctx);
+ 
+ 	/*
+ 	 * In set_machine_constraints() we may have turned this regulator on
+@@ -2190,7 +2263,9 @@ struct regulator *_regulator_get(struct device *dev, const char *id,
+ 		return regulator;
+ 	}
+ 
++	regulator_lock(rdev);
+ 	regulator = create_regulator(rdev, dev, id);
++	regulator_unlock(rdev);
+ 	if (regulator == NULL) {
+ 		regulator = ERR_PTR(-ENOMEM);
+ 		module_put(rdev->owner);
+@@ -2668,7 +2743,7 @@ static int _regulator_do_enable(struct regulator_dev *rdev)
+ 
+ 	trace_regulator_enable(rdev_get_name(rdev));
+ 
+-	if (rdev->desc->off_on_delay && rdev->last_off) {
++	if (rdev->desc->off_on_delay) {
+ 		/* if needed, keep a distance of off_on_delay from last time
+ 		 * this regulator was disabled.
+ 		 */
+@@ -6049,6 +6124,7 @@ static void regulator_summary_lock(struct ww_acquire_ctx *ww_ctx)
+ 			ww_mutex_lock_slow(&new_contended_rdev->mutex, ww_ctx);
+ 			old_contended_rdev = new_contended_rdev;
+ 			old_contended_rdev->ref_cnt++;
++			old_contended_rdev->mutex_owner = current;
+ 		}
+ 
+ 		err = regulator_summary_lock_all(ww_ctx,
+diff --git a/drivers/regulator/stm32-pwr.c b/drivers/regulator/stm32-pwr.c
+index 2a42acb7c24e9..e5dd4db6403b2 100644
+--- a/drivers/regulator/stm32-pwr.c
++++ b/drivers/regulator/stm32-pwr.c
+@@ -129,17 +129,16 @@ static const struct regulator_desc stm32_pwr_desc[] = {
+ 
+ static int stm32_pwr_regulator_probe(struct platform_device *pdev)
+ {
+-	struct device_node *np = pdev->dev.of_node;
+ 	struct stm32_pwr_reg *priv;
+ 	void __iomem *base;
+ 	struct regulator_dev *rdev;
+ 	struct regulator_config config = { };
+ 	int i, ret = 0;
+ 
+-	base = of_iomap(np, 0);
+-	if (!base) {
++	base = devm_platform_ioremap_resource(pdev, 0);
++	if (IS_ERR(base)) {
+ 		dev_err(&pdev->dev, "Unable to map IO memory\n");
+-		return -ENOMEM;
++		return PTR_ERR(base);
+ 	}
+ 
+ 	config.dev = &pdev->dev;
+diff --git a/drivers/remoteproc/xlnx_r5_remoteproc.c b/drivers/remoteproc/xlnx_r5_remoteproc.c
+index 2db57d3941555..5dd007622603d 100644
+--- a/drivers/remoteproc/xlnx_r5_remoteproc.c
++++ b/drivers/remoteproc/xlnx_r5_remoteproc.c
+@@ -61,8 +61,6 @@ static const struct mem_bank_data zynqmp_tcm_banks[] = {
+  * @np: device node of RPU instance
+  * @tcm_bank_count: number TCM banks accessible to this RPU
+  * @tcm_banks: array of each TCM bank data
+- * @rmem_count: Number of reserved mem regions
+- * @rmem: reserved memory region nodes from device tree
+  * @rproc: rproc handle
+  * @pm_domain_id: RPU CPU power domain id
+  */
+@@ -71,8 +69,6 @@ struct zynqmp_r5_core {
+ 	struct device_node *np;
+ 	int tcm_bank_count;
+ 	struct mem_bank_data **tcm_banks;
+-	int rmem_count;
+-	struct reserved_mem **rmem;
+ 	struct rproc *rproc;
+ 	u32 pm_domain_id;
+ };
+@@ -239,21 +235,29 @@ static int add_mem_regions_carveout(struct rproc *rproc)
+ {
+ 	struct rproc_mem_entry *rproc_mem;
+ 	struct zynqmp_r5_core *r5_core;
++	struct of_phandle_iterator it;
+ 	struct reserved_mem *rmem;
+-	int i, num_mem_regions;
++	int i = 0;
+ 
+ 	r5_core = (struct zynqmp_r5_core *)rproc->priv;
+-	num_mem_regions = r5_core->rmem_count;
+ 
+-	for (i = 0; i < num_mem_regions; i++) {
+-		rmem = r5_core->rmem[i];
++	/* Register associated reserved memory regions */
++	of_phandle_iterator_init(&it, r5_core->np, "memory-region", NULL, 0);
+ 
+-		if (!strncmp(rmem->name, "vdev0buffer", strlen("vdev0buffer"))) {
++	while (of_phandle_iterator_next(&it) == 0) {
++		rmem = of_reserved_mem_lookup(it.node);
++		if (!rmem) {
++			of_node_put(it.node);
++			dev_err(&rproc->dev, "unable to acquire memory-region\n");
++			return -EINVAL;
++		}
++
++		if (!strcmp(it.node->name, "vdev0buffer")) {
+ 			/* Init reserved memory for vdev buffer */
+ 			rproc_mem = rproc_of_resm_mem_entry_init(&rproc->dev, i,
+ 								 rmem->size,
+ 								 rmem->base,
+-								 rmem->name);
++								 it.node->name);
+ 		} else {
+ 			/* Register associated reserved memory regions */
+ 			rproc_mem = rproc_mem_entry_init(&rproc->dev, NULL,
+@@ -261,16 +265,19 @@ static int add_mem_regions_carveout(struct rproc *rproc)
+ 							 rmem->size, rmem->base,
+ 							 zynqmp_r5_mem_region_map,
+ 							 zynqmp_r5_mem_region_unmap,
+-							 rmem->name);
++							 it.node->name);
+ 		}
+ 
+-		if (!rproc_mem)
++		if (!rproc_mem) {
++			of_node_put(it.node);
+ 			return -ENOMEM;
++		}
+ 
+ 		rproc_add_carveout(rproc, rproc_mem);
+ 
+ 		dev_dbg(&rproc->dev, "reserved mem carveout %s addr=%llx, size=0x%llx",
+-			rmem->name, rmem->base, rmem->size);
++			it.node->name, rmem->base, rmem->size);
++		i++;
+ 	}
+ 
+ 	return 0;
+@@ -726,59 +733,6 @@ static int zynqmp_r5_get_tcm_node(struct zynqmp_r5_cluster *cluster)
+ 	return 0;
+ }
+ 
+-/**
+- * zynqmp_r5_get_mem_region_node()
+- * parse memory-region property and get reserved mem regions
+- *
+- * @r5_core: pointer to zynqmp_r5_core type object
+- *
+- * Return: 0 for success and error code for failure.
+- */
+-static int zynqmp_r5_get_mem_region_node(struct zynqmp_r5_core *r5_core)
+-{
+-	struct device_node *np, *rmem_np;
+-	struct reserved_mem **rmem;
+-	int res_mem_count, i;
+-	struct device *dev;
+-
+-	dev = r5_core->dev;
+-	np = r5_core->np;
+-
+-	res_mem_count = of_property_count_elems_of_size(np, "memory-region",
+-							sizeof(phandle));
+-	if (res_mem_count <= 0) {
+-		dev_warn(dev, "failed to get memory-region property %d\n",
+-			 res_mem_count);
+-		return 0;
+-	}
+-
+-	rmem = devm_kcalloc(dev, res_mem_count,
+-			    sizeof(struct reserved_mem *), GFP_KERNEL);
+-	if (!rmem)
+-		return -ENOMEM;
+-
+-	for (i = 0; i < res_mem_count; i++) {
+-		rmem_np = of_parse_phandle(np, "memory-region", i);
+-		if (!rmem_np)
+-			goto release_rmem;
+-
+-		rmem[i] = of_reserved_mem_lookup(rmem_np);
+-		if (!rmem[i]) {
+-			of_node_put(rmem_np);
+-			goto release_rmem;
+-		}
+-
+-		of_node_put(rmem_np);
+-	}
+-
+-	r5_core->rmem_count = res_mem_count;
+-	r5_core->rmem = rmem;
+-	return 0;
+-
+-release_rmem:
+-	return -EINVAL;
+-}
+-
+ /*
+  * zynqmp_r5_core_init()
+  * Create and initialize zynqmp_r5_core type object
+@@ -806,10 +760,6 @@ static int zynqmp_r5_core_init(struct zynqmp_r5_cluster *cluster,
+ 	for (i = 0; i < cluster->core_count; i++) {
+ 		r5_core = cluster->r5_cores[i];
+ 
+-		ret = zynqmp_r5_get_mem_region_node(r5_core);
+-		if (ret)
+-			dev_warn(dev, "memory-region prop failed %d\n", ret);
+-
+ 		/* Initialize r5 cores with power-domains parsed from dts */
+ 		ret = of_property_read_u32_index(r5_core->np, "power-domains",
+ 						 1, &r5_core->pm_domain_id);
+diff --git a/drivers/rpmsg/qcom_glink_native.c b/drivers/rpmsg/qcom_glink_native.c
+index 01d2805fe30f5..62634d020d139 100644
+--- a/drivers/rpmsg/qcom_glink_native.c
++++ b/drivers/rpmsg/qcom_glink_native.c
+@@ -1356,8 +1356,9 @@ static int __qcom_glink_send(struct glink_channel *channel,
+ 	ret = qcom_glink_tx(glink, &req, sizeof(req), data, chunk_size, wait);
+ 
+ 	/* Mark intent available if we failed */
+-	if (ret && intent) {
+-		intent->in_use = false;
++	if (ret) {
++		if (intent)
++			intent->in_use = false;
+ 		return ret;
+ 	}
+ 
+@@ -1378,8 +1379,9 @@ static int __qcom_glink_send(struct glink_channel *channel,
+ 				    chunk_size, wait);
+ 
+ 		/* Mark intent available if we failed */
+-		if (ret && intent) {
+-			intent->in_use = false;
++		if (ret) {
++			if (intent)
++				intent->in_use = false;
+ 			break;
+ 		}
+ 	}
+diff --git a/drivers/rtc/rtc-jz4740.c b/drivers/rtc/rtc-jz4740.c
+index 59d279e3e6f5b..36453b008139b 100644
+--- a/drivers/rtc/rtc-jz4740.c
++++ b/drivers/rtc/rtc-jz4740.c
+@@ -414,7 +414,8 @@ static int jz4740_rtc_probe(struct platform_device *pdev)
+ 			return dev_err_probe(dev, ret,
+ 					     "Unable to register clk32k clock\n");
+ 
+-		ret = of_clk_add_hw_provider(np, of_clk_hw_simple_get, &rtc->clk32k);
++		ret = devm_of_clk_add_hw_provider(dev, of_clk_hw_simple_get,
++						  &rtc->clk32k);
+ 		if (ret)
+ 			return dev_err_probe(dev, ret,
+ 					     "Unable to register clk32k clock provider\n");
+diff --git a/drivers/rtc/rtc-meson-vrtc.c b/drivers/rtc/rtc-meson-vrtc.c
+index 1463c86215615..648fa362ec447 100644
+--- a/drivers/rtc/rtc-meson-vrtc.c
++++ b/drivers/rtc/rtc-meson-vrtc.c
+@@ -23,7 +23,7 @@ static int meson_vrtc_read_time(struct device *dev, struct rtc_time *tm)
+ 	struct timespec64 time;
+ 
+ 	dev_dbg(dev, "%s\n", __func__);
+-	ktime_get_raw_ts64(&time);
++	ktime_get_real_ts64(&time);
+ 	rtc_time64_to_tm(time.tv_sec, tm);
+ 
+ 	return 0;
+@@ -96,7 +96,7 @@ static int __maybe_unused meson_vrtc_suspend(struct device *dev)
+ 		long alarm_secs;
+ 		struct timespec64 time;
+ 
+-		ktime_get_raw_ts64(&time);
++		ktime_get_real_ts64(&time);
+ 		local_time = time.tv_sec;
+ 
+ 		dev_dbg(dev, "alarm_time = %lus, local_time=%lus\n",
+diff --git a/drivers/rtc/rtc-omap.c b/drivers/rtc/rtc-omap.c
+index 4d4f3b1a73093..73634a3ccfd3b 100644
+--- a/drivers/rtc/rtc-omap.c
++++ b/drivers/rtc/rtc-omap.c
+@@ -25,6 +25,7 @@
+ #include <linux/platform_device.h>
+ #include <linux/pm_runtime.h>
+ #include <linux/rtc.h>
++#include <linux/rtc/rtc-omap.h>
+ 
+ /*
+  * The OMAP RTC is a year/month/day/hours/minutes/seconds BCD clock
+diff --git a/drivers/rtc/rtc-ti-k3.c b/drivers/rtc/rtc-ti-k3.c
+index ba23163cc0428..0d90fe9233550 100644
+--- a/drivers/rtc/rtc-ti-k3.c
++++ b/drivers/rtc/rtc-ti-k3.c
+@@ -632,7 +632,8 @@ static int __maybe_unused ti_k3_rtc_suspend(struct device *dev)
+ 	struct ti_k3_rtc *priv = dev_get_drvdata(dev);
+ 
+ 	if (device_may_wakeup(dev))
+-		enable_irq_wake(priv->irq);
++		return enable_irq_wake(priv->irq);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/s390/block/dasd.c b/drivers/s390/block/dasd.c
+index a9c2a8d76c453..8b280abb3e59f 100644
+--- a/drivers/s390/block/dasd.c
++++ b/drivers/s390/block/dasd.c
+@@ -2941,7 +2941,7 @@ static int _dasd_requeue_request(struct dasd_ccw_req *cqr)
+ 		return 0;
+ 	spin_lock_irq(&cqr->dq->lock);
+ 	req = (struct request *) cqr->callback_data;
+-	blk_mq_requeue_request(req, false);
++	blk_mq_requeue_request(req, true);
+ 	spin_unlock_irq(&cqr->dq->lock);
+ 
+ 	return 0;
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c
+index d643c5a49aa94..70c24377c6a19 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c
+@@ -1258,7 +1258,11 @@ static void slot_complete_v1_hw(struct hisi_hba *hisi_hba,
+ 
+ 		slot_err_v1_hw(hisi_hba, task, slot);
+ 		if (unlikely(slot->abort)) {
+-			sas_task_abort(task);
++			if (dev_is_sata(device) && task->ata_task.use_ncq)
++				sas_ata_device_link_abort(device, true);
++			else
++				sas_task_abort(task);
++
+ 			return;
+ 		}
+ 		goto out;
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
+index cded42f4ca445..02575d81afca2 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
+@@ -2404,7 +2404,11 @@ static void slot_complete_v2_hw(struct hisi_hba *hisi_hba,
+ 				 error_info[2], error_info[3]);
+ 
+ 		if (unlikely(slot->abort)) {
+-			sas_task_abort(task);
++			if (dev_is_sata(device) && task->ata_task.use_ncq)
++				sas_ata_device_link_abort(device, true);
++			else
++				sas_task_abort(task);
++
+ 			return;
+ 		}
+ 		goto out;
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+index a63279f55d096..9afc23e3a80fc 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+@@ -2320,7 +2320,11 @@ static void slot_complete_v3_hw(struct hisi_hba *hisi_hba,
+ 					error_info[0], error_info[1],
+ 					error_info[2], error_info[3]);
+ 			if (unlikely(slot->abort)) {
+-				sas_task_abort(task);
++				if (dev_is_sata(device) && task->ata_task.use_ncq)
++					sas_ata_device_link_abort(device, true);
++				else
++					sas_task_abort(task);
++
+ 				return;
+ 			}
+ 			goto out;
+diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c
+index 4f7485958c49a..ed75230b02090 100644
+--- a/drivers/scsi/lpfc/lpfc_init.c
++++ b/drivers/scsi/lpfc/lpfc_init.c
+@@ -12026,7 +12026,7 @@ lpfc_sli4_pci_mem_setup(struct lpfc_hba *phba)
+ 				goto out_iounmap_all;
+ 		} else {
+ 			error = -ENOMEM;
+-			goto out_iounmap_all;
++			goto out_iounmap_ctrl;
+ 		}
+ 	}
+ 
+@@ -12044,7 +12044,7 @@ lpfc_sli4_pci_mem_setup(struct lpfc_hba *phba)
+ 			dev_err(&pdev->dev,
+ 			   "ioremap failed for SLI4 HBA dpp registers.\n");
+ 			error = -ENOMEM;
+-			goto out_iounmap_ctrl;
++			goto out_iounmap_all;
+ 		}
+ 		phba->pci_bar4_memmap_p = phba->sli4_hba.dpp_regs_memmap_p;
+ 	}
+@@ -12069,9 +12069,11 @@ lpfc_sli4_pci_mem_setup(struct lpfc_hba *phba)
+ 	return 0;
+ 
+ out_iounmap_all:
+-	iounmap(phba->sli4_hba.drbl_regs_memmap_p);
++	if (phba->sli4_hba.drbl_regs_memmap_p)
++		iounmap(phba->sli4_hba.drbl_regs_memmap_p);
+ out_iounmap_ctrl:
+-	iounmap(phba->sli4_hba.ctrl_regs_memmap_p);
++	if (phba->sli4_hba.ctrl_regs_memmap_p)
++		iounmap(phba->sli4_hba.ctrl_regs_memmap_p);
+ out_iounmap_conf:
+ 	iounmap(phba->sli4_hba.conf_regs_memmap_p);
+ 
+diff --git a/drivers/scsi/megaraid.c b/drivers/scsi/megaraid.c
+index bf491af9f0d65..16e2cf848c6ef 100644
+--- a/drivers/scsi/megaraid.c
++++ b/drivers/scsi/megaraid.c
+@@ -1441,6 +1441,7 @@ mega_cmd_done(adapter_t *adapter, u8 completed[], int nstatus, int status)
+ 		 */
+ 		if (cmdid == CMDID_INT_CMDS) {
+ 			scb = &adapter->int_scb;
++			cmd = scb->cmd;
+ 
+ 			list_del_init(&scb->list);
+ 			scb->state = SCB_FREE;
+diff --git a/drivers/soc/bcm/brcmstb/biuctrl.c b/drivers/soc/bcm/brcmstb/biuctrl.c
+index e1d7b45432485..364ddbe365c24 100644
+--- a/drivers/soc/bcm/brcmstb/biuctrl.c
++++ b/drivers/soc/bcm/brcmstb/biuctrl.c
+@@ -288,6 +288,10 @@ static int __init setup_hifcpubiuctrl_regs(struct device_node *np)
+ 	if (BRCM_ID(family_id) == 0x7260 && BRCM_REV(family_id) == 0)
+ 		cpubiuctrl_regs = b53_cpubiuctrl_no_wb_regs;
+ out:
++	if (ret && cpubiuctrl_base) {
++		iounmap(cpubiuctrl_base);
++		cpubiuctrl_base = NULL;
++	}
+ 	return ret;
+ }
+ 
+diff --git a/drivers/soc/canaan/Kconfig b/drivers/soc/canaan/Kconfig
+index 2527cf5757ec9..43ced2bf84447 100644
+--- a/drivers/soc/canaan/Kconfig
++++ b/drivers/soc/canaan/Kconfig
+@@ -3,8 +3,9 @@
+ config SOC_K210_SYSCTL
+ 	bool "Canaan Kendryte K210 SoC system controller"
+ 	depends on RISCV && SOC_CANAAN && OF
++	depends on COMMON_CLK_K210
+ 	default SOC_CANAAN
+-        select PM
+-        select MFD_SYSCON
++	select PM
++	select MFD_SYSCON
+ 	help
+ 	  Canaan Kendryte K210 SoC system controller driver.
+diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
+index 0f8b2249f8894..f93544f6d7961 100644
+--- a/drivers/soc/qcom/rpmh-rsc.c
++++ b/drivers/soc/qcom/rpmh-rsc.c
+@@ -1073,7 +1073,7 @@ static int rpmh_rsc_probe(struct platform_device *pdev)
+ 	drv->ver.minor = rsc_id & (MINOR_VER_MASK << MINOR_VER_SHIFT);
+ 	drv->ver.minor >>= MINOR_VER_SHIFT;
+ 
+-	if (drv->ver.major == 3 && drv->ver.minor == 0)
++	if (drv->ver.major == 3 && drv->ver.minor >= 0)
+ 		drv->regs = rpmh_rsc_reg_offset_ver_3_0;
+ 	else
+ 		drv->regs = rpmh_rsc_reg_offset_ver_2_7;
+diff --git a/drivers/soc/renesas/renesas-soc.c b/drivers/soc/renesas/renesas-soc.c
+index 468ebce1ea88b..51191d1a6dd1d 100644
+--- a/drivers/soc/renesas/renesas-soc.c
++++ b/drivers/soc/renesas/renesas-soc.c
+@@ -471,8 +471,11 @@ static int __init renesas_soc_init(void)
+ 	}
+ 
+ 	soc_dev_attr = kzalloc(sizeof(*soc_dev_attr), GFP_KERNEL);
+-	if (!soc_dev_attr)
++	if (!soc_dev_attr) {
++		if (chipid)
++			iounmap(chipid);
+ 		return -ENOMEM;
++	}
+ 
+ 	np = of_find_node_by_path("/");
+ 	of_property_read_string(np, "model", &soc_dev_attr->machine);
+diff --git a/drivers/soc/ti/k3-ringacc.c b/drivers/soc/ti/k3-ringacc.c
+index e01e4d815230a..8f131368a7586 100644
+--- a/drivers/soc/ti/k3-ringacc.c
++++ b/drivers/soc/ti/k3-ringacc.c
+@@ -406,6 +406,11 @@ static int k3_dmaring_request_dual_ring(struct k3_ringacc *ringacc, int fwd_id,
+ 
+ 	mutex_lock(&ringacc->req_lock);
+ 
++	if (!try_module_get(ringacc->dev->driver->owner)) {
++		ret = -EINVAL;
++		goto err_module_get;
++	}
++
+ 	if (test_bit(fwd_id, ringacc->rings_inuse)) {
+ 		ret = -EBUSY;
+ 		goto error;
+@@ -421,6 +426,8 @@ static int k3_dmaring_request_dual_ring(struct k3_ringacc *ringacc, int fwd_id,
+ 	return 0;
+ 
+ error:
++	module_put(ringacc->dev->driver->owner);
++err_module_get:
+ 	mutex_unlock(&ringacc->req_lock);
+ 	return ret;
+ }
+diff --git a/drivers/soc/ti/pm33xx.c b/drivers/soc/ti/pm33xx.c
+index ce09c42eaed25..f04c21157904b 100644
+--- a/drivers/soc/ti/pm33xx.c
++++ b/drivers/soc/ti/pm33xx.c
+@@ -527,7 +527,7 @@ static int am33xx_pm_probe(struct platform_device *pdev)
+ 
+ 	ret = am33xx_pm_alloc_sram();
+ 	if (ret)
+-		return ret;
++		goto err_wkup_m3_ipc_put;
+ 
+ 	ret = am33xx_pm_rtc_setup();
+ 	if (ret)
+@@ -572,13 +572,14 @@ err_pm_runtime_put:
+ 	pm_runtime_put_sync(dev);
+ err_pm_runtime_disable:
+ 	pm_runtime_disable(dev);
+-	wkup_m3_ipc_put(m3_ipc);
+ err_unsetup_rtc:
+ 	iounmap(rtc_base_virt);
+ 	clk_put(rtc_fck);
+ err_free_sram:
+ 	am33xx_pm_free_sram();
+ 	pm33xx_dev = NULL;
++err_wkup_m3_ipc_put:
++	wkup_m3_ipc_put(m3_ipc);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/soundwire/cadence_master.h b/drivers/soundwire/cadence_master.h
+index dec0b4f993c1a..6f0092deea4b5 100644
+--- a/drivers/soundwire/cadence_master.h
++++ b/drivers/soundwire/cadence_master.h
+@@ -84,7 +84,6 @@ struct sdw_cdns_stream_config {
+  * @bus: Bus handle
+  * @stream_type: Stream type
+  * @link_id: Master link id
+- * @hw_params: hw_params to be applied in .prepare step
+  * @suspended: status set when suspended, to be used in .prepare
+  * @paused: status set in .trigger, to be used in suspend
+  * @direction: stream direction
+@@ -96,7 +95,6 @@ struct sdw_cdns_dai_runtime {
+ 	struct sdw_bus *bus;
+ 	enum sdw_stream_type stream_type;
+ 	int link_id;
+-	struct snd_pcm_hw_params *hw_params;
+ 	bool suspended;
+ 	bool paused;
+ 	int direction;
+diff --git a/drivers/soundwire/intel.c b/drivers/soundwire/intel.c
+index 2651767272c73..0e26aafd403bb 100644
+--- a/drivers/soundwire/intel.c
++++ b/drivers/soundwire/intel.c
+@@ -817,7 +817,6 @@ static int intel_hw_params(struct snd_pcm_substream *substream,
+ 	dai_runtime->paused = false;
+ 	dai_runtime->suspended = false;
+ 	dai_runtime->pdi = pdi;
+-	dai_runtime->hw_params = params;
+ 
+ 	/* Inform DSP about PDI stream number */
+ 	ret = intel_params_stream(sdw, substream->stream, dai, params,
+@@ -870,6 +869,11 @@ static int intel_prepare(struct snd_pcm_substream *substream,
+ 	}
+ 
+ 	if (dai_runtime->suspended) {
++		struct snd_soc_pcm_runtime *rtd = asoc_substream_to_rtd(substream);
++		struct snd_pcm_hw_params *hw_params;
++
++		hw_params = &rtd->dpcm[substream->stream].hw_params;
++
+ 		dai_runtime->suspended = false;
+ 
+ 		/*
+@@ -881,7 +885,7 @@ static int intel_prepare(struct snd_pcm_substream *substream,
+ 		 */
+ 
+ 		/* configure stream */
+-		ch = params_channels(dai_runtime->hw_params);
++		ch = params_channels(hw_params);
+ 		if (substream->stream == SNDRV_PCM_STREAM_CAPTURE)
+ 			dir = SDW_DATA_DIR_RX;
+ 		else
+@@ -893,7 +897,7 @@ static int intel_prepare(struct snd_pcm_substream *substream,
+ 
+ 		/* Inform DSP about PDI stream number */
+ 		ret = intel_params_stream(sdw, substream->stream, dai,
+-					  dai_runtime->hw_params,
++					  hw_params,
+ 					  sdw->instance,
+ 					  dai_runtime->pdi->intel_alh_id);
+ 	}
+@@ -932,7 +936,6 @@ intel_hw_free(struct snd_pcm_substream *substream, struct snd_soc_dai *dai)
+ 		return ret;
+ 	}
+ 
+-	dai_runtime->hw_params = NULL;
+ 	dai_runtime->pdi = NULL;
+ 
+ 	return 0;
+diff --git a/drivers/soundwire/qcom.c b/drivers/soundwire/qcom.c
+index 3354248702900..ba502129150d5 100644
+--- a/drivers/soundwire/qcom.c
++++ b/drivers/soundwire/qcom.c
+@@ -704,7 +704,7 @@ static int qcom_swrm_init(struct qcom_swrm_ctrl *ctrl)
+ 	}
+ 
+ 	/* Configure number of retries of a read/write cmd */
+-	if (ctrl->version > 0x01050001) {
++	if (ctrl->version >= 0x01050001) {
+ 		/* Only for versions >= 1.5.1 */
+ 		ctrl->reg_write(ctrl, SWRM_CMD_FIFO_CFG_ADDR,
+ 				SWRM_RD_WR_CMD_RETRIES |
+diff --git a/drivers/spi/atmel-quadspi.c b/drivers/spi/atmel-quadspi.c
+index f4632cb074954..713a4d6700fd0 100644
+--- a/drivers/spi/atmel-quadspi.c
++++ b/drivers/spi/atmel-quadspi.c
+@@ -706,18 +706,28 @@ static int atmel_qspi_remove(struct platform_device *pdev)
+ 	struct atmel_qspi *aq = spi_controller_get_devdata(ctrl);
+ 	int ret;
+ 
+-	ret = pm_runtime_resume_and_get(&pdev->dev);
+-	if (ret < 0)
+-		return ret;
+-
+ 	spi_unregister_controller(ctrl);
+-	atmel_qspi_write(QSPI_CR_QSPIDIS, aq, QSPI_CR);
++
++	ret = pm_runtime_get_sync(&pdev->dev);
++	if (ret >= 0) {
++		atmel_qspi_write(QSPI_CR_QSPIDIS, aq, QSPI_CR);
++		clk_disable(aq->qspick);
++		clk_disable(aq->pclk);
++	} else {
++		/*
++		 * atmel_qspi_runtime_{suspend,resume} just disable and enable
++		 * the two clks respectively. So after resume failed these are
++		 * off, and we skip hardware access and disabling these clks again.
++		 */
++		dev_warn(&pdev->dev, "Failed to resume device on remove\n");
++	}
++
++	clk_unprepare(aq->qspick);
++	clk_unprepare(aq->pclk);
+ 
+ 	pm_runtime_disable(&pdev->dev);
+ 	pm_runtime_put_noidle(&pdev->dev);
+ 
+-	clk_disable_unprepare(aq->qspick);
+-	clk_disable_unprepare(aq->pclk);
+ 	return 0;
+ }
+ 
+@@ -786,7 +796,11 @@ static int __maybe_unused atmel_qspi_runtime_resume(struct device *dev)
+ 	if (ret)
+ 		return ret;
+ 
+-	return clk_enable(aq->qspick);
++	ret = clk_enable(aq->qspick);
++	if (ret)
++		clk_disable(aq->pclk);
++
++	return ret;
+ }
+ 
+ static const struct dev_pm_ops __maybe_unused atmel_qspi_pm_ops = {
+diff --git a/drivers/spi/spi-cadence-quadspi.c b/drivers/spi/spi-cadence-quadspi.c
+index 64b6a460d739b..8f36e1306e169 100644
+--- a/drivers/spi/spi-cadence-quadspi.c
++++ b/drivers/spi/spi-cadence-quadspi.c
+@@ -1802,32 +1802,36 @@ static int cqspi_remove(struct platform_device *pdev)
+ 	return 0;
+ }
+ 
+-#ifdef CONFIG_PM_SLEEP
+ static int cqspi_suspend(struct device *dev)
+ {
+ 	struct cqspi_st *cqspi = dev_get_drvdata(dev);
++	struct spi_master *master = dev_get_drvdata(dev);
++	int ret;
+ 
++	ret = spi_master_suspend(master);
+ 	cqspi_controller_enable(cqspi, 0);
+-	return 0;
++
++	clk_disable_unprepare(cqspi->clk);
++
++	return ret;
+ }
+ 
+ static int cqspi_resume(struct device *dev)
+ {
+ 	struct cqspi_st *cqspi = dev_get_drvdata(dev);
++	struct spi_master *master = dev_get_drvdata(dev);
+ 
+-	cqspi_controller_enable(cqspi, 1);
+-	return 0;
+-}
++	clk_prepare_enable(cqspi->clk);
++	cqspi_wait_idle(cqspi);
++	cqspi_controller_init(cqspi);
+ 
+-static const struct dev_pm_ops cqspi__dev_pm_ops = {
+-	.suspend = cqspi_suspend,
+-	.resume = cqspi_resume,
+-};
++	cqspi->current_cs = -1;
++	cqspi->sclk = 0;
++
++	return spi_master_resume(master);
++}
+ 
+-#define CQSPI_DEV_PM_OPS	(&cqspi__dev_pm_ops)
+-#else
+-#define CQSPI_DEV_PM_OPS	NULL
+-#endif
++static DEFINE_SIMPLE_DEV_PM_OPS(cqspi_dev_pm_ops, cqspi_suspend, cqspi_resume);
+ 
+ static const struct cqspi_driver_platdata cdns_qspi = {
+ 	.quirks = CQSPI_DISABLE_DAC_MODE,
+@@ -1894,7 +1898,7 @@ static struct platform_driver cqspi_platform_driver = {
+ 	.remove = cqspi_remove,
+ 	.driver = {
+ 		.name = CQSPI_NAME,
+-		.pm = CQSPI_DEV_PM_OPS,
++		.pm = &cqspi_dev_pm_ops,
+ 		.of_match_table = cqspi_dt_ids,
+ 	},
+ };
+diff --git a/drivers/spi/spi-fsl-spi.c b/drivers/spi/spi-fsl-spi.c
+index 93152144fd2ec..5602f052b2b50 100644
+--- a/drivers/spi/spi-fsl-spi.c
++++ b/drivers/spi/spi-fsl-spi.c
+@@ -181,8 +181,8 @@ static int mspi_apply_qe_mode_quirks(struct spi_mpc8xxx_cs *cs,
+ 				struct spi_device *spi,
+ 				int bits_per_word)
+ {
+-	/* QE uses Little Endian for words > 8
+-	 * so transform all words > 8 into 8 bits
++	/* CPM/QE uses Little Endian for words > 8
++	 * so transform 16 and 32 bits words into 8 bits
+ 	 * Unfortnatly that doesn't work for LSB so
+ 	 * reject these for now */
+ 	/* Note: 32 bits word, LSB works iff
+@@ -190,9 +190,11 @@ static int mspi_apply_qe_mode_quirks(struct spi_mpc8xxx_cs *cs,
+ 	if (spi->mode & SPI_LSB_FIRST &&
+ 	    bits_per_word > 8)
+ 		return -EINVAL;
+-	if (bits_per_word > 8)
++	if (bits_per_word <= 8)
++		return bits_per_word;
++	if (bits_per_word == 16 || bits_per_word == 32)
+ 		return 8; /* pretend its 8 bits */
+-	return bits_per_word;
++	return -EINVAL;
+ }
+ 
+ static int fsl_spi_setup_transfer(struct spi_device *spi,
+@@ -222,7 +224,7 @@ static int fsl_spi_setup_transfer(struct spi_device *spi,
+ 		bits_per_word = mspi_apply_cpu_mode_quirks(cs, spi,
+ 							   mpc8xxx_spi,
+ 							   bits_per_word);
+-	else if (mpc8xxx_spi->flags & SPI_QE)
++	else
+ 		bits_per_word = mspi_apply_qe_mode_quirks(cs, spi,
+ 							  bits_per_word);
+ 
+diff --git a/drivers/spi/spi-imx.c b/drivers/spi/spi-imx.c
+index e4ccd0c329d06..6c9c87cd14cae 100644
+--- a/drivers/spi/spi-imx.c
++++ b/drivers/spi/spi-imx.c
+@@ -1856,13 +1856,11 @@ static int spi_imx_remove(struct platform_device *pdev)
+ 
+ 	spi_unregister_controller(controller);
+ 
+-	ret = pm_runtime_resume_and_get(spi_imx->dev);
+-	if (ret < 0) {
+-		dev_err(spi_imx->dev, "failed to enable clock\n");
+-		return ret;
+-	}
+-
+-	writel(0, spi_imx->base + MXC_CSPICTRL);
++	ret = pm_runtime_get_sync(spi_imx->dev);
++	if (ret >= 0)
++		writel(0, spi_imx->base + MXC_CSPICTRL);
++	else
++		dev_warn(spi_imx->dev, "failed to enable clock, skip hw disable\n");
+ 
+ 	pm_runtime_dont_use_autosuspend(spi_imx->dev);
+ 	pm_runtime_put_sync(spi_imx->dev);
+diff --git a/drivers/spi/spi-mpc512x-psc.c b/drivers/spi/spi-mpc512x-psc.c
+index 03630359ce70d..0b4d49ef84de8 100644
+--- a/drivers/spi/spi-mpc512x-psc.c
++++ b/drivers/spi/spi-mpc512x-psc.c
+@@ -22,7 +22,6 @@
+ #include <linux/delay.h>
+ #include <linux/clk.h>
+ #include <linux/spi/spi.h>
+-#include <linux/fsl_devices.h>
+ #include <asm/mpc52xx_psc.h>
+ 
+ enum {
+@@ -51,8 +50,6 @@ enum {
+ 	__ret; })
+ 
+ struct mpc512x_psc_spi {
+-	void (*cs_control)(struct spi_device *spi, bool on);
+-
+ 	/* driver internal data */
+ 	int type;
+ 	void __iomem *psc;
+@@ -128,26 +125,16 @@ static void mpc512x_psc_spi_activate_cs(struct spi_device *spi)
+ 	mps->bits_per_word = cs->bits_per_word;
+ 
+ 	if (spi->cs_gpiod) {
+-		if (mps->cs_control)
+-			/* boardfile override */
+-			mps->cs_control(spi, (spi->mode & SPI_CS_HIGH) ? 1 : 0);
+-		else
+-			/* gpiolib will deal with the inversion */
+-			gpiod_set_value(spi->cs_gpiod, 1);
++		/* gpiolib will deal with the inversion */
++		gpiod_set_value(spi->cs_gpiod, 1);
+ 	}
+ }
+ 
+ static void mpc512x_psc_spi_deactivate_cs(struct spi_device *spi)
+ {
+-	struct mpc512x_psc_spi *mps = spi_master_get_devdata(spi->master);
+-
+ 	if (spi->cs_gpiod) {
+-		if (mps->cs_control)
+-			/* boardfile override */
+-			mps->cs_control(spi, (spi->mode & SPI_CS_HIGH) ? 0 : 1);
+-		else
+-			/* gpiolib will deal with the inversion */
+-			gpiod_set_value(spi->cs_gpiod, 0);
++		/* gpiolib will deal with the inversion */
++		gpiod_set_value(spi->cs_gpiod, 0);
+ 	}
+ }
+ 
+@@ -474,7 +461,6 @@ static irqreturn_t mpc512x_psc_spi_isr(int irq, void *dev_id)
+ static int mpc512x_psc_spi_do_probe(struct device *dev, u32 regaddr,
+ 					      u32 size, unsigned int irq)
+ {
+-	struct fsl_spi_platform_data *pdata = dev_get_platdata(dev);
+ 	struct mpc512x_psc_spi *mps;
+ 	struct spi_master *master;
+ 	int ret;
+@@ -490,12 +476,6 @@ static int mpc512x_psc_spi_do_probe(struct device *dev, u32 regaddr,
+ 	mps->type = (int)of_device_get_match_data(dev);
+ 	mps->irq = irq;
+ 
+-	if (pdata) {
+-		mps->cs_control = pdata->cs_control;
+-		master->bus_num = pdata->bus_num;
+-		master->num_chipselect = pdata->max_chipselect;
+-	}
+-
+ 	master->mode_bits = SPI_CPOL | SPI_CPHA | SPI_CS_HIGH | SPI_LSB_FIRST;
+ 	master->setup = mpc512x_psc_spi_setup;
+ 	master->prepare_transfer_hardware = mpc512x_psc_spi_prep_xfer_hw;
+diff --git a/drivers/spi/spi-mpc52xx-psc.c b/drivers/spi/spi-mpc52xx-psc.c
+index 609311231e64b..604868df913c4 100644
+--- a/drivers/spi/spi-mpc52xx-psc.c
++++ b/drivers/spi/spi-mpc52xx-psc.c
+@@ -18,7 +18,6 @@
+ #include <linux/io.h>
+ #include <linux/delay.h>
+ #include <linux/spi/spi.h>
+-#include <linux/fsl_devices.h>
+ #include <linux/slab.h>
+ #include <linux/of_irq.h>
+ 
+@@ -28,10 +27,6 @@
+ #define MCLK 20000000 /* PSC port MClk in hz */
+ 
+ struct mpc52xx_psc_spi {
+-	/* fsl_spi_platform data */
+-	void (*cs_control)(struct spi_device *spi, bool on);
+-	u32 sysclk;
+-
+ 	/* driver internal data */
+ 	struct mpc52xx_psc __iomem *psc;
+ 	struct mpc52xx_psc_fifo __iomem *fifo;
+@@ -101,17 +96,6 @@ static void mpc52xx_psc_spi_activate_cs(struct spi_device *spi)
+ 		ccr |= (MCLK / 1000000 - 1) & 0xFF;
+ 	out_be16((u16 __iomem *)&psc->ccr, ccr);
+ 	mps->bits_per_word = cs->bits_per_word;
+-
+-	if (mps->cs_control)
+-		mps->cs_control(spi, (spi->mode & SPI_CS_HIGH) ? 1 : 0);
+-}
+-
+-static void mpc52xx_psc_spi_deactivate_cs(struct spi_device *spi)
+-{
+-	struct mpc52xx_psc_spi *mps = spi_master_get_devdata(spi->master);
+-
+-	if (mps->cs_control)
+-		mps->cs_control(spi, (spi->mode & SPI_CS_HIGH) ? 0 : 1);
+ }
+ 
+ #define MPC52xx_PSC_BUFSIZE (MPC52xx_PSC_RFNUM_MASK + 1)
+@@ -220,14 +204,9 @@ int mpc52xx_psc_spi_transfer_one_message(struct spi_controller *ctlr,
+ 		m->actual_length += t->len;
+ 
+ 		spi_transfer_delay_exec(t);
+-
+-		if (cs_change)
+-			mpc52xx_psc_spi_deactivate_cs(spi);
+ 	}
+ 
+ 	m->status = status;
+-	if (status || !cs_change)
+-		mpc52xx_psc_spi_deactivate_cs(spi);
+ 
+ 	mpc52xx_psc_spi_transfer_setup(spi, NULL);
+ 
+@@ -269,7 +248,7 @@ static int mpc52xx_psc_spi_port_config(int psc_id, struct mpc52xx_psc_spi *mps)
+ 	int ret;
+ 
+ 	/* default sysclk is 512MHz */
+-	mclken_div = (mps->sysclk ? mps->sysclk : 512000000) / MCLK;
++	mclken_div = 512000000 / MCLK;
+ 	ret = mpc52xx_set_psc_clkdiv(psc_id, mclken_div);
+ 	if (ret)
+ 		return ret;
+@@ -317,7 +296,6 @@ static irqreturn_t mpc52xx_psc_spi_isr(int irq, void *dev_id)
+ static int mpc52xx_psc_spi_do_probe(struct device *dev, u32 regaddr,
+ 				u32 size, unsigned int irq, s16 bus_num)
+ {
+-	struct fsl_spi_platform_data *pdata = dev_get_platdata(dev);
+ 	struct mpc52xx_psc_spi *mps;
+ 	struct spi_master *master;
+ 	int ret;
+@@ -333,19 +311,8 @@ static int mpc52xx_psc_spi_do_probe(struct device *dev, u32 regaddr,
+ 	master->mode_bits = SPI_CPOL | SPI_CPHA | SPI_CS_HIGH | SPI_LSB_FIRST;
+ 
+ 	mps->irq = irq;
+-	if (pdata == NULL) {
+-		dev_warn(dev,
+-			 "probe called without platform data, no cs_control function will be called\n");
+-		mps->cs_control = NULL;
+-		mps->sysclk = 0;
+-		master->bus_num = bus_num;
+-		master->num_chipselect = 255;
+-	} else {
+-		mps->cs_control = pdata->cs_control;
+-		mps->sysclk = pdata->sysclk;
+-		master->bus_num = pdata->bus_num;
+-		master->num_chipselect = pdata->max_chipselect;
+-	}
++	master->bus_num = bus_num;
++	master->num_chipselect = 255;
+ 	master->setup = mpc52xx_psc_spi_setup;
+ 	master->transfer_one_message = mpc52xx_psc_spi_transfer_one_message;
+ 	master->cleanup = mpc52xx_psc_spi_cleanup;
+diff --git a/drivers/spi/spi-pci1xxxx.c b/drivers/spi/spi-pci1xxxx.c
+index a31c3b612a430..78bff59bd7ad2 100644
+--- a/drivers/spi/spi-pci1xxxx.c
++++ b/drivers/spi/spi-pci1xxxx.c
+@@ -58,7 +58,7 @@
+ #define VENDOR_ID_MCHP 0x1055
+ 
+ #define SPI_SUSPEND_CONFIG 0x101
+-#define SPI_RESUME_CONFIG 0x303
++#define SPI_RESUME_CONFIG 0x203
+ 
+ struct pci1xxxx_spi_internal {
+ 	u8 hw_inst;
+@@ -114,17 +114,14 @@ static void pci1xxxx_spi_set_cs(struct spi_device *spi, bool enable)
+ 
+ 	/* Set the DEV_SEL bits of the SPI_MST_CTL_REG */
+ 	regval = readl(par->reg_base + SPI_MST_CTL_REG_OFFSET(p->hw_inst));
+-	if (enable) {
++	if (!enable) {
++		regval |= SPI_FORCE_CE;
+ 		regval &= ~SPI_MST_CTL_DEVSEL_MASK;
+ 		regval |= (spi->chip_select << 25);
+-		writel(regval,
+-		       par->reg_base + SPI_MST_CTL_REG_OFFSET(p->hw_inst));
+ 	} else {
+-		regval &= ~(spi->chip_select << 25);
+-		writel(regval,
+-		       par->reg_base + SPI_MST_CTL_REG_OFFSET(p->hw_inst));
+-
++		regval &= ~SPI_FORCE_CE;
+ 	}
++	writel(regval, par->reg_base + SPI_MST_CTL_REG_OFFSET(p->hw_inst));
+ }
+ 
+ static u8 pci1xxxx_get_clock_div(u32 hz)
+@@ -199,8 +196,9 @@ static int pci1xxxx_spi_transfer_one(struct spi_controller *spi_ctlr,
+ 			else
+ 				regval &= ~SPI_MST_CTL_MODE_SEL;
+ 
+-			regval |= ((clkdiv << 5) | SPI_FORCE_CE | (len << 8));
++			regval |= (clkdiv << 5);
+ 			regval &= ~SPI_MST_CTL_CMD_LEN_MASK;
++			regval |= (len << 8);
+ 			writel(regval, par->reg_base +
+ 			       SPI_MST_CTL_REG_OFFSET(p->hw_inst));
+ 			regval = readl(par->reg_base +
+@@ -222,10 +220,6 @@ static int pci1xxxx_spi_transfer_one(struct spi_controller *spi_ctlr,
+ 			}
+ 		}
+ 	}
+-
+-	regval = readl(par->reg_base + SPI_MST_CTL_REG_OFFSET(p->hw_inst));
+-	regval &= ~SPI_FORCE_CE;
+-	writel(regval, par->reg_base + SPI_MST_CTL_REG_OFFSET(p->hw_inst));
+ 	p->spi_xfer_in_progress = false;
+ 
+ 	return 0;
+diff --git a/drivers/spi/spi-qup.c b/drivers/spi/spi-qup.c
+index 678dc51ef0174..205e54f157b4a 100644
+--- a/drivers/spi/spi-qup.c
++++ b/drivers/spi/spi-qup.c
+@@ -1277,18 +1277,22 @@ static int spi_qup_remove(struct platform_device *pdev)
+ 	struct spi_qup *controller = spi_master_get_devdata(master);
+ 	int ret;
+ 
+-	ret = pm_runtime_resume_and_get(&pdev->dev);
+-	if (ret < 0)
+-		return ret;
++	ret = pm_runtime_get_sync(&pdev->dev);
+ 
+-	ret = spi_qup_set_state(controller, QUP_STATE_RESET);
+-	if (ret)
+-		return ret;
++	if (ret >= 0) {
++		ret = spi_qup_set_state(controller, QUP_STATE_RESET);
++		if (ret)
++			dev_warn(&pdev->dev, "failed to reset controller (%pe)\n",
++				 ERR_PTR(ret));
+ 
+-	spi_qup_release_dma(master);
++		clk_disable_unprepare(controller->cclk);
++		clk_disable_unprepare(controller->iclk);
++	} else {
++		dev_warn(&pdev->dev, "failed to resume, skip hw disable (%pe)\n",
++			 ERR_PTR(ret));
++	}
+ 
+-	clk_disable_unprepare(controller->cclk);
+-	clk_disable_unprepare(controller->iclk);
++	spi_qup_release_dma(master);
+ 
+ 	pm_runtime_put_noidle(&pdev->dev);
+ 	pm_runtime_disable(&pdev->dev);
+diff --git a/drivers/spi/spi-sn-f-ospi.c b/drivers/spi/spi-sn-f-ospi.c
+index 333b22dfd8dba..0aedade8908c4 100644
+--- a/drivers/spi/spi-sn-f-ospi.c
++++ b/drivers/spi/spi-sn-f-ospi.c
+@@ -561,7 +561,7 @@ static bool f_ospi_supports_op(struct spi_mem *mem,
+ 	if (!f_ospi_supports_op_width(mem, op))
+ 		return false;
+ 
+-	return true;
++	return spi_mem_default_supports_op(mem, op);
+ }
+ 
+ static int f_ospi_adjust_op_size(struct spi_mem *mem, struct spi_mem_op *op)
+diff --git a/drivers/spmi/spmi.c b/drivers/spmi/spmi.c
+index 73551531ed437..dbc8585e8d6d0 100644
+--- a/drivers/spmi/spmi.c
++++ b/drivers/spmi/spmi.c
+@@ -350,7 +350,8 @@ static void spmi_drv_remove(struct device *dev)
+ 	const struct spmi_driver *sdrv = to_spmi_driver(dev->driver);
+ 
+ 	pm_runtime_get_sync(dev);
+-	sdrv->remove(to_spmi_device(dev));
++	if (sdrv->remove)
++		sdrv->remove(to_spmi_device(dev));
+ 	pm_runtime_put_noidle(dev);
+ 
+ 	pm_runtime_disable(dev);
+diff --git a/drivers/staging/iio/resolver/ad2s1210.c b/drivers/staging/iio/resolver/ad2s1210.c
+index e4cf42438487d..636c45b128438 100644
+--- a/drivers/staging/iio/resolver/ad2s1210.c
++++ b/drivers/staging/iio/resolver/ad2s1210.c
+@@ -101,7 +101,7 @@ struct ad2s1210_state {
+ static const int ad2s1210_mode_vals[4][2] = {
+ 	[MOD_POS] = { 0, 0 },
+ 	[MOD_VEL] = { 0, 1 },
+-	[MOD_CONFIG] = { 1, 0 },
++	[MOD_CONFIG] = { 1, 1 },
+ };
+ 
+ static inline void ad2s1210_set_mode(enum ad2s1210_mode mode,
+diff --git a/drivers/staging/media/av7110/av7110_av.c b/drivers/staging/media/av7110/av7110_av.c
+index 0bf513c26b6b5..a5c5bebad3061 100644
+--- a/drivers/staging/media/av7110/av7110_av.c
++++ b/drivers/staging/media/av7110/av7110_av.c
+@@ -823,10 +823,10 @@ static int write_ts_to_decoder(struct av7110 *av7110, int type, const u8 *buf, s
+ 		av7110_ipack_flush(ipack);
+ 
+ 	if (buf[3] & ADAPT_FIELD) {
++		if (buf[4] > len - 1 - 4)
++			return 0;
+ 		len -= buf[4] + 1;
+ 		buf += buf[4] + 1;
+-		if (!len)
+-			return 0;
+ 	}
+ 
+ 	av7110_ipack_instant_repack(buf + 4, len - 4, ipack);
+diff --git a/drivers/staging/media/rkvdec/rkvdec.c b/drivers/staging/media/rkvdec/rkvdec.c
+index 7bab7586918c1..82806f198074a 100644
+--- a/drivers/staging/media/rkvdec/rkvdec.c
++++ b/drivers/staging/media/rkvdec/rkvdec.c
+@@ -1066,6 +1066,8 @@ static int rkvdec_remove(struct platform_device *pdev)
+ {
+ 	struct rkvdec_dev *rkvdec = platform_get_drvdata(pdev);
+ 
++	cancel_delayed_work_sync(&rkvdec->watchdog_work);
++
+ 	rkvdec_v4l2_cleanup(rkvdec);
+ 	pm_runtime_disable(&pdev->dev);
+ 	pm_runtime_dont_use_autosuspend(&pdev->dev);
+diff --git a/drivers/staging/media/sunxi/cedrus/cedrus.c b/drivers/staging/media/sunxi/cedrus/cedrus.c
+index a43d5ff667163..a50a4d0a8f715 100644
+--- a/drivers/staging/media/sunxi/cedrus/cedrus.c
++++ b/drivers/staging/media/sunxi/cedrus/cedrus.c
+@@ -547,6 +547,7 @@ static int cedrus_remove(struct platform_device *pdev)
+ {
+ 	struct cedrus_dev *dev = platform_get_drvdata(pdev);
+ 
++	cancel_delayed_work_sync(&dev->watchdog_work);
+ 	if (media_devnode_is_registered(dev->mdev.devnode)) {
+ 		media_device_unregister(&dev->mdev);
+ 		v4l2_m2m_unregister_media_controller(dev->m2m_dev);
+diff --git a/drivers/staging/rtl8192e/rtl8192e/rtl_core.c b/drivers/staging/rtl8192e/rtl8192e/rtl_core.c
+index 104b16cfa9790..72d76dc7df781 100644
+--- a/drivers/staging/rtl8192e/rtl8192e/rtl_core.c
++++ b/drivers/staging/rtl8192e/rtl8192e/rtl_core.c
+@@ -713,6 +713,7 @@ static int _rtl92e_sta_up(struct net_device *dev, bool is_silent_reset)
+ 	else
+ 		netif_wake_queue(dev);
+ 
++	priv->bfirst_after_down = false;
+ 	return 0;
+ }
+ 
+diff --git a/drivers/staging/rtl8723bs/core/rtw_mlme.c b/drivers/staging/rtl8723bs/core/rtw_mlme.c
+index c6fd6cf741ef6..091842d70d486 100644
+--- a/drivers/staging/rtl8723bs/core/rtw_mlme.c
++++ b/drivers/staging/rtl8723bs/core/rtw_mlme.c
+@@ -1549,7 +1549,7 @@ void _rtw_join_timeout_handler(struct timer_list *t)
+ 	if (adapter->bDriverStopped || adapter->bSurpriseRemoved)
+ 		return;
+ 
+-	spin_lock_irq(&pmlmepriv->lock);
++	spin_lock_bh(&pmlmepriv->lock);
+ 
+ 	if (rtw_to_roam(adapter) > 0) { /* join timeout caused by roaming */
+ 		while (1) {
+@@ -1577,7 +1577,7 @@ void _rtw_join_timeout_handler(struct timer_list *t)
+ 
+ 	}
+ 
+-	spin_unlock_irq(&pmlmepriv->lock);
++	spin_unlock_bh(&pmlmepriv->lock);
+ }
+ 
+ /*
+@@ -1590,11 +1590,11 @@ void rtw_scan_timeout_handler(struct timer_list *t)
+ 						  mlmepriv.scan_to_timer);
+ 	struct	mlme_priv *pmlmepriv = &adapter->mlmepriv;
+ 
+-	spin_lock_irq(&pmlmepriv->lock);
++	spin_lock_bh(&pmlmepriv->lock);
+ 
+ 	_clr_fwstate_(pmlmepriv, _FW_UNDER_SURVEY);
+ 
+-	spin_unlock_irq(&pmlmepriv->lock);
++	spin_unlock_bh(&pmlmepriv->lock);
+ 
+ 	rtw_indicate_scan_done(adapter, true);
+ }
+diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c
+index baf4da7bb3b4e..3f7a9f7f5f4e3 100644
+--- a/drivers/target/iscsi/iscsi_target.c
++++ b/drivers/target/iscsi/iscsi_target.c
+@@ -1190,9 +1190,10 @@ int iscsit_setup_scsi_cmd(struct iscsit_conn *conn, struct iscsit_cmd *cmd,
+ 	 * Initialize struct se_cmd descriptor from target_core_mod infrastructure
+ 	 */
+ 	__target_init_cmd(&cmd->se_cmd, &iscsi_ops,
+-			 conn->sess->se_sess, be32_to_cpu(hdr->data_length),
+-			 cmd->data_direction, sam_task_attr,
+-			 cmd->sense_buffer + 2, scsilun_to_int(&hdr->lun));
++			  conn->sess->se_sess, be32_to_cpu(hdr->data_length),
++			  cmd->data_direction, sam_task_attr,
++			  cmd->sense_buffer + 2, scsilun_to_int(&hdr->lun),
++			  conn->cmd_cnt);
+ 
+ 	pr_debug("Got SCSI Command, ITT: 0x%08x, CmdSN: 0x%08x,"
+ 		" ExpXferLen: %u, Length: %u, CID: %hu\n", hdr->itt,
+@@ -2055,7 +2056,8 @@ iscsit_handle_task_mgt_cmd(struct iscsit_conn *conn, struct iscsit_cmd *cmd,
+ 	__target_init_cmd(&cmd->se_cmd, &iscsi_ops,
+ 			  conn->sess->se_sess, 0, DMA_NONE,
+ 			  TCM_SIMPLE_TAG, cmd->sense_buffer + 2,
+-			  scsilun_to_int(&hdr->lun));
++			  scsilun_to_int(&hdr->lun),
++			  conn->cmd_cnt);
+ 
+ 	target_get_sess_cmd(&cmd->se_cmd, true);
+ 
+@@ -4218,9 +4220,12 @@ static void iscsit_release_commands_from_conn(struct iscsit_conn *conn)
+ 	list_for_each_entry_safe(cmd, cmd_tmp, &tmp_list, i_conn_node) {
+ 		struct se_cmd *se_cmd = &cmd->se_cmd;
+ 
+-		if (se_cmd->se_tfo != NULL) {
+-			spin_lock_irq(&se_cmd->t_state_lock);
+-			if (se_cmd->transport_state & CMD_T_ABORTED) {
++		if (!se_cmd->se_tfo)
++			continue;
++
++		spin_lock_irq(&se_cmd->t_state_lock);
++		if (se_cmd->transport_state & CMD_T_ABORTED) {
++			if (!(se_cmd->transport_state & CMD_T_TAS))
+ 				/*
+ 				 * LIO's abort path owns the cleanup for this,
+ 				 * so put it back on the list and let
+@@ -4228,11 +4233,10 @@ static void iscsit_release_commands_from_conn(struct iscsit_conn *conn)
+ 				 */
+ 				list_move_tail(&cmd->i_conn_node,
+ 					       &conn->conn_cmd_list);
+-			} else {
+-				se_cmd->transport_state |= CMD_T_FABRIC_STOP;
+-			}
+-			spin_unlock_irq(&se_cmd->t_state_lock);
++		} else {
++			se_cmd->transport_state |= CMD_T_FABRIC_STOP;
+ 		}
++		spin_unlock_irq(&se_cmd->t_state_lock);
+ 	}
+ 	spin_unlock_bh(&conn->cmd_lock);
+ 
+@@ -4243,6 +4247,16 @@ static void iscsit_release_commands_from_conn(struct iscsit_conn *conn)
+ 		iscsit_free_cmd(cmd, true);
+ 
+ 	}
++
++	/*
++	 * Wait on commands that were cleaned up via the aborted_task path.
++	 * LLDs that implement iscsit_wait_conn will already have waited for
++	 * commands.
++	 */
++	if (!conn->conn_transport->iscsit_wait_conn) {
++		target_stop_cmd_counter(conn->cmd_cnt);
++		target_wait_for_cmds(conn->cmd_cnt);
++	}
+ }
+ 
+ static void iscsit_stop_timers_for_cmds(
+diff --git a/drivers/target/iscsi/iscsi_target_login.c b/drivers/target/iscsi/iscsi_target_login.c
+index 27e448c2d066c..274bdd7845ca9 100644
+--- a/drivers/target/iscsi/iscsi_target_login.c
++++ b/drivers/target/iscsi/iscsi_target_login.c
+@@ -1147,8 +1147,14 @@ static struct iscsit_conn *iscsit_alloc_conn(struct iscsi_np *np)
+ 		goto free_conn_cpumask;
+ 	}
+ 
++	conn->cmd_cnt = target_alloc_cmd_counter();
++	if (!conn->cmd_cnt)
++		goto free_conn_allowed_cpumask;
++
+ 	return conn;
+ 
++free_conn_allowed_cpumask:
++	free_cpumask_var(conn->allowed_cpumask);
+ free_conn_cpumask:
+ 	free_cpumask_var(conn->conn_cpumask);
+ free_conn_ops:
+@@ -1162,6 +1168,7 @@ free_conn:
+ 
+ void iscsit_free_conn(struct iscsit_conn *conn)
+ {
++	target_free_cmd_counter(conn->cmd_cnt);
+ 	free_cpumask_var(conn->allowed_cpumask);
+ 	free_cpumask_var(conn->conn_cpumask);
+ 	kfree(conn->conn_ops);
+diff --git a/drivers/target/target_core_device.c b/drivers/target/target_core_device.c
+index f6e58410ec3f9..aeb03136773d5 100644
+--- a/drivers/target/target_core_device.c
++++ b/drivers/target/target_core_device.c
+@@ -782,6 +782,7 @@ struct se_device *target_alloc_device(struct se_hba *hba, const char *name)
+ 	spin_lock_init(&dev->t10_alua.lba_map_lock);
+ 
+ 	INIT_WORK(&dev->delayed_cmd_work, target_do_delayed_work);
++	mutex_init(&dev->lun_reset_mutex);
+ 
+ 	dev->t10_wwn.t10_dev = dev;
+ 	/*
+diff --git a/drivers/target/target_core_internal.h b/drivers/target/target_core_internal.h
+index 38a6d08f75b34..85e35cf582e50 100644
+--- a/drivers/target/target_core_internal.h
++++ b/drivers/target/target_core_internal.h
+@@ -138,7 +138,6 @@ int	init_se_kmem_caches(void);
+ void	release_se_kmem_caches(void);
+ u32	scsi_get_new_index(scsi_index_t);
+ void	transport_subsystem_check_init(void);
+-void	transport_uninit_session(struct se_session *);
+ unsigned char *transport_dump_cmd_direction(struct se_cmd *);
+ void	transport_dump_dev_state(struct se_device *, char *, int *);
+ void	transport_dump_dev_info(struct se_device *, struct se_lun *,
+diff --git a/drivers/target/target_core_tmr.c b/drivers/target/target_core_tmr.c
+index 2b95b4550a637..4718db628222b 100644
+--- a/drivers/target/target_core_tmr.c
++++ b/drivers/target/target_core_tmr.c
+@@ -188,14 +188,23 @@ static void core_tmr_drain_tmr_list(
+ 	 * LUN_RESET tmr..
+ 	 */
+ 	spin_lock_irqsave(&dev->se_tmr_lock, flags);
+-	if (tmr)
+-		list_del_init(&tmr->tmr_list);
+ 	list_for_each_entry_safe(tmr_p, tmr_pp, &dev->dev_tmr_list, tmr_list) {
++		if (tmr_p == tmr)
++			continue;
++
+ 		cmd = tmr_p->task_cmd;
+ 		if (!cmd) {
+ 			pr_err("Unable to locate struct se_cmd for TMR\n");
+ 			continue;
+ 		}
++
++		/*
++		 * We only execute one LUN_RESET at a time so we can't wait
++		 * on them below.
++		 */
++		if (tmr_p->function == TMR_LUN_RESET)
++			continue;
++
+ 		/*
+ 		 * If this function was called with a valid pr_res_key
+ 		 * parameter (eg: for PROUT PREEMPT_AND_ABORT service action
+@@ -379,14 +388,25 @@ int core_tmr_lun_reset(
+ 				tmr_nacl->initiatorname);
+ 		}
+ 	}
++
++
++	/*
++	 * We only allow one reset or preempt and abort to execute at a time
++	 * to prevent one call from claiming all the cmds causing a second
++	 * call from returning while cmds it should have waited on are still
++	 * running.
++	 */
++	mutex_lock(&dev->lun_reset_mutex);
++
+ 	pr_debug("LUN_RESET: %s starting for [%s], tas: %d\n",
+ 		(preempt_and_abort_list) ? "Preempt" : "TMR",
+ 		dev->transport->name, tas);
+-
+ 	core_tmr_drain_tmr_list(dev, tmr, preempt_and_abort_list);
+ 	core_tmr_drain_state_list(dev, prout_cmd, tmr_sess, tas,
+ 				preempt_and_abort_list);
+ 
++	mutex_unlock(&dev->lun_reset_mutex);
++
+ 	/*
+ 	 * Clear any legacy SPC-2 reservation when called during
+ 	 * LOGICAL UNIT RESET
+diff --git a/drivers/target/target_core_tpg.c b/drivers/target/target_core_tpg.c
+index 736847c933e5c..8ebccdbd94f0e 100644
+--- a/drivers/target/target_core_tpg.c
++++ b/drivers/target/target_core_tpg.c
+@@ -328,7 +328,7 @@ static void target_shutdown_sessions(struct se_node_acl *acl)
+ restart:
+ 	spin_lock_irqsave(&acl->nacl_sess_lock, flags);
+ 	list_for_each_entry(sess, &acl->acl_sess_list, sess_acl_list) {
+-		if (atomic_read(&sess->stopped))
++		if (sess->cmd_cnt && atomic_read(&sess->cmd_cnt->stopped))
+ 			continue;
+ 
+ 		list_del_init(&sess->sess_acl_list);
+diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c
+index 5926316252eb9..86adff2a86edd 100644
+--- a/drivers/target/target_core_transport.c
++++ b/drivers/target/target_core_transport.c
+@@ -220,12 +220,52 @@ void transport_subsystem_check_init(void)
+ 	sub_api_initialized = 1;
+ }
+ 
+-static void target_release_sess_cmd_refcnt(struct percpu_ref *ref)
++static void target_release_cmd_refcnt(struct percpu_ref *ref)
+ {
+-	struct se_session *sess = container_of(ref, typeof(*sess), cmd_count);
++	struct target_cmd_counter *cmd_cnt  = container_of(ref,
++							   typeof(*cmd_cnt),
++							   refcnt);
++	wake_up(&cmd_cnt->refcnt_wq);
++}
++
++struct target_cmd_counter *target_alloc_cmd_counter(void)
++{
++	struct target_cmd_counter *cmd_cnt;
++	int rc;
++
++	cmd_cnt = kzalloc(sizeof(*cmd_cnt), GFP_KERNEL);
++	if (!cmd_cnt)
++		return NULL;
++
++	init_completion(&cmd_cnt->stop_done);
++	init_waitqueue_head(&cmd_cnt->refcnt_wq);
++	atomic_set(&cmd_cnt->stopped, 0);
+ 
+-	wake_up(&sess->cmd_count_wq);
++	rc = percpu_ref_init(&cmd_cnt->refcnt, target_release_cmd_refcnt, 0,
++			     GFP_KERNEL);
++	if (rc)
++		goto free_cmd_cnt;
++
++	return cmd_cnt;
++
++free_cmd_cnt:
++	kfree(cmd_cnt);
++	return NULL;
+ }
++EXPORT_SYMBOL_GPL(target_alloc_cmd_counter);
++
++void target_free_cmd_counter(struct target_cmd_counter *cmd_cnt)
++{
++	/*
++	 * Drivers like loop do not call target_stop_session during session
++	 * shutdown so we have to drop the ref taken at init time here.
++	 */
++	if (!atomic_read(&cmd_cnt->stopped))
++		percpu_ref_put(&cmd_cnt->refcnt);
++
++	percpu_ref_exit(&cmd_cnt->refcnt);
++}
++EXPORT_SYMBOL_GPL(target_free_cmd_counter);
+ 
+ /**
+  * transport_init_session - initialize a session object
+@@ -233,32 +273,14 @@ static void target_release_sess_cmd_refcnt(struct percpu_ref *ref)
+  *
+  * The caller must have zero-initialized @se_sess before calling this function.
+  */
+-int transport_init_session(struct se_session *se_sess)
++void transport_init_session(struct se_session *se_sess)
+ {
+ 	INIT_LIST_HEAD(&se_sess->sess_list);
+ 	INIT_LIST_HEAD(&se_sess->sess_acl_list);
+ 	spin_lock_init(&se_sess->sess_cmd_lock);
+-	init_waitqueue_head(&se_sess->cmd_count_wq);
+-	init_completion(&se_sess->stop_done);
+-	atomic_set(&se_sess->stopped, 0);
+-	return percpu_ref_init(&se_sess->cmd_count,
+-			       target_release_sess_cmd_refcnt, 0, GFP_KERNEL);
+ }
+ EXPORT_SYMBOL(transport_init_session);
+ 
+-void transport_uninit_session(struct se_session *se_sess)
+-{
+-	/*
+-	 * Drivers like iscsi and loop do not call target_stop_session
+-	 * during session shutdown so we have to drop the ref taken at init
+-	 * time here.
+-	 */
+-	if (!atomic_read(&se_sess->stopped))
+-		percpu_ref_put(&se_sess->cmd_count);
+-
+-	percpu_ref_exit(&se_sess->cmd_count);
+-}
+-
+ /**
+  * transport_alloc_session - allocate a session object and initialize it
+  * @sup_prot_ops: bitmask that defines which T10-PI modes are supported.
+@@ -266,7 +288,6 @@ void transport_uninit_session(struct se_session *se_sess)
+ struct se_session *transport_alloc_session(enum target_prot_op sup_prot_ops)
+ {
+ 	struct se_session *se_sess;
+-	int ret;
+ 
+ 	se_sess = kmem_cache_zalloc(se_sess_cache, GFP_KERNEL);
+ 	if (!se_sess) {
+@@ -274,11 +295,7 @@ struct se_session *transport_alloc_session(enum target_prot_op sup_prot_ops)
+ 				" se_sess_cache\n");
+ 		return ERR_PTR(-ENOMEM);
+ 	}
+-	ret = transport_init_session(se_sess);
+-	if (ret < 0) {
+-		kmem_cache_free(se_sess_cache, se_sess);
+-		return ERR_PTR(ret);
+-	}
++	transport_init_session(se_sess);
+ 	se_sess->sup_prot_ops = sup_prot_ops;
+ 
+ 	return se_sess;
+@@ -444,8 +461,13 @@ target_setup_session(struct se_portal_group *tpg,
+ 		     int (*callback)(struct se_portal_group *,
+ 				     struct se_session *, void *))
+ {
++	struct target_cmd_counter *cmd_cnt;
+ 	struct se_session *sess;
++	int rc;
+ 
++	cmd_cnt = target_alloc_cmd_counter();
++	if (!cmd_cnt)
++		return ERR_PTR(-ENOMEM);
+ 	/*
+ 	 * If the fabric driver is using percpu-ida based pre allocation
+ 	 * of I/O descriptor tags, go ahead and perform that setup now..
+@@ -455,29 +477,36 @@ target_setup_session(struct se_portal_group *tpg,
+ 	else
+ 		sess = transport_alloc_session(prot_op);
+ 
+-	if (IS_ERR(sess))
+-		return sess;
++	if (IS_ERR(sess)) {
++		rc = PTR_ERR(sess);
++		goto free_cnt;
++	}
++	sess->cmd_cnt = cmd_cnt;
+ 
+ 	sess->se_node_acl = core_tpg_check_initiator_node_acl(tpg,
+ 					(unsigned char *)initiatorname);
+ 	if (!sess->se_node_acl) {
+-		transport_free_session(sess);
+-		return ERR_PTR(-EACCES);
++		rc = -EACCES;
++		goto free_sess;
+ 	}
+ 	/*
+ 	 * Go ahead and perform any remaining fabric setup that is
+ 	 * required before transport_register_session().
+ 	 */
+ 	if (callback != NULL) {
+-		int rc = callback(tpg, sess, private);
+-		if (rc) {
+-			transport_free_session(sess);
+-			return ERR_PTR(rc);
+-		}
++		rc = callback(tpg, sess, private);
++		if (rc)
++			goto free_sess;
+ 	}
+ 
+ 	transport_register_session(tpg, sess->se_node_acl, sess, private);
+ 	return sess;
++
++free_sess:
++	transport_free_session(sess);
++free_cnt:
++	target_free_cmd_counter(cmd_cnt);
++	return ERR_PTR(rc);
+ }
+ EXPORT_SYMBOL(target_setup_session);
+ 
+@@ -602,7 +631,8 @@ void transport_free_session(struct se_session *se_sess)
+ 		sbitmap_queue_free(&se_sess->sess_tag_pool);
+ 		kvfree(se_sess->sess_cmd_map);
+ 	}
+-	transport_uninit_session(se_sess);
++	if (se_sess->cmd_cnt)
++		target_free_cmd_counter(se_sess->cmd_cnt);
+ 	kmem_cache_free(se_sess_cache, se_sess);
+ }
+ EXPORT_SYMBOL(transport_free_session);
+@@ -1412,14 +1442,12 @@ target_cmd_size_check(struct se_cmd *cmd, unsigned int size)
+  *
+  * Preserves the value of @cmd->tag.
+  */
+-void __target_init_cmd(
+-	struct se_cmd *cmd,
+-	const struct target_core_fabric_ops *tfo,
+-	struct se_session *se_sess,
+-	u32 data_length,
+-	int data_direction,
+-	int task_attr,
+-	unsigned char *sense_buffer, u64 unpacked_lun)
++void __target_init_cmd(struct se_cmd *cmd,
++		       const struct target_core_fabric_ops *tfo,
++		       struct se_session *se_sess, u32 data_length,
++		       int data_direction, int task_attr,
++		       unsigned char *sense_buffer, u64 unpacked_lun,
++		       struct target_cmd_counter *cmd_cnt)
+ {
+ 	INIT_LIST_HEAD(&cmd->se_delayed_node);
+ 	INIT_LIST_HEAD(&cmd->se_qf_node);
+@@ -1439,6 +1467,7 @@ void __target_init_cmd(
+ 	cmd->sam_task_attr = task_attr;
+ 	cmd->sense_buffer = sense_buffer;
+ 	cmd->orig_fe_lun = unpacked_lun;
++	cmd->cmd_cnt = cmd_cnt;
+ 
+ 	if (!(cmd->se_cmd_flags & SCF_USE_CPUID))
+ 		cmd->cpuid = raw_smp_processor_id();
+@@ -1658,7 +1687,8 @@ int target_init_cmd(struct se_cmd *se_cmd, struct se_session *se_sess,
+ 	 * target_core_fabric_ops->queue_status() callback
+ 	 */
+ 	__target_init_cmd(se_cmd, se_tpg->se_tpg_tfo, se_sess, data_length,
+-			  data_dir, task_attr, sense, unpacked_lun);
++			  data_dir, task_attr, sense, unpacked_lun,
++			  se_sess->cmd_cnt);
+ 
+ 	/*
+ 	 * Obtain struct se_cmd->cmd_kref reference. A second kref_get here is
+@@ -1953,7 +1983,8 @@ int target_submit_tmr(struct se_cmd *se_cmd, struct se_session *se_sess,
+ 	BUG_ON(!se_tpg);
+ 
+ 	__target_init_cmd(se_cmd, se_tpg->se_tpg_tfo, se_sess,
+-			  0, DMA_NONE, TCM_SIMPLE_TAG, sense, unpacked_lun);
++			  0, DMA_NONE, TCM_SIMPLE_TAG, sense, unpacked_lun,
++			  se_sess->cmd_cnt);
+ 	/*
+ 	 * FIXME: Currently expect caller to handle se_cmd->se_tmr_req
+ 	 * allocation failure.
+@@ -2957,7 +2988,6 @@ EXPORT_SYMBOL(transport_generic_free_cmd);
+  */
+ int target_get_sess_cmd(struct se_cmd *se_cmd, bool ack_kref)
+ {
+-	struct se_session *se_sess = se_cmd->se_sess;
+ 	int ret = 0;
+ 
+ 	/*
+@@ -2970,9 +3000,14 @@ int target_get_sess_cmd(struct se_cmd *se_cmd, bool ack_kref)
+ 		se_cmd->se_cmd_flags |= SCF_ACK_KREF;
+ 	}
+ 
+-	if (!percpu_ref_tryget_live(&se_sess->cmd_count))
+-		ret = -ESHUTDOWN;
+-
++	/*
++	 * Users like xcopy do not use counters since they never do a stop
++	 * and wait.
++	 */
++	if (se_cmd->cmd_cnt) {
++		if (!percpu_ref_tryget_live(&se_cmd->cmd_cnt->refcnt))
++			ret = -ESHUTDOWN;
++	}
+ 	if (ret && ack_kref)
+ 		target_put_sess_cmd(se_cmd);
+ 
+@@ -2993,7 +3028,7 @@ static void target_free_cmd_mem(struct se_cmd *cmd)
+ static void target_release_cmd_kref(struct kref *kref)
+ {
+ 	struct se_cmd *se_cmd = container_of(kref, struct se_cmd, cmd_kref);
+-	struct se_session *se_sess = se_cmd->se_sess;
++	struct target_cmd_counter *cmd_cnt = se_cmd->cmd_cnt;
+ 	struct completion *free_compl = se_cmd->free_compl;
+ 	struct completion *abrt_compl = se_cmd->abrt_compl;
+ 
+@@ -3004,7 +3039,8 @@ static void target_release_cmd_kref(struct kref *kref)
+ 	if (abrt_compl)
+ 		complete(abrt_compl);
+ 
+-	percpu_ref_put(&se_sess->cmd_count);
++	if (cmd_cnt)
++		percpu_ref_put(&cmd_cnt->refcnt);
+ }
+ 
+ /**
+@@ -3123,46 +3159,67 @@ void target_show_cmd(const char *pfx, struct se_cmd *cmd)
+ }
+ EXPORT_SYMBOL(target_show_cmd);
+ 
+-static void target_stop_session_confirm(struct percpu_ref *ref)
++static void target_stop_cmd_counter_confirm(struct percpu_ref *ref)
++{
++	struct target_cmd_counter *cmd_cnt = container_of(ref,
++						struct target_cmd_counter,
++						refcnt);
++	complete_all(&cmd_cnt->stop_done);
++}
++
++/**
++ * target_stop_cmd_counter - Stop new IO from being added to the counter.
++ * @cmd_cnt: counter to stop
++ */
++void target_stop_cmd_counter(struct target_cmd_counter *cmd_cnt)
+ {
+-	struct se_session *se_sess = container_of(ref, struct se_session,
+-						  cmd_count);
+-	complete_all(&se_sess->stop_done);
++	pr_debug("Stopping command counter.\n");
++	if (!atomic_cmpxchg(&cmd_cnt->stopped, 0, 1))
++		percpu_ref_kill_and_confirm(&cmd_cnt->refcnt,
++					    target_stop_cmd_counter_confirm);
+ }
++EXPORT_SYMBOL_GPL(target_stop_cmd_counter);
+ 
+ /**
+  * target_stop_session - Stop new IO from being queued on the session.
+- * @se_sess:    session to stop
++ * @se_sess: session to stop
+  */
+ void target_stop_session(struct se_session *se_sess)
+ {
+-	pr_debug("Stopping session queue.\n");
+-	if (atomic_cmpxchg(&se_sess->stopped, 0, 1) == 0)
+-		percpu_ref_kill_and_confirm(&se_sess->cmd_count,
+-					    target_stop_session_confirm);
++	target_stop_cmd_counter(se_sess->cmd_cnt);
+ }
+ EXPORT_SYMBOL(target_stop_session);
+ 
+ /**
+- * target_wait_for_sess_cmds - Wait for outstanding commands
+- * @se_sess:    session to wait for active I/O
++ * target_wait_for_cmds - Wait for outstanding cmds.
++ * @cmd_cnt: counter to wait for active I/O for.
+  */
+-void target_wait_for_sess_cmds(struct se_session *se_sess)
++void target_wait_for_cmds(struct target_cmd_counter *cmd_cnt)
+ {
+ 	int ret;
+ 
+-	WARN_ON_ONCE(!atomic_read(&se_sess->stopped));
++	WARN_ON_ONCE(!atomic_read(&cmd_cnt->stopped));
+ 
+ 	do {
+ 		pr_debug("Waiting for running cmds to complete.\n");
+-		ret = wait_event_timeout(se_sess->cmd_count_wq,
+-				percpu_ref_is_zero(&se_sess->cmd_count),
+-				180 * HZ);
++		ret = wait_event_timeout(cmd_cnt->refcnt_wq,
++					 percpu_ref_is_zero(&cmd_cnt->refcnt),
++					 180 * HZ);
+ 	} while (ret <= 0);
+ 
+-	wait_for_completion(&se_sess->stop_done);
++	wait_for_completion(&cmd_cnt->stop_done);
+ 	pr_debug("Waiting for cmds done.\n");
+ }
++EXPORT_SYMBOL_GPL(target_wait_for_cmds);
++
++/**
++ * target_wait_for_sess_cmds - Wait for outstanding commands
++ * @se_sess: session to wait for active I/O
++ */
++void target_wait_for_sess_cmds(struct se_session *se_sess)
++{
++	target_wait_for_cmds(se_sess->cmd_cnt);
++}
+ EXPORT_SYMBOL(target_wait_for_sess_cmds);
+ 
+ /*
+diff --git a/drivers/target/target_core_xcopy.c b/drivers/target/target_core_xcopy.c
+index 49eaee022ef1d..91ed015b588c6 100644
+--- a/drivers/target/target_core_xcopy.c
++++ b/drivers/target/target_core_xcopy.c
+@@ -461,8 +461,6 @@ static const struct target_core_fabric_ops xcopy_pt_tfo = {
+ 
+ int target_xcopy_setup_pt(void)
+ {
+-	int ret;
+-
+ 	xcopy_wq = alloc_workqueue("xcopy_wq", WQ_MEM_RECLAIM, 0);
+ 	if (!xcopy_wq) {
+ 		pr_err("Unable to allocate xcopy_wq\n");
+@@ -479,9 +477,7 @@ int target_xcopy_setup_pt(void)
+ 	INIT_LIST_HEAD(&xcopy_pt_nacl.acl_list);
+ 	INIT_LIST_HEAD(&xcopy_pt_nacl.acl_sess_list);
+ 	memset(&xcopy_pt_sess, 0, sizeof(struct se_session));
+-	ret = transport_init_session(&xcopy_pt_sess);
+-	if (ret < 0)
+-		goto destroy_wq;
++	transport_init_session(&xcopy_pt_sess);
+ 
+ 	xcopy_pt_nacl.se_tpg = &xcopy_pt_tpg;
+ 	xcopy_pt_nacl.nacl_sess = &xcopy_pt_sess;
+@@ -490,19 +486,12 @@ int target_xcopy_setup_pt(void)
+ 	xcopy_pt_sess.se_node_acl = &xcopy_pt_nacl;
+ 
+ 	return 0;
+-
+-destroy_wq:
+-	destroy_workqueue(xcopy_wq);
+-	xcopy_wq = NULL;
+-	return ret;
+ }
+ 
+ void target_xcopy_release_pt(void)
+ {
+-	if (xcopy_wq) {
++	if (xcopy_wq)
+ 		destroy_workqueue(xcopy_wq);
+-		transport_uninit_session(&xcopy_pt_sess);
+-	}
+ }
+ 
+ /*
+@@ -602,8 +591,8 @@ static int target_xcopy_read_source(
+ 		(unsigned long long)src_lba, transfer_length_block, src_bytes);
+ 
+ 	__target_init_cmd(se_cmd, &xcopy_pt_tfo, &xcopy_pt_sess, src_bytes,
+-			  DMA_FROM_DEVICE, 0, &xpt_cmd.sense_buffer[0], 0);
+-
++			  DMA_FROM_DEVICE, 0, &xpt_cmd.sense_buffer[0], 0,
++			  NULL);
+ 	rc = target_xcopy_setup_pt_cmd(&xpt_cmd, xop, src_dev, &cdb[0],
+ 				remote_port);
+ 	if (rc < 0) {
+@@ -647,8 +636,8 @@ static int target_xcopy_write_destination(
+ 		(unsigned long long)dst_lba, transfer_length_block, dst_bytes);
+ 
+ 	__target_init_cmd(se_cmd, &xcopy_pt_tfo, &xcopy_pt_sess, dst_bytes,
+-			  DMA_TO_DEVICE, 0, &xpt_cmd.sense_buffer[0], 0);
+-
++			  DMA_TO_DEVICE, 0, &xpt_cmd.sense_buffer[0], 0,
++			  NULL);
+ 	rc = target_xcopy_setup_pt_cmd(&xpt_cmd, xop, dst_dev, &cdb[0],
+ 				remote_port);
+ 	if (rc < 0) {
+diff --git a/drivers/thermal/intel/intel_powerclamp.c b/drivers/thermal/intel/intel_powerclamp.c
+index 91fc7e2394971..36243a3972fd7 100644
+--- a/drivers/thermal/intel/intel_powerclamp.c
++++ b/drivers/thermal/intel/intel_powerclamp.c
+@@ -703,6 +703,10 @@ static int powerclamp_set_cur_state(struct thermal_cooling_device *cdev,
+ 
+ 	new_target_ratio = clamp(new_target_ratio, 0UL,
+ 				(unsigned long) (max_idle - 1));
++
++	if (powerclamp_data.target_ratio == new_target_ratio)
++		goto exit_set;
++
+ 	if (!powerclamp_data.target_ratio && new_target_ratio > 0) {
+ 		pr_info("Start idle injection to reduce power\n");
+ 		powerclamp_data.target_ratio = new_target_ratio;
+diff --git a/drivers/thermal/mediatek/auxadc_thermal.c b/drivers/thermal/mediatek/auxadc_thermal.c
+index ab730f9552d0e..3372f7c29626c 100644
+--- a/drivers/thermal/mediatek/auxadc_thermal.c
++++ b/drivers/thermal/mediatek/auxadc_thermal.c
+@@ -1142,7 +1142,12 @@ static int mtk_thermal_probe(struct platform_device *pdev)
+ 		return -ENODEV;
+ 	}
+ 
+-	auxadc_base = of_iomap(auxadc, 0);
++	auxadc_base = devm_of_iomap(&pdev->dev, auxadc, 0, NULL);
++	if (IS_ERR(auxadc_base)) {
++		of_node_put(auxadc);
++		return PTR_ERR(auxadc_base);
++	}
++
+ 	auxadc_phys_base = of_get_phys_base(auxadc);
+ 
+ 	of_node_put(auxadc);
+@@ -1158,7 +1163,12 @@ static int mtk_thermal_probe(struct platform_device *pdev)
+ 		return -ENODEV;
+ 	}
+ 
+-	apmixed_base = of_iomap(apmixedsys, 0);
++	apmixed_base = devm_of_iomap(&pdev->dev, apmixedsys, 0, NULL);
++	if (IS_ERR(apmixed_base)) {
++		of_node_put(apmixedsys);
++		return PTR_ERR(apmixed_base);
++	}
++
+ 	apmixed_phys_base = of_get_phys_base(apmixedsys);
+ 
+ 	of_node_put(apmixedsys);
+diff --git a/drivers/thermal/mediatek/lvts_thermal.c b/drivers/thermal/mediatek/lvts_thermal.c
+index 84ba65a27acf7..acce1321a1a23 100644
+--- a/drivers/thermal/mediatek/lvts_thermal.c
++++ b/drivers/thermal/mediatek/lvts_thermal.c
+@@ -66,7 +66,7 @@
+ #define LVTS_MONINT_CONF			0x9FBF7BDE
+ 
+ #define LVTS_INT_SENSOR0			0x0009001F
+-#define LVTS_INT_SENSOR1			0X000881F0
++#define LVTS_INT_SENSOR1			0x001203E0
+ #define LVTS_INT_SENSOR2			0x00247C00
+ #define LVTS_INT_SENSOR3			0x1FC00000
+ 
+@@ -393,8 +393,8 @@ static irqreturn_t lvts_ctrl_irq_handler(struct lvts_ctrl *lvts_ctrl)
+ 	 *                  => 0x1FC00000
+ 	 * sensor 2 interrupt: 0000 0000 0010 0100 0111 1100 0000 0000
+ 	 *                  => 0x00247C00
+-	 * sensor 1 interrupt: 0000 0000 0001 0001 0000 0011 1110 0000
+-	 *                  => 0X000881F0
++	 * sensor 1 interrupt: 0000 0000 0001 0010 0000 0011 1110 0000
++	 *                  => 0X001203E0
+ 	 * sensor 0 interrupt: 0000 0000 0000 1001 0000 0000 0001 1111
+ 	 *                  => 0x0009001F
+ 	 */
+diff --git a/drivers/tty/serial/8250/8250.h b/drivers/tty/serial/8250/8250.h
+index 287153d325365..1e8fe44a7099f 100644
+--- a/drivers/tty/serial/8250/8250.h
++++ b/drivers/tty/serial/8250/8250.h
+@@ -365,6 +365,13 @@ static inline void serial8250_do_prepare_rx_dma(struct uart_8250_port *p)
+ 	if (dma->prepare_rx_dma)
+ 		dma->prepare_rx_dma(p);
+ }
++
++static inline bool serial8250_tx_dma_running(struct uart_8250_port *p)
++{
++	struct uart_8250_dma *dma = p->dma;
++
++	return dma && dma->tx_running;
++}
+ #else
+ static inline int serial8250_tx_dma(struct uart_8250_port *p)
+ {
+@@ -380,6 +387,11 @@ static inline int serial8250_request_dma(struct uart_8250_port *p)
+ 	return -1;
+ }
+ static inline void serial8250_release_dma(struct uart_8250_port *p) { }
++
++static inline bool serial8250_tx_dma_running(struct uart_8250_port *p)
++{
++	return false;
++}
+ #endif
+ 
+ static inline int ns16550a_goto_highspeed(struct uart_8250_port *up)
+diff --git a/drivers/tty/serial/8250/8250_bcm7271.c b/drivers/tty/serial/8250/8250_bcm7271.c
+index ed5a947476920..f801b1f5b46c0 100644
+--- a/drivers/tty/serial/8250/8250_bcm7271.c
++++ b/drivers/tty/serial/8250/8250_bcm7271.c
+@@ -1014,14 +1014,16 @@ static int brcmuart_probe(struct platform_device *pdev)
+ 	/* See if a Baud clock has been specified */
+ 	baud_mux_clk = of_clk_get_by_name(np, "sw_baud");
+ 	if (IS_ERR(baud_mux_clk)) {
+-		if (PTR_ERR(baud_mux_clk) == -EPROBE_DEFER)
+-			return -EPROBE_DEFER;
++		if (PTR_ERR(baud_mux_clk) == -EPROBE_DEFER) {
++			ret = -EPROBE_DEFER;
++			goto release_dma;
++		}
+ 		dev_dbg(dev, "BAUD MUX clock not specified\n");
+ 	} else {
+ 		dev_dbg(dev, "BAUD MUX clock found\n");
+ 		ret = clk_prepare_enable(baud_mux_clk);
+ 		if (ret)
+-			return ret;
++			goto release_dma;
+ 		priv->baud_mux_clk = baud_mux_clk;
+ 		init_real_clk_rates(dev, priv);
+ 		clk_rate = priv->default_mux_rate;
+@@ -1029,7 +1031,8 @@ static int brcmuart_probe(struct platform_device *pdev)
+ 
+ 	if (clk_rate == 0) {
+ 		dev_err(dev, "clock-frequency or clk not defined\n");
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto release_dma;
+ 	}
+ 
+ 	dev_dbg(dev, "DMA is %senabled\n", priv->dma_enabled ? "" : "not ");
+@@ -1116,7 +1119,9 @@ err1:
+ 	serial8250_unregister_port(priv->line);
+ err:
+ 	brcmuart_free_bufs(dev, priv);
+-	brcmuart_arbitration(priv, 0);
++release_dma:
++	if (priv->dma_enabled)
++		brcmuart_arbitration(priv, 0);
+ 	return ret;
+ }
+ 
+@@ -1128,7 +1133,8 @@ static int brcmuart_remove(struct platform_device *pdev)
+ 	hrtimer_cancel(&priv->hrt);
+ 	serial8250_unregister_port(priv->line);
+ 	brcmuart_free_bufs(&pdev->dev, priv);
+-	brcmuart_arbitration(priv, 0);
++	if (priv->dma_enabled)
++		brcmuart_arbitration(priv, 0);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c
+index 3ba9c8b93ae6c..fe8d79c4ae95e 100644
+--- a/drivers/tty/serial/8250/8250_port.c
++++ b/drivers/tty/serial/8250/8250_port.c
+@@ -15,6 +15,7 @@
+ #include <linux/moduleparam.h>
+ #include <linux/ioport.h>
+ #include <linux/init.h>
++#include <linux/irq.h>
+ #include <linux/console.h>
+ #include <linux/gpio/consumer.h>
+ #include <linux/sysrq.h>
+@@ -1932,6 +1933,7 @@ static bool handle_rx_dma(struct uart_8250_port *up, unsigned int iir)
+ int serial8250_handle_irq(struct uart_port *port, unsigned int iir)
+ {
+ 	struct uart_8250_port *up = up_to_u8250p(port);
++	struct tty_port *tport = &port->state->port;
+ 	bool skip_rx = false;
+ 	unsigned long flags;
+ 	u16 status;
+@@ -1957,6 +1959,8 @@ int serial8250_handle_irq(struct uart_port *port, unsigned int iir)
+ 		skip_rx = true;
+ 
+ 	if (status & (UART_LSR_DR | UART_LSR_BI) && !skip_rx) {
++		if (irqd_is_wakeup_set(irq_get_irq_data(port->irq)))
++			pm_wakeup_event(tport->tty->dev, 0);
+ 		if (!up->dma || handle_rx_dma(up, iir))
+ 			status = serial8250_rx_chars(up, status);
+ 	}
+@@ -2016,18 +2020,19 @@ static int serial8250_tx_threshold_handle_irq(struct uart_port *port)
+ static unsigned int serial8250_tx_empty(struct uart_port *port)
+ {
+ 	struct uart_8250_port *up = up_to_u8250p(port);
++	unsigned int result = 0;
+ 	unsigned long flags;
+-	u16 lsr;
+ 
+ 	serial8250_rpm_get(up);
+ 
+ 	spin_lock_irqsave(&port->lock, flags);
+-	lsr = serial_lsr_in(up);
++	if (!serial8250_tx_dma_running(up) && uart_lsr_tx_empty(serial_lsr_in(up)))
++		result = TIOCSER_TEMT;
+ 	spin_unlock_irqrestore(&port->lock, flags);
+ 
+ 	serial8250_rpm_put(up);
+ 
+-	return uart_lsr_tx_empty(lsr) ? TIOCSER_TEMT : 0;
++	return result;
+ }
+ 
+ unsigned int serial8250_do_get_mctrl(struct uart_port *port)
+diff --git a/drivers/tty/serial/fsl_lpuart.c b/drivers/tty/serial/fsl_lpuart.c
+index 074bfed57fc9e..aef9be3e73c15 100644
+--- a/drivers/tty/serial/fsl_lpuart.c
++++ b/drivers/tty/serial/fsl_lpuart.c
+@@ -1296,7 +1296,7 @@ static inline int lpuart_start_rx_dma(struct lpuart_port *sport)
+ 	 * 10ms at any baud rate.
+ 	 */
+ 	sport->rx_dma_rng_buf_len = (DMA_RX_TIMEOUT * baud /  bits / 1000) * 2;
+-	sport->rx_dma_rng_buf_len = (1 << (fls(sport->rx_dma_rng_buf_len) - 1));
++	sport->rx_dma_rng_buf_len = (1 << fls(sport->rx_dma_rng_buf_len));
+ 	if (sport->rx_dma_rng_buf_len < 16)
+ 		sport->rx_dma_rng_buf_len = 16;
+ 
+diff --git a/drivers/tty/serial/max310x.c b/drivers/tty/serial/max310x.c
+index e9cacfe7e0326..9fee722058f43 100644
+--- a/drivers/tty/serial/max310x.c
++++ b/drivers/tty/serial/max310x.c
+@@ -525,6 +525,11 @@ static bool max310x_reg_precious(struct device *dev, unsigned int reg)
+ 	return false;
+ }
+ 
++static bool max310x_reg_noinc(struct device *dev, unsigned int reg)
++{
++	return reg == MAX310X_RHR_REG;
++}
++
+ static int max310x_set_baud(struct uart_port *port, int baud)
+ {
+ 	unsigned int mode = 0, div = 0, frac = 0, c = 0, F = 0;
+@@ -651,14 +656,14 @@ static void max310x_batch_write(struct uart_port *port, u8 *txbuf, unsigned int
+ {
+ 	struct max310x_one *one = to_max310x_port(port);
+ 
+-	regmap_raw_write(one->regmap, MAX310X_THR_REG, txbuf, len);
++	regmap_noinc_write(one->regmap, MAX310X_THR_REG, txbuf, len);
+ }
+ 
+ static void max310x_batch_read(struct uart_port *port, u8 *rxbuf, unsigned int len)
+ {
+ 	struct max310x_one *one = to_max310x_port(port);
+ 
+-	regmap_raw_read(one->regmap, MAX310X_RHR_REG, rxbuf, len);
++	regmap_noinc_read(one->regmap, MAX310X_RHR_REG, rxbuf, len);
+ }
+ 
+ static void max310x_handle_rx(struct uart_port *port, unsigned int rxlen)
+@@ -1468,6 +1473,10 @@ static struct regmap_config regcfg = {
+ 	.writeable_reg = max310x_reg_writeable,
+ 	.volatile_reg = max310x_reg_volatile,
+ 	.precious_reg = max310x_reg_precious,
++	.writeable_noinc_reg = max310x_reg_noinc,
++	.readable_noinc_reg = max310x_reg_noinc,
++	.max_raw_read = MAX310X_FIFO_SIZE,
++	.max_raw_write = MAX310X_FIFO_SIZE,
+ };
+ 
+ #ifdef CONFIG_SPI_MASTER
+@@ -1553,6 +1562,10 @@ static struct regmap_config regcfg_i2c = {
+ 	.volatile_reg = max310x_reg_volatile,
+ 	.precious_reg = max310x_reg_precious,
+ 	.max_register = MAX310X_I2C_REVID_EXTREG,
++	.writeable_noinc_reg = max310x_reg_noinc,
++	.readable_noinc_reg = max310x_reg_noinc,
++	.max_raw_read = MAX310X_FIFO_SIZE,
++	.max_raw_write = MAX310X_FIFO_SIZE,
+ };
+ 
+ static const struct max310x_if_cfg max310x_i2c_if_cfg = {
+diff --git a/drivers/tty/serial/serial_core.c b/drivers/tty/serial/serial_core.c
+index 2bd32c8ece393..728cb72be0666 100644
+--- a/drivers/tty/serial/serial_core.c
++++ b/drivers/tty/serial/serial_core.c
+@@ -1552,7 +1552,7 @@ uart_ioctl(struct tty_struct *tty, unsigned int cmd, unsigned long arg)
+ 		goto out;
+ 
+ 	/* rs485_config requires more locking than others */
+-	if (cmd == TIOCGRS485)
++	if (cmd == TIOCSRS485)
+ 		down_write(&tty->termios_rwsem);
+ 
+ 	mutex_lock(&port->mutex);
+@@ -1595,7 +1595,7 @@ uart_ioctl(struct tty_struct *tty, unsigned int cmd, unsigned long arg)
+ 	}
+ out_up:
+ 	mutex_unlock(&port->mutex);
+-	if (cmd == TIOCGRS485)
++	if (cmd == TIOCSRS485)
+ 		up_write(&tty->termios_rwsem);
+ out:
+ 	return ret;
+diff --git a/drivers/tty/serial/stm32-usart.c b/drivers/tty/serial/stm32-usart.c
+index 767ff9fdb2e58..84700016932d6 100644
+--- a/drivers/tty/serial/stm32-usart.c
++++ b/drivers/tty/serial/stm32-usart.c
+@@ -693,8 +693,9 @@ static void stm32_usart_transmit_chars(struct uart_port *port)
+ 	int ret;
+ 
+ 	if (!stm32_port->hw_flow_control &&
+-	    port->rs485.flags & SER_RS485_ENABLED) {
+-		stm32_port->txdone = false;
++	    port->rs485.flags & SER_RS485_ENABLED &&
++	    (port->x_char ||
++	     !(uart_circ_empty(xmit) || uart_tx_stopped(port)))) {
+ 		stm32_usart_tc_interrupt_disable(port);
+ 		stm32_usart_rs485_rts_enable(port);
+ 	}
+diff --git a/drivers/tty/tty.h b/drivers/tty/tty.h
+index f45cd683c02ea..1e0d80e98d263 100644
+--- a/drivers/tty/tty.h
++++ b/drivers/tty/tty.h
+@@ -62,6 +62,8 @@ int __tty_check_change(struct tty_struct *tty, int sig);
+ int tty_check_change(struct tty_struct *tty);
+ void __stop_tty(struct tty_struct *tty);
+ void __start_tty(struct tty_struct *tty);
++void tty_write_unlock(struct tty_struct *tty);
++int tty_write_lock(struct tty_struct *tty, int ndelay);
+ void tty_vhangup_session(struct tty_struct *tty);
+ void tty_open_proc_set_tty(struct file *filp, struct tty_struct *tty);
+ int tty_signal_session_leader(struct tty_struct *tty, int exit_session);
+diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c
+index 36fb945fdad48..8e3de07f103da 100644
+--- a/drivers/tty/tty_io.c
++++ b/drivers/tty/tty_io.c
+@@ -933,13 +933,13 @@ static ssize_t tty_read(struct kiocb *iocb, struct iov_iter *to)
+ 	return i;
+ }
+ 
+-static void tty_write_unlock(struct tty_struct *tty)
++void tty_write_unlock(struct tty_struct *tty)
+ {
+ 	mutex_unlock(&tty->atomic_write_lock);
+ 	wake_up_interruptible_poll(&tty->write_wait, EPOLLOUT);
+ }
+ 
+-static int tty_write_lock(struct tty_struct *tty, int ndelay)
++int tty_write_lock(struct tty_struct *tty, int ndelay)
+ {
+ 	if (!mutex_trylock(&tty->atomic_write_lock)) {
+ 		if (ndelay)
+diff --git a/drivers/tty/tty_ioctl.c b/drivers/tty/tty_ioctl.c
+index 12983ce4e43ef..a13e3797c4772 100644
+--- a/drivers/tty/tty_ioctl.c
++++ b/drivers/tty/tty_ioctl.c
+@@ -500,21 +500,42 @@ static int set_termios(struct tty_struct *tty, void __user *arg, int opt)
+ 	tmp_termios.c_ispeed = tty_termios_input_baud_rate(&tmp_termios);
+ 	tmp_termios.c_ospeed = tty_termios_baud_rate(&tmp_termios);
+ 
+-	ld = tty_ldisc_ref(tty);
++	if (opt & (TERMIOS_FLUSH|TERMIOS_WAIT)) {
++retry_write_wait:
++		retval = wait_event_interruptible(tty->write_wait, !tty_chars_in_buffer(tty));
++		if (retval < 0)
++			return retval;
+ 
+-	if (ld != NULL) {
+-		if ((opt & TERMIOS_FLUSH) && ld->ops->flush_buffer)
+-			ld->ops->flush_buffer(tty);
+-		tty_ldisc_deref(ld);
+-	}
++		if (tty_write_lock(tty, 0) < 0)
++			goto retry_write_wait;
+ 
+-	if (opt & TERMIOS_WAIT) {
+-		tty_wait_until_sent(tty, 0);
+-		if (signal_pending(current))
+-			return -ERESTARTSYS;
+-	}
++		/* Racing writer? */
++		if (tty_chars_in_buffer(tty)) {
++			tty_write_unlock(tty);
++			goto retry_write_wait;
++		}
+ 
+-	tty_set_termios(tty, &tmp_termios);
++		ld = tty_ldisc_ref(tty);
++		if (ld != NULL) {
++			if ((opt & TERMIOS_FLUSH) && ld->ops->flush_buffer)
++				ld->ops->flush_buffer(tty);
++			tty_ldisc_deref(ld);
++		}
++
++		if ((opt & TERMIOS_WAIT) && tty->ops->wait_until_sent) {
++			tty->ops->wait_until_sent(tty, 0);
++			if (signal_pending(current)) {
++				tty_write_unlock(tty);
++				return -ERESTARTSYS;
++			}
++		}
++
++		tty_set_termios(tty, &tmp_termios);
++
++		tty_write_unlock(tty);
++	} else {
++		tty_set_termios(tty, &tmp_termios);
++	}
+ 
+ 	/* FIXME: Arguably if tmp_termios == tty->termios AND the
+ 	   actual requested termios was not tmp_termios then we may
+diff --git a/drivers/usb/chipidea/core.c b/drivers/usb/chipidea/core.c
+index 281fc51720cea..25084ce7c297b 100644
+--- a/drivers/usb/chipidea/core.c
++++ b/drivers/usb/chipidea/core.c
+@@ -1108,7 +1108,7 @@ static int ci_hdrc_probe(struct platform_device *pdev)
+ 	ret = ci_usb_phy_init(ci);
+ 	if (ret) {
+ 		dev_err(dev, "unable to init phy: %d\n", ret);
+-		return ret;
++		goto ulpi_exit;
+ 	}
+ 
+ 	ci->hw_bank.phys = res->start;
+diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
+index 476b636185116..9f8c988c25cb1 100644
+--- a/drivers/usb/dwc3/core.c
++++ b/drivers/usb/dwc3/core.c
+@@ -1883,13 +1883,11 @@ static int dwc3_probe(struct platform_device *pdev)
+ 	spin_lock_init(&dwc->lock);
+ 	mutex_init(&dwc->mutex);
+ 
++	pm_runtime_get_noresume(dev);
+ 	pm_runtime_set_active(dev);
+ 	pm_runtime_use_autosuspend(dev);
+ 	pm_runtime_set_autosuspend_delay(dev, DWC3_DEFAULT_AUTOSUSPEND_DELAY);
+ 	pm_runtime_enable(dev);
+-	ret = pm_runtime_get_sync(dev);
+-	if (ret < 0)
+-		goto err1;
+ 
+ 	pm_runtime_forbid(dev);
+ 
+@@ -1954,12 +1952,10 @@ err3:
+ 	dwc3_free_event_buffers(dwc);
+ 
+ err2:
+-	pm_runtime_allow(&pdev->dev);
+-
+-err1:
+-	pm_runtime_put_sync(&pdev->dev);
+-	pm_runtime_disable(&pdev->dev);
+-
++	pm_runtime_allow(dev);
++	pm_runtime_disable(dev);
++	pm_runtime_set_suspended(dev);
++	pm_runtime_put_noidle(dev);
+ disable_clks:
+ 	dwc3_clk_disable(dwc);
+ assert_reset:
+@@ -1983,6 +1979,7 @@ static int dwc3_remove(struct platform_device *pdev)
+ 	dwc3_core_exit(dwc);
+ 	dwc3_ulpi_exit(dwc);
+ 
++	pm_runtime_allow(&pdev->dev);
+ 	pm_runtime_disable(&pdev->dev);
+ 	pm_runtime_put_noidle(&pdev->dev);
+ 	pm_runtime_set_suspended(&pdev->dev);
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index cf5b4f49c3ed8..3faac3244c7db 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -2532,29 +2532,17 @@ static int __dwc3_gadget_start(struct dwc3 *dwc);
+ static int dwc3_gadget_soft_disconnect(struct dwc3 *dwc)
+ {
+ 	unsigned long flags;
++	int ret;
+ 
+ 	spin_lock_irqsave(&dwc->lock, flags);
+ 	dwc->connected = false;
+ 
+ 	/*
+-	 * Per databook, when we want to stop the gadget, if a control transfer
+-	 * is still in process, complete it and get the core into setup phase.
++	 * Attempt to end pending SETUP status phase, and not wait for the
++	 * function to do so.
+ 	 */
+-	if (dwc->ep0state != EP0_SETUP_PHASE) {
+-		int ret;
+-
+-		if (dwc->delayed_status)
+-			dwc3_ep0_send_delayed_status(dwc);
+-
+-		reinit_completion(&dwc->ep0_in_setup);
+-
+-		spin_unlock_irqrestore(&dwc->lock, flags);
+-		ret = wait_for_completion_timeout(&dwc->ep0_in_setup,
+-				msecs_to_jiffies(DWC3_PULL_UP_TIMEOUT));
+-		spin_lock_irqsave(&dwc->lock, flags);
+-		if (ret == 0)
+-			dev_warn(dwc->dev, "timed out waiting for SETUP phase\n");
+-	}
++	if (dwc->delayed_status)
++		dwc3_ep0_send_delayed_status(dwc);
+ 
+ 	/*
+ 	 * In the Synopsys DesignWare Cores USB3 Databook Rev. 3.30a
+@@ -2567,6 +2555,33 @@ static int dwc3_gadget_soft_disconnect(struct dwc3 *dwc)
+ 	__dwc3_gadget_stop(dwc);
+ 	spin_unlock_irqrestore(&dwc->lock, flags);
+ 
++	/*
++	 * Per databook, when we want to stop the gadget, if a control transfer
++	 * is still in process, complete it and get the core into setup phase.
++	 * In case the host is unresponsive to a SETUP transaction, forcefully
++	 * stall the transfer, and move back to the SETUP phase, so that any
++	 * pending endxfers can be executed.
++	 */
++	if (dwc->ep0state != EP0_SETUP_PHASE) {
++		reinit_completion(&dwc->ep0_in_setup);
++
++		ret = wait_for_completion_timeout(&dwc->ep0_in_setup,
++				msecs_to_jiffies(DWC3_PULL_UP_TIMEOUT));
++		if (ret == 0) {
++			unsigned int    dir;
++
++			dev_warn(dwc->dev, "wait for SETUP phase timed out\n");
++			spin_lock_irqsave(&dwc->lock, flags);
++			dir = !!dwc->ep0_expect_in;
++			if (dwc->ep0state == EP0_DATA_PHASE)
++				dwc3_ep0_end_control_data(dwc, dwc->eps[dir]);
++			else
++				dwc3_ep0_end_control_data(dwc, dwc->eps[!dir]);
++			dwc3_ep0_stall_and_restart(dwc);
++			spin_unlock_irqrestore(&dwc->lock, flags);
++		}
++	}
++
+ 	/*
+ 	 * Note: if the GEVNTCOUNT indicates events in the event buffer, the
+ 	 * driver needs to acknowledge them before the controller can halt.
+@@ -4247,15 +4262,8 @@ static void dwc3_gadget_interrupt(struct dwc3 *dwc,
+ 		break;
+ 	case DWC3_DEVICE_EVENT_SUSPEND:
+ 		/* It changed to be suspend event for version 2.30a and above */
+-		if (!DWC3_VER_IS_PRIOR(DWC3, 230A)) {
+-			/*
+-			 * Ignore suspend event until the gadget enters into
+-			 * USB_STATE_CONFIGURED state.
+-			 */
+-			if (dwc->gadget->state >= USB_STATE_CONFIGURED)
+-				dwc3_gadget_suspend_interrupt(dwc,
+-						event->event_info);
+-		}
++		if (!DWC3_VER_IS_PRIOR(DWC3, 230A))
++			dwc3_gadget_suspend_interrupt(dwc, event->event_info);
+ 		break;
+ 	case DWC3_DEVICE_EVENT_SOF:
+ 	case DWC3_DEVICE_EVENT_ERRATIC_ERROR:
+diff --git a/drivers/usb/gadget/function/f_tcm.c b/drivers/usb/gadget/function/f_tcm.c
+index 658e2e21fdd0d..c21acebe8aae5 100644
+--- a/drivers/usb/gadget/function/f_tcm.c
++++ b/drivers/usb/gadget/function/f_tcm.c
+@@ -1054,7 +1054,7 @@ static void usbg_cmd_work(struct work_struct *work)
+ 				  tv_nexus->tvn_se_sess->se_tpg->se_tpg_tfo,
+ 				  tv_nexus->tvn_se_sess, cmd->data_len, DMA_NONE,
+ 				  cmd->prio_attr, cmd->sense_iu.sense,
+-				  cmd->unpacked_lun);
++				  cmd->unpacked_lun, NULL);
+ 		goto out;
+ 	}
+ 
+@@ -1183,7 +1183,7 @@ static void bot_cmd_work(struct work_struct *work)
+ 				  tv_nexus->tvn_se_sess->se_tpg->se_tpg_tfo,
+ 				  tv_nexus->tvn_se_sess, cmd->data_len, DMA_NONE,
+ 				  cmd->prio_attr, cmd->sense_iu.sense,
+-				  cmd->unpacked_lun);
++				  cmd->unpacked_lun, NULL);
+ 		goto out;
+ 	}
+ 
+diff --git a/drivers/usb/gadget/udc/core.c b/drivers/usb/gadget/udc/core.c
+index 23b0629a87743..b14cbd0a6d013 100644
+--- a/drivers/usb/gadget/udc/core.c
++++ b/drivers/usb/gadget/udc/core.c
+@@ -37,6 +37,10 @@ static struct bus_type gadget_bus_type;
+  * @vbus: for udcs who care about vbus status, this value is real vbus status;
+  * for udcs who do not care about vbus status, this value is always true
+  * @started: the UDC's started state. True if the UDC had started.
++ * @connect_lock: protects udc->vbus, udc->started, gadget->connect, gadget->deactivate related
++ * functions. usb_gadget_connect_locked, usb_gadget_disconnect_locked,
++ * usb_udc_connect_control_locked, usb_gadget_udc_start_locked, usb_gadget_udc_stop_locked are
++ * called with this lock held.
+  *
+  * This represents the internal data structure which is used by the UDC-class
+  * to hold information about udc driver and gadget together.
+@@ -48,6 +52,7 @@ struct usb_udc {
+ 	struct list_head		list;
+ 	bool				vbus;
+ 	bool				started;
++	struct mutex			connect_lock;
+ };
+ 
+ static struct class *udc_class;
+@@ -660,17 +665,9 @@ out:
+ }
+ EXPORT_SYMBOL_GPL(usb_gadget_vbus_disconnect);
+ 
+-/**
+- * usb_gadget_connect - software-controlled connect to USB host
+- * @gadget:the peripheral being connected
+- *
+- * Enables the D+ (or potentially D-) pullup.  The host will start
+- * enumerating this gadget when the pullup is active and a VBUS session
+- * is active (the link is powered).
+- *
+- * Returns zero on success, else negative errno.
+- */
+-int usb_gadget_connect(struct usb_gadget *gadget)
++/* Internal version of usb_gadget_connect needs to be called with connect_lock held. */
++static int usb_gadget_connect_locked(struct usb_gadget *gadget)
++	__must_hold(&gadget->udc->connect_lock)
+ {
+ 	int ret = 0;
+ 
+@@ -679,10 +676,15 @@ int usb_gadget_connect(struct usb_gadget *gadget)
+ 		goto out;
+ 	}
+ 
+-	if (gadget->deactivated) {
++	if (gadget->connected)
++		goto out;
++
++	if (gadget->deactivated || !gadget->udc->started) {
+ 		/*
+ 		 * If gadget is deactivated we only save new state.
+ 		 * Gadget will be connected automatically after activation.
++		 *
++		 * udc first needs to be started before gadget can be pulled up.
+ 		 */
+ 		gadget->connected = true;
+ 		goto out;
+@@ -697,22 +699,32 @@ out:
+ 
+ 	return ret;
+ }
+-EXPORT_SYMBOL_GPL(usb_gadget_connect);
+ 
+ /**
+- * usb_gadget_disconnect - software-controlled disconnect from USB host
+- * @gadget:the peripheral being disconnected
+- *
+- * Disables the D+ (or potentially D-) pullup, which the host may see
+- * as a disconnect (when a VBUS session is active).  Not all systems
+- * support software pullup controls.
++ * usb_gadget_connect - software-controlled connect to USB host
++ * @gadget:the peripheral being connected
+  *
+- * Following a successful disconnect, invoke the ->disconnect() callback
+- * for the current gadget driver so that UDC drivers don't need to.
++ * Enables the D+ (or potentially D-) pullup.  The host will start
++ * enumerating this gadget when the pullup is active and a VBUS session
++ * is active (the link is powered).
+  *
+  * Returns zero on success, else negative errno.
+  */
+-int usb_gadget_disconnect(struct usb_gadget *gadget)
++int usb_gadget_connect(struct usb_gadget *gadget)
++{
++	int ret;
++
++	mutex_lock(&gadget->udc->connect_lock);
++	ret = usb_gadget_connect_locked(gadget);
++	mutex_unlock(&gadget->udc->connect_lock);
++
++	return ret;
++}
++EXPORT_SYMBOL_GPL(usb_gadget_connect);
++
++/* Internal version of usb_gadget_disconnect needs to be called with connect_lock held. */
++static int usb_gadget_disconnect_locked(struct usb_gadget *gadget)
++	__must_hold(&gadget->udc->connect_lock)
+ {
+ 	int ret = 0;
+ 
+@@ -724,10 +736,12 @@ int usb_gadget_disconnect(struct usb_gadget *gadget)
+ 	if (!gadget->connected)
+ 		goto out;
+ 
+-	if (gadget->deactivated) {
++	if (gadget->deactivated || !gadget->udc->started) {
+ 		/*
+ 		 * If gadget is deactivated we only save new state.
+ 		 * Gadget will stay disconnected after activation.
++		 *
++		 * udc should have been started before gadget being pulled down.
+ 		 */
+ 		gadget->connected = false;
+ 		goto out;
+@@ -747,6 +761,30 @@ out:
+ 
+ 	return ret;
+ }
++
++/**
++ * usb_gadget_disconnect - software-controlled disconnect from USB host
++ * @gadget:the peripheral being disconnected
++ *
++ * Disables the D+ (or potentially D-) pullup, which the host may see
++ * as a disconnect (when a VBUS session is active).  Not all systems
++ * support software pullup controls.
++ *
++ * Following a successful disconnect, invoke the ->disconnect() callback
++ * for the current gadget driver so that UDC drivers don't need to.
++ *
++ * Returns zero on success, else negative errno.
++ */
++int usb_gadget_disconnect(struct usb_gadget *gadget)
++{
++	int ret;
++
++	mutex_lock(&gadget->udc->connect_lock);
++	ret = usb_gadget_disconnect_locked(gadget);
++	mutex_unlock(&gadget->udc->connect_lock);
++
++	return ret;
++}
+ EXPORT_SYMBOL_GPL(usb_gadget_disconnect);
+ 
+ /**
+@@ -767,10 +805,11 @@ int usb_gadget_deactivate(struct usb_gadget *gadget)
+ 	if (gadget->deactivated)
+ 		goto out;
+ 
++	mutex_lock(&gadget->udc->connect_lock);
+ 	if (gadget->connected) {
+-		ret = usb_gadget_disconnect(gadget);
++		ret = usb_gadget_disconnect_locked(gadget);
+ 		if (ret)
+-			goto out;
++			goto unlock;
+ 
+ 		/*
+ 		 * If gadget was being connected before deactivation, we want
+@@ -780,6 +819,8 @@ int usb_gadget_deactivate(struct usb_gadget *gadget)
+ 	}
+ 	gadget->deactivated = true;
+ 
++unlock:
++	mutex_unlock(&gadget->udc->connect_lock);
+ out:
+ 	trace_usb_gadget_deactivate(gadget, ret);
+ 
+@@ -803,6 +844,7 @@ int usb_gadget_activate(struct usb_gadget *gadget)
+ 	if (!gadget->deactivated)
+ 		goto out;
+ 
++	mutex_lock(&gadget->udc->connect_lock);
+ 	gadget->deactivated = false;
+ 
+ 	/*
+@@ -810,7 +852,8 @@ int usb_gadget_activate(struct usb_gadget *gadget)
+ 	 * while it was being deactivated, we call usb_gadget_connect().
+ 	 */
+ 	if (gadget->connected)
+-		ret = usb_gadget_connect(gadget);
++		ret = usb_gadget_connect_locked(gadget);
++	mutex_unlock(&gadget->udc->connect_lock);
+ 
+ out:
+ 	trace_usb_gadget_activate(gadget, ret);
+@@ -1051,12 +1094,13 @@ EXPORT_SYMBOL_GPL(usb_gadget_set_state);
+ 
+ /* ------------------------------------------------------------------------- */
+ 
+-static void usb_udc_connect_control(struct usb_udc *udc)
++/* Acquire connect_lock before calling this function. */
++static void usb_udc_connect_control_locked(struct usb_udc *udc) __must_hold(&udc->connect_lock)
+ {
+-	if (udc->vbus)
+-		usb_gadget_connect(udc->gadget);
++	if (udc->vbus && udc->started)
++		usb_gadget_connect_locked(udc->gadget);
+ 	else
+-		usb_gadget_disconnect(udc->gadget);
++		usb_gadget_disconnect_locked(udc->gadget);
+ }
+ 
+ /**
+@@ -1072,10 +1116,12 @@ void usb_udc_vbus_handler(struct usb_gadget *gadget, bool status)
+ {
+ 	struct usb_udc *udc = gadget->udc;
+ 
++	mutex_lock(&udc->connect_lock);
+ 	if (udc) {
+ 		udc->vbus = status;
+-		usb_udc_connect_control(udc);
++		usb_udc_connect_control_locked(udc);
+ 	}
++	mutex_unlock(&udc->connect_lock);
+ }
+ EXPORT_SYMBOL_GPL(usb_udc_vbus_handler);
+ 
+@@ -1097,7 +1143,7 @@ void usb_gadget_udc_reset(struct usb_gadget *gadget,
+ EXPORT_SYMBOL_GPL(usb_gadget_udc_reset);
+ 
+ /**
+- * usb_gadget_udc_start - tells usb device controller to start up
++ * usb_gadget_udc_start_locked - tells usb device controller to start up
+  * @udc: The UDC to be started
+  *
+  * This call is issued by the UDC Class driver when it's about
+@@ -1108,8 +1154,11 @@ EXPORT_SYMBOL_GPL(usb_gadget_udc_reset);
+  * necessary to have it powered on.
+  *
+  * Returns zero on success, else negative errno.
++ *
++ * Caller should acquire connect_lock before invoking this function.
+  */
+-static inline int usb_gadget_udc_start(struct usb_udc *udc)
++static inline int usb_gadget_udc_start_locked(struct usb_udc *udc)
++	__must_hold(&udc->connect_lock)
+ {
+ 	int ret;
+ 
+@@ -1126,7 +1175,7 @@ static inline int usb_gadget_udc_start(struct usb_udc *udc)
+ }
+ 
+ /**
+- * usb_gadget_udc_stop - tells usb device controller we don't need it anymore
++ * usb_gadget_udc_stop_locked - tells usb device controller we don't need it anymore
+  * @udc: The UDC to be stopped
+  *
+  * This call is issued by the UDC Class driver after calling
+@@ -1135,8 +1184,11 @@ static inline int usb_gadget_udc_start(struct usb_udc *udc)
+  * The details are implementation specific, but it can go as
+  * far as powering off UDC completely and disable its data
+  * line pullups.
++ *
++ * Caller should acquire connect lock before invoking this function.
+  */
+-static inline void usb_gadget_udc_stop(struct usb_udc *udc)
++static inline void usb_gadget_udc_stop_locked(struct usb_udc *udc)
++	__must_hold(&udc->connect_lock)
+ {
+ 	if (!udc->started) {
+ 		dev_err(&udc->dev, "UDC had already stopped\n");
+@@ -1295,6 +1347,7 @@ int usb_add_gadget(struct usb_gadget *gadget)
+ 
+ 	udc->gadget = gadget;
+ 	gadget->udc = udc;
++	mutex_init(&udc->connect_lock);
+ 
+ 	udc->started = false;
+ 
+@@ -1496,11 +1549,15 @@ static int gadget_bind_driver(struct device *dev)
+ 	if (ret)
+ 		goto err_bind;
+ 
+-	ret = usb_gadget_udc_start(udc);
+-	if (ret)
++	mutex_lock(&udc->connect_lock);
++	ret = usb_gadget_udc_start_locked(udc);
++	if (ret) {
++		mutex_unlock(&udc->connect_lock);
+ 		goto err_start;
++	}
+ 	usb_gadget_enable_async_callbacks(udc);
+-	usb_udc_connect_control(udc);
++	usb_udc_connect_control_locked(udc);
++	mutex_unlock(&udc->connect_lock);
+ 
+ 	kobject_uevent(&udc->dev.kobj, KOBJ_CHANGE);
+ 	return 0;
+@@ -1531,12 +1588,14 @@ static void gadget_unbind_driver(struct device *dev)
+ 
+ 	kobject_uevent(&udc->dev.kobj, KOBJ_CHANGE);
+ 
+-	usb_gadget_disconnect(gadget);
++	mutex_lock(&udc->connect_lock);
++	usb_gadget_disconnect_locked(gadget);
+ 	usb_gadget_disable_async_callbacks(udc);
+ 	if (gadget->irq)
+ 		synchronize_irq(gadget->irq);
+ 	udc->driver->unbind(gadget);
+-	usb_gadget_udc_stop(udc);
++	usb_gadget_udc_stop_locked(udc);
++	mutex_unlock(&udc->connect_lock);
+ 
+ 	mutex_lock(&udc_lock);
+ 	driver->is_bound = false;
+@@ -1622,11 +1681,15 @@ static ssize_t soft_connect_store(struct device *dev,
+ 	}
+ 
+ 	if (sysfs_streq(buf, "connect")) {
+-		usb_gadget_udc_start(udc);
+-		usb_gadget_connect(udc->gadget);
++		mutex_lock(&udc->connect_lock);
++		usb_gadget_udc_start_locked(udc);
++		usb_gadget_connect_locked(udc->gadget);
++		mutex_unlock(&udc->connect_lock);
+ 	} else if (sysfs_streq(buf, "disconnect")) {
+-		usb_gadget_disconnect(udc->gadget);
+-		usb_gadget_udc_stop(udc);
++		mutex_lock(&udc->connect_lock);
++		usb_gadget_disconnect_locked(udc->gadget);
++		usb_gadget_udc_stop_locked(udc);
++		mutex_unlock(&udc->connect_lock);
+ 	} else {
+ 		dev_err(dev, "unsupported command '%s'\n", buf);
+ 		ret = -EINVAL;
+diff --git a/drivers/usb/gadget/udc/renesas_usb3.c b/drivers/usb/gadget/udc/renesas_usb3.c
+index bee6bceafc4fa..a301af66bd912 100644
+--- a/drivers/usb/gadget/udc/renesas_usb3.c
++++ b/drivers/usb/gadget/udc/renesas_usb3.c
+@@ -2661,6 +2661,7 @@ static int renesas_usb3_remove(struct platform_device *pdev)
+ 	debugfs_remove_recursive(usb3->dentry);
+ 	device_remove_file(&pdev->dev, &dev_attr_role);
+ 
++	cancel_work_sync(&usb3->role_work);
+ 	usb_role_switch_unregister(usb3->role_sw);
+ 
+ 	usb_del_gadget_udc(&usb3->gadget);
+diff --git a/drivers/usb/gadget/udc/tegra-xudc.c b/drivers/usb/gadget/udc/tegra-xudc.c
+index 2b71b33725f17..5bccd64847ff7 100644
+--- a/drivers/usb/gadget/udc/tegra-xudc.c
++++ b/drivers/usb/gadget/udc/tegra-xudc.c
+@@ -2167,7 +2167,7 @@ static int tegra_xudc_gadget_vbus_draw(struct usb_gadget *gadget,
+ 
+ 	dev_dbg(xudc->dev, "%s: %u mA\n", __func__, m_a);
+ 
+-	if (xudc->curr_usbphy->chg_type == SDP_TYPE)
++	if (xudc->curr_usbphy && xudc->curr_usbphy->chg_type == SDP_TYPE)
+ 		ret = usb_phy_set_power(xudc->curr_usbphy, m_a);
+ 
+ 	return ret;
+diff --git a/drivers/usb/host/xhci-debugfs.c b/drivers/usb/host/xhci-debugfs.c
+index 0bc7fe11f749e..99baa60ef50fe 100644
+--- a/drivers/usb/host/xhci-debugfs.c
++++ b/drivers/usb/host/xhci-debugfs.c
+@@ -133,6 +133,7 @@ static void xhci_debugfs_regset(struct xhci_hcd *xhci, u32 base,
+ 	regset->regs = regs;
+ 	regset->nregs = nregs;
+ 	regset->base = hcd->regs + base;
++	regset->dev = hcd->self.controller;
+ 
+ 	debugfs_create_regset32((const char *)rgs->name, 0444, parent, regset);
+ }
+diff --git a/drivers/usb/host/xhci-rcar.c b/drivers/usb/host/xhci-rcar.c
+index 7f18509a1d397..567b096a24e93 100644
+--- a/drivers/usb/host/xhci-rcar.c
++++ b/drivers/usb/host/xhci-rcar.c
+@@ -80,7 +80,6 @@ MODULE_FIRMWARE(XHCI_RCAR_FIRMWARE_NAME_V3);
+ 
+ /* For soc_device_attribute */
+ #define RCAR_XHCI_FIRMWARE_V2   BIT(0) /* FIRMWARE V2 */
+-#define RCAR_XHCI_FIRMWARE_V3   BIT(1) /* FIRMWARE V3 */
+ 
+ static const struct soc_device_attribute rcar_quirks_match[]  = {
+ 	{
+@@ -152,8 +151,6 @@ static int xhci_rcar_download_firmware(struct usb_hcd *hcd)
+ 
+ 	if (quirks & RCAR_XHCI_FIRMWARE_V2)
+ 		firmware_name = XHCI_RCAR_FIRMWARE_NAME_V2;
+-	else if (quirks & RCAR_XHCI_FIRMWARE_V3)
+-		firmware_name = XHCI_RCAR_FIRMWARE_NAME_V3;
+ 	else
+ 		firmware_name = priv->firmware_name;
+ 
+diff --git a/drivers/usb/mtu3/mtu3_qmu.c b/drivers/usb/mtu3/mtu3_qmu.c
+index a2fdab8b63b28..3b68a0a19cc76 100644
+--- a/drivers/usb/mtu3/mtu3_qmu.c
++++ b/drivers/usb/mtu3/mtu3_qmu.c
+@@ -210,6 +210,7 @@ static struct qmu_gpd *advance_enq_gpd(struct mtu3_gpd_ring *ring)
+ 	return ring->enqueue;
+ }
+ 
++/* @dequeue may be NULL if ring is unallocated or freed */
+ static struct qmu_gpd *advance_deq_gpd(struct mtu3_gpd_ring *ring)
+ {
+ 	if (ring->dequeue < ring->end)
+@@ -491,7 +492,7 @@ static void qmu_done_tx(struct mtu3 *mtu, u8 epnum)
+ 	dev_dbg(mtu->dev, "%s EP%d, last=%p, current=%p, enq=%p\n",
+ 		__func__, epnum, gpd, gpd_current, ring->enqueue);
+ 
+-	while (gpd != gpd_current && !GET_GPD_HWO(gpd)) {
++	while (gpd && gpd != gpd_current && !GET_GPD_HWO(gpd)) {
+ 
+ 		mreq = next_request(mep);
+ 
+@@ -530,7 +531,7 @@ static void qmu_done_rx(struct mtu3 *mtu, u8 epnum)
+ 	dev_dbg(mtu->dev, "%s EP%d, last=%p, current=%p, enq=%p\n",
+ 		__func__, epnum, gpd, gpd_current, ring->enqueue);
+ 
+-	while (gpd != gpd_current && !GET_GPD_HWO(gpd)) {
++	while (gpd && gpd != gpd_current && !GET_GPD_HWO(gpd)) {
+ 
+ 		mreq = next_request(mep);
+ 
+diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
+index 195963b82b636..97a16f7eb8941 100644
+--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
++++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
+@@ -2298,6 +2298,113 @@ static void update_cvq_info(struct mlx5_vdpa_dev *mvdev)
+ 	}
+ }
+ 
++static u8 query_vport_state(struct mlx5_core_dev *mdev, u8 opmod, u16 vport)
++{
++	u32 out[MLX5_ST_SZ_DW(query_vport_state_out)] = {};
++	u32 in[MLX5_ST_SZ_DW(query_vport_state_in)] = {};
++	int err;
++
++	MLX5_SET(query_vport_state_in, in, opcode, MLX5_CMD_OP_QUERY_VPORT_STATE);
++	MLX5_SET(query_vport_state_in, in, op_mod, opmod);
++	MLX5_SET(query_vport_state_in, in, vport_number, vport);
++	if (vport)
++		MLX5_SET(query_vport_state_in, in, other_vport, 1);
++
++	err = mlx5_cmd_exec_inout(mdev, query_vport_state, in, out);
++	if (err)
++		return 0;
++
++	return MLX5_GET(query_vport_state_out, out, state);
++}
++
++static bool get_link_state(struct mlx5_vdpa_dev *mvdev)
++{
++	if (query_vport_state(mvdev->mdev, MLX5_VPORT_STATE_OP_MOD_VNIC_VPORT, 0) ==
++	    VPORT_STATE_UP)
++		return true;
++
++	return false;
++}
++
++static void update_carrier(struct work_struct *work)
++{
++	struct mlx5_vdpa_wq_ent *wqent;
++	struct mlx5_vdpa_dev *mvdev;
++	struct mlx5_vdpa_net *ndev;
++
++	wqent = container_of(work, struct mlx5_vdpa_wq_ent, work);
++	mvdev = wqent->mvdev;
++	ndev = to_mlx5_vdpa_ndev(mvdev);
++	if (get_link_state(mvdev))
++		ndev->config.status |= cpu_to_mlx5vdpa16(mvdev, VIRTIO_NET_S_LINK_UP);
++	else
++		ndev->config.status &= cpu_to_mlx5vdpa16(mvdev, ~VIRTIO_NET_S_LINK_UP);
++
++	if (ndev->config_cb.callback)
++		ndev->config_cb.callback(ndev->config_cb.private);
++
++	kfree(wqent);
++}
++
++static int queue_link_work(struct mlx5_vdpa_net *ndev)
++{
++	struct mlx5_vdpa_wq_ent *wqent;
++
++	wqent = kzalloc(sizeof(*wqent), GFP_ATOMIC);
++	if (!wqent)
++		return -ENOMEM;
++
++	wqent->mvdev = &ndev->mvdev;
++	INIT_WORK(&wqent->work, update_carrier);
++	queue_work(ndev->mvdev.wq, &wqent->work);
++	return 0;
++}
++
++static int event_handler(struct notifier_block *nb, unsigned long event, void *param)
++{
++	struct mlx5_vdpa_net *ndev = container_of(nb, struct mlx5_vdpa_net, nb);
++	struct mlx5_eqe *eqe = param;
++	int ret = NOTIFY_DONE;
++
++	if (event == MLX5_EVENT_TYPE_PORT_CHANGE) {
++		switch (eqe->sub_type) {
++		case MLX5_PORT_CHANGE_SUBTYPE_DOWN:
++		case MLX5_PORT_CHANGE_SUBTYPE_ACTIVE:
++			if (queue_link_work(ndev))
++				return NOTIFY_DONE;
++
++			ret = NOTIFY_OK;
++			break;
++		default:
++			return NOTIFY_DONE;
++		}
++		return ret;
++	}
++	return ret;
++}
++
++static void register_link_notifier(struct mlx5_vdpa_net *ndev)
++{
++	if (!(ndev->mvdev.actual_features & BIT_ULL(VIRTIO_NET_F_STATUS)))
++		return;
++
++	ndev->nb.notifier_call = event_handler;
++	mlx5_notifier_register(ndev->mvdev.mdev, &ndev->nb);
++	ndev->nb_registered = true;
++	queue_link_work(ndev);
++}
++
++static void unregister_link_notifier(struct mlx5_vdpa_net *ndev)
++{
++	if (!ndev->nb_registered)
++		return;
++
++	ndev->nb_registered = false;
++	mlx5_notifier_unregister(ndev->mvdev.mdev, &ndev->nb);
++	if (ndev->mvdev.wq)
++		flush_workqueue(ndev->mvdev.wq);
++}
++
+ static int mlx5_vdpa_set_driver_features(struct vdpa_device *vdev, u64 features)
+ {
+ 	struct mlx5_vdpa_dev *mvdev = to_mvdev(vdev);
+@@ -2567,10 +2674,11 @@ static void mlx5_vdpa_set_status(struct vdpa_device *vdev, u8 status)
+ 				mlx5_vdpa_warn(mvdev, "failed to setup control VQ vring\n");
+ 				goto err_setup;
+ 			}
++			register_link_notifier(ndev);
+ 			err = setup_driver(mvdev);
+ 			if (err) {
+ 				mlx5_vdpa_warn(mvdev, "failed to setup driver\n");
+-				goto err_setup;
++				goto err_driver;
+ 			}
+ 		} else {
+ 			mlx5_vdpa_warn(mvdev, "did not expect DRIVER_OK to be cleared\n");
+@@ -2582,6 +2690,8 @@ static void mlx5_vdpa_set_status(struct vdpa_device *vdev, u8 status)
+ 	up_write(&ndev->reslock);
+ 	return;
+ 
++err_driver:
++	unregister_link_notifier(ndev);
+ err_setup:
+ 	mlx5_vdpa_destroy_mr(&ndev->mvdev);
+ 	ndev->mvdev.status |= VIRTIO_CONFIG_S_FAILED;
+@@ -2607,6 +2717,7 @@ static int mlx5_vdpa_reset(struct vdpa_device *vdev)
+ 	mlx5_vdpa_info(mvdev, "performing device reset\n");
+ 
+ 	down_write(&ndev->reslock);
++	unregister_link_notifier(ndev);
+ 	teardown_driver(ndev);
+ 	clear_vqs_ready(ndev);
+ 	mlx5_vdpa_destroy_mr(&ndev->mvdev);
+@@ -2861,9 +2972,7 @@ static int mlx5_vdpa_suspend(struct vdpa_device *vdev)
+ 	mlx5_vdpa_info(mvdev, "suspending device\n");
+ 
+ 	down_write(&ndev->reslock);
+-	ndev->nb_registered = false;
+-	mlx5_notifier_unregister(mvdev->mdev, &ndev->nb);
+-	flush_workqueue(ndev->mvdev.wq);
++	unregister_link_notifier(ndev);
+ 	for (i = 0; i < ndev->cur_num_vqs; i++) {
+ 		mvq = &ndev->vqs[i];
+ 		suspend_vq(ndev, mvq);
+@@ -3000,84 +3109,6 @@ struct mlx5_vdpa_mgmtdev {
+ 	struct mlx5_vdpa_net *ndev;
+ };
+ 
+-static u8 query_vport_state(struct mlx5_core_dev *mdev, u8 opmod, u16 vport)
+-{
+-	u32 out[MLX5_ST_SZ_DW(query_vport_state_out)] = {};
+-	u32 in[MLX5_ST_SZ_DW(query_vport_state_in)] = {};
+-	int err;
+-
+-	MLX5_SET(query_vport_state_in, in, opcode, MLX5_CMD_OP_QUERY_VPORT_STATE);
+-	MLX5_SET(query_vport_state_in, in, op_mod, opmod);
+-	MLX5_SET(query_vport_state_in, in, vport_number, vport);
+-	if (vport)
+-		MLX5_SET(query_vport_state_in, in, other_vport, 1);
+-
+-	err = mlx5_cmd_exec_inout(mdev, query_vport_state, in, out);
+-	if (err)
+-		return 0;
+-
+-	return MLX5_GET(query_vport_state_out, out, state);
+-}
+-
+-static bool get_link_state(struct mlx5_vdpa_dev *mvdev)
+-{
+-	if (query_vport_state(mvdev->mdev, MLX5_VPORT_STATE_OP_MOD_VNIC_VPORT, 0) ==
+-	    VPORT_STATE_UP)
+-		return true;
+-
+-	return false;
+-}
+-
+-static void update_carrier(struct work_struct *work)
+-{
+-	struct mlx5_vdpa_wq_ent *wqent;
+-	struct mlx5_vdpa_dev *mvdev;
+-	struct mlx5_vdpa_net *ndev;
+-
+-	wqent = container_of(work, struct mlx5_vdpa_wq_ent, work);
+-	mvdev = wqent->mvdev;
+-	ndev = to_mlx5_vdpa_ndev(mvdev);
+-	if (get_link_state(mvdev))
+-		ndev->config.status |= cpu_to_mlx5vdpa16(mvdev, VIRTIO_NET_S_LINK_UP);
+-	else
+-		ndev->config.status &= cpu_to_mlx5vdpa16(mvdev, ~VIRTIO_NET_S_LINK_UP);
+-
+-	if (ndev->nb_registered && ndev->config_cb.callback)
+-		ndev->config_cb.callback(ndev->config_cb.private);
+-
+-	kfree(wqent);
+-}
+-
+-static int event_handler(struct notifier_block *nb, unsigned long event, void *param)
+-{
+-	struct mlx5_vdpa_net *ndev = container_of(nb, struct mlx5_vdpa_net, nb);
+-	struct mlx5_eqe *eqe = param;
+-	int ret = NOTIFY_DONE;
+-	struct mlx5_vdpa_wq_ent *wqent;
+-
+-	if (event == MLX5_EVENT_TYPE_PORT_CHANGE) {
+-		if (!(ndev->mvdev.actual_features & BIT_ULL(VIRTIO_NET_F_STATUS)))
+-			return NOTIFY_DONE;
+-		switch (eqe->sub_type) {
+-		case MLX5_PORT_CHANGE_SUBTYPE_DOWN:
+-		case MLX5_PORT_CHANGE_SUBTYPE_ACTIVE:
+-			wqent = kzalloc(sizeof(*wqent), GFP_ATOMIC);
+-			if (!wqent)
+-				return NOTIFY_DONE;
+-
+-			wqent->mvdev = &ndev->mvdev;
+-			INIT_WORK(&wqent->work, update_carrier);
+-			queue_work(ndev->mvdev.wq, &wqent->work);
+-			ret = NOTIFY_OK;
+-			break;
+-		default:
+-			return NOTIFY_DONE;
+-		}
+-		return ret;
+-	}
+-	return ret;
+-}
+-
+ static int config_func_mtu(struct mlx5_core_dev *mdev, u16 mtu)
+ {
+ 	int inlen = MLX5_ST_SZ_BYTES(modify_nic_vport_context_in);
+@@ -3258,9 +3289,6 @@ static int mlx5_vdpa_dev_add(struct vdpa_mgmt_dev *v_mdev, const char *name,
+ 		goto err_res2;
+ 	}
+ 
+-	ndev->nb.notifier_call = event_handler;
+-	mlx5_notifier_register(mdev, &ndev->nb);
+-	ndev->nb_registered = true;
+ 	mvdev->vdev.mdev = &mgtdev->mgtdev;
+ 	err = _vdpa_register_device(&mvdev->vdev, max_vqs + 1);
+ 	if (err)
+@@ -3294,10 +3322,7 @@ static void mlx5_vdpa_dev_del(struct vdpa_mgmt_dev *v_mdev, struct vdpa_device *
+ 
+ 	mlx5_vdpa_remove_debugfs(ndev->debugfs);
+ 	ndev->debugfs = NULL;
+-	if (ndev->nb_registered) {
+-		ndev->nb_registered = false;
+-		mlx5_notifier_unregister(mvdev->mdev, &ndev->nb);
+-	}
++	unregister_link_notifier(ndev);
+ 	wq = mvdev->wq;
+ 	mvdev->wq = NULL;
+ 	destroy_workqueue(wq);
+diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
+index 7be9d9d8f01c8..74c7d1f978b75 100644
+--- a/drivers/vhost/vdpa.c
++++ b/drivers/vhost/vdpa.c
+@@ -851,11 +851,7 @@ static void vhost_vdpa_unmap(struct vhost_vdpa *v,
+ 		if (!v->in_batch)
+ 			ops->set_map(vdpa, asid, iotlb);
+ 	}
+-	/* If we are in the middle of batch processing, delay the free
+-	 * of AS until BATCH_END.
+-	 */
+-	if (!v->in_batch && !iotlb->nmaps)
+-		vhost_vdpa_remove_as(v, asid);
++
+ }
+ 
+ static int vhost_vdpa_va_map(struct vhost_vdpa *v,
+@@ -1112,8 +1108,6 @@ static int vhost_vdpa_process_iotlb_msg(struct vhost_dev *dev, u32 asid,
+ 		if (v->in_batch && ops->set_map)
+ 			ops->set_map(vdpa, asid, iotlb);
+ 		v->in_batch = false;
+-		if (!iotlb->nmaps)
+-			vhost_vdpa_remove_as(v, asid);
+ 		break;
+ 	default:
+ 		r = -EINVAL;
+diff --git a/drivers/video/fbdev/mmp/hw/mmp_ctrl.c b/drivers/video/fbdev/mmp/hw/mmp_ctrl.c
+index a9df8ee798102..51fbf02a03430 100644
+--- a/drivers/video/fbdev/mmp/hw/mmp_ctrl.c
++++ b/drivers/video/fbdev/mmp/hw/mmp_ctrl.c
+@@ -514,9 +514,9 @@ static int mmphw_probe(struct platform_device *pdev)
+ 	/* get clock */
+ 	ctrl->clk = devm_clk_get(ctrl->dev, mi->clk_name);
+ 	if (IS_ERR(ctrl->clk)) {
++		ret = PTR_ERR(ctrl->clk);
+ 		dev_err_probe(ctrl->dev, ret,
+ 			      "unable to get clk %s\n", mi->clk_name);
+-		ret = -ENOENT;
+ 		goto failed;
+ 	}
+ 	clk_prepare_enable(ctrl->clk);
+diff --git a/drivers/virt/coco/sev-guest/sev-guest.c b/drivers/virt/coco/sev-guest/sev-guest.c
+index 46f1a8d558b0b..0c7b47acba2a8 100644
+--- a/drivers/virt/coco/sev-guest/sev-guest.c
++++ b/drivers/virt/coco/sev-guest/sev-guest.c
+@@ -46,7 +46,15 @@ struct snp_guest_dev {
+ 
+ 	void *certs_data;
+ 	struct snp_guest_crypto *crypto;
++	/* request and response are in unencrypted memory */
+ 	struct snp_guest_msg *request, *response;
++
++	/*
++	 * Avoid information leakage by double-buffering shared messages
++	 * in fields that are in regular encrypted memory.
++	 */
++	struct snp_guest_msg secret_request, secret_response;
++
+ 	struct snp_secrets_page_layout *layout;
+ 	struct snp_req_data input;
+ 	u32 *os_area_msg_seqno;
+@@ -266,14 +274,17 @@ static int dec_payload(struct snp_guest_dev *snp_dev, struct snp_guest_msg *msg,
+ static int verify_and_dec_payload(struct snp_guest_dev *snp_dev, void *payload, u32 sz)
+ {
+ 	struct snp_guest_crypto *crypto = snp_dev->crypto;
+-	struct snp_guest_msg *resp = snp_dev->response;
+-	struct snp_guest_msg *req = snp_dev->request;
++	struct snp_guest_msg *resp = &snp_dev->secret_response;
++	struct snp_guest_msg *req = &snp_dev->secret_request;
+ 	struct snp_guest_msg_hdr *req_hdr = &req->hdr;
+ 	struct snp_guest_msg_hdr *resp_hdr = &resp->hdr;
+ 
+ 	dev_dbg(snp_dev->dev, "response [seqno %lld type %d version %d sz %d]\n",
+ 		resp_hdr->msg_seqno, resp_hdr->msg_type, resp_hdr->msg_version, resp_hdr->msg_sz);
+ 
++	/* Copy response from shared memory to encrypted memory. */
++	memcpy(resp, snp_dev->response, sizeof(*resp));
++
+ 	/* Verify that the sequence counter is incremented by 1 */
+ 	if (unlikely(resp_hdr->msg_seqno != (req_hdr->msg_seqno + 1)))
+ 		return -EBADMSG;
+@@ -297,7 +308,7 @@ static int verify_and_dec_payload(struct snp_guest_dev *snp_dev, void *payload,
+ static int enc_payload(struct snp_guest_dev *snp_dev, u64 seqno, int version, u8 type,
+ 			void *payload, size_t sz)
+ {
+-	struct snp_guest_msg *req = snp_dev->request;
++	struct snp_guest_msg *req = &snp_dev->secret_request;
+ 	struct snp_guest_msg_hdr *hdr = &req->hdr;
+ 
+ 	memset(req, 0, sizeof(*req));
+@@ -417,13 +428,21 @@ static int handle_guest_request(struct snp_guest_dev *snp_dev, u64 exit_code, in
+ 	if (!seqno)
+ 		return -EIO;
+ 
++	/* Clear shared memory's response for the host to populate. */
+ 	memset(snp_dev->response, 0, sizeof(struct snp_guest_msg));
+ 
+-	/* Encrypt the userspace provided payload */
++	/* Encrypt the userspace provided payload in snp_dev->secret_request. */
+ 	rc = enc_payload(snp_dev, seqno, msg_ver, type, req_buf, req_sz);
+ 	if (rc)
+ 		return rc;
+ 
++	/*
++	 * Write the fully encrypted request to the shared unencrypted
++	 * request page.
++	 */
++	memcpy(snp_dev->request, &snp_dev->secret_request,
++	       sizeof(snp_dev->secret_request));
++
+ 	rc = __handle_guest_request(snp_dev, exit_code, fw_err);
+ 	if (rc) {
+ 		if (rc == -EIO && *fw_err == SNP_GUEST_REQ_INVALID_LEN)
+diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
+index 41144b5246a8a..7a78ff05998ad 100644
+--- a/drivers/virtio/virtio_ring.c
++++ b/drivers/virtio/virtio_ring.c
+@@ -854,6 +854,14 @@ static void virtqueue_disable_cb_split(struct virtqueue *_vq)
+ 
+ 	if (!(vq->split.avail_flags_shadow & VRING_AVAIL_F_NO_INTERRUPT)) {
+ 		vq->split.avail_flags_shadow |= VRING_AVAIL_F_NO_INTERRUPT;
++
++		/*
++		 * If device triggered an event already it won't trigger one again:
++		 * no need to disable.
++		 */
++		if (vq->event_triggered)
++			return;
++
+ 		if (vq->event)
+ 			/* TODO: this is a hack. Figure out a cleaner value to write. */
+ 			vring_used_event(&vq->split.vring) = 0x0;
+@@ -1699,6 +1707,14 @@ static void virtqueue_disable_cb_packed(struct virtqueue *_vq)
+ 
+ 	if (vq->packed.event_flags_shadow != VRING_PACKED_EVENT_FLAG_DISABLE) {
+ 		vq->packed.event_flags_shadow = VRING_PACKED_EVENT_FLAG_DISABLE;
++
++		/*
++		 * If device triggered an event already it won't trigger one again:
++		 * no need to disable.
++		 */
++		if (vq->event_triggered)
++			return;
++
+ 		vq->packed.vring.driver->flags =
+ 			cpu_to_le16(vq->packed.event_flags_shadow);
+ 	}
+@@ -2330,12 +2346,6 @@ void virtqueue_disable_cb(struct virtqueue *_vq)
+ {
+ 	struct vring_virtqueue *vq = to_vvq(_vq);
+ 
+-	/* If device triggered an event already it won't trigger one again:
+-	 * no need to disable.
+-	 */
+-	if (vq->event_triggered)
+-		return;
+-
+ 	if (vq->packed_ring)
+ 		virtqueue_disable_cb_packed(_vq);
+ 	else
+diff --git a/drivers/xen/pcpu.c b/drivers/xen/pcpu.c
+index fd3a644b08559..b3e3d1bb37f3e 100644
+--- a/drivers/xen/pcpu.c
++++ b/drivers/xen/pcpu.c
+@@ -58,6 +58,7 @@ struct pcpu {
+ 	struct list_head list;
+ 	struct device dev;
+ 	uint32_t cpu_id;
++	uint32_t acpi_id;
+ 	uint32_t flags;
+ };
+ 
+@@ -249,6 +250,7 @@ static struct pcpu *create_and_register_pcpu(struct xenpf_pcpuinfo *info)
+ 
+ 	INIT_LIST_HEAD(&pcpu->list);
+ 	pcpu->cpu_id = info->xen_cpuid;
++	pcpu->acpi_id = info->acpi_id;
+ 	pcpu->flags = info->flags;
+ 
+ 	/* Need hold on xen_pcpu_lock before pcpu list manipulations */
+@@ -381,3 +383,21 @@ err1:
+ 	return ret;
+ }
+ arch_initcall(xen_pcpu_init);
++
++#ifdef CONFIG_ACPI
++bool __init xen_processor_present(uint32_t acpi_id)
++{
++	const struct pcpu *pcpu;
++	bool online = false;
++
++	mutex_lock(&xen_pcpu_lock);
++	list_for_each_entry(pcpu, &xen_pcpus, list)
++		if (pcpu->acpi_id == acpi_id) {
++			online = pcpu->flags & XEN_PCPU_FLAGS_ONLINE;
++			break;
++		}
++	mutex_unlock(&xen_pcpu_lock);
++
++	return online;
++}
++#endif
+diff --git a/fs/Makefile b/fs/Makefile
+index 05f89b5c962f8..8d4736fcc766c 100644
+--- a/fs/Makefile
++++ b/fs/Makefile
+@@ -6,7 +6,6 @@
+ # Rewritten to use lists instead of if-statements.
+ # 
+ 
+-obj-$(CONFIG_SYSCTL)		+= sysctls.o
+ 
+ obj-y :=	open.o read_write.o file_table.o super.o \
+ 		char_dev.o stat.o exec.o pipe.o namei.o fcntl.o \
+@@ -50,7 +49,7 @@ obj-$(CONFIG_FS_MBCACHE)	+= mbcache.o
+ obj-$(CONFIG_FS_POSIX_ACL)	+= posix_acl.o
+ obj-$(CONFIG_NFS_COMMON)	+= nfs_common/
+ obj-$(CONFIG_COREDUMP)		+= coredump.o
+-obj-$(CONFIG_SYSCTL)		+= drop_caches.o
++obj-$(CONFIG_SYSCTL)		+= drop_caches.o sysctls.o
+ 
+ obj-$(CONFIG_FHANDLE)		+= fhandle.o
+ obj-y				+= iomap/
+diff --git a/fs/afs/dir.c b/fs/afs/dir.c
+index 82690d1dd49a0..a97499fd747b6 100644
+--- a/fs/afs/dir.c
++++ b/fs/afs/dir.c
+@@ -275,6 +275,7 @@ static struct afs_read *afs_read_dir(struct afs_vnode *dvnode, struct key *key)
+ 	loff_t i_size;
+ 	int nr_pages, i;
+ 	int ret;
++	loff_t remote_size = 0;
+ 
+ 	_enter("");
+ 
+@@ -289,6 +290,8 @@ static struct afs_read *afs_read_dir(struct afs_vnode *dvnode, struct key *key)
+ 
+ expand:
+ 	i_size = i_size_read(&dvnode->netfs.inode);
++	if (i_size < remote_size)
++	    i_size = remote_size;
+ 	if (i_size < 2048) {
+ 		ret = afs_bad(dvnode, afs_file_error_dir_small);
+ 		goto error;
+@@ -364,6 +367,7 @@ expand:
+ 			 * buffer.
+ 			 */
+ 			up_write(&dvnode->validate_lock);
++			remote_size = req->file_size;
+ 			goto expand;
+ 		}
+ 
+diff --git a/fs/afs/inode.c b/fs/afs/inode.c
+index 0167e96e51986..c5098f70b53af 100644
+--- a/fs/afs/inode.c
++++ b/fs/afs/inode.c
+@@ -230,6 +230,7 @@ static void afs_apply_status(struct afs_operation *op,
+ 			set_bit(AFS_VNODE_ZAP_DATA, &vnode->flags);
+ 		}
+ 		change_size = true;
++		data_changed = true;
+ 	} else if (vnode->status.type == AFS_FTYPE_DIR) {
+ 		/* Expected directory change is handled elsewhere so
+ 		 * that we can locally edit the directory and save on a
+@@ -449,7 +450,7 @@ static void afs_get_inode_cache(struct afs_vnode *vnode)
+ 				    0 : FSCACHE_ADV_SINGLE_CHUNK,
+ 				    &key, sizeof(key),
+ 				    &aux, sizeof(aux),
+-				    vnode->status.size));
++				    i_size_read(&vnode->netfs.inode)));
+ #endif
+ }
+ 
+@@ -765,6 +766,13 @@ int afs_getattr(struct mnt_idmap *idmap, const struct path *path,
+ 		if (test_bit(AFS_VNODE_SILLY_DELETED, &vnode->flags) &&
+ 		    stat->nlink > 0)
+ 			stat->nlink -= 1;
++
++		/* Lie about the size of directories.  We maintain a locally
++		 * edited copy and may make different allocation decisions on
++		 * it, but we need to give userspace the server's size.
++		 */
++		if (S_ISDIR(inode->i_mode))
++			stat->size = vnode->netfs.remote_i_size;
+ 	} while (need_seqretry(&vnode->cb_lock, seq));
+ 
+ 	done_seqretry(&vnode->cb_lock, seq);
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index ba769a1eb87ab..25833b4eeaf57 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -3161,6 +3161,11 @@ static long btrfs_ioctl_scrub(struct file *file, void __user *arg)
+ 	if (IS_ERR(sa))
+ 		return PTR_ERR(sa);
+ 
++	if (sa->flags & ~BTRFS_SCRUB_SUPPORTED_FLAGS) {
++		ret = -EOPNOTSUPP;
++		goto out;
++	}
++
+ 	if (!(sa->flags & BTRFS_SCRUB_READONLY)) {
+ 		ret = mnt_want_write_file(file);
+ 		if (ret)
+diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
+index 7cc20772eac96..789be30d6ee22 100644
+--- a/fs/ceph/caps.c
++++ b/fs/ceph/caps.c
+@@ -431,7 +431,7 @@ void ceph_reservation_status(struct ceph_fs_client *fsc,
+  *
+  * Called with i_ceph_lock held.
+  */
+-static struct ceph_cap *__get_cap_for_mds(struct ceph_inode_info *ci, int mds)
++struct ceph_cap *__get_cap_for_mds(struct ceph_inode_info *ci, int mds)
+ {
+ 	struct ceph_cap *cap;
+ 	struct rb_node *n = ci->i_caps.rb_node;
+diff --git a/fs/ceph/debugfs.c b/fs/ceph/debugfs.c
+index bec3c4549c07d..3904333fa6c38 100644
+--- a/fs/ceph/debugfs.c
++++ b/fs/ceph/debugfs.c
+@@ -248,14 +248,20 @@ static int metrics_caps_show(struct seq_file *s, void *p)
+ 	return 0;
+ }
+ 
+-static int caps_show_cb(struct inode *inode, struct ceph_cap *cap, void *p)
++static int caps_show_cb(struct inode *inode, int mds, void *p)
+ {
++	struct ceph_inode_info *ci = ceph_inode(inode);
+ 	struct seq_file *s = p;
+-
+-	seq_printf(s, "0x%-17llx%-3d%-17s%-17s\n", ceph_ino(inode),
+-		   cap->session->s_mds,
+-		   ceph_cap_string(cap->issued),
+-		   ceph_cap_string(cap->implemented));
++	struct ceph_cap *cap;
++
++	spin_lock(&ci->i_ceph_lock);
++	cap = __get_cap_for_mds(ci, mds);
++	if (cap)
++		seq_printf(s, "0x%-17llx%-3d%-17s%-17s\n", ceph_ino(inode),
++			   cap->session->s_mds,
++			   ceph_cap_string(cap->issued),
++			   ceph_cap_string(cap->implemented));
++	spin_unlock(&ci->i_ceph_lock);
+ 	return 0;
+ }
+ 
+diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
+index 27a245d959c0a..54e3c2ab21d22 100644
+--- a/fs/ceph/mds_client.c
++++ b/fs/ceph/mds_client.c
+@@ -1632,8 +1632,8 @@ static void cleanup_session_requests(struct ceph_mds_client *mdsc,
+  * Caller must hold session s_mutex.
+  */
+ int ceph_iterate_session_caps(struct ceph_mds_session *session,
+-			      int (*cb)(struct inode *, struct ceph_cap *,
+-					void *), void *arg)
++			      int (*cb)(struct inode *, int mds, void *),
++			      void *arg)
+ {
+ 	struct list_head *p;
+ 	struct ceph_cap *cap;
+@@ -1645,6 +1645,8 @@ int ceph_iterate_session_caps(struct ceph_mds_session *session,
+ 	spin_lock(&session->s_cap_lock);
+ 	p = session->s_caps.next;
+ 	while (p != &session->s_caps) {
++		int mds;
++
+ 		cap = list_entry(p, struct ceph_cap, session_caps);
+ 		inode = igrab(&cap->ci->netfs.inode);
+ 		if (!inode) {
+@@ -1652,6 +1654,7 @@ int ceph_iterate_session_caps(struct ceph_mds_session *session,
+ 			continue;
+ 		}
+ 		session->s_cap_iterator = cap;
++		mds = cap->mds;
+ 		spin_unlock(&session->s_cap_lock);
+ 
+ 		if (last_inode) {
+@@ -1663,7 +1666,7 @@ int ceph_iterate_session_caps(struct ceph_mds_session *session,
+ 			old_cap = NULL;
+ 		}
+ 
+-		ret = cb(inode, cap, arg);
++		ret = cb(inode, mds, arg);
+ 		last_inode = inode;
+ 
+ 		spin_lock(&session->s_cap_lock);
+@@ -1696,20 +1699,25 @@ out:
+ 	return ret;
+ }
+ 
+-static int remove_session_caps_cb(struct inode *inode, struct ceph_cap *cap,
+-				  void *arg)
++static int remove_session_caps_cb(struct inode *inode, int mds, void *arg)
+ {
+ 	struct ceph_inode_info *ci = ceph_inode(inode);
+ 	bool invalidate = false;
+-	int iputs;
++	struct ceph_cap *cap;
++	int iputs = 0;
+ 
+-	dout("removing cap %p, ci is %p, inode is %p\n",
+-	     cap, ci, &ci->netfs.inode);
+ 	spin_lock(&ci->i_ceph_lock);
+-	iputs = ceph_purge_inode_cap(inode, cap, &invalidate);
++	cap = __get_cap_for_mds(ci, mds);
++	if (cap) {
++		dout(" removing cap %p, ci is %p, inode is %p\n",
++		     cap, ci, &ci->netfs.inode);
++
++		iputs = ceph_purge_inode_cap(inode, cap, &invalidate);
++	}
+ 	spin_unlock(&ci->i_ceph_lock);
+ 
+-	wake_up_all(&ci->i_cap_wq);
++	if (cap)
++		wake_up_all(&ci->i_cap_wq);
+ 	if (invalidate)
+ 		ceph_queue_invalidate(inode);
+ 	while (iputs--)
+@@ -1780,8 +1788,7 @@ enum {
+  *
+  * caller must hold s_mutex.
+  */
+-static int wake_up_session_cb(struct inode *inode, struct ceph_cap *cap,
+-			      void *arg)
++static int wake_up_session_cb(struct inode *inode, int mds, void *arg)
+ {
+ 	struct ceph_inode_info *ci = ceph_inode(inode);
+ 	unsigned long ev = (unsigned long)arg;
+@@ -1792,12 +1799,14 @@ static int wake_up_session_cb(struct inode *inode, struct ceph_cap *cap,
+ 		ci->i_requested_max_size = 0;
+ 		spin_unlock(&ci->i_ceph_lock);
+ 	} else if (ev == RENEWCAPS) {
+-		if (cap->cap_gen < atomic_read(&cap->session->s_cap_gen)) {
+-			/* mds did not re-issue stale cap */
+-			spin_lock(&ci->i_ceph_lock);
++		struct ceph_cap *cap;
++
++		spin_lock(&ci->i_ceph_lock);
++		cap = __get_cap_for_mds(ci, mds);
++		/* mds did not re-issue stale cap */
++		if (cap && cap->cap_gen < atomic_read(&cap->session->s_cap_gen))
+ 			cap->issued = cap->implemented = CEPH_CAP_PIN;
+-			spin_unlock(&ci->i_ceph_lock);
+-		}
++		spin_unlock(&ci->i_ceph_lock);
+ 	} else if (ev == FORCE_RO) {
+ 	}
+ 	wake_up_all(&ci->i_cap_wq);
+@@ -1959,16 +1968,22 @@ out:
+  * Yes, this is a bit sloppy.  Our only real goal here is to respond to
+  * memory pressure from the MDS, though, so it needn't be perfect.
+  */
+-static int trim_caps_cb(struct inode *inode, struct ceph_cap *cap, void *arg)
++static int trim_caps_cb(struct inode *inode, int mds, void *arg)
+ {
+ 	int *remaining = arg;
+ 	struct ceph_inode_info *ci = ceph_inode(inode);
+ 	int used, wanted, oissued, mine;
++	struct ceph_cap *cap;
+ 
+ 	if (*remaining <= 0)
+ 		return -1;
+ 
+ 	spin_lock(&ci->i_ceph_lock);
++	cap = __get_cap_for_mds(ci, mds);
++	if (!cap) {
++		spin_unlock(&ci->i_ceph_lock);
++		return 0;
++	}
+ 	mine = cap->issued | cap->implemented;
+ 	used = __ceph_caps_used(ci);
+ 	wanted = __ceph_caps_file_wanted(ci);
+@@ -3911,26 +3926,22 @@ out_unlock:
+ /*
+  * Encode information about a cap for a reconnect with the MDS.
+  */
+-static int reconnect_caps_cb(struct inode *inode, struct ceph_cap *cap,
+-			  void *arg)
++static int reconnect_caps_cb(struct inode *inode, int mds, void *arg)
+ {
+ 	union {
+ 		struct ceph_mds_cap_reconnect v2;
+ 		struct ceph_mds_cap_reconnect_v1 v1;
+ 	} rec;
+-	struct ceph_inode_info *ci = cap->ci;
++	struct ceph_inode_info *ci = ceph_inode(inode);
+ 	struct ceph_reconnect_state *recon_state = arg;
+ 	struct ceph_pagelist *pagelist = recon_state->pagelist;
+ 	struct dentry *dentry;
++	struct ceph_cap *cap;
+ 	char *path;
+-	int pathlen = 0, err;
++	int pathlen = 0, err = 0;
+ 	u64 pathbase;
+ 	u64 snap_follows;
+ 
+-	dout(" adding %p ino %llx.%llx cap %p %lld %s\n",
+-	     inode, ceph_vinop(inode), cap, cap->cap_id,
+-	     ceph_cap_string(cap->issued));
+-
+ 	dentry = d_find_primary(inode);
+ 	if (dentry) {
+ 		/* set pathbase to parent dir when msg_version >= 2 */
+@@ -3947,6 +3958,15 @@ static int reconnect_caps_cb(struct inode *inode, struct ceph_cap *cap,
+ 	}
+ 
+ 	spin_lock(&ci->i_ceph_lock);
++	cap = __get_cap_for_mds(ci, mds);
++	if (!cap) {
++		spin_unlock(&ci->i_ceph_lock);
++		goto out_err;
++	}
++	dout(" adding %p ino %llx.%llx cap %p %lld %s\n",
++	     inode, ceph_vinop(inode), cap, cap->cap_id,
++	     ceph_cap_string(cap->issued));
++
+ 	cap->seq = 0;        /* reset cap seq */
+ 	cap->issue_seq = 0;  /* and issue_seq */
+ 	cap->mseq = 0;       /* and migrate_seq */
+diff --git a/fs/ceph/mds_client.h b/fs/ceph/mds_client.h
+index 0598faa50e2e0..18b026b1ac63f 100644
+--- a/fs/ceph/mds_client.h
++++ b/fs/ceph/mds_client.h
+@@ -541,8 +541,7 @@ extern void ceph_flush_cap_releases(struct ceph_mds_client *mdsc,
+ extern void ceph_queue_cap_reclaim_work(struct ceph_mds_client *mdsc);
+ extern void ceph_reclaim_caps_nr(struct ceph_mds_client *mdsc, int nr);
+ extern int ceph_iterate_session_caps(struct ceph_mds_session *session,
+-				     int (*cb)(struct inode *,
+-					       struct ceph_cap *, void *),
++				     int (*cb)(struct inode *, int mds, void *),
+ 				     void *arg);
+ extern void ceph_mdsc_pre_umount(struct ceph_mds_client *mdsc);
+ 
+diff --git a/fs/ceph/super.h b/fs/ceph/super.h
+index 6ecca2c6d1379..d24bf0db52346 100644
+--- a/fs/ceph/super.h
++++ b/fs/ceph/super.h
+@@ -1192,6 +1192,8 @@ extern void ceph_kick_flushing_caps(struct ceph_mds_client *mdsc,
+ 				    struct ceph_mds_session *session);
+ void ceph_kick_flushing_inode_caps(struct ceph_mds_session *session,
+ 				   struct ceph_inode_info *ci);
++extern struct ceph_cap *__get_cap_for_mds(struct ceph_inode_info *ci,
++					  int mds);
+ extern struct ceph_cap *ceph_get_cap_for_mds(struct ceph_inode_info *ci,
+ 					     int mds);
+ extern void ceph_take_cap_refs(struct ceph_inode_info *ci, int caps,
+diff --git a/fs/cifs/cifs_debug.c b/fs/cifs/cifs_debug.c
+index e9c8c088d948c..d4ed200a94714 100644
+--- a/fs/cifs/cifs_debug.c
++++ b/fs/cifs/cifs_debug.c
+@@ -280,8 +280,10 @@ static int cifs_debug_data_proc_show(struct seq_file *m, void *v)
+ 		seq_printf(m, "\n%d) ConnectionId: 0x%llx ",
+ 			c, server->conn_id);
+ 
++		spin_lock(&server->srv_lock);
+ 		if (server->hostname)
+ 			seq_printf(m, "Hostname: %s ", server->hostname);
++		spin_unlock(&server->srv_lock);
+ #ifdef CONFIG_CIFS_SMB_DIRECT
+ 		if (!server->rdma)
+ 			goto skip_rdma;
+@@ -623,10 +625,13 @@ static int cifs_stats_proc_show(struct seq_file *m, void *v)
+ 				server->fastest_cmd[j],
+ 				server->slowest_cmd[j]);
+ 		for (j = 0; j < NUMBER_OF_SMB2_COMMANDS; j++)
+-			if (atomic_read(&server->smb2slowcmd[j]))
++			if (atomic_read(&server->smb2slowcmd[j])) {
++				spin_lock(&server->srv_lock);
+ 				seq_printf(m, "  %d slow responses from %s for command %d\n",
+ 					atomic_read(&server->smb2slowcmd[j]),
+ 					server->hostname, j);
++				spin_unlock(&server->srv_lock);
++			}
+ #endif /* STATS2 */
+ 		list_for_each_entry(ses, &server->smb_ses_list, smb_ses_list) {
+ 			list_for_each_entry(tcon, &ses->tcon_list, tcon_list) {
+diff --git a/fs/cifs/cifs_debug.h b/fs/cifs/cifs_debug.h
+index d44808263cfba..ce5cfd236fdb8 100644
+--- a/fs/cifs/cifs_debug.h
++++ b/fs/cifs/cifs_debug.h
+@@ -81,19 +81,19 @@ do {									\
+ 
+ #define cifs_server_dbg_func(ratefunc, type, fmt, ...)			\
+ do {									\
+-	const char *sn = "";						\
+-	if (server && server->hostname)					\
+-		sn = server->hostname;					\
++	spin_lock(&server->srv_lock);					\
+ 	if ((type) & FYI && cifsFYI & CIFS_INFO) {			\
+ 		pr_debug_ ## ratefunc("%s: \\\\%s " fmt,		\
+-				      __FILE__, sn, ##__VA_ARGS__);	\
++				      __FILE__, server->hostname,	\
++				      ##__VA_ARGS__);			\
+ 	} else if ((type) & VFS) {					\
+ 		pr_err_ ## ratefunc("VFS: \\\\%s " fmt,			\
+-				    sn, ##__VA_ARGS__);			\
++				    server->hostname, ##__VA_ARGS__);	\
+ 	} else if ((type) & NOISY && (NOISY != 0)) {			\
+ 		pr_debug_ ## ratefunc("\\\\%s " fmt,			\
+-				      sn, ##__VA_ARGS__);		\
++				      server->hostname, ##__VA_ARGS__);	\
+ 	}								\
++	spin_unlock(&server->srv_lock);					\
+ } while (0)
+ 
+ #define cifs_server_dbg(type, fmt, ...)					\
+diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h
+index 08a73dcb77864..414685c5d5306 100644
+--- a/fs/cifs/cifsglob.h
++++ b/fs/cifs/cifsglob.h
+@@ -736,17 +736,23 @@ struct TCP_Server_Info {
+ #endif
+ 	struct mutex refpath_lock; /* protects leaf_fullpath */
+ 	/*
+-	 * Canonical DFS full paths that were used to chase referrals in mount and reconnect.
++	 * origin_fullpath: Canonical copy of smb3_fs_context::source.
++	 *                  It is used for matching existing DFS tcons.
+ 	 *
+-	 * origin_fullpath: first or original referral path
+-	 * leaf_fullpath: last referral path (might be changed due to nested links in reconnect)
++	 * leaf_fullpath: Canonical DFS referral path related to this
++	 *                connection.
++	 *                It is used in DFS cache refresher, reconnect and may
++	 *                change due to nested DFS links.
+ 	 *
+-	 * current_fullpath: pointer to either origin_fullpath or leaf_fullpath
+-	 * NOTE: cannot be accessed outside cifs_reconnect() and smb2_reconnect()
++	 * Both protected by @refpath_lock and @srv_lock.  The @refpath_lock is
++	 * mosly used for not requiring a copy of @leaf_fullpath when getting
++	 * cached or new DFS referrals (which might also sleep during I/O).
++	 * While @srv_lock is held for making string and NULL comparions against
++	 * both fields as in mount(2) and cache refresh.
+ 	 *
+-	 * format: \\HOST\SHARE\[OPTIONAL PATH]
++	 * format: \\HOST\SHARE[\OPTIONAL PATH]
+ 	 */
+-	char *origin_fullpath, *leaf_fullpath, *current_fullpath;
++	char *origin_fullpath, *leaf_fullpath;
+ };
+ 
+ static inline bool is_smb1(struct TCP_Server_Info *server)
+@@ -1232,8 +1238,8 @@ struct cifs_tcon {
+ 	struct cached_fids *cfids;
+ 	/* BB add field for back pointer to sb struct(s)? */
+ #ifdef CONFIG_CIFS_DFS_UPCALL
+-	struct list_head ulist; /* cache update list */
+ 	struct list_head dfs_ses_list;
++	struct delayed_work dfs_cache_work;
+ #endif
+ 	struct delayed_work	query_interfaces; /* query interfaces workqueue job */
+ };
+@@ -1750,7 +1756,6 @@ struct cifs_mount_ctx {
+ 	struct TCP_Server_Info *server;
+ 	struct cifs_ses *ses;
+ 	struct cifs_tcon *tcon;
+-	char *origin_fullpath, *leaf_fullpath;
+ 	struct list_head dfs_ses_list;
+ };
+ 
+diff --git a/fs/cifs/cifsproto.h b/fs/cifs/cifsproto.h
+index e2eff66eefabf..c1c704990b986 100644
+--- a/fs/cifs/cifsproto.h
++++ b/fs/cifs/cifsproto.h
+@@ -8,6 +8,7 @@
+ #ifndef _CIFSPROTO_H
+ #define _CIFSPROTO_H
+ #include <linux/nls.h>
++#include <linux/ctype.h>
+ #include "trace.h"
+ #ifdef CONFIG_CIFS_DFS_UPCALL
+ #include "dfs_cache.h"
+@@ -572,7 +573,7 @@ extern int E_md4hash(const unsigned char *passwd, unsigned char *p16,
+ extern struct TCP_Server_Info *
+ cifs_find_tcp_session(struct smb3_fs_context *ctx);
+ 
+-extern void cifs_put_smb_ses(struct cifs_ses *ses);
++void __cifs_put_smb_ses(struct cifs_ses *ses);
+ 
+ extern struct cifs_ses *
+ cifs_get_smb_ses(struct TCP_Server_Info *server, struct smb3_fs_context *ctx);
+@@ -696,4 +697,45 @@ struct super_block *cifs_get_tcon_super(struct cifs_tcon *tcon);
+ void cifs_put_tcon_super(struct super_block *sb);
+ int cifs_wait_for_server_reconnect(struct TCP_Server_Info *server, bool retry);
+ 
++/* Put references of @ses and @ses->dfs_root_ses */
++static inline void cifs_put_smb_ses(struct cifs_ses *ses)
++{
++	struct cifs_ses *rses = ses->dfs_root_ses;
++
++	__cifs_put_smb_ses(ses);
++	if (rses)
++		__cifs_put_smb_ses(rses);
++}
++
++/* Get an active reference of @ses and @ses->dfs_root_ses.
++ *
++ * NOTE: make sure to call this function when incrementing reference count of
++ * @ses to ensure that any DFS root session attached to it (@ses->dfs_root_ses)
++ * will also get its reference count incremented.
++ *
++ * cifs_put_smb_ses() will put both references, so call it when you're done.
++ */
++static inline void cifs_smb_ses_inc_refcount(struct cifs_ses *ses)
++{
++	lockdep_assert_held(&cifs_tcp_ses_lock);
++
++	ses->ses_count++;
++	if (ses->dfs_root_ses)
++		ses->dfs_root_ses->ses_count++;
++}
++
++static inline bool dfs_src_pathname_equal(const char *s1, const char *s2)
++{
++	if (strlen(s1) != strlen(s2))
++		return false;
++	for (; *s1; s1++, s2++) {
++		if (*s1 == '/' || *s1 == '\\') {
++			if (*s2 != '/' && *s2 != '\\')
++				return false;
++		} else if (tolower(*s1) != tolower(*s2))
++			return false;
++	}
++	return true;
++}
++
+ #endif			/* _CIFSPROTO_H */
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index 1cbb905879957..2c573062ec874 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -403,8 +403,10 @@ static int __reconnect_target_unlocked(struct TCP_Server_Info *server, const cha
+ 		if (server->hostname != target) {
+ 			hostname = extract_hostname(target);
+ 			if (!IS_ERR(hostname)) {
++				spin_lock(&server->srv_lock);
+ 				kfree(server->hostname);
+ 				server->hostname = hostname;
++				spin_unlock(&server->srv_lock);
+ 			} else {
+ 				cifs_dbg(FYI, "%s: couldn't extract hostname or address from dfs target: %ld\n",
+ 					 __func__, PTR_ERR(hostname));
+@@ -452,7 +454,6 @@ static int reconnect_target_unlocked(struct TCP_Server_Info *server, struct dfs_
+ static int reconnect_dfs_server(struct TCP_Server_Info *server)
+ {
+ 	int rc = 0;
+-	const char *refpath = server->current_fullpath + 1;
+ 	struct dfs_cache_tgt_list tl = DFS_CACHE_TGT_LIST_INIT(tl);
+ 	struct dfs_cache_tgt_iterator *target_hint = NULL;
+ 	int num_targets = 0;
+@@ -465,8 +466,10 @@ static int reconnect_dfs_server(struct TCP_Server_Info *server)
+ 	 * through /proc/fs/cifs/dfscache or the target list is empty due to server settings after
+ 	 * refreshing the referral, so, in this case, default it to 1.
+ 	 */
+-	if (!dfs_cache_noreq_find(refpath, NULL, &tl))
++	mutex_lock(&server->refpath_lock);
++	if (!dfs_cache_noreq_find(server->leaf_fullpath + 1, NULL, &tl))
+ 		num_targets = dfs_cache_get_nr_tgts(&tl);
++	mutex_unlock(&server->refpath_lock);
+ 	if (!num_targets)
+ 		num_targets = 1;
+ 
+@@ -510,7 +513,9 @@ static int reconnect_dfs_server(struct TCP_Server_Info *server)
+ 		mod_delayed_work(cifsiod_wq, &server->reconnect, 0);
+ 	} while (server->tcpStatus == CifsNeedReconnect);
+ 
+-	dfs_cache_noreq_update_tgthint(refpath, target_hint);
++	mutex_lock(&server->refpath_lock);
++	dfs_cache_noreq_update_tgthint(server->leaf_fullpath + 1, target_hint);
++	mutex_unlock(&server->refpath_lock);
+ 	dfs_cache_free_tgts(&tl);
+ 
+ 	/* Need to set up echo worker again once connection has been established */
+@@ -561,9 +566,7 @@ cifs_echo_request(struct work_struct *work)
+ 		goto requeue_echo;
+ 
+ 	rc = server->ops->echo ? server->ops->echo(server) : -ENOSYS;
+-	if (rc)
+-		cifs_dbg(FYI, "Unable to send echo request to server: %s\n",
+-			 server->hostname);
++	cifs_server_dbg(FYI, "send echo request: rc = %d\n", rc);
+ 
+ 	/* Check witness registrations */
+ 	cifs_swn_check();
+@@ -993,10 +996,8 @@ static void clean_demultiplex_info(struct TCP_Server_Info *server)
+ 		 */
+ 	}
+ 
+-#ifdef CONFIG_CIFS_DFS_UPCALL
+ 	kfree(server->origin_fullpath);
+ 	kfree(server->leaf_fullpath);
+-#endif
+ 	kfree(server);
+ 
+ 	length = atomic_dec_return(&tcpSesAllocCount);
+@@ -1384,26 +1385,13 @@ match_security(struct TCP_Server_Info *server, struct smb3_fs_context *ctx)
+ 	return true;
+ }
+ 
+-static bool dfs_src_pathname_equal(const char *s1, const char *s2)
+-{
+-	if (strlen(s1) != strlen(s2))
+-		return false;
+-	for (; *s1; s1++, s2++) {
+-		if (*s1 == '/' || *s1 == '\\') {
+-			if (*s2 != '/' && *s2 != '\\')
+-				return false;
+-		} else if (tolower(*s1) != tolower(*s2))
+-			return false;
+-	}
+-	return true;
+-}
+-
+ /* this function must be called with srv_lock held */
+-static int match_server(struct TCP_Server_Info *server, struct smb3_fs_context *ctx,
+-			bool dfs_super_cmp)
++static int match_server(struct TCP_Server_Info *server, struct smb3_fs_context *ctx)
+ {
+ 	struct sockaddr *addr = (struct sockaddr *)&ctx->dstaddr;
+ 
++	lockdep_assert_held(&server->srv_lock);
++
+ 	if (ctx->nosharesock)
+ 		return 0;
+ 
+@@ -1429,27 +1417,41 @@ static int match_server(struct TCP_Server_Info *server, struct smb3_fs_context *
+ 			       (struct sockaddr *)&server->srcaddr))
+ 		return 0;
+ 	/*
+-	 * When matching DFS superblocks, we only check for original source pathname as the
+-	 * currently connected target might be different than the one parsed earlier in i.e.
+-	 * mount.cifs(8).
++	 * - Match for an DFS tcon (@server->origin_fullpath).
++	 * - Match for an DFS root server connection (@server->leaf_fullpath).
++	 * - If none of the above and @ctx->leaf_fullpath is set, then
++	 *   it is a new DFS connection.
++	 * - If 'nodfs' mount option was passed, then match only connections
++	 *   that have no DFS referrals set
++	 *   (e.g. can't failover to other targets).
+ 	 */
+-	if (dfs_super_cmp) {
+-		if (!ctx->source || !server->origin_fullpath ||
+-		    !dfs_src_pathname_equal(server->origin_fullpath, ctx->source))
+-			return 0;
+-	} else {
+-		/* Skip addr, hostname and port matching for DFS connections */
+-		if (server->leaf_fullpath) {
++	if (!ctx->nodfs) {
++		if (ctx->source && server->origin_fullpath) {
++			if (!dfs_src_pathname_equal(ctx->source,
++						    server->origin_fullpath))
++				return 0;
++		} else if (server->leaf_fullpath) {
+ 			if (!ctx->leaf_fullpath ||
+-			    strcasecmp(server->leaf_fullpath, ctx->leaf_fullpath))
++			    strcasecmp(server->leaf_fullpath,
++				       ctx->leaf_fullpath))
+ 				return 0;
+-		} else if (strcasecmp(server->hostname, ctx->server_hostname) ||
+-			   !match_server_address(server, addr) ||
+-			   !match_port(server, addr)) {
++		} else if (ctx->leaf_fullpath) {
+ 			return 0;
+ 		}
++	} else if (server->origin_fullpath || server->leaf_fullpath) {
++		return 0;
+ 	}
+ 
++	/*
++	 * Match for a regular connection (address/hostname/port) which has no
++	 * DFS referrals set.
++	 */
++	if (!server->origin_fullpath && !server->leaf_fullpath &&
++	    (strcasecmp(server->hostname, ctx->server_hostname) ||
++	     !match_server_address(server, addr) ||
++	     !match_port(server, addr)))
++		return 0;
++
+ 	if (!match_security(server, ctx))
+ 		return 0;
+ 
+@@ -1480,7 +1482,7 @@ cifs_find_tcp_session(struct smb3_fs_context *ctx)
+ 		 * Skip ses channels since they're only handled in lower layers
+ 		 * (e.g. cifs_send_recv).
+ 		 */
+-		if (CIFS_SERVER_IS_CHAN(server) || !match_server(server, ctx, false)) {
++		if (CIFS_SERVER_IS_CHAN(server) || !match_server(server, ctx)) {
+ 			spin_unlock(&server->srv_lock);
+ 			continue;
+ 		}
+@@ -1580,7 +1582,6 @@ cifs_get_tcp_session(struct smb3_fs_context *ctx,
+ 			rc = -ENOMEM;
+ 			goto out_err;
+ 		}
+-		tcp_ses->current_fullpath = tcp_ses->leaf_fullpath;
+ 	}
+ 
+ 	if (ctx->nosharesock)
+@@ -1810,7 +1811,9 @@ cifs_setup_ipc(struct cifs_ses *ses, struct smb3_fs_context *ctx)
+ 	if (tcon == NULL)
+ 		return -ENOMEM;
+ 
++	spin_lock(&server->srv_lock);
+ 	scnprintf(unc, sizeof(unc), "\\\\%s\\IPC$", server->hostname);
++	spin_unlock(&server->srv_lock);
+ 
+ 	xid = get_xid();
+ 	tcon->ses = ses;
+@@ -1863,7 +1866,7 @@ cifs_free_ipc(struct cifs_ses *ses)
+ static struct cifs_ses *
+ cifs_find_smb_ses(struct TCP_Server_Info *server, struct smb3_fs_context *ctx)
+ {
+-	struct cifs_ses *ses;
++	struct cifs_ses *ses, *ret = NULL;
+ 
+ 	spin_lock(&cifs_tcp_ses_lock);
+ 	list_for_each_entry(ses, &server->smb_ses_list, smb_ses_list) {
+@@ -1873,23 +1876,22 @@ cifs_find_smb_ses(struct TCP_Server_Info *server, struct smb3_fs_context *ctx)
+ 			continue;
+ 		}
+ 		spin_lock(&ses->chan_lock);
+-		if (!match_session(ses, ctx)) {
++		if (match_session(ses, ctx)) {
+ 			spin_unlock(&ses->chan_lock);
+ 			spin_unlock(&ses->ses_lock);
+-			continue;
++			ret = ses;
++			break;
+ 		}
+ 		spin_unlock(&ses->chan_lock);
+ 		spin_unlock(&ses->ses_lock);
+-
+-		++ses->ses_count;
+-		spin_unlock(&cifs_tcp_ses_lock);
+-		return ses;
+ 	}
++	if (ret)
++		cifs_smb_ses_inc_refcount(ret);
+ 	spin_unlock(&cifs_tcp_ses_lock);
+-	return NULL;
++	return ret;
+ }
+ 
+-void cifs_put_smb_ses(struct cifs_ses *ses)
++void __cifs_put_smb_ses(struct cifs_ses *ses)
+ {
+ 	unsigned int rc, xid;
+ 	unsigned int chan_count;
+@@ -2240,6 +2242,8 @@ cifs_get_smb_ses(struct TCP_Server_Info *server, struct smb3_fs_context *ctx)
+ 	 */
+ 	spin_lock(&cifs_tcp_ses_lock);
+ 	ses->dfs_root_ses = ctx->dfs_root_ses;
++	if (ses->dfs_root_ses)
++		ses->dfs_root_ses->ses_count++;
+ 	list_add(&ses->smb_ses_list, &server->smb_ses_list);
+ 	spin_unlock(&cifs_tcp_ses_lock);
+ 
+@@ -2256,12 +2260,15 @@ get_ses_fail:
+ }
+ 
+ /* this function must be called with tc_lock held */
+-static int match_tcon(struct cifs_tcon *tcon, struct smb3_fs_context *ctx, bool dfs_super_cmp)
++static int match_tcon(struct cifs_tcon *tcon, struct smb3_fs_context *ctx)
+ {
++	struct TCP_Server_Info *server = tcon->ses->server;
++
+ 	if (tcon->status == TID_EXITING)
+ 		return 0;
+-	/* Skip UNC validation when matching DFS superblocks */
+-	if (!dfs_super_cmp && strncmp(tcon->tree_name, ctx->UNC, MAX_TREE_SIZE))
++	/* Skip UNC validation when matching DFS connections or superblocks */
++	if (!server->origin_fullpath && !server->leaf_fullpath &&
++	    strncmp(tcon->tree_name, ctx->UNC, MAX_TREE_SIZE))
+ 		return 0;
+ 	if (tcon->seal != ctx->seal)
+ 		return 0;
+@@ -2284,7 +2291,7 @@ cifs_find_tcon(struct cifs_ses *ses, struct smb3_fs_context *ctx)
+ 	spin_lock(&cifs_tcp_ses_lock);
+ 	list_for_each_entry(tcon, &ses->tcon_list, tcon_list) {
+ 		spin_lock(&tcon->tc_lock);
+-		if (!match_tcon(tcon, ctx, false)) {
++		if (!match_tcon(tcon, ctx)) {
+ 			spin_unlock(&tcon->tc_lock);
+ 			continue;
+ 		}
+@@ -2330,6 +2337,9 @@ cifs_put_tcon(struct cifs_tcon *tcon)
+ 
+ 	/* cancel polling of interfaces */
+ 	cancel_delayed_work_sync(&tcon->query_interfaces);
++#ifdef CONFIG_CIFS_DFS_UPCALL
++	cancel_delayed_work_sync(&tcon->dfs_cache_work);
++#endif
+ 
+ 	if (tcon->use_witness) {
+ 		int rc;
+@@ -2577,7 +2587,9 @@ cifs_get_tcon(struct cifs_ses *ses, struct smb3_fs_context *ctx)
+ 		queue_delayed_work(cifsiod_wq, &tcon->query_interfaces,
+ 				   (SMB_INTERFACE_POLL_INTERVAL * HZ));
+ 	}
+-
++#ifdef CONFIG_CIFS_DFS_UPCALL
++	INIT_DELAYED_WORK(&tcon->dfs_cache_work, dfs_cache_refresh);
++#endif
+ 	spin_lock(&cifs_tcp_ses_lock);
+ 	list_add(&tcon->tcon_list, &ses->tcon_list);
+ 	spin_unlock(&cifs_tcp_ses_lock);
+@@ -2655,9 +2667,11 @@ compare_mount_options(struct super_block *sb, struct cifs_mnt_data *mnt_data)
+ 	return 1;
+ }
+ 
+-static int
+-match_prepath(struct super_block *sb, struct cifs_mnt_data *mnt_data)
++static int match_prepath(struct super_block *sb,
++			 struct TCP_Server_Info *server,
++			 struct cifs_mnt_data *mnt_data)
+ {
++	struct smb3_fs_context *ctx = mnt_data->ctx;
+ 	struct cifs_sb_info *old = CIFS_SB(sb);
+ 	struct cifs_sb_info *new = mnt_data->cifs_sb;
+ 	bool old_set = (old->mnt_cifs_flags & CIFS_MOUNT_USE_PREFIX_PATH) &&
+@@ -2665,6 +2679,10 @@ match_prepath(struct super_block *sb, struct cifs_mnt_data *mnt_data)
+ 	bool new_set = (new->mnt_cifs_flags & CIFS_MOUNT_USE_PREFIX_PATH) &&
+ 		new->prepath;
+ 
++	if (server->origin_fullpath &&
++	    dfs_src_pathname_equal(server->origin_fullpath, ctx->source))
++		return 1;
++
+ 	if (old_set && new_set && !strcmp(new->prepath, old->prepath))
+ 		return 1;
+ 	else if (!old_set && !new_set)
+@@ -2683,7 +2701,6 @@ cifs_match_super(struct super_block *sb, void *data)
+ 	struct cifs_ses *ses;
+ 	struct cifs_tcon *tcon;
+ 	struct tcon_link *tlink;
+-	bool dfs_super_cmp;
+ 	int rc = 0;
+ 
+ 	spin_lock(&cifs_tcp_ses_lock);
+@@ -2698,18 +2715,16 @@ cifs_match_super(struct super_block *sb, void *data)
+ 	ses = tcon->ses;
+ 	tcp_srv = ses->server;
+ 
+-	dfs_super_cmp = IS_ENABLED(CONFIG_CIFS_DFS_UPCALL) && tcp_srv->origin_fullpath;
+-
+ 	ctx = mnt_data->ctx;
+ 
+ 	spin_lock(&tcp_srv->srv_lock);
+ 	spin_lock(&ses->ses_lock);
+ 	spin_lock(&ses->chan_lock);
+ 	spin_lock(&tcon->tc_lock);
+-	if (!match_server(tcp_srv, ctx, dfs_super_cmp) ||
++	if (!match_server(tcp_srv, ctx) ||
+ 	    !match_session(ses, ctx) ||
+-	    !match_tcon(tcon, ctx, dfs_super_cmp) ||
+-	    !match_prepath(sb, mnt_data)) {
++	    !match_tcon(tcon, ctx) ||
++	    !match_prepath(sb, tcp_srv, mnt_data)) {
+ 		rc = 0;
+ 		goto out;
+ 	}
+@@ -3454,8 +3469,6 @@ out:
+ 
+ error:
+ 	dfs_put_root_smb_sessions(&mnt_ctx.dfs_ses_list);
+-	kfree(mnt_ctx.origin_fullpath);
+-	kfree(mnt_ctx.leaf_fullpath);
+ 	cifs_mount_put_conns(&mnt_ctx);
+ 	return rc;
+ }
+diff --git a/fs/cifs/dfs.c b/fs/cifs/dfs.c
+index 3a11716b6e13e..a93dbca1411b2 100644
+--- a/fs/cifs/dfs.c
++++ b/fs/cifs/dfs.c
+@@ -99,7 +99,7 @@ static int get_session(struct cifs_mount_ctx *mnt_ctx, const char *full_path)
+ 	return rc;
+ }
+ 
+-static int get_root_smb_session(struct cifs_mount_ctx *mnt_ctx)
++static int add_root_smb_session(struct cifs_mount_ctx *mnt_ctx)
+ {
+ 	struct smb3_fs_context *ctx = mnt_ctx->fs_ctx;
+ 	struct dfs_root_ses *root_ses;
+@@ -127,7 +127,7 @@ static int get_dfs_conn(struct cifs_mount_ctx *mnt_ctx, const char *ref_path, co
+ {
+ 	struct smb3_fs_context *ctx = mnt_ctx->fs_ctx;
+ 	struct dfs_info3_param ref = {};
+-	bool is_refsrv = false;
++	bool is_refsrv;
+ 	int rc, rc2;
+ 
+ 	rc = dfs_cache_get_tgt_referral(ref_path + 1, tit, &ref);
+@@ -157,8 +157,10 @@ static int get_dfs_conn(struct cifs_mount_ctx *mnt_ctx, const char *ref_path, co
+ 		rc = cifs_is_path_remote(mnt_ctx);
+ 	}
+ 
++	dfs_cache_noreq_update_tgthint(ref_path + 1, tit);
++
+ 	if (rc == -EREMOTE && is_refsrv) {
+-		rc2 = get_root_smb_session(mnt_ctx);
++		rc2 = add_root_smb_session(mnt_ctx);
+ 		if (rc2)
+ 			rc = rc2;
+ 	}
+@@ -248,16 +250,19 @@ static int __dfs_mount_share(struct cifs_mount_ctx *mnt_ctx)
+ 		tcon = mnt_ctx->tcon;
+ 
+ 		mutex_lock(&server->refpath_lock);
++		spin_lock(&server->srv_lock);
+ 		if (!server->origin_fullpath) {
+ 			server->origin_fullpath = origin_fullpath;
+-			server->current_fullpath = server->leaf_fullpath;
+ 			origin_fullpath = NULL;
+ 		}
++		spin_unlock(&server->srv_lock);
+ 		mutex_unlock(&server->refpath_lock);
+ 
+ 		if (list_empty(&tcon->dfs_ses_list)) {
+ 			list_replace_init(&mnt_ctx->dfs_ses_list,
+ 					  &tcon->dfs_ses_list);
++			queue_delayed_work(dfscache_wq, &tcon->dfs_cache_work,
++					   dfs_cache_get_ttl() * HZ);
+ 		} else {
+ 			dfs_put_root_smb_sessions(&mnt_ctx->dfs_ses_list);
+ 		}
+@@ -272,15 +277,21 @@ out:
+ 
+ int dfs_mount_share(struct cifs_mount_ctx *mnt_ctx, bool *isdfs)
+ {
+-	struct cifs_sb_info *cifs_sb = mnt_ctx->cifs_sb;
+ 	struct smb3_fs_context *ctx = mnt_ctx->fs_ctx;
++	struct cifs_ses *ses;
++	char *source = ctx->source;
++	bool nodfs = ctx->nodfs;
+ 	int rc;
+ 
+ 	*isdfs = false;
+-
++	/* Temporarily set @ctx->source to NULL as we're not matching DFS
++	 * superblocks yet.  See cifs_match_super() and match_server().
++	 */
++	ctx->source = NULL;
+ 	rc = get_session(mnt_ctx, NULL);
+ 	if (rc)
+-		return rc;
++		goto out;
++
+ 	ctx->dfs_root_ses = mnt_ctx->ses;
+ 	/*
+ 	 * If called with 'nodfs' mount option, then skip DFS resolving.  Otherwise unconditionally
+@@ -289,23 +300,41 @@ int dfs_mount_share(struct cifs_mount_ctx *mnt_ctx, bool *isdfs)
+ 	 * Skip prefix path to provide support for DFS referrals from w2k8 servers which don't seem
+ 	 * to respond with PATH_NOT_COVERED to requests that include the prefix.
+ 	 */
+-	if ((cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NO_DFS) ||
+-	    dfs_get_referral(mnt_ctx, ctx->UNC + 1, NULL, NULL)) {
++	if (!nodfs) {
++		rc = dfs_get_referral(mnt_ctx, ctx->UNC + 1, NULL, NULL);
++		if (rc) {
++			if (rc != -ENOENT && rc != -EOPNOTSUPP)
++				goto out;
++			nodfs = true;
++		}
++	}
++	if (nodfs) {
+ 		rc = cifs_mount_get_tcon(mnt_ctx);
+-		if (rc)
+-			return rc;
+-
+-		rc = cifs_is_path_remote(mnt_ctx);
+-		if (!rc || rc != -EREMOTE)
+-			return rc;
++		if (!rc)
++			rc = cifs_is_path_remote(mnt_ctx);
++		goto out;
+ 	}
+ 
+ 	*isdfs = true;
+-	rc = get_root_smb_session(mnt_ctx);
+-	if (rc)
+-		return rc;
+-
+-	return __dfs_mount_share(mnt_ctx);
++	/*
++	 * Prevent DFS root session of being put in the first call to
++	 * cifs_mount_put_conns().  If another DFS root server was not found
++	 * while chasing the referrals (@ctx->dfs_root_ses == @ses), then we
++	 * can safely put extra refcount of @ses.
++	 */
++	ses = mnt_ctx->ses;
++	mnt_ctx->ses = NULL;
++	mnt_ctx->server = NULL;
++	rc = __dfs_mount_share(mnt_ctx);
++	if (ses == ctx->dfs_root_ses)
++		cifs_put_smb_ses(ses);
++out:
++	/*
++	 * Restore previous value of @ctx->source so DFS superblock can be
++	 * matched in cifs_match_super().
++	 */
++	ctx->source = source;
++	return rc;
+ }
+ 
+ /* Update dfs referral path of superblock */
+@@ -342,10 +371,11 @@ static int update_server_fullpath(struct TCP_Server_Info *server, struct cifs_sb
+ 		rc = PTR_ERR(npath);
+ 	} else {
+ 		mutex_lock(&server->refpath_lock);
++		spin_lock(&server->srv_lock);
+ 		kfree(server->leaf_fullpath);
+ 		server->leaf_fullpath = npath;
++		spin_unlock(&server->srv_lock);
+ 		mutex_unlock(&server->refpath_lock);
+-		server->current_fullpath = server->leaf_fullpath;
+ 	}
+ 	return rc;
+ }
+@@ -374,6 +404,54 @@ static int target_share_matches_server(struct TCP_Server_Info *server, char *sha
+ 	return rc;
+ }
+ 
++static void __tree_connect_ipc(const unsigned int xid, char *tree,
++			       struct cifs_sb_info *cifs_sb,
++			       struct cifs_ses *ses)
++{
++	struct TCP_Server_Info *server = ses->server;
++	struct cifs_tcon *tcon = ses->tcon_ipc;
++	int rc;
++
++	spin_lock(&ses->ses_lock);
++	spin_lock(&ses->chan_lock);
++	if (cifs_chan_needs_reconnect(ses, server) ||
++	    ses->ses_status != SES_GOOD) {
++		spin_unlock(&ses->chan_lock);
++		spin_unlock(&ses->ses_lock);
++		cifs_server_dbg(FYI, "%s: skipping ipc reconnect due to disconnected ses\n",
++				__func__);
++		return;
++	}
++	spin_unlock(&ses->chan_lock);
++	spin_unlock(&ses->ses_lock);
++
++	cifs_server_lock(server);
++	scnprintf(tree, MAX_TREE_SIZE, "\\\\%s\\IPC$", server->hostname);
++	cifs_server_unlock(server);
++
++	rc = server->ops->tree_connect(xid, ses, tree, tcon,
++				       cifs_sb->local_nls);
++	cifs_server_dbg(FYI, "%s: tree_reconnect %s: %d\n", __func__, tree, rc);
++	spin_lock(&tcon->tc_lock);
++	if (rc) {
++		tcon->status = TID_NEED_TCON;
++	} else {
++		tcon->status = TID_GOOD;
++		tcon->need_reconnect = false;
++	}
++	spin_unlock(&tcon->tc_lock);
++}
++
++static void tree_connect_ipc(const unsigned int xid, char *tree,
++			     struct cifs_sb_info *cifs_sb,
++			     struct cifs_tcon *tcon)
++{
++	struct cifs_ses *ses = tcon->ses;
++
++	__tree_connect_ipc(xid, tree, cifs_sb, ses);
++	__tree_connect_ipc(xid, tree, cifs_sb, CIFS_DFS_ROOT_SES(ses));
++}
++
+ static int __tree_connect_dfs_target(const unsigned int xid, struct cifs_tcon *tcon,
+ 				     struct cifs_sb_info *cifs_sb, char *tree, bool islink,
+ 				     struct dfs_cache_tgt_list *tl)
+@@ -382,7 +460,6 @@ static int __tree_connect_dfs_target(const unsigned int xid, struct cifs_tcon *t
+ 	struct TCP_Server_Info *server = tcon->ses->server;
+ 	const struct smb_version_operations *ops = server->ops;
+ 	struct cifs_ses *root_ses = CIFS_DFS_ROOT_SES(tcon->ses);
+-	struct cifs_tcon *ipc = root_ses->tcon_ipc;
+ 	char *share = NULL, *prefix = NULL;
+ 	struct dfs_cache_tgt_iterator *tit;
+ 	bool target_match;
+@@ -403,7 +480,7 @@ static int __tree_connect_dfs_target(const unsigned int xid, struct cifs_tcon *t
+ 		share = prefix = NULL;
+ 
+ 		/* Check if share matches with tcp ses */
+-		rc = dfs_cache_get_tgt_share(server->current_fullpath + 1, tit, &share, &prefix);
++		rc = dfs_cache_get_tgt_share(server->leaf_fullpath + 1, tit, &share, &prefix);
+ 		if (rc) {
+ 			cifs_dbg(VFS, "%s: failed to parse target share: %d\n", __func__, rc);
+ 			break;
+@@ -417,19 +494,15 @@ static int __tree_connect_dfs_target(const unsigned int xid, struct cifs_tcon *t
+ 			continue;
+ 		}
+ 
+-		dfs_cache_noreq_update_tgthint(server->current_fullpath + 1, tit);
+-
+-		if (ipc->need_reconnect) {
+-			scnprintf(tree, MAX_TREE_SIZE, "\\\\%s\\IPC$", server->hostname);
+-			rc = ops->tree_connect(xid, ipc->ses, tree, ipc, cifs_sb->local_nls);
+-			cifs_dbg(FYI, "%s: reconnect ipc: %d\n", __func__, rc);
+-		}
++		dfs_cache_noreq_update_tgthint(server->leaf_fullpath + 1, tit);
++		tree_connect_ipc(xid, tree, cifs_sb, tcon);
+ 
+ 		scnprintf(tree, MAX_TREE_SIZE, "\\%s", share);
+ 		if (!islink) {
+ 			rc = ops->tree_connect(xid, tcon->ses, tree, tcon, cifs_sb->local_nls);
+ 			break;
+ 		}
++
+ 		/*
+ 		 * If no dfs referrals were returned from link target, then just do a TREE_CONNECT
+ 		 * to it.  Otherwise, cache the dfs referral and then mark current tcp ses for
+@@ -539,8 +612,8 @@ int cifs_tree_connect(const unsigned int xid, struct cifs_tcon *tcon, const stru
+ 	cifs_sb = CIFS_SB(sb);
+ 
+ 	/* If it is not dfs or there was no cached dfs referral, then reconnect to same share */
+-	if (!server->current_fullpath ||
+-	    dfs_cache_noreq_find(server->current_fullpath + 1, &ref, &tl)) {
++	if (!server->leaf_fullpath ||
++	    dfs_cache_noreq_find(server->leaf_fullpath + 1, &ref, &tl)) {
+ 		rc = ops->tree_connect(xid, tcon->ses, tcon->tree_name, tcon, cifs_sb->local_nls);
+ 		goto out;
+ 	}
+diff --git a/fs/cifs/dfs.h b/fs/cifs/dfs.h
+index 0b8cbf721fff6..1c90df5ecfbda 100644
+--- a/fs/cifs/dfs.h
++++ b/fs/cifs/dfs.h
+@@ -43,8 +43,12 @@ static inline char *dfs_get_automount_devname(struct dentry *dentry, void *page)
+ 	size_t len;
+ 	char *s;
+ 
+-	if (unlikely(!server->origin_fullpath))
++	spin_lock(&server->srv_lock);
++	if (unlikely(!server->origin_fullpath)) {
++		spin_unlock(&server->srv_lock);
+ 		return ERR_PTR(-EREMOTE);
++	}
++	spin_unlock(&server->srv_lock);
+ 
+ 	s = dentry_path_raw(dentry, page, PATH_MAX);
+ 	if (IS_ERR(s))
+@@ -53,13 +57,18 @@ static inline char *dfs_get_automount_devname(struct dentry *dentry, void *page)
+ 	if (!s[1])
+ 		s++;
+ 
++	spin_lock(&server->srv_lock);
+ 	len = strlen(server->origin_fullpath);
+-	if (s < (char *)page + len)
++	if (s < (char *)page + len) {
++		spin_unlock(&server->srv_lock);
+ 		return ERR_PTR(-ENAMETOOLONG);
++	}
+ 
+ 	s -= len;
+ 	memcpy(s, server->origin_fullpath, len);
++	spin_unlock(&server->srv_lock);
+ 	convert_delimiter(s, '/');
++
+ 	return s;
+ }
+ 
+diff --git a/fs/cifs/dfs_cache.c b/fs/cifs/dfs_cache.c
+index 30cbdf8514a59..1513b2709889b 100644
+--- a/fs/cifs/dfs_cache.c
++++ b/fs/cifs/dfs_cache.c
+@@ -20,12 +20,14 @@
+ #include "cifs_unicode.h"
+ #include "smb2glob.h"
+ #include "dns_resolve.h"
++#include "dfs.h"
+ 
+ #include "dfs_cache.h"
+ 
+-#define CACHE_HTABLE_SIZE 32
+-#define CACHE_MAX_ENTRIES 64
+-#define CACHE_MIN_TTL 120 /* 2 minutes */
++#define CACHE_HTABLE_SIZE	32
++#define CACHE_MAX_ENTRIES	64
++#define CACHE_MIN_TTL		120 /* 2 minutes */
++#define CACHE_DEFAULT_TTL	300 /* 5 minutes */
+ 
+ #define IS_DFS_INTERLINK(v) (((v) & DFSREF_REFERRAL_SERVER) && !((v) & DFSREF_STORAGE_SERVER))
+ 
+@@ -50,10 +52,9 @@ struct cache_entry {
+ };
+ 
+ static struct kmem_cache *cache_slab __read_mostly;
+-static struct workqueue_struct *dfscache_wq __read_mostly;
++struct workqueue_struct *dfscache_wq;
+ 
+-static int cache_ttl;
+-static DEFINE_SPINLOCK(cache_ttl_lock);
++atomic_t dfs_cache_ttl;
+ 
+ static struct nls_table *cache_cp;
+ 
+@@ -65,10 +66,6 @@ static atomic_t cache_count;
+ static struct hlist_head cache_htable[CACHE_HTABLE_SIZE];
+ static DECLARE_RWSEM(htable_rw_lock);
+ 
+-static void refresh_cache_worker(struct work_struct *work);
+-
+-static DECLARE_DELAYED_WORK(refresh_task, refresh_cache_worker);
+-
+ /**
+  * dfs_cache_canonical_path - get a canonical DFS path
+  *
+@@ -290,7 +287,9 @@ int dfs_cache_init(void)
+ 	int rc;
+ 	int i;
+ 
+-	dfscache_wq = alloc_workqueue("cifs-dfscache", WQ_FREEZABLE | WQ_UNBOUND, 1);
++	dfscache_wq = alloc_workqueue("cifs-dfscache",
++				      WQ_UNBOUND|WQ_FREEZABLE|WQ_MEM_RECLAIM,
++				      0);
+ 	if (!dfscache_wq)
+ 		return -ENOMEM;
+ 
+@@ -306,6 +305,7 @@ int dfs_cache_init(void)
+ 		INIT_HLIST_HEAD(&cache_htable[i]);
+ 
+ 	atomic_set(&cache_count, 0);
++	atomic_set(&dfs_cache_ttl, CACHE_DEFAULT_TTL);
+ 	cache_cp = load_nls("utf8");
+ 	if (!cache_cp)
+ 		cache_cp = load_nls_default();
+@@ -480,6 +480,7 @@ static struct cache_entry *add_cache_entry_locked(struct dfs_info3_param *refs,
+ 	int rc;
+ 	struct cache_entry *ce;
+ 	unsigned int hash;
++	int ttl;
+ 
+ 	WARN_ON(!rwsem_is_locked(&htable_rw_lock));
+ 
+@@ -496,15 +497,8 @@ static struct cache_entry *add_cache_entry_locked(struct dfs_info3_param *refs,
+ 	if (IS_ERR(ce))
+ 		return ce;
+ 
+-	spin_lock(&cache_ttl_lock);
+-	if (!cache_ttl) {
+-		cache_ttl = ce->ttl;
+-		queue_delayed_work(dfscache_wq, &refresh_task, cache_ttl * HZ);
+-	} else {
+-		cache_ttl = min_t(int, cache_ttl, ce->ttl);
+-		mod_delayed_work(dfscache_wq, &refresh_task, cache_ttl * HZ);
+-	}
+-	spin_unlock(&cache_ttl_lock);
++	ttl = min_t(int, atomic_read(&dfs_cache_ttl), ce->ttl);
++	atomic_set(&dfs_cache_ttl, ttl);
+ 
+ 	hlist_add_head(&ce->hlist, &cache_htable[hash]);
+ 	dump_ce(ce);
+@@ -616,7 +610,6 @@ static struct cache_entry *lookup_cache_entry(const char *path)
+  */
+ void dfs_cache_destroy(void)
+ {
+-	cancel_delayed_work_sync(&refresh_task);
+ 	unload_nls(cache_cp);
+ 	flush_cache_ents();
+ 	kmem_cache_destroy(cache_slab);
+@@ -1142,6 +1135,7 @@ static bool target_share_equal(struct TCP_Server_Info *server, const char *s1, c
+  * target shares in @refs.
+  */
+ static void mark_for_reconnect_if_needed(struct TCP_Server_Info *server,
++					 const char *path,
+ 					 struct dfs_cache_tgt_list *old_tl,
+ 					 struct dfs_cache_tgt_list *new_tl)
+ {
+@@ -1153,8 +1147,10 @@ static void mark_for_reconnect_if_needed(struct TCP_Server_Info *server,
+ 		     nit = dfs_cache_get_next_tgt(new_tl, nit)) {
+ 			if (target_share_equal(server,
+ 					       dfs_cache_get_tgt_name(oit),
+-					       dfs_cache_get_tgt_name(nit)))
++					       dfs_cache_get_tgt_name(nit))) {
++				dfs_cache_noreq_update_tgthint(path, nit);
+ 				return;
++			}
+ 		}
+ 	}
+ 
+@@ -1162,13 +1158,28 @@ static void mark_for_reconnect_if_needed(struct TCP_Server_Info *server,
+ 	cifs_signal_cifsd_for_reconnect(server, true);
+ }
+ 
++static bool is_ses_good(struct cifs_ses *ses)
++{
++	struct TCP_Server_Info *server = ses->server;
++	struct cifs_tcon *tcon = ses->tcon_ipc;
++	bool ret;
++
++	spin_lock(&ses->ses_lock);
++	spin_lock(&ses->chan_lock);
++	ret = !cifs_chan_needs_reconnect(ses, server) &&
++		ses->ses_status == SES_GOOD &&
++		!tcon->need_reconnect;
++	spin_unlock(&ses->chan_lock);
++	spin_unlock(&ses->ses_lock);
++	return ret;
++}
++
+ /* Refresh dfs referral of tcon and mark it for reconnect if needed */
+-static int __refresh_tcon(const char *path, struct cifs_tcon *tcon, bool force_refresh)
++static int __refresh_tcon(const char *path, struct cifs_ses *ses, bool force_refresh)
+ {
+ 	struct dfs_cache_tgt_list old_tl = DFS_CACHE_TGT_LIST_INIT(old_tl);
+ 	struct dfs_cache_tgt_list new_tl = DFS_CACHE_TGT_LIST_INIT(new_tl);
+-	struct cifs_ses *ses = CIFS_DFS_ROOT_SES(tcon->ses);
+-	struct cifs_tcon *ipc = ses->tcon_ipc;
++	struct TCP_Server_Info *server = ses->server;
+ 	bool needs_refresh = false;
+ 	struct cache_entry *ce;
+ 	unsigned int xid;
+@@ -1190,20 +1201,19 @@ static int __refresh_tcon(const char *path, struct cifs_tcon *tcon, bool force_r
+ 		goto out;
+ 	}
+ 
+-	spin_lock(&ipc->tc_lock);
+-	if (ipc->status != TID_GOOD) {
+-		spin_unlock(&ipc->tc_lock);
+-		cifs_dbg(FYI, "%s: skip cache refresh due to disconnected ipc\n", __func__);
++	ses = CIFS_DFS_ROOT_SES(ses);
++	if (!is_ses_good(ses)) {
++		cifs_dbg(FYI, "%s: skip cache refresh due to disconnected ipc\n",
++			 __func__);
+ 		goto out;
+ 	}
+-	spin_unlock(&ipc->tc_lock);
+ 
+ 	ce = cache_refresh_path(xid, ses, path, true);
+ 	if (!IS_ERR(ce)) {
+ 		rc = get_targets(ce, &new_tl);
+ 		up_read(&htable_rw_lock);
+ 		cifs_dbg(FYI, "%s: get_targets: %d\n", __func__, rc);
+-		mark_for_reconnect_if_needed(tcon->ses->server, &old_tl, &new_tl);
++		mark_for_reconnect_if_needed(server, path, &old_tl, &new_tl);
+ 	}
+ 
+ out:
+@@ -1216,10 +1226,11 @@ out:
+ static int refresh_tcon(struct cifs_tcon *tcon, bool force_refresh)
+ {
+ 	struct TCP_Server_Info *server = tcon->ses->server;
++	struct cifs_ses *ses = tcon->ses;
+ 
+ 	mutex_lock(&server->refpath_lock);
+ 	if (server->leaf_fullpath)
+-		__refresh_tcon(server->leaf_fullpath + 1, tcon, force_refresh);
++		__refresh_tcon(server->leaf_fullpath + 1, ses, force_refresh);
+ 	mutex_unlock(&server->refpath_lock);
+ 	return 0;
+ }
+@@ -1263,56 +1274,32 @@ int dfs_cache_remount_fs(struct cifs_sb_info *cifs_sb)
+ 	return refresh_tcon(tcon, true);
+ }
+ 
+-/*
+- * Worker that will refresh DFS cache from all active mounts based on lowest TTL value
+- * from a DFS referral.
+- */
+-static void refresh_cache_worker(struct work_struct *work)
++/* Refresh all DFS referrals related to DFS tcon */
++void dfs_cache_refresh(struct work_struct *work)
+ {
+ 	struct TCP_Server_Info *server;
+-	struct cifs_tcon *tcon, *ntcon;
+-	struct list_head tcons;
++	struct dfs_root_ses *rses;
++	struct cifs_tcon *tcon;
+ 	struct cifs_ses *ses;
+ 
+-	INIT_LIST_HEAD(&tcons);
+-
+-	spin_lock(&cifs_tcp_ses_lock);
+-	list_for_each_entry(server, &cifs_tcp_ses_list, tcp_ses_list) {
+-		if (!server->leaf_fullpath)
+-			continue;
+-
+-		list_for_each_entry(ses, &server->smb_ses_list, smb_ses_list) {
+-			if (ses->tcon_ipc) {
+-				ses->ses_count++;
+-				list_add_tail(&ses->tcon_ipc->ulist, &tcons);
+-			}
+-			list_for_each_entry(tcon, &ses->tcon_list, tcon_list) {
+-				if (!tcon->ipc) {
+-					tcon->tc_count++;
+-					list_add_tail(&tcon->ulist, &tcons);
+-				}
+-			}
+-		}
+-	}
+-	spin_unlock(&cifs_tcp_ses_lock);
++	tcon = container_of(work, struct cifs_tcon, dfs_cache_work.work);
++	ses = tcon->ses;
++	server = ses->server;
+ 
+-	list_for_each_entry_safe(tcon, ntcon, &tcons, ulist) {
+-		struct TCP_Server_Info *server = tcon->ses->server;
+-
+-		list_del_init(&tcon->ulist);
++	mutex_lock(&server->refpath_lock);
++	if (server->leaf_fullpath)
++		__refresh_tcon(server->leaf_fullpath + 1, ses, false);
++	mutex_unlock(&server->refpath_lock);
+ 
++	list_for_each_entry(rses, &tcon->dfs_ses_list, list) {
++		ses = rses->ses;
++		server = ses->server;
+ 		mutex_lock(&server->refpath_lock);
+ 		if (server->leaf_fullpath)
+-			__refresh_tcon(server->leaf_fullpath + 1, tcon, false);
++			__refresh_tcon(server->leaf_fullpath + 1, ses, false);
+ 		mutex_unlock(&server->refpath_lock);
+-
+-		if (tcon->ipc)
+-			cifs_put_smb_ses(tcon->ses);
+-		else
+-			cifs_put_tcon(tcon);
+ 	}
+ 
+-	spin_lock(&cache_ttl_lock);
+-	queue_delayed_work(dfscache_wq, &refresh_task, cache_ttl * HZ);
+-	spin_unlock(&cache_ttl_lock);
++	queue_delayed_work(dfscache_wq, &tcon->dfs_cache_work,
++			   atomic_read(&dfs_cache_ttl) * HZ);
+ }
+diff --git a/fs/cifs/dfs_cache.h b/fs/cifs/dfs_cache.h
+index e0d39393035a9..c6d89cd6d4fd7 100644
+--- a/fs/cifs/dfs_cache.h
++++ b/fs/cifs/dfs_cache.h
+@@ -13,6 +13,9 @@
+ #include <linux/uuid.h>
+ #include "cifsglob.h"
+ 
++extern struct workqueue_struct *dfscache_wq;
++extern atomic_t dfs_cache_ttl;
++
+ #define DFS_CACHE_TGT_LIST_INIT(var) { .tl_numtgts = 0, .tl_list = LIST_HEAD_INIT((var).tl_list), }
+ 
+ struct dfs_cache_tgt_list {
+@@ -42,6 +45,7 @@ int dfs_cache_get_tgt_share(char *path, const struct dfs_cache_tgt_iterator *it,
+ 			    char **prefix);
+ char *dfs_cache_canonical_path(const char *path, const struct nls_table *cp, int remap);
+ int dfs_cache_remount_fs(struct cifs_sb_info *cifs_sb);
++void dfs_cache_refresh(struct work_struct *work);
+ 
+ static inline struct dfs_cache_tgt_iterator *
+ dfs_cache_get_next_tgt(struct dfs_cache_tgt_list *tl,
+@@ -89,4 +93,9 @@ dfs_cache_get_nr_tgts(const struct dfs_cache_tgt_list *tl)
+ 	return tl ? tl->tl_numtgts : 0;
+ }
+ 
++static inline int dfs_cache_get_ttl(void)
++{
++	return atomic_read(&dfs_cache_ttl);
++}
++
+ #endif /* _CIFS_DFS_CACHE_H */
+diff --git a/fs/cifs/file.c b/fs/cifs/file.c
+index b33d2e7b0f984..c5fcefdfd7976 100644
+--- a/fs/cifs/file.c
++++ b/fs/cifs/file.c
+@@ -4882,6 +4882,8 @@ void cifs_oplock_break(struct work_struct *work)
+ 	struct TCP_Server_Info *server = tcon->ses->server;
+ 	int rc = 0;
+ 	bool purge_cache = false;
++	struct cifs_deferred_close *dclose;
++	bool is_deferred = false;
+ 
+ 	wait_on_bit(&cinode->flags, CIFS_INODE_PENDING_WRITERS,
+ 			TASK_UNINTERRUPTIBLE);
+@@ -4917,6 +4919,20 @@ void cifs_oplock_break(struct work_struct *work)
+ 		cifs_dbg(VFS, "Push locks rc = %d\n", rc);
+ 
+ oplock_break_ack:
++	/*
++	 * When oplock break is received and there are no active
++	 * file handles but cached, then schedule deferred close immediately.
++	 * So, new open will not use cached handle.
++	 */
++	spin_lock(&CIFS_I(inode)->deferred_lock);
++	is_deferred = cifs_is_deferred_close(cfile, &dclose);
++	spin_unlock(&CIFS_I(inode)->deferred_lock);
++
++	if (!CIFS_CACHE_HANDLE(cinode) && is_deferred &&
++			cfile->deferred_close_scheduled && delayed_work_pending(&cfile->deferred)) {
++		cifs_close_deferred_file(cinode);
++	}
++
+ 	/*
+ 	 * releasing stale oplock after recent reconnect of smb session using
+ 	 * a now incorrect file handle is not a data integrity issue but do
+diff --git a/fs/cifs/ioctl.c b/fs/cifs/ioctl.c
+index 6419ec47c2a85..cb3be58cd55eb 100644
+--- a/fs/cifs/ioctl.c
++++ b/fs/cifs/ioctl.c
+@@ -239,7 +239,7 @@ static int cifs_dump_full_key(struct cifs_tcon *tcon, struct smb3_full_key_debug
+ 					 * section, we need to make sure it won't be released
+ 					 * so increment its refcount
+ 					 */
+-					ses->ses_count++;
++					cifs_smb_ses_inc_refcount(ses);
+ 					found = true;
+ 					goto search_end;
+ 				}
+diff --git a/fs/cifs/misc.c b/fs/cifs/misc.c
+index 7f085ed2d866b..cd914be905b24 100644
+--- a/fs/cifs/misc.c
++++ b/fs/cifs/misc.c
+@@ -749,7 +749,9 @@ cifs_close_deferred_file(struct cifsInodeInfo *cifs_inode)
+ 	list_for_each_entry(cfile, &cifs_inode->openFileList, flist) {
+ 		if (delayed_work_pending(&cfile->deferred)) {
+ 			if (cancel_delayed_work(&cfile->deferred)) {
++				spin_lock(&cifs_inode->deferred_lock);
+ 				cifs_del_deferred_close(cfile);
++				spin_unlock(&cifs_inode->deferred_lock);
+ 
+ 				tmp_list = kmalloc(sizeof(struct file_list), GFP_ATOMIC);
+ 				if (tmp_list == NULL)
+@@ -762,7 +764,7 @@ cifs_close_deferred_file(struct cifsInodeInfo *cifs_inode)
+ 	spin_unlock(&cifs_inode->open_file_lock);
+ 
+ 	list_for_each_entry_safe(tmp_list, tmp_next_list, &file_head, list) {
+-		_cifsFileInfo_put(tmp_list->cfile, true, false);
++		_cifsFileInfo_put(tmp_list->cfile, false, false);
+ 		list_del(&tmp_list->list);
+ 		kfree(tmp_list);
+ 	}
+@@ -780,7 +782,9 @@ cifs_close_all_deferred_files(struct cifs_tcon *tcon)
+ 	list_for_each_entry(cfile, &tcon->openFileList, tlist) {
+ 		if (delayed_work_pending(&cfile->deferred)) {
+ 			if (cancel_delayed_work(&cfile->deferred)) {
++				spin_lock(&CIFS_I(d_inode(cfile->dentry))->deferred_lock);
+ 				cifs_del_deferred_close(cfile);
++				spin_unlock(&CIFS_I(d_inode(cfile->dentry))->deferred_lock);
+ 
+ 				tmp_list = kmalloc(sizeof(struct file_list), GFP_ATOMIC);
+ 				if (tmp_list == NULL)
+@@ -815,7 +819,9 @@ cifs_close_deferred_file_under_dentry(struct cifs_tcon *tcon, const char *path)
+ 		if (strstr(full_path, path)) {
+ 			if (delayed_work_pending(&cfile->deferred)) {
+ 				if (cancel_delayed_work(&cfile->deferred)) {
++					spin_lock(&CIFS_I(d_inode(cfile->dentry))->deferred_lock);
+ 					cifs_del_deferred_close(cfile);
++					spin_unlock(&CIFS_I(d_inode(cfile->dentry))->deferred_lock);
+ 
+ 					tmp_list = kmalloc(sizeof(struct file_list), GFP_ATOMIC);
+ 					if (tmp_list == NULL)
+diff --git a/fs/cifs/sess.c b/fs/cifs/sess.c
+index d2cbae4b5d211..335c078c42fb5 100644
+--- a/fs/cifs/sess.c
++++ b/fs/cifs/sess.c
+@@ -159,6 +159,7 @@ cifs_chan_is_iface_active(struct cifs_ses *ses,
+ /* returns number of channels added */
+ int cifs_try_adding_channels(struct cifs_sb_info *cifs_sb, struct cifs_ses *ses)
+ {
++	struct TCP_Server_Info *server = ses->server;
+ 	int old_chan_count, new_chan_count;
+ 	int left;
+ 	int rc = 0;
+@@ -178,16 +179,16 @@ int cifs_try_adding_channels(struct cifs_sb_info *cifs_sb, struct cifs_ses *ses)
+ 		return 0;
+ 	}
+ 
+-	if (ses->server->dialect < SMB30_PROT_ID) {
++	if (server->dialect < SMB30_PROT_ID) {
+ 		spin_unlock(&ses->chan_lock);
+ 		cifs_dbg(VFS, "multichannel is not supported on this protocol version, use 3.0 or above\n");
+ 		return 0;
+ 	}
+ 
+-	if (!(ses->server->capabilities & SMB2_GLOBAL_CAP_MULTI_CHANNEL)) {
++	if (!(server->capabilities & SMB2_GLOBAL_CAP_MULTI_CHANNEL)) {
+ 		ses->chan_max = 1;
+ 		spin_unlock(&ses->chan_lock);
+-		cifs_dbg(VFS, "server %s does not support multichannel\n", ses->server->hostname);
++		cifs_server_dbg(VFS, "no multichannel support\n");
+ 		return 0;
+ 	}
+ 	spin_unlock(&ses->chan_lock);
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index 366f0c3b799b6..02228a590cd54 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -175,8 +175,17 @@ smb2_reconnect(__le16 smb2_command, struct cifs_tcon *tcon,
+ 		}
+ 	}
+ 	spin_unlock(&tcon->tc_lock);
+-	if ((!tcon->ses) || (tcon->ses->ses_status == SES_EXITING) ||
+-	    (!tcon->ses->server) || !server)
++
++	ses = tcon->ses;
++	if (!ses)
++		return -EIO;
++	spin_lock(&ses->ses_lock);
++	if (ses->ses_status == SES_EXITING) {
++		spin_unlock(&ses->ses_lock);
++		return -EIO;
++	}
++	spin_unlock(&ses->ses_lock);
++	if (!ses->server || !server)
+ 		return -EIO;
+ 
+ 	spin_lock(&server->srv_lock);
+@@ -204,8 +213,6 @@ again:
+ 	if (rc)
+ 		return rc;
+ 
+-	ses = tcon->ses;
+-
+ 	spin_lock(&ses->chan_lock);
+ 	if (!cifs_chan_needs_reconnect(ses, server) && !tcon->need_reconnect) {
+ 		spin_unlock(&ses->chan_lock);
+@@ -3849,7 +3856,7 @@ void smb2_reconnect_server(struct work_struct *work)
+ 		if (ses->tcon_ipc && ses->tcon_ipc->need_reconnect) {
+ 			list_add_tail(&ses->tcon_ipc->rlist, &tmp_list);
+ 			tcon_selected = tcon_exist = true;
+-			ses->ses_count++;
++			cifs_smb_ses_inc_refcount(ses);
+ 		}
+ 		/*
+ 		 * handle the case where channel needs to reconnect
+@@ -3860,7 +3867,7 @@ void smb2_reconnect_server(struct work_struct *work)
+ 		if (!tcon_selected && cifs_chan_needs_reconnect(ses, server)) {
+ 			list_add_tail(&ses->rlist, &tmp_ses_list);
+ 			ses_exist = true;
+-			ses->ses_count++;
++			cifs_smb_ses_inc_refcount(ses);
+ 		}
+ 		spin_unlock(&ses->chan_lock);
+ 	}
+diff --git a/fs/dlm/ast.c b/fs/dlm/ast.c
+index 26fef9945cc90..39805aea33367 100644
+--- a/fs/dlm/ast.c
++++ b/fs/dlm/ast.c
+@@ -45,7 +45,7 @@ void dlm_purge_lkb_callbacks(struct dlm_lkb *lkb)
+ 		kref_put(&cb->ref, dlm_release_callback);
+ 	}
+ 
+-	lkb->lkb_flags &= ~DLM_IFL_CB_PENDING;
++	clear_bit(DLM_IFL_CB_PENDING_BIT, &lkb->lkb_iflags);
+ 
+ 	/* invalidate */
+ 	dlm_callback_set_last_ptr(&lkb->lkb_last_cast, NULL);
+@@ -103,10 +103,9 @@ int dlm_enqueue_lkb_callback(struct dlm_lkb *lkb, uint32_t flags, int mode,
+ 	cb->sb_status = status;
+ 	cb->sb_flags = (sbflags & 0x000000FF);
+ 	kref_init(&cb->ref);
+-	if (!(lkb->lkb_flags & DLM_IFL_CB_PENDING)) {
+-		lkb->lkb_flags |= DLM_IFL_CB_PENDING;
++	if (!test_and_set_bit(DLM_IFL_CB_PENDING_BIT, &lkb->lkb_iflags))
+ 		rv = DLM_ENQUEUE_CALLBACK_NEED_SCHED;
+-	}
++
+ 	list_add_tail(&cb->list, &lkb->lkb_callbacks);
+ 
+ 	if (flags & DLM_CB_CAST)
+@@ -209,7 +208,7 @@ void dlm_callback_work(struct work_struct *work)
+ 		spin_lock(&lkb->lkb_cb_lock);
+ 		rv = dlm_dequeue_lkb_callback(lkb, &cb);
+ 		if (rv == DLM_DEQUEUE_CALLBACK_EMPTY) {
+-			lkb->lkb_flags &= ~DLM_IFL_CB_PENDING;
++			clear_bit(DLM_IFL_CB_PENDING_BIT, &lkb->lkb_iflags);
+ 			spin_unlock(&lkb->lkb_cb_lock);
+ 			break;
+ 		}
+diff --git a/fs/dlm/dlm_internal.h b/fs/dlm/dlm_internal.h
+index ab1a55337a6eb..9bf70962bc495 100644
+--- a/fs/dlm/dlm_internal.h
++++ b/fs/dlm/dlm_internal.h
+@@ -211,7 +211,9 @@ struct dlm_args {
+ #endif
+ #define DLM_IFL_DEADLOCK_CANCEL	0x01000000
+ #define DLM_IFL_STUB_MS		0x02000000 /* magic number for m_flags */
+-#define DLM_IFL_CB_PENDING	0x04000000
++
++#define DLM_IFL_CB_PENDING_BIT	0
++
+ /* least significant 2 bytes are message changed, they are full transmitted
+  * but at receive side only the 2 bytes LSB will be set.
+  *
+@@ -246,6 +248,7 @@ struct dlm_lkb {
+ 	uint32_t		lkb_exflags;	/* external flags from caller */
+ 	uint32_t		lkb_sbflags;	/* lksb flags */
+ 	uint32_t		lkb_flags;	/* internal flags */
++	unsigned long		lkb_iflags;	/* internal flags */
+ 	uint32_t		lkb_lvbseq;	/* lvb sequence number */
+ 
+ 	int8_t			lkb_status;     /* granted, waiting, convert */
+diff --git a/fs/dlm/user.c b/fs/dlm/user.c
+index 35129505ddda1..688a480879e4b 100644
+--- a/fs/dlm/user.c
++++ b/fs/dlm/user.c
+@@ -884,7 +884,7 @@ static ssize_t device_read(struct file *file, char __user *buf, size_t count,
+ 		goto try_another;
+ 	case DLM_DEQUEUE_CALLBACK_LAST:
+ 		list_del_init(&lkb->lkb_cb_list);
+-		lkb->lkb_flags &= ~DLM_IFL_CB_PENDING;
++		clear_bit(DLM_IFL_CB_PENDING_BIT, &lkb->lkb_iflags);
+ 		break;
+ 	case DLM_DEQUEUE_CALLBACK_SUCCESS:
+ 		break;
+diff --git a/fs/erofs/internal.h b/fs/erofs/internal.h
+index 1db018f8c2e89..9ebb87e342dcb 100644
+--- a/fs/erofs/internal.h
++++ b/fs/erofs/internal.h
+@@ -156,6 +156,7 @@ struct erofs_sb_info {
+ 
+ 	/* what we really care is nid, rather than ino.. */
+ 	erofs_nid_t root_nid;
++	erofs_nid_t packed_nid;
+ 	/* used for statfs, f_files - f_favail */
+ 	u64 inos;
+ 
+@@ -306,7 +307,7 @@ struct erofs_inode {
+ 
+ 	unsigned char datalayout;
+ 	unsigned char inode_isize;
+-	unsigned short xattr_isize;
++	unsigned int xattr_isize;
+ 
+ 	unsigned int xattr_shared_count;
+ 	unsigned int *xattr_shared_xattrs;
+diff --git a/fs/erofs/super.c b/fs/erofs/super.c
+index 19b1ae79cec41..dbe466295d0e0 100644
+--- a/fs/erofs/super.c
++++ b/fs/erofs/super.c
+@@ -380,17 +380,7 @@ static int erofs_read_superblock(struct super_block *sb)
+ #endif
+ 	sbi->islotbits = ilog2(sizeof(struct erofs_inode_compact));
+ 	sbi->root_nid = le16_to_cpu(dsb->root_nid);
+-#ifdef CONFIG_EROFS_FS_ZIP
+-	sbi->packed_inode = NULL;
+-	if (erofs_sb_has_fragments(sbi) && dsb->packed_nid) {
+-		sbi->packed_inode =
+-			erofs_iget(sb, le64_to_cpu(dsb->packed_nid));
+-		if (IS_ERR(sbi->packed_inode)) {
+-			ret = PTR_ERR(sbi->packed_inode);
+-			goto out;
+-		}
+-	}
+-#endif
++	sbi->packed_nid = le64_to_cpu(dsb->packed_nid);
+ 	sbi->inos = le64_to_cpu(dsb->inos);
+ 
+ 	sbi->build_time = le64_to_cpu(dsb->build_time);
+@@ -799,6 +789,16 @@ static int erofs_fc_fill_super(struct super_block *sb, struct fs_context *fc)
+ 
+ 	erofs_shrinker_register(sb);
+ 	/* sb->s_umount is already locked, SB_ACTIVE and SB_BORN are not set */
++#ifdef CONFIG_EROFS_FS_ZIP
++	if (erofs_sb_has_fragments(sbi) && sbi->packed_nid) {
++		sbi->packed_inode = erofs_iget(sb, sbi->packed_nid);
++		if (IS_ERR(sbi->packed_inode)) {
++			err = PTR_ERR(sbi->packed_inode);
++			sbi->packed_inode = NULL;
++			return err;
++		}
++	}
++#endif
+ 	err = erofs_init_managed_cache(sb);
+ 	if (err)
+ 		return err;
+diff --git a/fs/erofs/zmap.c b/fs/erofs/zmap.c
+index 655da4d739cb6..b5f4086537548 100644
+--- a/fs/erofs/zmap.c
++++ b/fs/erofs/zmap.c
+@@ -86,6 +86,10 @@ static int legacy_load_cluster_from_disk(struct z_erofs_maprecorder *m,
+ 		if (advise & Z_EROFS_VLE_DI_PARTIAL_REF)
+ 			m->partialref = true;
+ 		m->clusterofs = le16_to_cpu(di->di_clusterofs);
++		if (m->clusterofs >= 1 << vi->z_logical_clusterbits) {
++			DBG_BUGON(1);
++			return -EFSCORRUPTED;
++		}
+ 		m->pblk = le32_to_cpu(di->di_u.blkaddr);
+ 		break;
+ 	default:
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index 3559ea6b07818..74251eebf8313 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -5802,7 +5802,8 @@ int ext4_clu_mapped(struct inode *inode, ext4_lblk_t lclu)
+ 	 * mapped - no physical clusters have been allocated, and the
+ 	 * file has no extents
+ 	 */
+-	if (ext4_test_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA))
++	if (ext4_test_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA) ||
++	    ext4_has_inline_data(inode))
+ 		return 0;
+ 
+ 	/* search for the extent closest to the first block in the cluster */
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index bf0b7dea4900a..41ba1c4328449 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -3148,6 +3148,9 @@ static int ext4_da_write_end(struct file *file,
+ 	    ext4_has_inline_data(inode))
+ 		return ext4_write_inline_data_end(inode, pos, len, copied, page);
+ 
++	if (unlikely(copied < len) && !PageUptodate(page))
++		copied = 0;
++
+ 	start = pos & (PAGE_SIZE - 1);
+ 	end = start + copied - 1;
+ 
+diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
+index b40dec3d7f799..25f8162d292da 100644
+--- a/fs/f2fs/compress.c
++++ b/fs/f2fs/compress.c
+@@ -755,7 +755,12 @@ void f2fs_decompress_cluster(struct decompress_io_ctx *dic, bool in_task)
+ 
+ 	if (dic->clen > PAGE_SIZE * dic->nr_cpages - COMPRESS_HEADER_SIZE) {
+ 		ret = -EFSCORRUPTED;
+-		f2fs_handle_error(sbi, ERROR_FAIL_DECOMPRESSION);
++
++		/* Avoid f2fs_commit_super in irq context */
++		if (in_task)
++			f2fs_save_errors(sbi, ERROR_FAIL_DECOMPRESSION);
++		else
++			f2fs_handle_error(sbi, ERROR_FAIL_DECOMPRESSION);
+ 		goto out_release;
+ 	}
+ 
+@@ -1456,6 +1461,12 @@ continue_unlock:
+ 		if (!PageDirty(cc->rpages[i]))
+ 			goto continue_unlock;
+ 
++		if (PageWriteback(cc->rpages[i])) {
++			if (wbc->sync_mode == WB_SYNC_NONE)
++				goto continue_unlock;
++			f2fs_wait_on_page_writeback(cc->rpages[i], DATA, true, true);
++		}
++
+ 		if (!clear_page_dirty_for_io(cc->rpages[i]))
+ 			goto continue_unlock;
+ 
+diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
+index 06b552a0aba23..1034912a61b30 100644
+--- a/fs/f2fs/data.c
++++ b/fs/f2fs/data.c
+@@ -874,6 +874,8 @@ void f2fs_submit_merged_ipu_write(struct f2fs_sb_info *sbi,
+ 	bool found = false;
+ 	struct bio *target = bio ? *bio : NULL;
+ 
++	f2fs_bug_on(sbi, !target && !page);
++
+ 	for (temp = HOT; temp < NR_TEMP_TYPE && !found; temp++) {
+ 		struct f2fs_bio_info *io = sbi->write_io[DATA] + temp;
+ 		struct list_head *head = &io->bio_list;
+@@ -2898,7 +2900,8 @@ out:
+ 
+ 	if (unlikely(f2fs_cp_error(sbi))) {
+ 		f2fs_submit_merged_write(sbi, DATA);
+-		f2fs_submit_merged_ipu_write(sbi, bio, NULL);
++		if (bio && *bio)
++			f2fs_submit_merged_ipu_write(sbi, bio, NULL);
+ 		submitted = NULL;
+ 	}
+ 
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index b0ab2062038a0..620343c65ab67 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -3554,6 +3554,7 @@ int f2fs_quota_sync(struct super_block *sb, int type);
+ loff_t max_file_blocks(struct inode *inode);
+ void f2fs_quota_off_umount(struct super_block *sb);
+ void f2fs_handle_stop(struct f2fs_sb_info *sbi, unsigned char reason);
++void f2fs_save_errors(struct f2fs_sb_info *sbi, unsigned char flag);
+ void f2fs_handle_error(struct f2fs_sb_info *sbi, unsigned char error);
+ int f2fs_commit_super(struct f2fs_sb_info *sbi, bool recover);
+ int f2fs_sync_fs(struct super_block *sb, int sync);
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index 15dabeac46905..16e286cea6301 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -2113,7 +2113,11 @@ static int f2fs_ioc_start_atomic_write(struct file *filp, bool truncate)
+ 		clear_inode_flag(fi->cow_inode, FI_INLINE_DATA);
+ 	} else {
+ 		/* Reuse the already created COW inode */
+-		f2fs_do_truncate_blocks(fi->cow_inode, 0, true);
++		ret = f2fs_do_truncate_blocks(fi->cow_inode, 0, true);
++		if (ret) {
++			f2fs_up_write(&fi->i_gc_rwsem[WRITE]);
++			goto out;
++		}
+ 	}
+ 
+ 	f2fs_write_inode(inode, NULL);
+@@ -3009,15 +3013,16 @@ int f2fs_transfer_project_quota(struct inode *inode, kprojid_t kprojid)
+ 	struct dquot *transfer_to[MAXQUOTAS] = {};
+ 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ 	struct super_block *sb = sbi->sb;
+-	int err = 0;
++	int err;
+ 
+ 	transfer_to[PRJQUOTA] = dqget(sb, make_kqid_projid(kprojid));
+-	if (!IS_ERR(transfer_to[PRJQUOTA])) {
+-		err = __dquot_transfer(inode, transfer_to);
+-		if (err)
+-			set_sbi_flag(sbi, SBI_QUOTA_NEED_REPAIR);
+-		dqput(transfer_to[PRJQUOTA]);
+-	}
++	if (IS_ERR(transfer_to[PRJQUOTA]))
++		return PTR_ERR(transfer_to[PRJQUOTA]);
++
++	err = __dquot_transfer(inode, transfer_to);
++	if (err)
++		set_sbi_flag(sbi, SBI_QUOTA_NEED_REPAIR);
++	dqput(transfer_to[PRJQUOTA]);
+ 	return err;
+ }
+ 
+diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
+index 0a9dfa4598606..292a17d62f569 100644
+--- a/fs/f2fs/gc.c
++++ b/fs/f2fs/gc.c
+@@ -1791,8 +1791,8 @@ int f2fs_gc(struct f2fs_sb_info *sbi, struct f2fs_gc_control *gc_control)
+ 				prefree_segments(sbi));
+ 
+ 	cpc.reason = __get_cp_reason(sbi);
+-	sbi->skipped_gc_rwsem = 0;
+ gc_more:
++	sbi->skipped_gc_rwsem = 0;
+ 	if (unlikely(!(sbi->sb->s_flags & SB_ACTIVE))) {
+ 		ret = -EINVAL;
+ 		goto stop;
+diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
+index 227e258361734..b2a080c660c86 100644
+--- a/fs/f2fs/segment.c
++++ b/fs/f2fs/segment.c
+@@ -246,10 +246,16 @@ retry:
+ 	} else {
+ 		blkcnt_t count = 1;
+ 
++		err = inc_valid_block_count(sbi, inode, &count);
++		if (err) {
++			f2fs_put_dnode(&dn);
++			return err;
++		}
++
+ 		*old_addr = dn.data_blkaddr;
+ 		f2fs_truncate_data_blocks_range(&dn, 1);
+ 		dec_valid_block_count(sbi, F2FS_I(inode)->cow_inode, count);
+-		inc_valid_block_count(sbi, inode, &count);
++
+ 		f2fs_replace_block(sbi, &dn, dn.data_blkaddr, new_addr,
+ 					ni.version, true, false);
+ 	}
+@@ -4920,48 +4926,6 @@ int f2fs_check_write_pointer(struct f2fs_sb_info *sbi)
+ 	return 0;
+ }
+ 
+-static bool is_conv_zone(struct f2fs_sb_info *sbi, unsigned int zone_idx,
+-						unsigned int dev_idx)
+-{
+-	if (!bdev_is_zoned(FDEV(dev_idx).bdev))
+-		return true;
+-	return !test_bit(zone_idx, FDEV(dev_idx).blkz_seq);
+-}
+-
+-/* Return the zone index in the given device */
+-static unsigned int get_zone_idx(struct f2fs_sb_info *sbi, unsigned int secno,
+-					int dev_idx)
+-{
+-	block_t sec_start_blkaddr = START_BLOCK(sbi, GET_SEG_FROM_SEC(sbi, secno));
+-
+-	return (sec_start_blkaddr - FDEV(dev_idx).start_blk) >>
+-						sbi->log_blocks_per_blkz;
+-}
+-
+-/*
+- * Return the usable segments in a section based on the zone's
+- * corresponding zone capacity. Zone is equal to a section.
+- */
+-static inline unsigned int f2fs_usable_zone_segs_in_sec(
+-		struct f2fs_sb_info *sbi, unsigned int segno)
+-{
+-	unsigned int dev_idx, zone_idx;
+-
+-	dev_idx = f2fs_target_device_index(sbi, START_BLOCK(sbi, segno));
+-	zone_idx = get_zone_idx(sbi, GET_SEC_FROM_SEG(sbi, segno), dev_idx);
+-
+-	/* Conventional zone's capacity is always equal to zone size */
+-	if (is_conv_zone(sbi, zone_idx, dev_idx))
+-		return sbi->segs_per_sec;
+-
+-	if (!sbi->unusable_blocks_per_sec)
+-		return sbi->segs_per_sec;
+-
+-	/* Get the segment count beyond zone capacity block */
+-	return sbi->segs_per_sec - (sbi->unusable_blocks_per_sec >>
+-						sbi->log_blocks_per_seg);
+-}
+-
+ /*
+  * Return the number of usable blocks in a segment. The number of blocks
+  * returned is always equal to the number of blocks in a segment for
+@@ -4974,23 +4938,13 @@ static inline unsigned int f2fs_usable_zone_blks_in_seg(
+ 			struct f2fs_sb_info *sbi, unsigned int segno)
+ {
+ 	block_t seg_start, sec_start_blkaddr, sec_cap_blkaddr;
+-	unsigned int zone_idx, dev_idx, secno;
+-
+-	secno = GET_SEC_FROM_SEG(sbi, segno);
+-	seg_start = START_BLOCK(sbi, segno);
+-	dev_idx = f2fs_target_device_index(sbi, seg_start);
+-	zone_idx = get_zone_idx(sbi, secno, dev_idx);
+-
+-	/*
+-	 * Conventional zone's capacity is always equal to zone size,
+-	 * so, blocks per segment is unchanged.
+-	 */
+-	if (is_conv_zone(sbi, zone_idx, dev_idx))
+-		return sbi->blocks_per_seg;
++	unsigned int secno;
+ 
+ 	if (!sbi->unusable_blocks_per_sec)
+ 		return sbi->blocks_per_seg;
+ 
++	secno = GET_SEC_FROM_SEG(sbi, segno);
++	seg_start = START_BLOCK(sbi, segno);
+ 	sec_start_blkaddr = START_BLOCK(sbi, GET_SEG_FROM_SEC(sbi, secno));
+ 	sec_cap_blkaddr = sec_start_blkaddr + CAP_BLKS_PER_SEC(sbi);
+ 
+@@ -5024,11 +4978,6 @@ static inline unsigned int f2fs_usable_zone_blks_in_seg(struct f2fs_sb_info *sbi
+ 	return 0;
+ }
+ 
+-static inline unsigned int f2fs_usable_zone_segs_in_sec(struct f2fs_sb_info *sbi,
+-							unsigned int segno)
+-{
+-	return 0;
+-}
+ #endif
+ unsigned int f2fs_usable_blks_in_seg(struct f2fs_sb_info *sbi,
+ 					unsigned int segno)
+@@ -5043,7 +4992,7 @@ unsigned int f2fs_usable_segs_in_sec(struct f2fs_sb_info *sbi,
+ 					unsigned int segno)
+ {
+ 	if (f2fs_sb_has_blkzoned(sbi))
+-		return f2fs_usable_zone_segs_in_sec(sbi, segno);
++		return CAP_SEGS_PER_SEC(sbi);
+ 
+ 	return sbi->segs_per_sec;
+ }
+diff --git a/fs/f2fs/segment.h b/fs/f2fs/segment.h
+index efdb7fc3b7975..babb29a1c0347 100644
+--- a/fs/f2fs/segment.h
++++ b/fs/f2fs/segment.h
+@@ -104,6 +104,9 @@ static inline void sanity_check_seg_type(struct f2fs_sb_info *sbi,
+ #define CAP_BLKS_PER_SEC(sbi)					\
+ 	((sbi)->segs_per_sec * (sbi)->blocks_per_seg -		\
+ 	 (sbi)->unusable_blocks_per_sec)
++#define CAP_SEGS_PER_SEC(sbi)					\
++	((sbi)->segs_per_sec - ((sbi)->unusable_blocks_per_sec >>\
++	(sbi)->log_blocks_per_seg))
+ #define GET_SEC_FROM_SEG(sbi, segno)				\
+ 	(((segno) == -1) ? -1: (segno) / (sbi)->segs_per_sec)
+ #define GET_SEG_FROM_SEC(sbi, secno)				\
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index fbaaabbcd6de7..5c1c3a84501fe 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -3885,7 +3885,7 @@ void f2fs_handle_stop(struct f2fs_sb_info *sbi, unsigned char reason)
+ 	f2fs_up_write(&sbi->sb_lock);
+ }
+ 
+-static void f2fs_save_errors(struct f2fs_sb_info *sbi, unsigned char flag)
++void f2fs_save_errors(struct f2fs_sb_info *sbi, unsigned char flag)
+ {
+ 	spin_lock(&sbi->error_lock);
+ 	if (!test_bit(flag, (unsigned long *)sbi->errors)) {
+diff --git a/fs/f2fs/sysfs.c b/fs/f2fs/sysfs.c
+index 0b19163c90d41..fd238a68017eb 100644
+--- a/fs/f2fs/sysfs.c
++++ b/fs/f2fs/sysfs.c
+@@ -575,9 +575,9 @@ out:
+ 	if (!strcmp(a->attr.name, "iostat_period_ms")) {
+ 		if (t < MIN_IOSTAT_PERIOD_MS || t > MAX_IOSTAT_PERIOD_MS)
+ 			return -EINVAL;
+-		spin_lock(&sbi->iostat_lock);
++		spin_lock_irq(&sbi->iostat_lock);
+ 		sbi->iostat_period_ms = (unsigned int)t;
+-		spin_unlock(&sbi->iostat_lock);
++		spin_unlock_irq(&sbi->iostat_lock);
+ 		return count;
+ 	}
+ #endif
+diff --git a/fs/jbd2/transaction.c b/fs/jbd2/transaction.c
+index 15de1385012eb..18611241f4513 100644
+--- a/fs/jbd2/transaction.c
++++ b/fs/jbd2/transaction.c
+@@ -2387,6 +2387,9 @@ static int journal_unmap_buffer(journal_t *journal, struct buffer_head *bh,
+ 			spin_unlock(&jh->b_state_lock);
+ 			write_unlock(&journal->j_state_lock);
+ 			jbd2_journal_put_journal_head(jh);
++			/* Already zapped buffer? Nothing to do... */
++			if (!bh->b_bdev)
++				return 0;
+ 			return -EBUSY;
+ 		}
+ 		/*
+diff --git a/fs/ksmbd/auth.c b/fs/ksmbd/auth.c
+index cead696b656a8..df8fb076f6f14 100644
+--- a/fs/ksmbd/auth.c
++++ b/fs/ksmbd/auth.c
+@@ -221,22 +221,22 @@ int ksmbd_auth_ntlmv2(struct ksmbd_conn *conn, struct ksmbd_session *sess,
+ {
+ 	char ntlmv2_hash[CIFS_ENCPWD_SIZE];
+ 	char ntlmv2_rsp[CIFS_HMAC_MD5_HASH_SIZE];
+-	struct ksmbd_crypto_ctx *ctx;
++	struct ksmbd_crypto_ctx *ctx = NULL;
+ 	char *construct = NULL;
+ 	int rc, len;
+ 
+-	ctx = ksmbd_crypto_ctx_find_hmacmd5();
+-	if (!ctx) {
+-		ksmbd_debug(AUTH, "could not crypto alloc hmacmd5\n");
+-		return -ENOMEM;
+-	}
+-
+ 	rc = calc_ntlmv2_hash(conn, sess, ntlmv2_hash, domain_name);
+ 	if (rc) {
+ 		ksmbd_debug(AUTH, "could not get v2 hash rc %d\n", rc);
+ 		goto out;
+ 	}
+ 
++	ctx = ksmbd_crypto_ctx_find_hmacmd5();
++	if (!ctx) {
++		ksmbd_debug(AUTH, "could not crypto alloc hmacmd5\n");
++		return -ENOMEM;
++	}
++
+ 	rc = crypto_shash_setkey(CRYPTO_HMACMD5_TFM(ctx),
+ 				 ntlmv2_hash,
+ 				 CIFS_HMAC_MD5_HASH_SIZE);
+@@ -272,6 +272,8 @@ int ksmbd_auth_ntlmv2(struct ksmbd_conn *conn, struct ksmbd_session *sess,
+ 		ksmbd_debug(AUTH, "Could not generate md5 hash\n");
+ 		goto out;
+ 	}
++	ksmbd_release_crypto_ctx(ctx);
++	ctx = NULL;
+ 
+ 	rc = ksmbd_gen_sess_key(sess, ntlmv2_hash, ntlmv2_rsp);
+ 	if (rc) {
+@@ -282,7 +284,8 @@ int ksmbd_auth_ntlmv2(struct ksmbd_conn *conn, struct ksmbd_session *sess,
+ 	if (memcmp(ntlmv2->ntlmv2_hash, ntlmv2_rsp, CIFS_HMAC_MD5_HASH_SIZE) != 0)
+ 		rc = -EINVAL;
+ out:
+-	ksmbd_release_crypto_ctx(ctx);
++	if (ctx)
++		ksmbd_release_crypto_ctx(ctx);
+ 	kfree(construct);
+ 	return rc;
+ }
+diff --git a/fs/ksmbd/connection.c b/fs/ksmbd/connection.c
+index 365ac32af5058..4ed379f9b1aa6 100644
+--- a/fs/ksmbd/connection.c
++++ b/fs/ksmbd/connection.c
+@@ -20,7 +20,7 @@ static DEFINE_MUTEX(init_lock);
+ static struct ksmbd_conn_ops default_conn_ops;
+ 
+ LIST_HEAD(conn_list);
+-DEFINE_RWLOCK(conn_list_lock);
++DECLARE_RWSEM(conn_list_lock);
+ 
+ /**
+  * ksmbd_conn_free() - free resources of the connection instance
+@@ -32,9 +32,9 @@ DEFINE_RWLOCK(conn_list_lock);
+  */
+ void ksmbd_conn_free(struct ksmbd_conn *conn)
+ {
+-	write_lock(&conn_list_lock);
++	down_write(&conn_list_lock);
+ 	list_del(&conn->conns_list);
+-	write_unlock(&conn_list_lock);
++	up_write(&conn_list_lock);
+ 
+ 	xa_destroy(&conn->sessions);
+ 	kvfree(conn->request_buf);
+@@ -56,7 +56,7 @@ struct ksmbd_conn *ksmbd_conn_alloc(void)
+ 		return NULL;
+ 
+ 	conn->need_neg = true;
+-	conn->status = KSMBD_SESS_NEW;
++	ksmbd_conn_set_new(conn);
+ 	conn->local_nls = load_nls("utf8");
+ 	if (!conn->local_nls)
+ 		conn->local_nls = load_nls_default();
+@@ -84,9 +84,9 @@ struct ksmbd_conn *ksmbd_conn_alloc(void)
+ 	spin_lock_init(&conn->llist_lock);
+ 	INIT_LIST_HEAD(&conn->lock_list);
+ 
+-	write_lock(&conn_list_lock);
++	down_write(&conn_list_lock);
+ 	list_add(&conn->conns_list, &conn_list);
+-	write_unlock(&conn_list_lock);
++	up_write(&conn_list_lock);
+ 	return conn;
+ }
+ 
+@@ -95,7 +95,7 @@ bool ksmbd_conn_lookup_dialect(struct ksmbd_conn *c)
+ 	struct ksmbd_conn *t;
+ 	bool ret = false;
+ 
+-	read_lock(&conn_list_lock);
++	down_read(&conn_list_lock);
+ 	list_for_each_entry(t, &conn_list, conns_list) {
+ 		if (memcmp(t->ClientGUID, c->ClientGUID, SMB2_CLIENT_GUID_SIZE))
+ 			continue;
+@@ -103,7 +103,7 @@ bool ksmbd_conn_lookup_dialect(struct ksmbd_conn *c)
+ 		ret = true;
+ 		break;
+ 	}
+-	read_unlock(&conn_list_lock);
++	up_read(&conn_list_lock);
+ 	return ret;
+ }
+ 
+@@ -147,19 +147,47 @@ int ksmbd_conn_try_dequeue_request(struct ksmbd_work *work)
+ 	return ret;
+ }
+ 
+-static void ksmbd_conn_lock(struct ksmbd_conn *conn)
++void ksmbd_conn_lock(struct ksmbd_conn *conn)
+ {
+ 	mutex_lock(&conn->srv_mutex);
+ }
+ 
+-static void ksmbd_conn_unlock(struct ksmbd_conn *conn)
++void ksmbd_conn_unlock(struct ksmbd_conn *conn)
+ {
+ 	mutex_unlock(&conn->srv_mutex);
+ }
+ 
+-void ksmbd_conn_wait_idle(struct ksmbd_conn *conn)
++void ksmbd_all_conn_set_status(u64 sess_id, u32 status)
+ {
++	struct ksmbd_conn *conn;
++
++	down_read(&conn_list_lock);
++	list_for_each_entry(conn, &conn_list, conns_list) {
++		if (conn->binding || xa_load(&conn->sessions, sess_id))
++			WRITE_ONCE(conn->status, status);
++	}
++	up_read(&conn_list_lock);
++}
++
++void ksmbd_conn_wait_idle(struct ksmbd_conn *conn, u64 sess_id)
++{
++	struct ksmbd_conn *bind_conn;
++
+ 	wait_event(conn->req_running_q, atomic_read(&conn->req_running) < 2);
++
++	down_read(&conn_list_lock);
++	list_for_each_entry(bind_conn, &conn_list, conns_list) {
++		if (bind_conn == conn)
++			continue;
++
++		if ((bind_conn->binding || xa_load(&bind_conn->sessions, sess_id)) &&
++		    !ksmbd_conn_releasing(bind_conn) &&
++		    atomic_read(&bind_conn->req_running)) {
++			wait_event(bind_conn->req_running_q,
++				atomic_read(&bind_conn->req_running) == 0);
++		}
++	}
++	up_read(&conn_list_lock);
+ }
+ 
+ int ksmbd_conn_write(struct ksmbd_work *work)
+@@ -243,7 +271,7 @@ bool ksmbd_conn_alive(struct ksmbd_conn *conn)
+ 	if (!ksmbd_server_running())
+ 		return false;
+ 
+-	if (conn->status == KSMBD_SESS_EXITING)
++	if (ksmbd_conn_exiting(conn))
+ 		return false;
+ 
+ 	if (kthread_should_stop())
+@@ -303,7 +331,7 @@ int ksmbd_conn_handler_loop(void *p)
+ 		pdu_size = get_rfc1002_len(hdr_buf);
+ 		ksmbd_debug(CONN, "RFC1002 header %u bytes\n", pdu_size);
+ 
+-		if (conn->status == KSMBD_SESS_GOOD)
++		if (ksmbd_conn_good(conn))
+ 			max_allowed_pdu_size =
+ 				SMB3_MAX_MSGSIZE + conn->vals->max_write_size;
+ 		else
+@@ -312,7 +340,7 @@ int ksmbd_conn_handler_loop(void *p)
+ 		if (pdu_size > max_allowed_pdu_size) {
+ 			pr_err_ratelimited("PDU length(%u) exceeded maximum allowed pdu size(%u) on connection(%d)\n",
+ 					pdu_size, max_allowed_pdu_size,
+-					conn->status);
++					READ_ONCE(conn->status));
+ 			break;
+ 		}
+ 
+@@ -360,10 +388,10 @@ int ksmbd_conn_handler_loop(void *p)
+ 	}
+ 
+ out:
++	ksmbd_conn_set_releasing(conn);
+ 	/* Wait till all reference dropped to the Server object*/
+ 	wait_event(conn->r_count_q, atomic_read(&conn->r_count) == 0);
+ 
+-
+ 	if (IS_ENABLED(CONFIG_UNICODE))
+ 		utf8_unload(conn->um);
+ 	unload_nls(conn->local_nls);
+@@ -407,7 +435,7 @@ static void stop_sessions(void)
+ 	struct ksmbd_transport *t;
+ 
+ again:
+-	read_lock(&conn_list_lock);
++	down_read(&conn_list_lock);
+ 	list_for_each_entry(conn, &conn_list, conns_list) {
+ 		struct task_struct *task;
+ 
+@@ -416,14 +444,14 @@ again:
+ 		if (task)
+ 			ksmbd_debug(CONN, "Stop session handler %s/%d\n",
+ 				    task->comm, task_pid_nr(task));
+-		conn->status = KSMBD_SESS_EXITING;
++		ksmbd_conn_set_exiting(conn);
+ 		if (t->ops->shutdown) {
+-			read_unlock(&conn_list_lock);
++			up_read(&conn_list_lock);
+ 			t->ops->shutdown(t);
+-			read_lock(&conn_list_lock);
++			down_read(&conn_list_lock);
+ 		}
+ 	}
+-	read_unlock(&conn_list_lock);
++	up_read(&conn_list_lock);
+ 
+ 	if (!list_empty(&conn_list)) {
+ 		schedule_timeout_interruptible(HZ / 10); /* 100ms */
+diff --git a/fs/ksmbd/connection.h b/fs/ksmbd/connection.h
+index 0e3a848defaf3..ad8dfaa48ffb3 100644
+--- a/fs/ksmbd/connection.h
++++ b/fs/ksmbd/connection.h
+@@ -26,7 +26,8 @@ enum {
+ 	KSMBD_SESS_GOOD,
+ 	KSMBD_SESS_EXITING,
+ 	KSMBD_SESS_NEED_RECONNECT,
+-	KSMBD_SESS_NEED_NEGOTIATE
++	KSMBD_SESS_NEED_NEGOTIATE,
++	KSMBD_SESS_RELEASING
+ };
+ 
+ struct ksmbd_stats {
+@@ -140,10 +141,10 @@ struct ksmbd_transport {
+ #define KSMBD_TCP_PEER_SOCKADDR(c)	((struct sockaddr *)&((c)->peer_addr))
+ 
+ extern struct list_head conn_list;
+-extern rwlock_t conn_list_lock;
++extern struct rw_semaphore conn_list_lock;
+ 
+ bool ksmbd_conn_alive(struct ksmbd_conn *conn);
+-void ksmbd_conn_wait_idle(struct ksmbd_conn *conn);
++void ksmbd_conn_wait_idle(struct ksmbd_conn *conn, u64 sess_id);
+ struct ksmbd_conn *ksmbd_conn_alloc(void);
+ void ksmbd_conn_free(struct ksmbd_conn *conn);
+ bool ksmbd_conn_lookup_dialect(struct ksmbd_conn *c);
+@@ -162,6 +163,8 @@ void ksmbd_conn_init_server_callbacks(struct ksmbd_conn_ops *ops);
+ int ksmbd_conn_handler_loop(void *p);
+ int ksmbd_conn_transport_init(void);
+ void ksmbd_conn_transport_destroy(void);
++void ksmbd_conn_lock(struct ksmbd_conn *conn);
++void ksmbd_conn_unlock(struct ksmbd_conn *conn);
+ 
+ /*
+  * WARNING
+@@ -169,43 +172,60 @@ void ksmbd_conn_transport_destroy(void);
+  * This is a hack. We will move status to a proper place once we land
+  * a multi-sessions support.
+  */
+-static inline bool ksmbd_conn_good(struct ksmbd_work *work)
++static inline bool ksmbd_conn_good(struct ksmbd_conn *conn)
+ {
+-	return work->conn->status == KSMBD_SESS_GOOD;
++	return READ_ONCE(conn->status) == KSMBD_SESS_GOOD;
+ }
+ 
+-static inline bool ksmbd_conn_need_negotiate(struct ksmbd_work *work)
++static inline bool ksmbd_conn_need_negotiate(struct ksmbd_conn *conn)
+ {
+-	return work->conn->status == KSMBD_SESS_NEED_NEGOTIATE;
++	return READ_ONCE(conn->status) == KSMBD_SESS_NEED_NEGOTIATE;
+ }
+ 
+-static inline bool ksmbd_conn_need_reconnect(struct ksmbd_work *work)
++static inline bool ksmbd_conn_need_reconnect(struct ksmbd_conn *conn)
+ {
+-	return work->conn->status == KSMBD_SESS_NEED_RECONNECT;
++	return READ_ONCE(conn->status) == KSMBD_SESS_NEED_RECONNECT;
+ }
+ 
+-static inline bool ksmbd_conn_exiting(struct ksmbd_work *work)
++static inline bool ksmbd_conn_exiting(struct ksmbd_conn *conn)
+ {
+-	return work->conn->status == KSMBD_SESS_EXITING;
++	return READ_ONCE(conn->status) == KSMBD_SESS_EXITING;
+ }
+ 
+-static inline void ksmbd_conn_set_good(struct ksmbd_work *work)
++static inline bool ksmbd_conn_releasing(struct ksmbd_conn *conn)
+ {
+-	work->conn->status = KSMBD_SESS_GOOD;
++	return READ_ONCE(conn->status) == KSMBD_SESS_RELEASING;
+ }
+ 
+-static inline void ksmbd_conn_set_need_negotiate(struct ksmbd_work *work)
++static inline void ksmbd_conn_set_new(struct ksmbd_conn *conn)
+ {
+-	work->conn->status = KSMBD_SESS_NEED_NEGOTIATE;
++	WRITE_ONCE(conn->status, KSMBD_SESS_NEW);
+ }
+ 
+-static inline void ksmbd_conn_set_need_reconnect(struct ksmbd_work *work)
++static inline void ksmbd_conn_set_good(struct ksmbd_conn *conn)
+ {
+-	work->conn->status = KSMBD_SESS_NEED_RECONNECT;
++	WRITE_ONCE(conn->status, KSMBD_SESS_GOOD);
+ }
+ 
+-static inline void ksmbd_conn_set_exiting(struct ksmbd_work *work)
++static inline void ksmbd_conn_set_need_negotiate(struct ksmbd_conn *conn)
+ {
+-	work->conn->status = KSMBD_SESS_EXITING;
++	WRITE_ONCE(conn->status, KSMBD_SESS_NEED_NEGOTIATE);
+ }
++
++static inline void ksmbd_conn_set_need_reconnect(struct ksmbd_conn *conn)
++{
++	WRITE_ONCE(conn->status, KSMBD_SESS_NEED_RECONNECT);
++}
++
++static inline void ksmbd_conn_set_exiting(struct ksmbd_conn *conn)
++{
++	WRITE_ONCE(conn->status, KSMBD_SESS_EXITING);
++}
++
++static inline void ksmbd_conn_set_releasing(struct ksmbd_conn *conn)
++{
++	WRITE_ONCE(conn->status, KSMBD_SESS_RELEASING);
++}
++
++void ksmbd_all_conn_set_status(u64 sess_id, u32 status);
+ #endif /* __CONNECTION_H__ */
+diff --git a/fs/ksmbd/mgmt/tree_connect.c b/fs/ksmbd/mgmt/tree_connect.c
+index 8ce17b3fb8dad..f07a05f376513 100644
+--- a/fs/ksmbd/mgmt/tree_connect.c
++++ b/fs/ksmbd/mgmt/tree_connect.c
+@@ -109,7 +109,15 @@ int ksmbd_tree_conn_disconnect(struct ksmbd_session *sess,
+ struct ksmbd_tree_connect *ksmbd_tree_conn_lookup(struct ksmbd_session *sess,
+ 						  unsigned int id)
+ {
+-	return xa_load(&sess->tree_conns, id);
++	struct ksmbd_tree_connect *tcon;
++
++	tcon = xa_load(&sess->tree_conns, id);
++	if (tcon) {
++		if (test_bit(TREE_CONN_EXPIRE, &tcon->status))
++			tcon = NULL;
++	}
++
++	return tcon;
+ }
+ 
+ struct ksmbd_share_config *ksmbd_tree_conn_share(struct ksmbd_session *sess,
+@@ -129,6 +137,9 @@ int ksmbd_tree_conn_session_logoff(struct ksmbd_session *sess)
+ 	struct ksmbd_tree_connect *tc;
+ 	unsigned long id;
+ 
++	if (!sess)
++		return -EINVAL;
++
+ 	xa_for_each(&sess->tree_conns, id, tc)
+ 		ret |= ksmbd_tree_conn_disconnect(sess, tc);
+ 	xa_destroy(&sess->tree_conns);
+diff --git a/fs/ksmbd/mgmt/tree_connect.h b/fs/ksmbd/mgmt/tree_connect.h
+index 0f97ddc1e39c0..700df36cf3e30 100644
+--- a/fs/ksmbd/mgmt/tree_connect.h
++++ b/fs/ksmbd/mgmt/tree_connect.h
+@@ -14,6 +14,8 @@ struct ksmbd_share_config;
+ struct ksmbd_user;
+ struct ksmbd_conn;
+ 
++#define TREE_CONN_EXPIRE		1
++
+ struct ksmbd_tree_connect {
+ 	int				id;
+ 
+@@ -25,6 +27,7 @@ struct ksmbd_tree_connect {
+ 
+ 	int				maximal_access;
+ 	bool				posix_extensions;
++	unsigned long			status;
+ };
+ 
+ struct ksmbd_tree_conn_status {
+diff --git a/fs/ksmbd/mgmt/user_session.c b/fs/ksmbd/mgmt/user_session.c
+index 1ca2aae4c2997..8a5dcab05614f 100644
+--- a/fs/ksmbd/mgmt/user_session.c
++++ b/fs/ksmbd/mgmt/user_session.c
+@@ -144,10 +144,6 @@ void ksmbd_session_destroy(struct ksmbd_session *sess)
+ 	if (!sess)
+ 		return;
+ 
+-	down_write(&sessions_table_lock);
+-	hash_del(&sess->hlist);
+-	up_write(&sessions_table_lock);
+-
+ 	if (sess->user)
+ 		ksmbd_free_user(sess->user);
+ 
+@@ -165,17 +161,39 @@ static struct ksmbd_session *__session_lookup(unsigned long long id)
+ 	struct ksmbd_session *sess;
+ 
+ 	hash_for_each_possible(sessions_table, sess, hlist, id) {
+-		if (id == sess->id)
++		if (id == sess->id) {
++			sess->last_active = jiffies;
+ 			return sess;
++		}
+ 	}
+ 	return NULL;
+ }
+ 
++static void ksmbd_expire_session(struct ksmbd_conn *conn)
++{
++	unsigned long id;
++	struct ksmbd_session *sess;
++
++	down_write(&sessions_table_lock);
++	xa_for_each(&conn->sessions, id, sess) {
++		if (sess->state != SMB2_SESSION_VALID ||
++		    time_after(jiffies,
++			       sess->last_active + SMB2_SESSION_TIMEOUT)) {
++			xa_erase(&conn->sessions, sess->id);
++			hash_del(&sess->hlist);
++			ksmbd_session_destroy(sess);
++			continue;
++		}
++	}
++	up_write(&sessions_table_lock);
++}
++
+ int ksmbd_session_register(struct ksmbd_conn *conn,
+ 			   struct ksmbd_session *sess)
+ {
+ 	sess->dialect = conn->dialect;
+ 	memcpy(sess->ClientGUID, conn->ClientGUID, SMB2_CLIENT_GUID_SIZE);
++	ksmbd_expire_session(conn);
+ 	return xa_err(xa_store(&conn->sessions, sess->id, sess, GFP_KERNEL));
+ }
+ 
+@@ -188,47 +206,56 @@ static int ksmbd_chann_del(struct ksmbd_conn *conn, struct ksmbd_session *sess)
+ 		return -ENOENT;
+ 
+ 	kfree(chann);
+-
+ 	return 0;
+ }
+ 
+ void ksmbd_sessions_deregister(struct ksmbd_conn *conn)
+ {
+ 	struct ksmbd_session *sess;
++	unsigned long id;
+ 
++	down_write(&sessions_table_lock);
+ 	if (conn->binding) {
+ 		int bkt;
++		struct hlist_node *tmp;
+ 
+-		down_write(&sessions_table_lock);
+-		hash_for_each(sessions_table, bkt, sess, hlist) {
+-			if (!ksmbd_chann_del(conn, sess)) {
+-				up_write(&sessions_table_lock);
+-				goto sess_destroy;
++		hash_for_each_safe(sessions_table, bkt, tmp, sess, hlist) {
++			if (!ksmbd_chann_del(conn, sess) &&
++			    xa_empty(&sess->ksmbd_chann_list)) {
++				hash_del(&sess->hlist);
++				ksmbd_session_destroy(sess);
+ 			}
+ 		}
+-		up_write(&sessions_table_lock);
+-	} else {
+-		unsigned long id;
+-
+-		xa_for_each(&conn->sessions, id, sess) {
+-			if (!ksmbd_chann_del(conn, sess))
+-				goto sess_destroy;
+-		}
+ 	}
+ 
+-	return;
++	xa_for_each(&conn->sessions, id, sess) {
++		unsigned long chann_id;
++		struct channel *chann;
++
++		xa_for_each(&sess->ksmbd_chann_list, chann_id, chann) {
++			if (chann->conn != conn)
++				ksmbd_conn_set_exiting(chann->conn);
++		}
+ 
+-sess_destroy:
+-	if (xa_empty(&sess->ksmbd_chann_list)) {
+-		xa_erase(&conn->sessions, sess->id);
+-		ksmbd_session_destroy(sess);
++		ksmbd_chann_del(conn, sess);
++		if (xa_empty(&sess->ksmbd_chann_list)) {
++			xa_erase(&conn->sessions, sess->id);
++			hash_del(&sess->hlist);
++			ksmbd_session_destroy(sess);
++		}
+ 	}
++	up_write(&sessions_table_lock);
+ }
+ 
+ struct ksmbd_session *ksmbd_session_lookup(struct ksmbd_conn *conn,
+ 					   unsigned long long id)
+ {
+-	return xa_load(&conn->sessions, id);
++	struct ksmbd_session *sess;
++
++	sess = xa_load(&conn->sessions, id);
++	if (sess)
++		sess->last_active = jiffies;
++	return sess;
+ }
+ 
+ struct ksmbd_session *ksmbd_session_lookup_slowpath(unsigned long long id)
+@@ -237,6 +264,8 @@ struct ksmbd_session *ksmbd_session_lookup_slowpath(unsigned long long id)
+ 
+ 	down_read(&sessions_table_lock);
+ 	sess = __session_lookup(id);
++	if (sess)
++		sess->last_active = jiffies;
+ 	up_read(&sessions_table_lock);
+ 
+ 	return sess;
+@@ -315,6 +344,8 @@ static struct ksmbd_session *__session_create(int protocol)
+ 	if (ksmbd_init_file_table(&sess->file_table))
+ 		goto error;
+ 
++	sess->last_active = jiffies;
++	sess->state = SMB2_SESSION_IN_PROGRESS;
+ 	set_session_flag(sess, protocol);
+ 	xa_init(&sess->tree_conns);
+ 	xa_init(&sess->ksmbd_chann_list);
+diff --git a/fs/ksmbd/mgmt/user_session.h b/fs/ksmbd/mgmt/user_session.h
+index b6a9e7a6aae45..f99d475b28db4 100644
+--- a/fs/ksmbd/mgmt/user_session.h
++++ b/fs/ksmbd/mgmt/user_session.h
+@@ -59,6 +59,7 @@ struct ksmbd_session {
+ 	__u8				smb3signingkey[SMB3_SIGN_KEY_SIZE];
+ 
+ 	struct ksmbd_file_table		file_table;
++	unsigned long			last_active;
+ };
+ 
+ static inline int test_session_flag(struct ksmbd_session *sess, int bit)
+diff --git a/fs/ksmbd/server.c b/fs/ksmbd/server.c
+index 0d8242789dc8f..dc76d7cf241f0 100644
+--- a/fs/ksmbd/server.c
++++ b/fs/ksmbd/server.c
+@@ -93,7 +93,8 @@ static inline int check_conn_state(struct ksmbd_work *work)
+ {
+ 	struct smb_hdr *rsp_hdr;
+ 
+-	if (ksmbd_conn_exiting(work) || ksmbd_conn_need_reconnect(work)) {
++	if (ksmbd_conn_exiting(work->conn) ||
++	    ksmbd_conn_need_reconnect(work->conn)) {
+ 		rsp_hdr = work->response_buf;
+ 		rsp_hdr->Status.CifsError = STATUS_CONNECTION_DISCONNECTED;
+ 		return 1;
+@@ -606,6 +607,7 @@ err_unregister:
+ static void __exit ksmbd_server_exit(void)
+ {
+ 	ksmbd_server_shutdown();
++	rcu_barrier();
+ 	ksmbd_release_inode_hash();
+ }
+ 
+diff --git a/fs/ksmbd/smb2pdu.c b/fs/ksmbd/smb2pdu.c
+index 67b7e766a06ba..da66abc7c4e18 100644
+--- a/fs/ksmbd/smb2pdu.c
++++ b/fs/ksmbd/smb2pdu.c
+@@ -248,7 +248,7 @@ int init_smb2_neg_rsp(struct ksmbd_work *work)
+ 
+ 	rsp = smb2_get_msg(work->response_buf);
+ 
+-	WARN_ON(ksmbd_conn_good(work));
++	WARN_ON(ksmbd_conn_good(conn));
+ 
+ 	rsp->StructureSize = cpu_to_le16(65);
+ 	ksmbd_debug(SMB, "conn->dialect 0x%x\n", conn->dialect);
+@@ -277,7 +277,7 @@ int init_smb2_neg_rsp(struct ksmbd_work *work)
+ 		rsp->SecurityMode |= SMB2_NEGOTIATE_SIGNING_REQUIRED_LE;
+ 	conn->use_spnego = true;
+ 
+-	ksmbd_conn_set_need_negotiate(work);
++	ksmbd_conn_set_need_negotiate(conn);
+ 	return 0;
+ }
+ 
+@@ -561,7 +561,7 @@ int smb2_check_user_session(struct ksmbd_work *work)
+ 	    cmd == SMB2_SESSION_SETUP_HE)
+ 		return 0;
+ 
+-	if (!ksmbd_conn_good(work))
++	if (!ksmbd_conn_good(conn))
+ 		return -EINVAL;
+ 
+ 	sess_id = le64_to_cpu(req_hdr->SessionId);
+@@ -594,7 +594,7 @@ static void destroy_previous_session(struct ksmbd_conn *conn,
+ 
+ 	prev_sess->state = SMB2_SESSION_EXPIRED;
+ 	xa_for_each(&prev_sess->ksmbd_chann_list, index, chann)
+-		chann->conn->status = KSMBD_SESS_EXITING;
++		ksmbd_conn_set_exiting(chann->conn);
+ }
+ 
+ /**
+@@ -1079,7 +1079,7 @@ int smb2_handle_negotiate(struct ksmbd_work *work)
+ 
+ 	ksmbd_debug(SMB, "Received negotiate request\n");
+ 	conn->need_neg = false;
+-	if (ksmbd_conn_good(work)) {
++	if (ksmbd_conn_good(conn)) {
+ 		pr_err("conn->tcp_status is already in CifsGood State\n");
+ 		work->send_no_response = 1;
+ 		return rc;
+@@ -1233,7 +1233,7 @@ int smb2_handle_negotiate(struct ksmbd_work *work)
+ 	}
+ 
+ 	conn->srv_sec_mode = le16_to_cpu(rsp->SecurityMode);
+-	ksmbd_conn_set_need_negotiate(work);
++	ksmbd_conn_set_need_negotiate(conn);
+ 
+ err_out:
+ 	if (rc < 0)
+@@ -1459,7 +1459,7 @@ static int ntlm_authenticate(struct ksmbd_work *work)
+ 		 * Reuse session if anonymous try to connect
+ 		 * on reauthetication.
+ 		 */
+-		if (ksmbd_anonymous_user(user)) {
++		if (conn->binding == false && ksmbd_anonymous_user(user)) {
+ 			ksmbd_free_user(user);
+ 			return 0;
+ 		}
+@@ -1473,7 +1473,7 @@ static int ntlm_authenticate(struct ksmbd_work *work)
+ 		sess->user = user;
+ 	}
+ 
+-	if (user_guest(sess->user)) {
++	if (conn->binding == false && user_guest(sess->user)) {
+ 		rsp->SessionFlags = SMB2_SESSION_FLAG_IS_GUEST_LE;
+ 	} else {
+ 		struct authenticate_message *authblob;
+@@ -1656,6 +1656,7 @@ int smb2_sess_setup(struct ksmbd_work *work)
+ 	rsp->SecurityBufferLength = 0;
+ 	inc_rfc1001_len(work->response_buf, 9);
+ 
++	ksmbd_conn_lock(conn);
+ 	if (!req->hdr.SessionId) {
+ 		sess = ksmbd_smb2_session_create();
+ 		if (!sess) {
+@@ -1703,11 +1704,22 @@ int smb2_sess_setup(struct ksmbd_work *work)
+ 			goto out_err;
+ 		}
+ 
++		if (ksmbd_conn_need_reconnect(conn)) {
++			rc = -EFAULT;
++			sess = NULL;
++			goto out_err;
++		}
++
+ 		if (ksmbd_session_lookup(conn, sess_id)) {
+ 			rc = -EACCES;
+ 			goto out_err;
+ 		}
+ 
++		if (user_guest(sess->user)) {
++			rc = -EOPNOTSUPP;
++			goto out_err;
++		}
++
+ 		conn->binding = true;
+ 	} else if ((conn->dialect < SMB30_PROT_ID ||
+ 		    server_conf.flags & KSMBD_GLOBAL_FLAG_SMB3_MULTICHANNEL) &&
+@@ -1722,12 +1734,20 @@ int smb2_sess_setup(struct ksmbd_work *work)
+ 			rc = -ENOENT;
+ 			goto out_err;
+ 		}
++
++		if (sess->state == SMB2_SESSION_EXPIRED) {
++			rc = -EFAULT;
++			goto out_err;
++		}
++
++		if (ksmbd_conn_need_reconnect(conn)) {
++			rc = -EFAULT;
++			sess = NULL;
++			goto out_err;
++		}
+ 	}
+ 	work->sess = sess;
+ 
+-	if (sess->state == SMB2_SESSION_EXPIRED)
+-		sess->state = SMB2_SESSION_IN_PROGRESS;
+-
+ 	negblob_off = le16_to_cpu(req->SecurityBufferOffset);
+ 	negblob_len = le16_to_cpu(req->SecurityBufferLength);
+ 	if (negblob_off < offsetof(struct smb2_sess_setup_req, Buffer) ||
+@@ -1757,8 +1777,10 @@ int smb2_sess_setup(struct ksmbd_work *work)
+ 				goto out_err;
+ 			}
+ 
+-			ksmbd_conn_set_good(work);
+-			sess->state = SMB2_SESSION_VALID;
++			if (!ksmbd_conn_need_reconnect(conn)) {
++				ksmbd_conn_set_good(conn);
++				sess->state = SMB2_SESSION_VALID;
++			}
+ 			kfree(sess->Preauth_HashValue);
+ 			sess->Preauth_HashValue = NULL;
+ 		} else if (conn->preferred_auth_mech == KSMBD_AUTH_NTLMSSP) {
+@@ -1780,8 +1802,10 @@ int smb2_sess_setup(struct ksmbd_work *work)
+ 				if (rc)
+ 					goto out_err;
+ 
+-				ksmbd_conn_set_good(work);
+-				sess->state = SMB2_SESSION_VALID;
++				if (!ksmbd_conn_need_reconnect(conn)) {
++					ksmbd_conn_set_good(conn);
++					sess->state = SMB2_SESSION_VALID;
++				}
+ 				if (conn->binding) {
+ 					struct preauth_session *preauth_sess;
+ 
+@@ -1794,6 +1818,10 @@ int smb2_sess_setup(struct ksmbd_work *work)
+ 				}
+ 				kfree(sess->Preauth_HashValue);
+ 				sess->Preauth_HashValue = NULL;
++			} else {
++				pr_info_ratelimited("Unknown NTLMSSP message type : 0x%x\n",
++						le32_to_cpu(negblob->MessageType));
++				rc = -EINVAL;
+ 			}
+ 		} else {
+ 			/* TODO: need one more negotiation */
+@@ -1816,6 +1844,8 @@ out_err:
+ 		rsp->hdr.Status = STATUS_NETWORK_SESSION_EXPIRED;
+ 	else if (rc == -ENOMEM)
+ 		rsp->hdr.Status = STATUS_INSUFFICIENT_RESOURCES;
++	else if (rc == -EOPNOTSUPP)
++		rsp->hdr.Status = STATUS_NOT_SUPPORTED;
+ 	else if (rc)
+ 		rsp->hdr.Status = STATUS_LOGON_FAILURE;
+ 
+@@ -1843,14 +1873,17 @@ out_err:
+ 			if (sess->user && sess->user->flags & KSMBD_USER_FLAG_DELAY_SESSION)
+ 				try_delay = true;
+ 
+-			xa_erase(&conn->sessions, sess->id);
+-			ksmbd_session_destroy(sess);
+-			work->sess = NULL;
+-			if (try_delay)
++			sess->last_active = jiffies;
++			sess->state = SMB2_SESSION_EXPIRED;
++			if (try_delay) {
++				ksmbd_conn_set_need_reconnect(conn);
+ 				ssleep(5);
++				ksmbd_conn_set_need_negotiate(conn);
++			}
+ 		}
+ 	}
+ 
++	ksmbd_conn_unlock(conn);
+ 	return rc;
+ }
+ 
+@@ -2048,11 +2081,12 @@ int smb2_tree_disconnect(struct ksmbd_work *work)
+ 
+ 	ksmbd_debug(SMB, "request\n");
+ 
+-	if (!tcon) {
++	if (!tcon || test_and_set_bit(TREE_CONN_EXPIRE, &tcon->status)) {
+ 		struct smb2_tree_disconnect_req *req =
+ 			smb2_get_msg(work->request_buf);
+ 
+ 		ksmbd_debug(SMB, "Invalid tid %d\n", req->hdr.Id.SyncId.TreeId);
++
+ 		rsp->hdr.Status = STATUS_NETWORK_NAME_DELETED;
+ 		smb2_set_err_rsp(work);
+ 		return 0;
+@@ -2074,21 +2108,25 @@ int smb2_session_logoff(struct ksmbd_work *work)
+ {
+ 	struct ksmbd_conn *conn = work->conn;
+ 	struct smb2_logoff_rsp *rsp = smb2_get_msg(work->response_buf);
+-	struct ksmbd_session *sess = work->sess;
++	struct ksmbd_session *sess;
++	struct smb2_logoff_req *req = smb2_get_msg(work->request_buf);
++	u64 sess_id = le64_to_cpu(req->hdr.SessionId);
+ 
+ 	rsp->StructureSize = cpu_to_le16(4);
+ 	inc_rfc1001_len(work->response_buf, 4);
+ 
+ 	ksmbd_debug(SMB, "request\n");
+ 
+-	/* setting CifsExiting here may race with start_tcp_sess */
+-	ksmbd_conn_set_need_reconnect(work);
++	ksmbd_all_conn_set_status(sess_id, KSMBD_SESS_NEED_RECONNECT);
+ 	ksmbd_close_session_fds(work);
+-	ksmbd_conn_wait_idle(conn);
++	ksmbd_conn_wait_idle(conn, sess_id);
+ 
++	/*
++	 * Re-lookup session to validate if session is deleted
++	 * while waiting request complete
++	 */
++	sess = ksmbd_session_lookup_all(conn, sess_id);
+ 	if (ksmbd_tree_conn_session_logoff(sess)) {
+-		struct smb2_logoff_req *req = smb2_get_msg(work->request_buf);
+-
+ 		ksmbd_debug(SMB, "Invalid tid %d\n", req->hdr.Id.SyncId.TreeId);
+ 		rsp->hdr.Status = STATUS_NETWORK_NAME_DELETED;
+ 		smb2_set_err_rsp(work);
+@@ -2100,9 +2138,7 @@ int smb2_session_logoff(struct ksmbd_work *work)
+ 
+ 	ksmbd_free_user(sess->user);
+ 	sess->user = NULL;
+-
+-	/* let start_tcp_sess free connection info now */
+-	ksmbd_conn_set_need_negotiate(work);
++	ksmbd_all_conn_set_status(sess_id, KSMBD_SESS_NEED_NEGOTIATE);
+ 	return 0;
+ }
+ 
+@@ -4907,6 +4943,9 @@ static int smb2_get_info_filesystem(struct ksmbd_work *work,
+ 	int rc = 0, len;
+ 	int fs_infoclass_size = 0;
+ 
++	if (!share->path)
++		return -EIO;
++
+ 	rc = kern_path(share->path, LOOKUP_NO_SYMLINKS, &path);
+ 	if (rc) {
+ 		pr_err("cannot create vfs path\n");
+@@ -6931,7 +6970,7 @@ int smb2_lock(struct ksmbd_work *work)
+ 
+ 		nolock = 1;
+ 		/* check locks in connection list */
+-		read_lock(&conn_list_lock);
++		down_read(&conn_list_lock);
+ 		list_for_each_entry(conn, &conn_list, conns_list) {
+ 			spin_lock(&conn->llist_lock);
+ 			list_for_each_entry_safe(cmp_lock, tmp2, &conn->lock_list, clist) {
+@@ -6948,7 +6987,7 @@ int smb2_lock(struct ksmbd_work *work)
+ 						list_del(&cmp_lock->flist);
+ 						list_del(&cmp_lock->clist);
+ 						spin_unlock(&conn->llist_lock);
+-						read_unlock(&conn_list_lock);
++						up_read(&conn_list_lock);
+ 
+ 						locks_free_lock(cmp_lock->fl);
+ 						kfree(cmp_lock);
+@@ -6970,7 +7009,7 @@ int smb2_lock(struct ksmbd_work *work)
+ 				    cmp_lock->start > smb_lock->start &&
+ 				    cmp_lock->start < smb_lock->end) {
+ 					spin_unlock(&conn->llist_lock);
+-					read_unlock(&conn_list_lock);
++					up_read(&conn_list_lock);
+ 					pr_err("previous lock conflict with zero byte lock range\n");
+ 					goto out;
+ 				}
+@@ -6979,7 +7018,7 @@ int smb2_lock(struct ksmbd_work *work)
+ 				    smb_lock->start > cmp_lock->start &&
+ 				    smb_lock->start < cmp_lock->end) {
+ 					spin_unlock(&conn->llist_lock);
+-					read_unlock(&conn_list_lock);
++					up_read(&conn_list_lock);
+ 					pr_err("current lock conflict with zero byte lock range\n");
+ 					goto out;
+ 				}
+@@ -6990,14 +7029,14 @@ int smb2_lock(struct ksmbd_work *work)
+ 				      cmp_lock->end >= smb_lock->end)) &&
+ 				    !cmp_lock->zero_len && !smb_lock->zero_len) {
+ 					spin_unlock(&conn->llist_lock);
+-					read_unlock(&conn_list_lock);
++					up_read(&conn_list_lock);
+ 					pr_err("Not allow lock operation on exclusive lock range\n");
+ 					goto out;
+ 				}
+ 			}
+ 			spin_unlock(&conn->llist_lock);
+ 		}
+-		read_unlock(&conn_list_lock);
++		up_read(&conn_list_lock);
+ out_check_cl:
+ 		if (smb_lock->fl->fl_type == F_UNLCK && nolock) {
+ 			pr_err("Try to unlock nolocked range\n");
+diff --git a/fs/ksmbd/smb2pdu.h b/fs/ksmbd/smb2pdu.h
+index 9420dd2813fb7..977a9ee6a5b31 100644
+--- a/fs/ksmbd/smb2pdu.h
++++ b/fs/ksmbd/smb2pdu.h
+@@ -61,6 +61,8 @@ struct preauth_integrity_info {
+ #define SMB2_SESSION_IN_PROGRESS	BIT(0)
+ #define SMB2_SESSION_VALID		BIT(1)
+ 
++#define SMB2_SESSION_TIMEOUT		(10 * HZ)
++
+ struct create_durable_req_v2 {
+ 	struct create_context ccontext;
+ 	__u8   Name[8];
+diff --git a/fs/ksmbd/transport_tcp.c b/fs/ksmbd/transport_tcp.c
+index 20e85e2701f26..eff7a1d793f00 100644
+--- a/fs/ksmbd/transport_tcp.c
++++ b/fs/ksmbd/transport_tcp.c
+@@ -333,7 +333,7 @@ static int ksmbd_tcp_readv(struct tcp_transport *t, struct kvec *iov_orig,
+ 		if (length == -EINTR) {
+ 			total_read = -ESHUTDOWN;
+ 			break;
+-		} else if (conn->status == KSMBD_SESS_NEED_RECONNECT) {
++		} else if (ksmbd_conn_need_reconnect(conn)) {
+ 			total_read = -EAGAIN;
+ 			break;
+ 		} else if (length == -ERESTARTSYS || length == -EAGAIN) {
+diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
+index 2a0ca5c7f082a..660ccfaf463e4 100644
+--- a/fs/nfs/nfs4state.c
++++ b/fs/nfs/nfs4state.c
+@@ -67,6 +67,8 @@
+ 
+ #define OPENOWNER_POOL_SIZE	8
+ 
++static void nfs4_state_start_reclaim_reboot(struct nfs_client *clp);
++
+ const nfs4_stateid zero_stateid = {
+ 	{ .data = { 0 } },
+ 	.type = NFS4_SPECIAL_STATEID_TYPE,
+@@ -330,6 +332,8 @@ do_confirm:
+ 	status = nfs4_proc_create_session(clp, cred);
+ 	if (status != 0)
+ 		goto out;
++	if (!(clp->cl_exchange_flags & EXCHGID4_FLAG_CONFIRMED_R))
++		nfs4_state_start_reclaim_reboot(clp);
+ 	nfs41_finish_session_reset(clp);
+ 	nfs_mark_client_ready(clp, NFS_CS_READY);
+ out:
+diff --git a/fs/nilfs2/bmap.c b/fs/nilfs2/bmap.c
+index 798a2c1b38c6c..7a8f166f2c8d8 100644
+--- a/fs/nilfs2/bmap.c
++++ b/fs/nilfs2/bmap.c
+@@ -67,20 +67,28 @@ int nilfs_bmap_lookup_at_level(struct nilfs_bmap *bmap, __u64 key, int level,
+ 
+ 	down_read(&bmap->b_sem);
+ 	ret = bmap->b_ops->bop_lookup(bmap, key, level, ptrp);
+-	if (ret < 0) {
+-		ret = nilfs_bmap_convert_error(bmap, __func__, ret);
++	if (ret < 0)
+ 		goto out;
+-	}
++
+ 	if (NILFS_BMAP_USE_VBN(bmap)) {
+ 		ret = nilfs_dat_translate(nilfs_bmap_get_dat(bmap), *ptrp,
+ 					  &blocknr);
+ 		if (!ret)
+ 			*ptrp = blocknr;
++		else if (ret == -ENOENT) {
++			/*
++			 * If there was no valid entry in DAT for the block
++			 * address obtained by b_ops->bop_lookup, then pass
++			 * internal code -EINVAL to nilfs_bmap_convert_error
++			 * to treat it as metadata corruption.
++			 */
++			ret = -EINVAL;
++		}
+ 	}
+ 
+  out:
+ 	up_read(&bmap->b_sem);
+-	return ret;
++	return nilfs_bmap_convert_error(bmap, __func__, ret);
+ }
+ 
+ int nilfs_bmap_lookup_contig(struct nilfs_bmap *bmap, __u64 key, __u64 *ptrp,
+diff --git a/fs/nilfs2/segment.c b/fs/nilfs2/segment.c
+index 228659612c0d7..ac949fd7603ff 100644
+--- a/fs/nilfs2/segment.c
++++ b/fs/nilfs2/segment.c
+@@ -2041,6 +2041,9 @@ static int nilfs_segctor_do_construct(struct nilfs_sc_info *sci, int mode)
+ 	struct the_nilfs *nilfs = sci->sc_super->s_fs_info;
+ 	int err;
+ 
++	if (sb_rdonly(sci->sc_super))
++		return -EROFS;
++
+ 	nilfs_sc_cstage_set(sci, NILFS_ST_INIT);
+ 	sci->sc_cno = nilfs->ns_cno;
+ 
+@@ -2724,7 +2727,7 @@ static void nilfs_segctor_write_out(struct nilfs_sc_info *sci)
+ 
+ 		flush_work(&sci->sc_iput_work);
+ 
+-	} while (ret && retrycount-- > 0);
++	} while (ret && ret != -EROFS && retrycount-- > 0);
+ }
+ 
+ /**
+diff --git a/fs/ntfs3/fslog.c b/fs/ntfs3/fslog.c
+index c6eb371a36951..bf73964472845 100644
+--- a/fs/ntfs3/fslog.c
++++ b/fs/ntfs3/fslog.c
+@@ -2575,7 +2575,7 @@ static int read_next_log_rec(struct ntfs_log *log, struct lcb *lcb, u64 *lsn)
+ 	return find_log_rec(log, *lsn, lcb);
+ }
+ 
+-static inline bool check_index_header(const struct INDEX_HDR *hdr, size_t bytes)
++bool check_index_header(const struct INDEX_HDR *hdr, size_t bytes)
+ {
+ 	__le16 mask;
+ 	u32 min_de, de_off, used, total;
+@@ -4256,6 +4256,10 @@ check_attribute_names:
+ 	rec_len -= t32;
+ 
+ 	attr_names = kmemdup(Add2Ptr(lrh, t32), rec_len, GFP_NOFS);
++	if (!attr_names) {
++		err = -ENOMEM;
++		goto out;
++	}
+ 
+ 	lcb_put(lcb);
+ 	lcb = NULL;
+diff --git a/fs/ntfs3/index.c b/fs/ntfs3/index.c
+index 51ab759546403..7a1e01a2ed9ae 100644
+--- a/fs/ntfs3/index.c
++++ b/fs/ntfs3/index.c
+@@ -725,9 +725,13 @@ static struct NTFS_DE *hdr_find_e(const struct ntfs_index *indx,
+ 	u32 e_size, e_key_len;
+ 	u32 end = le32_to_cpu(hdr->used);
+ 	u32 off = le32_to_cpu(hdr->de_off);
++	u32 total = le32_to_cpu(hdr->total);
+ 	u16 offs[128];
+ 
+ fill_table:
++	if (end > total)
++		return NULL;
++
+ 	if (off + sizeof(struct NTFS_DE) > end)
+ 		return NULL;
+ 
+@@ -844,6 +848,10 @@ static inline struct NTFS_DE *hdr_delete_de(struct INDEX_HDR *hdr,
+ 	u32 off = PtrOffset(hdr, re);
+ 	int bytes = used - (off + esize);
+ 
++	/* check INDEX_HDR valid before using INDEX_HDR */
++	if (!check_index_header(hdr, le32_to_cpu(hdr->total)))
++		return NULL;
++
+ 	if (off >= used || esize < sizeof(struct NTFS_DE) ||
+ 	    bytes < sizeof(struct NTFS_DE))
+ 		return NULL;
+diff --git a/fs/ntfs3/inode.c b/fs/ntfs3/inode.c
+index 309d9b46b5d5c..ce6bb3bd86b6e 100644
+--- a/fs/ntfs3/inode.c
++++ b/fs/ntfs3/inode.c
+@@ -259,7 +259,6 @@ next_attr:
+ 			goto out;
+ 
+ 		root = Add2Ptr(attr, roff);
+-		is_root = true;
+ 
+ 		if (attr->name_len != ARRAY_SIZE(I30_NAME) ||
+ 		    memcmp(attr_name(attr), I30_NAME, sizeof(I30_NAME)))
+@@ -272,6 +271,7 @@ next_attr:
+ 		if (!is_dir)
+ 			goto next_attr;
+ 
++		is_root = true;
+ 		ni->ni_flags |= NI_FLAG_DIR;
+ 
+ 		err = indx_init(&ni->dir, sbi, attr, INDEX_MUTEX_I30);
+diff --git a/fs/ntfs3/ntfs_fs.h b/fs/ntfs3/ntfs_fs.h
+index 80072e5f96f70..15296f5690b5a 100644
+--- a/fs/ntfs3/ntfs_fs.h
++++ b/fs/ntfs3/ntfs_fs.h
+@@ -581,6 +581,7 @@ int ni_rename(struct ntfs_inode *dir_ni, struct ntfs_inode *new_dir_ni,
+ bool ni_is_dirty(struct inode *inode);
+ 
+ /* Globals from fslog.c */
++bool check_index_header(const struct INDEX_HDR *hdr, size_t bytes);
+ int log_replay(struct ntfs_inode *ni, bool *initialized);
+ 
+ /* Globals from fsntfs.c */
+diff --git a/fs/pstore/pmsg.c b/fs/pstore/pmsg.c
+index ab82e5f053464..b31c9c72d90b4 100644
+--- a/fs/pstore/pmsg.c
++++ b/fs/pstore/pmsg.c
+@@ -7,10 +7,9 @@
+ #include <linux/device.h>
+ #include <linux/fs.h>
+ #include <linux/uaccess.h>
+-#include <linux/rtmutex.h>
+ #include "internal.h"
+ 
+-static DEFINE_RT_MUTEX(pmsg_lock);
++static DEFINE_MUTEX(pmsg_lock);
+ 
+ static ssize_t write_pmsg(struct file *file, const char __user *buf,
+ 			  size_t count, loff_t *ppos)
+@@ -29,9 +28,9 @@ static ssize_t write_pmsg(struct file *file, const char __user *buf,
+ 	if (!access_ok(buf, count))
+ 		return -EFAULT;
+ 
+-	rt_mutex_lock(&pmsg_lock);
++	mutex_lock(&pmsg_lock);
+ 	ret = psinfo->write_user(&record, buf);
+-	rt_mutex_unlock(&pmsg_lock);
++	mutex_unlock(&pmsg_lock);
+ 	return ret ? ret : count;
+ }
+ 
+diff --git a/fs/reiserfs/xattr_security.c b/fs/reiserfs/xattr_security.c
+index 41c0ea84fbffc..7f227afc742ba 100644
+--- a/fs/reiserfs/xattr_security.c
++++ b/fs/reiserfs/xattr_security.c
+@@ -82,11 +82,15 @@ int reiserfs_security_write(struct reiserfs_transaction_handle *th,
+ 			    struct inode *inode,
+ 			    struct reiserfs_security_handle *sec)
+ {
++	char xattr_name[XATTR_NAME_MAX + 1] = XATTR_SECURITY_PREFIX;
+ 	int error;
+-	if (strlen(sec->name) < sizeof(XATTR_SECURITY_PREFIX))
++
++	if (XATTR_SECURITY_PREFIX_LEN + strlen(sec->name) > XATTR_NAME_MAX)
+ 		return -EINVAL;
+ 
+-	error = reiserfs_xattr_set_handle(th, inode, sec->name, sec->value,
++	strlcat(xattr_name, sec->name, sizeof(xattr_name));
++
++	error = reiserfs_xattr_set_handle(th, inode, xattr_name, sec->value,
+ 					  sec->length, XATTR_CREATE);
+ 	if (error == -ENODATA || error == -EOPNOTSUPP)
+ 		error = 0;
+diff --git a/fs/ubifs/dir.c b/fs/ubifs/dir.c
+index 1505539f6fe97..ef0499edc248f 100644
+--- a/fs/ubifs/dir.c
++++ b/fs/ubifs/dir.c
+@@ -358,7 +358,6 @@ static struct inode *create_whiteout(struct inode *dir, struct dentry *dentry)
+ 	umode_t mode = S_IFCHR | WHITEOUT_MODE;
+ 	struct inode *inode;
+ 	struct ubifs_info *c = dir->i_sb->s_fs_info;
+-	struct fscrypt_name nm;
+ 
+ 	/*
+ 	 * Create an inode('nlink = 1') for whiteout without updating journal,
+@@ -369,10 +368,6 @@ static struct inode *create_whiteout(struct inode *dir, struct dentry *dentry)
+ 	dbg_gen("dent '%pd', mode %#hx in dir ino %lu",
+ 		dentry, mode, dir->i_ino);
+ 
+-	err = fscrypt_setup_filename(dir, &dentry->d_name, 0, &nm);
+-	if (err)
+-		return ERR_PTR(err);
+-
+ 	inode = ubifs_new_inode(c, dir, mode, false);
+ 	if (IS_ERR(inode)) {
+ 		err = PTR_ERR(inode);
+@@ -395,7 +390,6 @@ out_inode:
+ 	make_bad_inode(inode);
+ 	iput(inode);
+ out_free:
+-	fscrypt_free_filename(&nm);
+ 	ubifs_err(c, "cannot create whiteout file, error %d", err);
+ 	return ERR_PTR(err);
+ }
+@@ -492,6 +486,7 @@ static int ubifs_tmpfile(struct mnt_idmap *idmap, struct inode *dir,
+ 	unlock_2_inodes(dir, inode);
+ 
+ 	ubifs_release_budget(c, &req);
++	fscrypt_free_filename(&nm);
+ 
+ 	return finish_open_simple(file, 0);
+ 
+diff --git a/fs/ubifs/tnc.c b/fs/ubifs/tnc.c
+index 2469f72eeaabb..6b7d95b65f4b6 100644
+--- a/fs/ubifs/tnc.c
++++ b/fs/ubifs/tnc.c
+@@ -44,6 +44,33 @@ enum {
+ 	NOT_ON_MEDIA = 3,
+ };
+ 
++static void do_insert_old_idx(struct ubifs_info *c,
++			      struct ubifs_old_idx *old_idx)
++{
++	struct ubifs_old_idx *o;
++	struct rb_node **p, *parent = NULL;
++
++	p = &c->old_idx.rb_node;
++	while (*p) {
++		parent = *p;
++		o = rb_entry(parent, struct ubifs_old_idx, rb);
++		if (old_idx->lnum < o->lnum)
++			p = &(*p)->rb_left;
++		else if (old_idx->lnum > o->lnum)
++			p = &(*p)->rb_right;
++		else if (old_idx->offs < o->offs)
++			p = &(*p)->rb_left;
++		else if (old_idx->offs > o->offs)
++			p = &(*p)->rb_right;
++		else {
++			ubifs_err(c, "old idx added twice!");
++			kfree(old_idx);
++		}
++	}
++	rb_link_node(&old_idx->rb, parent, p);
++	rb_insert_color(&old_idx->rb, &c->old_idx);
++}
++
+ /**
+  * insert_old_idx - record an index node obsoleted since the last commit start.
+  * @c: UBIFS file-system description object
+@@ -69,35 +96,15 @@ enum {
+  */
+ static int insert_old_idx(struct ubifs_info *c, int lnum, int offs)
+ {
+-	struct ubifs_old_idx *old_idx, *o;
+-	struct rb_node **p, *parent = NULL;
++	struct ubifs_old_idx *old_idx;
+ 
+ 	old_idx = kmalloc(sizeof(struct ubifs_old_idx), GFP_NOFS);
+ 	if (unlikely(!old_idx))
+ 		return -ENOMEM;
+ 	old_idx->lnum = lnum;
+ 	old_idx->offs = offs;
++	do_insert_old_idx(c, old_idx);
+ 
+-	p = &c->old_idx.rb_node;
+-	while (*p) {
+-		parent = *p;
+-		o = rb_entry(parent, struct ubifs_old_idx, rb);
+-		if (lnum < o->lnum)
+-			p = &(*p)->rb_left;
+-		else if (lnum > o->lnum)
+-			p = &(*p)->rb_right;
+-		else if (offs < o->offs)
+-			p = &(*p)->rb_left;
+-		else if (offs > o->offs)
+-			p = &(*p)->rb_right;
+-		else {
+-			ubifs_err(c, "old idx added twice!");
+-			kfree(old_idx);
+-			return 0;
+-		}
+-	}
+-	rb_link_node(&old_idx->rb, parent, p);
+-	rb_insert_color(&old_idx->rb, &c->old_idx);
+ 	return 0;
+ }
+ 
+@@ -199,23 +206,6 @@ static struct ubifs_znode *copy_znode(struct ubifs_info *c,
+ 	__set_bit(DIRTY_ZNODE, &zn->flags);
+ 	__clear_bit(COW_ZNODE, &zn->flags);
+ 
+-	ubifs_assert(c, !ubifs_zn_obsolete(znode));
+-	__set_bit(OBSOLETE_ZNODE, &znode->flags);
+-
+-	if (znode->level != 0) {
+-		int i;
+-		const int n = zn->child_cnt;
+-
+-		/* The children now have new parent */
+-		for (i = 0; i < n; i++) {
+-			struct ubifs_zbranch *zbr = &zn->zbranch[i];
+-
+-			if (zbr->znode)
+-				zbr->znode->parent = zn;
+-		}
+-	}
+-
+-	atomic_long_inc(&c->dirty_zn_cnt);
+ 	return zn;
+ }
+ 
+@@ -233,6 +223,42 @@ static int add_idx_dirt(struct ubifs_info *c, int lnum, int dirt)
+ 	return ubifs_add_dirt(c, lnum, dirt);
+ }
+ 
++/**
++ * replace_znode - replace old znode with new znode.
++ * @c: UBIFS file-system description object
++ * @new_zn: new znode
++ * @old_zn: old znode
++ * @zbr: the branch of parent znode
++ *
++ * Replace old znode with new znode in TNC.
++ */
++static void replace_znode(struct ubifs_info *c, struct ubifs_znode *new_zn,
++			  struct ubifs_znode *old_zn, struct ubifs_zbranch *zbr)
++{
++	ubifs_assert(c, !ubifs_zn_obsolete(old_zn));
++	__set_bit(OBSOLETE_ZNODE, &old_zn->flags);
++
++	if (old_zn->level != 0) {
++		int i;
++		const int n = new_zn->child_cnt;
++
++		/* The children now have new parent */
++		for (i = 0; i < n; i++) {
++			struct ubifs_zbranch *child = &new_zn->zbranch[i];
++
++			if (child->znode)
++				child->znode->parent = new_zn;
++		}
++	}
++
++	zbr->znode = new_zn;
++	zbr->lnum = 0;
++	zbr->offs = 0;
++	zbr->len = 0;
++
++	atomic_long_inc(&c->dirty_zn_cnt);
++}
++
+ /**
+  * dirty_cow_znode - ensure a znode is not being committed.
+  * @c: UBIFS file-system description object
+@@ -265,28 +291,32 @@ static struct ubifs_znode *dirty_cow_znode(struct ubifs_info *c,
+ 		return zn;
+ 
+ 	if (zbr->len) {
+-		err = insert_old_idx(c, zbr->lnum, zbr->offs);
+-		if (unlikely(err))
+-			/*
+-			 * Obsolete znodes will be freed by tnc_destroy_cnext()
+-			 * or free_obsolete_znodes(), copied up znodes should
+-			 * be added back to tnc and freed by
+-			 * ubifs_destroy_tnc_subtree().
+-			 */
++		struct ubifs_old_idx *old_idx;
++
++		old_idx = kmalloc(sizeof(struct ubifs_old_idx), GFP_NOFS);
++		if (unlikely(!old_idx)) {
++			err = -ENOMEM;
+ 			goto out;
++		}
++		old_idx->lnum = zbr->lnum;
++		old_idx->offs = zbr->offs;
++
+ 		err = add_idx_dirt(c, zbr->lnum, zbr->len);
+-	} else
+-		err = 0;
++		if (err) {
++			kfree(old_idx);
++			goto out;
++		}
+ 
+-out:
+-	zbr->znode = zn;
+-	zbr->lnum = 0;
+-	zbr->offs = 0;
+-	zbr->len = 0;
++		do_insert_old_idx(c, old_idx);
++	}
++
++	replace_znode(c, zn, znode, zbr);
+ 
+-	if (unlikely(err))
+-		return ERR_PTR(err);
+ 	return zn;
++
++out:
++	kfree(zn);
++	return ERR_PTR(err);
+ }
+ 
+ /**
+diff --git a/fs/xfs/libxfs/xfs_sb.c b/fs/xfs/libxfs/xfs_sb.c
+index 99cc03a298e21..ba0f17bc1dc03 100644
+--- a/fs/xfs/libxfs/xfs_sb.c
++++ b/fs/xfs/libxfs/xfs_sb.c
+@@ -72,7 +72,8 @@ xfs_sb_validate_v5_features(
+ }
+ 
+ /*
+- * We support all XFS versions newer than a v4 superblock with V2 directories.
++ * We current support XFS v5 formats with known features and v4 superblocks with
++ * at least V2 directories.
+  */
+ bool
+ xfs_sb_good_version(
+@@ -86,16 +87,16 @@ xfs_sb_good_version(
+ 	if (xfs_sb_is_v5(sbp))
+ 		return xfs_sb_validate_v5_features(sbp);
+ 
++	/* versions prior to v4 are not supported */
++	if (XFS_SB_VERSION_NUM(sbp) != XFS_SB_VERSION_4)
++		return false;
++
+ 	/* We must not have any unknown v4 feature bits set */
+ 	if ((sbp->sb_versionnum & ~XFS_SB_VERSION_OKBITS) ||
+ 	    ((sbp->sb_versionnum & XFS_SB_VERSION_MOREBITSBIT) &&
+ 	     (sbp->sb_features2 & ~XFS_SB_VERSION2_OKBITS)))
+ 		return false;
+ 
+-	/* versions prior to v4 are not supported */
+-	if (XFS_SB_VERSION_NUM(sbp) < XFS_SB_VERSION_4)
+-		return false;
+-
+ 	/* V4 filesystems need v2 directories and unwritten extents */
+ 	if (!(sbp->sb_versionnum & XFS_SB_VERSION_DIRV2BIT))
+ 		return false;
+diff --git a/include/acpi/acpi_bus.h b/include/acpi/acpi_bus.h
+index 57acb895c0381..a6affc0550b0f 100644
+--- a/include/acpi/acpi_bus.h
++++ b/include/acpi/acpi_bus.h
+@@ -52,7 +52,7 @@ bool acpi_dock_match(acpi_handle handle);
+ bool acpi_check_dsm(acpi_handle handle, const guid_t *guid, u64 rev, u64 funcs);
+ union acpi_object *acpi_evaluate_dsm(acpi_handle handle, const guid_t *guid,
+ 			u64 rev, u64 func, union acpi_object *argv4);
+-
++#ifdef CONFIG_ACPI
+ static inline union acpi_object *
+ acpi_evaluate_dsm_typed(acpi_handle handle, const guid_t *guid, u64 rev,
+ 			u64 func, union acpi_object *argv4,
+@@ -68,6 +68,7 @@ acpi_evaluate_dsm_typed(acpi_handle handle, const guid_t *guid, u64 rev,
+ 
+ 	return obj;
+ }
++#endif
+ 
+ #define	ACPI_INIT_DSM_ARGV4(cnt, eles)			\
+ 	{						\
+diff --git a/include/drm/drm_file.h b/include/drm/drm_file.h
+index 0d1f853092ab8..ecffe24e2b1b0 100644
+--- a/include/drm/drm_file.h
++++ b/include/drm/drm_file.h
+@@ -408,7 +408,8 @@ static inline bool drm_is_render_client(const struct drm_file *file_priv)
+  * Returns true if this is an open file of the compute acceleration node, i.e.
+  * &drm_file.minor of @file_priv is a accel minor.
+  *
+- * See also the :ref:`section on accel nodes <drm_accel_node>`.
++ * See also :doc:`Introduction to compute accelerators subsystem
++ * </accel/introduction>`.
+  */
+ static inline bool drm_is_accel_client(const struct drm_file *file_priv)
+ {
+diff --git a/include/drm/i915_pciids.h b/include/drm/i915_pciids.h
+index 4a4c190f76984..8f648c32a9657 100644
+--- a/include/drm/i915_pciids.h
++++ b/include/drm/i915_pciids.h
+@@ -706,7 +706,6 @@
+ 	INTEL_VGA_DEVICE(0x5693, info), \
+ 	INTEL_VGA_DEVICE(0x5694, info), \
+ 	INTEL_VGA_DEVICE(0x5695, info), \
+-	INTEL_VGA_DEVICE(0x5698, info), \
+ 	INTEL_VGA_DEVICE(0x56A5, info), \
+ 	INTEL_VGA_DEVICE(0x56A6, info), \
+ 	INTEL_VGA_DEVICE(0x56B0, info), \
+diff --git a/include/linux/blk-crypto.h b/include/linux/blk-crypto.h
+index 1e3e5d0adf120..5e5822c18ee41 100644
+--- a/include/linux/blk-crypto.h
++++ b/include/linux/blk-crypto.h
+@@ -95,8 +95,8 @@ int blk_crypto_init_key(struct blk_crypto_key *blk_key, const u8 *raw_key,
+ int blk_crypto_start_using_key(struct block_device *bdev,
+ 			       const struct blk_crypto_key *key);
+ 
+-int blk_crypto_evict_key(struct block_device *bdev,
+-			 const struct blk_crypto_key *key);
++void blk_crypto_evict_key(struct block_device *bdev,
++			  const struct blk_crypto_key *key);
+ 
+ bool blk_crypto_config_supported_natively(struct block_device *bdev,
+ 					  const struct blk_crypto_config *cfg);
+diff --git a/include/linux/bpf.h b/include/linux/bpf.h
+index 520b238abd5a2..5bd6ac04773aa 100644
+--- a/include/linux/bpf.h
++++ b/include/linux/bpf.h
+@@ -96,11 +96,11 @@ struct bpf_map_ops {
+ 
+ 	/* funcs callable from userspace and from eBPF programs */
+ 	void *(*map_lookup_elem)(struct bpf_map *map, void *key);
+-	int (*map_update_elem)(struct bpf_map *map, void *key, void *value, u64 flags);
+-	int (*map_delete_elem)(struct bpf_map *map, void *key);
+-	int (*map_push_elem)(struct bpf_map *map, void *value, u64 flags);
+-	int (*map_pop_elem)(struct bpf_map *map, void *value);
+-	int (*map_peek_elem)(struct bpf_map *map, void *value);
++	long (*map_update_elem)(struct bpf_map *map, void *key, void *value, u64 flags);
++	long (*map_delete_elem)(struct bpf_map *map, void *key);
++	long (*map_push_elem)(struct bpf_map *map, void *value, u64 flags);
++	long (*map_pop_elem)(struct bpf_map *map, void *value);
++	long (*map_peek_elem)(struct bpf_map *map, void *value);
+ 	void *(*map_lookup_percpu_elem)(struct bpf_map *map, void *key, u32 cpu);
+ 
+ 	/* funcs called by prog_array and perf_event_array map */
+@@ -139,7 +139,7 @@ struct bpf_map_ops {
+ 	struct bpf_local_storage __rcu ** (*map_owner_storage_ptr)(void *owner);
+ 
+ 	/* Misc helpers.*/
+-	int (*map_redirect)(struct bpf_map *map, u64 key, u64 flags);
++	long (*map_redirect)(struct bpf_map *map, u64 key, u64 flags);
+ 
+ 	/* map_meta_equal must be implemented for maps that can be
+ 	 * used as an inner map.  It is a runtime check to ensure
+@@ -157,7 +157,7 @@ struct bpf_map_ops {
+ 	int (*map_set_for_each_callback_args)(struct bpf_verifier_env *env,
+ 					      struct bpf_func_state *caller,
+ 					      struct bpf_func_state *callee);
+-	int (*map_for_each_callback)(struct bpf_map *map,
++	long (*map_for_each_callback)(struct bpf_map *map,
+ 				     bpf_callback_t callback_fn,
+ 				     void *callback_ctx, u64 flags);
+ 
+diff --git a/include/linux/bpf_mem_alloc.h b/include/linux/bpf_mem_alloc.h
+index 3e164b8efaa92..a7104af61ab4d 100644
+--- a/include/linux/bpf_mem_alloc.h
++++ b/include/linux/bpf_mem_alloc.h
+@@ -14,6 +14,13 @@ struct bpf_mem_alloc {
+ 	struct work_struct work;
+ };
+ 
++/* 'size != 0' is for bpf_mem_alloc which manages fixed-size objects.
++ * Alloc and free are done with bpf_mem_cache_{alloc,free}().
++ *
++ * 'size = 0' is for bpf_mem_alloc which manages many fixed-size objects.
++ * Alloc and free are done with bpf_mem_{alloc,free}() and the size of
++ * the returned object is given by the size argument of bpf_mem_alloc().
++ */
+ int bpf_mem_alloc_init(struct bpf_mem_alloc *ma, int size, bool percpu);
+ void bpf_mem_alloc_destroy(struct bpf_mem_alloc *ma);
+ 
+diff --git a/include/linux/filter.h b/include/linux/filter.h
+index 1727898f16413..370c96acd7098 100644
+--- a/include/linux/filter.h
++++ b/include/linux/filter.h
+@@ -1504,9 +1504,9 @@ static inline bool bpf_sk_lookup_run_v6(struct net *net, int protocol,
+ }
+ #endif /* IS_ENABLED(CONFIG_IPV6) */
+ 
+-static __always_inline int __bpf_xdp_redirect_map(struct bpf_map *map, u64 index,
+-						  u64 flags, const u64 flag_mask,
+-						  void *lookup_elem(struct bpf_map *map, u32 key))
++static __always_inline long __bpf_xdp_redirect_map(struct bpf_map *map, u64 index,
++						   u64 flags, const u64 flag_mask,
++						   void *lookup_elem(struct bpf_map *map, u32 key))
+ {
+ 	struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info);
+ 	const u64 action_mask = XDP_ABORTED | XDP_DROP | XDP_PASS | XDP_TX;
+diff --git a/include/linux/mailbox/zynqmp-ipi-message.h b/include/linux/mailbox/zynqmp-ipi-message.h
+index 35ce84c8ca02c..31d8046d945e7 100644
+--- a/include/linux/mailbox/zynqmp-ipi-message.h
++++ b/include/linux/mailbox/zynqmp-ipi-message.h
+@@ -9,7 +9,7 @@
+  * @data: message payload
+  *
+  * This is the structure for data used in mbox_send_message
+- * the maximum length of data buffer is fixed to 12 bytes.
++ * the maximum length of data buffer is fixed to 32 bytes.
+  * Client is supposed to be aware of this.
+  */
+ struct zynqmp_ipi_message {
+diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
+index 66d76e97a0876..1636d7f65630a 100644
+--- a/include/linux/mlx5/mlx5_ifc.h
++++ b/include/linux/mlx5/mlx5_ifc.h
+@@ -9271,7 +9271,8 @@ struct mlx5_ifc_alloc_flow_counter_in_bits {
+ 	u8         reserved_at_20[0x10];
+ 	u8         op_mod[0x10];
+ 
+-	u8         reserved_at_40[0x38];
++	u8         reserved_at_40[0x33];
++	u8         flow_counter_bulk_log_size[0x5];
+ 	u8         flow_counter_bulk[0x8];
+ };
+ 
+diff --git a/include/linux/netfilter/nfnetlink.h b/include/linux/netfilter/nfnetlink.h
+index 241e005f290ad..e9a9ab34a7ccc 100644
+--- a/include/linux/netfilter/nfnetlink.h
++++ b/include/linux/netfilter/nfnetlink.h
+@@ -45,7 +45,6 @@ struct nfnetlink_subsystem {
+ 	int (*commit)(struct net *net, struct sk_buff *skb);
+ 	int (*abort)(struct net *net, struct sk_buff *skb,
+ 		     enum nfnl_abort_action action);
+-	void (*cleanup)(struct net *net);
+ 	bool (*valid_genid)(struct net *net, u32 genid);
+ };
+ 
+diff --git a/include/linux/posix-timers.h b/include/linux/posix-timers.h
+index 2c6e99ca48afc..d607f51404fca 100644
+--- a/include/linux/posix-timers.h
++++ b/include/linux/posix-timers.h
+@@ -4,6 +4,7 @@
+ 
+ #include <linux/spinlock.h>
+ #include <linux/list.h>
++#include <linux/mutex.h>
+ #include <linux/alarmtimer.h>
+ #include <linux/timerqueue.h>
+ 
+@@ -62,16 +63,18 @@ static inline int clockid_to_fd(const clockid_t clk)
+  * cpu_timer - Posix CPU timer representation for k_itimer
+  * @node:	timerqueue node to queue in the task/sig
+  * @head:	timerqueue head on which this timer is queued
+- * @task:	Pointer to target task
++ * @pid:	Pointer to target task PID
+  * @elist:	List head for the expiry list
+  * @firing:	Timer is currently firing
++ * @handling:	Pointer to the task which handles expiry
+  */
+ struct cpu_timer {
+-	struct timerqueue_node	node;
+-	struct timerqueue_head	*head;
+-	struct pid		*pid;
+-	struct list_head	elist;
+-	int			firing;
++	struct timerqueue_node		node;
++	struct timerqueue_head		*head;
++	struct pid			*pid;
++	struct list_head		elist;
++	int				firing;
++	struct task_struct __rcu	*handling;
+ };
+ 
+ static inline bool cpu_timer_enqueue(struct timerqueue_head *head,
+@@ -135,10 +138,12 @@ struct posix_cputimers {
+ /**
+  * posix_cputimers_work - Container for task work based posix CPU timer expiry
+  * @work:	The task work to be scheduled
++ * @mutex:	Mutex held around expiry in context of this task work
+  * @scheduled:  @work has been scheduled already, no further processing
+  */
+ struct posix_cputimers_work {
+ 	struct callback_head	work;
++	struct mutex		mutex;
+ 	unsigned int		scheduled;
+ };
+ 
+diff --git a/include/linux/spi/spi.h b/include/linux/spi/spi.h
+index 4fa26b9a35725..509bd620a258c 100644
+--- a/include/linux/spi/spi.h
++++ b/include/linux/spi/spi.h
+@@ -266,7 +266,7 @@ static inline void *spi_get_drvdata(struct spi_device *spi)
+ 	return dev_get_drvdata(&spi->dev);
+ }
+ 
+-static inline u8 spi_get_chipselect(struct spi_device *spi, u8 idx)
++static inline u8 spi_get_chipselect(const struct spi_device *spi, u8 idx)
+ {
+ 	return spi->chip_select;
+ }
+@@ -276,7 +276,7 @@ static inline void spi_set_chipselect(struct spi_device *spi, u8 idx, u8 chipsel
+ 	spi->chip_select = chipselect;
+ }
+ 
+-static inline struct gpio_desc *spi_get_csgpiod(struct spi_device *spi, u8 idx)
++static inline struct gpio_desc *spi_get_csgpiod(const struct spi_device *spi, u8 idx)
+ {
+ 	return spi->cs_gpiod;
+ }
+diff --git a/include/linux/sunrpc/sched.h b/include/linux/sunrpc/sched.h
+index b8ca3ecaf8d76..8ada7dc802d30 100644
+--- a/include/linux/sunrpc/sched.h
++++ b/include/linux/sunrpc/sched.h
+@@ -90,8 +90,7 @@ struct rpc_task {
+ #endif
+ 	unsigned char		tk_priority : 2,/* Task priority */
+ 				tk_garb_retry : 2,
+-				tk_cred_retry : 2,
+-				tk_rebind_retry : 2;
++				tk_cred_retry : 2;
+ };
+ 
+ typedef void			(*rpc_action)(struct rpc_task *);
+diff --git a/include/linux/tick.h b/include/linux/tick.h
+index bfd571f18cfdc..9459fef5b8573 100644
+--- a/include/linux/tick.h
++++ b/include/linux/tick.h
+@@ -216,6 +216,7 @@ extern void tick_nohz_dep_set_signal(struct task_struct *tsk,
+ 				     enum tick_dep_bits bit);
+ extern void tick_nohz_dep_clear_signal(struct signal_struct *signal,
+ 				       enum tick_dep_bits bit);
++extern bool tick_nohz_cpu_hotpluggable(unsigned int cpu);
+ 
+ /*
+  * The below are tick_nohz_[set,clear]_dep() wrappers that optimize off-cases
+@@ -280,6 +281,7 @@ static inline void tick_nohz_full_add_cpus_to(struct cpumask *mask) { }
+ 
+ static inline void tick_nohz_dep_set_cpu(int cpu, enum tick_dep_bits bit) { }
+ static inline void tick_nohz_dep_clear_cpu(int cpu, enum tick_dep_bits bit) { }
++static inline bool tick_nohz_cpu_hotpluggable(unsigned int cpu) { return true; }
+ 
+ static inline void tick_dep_set(enum tick_dep_bits bit) { }
+ static inline void tick_dep_clear(enum tick_dep_bits bit) { }
+diff --git a/include/linux/vt_buffer.h b/include/linux/vt_buffer.h
+index 848db1b1569ff..919d999a8c1db 100644
+--- a/include/linux/vt_buffer.h
++++ b/include/linux/vt_buffer.h
+@@ -16,7 +16,7 @@
+ 
+ #include <linux/string.h>
+ 
+-#if defined(CONFIG_VGA_CONSOLE) || defined(CONFIG_MDA_CONSOLE)
++#if IS_ENABLED(CONFIG_VGA_CONSOLE) || IS_ENABLED(CONFIG_MDA_CONSOLE)
+ #include <asm/vga.h>
+ #endif
+ 
+diff --git a/include/net/netfilter/nf_conntrack_core.h b/include/net/netfilter/nf_conntrack_core.h
+index 71d1269fe4d4f..3384859a89210 100644
+--- a/include/net/netfilter/nf_conntrack_core.h
++++ b/include/net/netfilter/nf_conntrack_core.h
+@@ -89,7 +89,11 @@ static inline void __nf_ct_set_timeout(struct nf_conn *ct, u64 timeout)
+ {
+ 	if (timeout > INT_MAX)
+ 		timeout = INT_MAX;
+-	WRITE_ONCE(ct->timeout, nfct_time_stamp + (u32)timeout);
++
++	if (nf_ct_is_confirmed(ct))
++		WRITE_ONCE(ct->timeout, nfct_time_stamp + (u32)timeout);
++	else
++		ct->timeout = (u32)timeout;
+ }
+ 
+ int __nf_ct_change_timeout(struct nf_conn *ct, u64 cta_timeout);
+diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
+index 1b8e305bb54ae..9dace9bcba8e5 100644
+--- a/include/net/netfilter/nf_tables.h
++++ b/include/net/netfilter/nf_tables.h
+@@ -619,6 +619,7 @@ struct nft_set_binding {
+ };
+ 
+ enum nft_trans_phase;
++void nf_tables_activate_set(const struct nft_ctx *ctx, struct nft_set *set);
+ void nf_tables_deactivate_set(const struct nft_ctx *ctx, struct nft_set *set,
+ 			      struct nft_set_binding *binding,
+ 			      enum nft_trans_phase phase);
+diff --git a/include/net/scm.h b/include/net/scm.h
+index 1ce365f4c2560..585adc1346bd0 100644
+--- a/include/net/scm.h
++++ b/include/net/scm.h
+@@ -105,16 +105,27 @@ static inline void scm_passec(struct socket *sock, struct msghdr *msg, struct sc
+ 		}
+ 	}
+ }
++
++static inline bool scm_has_secdata(struct socket *sock)
++{
++	return test_bit(SOCK_PASSSEC, &sock->flags);
++}
+ #else
+ static inline void scm_passec(struct socket *sock, struct msghdr *msg, struct scm_cookie *scm)
+ { }
++
++static inline bool scm_has_secdata(struct socket *sock)
++{
++	return false;
++}
+ #endif /* CONFIG_SECURITY_NETWORK */
+ 
+ static __inline__ void scm_recv(struct socket *sock, struct msghdr *msg,
+ 				struct scm_cookie *scm, int flags)
+ {
+ 	if (!msg->msg_control) {
+-		if (test_bit(SOCK_PASSCRED, &sock->flags) || scm->fp)
++		if (test_bit(SOCK_PASSCRED, &sock->flags) || scm->fp ||
++		    scm_has_secdata(sock))
+ 			msg->msg_flags |= MSG_CTRUNC;
+ 		scm_destroy(scm);
+ 		return;
+diff --git a/include/net/xsk_buff_pool.h b/include/net/xsk_buff_pool.h
+index 3e952e5694188..d318c769b4454 100644
+--- a/include/net/xsk_buff_pool.h
++++ b/include/net/xsk_buff_pool.h
+@@ -180,13 +180,8 @@ static inline bool xp_desc_crosses_non_contig_pg(struct xsk_buff_pool *pool,
+ 	if (likely(!cross_pg))
+ 		return false;
+ 
+-	if (pool->dma_pages_cnt) {
+-		return !(pool->dma_pages[addr >> PAGE_SHIFT] &
+-			 XSK_NEXT_PG_CONTIG_MASK);
+-	}
+-
+-	/* skb path */
+-	return addr + len > pool->addrs_cnt;
++	return pool->dma_pages_cnt &&
++	       !(pool->dma_pages[addr >> PAGE_SHIFT] & XSK_NEXT_PG_CONTIG_MASK);
+ }
+ 
+ static inline u64 xp_aligned_extract_addr(struct xsk_buff_pool *pool, u64 addr)
+diff --git a/include/target/iscsi/iscsi_target_core.h b/include/target/iscsi/iscsi_target_core.h
+index 94d06ddfd80ad..229118156a1f6 100644
+--- a/include/target/iscsi/iscsi_target_core.h
++++ b/include/target/iscsi/iscsi_target_core.h
+@@ -600,6 +600,7 @@ struct iscsit_conn {
+ 	struct iscsi_tpg_np	*tpg_np;
+ 	/* Pointer to parent session */
+ 	struct iscsit_session	*sess;
++	struct target_cmd_counter *cmd_cnt;
+ 	int			bitmap_id;
+ 	int			rx_thread_active;
+ 	struct task_struct	*rx_thread;
+diff --git a/include/target/target_core_base.h b/include/target/target_core_base.h
+index 12c9ba16217ef..8cc42ad65c925 100644
+--- a/include/target/target_core_base.h
++++ b/include/target/target_core_base.h
+@@ -494,6 +494,7 @@ struct se_cmd {
+ 	struct se_lun		*se_lun;
+ 	/* Only used for internal passthrough and legacy TCM fabric modules */
+ 	struct se_session	*se_sess;
++	struct target_cmd_counter *cmd_cnt;
+ 	struct se_tmr_req	*se_tmr_req;
+ 	struct llist_node	se_cmd_list;
+ 	struct completion	*free_compl;
+@@ -619,22 +620,26 @@ static inline struct se_node_acl *fabric_stat_to_nacl(struct config_item *item)
+ 			acl_fabric_stat_group);
+ }
+ 
+-struct se_session {
++struct target_cmd_counter {
++	struct percpu_ref	refcnt;
++	wait_queue_head_t	refcnt_wq;
++	struct completion	stop_done;
+ 	atomic_t		stopped;
++};
++
++struct se_session {
+ 	u64			sess_bin_isid;
+ 	enum target_prot_op	sup_prot_ops;
+ 	enum target_prot_type	sess_prot_type;
+ 	struct se_node_acl	*se_node_acl;
+ 	struct se_portal_group *se_tpg;
+ 	void			*fabric_sess_ptr;
+-	struct percpu_ref	cmd_count;
+ 	struct list_head	sess_list;
+ 	struct list_head	sess_acl_list;
+ 	spinlock_t		sess_cmd_lock;
+-	wait_queue_head_t	cmd_count_wq;
+-	struct completion	stop_done;
+ 	void			*sess_cmd_map;
+ 	struct sbitmap_queue	sess_tag_pool;
++	struct target_cmd_counter *cmd_cnt;
+ };
+ 
+ struct se_device;
+@@ -867,6 +872,7 @@ struct se_device {
+ 	struct rcu_head		rcu_head;
+ 	int			queue_cnt;
+ 	struct se_device_queue	*queues;
++	struct mutex		lun_reset_mutex;
+ };
+ 
+ struct target_opcode_descriptor {
+diff --git a/include/target/target_core_fabric.h b/include/target/target_core_fabric.h
+index 38f0662476d14..b188b1e90e1ed 100644
+--- a/include/target/target_core_fabric.h
++++ b/include/target/target_core_fabric.h
+@@ -133,7 +133,12 @@ struct se_session *target_setup_session(struct se_portal_group *,
+ 				struct se_session *, void *));
+ void target_remove_session(struct se_session *);
+ 
+-int transport_init_session(struct se_session *se_sess);
++void target_stop_cmd_counter(struct target_cmd_counter *cmd_cnt);
++void target_wait_for_cmds(struct target_cmd_counter *cmd_cnt);
++struct target_cmd_counter *target_alloc_cmd_counter(void);
++void target_free_cmd_counter(struct target_cmd_counter *cmd_cnt);
++
++void transport_init_session(struct se_session *se_sess);
+ struct se_session *transport_alloc_session(enum target_prot_op);
+ int transport_alloc_session_tags(struct se_session *, unsigned int,
+ 		unsigned int);
+@@ -149,9 +154,11 @@ void	transport_deregister_session_configfs(struct se_session *);
+ void	transport_deregister_session(struct se_session *);
+ 
+ 
+-void	__target_init_cmd(struct se_cmd *,
+-		const struct target_core_fabric_ops *,
+-		struct se_session *, u32, int, int, unsigned char *, u64);
++void	__target_init_cmd(struct se_cmd *cmd,
++		const struct target_core_fabric_ops *tfo,
++		struct se_session *sess, u32 data_length, int data_direction,
++		int task_attr, unsigned char *sense_buffer, u64 unpacked_lun,
++		struct target_cmd_counter *cmd_cnt);
+ int	target_init_cmd(struct se_cmd *se_cmd, struct se_session *se_sess,
+ 		unsigned char *sense, u64 unpacked_lun, u32 data_length,
+ 		int task_attr, int data_dir, int flags);
+diff --git a/include/trace/events/qrtr.h b/include/trace/events/qrtr.h
+index b1de14c3bb934..441132c67133f 100644
+--- a/include/trace/events/qrtr.h
++++ b/include/trace/events/qrtr.h
+@@ -10,15 +10,16 @@
+ 
+ TRACE_EVENT(qrtr_ns_service_announce_new,
+ 
+-	TP_PROTO(__le32 service, __le32 instance, __le32 node, __le32 port),
++	TP_PROTO(unsigned int service, unsigned int instance,
++		 unsigned int node, unsigned int port),
+ 
+ 	TP_ARGS(service, instance, node, port),
+ 
+ 	TP_STRUCT__entry(
+-		__field(__le32, service)
+-		__field(__le32, instance)
+-		__field(__le32, node)
+-		__field(__le32, port)
++		__field(unsigned int, service)
++		__field(unsigned int, instance)
++		__field(unsigned int, node)
++		__field(unsigned int, port)
+ 	),
+ 
+ 	TP_fast_assign(
+@@ -36,15 +37,16 @@ TRACE_EVENT(qrtr_ns_service_announce_new,
+ 
+ TRACE_EVENT(qrtr_ns_service_announce_del,
+ 
+-	TP_PROTO(__le32 service, __le32 instance, __le32 node, __le32 port),
++	TP_PROTO(unsigned int service, unsigned int instance,
++		 unsigned int node, unsigned int port),
+ 
+ 	TP_ARGS(service, instance, node, port),
+ 
+ 	TP_STRUCT__entry(
+-		__field(__le32, service)
+-		__field(__le32, instance)
+-		__field(__le32, node)
+-		__field(__le32, port)
++		__field(unsigned int, service)
++		__field(unsigned int, instance)
++		__field(unsigned int, node)
++		__field(unsigned int, port)
+ 	),
+ 
+ 	TP_fast_assign(
+@@ -62,15 +64,16 @@ TRACE_EVENT(qrtr_ns_service_announce_del,
+ 
+ TRACE_EVENT(qrtr_ns_server_add,
+ 
+-	TP_PROTO(__le32 service, __le32 instance, __le32 node, __le32 port),
++	TP_PROTO(unsigned int service, unsigned int instance,
++		 unsigned int node, unsigned int port),
+ 
+ 	TP_ARGS(service, instance, node, port),
+ 
+ 	TP_STRUCT__entry(
+-		__field(__le32, service)
+-		__field(__le32, instance)
+-		__field(__le32, node)
+-		__field(__le32, port)
++		__field(unsigned int, service)
++		__field(unsigned int, instance)
++		__field(unsigned int, node)
++		__field(unsigned int, port)
+ 	),
+ 
+ 	TP_fast_assign(
+diff --git a/include/trace/events/timer.h b/include/trace/events/timer.h
+index 2e713a7d9aa3a..3e8619c72f774 100644
+--- a/include/trace/events/timer.h
++++ b/include/trace/events/timer.h
+@@ -371,7 +371,8 @@ TRACE_EVENT(itimer_expire,
+ 		tick_dep_name(PERF_EVENTS)		\
+ 		tick_dep_name(SCHED)			\
+ 		tick_dep_name(CLOCK_UNSTABLE)		\
+-		tick_dep_name_end(RCU)
++		tick_dep_name(RCU)			\
++		tick_dep_name_end(RCU_EXP)
+ 
+ #undef tick_dep_name
+ #undef tick_dep_mask_name
+diff --git a/include/uapi/linux/btrfs.h b/include/uapi/linux/btrfs.h
+index ada0a489bf2b6..dbb8b96da50d3 100644
+--- a/include/uapi/linux/btrfs.h
++++ b/include/uapi/linux/btrfs.h
+@@ -187,6 +187,7 @@ struct btrfs_scrub_progress {
+ };
+ 
+ #define BTRFS_SCRUB_READONLY	1
++#define BTRFS_SCRUB_SUPPORTED_FLAGS	(BTRFS_SCRUB_READONLY)
+ struct btrfs_ioctl_scrub_args {
+ 	__u64 devid;				/* in */
+ 	__u64 start;				/* in */
+diff --git a/include/uapi/linux/const.h b/include/uapi/linux/const.h
+index af2a44c08683d..a429381e7ca50 100644
+--- a/include/uapi/linux/const.h
++++ b/include/uapi/linux/const.h
+@@ -28,7 +28,7 @@
+ #define _BITUL(x)	(_UL(1) << (x))
+ #define _BITULL(x)	(_ULL(1) << (x))
+ 
+-#define __ALIGN_KERNEL(x, a)		__ALIGN_KERNEL_MASK(x, (typeof(x))(a) - 1)
++#define __ALIGN_KERNEL(x, a)		__ALIGN_KERNEL_MASK(x, (__typeof__(x))(a) - 1)
+ #define __ALIGN_KERNEL_MASK(x, mask)	(((x) + (mask)) & ~(mask))
+ 
+ #define __KERNEL_DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
+diff --git a/include/xen/xen.h b/include/xen/xen.h
+index 7adf59837c258..0efeb652f9b8f 100644
+--- a/include/xen/xen.h
++++ b/include/xen/xen.h
+@@ -71,4 +71,15 @@ static inline void xen_free_unpopulated_pages(unsigned int nr_pages,
+ }
+ #endif
+ 
++#if defined(CONFIG_XEN_DOM0) && defined(CONFIG_ACPI) && defined(CONFIG_X86)
++bool __init xen_processor_present(uint32_t acpi_id);
++#else
++#include <linux/bug.h>
++static inline bool xen_processor_present(uint32_t acpi_id)
++{
++	BUG();
++	return false;
++}
++#endif
++
+ #endif	/* _XEN_XEN_H */
+diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
+index 7a43aed8e395c..b234350e57423 100644
+--- a/io_uring/rsrc.c
++++ b/io_uring/rsrc.c
+@@ -577,7 +577,7 @@ static int __io_sqe_buffers_update(struct io_ring_ctx *ctx,
+ 		}
+ 
+ 		ctx->user_bufs[i] = imu;
+-		*io_get_tag_slot(ctx->buf_data, offset) = tag;
++		*io_get_tag_slot(ctx->buf_data, i) = tag;
+ 	}
+ 
+ 	if (needs_switch)
+@@ -1230,7 +1230,12 @@ static int io_sqe_buffer_register(struct io_ring_ctx *ctx, struct iovec *iov,
+ 	if (nr_pages > 1) {
+ 		folio = page_folio(pages[0]);
+ 		for (i = 1; i < nr_pages; i++) {
+-			if (page_folio(pages[i]) != folio) {
++			/*
++			 * Pages must be consecutive and on the same folio for
++			 * this to work
++			 */
++			if (page_folio(pages[i]) != folio ||
++			    pages[i] != pages[i - 1] + 1) {
+ 				folio = NULL;
+ 				break;
+ 			}
+diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c
+index 4847069595569..cb80bcc880b44 100644
+--- a/kernel/bpf/arraymap.c
++++ b/kernel/bpf/arraymap.c
+@@ -307,8 +307,8 @@ static int array_map_get_next_key(struct bpf_map *map, void *key, void *next_key
+ }
+ 
+ /* Called from syscall or from eBPF program */
+-static int array_map_update_elem(struct bpf_map *map, void *key, void *value,
+-				 u64 map_flags)
++static long array_map_update_elem(struct bpf_map *map, void *key, void *value,
++				  u64 map_flags)
+ {
+ 	struct bpf_array *array = container_of(map, struct bpf_array, map);
+ 	u32 index = *(u32 *)key;
+@@ -386,7 +386,7 @@ int bpf_percpu_array_update(struct bpf_map *map, void *key, void *value,
+ }
+ 
+ /* Called from syscall or from eBPF program */
+-static int array_map_delete_elem(struct bpf_map *map, void *key)
++static long array_map_delete_elem(struct bpf_map *map, void *key)
+ {
+ 	return -EINVAL;
+ }
+@@ -686,8 +686,8 @@ static const struct bpf_iter_seq_info iter_seq_info = {
+ 	.seq_priv_size		= sizeof(struct bpf_iter_seq_array_map_info),
+ };
+ 
+-static int bpf_for_each_array_elem(struct bpf_map *map, bpf_callback_t callback_fn,
+-				   void *callback_ctx, u64 flags)
++static long bpf_for_each_array_elem(struct bpf_map *map, bpf_callback_t callback_fn,
++				    void *callback_ctx, u64 flags)
+ {
+ 	u32 i, key, num_elems = 0;
+ 	struct bpf_array *array;
+@@ -847,7 +847,7 @@ int bpf_fd_array_map_update_elem(struct bpf_map *map, struct file *map_file,
+ 	return 0;
+ }
+ 
+-static int fd_array_map_delete_elem(struct bpf_map *map, void *key)
++static long fd_array_map_delete_elem(struct bpf_map *map, void *key)
+ {
+ 	struct bpf_array *array = container_of(map, struct bpf_array, map);
+ 	void *old_ptr;
+diff --git a/kernel/bpf/bloom_filter.c b/kernel/bpf/bloom_filter.c
+index 48ee750849f25..b386a8fdf28cc 100644
+--- a/kernel/bpf/bloom_filter.c
++++ b/kernel/bpf/bloom_filter.c
+@@ -41,7 +41,7 @@ static u32 hash(struct bpf_bloom_filter *bloom, void *value,
+ 	return h & bloom->bitset_mask;
+ }
+ 
+-static int bloom_map_peek_elem(struct bpf_map *map, void *value)
++static long bloom_map_peek_elem(struct bpf_map *map, void *value)
+ {
+ 	struct bpf_bloom_filter *bloom =
+ 		container_of(map, struct bpf_bloom_filter, map);
+@@ -56,7 +56,7 @@ static int bloom_map_peek_elem(struct bpf_map *map, void *value)
+ 	return 0;
+ }
+ 
+-static int bloom_map_push_elem(struct bpf_map *map, void *value, u64 flags)
++static long bloom_map_push_elem(struct bpf_map *map, void *value, u64 flags)
+ {
+ 	struct bpf_bloom_filter *bloom =
+ 		container_of(map, struct bpf_bloom_filter, map);
+@@ -73,12 +73,12 @@ static int bloom_map_push_elem(struct bpf_map *map, void *value, u64 flags)
+ 	return 0;
+ }
+ 
+-static int bloom_map_pop_elem(struct bpf_map *map, void *value)
++static long bloom_map_pop_elem(struct bpf_map *map, void *value)
+ {
+ 	return -EOPNOTSUPP;
+ }
+ 
+-static int bloom_map_delete_elem(struct bpf_map *map, void *value)
++static long bloom_map_delete_elem(struct bpf_map *map, void *value)
+ {
+ 	return -EOPNOTSUPP;
+ }
+@@ -177,8 +177,8 @@ static void *bloom_map_lookup_elem(struct bpf_map *map, void *key)
+ 	return ERR_PTR(-EINVAL);
+ }
+ 
+-static int bloom_map_update_elem(struct bpf_map *map, void *key,
+-				 void *value, u64 flags)
++static long bloom_map_update_elem(struct bpf_map *map, void *key,
++				  void *value, u64 flags)
+ {
+ 	/* The eBPF program should use map_push_elem instead */
+ 	return -EINVAL;
+diff --git a/kernel/bpf/bpf_cgrp_storage.c b/kernel/bpf/bpf_cgrp_storage.c
+index 6cdf6d9ed91df..d4a074247b64d 100644
+--- a/kernel/bpf/bpf_cgrp_storage.c
++++ b/kernel/bpf/bpf_cgrp_storage.c
+@@ -100,8 +100,8 @@ static void *bpf_cgrp_storage_lookup_elem(struct bpf_map *map, void *key)
+ 	return sdata ? sdata->data : NULL;
+ }
+ 
+-static int bpf_cgrp_storage_update_elem(struct bpf_map *map, void *key,
+-					  void *value, u64 map_flags)
++static long bpf_cgrp_storage_update_elem(struct bpf_map *map, void *key,
++					 void *value, u64 map_flags)
+ {
+ 	struct bpf_local_storage_data *sdata;
+ 	struct cgroup *cgroup;
+@@ -132,7 +132,7 @@ static int cgroup_storage_delete(struct cgroup *cgroup, struct bpf_map *map)
+ 	return 0;
+ }
+ 
+-static int bpf_cgrp_storage_delete_elem(struct bpf_map *map, void *key)
++static long bpf_cgrp_storage_delete_elem(struct bpf_map *map, void *key)
+ {
+ 	struct cgroup *cgroup;
+ 	int err, fd;
+diff --git a/kernel/bpf/bpf_inode_storage.c b/kernel/bpf/bpf_inode_storage.c
+index 05f4c66c9089f..a8afc656808b2 100644
+--- a/kernel/bpf/bpf_inode_storage.c
++++ b/kernel/bpf/bpf_inode_storage.c
+@@ -97,8 +97,8 @@ static void *bpf_fd_inode_storage_lookup_elem(struct bpf_map *map, void *key)
+ 	return sdata ? sdata->data : NULL;
+ }
+ 
+-static int bpf_fd_inode_storage_update_elem(struct bpf_map *map, void *key,
+-					 void *value, u64 map_flags)
++static long bpf_fd_inode_storage_update_elem(struct bpf_map *map, void *key,
++					     void *value, u64 map_flags)
+ {
+ 	struct bpf_local_storage_data *sdata;
+ 	struct file *f;
+@@ -133,7 +133,7 @@ static int inode_storage_delete(struct inode *inode, struct bpf_map *map)
+ 	return 0;
+ }
+ 
+-static int bpf_fd_inode_storage_delete_elem(struct bpf_map *map, void *key)
++static long bpf_fd_inode_storage_delete_elem(struct bpf_map *map, void *key)
+ {
+ 	struct file *f;
+ 	int fd, err;
+diff --git a/kernel/bpf/bpf_struct_ops.c b/kernel/bpf/bpf_struct_ops.c
+index ece9870cab68e..36c17271d38bd 100644
+--- a/kernel/bpf/bpf_struct_ops.c
++++ b/kernel/bpf/bpf_struct_ops.c
+@@ -349,8 +349,8 @@ int bpf_struct_ops_prepare_trampoline(struct bpf_tramp_links *tlinks,
+ 					   model, flags, tlinks, NULL);
+ }
+ 
+-static int bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
+-					  void *value, u64 flags)
++static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
++					   void *value, u64 flags)
+ {
+ 	struct bpf_struct_ops_map *st_map = (struct bpf_struct_ops_map *)map;
+ 	const struct bpf_struct_ops *st_ops = st_map->st_ops;
+@@ -524,7 +524,7 @@ unlock:
+ 	return err;
+ }
+ 
+-static int bpf_struct_ops_map_delete_elem(struct bpf_map *map, void *key)
++static long bpf_struct_ops_map_delete_elem(struct bpf_map *map, void *key)
+ {
+ 	enum bpf_struct_ops_state prev_state;
+ 	struct bpf_struct_ops_map *st_map;
+diff --git a/kernel/bpf/bpf_task_storage.c b/kernel/bpf/bpf_task_storage.c
+index 1e486055a523d..b29f0bf28fd15 100644
+--- a/kernel/bpf/bpf_task_storage.c
++++ b/kernel/bpf/bpf_task_storage.c
+@@ -127,8 +127,8 @@ out:
+ 	return ERR_PTR(err);
+ }
+ 
+-static int bpf_pid_task_storage_update_elem(struct bpf_map *map, void *key,
+-					    void *value, u64 map_flags)
++static long bpf_pid_task_storage_update_elem(struct bpf_map *map, void *key,
++					     void *value, u64 map_flags)
+ {
+ 	struct bpf_local_storage_data *sdata;
+ 	struct task_struct *task;
+@@ -180,7 +180,7 @@ static int task_storage_delete(struct task_struct *task, struct bpf_map *map,
+ 	return 0;
+ }
+ 
+-static int bpf_pid_task_storage_delete_elem(struct bpf_map *map, void *key)
++static long bpf_pid_task_storage_delete_elem(struct bpf_map *map, void *key)
+ {
+ 	struct task_struct *task;
+ 	unsigned int f_flags;
+diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
+index 73780748404c2..46bce5577fb48 100644
+--- a/kernel/bpf/btf.c
++++ b/kernel/bpf/btf.c
+@@ -572,8 +572,8 @@ static s32 bpf_find_btf_id(const char *name, u32 kind, struct btf **btf_p)
+ 			*btf_p = btf;
+ 			return ret;
+ 		}
+-		spin_lock_bh(&btf_idr_lock);
+ 		btf_put(btf);
++		spin_lock_bh(&btf_idr_lock);
+ 	}
+ 	spin_unlock_bh(&btf_idr_lock);
+ 	return ret;
+@@ -5891,12 +5891,8 @@ struct btf *bpf_prog_get_target_btf(const struct bpf_prog *prog)
+ 
+ static bool is_int_ptr(struct btf *btf, const struct btf_type *t)
+ {
+-	/* t comes in already as a pointer */
+-	t = btf_type_by_id(btf, t->type);
+-
+-	/* allow const */
+-	if (BTF_INFO_KIND(t->info) == BTF_KIND_CONST)
+-		t = btf_type_by_id(btf, t->type);
++	/* skip modifiers */
++	t = btf_type_skip_modifiers(btf, t->type, NULL);
+ 
+ 	return btf_type_is_int(t);
+ }
+@@ -8249,12 +8245,10 @@ check_modules:
+ 		btf_get(mod_btf);
+ 		spin_unlock_bh(&btf_idr_lock);
+ 		cands = bpf_core_add_cands(cands, mod_btf, btf_nr_types(main_btf));
+-		if (IS_ERR(cands)) {
+-			btf_put(mod_btf);
++		btf_put(mod_btf);
++		if (IS_ERR(cands))
+ 			return ERR_CAST(cands);
+-		}
+ 		spin_lock_bh(&btf_idr_lock);
+-		btf_put(mod_btf);
+ 	}
+ 	spin_unlock_bh(&btf_idr_lock);
+ 	/* cands is a pointer to kmalloced memory here if cands->cnt > 0
+diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c
+index bf2fdb33fb313..819f011f0a9cd 100644
+--- a/kernel/bpf/cgroup.c
++++ b/kernel/bpf/cgroup.c
+@@ -1921,14 +1921,17 @@ int __cgroup_bpf_run_filter_getsockopt(struct sock *sk, int level,
+ 	if (ret < 0)
+ 		goto out;
+ 
+-	if (ctx.optlen > max_optlen || ctx.optlen < 0) {
++	if (optval && (ctx.optlen > max_optlen || ctx.optlen < 0)) {
+ 		ret = -EFAULT;
+ 		goto out;
+ 	}
+ 
+ 	if (ctx.optlen != 0) {
+-		if (copy_to_user(optval, ctx.optval, ctx.optlen) ||
+-		    put_user(ctx.optlen, optlen)) {
++		if (optval && copy_to_user(optval, ctx.optval, ctx.optlen)) {
++			ret = -EFAULT;
++			goto out;
++		}
++		if (put_user(ctx.optlen, optlen)) {
+ 			ret = -EFAULT;
+ 			goto out;
+ 		}
+diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
+index d2110c1f6fa64..90b95a69acbbc 100644
+--- a/kernel/bpf/cpumap.c
++++ b/kernel/bpf/cpumap.c
+@@ -540,7 +540,7 @@ static void __cpu_map_entry_replace(struct bpf_cpu_map *cmap,
+ 	}
+ }
+ 
+-static int cpu_map_delete_elem(struct bpf_map *map, void *key)
++static long cpu_map_delete_elem(struct bpf_map *map, void *key)
+ {
+ 	struct bpf_cpu_map *cmap = container_of(map, struct bpf_cpu_map, map);
+ 	u32 key_cpu = *(u32 *)key;
+@@ -553,8 +553,8 @@ static int cpu_map_delete_elem(struct bpf_map *map, void *key)
+ 	return 0;
+ }
+ 
+-static int cpu_map_update_elem(struct bpf_map *map, void *key, void *value,
+-			       u64 map_flags)
++static long cpu_map_update_elem(struct bpf_map *map, void *key, void *value,
++				u64 map_flags)
+ {
+ 	struct bpf_cpu_map *cmap = container_of(map, struct bpf_cpu_map, map);
+ 	struct bpf_cpumap_val cpumap_value = {};
+@@ -667,7 +667,7 @@ static int cpu_map_get_next_key(struct bpf_map *map, void *key, void *next_key)
+ 	return 0;
+ }
+ 
+-static int cpu_map_redirect(struct bpf_map *map, u64 index, u64 flags)
++static long cpu_map_redirect(struct bpf_map *map, u64 index, u64 flags)
+ {
+ 	return __bpf_xdp_redirect_map(map, index, flags, 0,
+ 				      __cpu_map_lookup_elem);
+diff --git a/kernel/bpf/cpumask.c b/kernel/bpf/cpumask.c
+index 52b981512a351..c08132caca33d 100644
+--- a/kernel/bpf/cpumask.c
++++ b/kernel/bpf/cpumask.c
+@@ -9,6 +9,7 @@
+ /**
+  * struct bpf_cpumask - refcounted BPF cpumask wrapper structure
+  * @cpumask:	The actual cpumask embedded in the struct.
++ * @rcu:	The RCU head used to free the cpumask with RCU safety.
+  * @usage:	Object reference counter. When the refcount goes to 0, the
+  *		memory is released back to the BPF allocator, which provides
+  *		RCU safety.
+@@ -24,6 +25,7 @@
+  */
+ struct bpf_cpumask {
+ 	cpumask_t cpumask;
++	struct rcu_head rcu;
+ 	refcount_t usage;
+ };
+ 
+@@ -55,7 +57,7 @@ __bpf_kfunc struct bpf_cpumask *bpf_cpumask_create(void)
+ 	/* cpumask must be the first element so struct bpf_cpumask be cast to struct cpumask. */
+ 	BUILD_BUG_ON(offsetof(struct bpf_cpumask, cpumask) != 0);
+ 
+-	cpumask = bpf_mem_alloc(&bpf_cpumask_ma, sizeof(*cpumask));
++	cpumask = bpf_mem_cache_alloc(&bpf_cpumask_ma);
+ 	if (!cpumask)
+ 		return NULL;
+ 
+@@ -108,6 +110,16 @@ __bpf_kfunc struct bpf_cpumask *bpf_cpumask_kptr_get(struct bpf_cpumask **cpumas
+ 	return cpumask;
+ }
+ 
++static void cpumask_free_cb(struct rcu_head *head)
++{
++	struct bpf_cpumask *cpumask;
++
++	cpumask = container_of(head, struct bpf_cpumask, rcu);
++	migrate_disable();
++	bpf_mem_cache_free(&bpf_cpumask_ma, cpumask);
++	migrate_enable();
++}
++
+ /**
+  * bpf_cpumask_release() - Release a previously acquired BPF cpumask.
+  * @cpumask: The cpumask being released.
+@@ -121,11 +133,8 @@ __bpf_kfunc void bpf_cpumask_release(struct bpf_cpumask *cpumask)
+ 	if (!cpumask)
+ 		return;
+ 
+-	if (refcount_dec_and_test(&cpumask->usage)) {
+-		migrate_disable();
+-		bpf_mem_free(&bpf_cpumask_ma, cpumask);
+-		migrate_enable();
+-	}
++	if (refcount_dec_and_test(&cpumask->usage))
++		call_rcu(&cpumask->rcu, cpumask_free_cb);
+ }
+ 
+ /**
+@@ -468,7 +477,7 @@ static int __init cpumask_kfunc_init(void)
+ 		},
+ 	};
+ 
+-	ret = bpf_mem_alloc_init(&bpf_cpumask_ma, 0, false);
++	ret = bpf_mem_alloc_init(&bpf_cpumask_ma, sizeof(struct bpf_cpumask), false);
+ 	ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING, &cpumask_kfunc_set);
+ 	ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_STRUCT_OPS, &cpumask_kfunc_set);
+ 	return  ret ?: register_btf_id_dtor_kfuncs(cpumask_dtors,
+diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c
+index 2675fefc6cb62..38aa5d16b4d85 100644
+--- a/kernel/bpf/devmap.c
++++ b/kernel/bpf/devmap.c
+@@ -809,7 +809,7 @@ static void __dev_map_entry_free(struct rcu_head *rcu)
+ 	kfree(dev);
+ }
+ 
+-static int dev_map_delete_elem(struct bpf_map *map, void *key)
++static long dev_map_delete_elem(struct bpf_map *map, void *key)
+ {
+ 	struct bpf_dtab *dtab = container_of(map, struct bpf_dtab, map);
+ 	struct bpf_dtab_netdev *old_dev;
+@@ -824,7 +824,7 @@ static int dev_map_delete_elem(struct bpf_map *map, void *key)
+ 	return 0;
+ }
+ 
+-static int dev_map_hash_delete_elem(struct bpf_map *map, void *key)
++static long dev_map_hash_delete_elem(struct bpf_map *map, void *key)
+ {
+ 	struct bpf_dtab *dtab = container_of(map, struct bpf_dtab, map);
+ 	struct bpf_dtab_netdev *old_dev;
+@@ -895,8 +895,8 @@ err_out:
+ 	return ERR_PTR(-EINVAL);
+ }
+ 
+-static int __dev_map_update_elem(struct net *net, struct bpf_map *map,
+-				 void *key, void *value, u64 map_flags)
++static long __dev_map_update_elem(struct net *net, struct bpf_map *map,
++				  void *key, void *value, u64 map_flags)
+ {
+ 	struct bpf_dtab *dtab = container_of(map, struct bpf_dtab, map);
+ 	struct bpf_dtab_netdev *dev, *old_dev;
+@@ -935,15 +935,15 @@ static int __dev_map_update_elem(struct net *net, struct bpf_map *map,
+ 	return 0;
+ }
+ 
+-static int dev_map_update_elem(struct bpf_map *map, void *key, void *value,
+-			       u64 map_flags)
++static long dev_map_update_elem(struct bpf_map *map, void *key, void *value,
++				u64 map_flags)
+ {
+ 	return __dev_map_update_elem(current->nsproxy->net_ns,
+ 				     map, key, value, map_flags);
+ }
+ 
+-static int __dev_map_hash_update_elem(struct net *net, struct bpf_map *map,
+-				     void *key, void *value, u64 map_flags)
++static long __dev_map_hash_update_elem(struct net *net, struct bpf_map *map,
++				       void *key, void *value, u64 map_flags)
+ {
+ 	struct bpf_dtab *dtab = container_of(map, struct bpf_dtab, map);
+ 	struct bpf_dtab_netdev *dev, *old_dev;
+@@ -995,21 +995,21 @@ out_err:
+ 	return err;
+ }
+ 
+-static int dev_map_hash_update_elem(struct bpf_map *map, void *key, void *value,
+-				   u64 map_flags)
++static long dev_map_hash_update_elem(struct bpf_map *map, void *key, void *value,
++				     u64 map_flags)
+ {
+ 	return __dev_map_hash_update_elem(current->nsproxy->net_ns,
+ 					 map, key, value, map_flags);
+ }
+ 
+-static int dev_map_redirect(struct bpf_map *map, u64 ifindex, u64 flags)
++static long dev_map_redirect(struct bpf_map *map, u64 ifindex, u64 flags)
+ {
+ 	return __bpf_xdp_redirect_map(map, ifindex, flags,
+ 				      BPF_F_BROADCAST | BPF_F_EXCLUDE_INGRESS,
+ 				      __dev_map_lookup_elem);
+ }
+ 
+-static int dev_hash_map_redirect(struct bpf_map *map, u64 ifindex, u64 flags)
++static long dev_hash_map_redirect(struct bpf_map *map, u64 ifindex, u64 flags)
+ {
+ 	return __bpf_xdp_redirect_map(map, ifindex, flags,
+ 				      BPF_F_BROADCAST | BPF_F_EXCLUDE_INGRESS,
+diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
+index 5dfcb5ad0d068..90852e0e64f26 100644
+--- a/kernel/bpf/hashtab.c
++++ b/kernel/bpf/hashtab.c
+@@ -1057,8 +1057,8 @@ static int check_flags(struct bpf_htab *htab, struct htab_elem *l_old,
+ }
+ 
+ /* Called from syscall or from eBPF program */
+-static int htab_map_update_elem(struct bpf_map *map, void *key, void *value,
+-				u64 map_flags)
++static long htab_map_update_elem(struct bpf_map *map, void *key, void *value,
++				 u64 map_flags)
+ {
+ 	struct bpf_htab *htab = container_of(map, struct bpf_htab, map);
+ 	struct htab_elem *l_new = NULL, *l_old;
+@@ -1159,8 +1159,8 @@ static void htab_lru_push_free(struct bpf_htab *htab, struct htab_elem *elem)
+ 	bpf_lru_push_free(&htab->lru, &elem->lru_node);
+ }
+ 
+-static int htab_lru_map_update_elem(struct bpf_map *map, void *key, void *value,
+-				    u64 map_flags)
++static long htab_lru_map_update_elem(struct bpf_map *map, void *key, void *value,
++				     u64 map_flags)
+ {
+ 	struct bpf_htab *htab = container_of(map, struct bpf_htab, map);
+ 	struct htab_elem *l_new, *l_old = NULL;
+@@ -1226,9 +1226,9 @@ err:
+ 	return ret;
+ }
+ 
+-static int __htab_percpu_map_update_elem(struct bpf_map *map, void *key,
+-					 void *value, u64 map_flags,
+-					 bool onallcpus)
++static long __htab_percpu_map_update_elem(struct bpf_map *map, void *key,
++					  void *value, u64 map_flags,
++					  bool onallcpus)
+ {
+ 	struct bpf_htab *htab = container_of(map, struct bpf_htab, map);
+ 	struct htab_elem *l_new = NULL, *l_old;
+@@ -1281,9 +1281,9 @@ err:
+ 	return ret;
+ }
+ 
+-static int __htab_lru_percpu_map_update_elem(struct bpf_map *map, void *key,
+-					     void *value, u64 map_flags,
+-					     bool onallcpus)
++static long __htab_lru_percpu_map_update_elem(struct bpf_map *map, void *key,
++					      void *value, u64 map_flags,
++					      bool onallcpus)
+ {
+ 	struct bpf_htab *htab = container_of(map, struct bpf_htab, map);
+ 	struct htab_elem *l_new = NULL, *l_old;
+@@ -1348,21 +1348,21 @@ err:
+ 	return ret;
+ }
+ 
+-static int htab_percpu_map_update_elem(struct bpf_map *map, void *key,
+-				       void *value, u64 map_flags)
++static long htab_percpu_map_update_elem(struct bpf_map *map, void *key,
++					void *value, u64 map_flags)
+ {
+ 	return __htab_percpu_map_update_elem(map, key, value, map_flags, false);
+ }
+ 
+-static int htab_lru_percpu_map_update_elem(struct bpf_map *map, void *key,
+-					   void *value, u64 map_flags)
++static long htab_lru_percpu_map_update_elem(struct bpf_map *map, void *key,
++					    void *value, u64 map_flags)
+ {
+ 	return __htab_lru_percpu_map_update_elem(map, key, value, map_flags,
+ 						 false);
+ }
+ 
+ /* Called from syscall or from eBPF program */
+-static int htab_map_delete_elem(struct bpf_map *map, void *key)
++static long htab_map_delete_elem(struct bpf_map *map, void *key)
+ {
+ 	struct bpf_htab *htab = container_of(map, struct bpf_htab, map);
+ 	struct hlist_nulls_head *head;
+@@ -1398,7 +1398,7 @@ static int htab_map_delete_elem(struct bpf_map *map, void *key)
+ 	return ret;
+ }
+ 
+-static int htab_lru_map_delete_elem(struct bpf_map *map, void *key)
++static long htab_lru_map_delete_elem(struct bpf_map *map, void *key)
+ {
+ 	struct bpf_htab *htab = container_of(map, struct bpf_htab, map);
+ 	struct hlist_nulls_head *head;
+@@ -2119,8 +2119,8 @@ static const struct bpf_iter_seq_info iter_seq_info = {
+ 	.seq_priv_size		= sizeof(struct bpf_iter_seq_hash_map_info),
+ };
+ 
+-static int bpf_for_each_hash_elem(struct bpf_map *map, bpf_callback_t callback_fn,
+-				  void *callback_ctx, u64 flags)
++static long bpf_for_each_hash_elem(struct bpf_map *map, bpf_callback_t callback_fn,
++				   void *callback_ctx, u64 flags)
+ {
+ 	struct bpf_htab *htab = container_of(map, struct bpf_htab, map);
+ 	struct hlist_nulls_head *head;
+diff --git a/kernel/bpf/local_storage.c b/kernel/bpf/local_storage.c
+index e90d9f63edc5d..66d8ce2ab5b34 100644
+--- a/kernel/bpf/local_storage.c
++++ b/kernel/bpf/local_storage.c
+@@ -141,8 +141,8 @@ static void *cgroup_storage_lookup_elem(struct bpf_map *_map, void *key)
+ 	return &READ_ONCE(storage->buf)->data[0];
+ }
+ 
+-static int cgroup_storage_update_elem(struct bpf_map *map, void *key,
+-				      void *value, u64 flags)
++static long cgroup_storage_update_elem(struct bpf_map *map, void *key,
++				       void *value, u64 flags)
+ {
+ 	struct bpf_cgroup_storage *storage;
+ 	struct bpf_storage_buffer *new;
+@@ -348,7 +348,7 @@ static void cgroup_storage_map_free(struct bpf_map *_map)
+ 	bpf_map_area_free(map);
+ }
+ 
+-static int cgroup_storage_delete_elem(struct bpf_map *map, void *key)
++static long cgroup_storage_delete_elem(struct bpf_map *map, void *key)
+ {
+ 	return -EINVAL;
+ }
+diff --git a/kernel/bpf/lpm_trie.c b/kernel/bpf/lpm_trie.c
+index d833496e9e426..27980506fc7d5 100644
+--- a/kernel/bpf/lpm_trie.c
++++ b/kernel/bpf/lpm_trie.c
+@@ -300,8 +300,8 @@ static struct lpm_trie_node *lpm_trie_node_alloc(const struct lpm_trie *trie,
+ }
+ 
+ /* Called from syscall or from eBPF program */
+-static int trie_update_elem(struct bpf_map *map,
+-			    void *_key, void *value, u64 flags)
++static long trie_update_elem(struct bpf_map *map,
++			     void *_key, void *value, u64 flags)
+ {
+ 	struct lpm_trie *trie = container_of(map, struct lpm_trie, map);
+ 	struct lpm_trie_node *node, *im_node = NULL, *new_node = NULL;
+@@ -431,7 +431,7 @@ out:
+ }
+ 
+ /* Called from syscall or from eBPF program */
+-static int trie_delete_elem(struct bpf_map *map, void *_key)
++static long trie_delete_elem(struct bpf_map *map, void *_key)
+ {
+ 	struct lpm_trie *trie = container_of(map, struct lpm_trie, map);
+ 	struct bpf_lpm_trie_key *key = _key;
+diff --git a/kernel/bpf/queue_stack_maps.c b/kernel/bpf/queue_stack_maps.c
+index 8a5e060de63bc..80f958ff63966 100644
+--- a/kernel/bpf/queue_stack_maps.c
++++ b/kernel/bpf/queue_stack_maps.c
+@@ -95,7 +95,7 @@ static void queue_stack_map_free(struct bpf_map *map)
+ 	bpf_map_area_free(qs);
+ }
+ 
+-static int __queue_map_get(struct bpf_map *map, void *value, bool delete)
++static long __queue_map_get(struct bpf_map *map, void *value, bool delete)
+ {
+ 	struct bpf_queue_stack *qs = bpf_queue_stack(map);
+ 	unsigned long flags;
+@@ -124,7 +124,7 @@ out:
+ }
+ 
+ 
+-static int __stack_map_get(struct bpf_map *map, void *value, bool delete)
++static long __stack_map_get(struct bpf_map *map, void *value, bool delete)
+ {
+ 	struct bpf_queue_stack *qs = bpf_queue_stack(map);
+ 	unsigned long flags;
+@@ -156,32 +156,32 @@ out:
+ }
+ 
+ /* Called from syscall or from eBPF program */
+-static int queue_map_peek_elem(struct bpf_map *map, void *value)
++static long queue_map_peek_elem(struct bpf_map *map, void *value)
+ {
+ 	return __queue_map_get(map, value, false);
+ }
+ 
+ /* Called from syscall or from eBPF program */
+-static int stack_map_peek_elem(struct bpf_map *map, void *value)
++static long stack_map_peek_elem(struct bpf_map *map, void *value)
+ {
+ 	return __stack_map_get(map, value, false);
+ }
+ 
+ /* Called from syscall or from eBPF program */
+-static int queue_map_pop_elem(struct bpf_map *map, void *value)
++static long queue_map_pop_elem(struct bpf_map *map, void *value)
+ {
+ 	return __queue_map_get(map, value, true);
+ }
+ 
+ /* Called from syscall or from eBPF program */
+-static int stack_map_pop_elem(struct bpf_map *map, void *value)
++static long stack_map_pop_elem(struct bpf_map *map, void *value)
+ {
+ 	return __stack_map_get(map, value, true);
+ }
+ 
+ /* Called from syscall or from eBPF program */
+-static int queue_stack_map_push_elem(struct bpf_map *map, void *value,
+-				     u64 flags)
++static long queue_stack_map_push_elem(struct bpf_map *map, void *value,
++				      u64 flags)
+ {
+ 	struct bpf_queue_stack *qs = bpf_queue_stack(map);
+ 	unsigned long irq_flags;
+@@ -227,14 +227,14 @@ static void *queue_stack_map_lookup_elem(struct bpf_map *map, void *key)
+ }
+ 
+ /* Called from syscall or from eBPF program */
+-static int queue_stack_map_update_elem(struct bpf_map *map, void *key,
+-				       void *value, u64 flags)
++static long queue_stack_map_update_elem(struct bpf_map *map, void *key,
++					void *value, u64 flags)
+ {
+ 	return -EINVAL;
+ }
+ 
+ /* Called from syscall or from eBPF program */
+-static int queue_stack_map_delete_elem(struct bpf_map *map, void *key)
++static long queue_stack_map_delete_elem(struct bpf_map *map, void *key)
+ {
+ 	return -EINVAL;
+ }
+diff --git a/kernel/bpf/reuseport_array.c b/kernel/bpf/reuseport_array.c
+index 82c61612f382a..d1188b0350914 100644
+--- a/kernel/bpf/reuseport_array.c
++++ b/kernel/bpf/reuseport_array.c
+@@ -59,7 +59,7 @@ static void *reuseport_array_lookup_elem(struct bpf_map *map, void *key)
+ }
+ 
+ /* Called from syscall only */
+-static int reuseport_array_delete_elem(struct bpf_map *map, void *key)
++static long reuseport_array_delete_elem(struct bpf_map *map, void *key)
+ {
+ 	struct reuseport_array *array = reuseport_array(map);
+ 	u32 index = *(u32 *)key;
+diff --git a/kernel/bpf/ringbuf.c b/kernel/bpf/ringbuf.c
+index 8732e0aadf366..f9bbc254cd449 100644
+--- a/kernel/bpf/ringbuf.c
++++ b/kernel/bpf/ringbuf.c
+@@ -241,13 +241,13 @@ static void *ringbuf_map_lookup_elem(struct bpf_map *map, void *key)
+ 	return ERR_PTR(-ENOTSUPP);
+ }
+ 
+-static int ringbuf_map_update_elem(struct bpf_map *map, void *key, void *value,
+-				   u64 flags)
++static long ringbuf_map_update_elem(struct bpf_map *map, void *key, void *value,
++				    u64 flags)
+ {
+ 	return -ENOTSUPP;
+ }
+ 
+-static int ringbuf_map_delete_elem(struct bpf_map *map, void *key)
++static long ringbuf_map_delete_elem(struct bpf_map *map, void *key)
+ {
+ 	return -ENOTSUPP;
+ }
+diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
+index aecea7451b610..496ce1695dd89 100644
+--- a/kernel/bpf/stackmap.c
++++ b/kernel/bpf/stackmap.c
+@@ -618,14 +618,14 @@ static int stack_map_get_next_key(struct bpf_map *map, void *key,
+ 	return 0;
+ }
+ 
+-static int stack_map_update_elem(struct bpf_map *map, void *key, void *value,
+-				 u64 map_flags)
++static long stack_map_update_elem(struct bpf_map *map, void *key, void *value,
++				  u64 map_flags)
+ {
+ 	return -EINVAL;
+ }
+ 
+ /* Called from syscall or from eBPF program */
+-static int stack_map_delete_elem(struct bpf_map *map, void *key)
++static long stack_map_delete_elem(struct bpf_map *map, void *key)
+ {
+ 	struct bpf_stack_map *smap = container_of(map, struct bpf_stack_map, map);
+ 	struct stack_map_bucket *old_bucket;
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 767e8930b0bd5..64600acbb4e76 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -1823,9 +1823,9 @@ static void __reg_bound_offset(struct bpf_reg_state *reg)
+ 	struct tnum var64_off = tnum_intersect(reg->var_off,
+ 					       tnum_range(reg->umin_value,
+ 							  reg->umax_value));
+-	struct tnum var32_off = tnum_intersect(tnum_subreg(reg->var_off),
+-						tnum_range(reg->u32_min_value,
+-							   reg->u32_max_value));
++	struct tnum var32_off = tnum_intersect(tnum_subreg(var64_off),
++					       tnum_range(reg->u32_min_value,
++							  reg->u32_max_value));
+ 
+ 	reg->var_off = tnum_or(tnum_clear_subreg(var64_off), var32_off);
+ }
+@@ -3977,17 +3977,13 @@ static int check_stack_read(struct bpf_verifier_env *env,
+ 	}
+ 	/* Variable offset is prohibited for unprivileged mode for simplicity
+ 	 * since it requires corresponding support in Spectre masking for stack
+-	 * ALU. See also retrieve_ptr_limit().
++	 * ALU. See also retrieve_ptr_limit(). The check in
++	 * check_stack_access_for_ptr_arithmetic() called by
++	 * adjust_ptr_min_max_vals() prevents users from creating stack pointers
++	 * with variable offsets, therefore no check is required here. Further,
++	 * just checking it here would be insufficient as speculative stack
++	 * writes could still lead to unsafe speculative behaviour.
+ 	 */
+-	if (!env->bypass_spec_v1 && var_off) {
+-		char tn_buf[48];
+-
+-		tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off);
+-		verbose(env, "R%d variable offset stack access prohibited for !root, var_off=%s\n",
+-				ptr_regno, tn_buf);
+-		return -EACCES;
+-	}
+-
+ 	if (!var_off) {
+ 		off += reg->var_off.value;
+ 		err = check_stack_read_fixed_off(env, state, off, size,
+@@ -9792,24 +9788,21 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
+ 	return 0;
+ }
+ 
+-static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
+-			    int *insn_idx_p)
++static int fetch_kfunc_meta(struct bpf_verifier_env *env,
++			    struct bpf_insn *insn,
++			    struct bpf_kfunc_call_arg_meta *meta,
++			    const char **kfunc_name)
+ {
+-	const struct btf_type *t, *func, *func_proto, *ptr_type;
+-	u32 i, nargs, func_id, ptr_type_id, release_ref_obj_id;
+-	struct bpf_reg_state *regs = cur_regs(env);
+-	const char *func_name, *ptr_type_name;
+-	bool sleepable, rcu_lock, rcu_unlock;
+-	struct bpf_kfunc_call_arg_meta meta;
+-	int err, insn_idx = *insn_idx_p;
+-	const struct btf_param *args;
+-	const struct btf_type *ret_t;
++	const struct btf_type *func, *func_proto;
++	u32 func_id, *kfunc_flags;
++	const char *func_name;
+ 	struct btf *desc_btf;
+-	u32 *kfunc_flags;
+ 
+-	/* skip for now, but return error when we find this in fixup_kfunc_call */
++	if (kfunc_name)
++		*kfunc_name = NULL;
++
+ 	if (!insn->imm)
+-		return 0;
++		return -EINVAL;
+ 
+ 	desc_btf = find_kfunc_desc_btf(env, insn->off);
+ 	if (IS_ERR(desc_btf))
+@@ -9818,22 +9811,51 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
+ 	func_id = insn->imm;
+ 	func = btf_type_by_id(desc_btf, func_id);
+ 	func_name = btf_name_by_offset(desc_btf, func->name_off);
++	if (kfunc_name)
++		*kfunc_name = func_name;
+ 	func_proto = btf_type_by_id(desc_btf, func->type);
+ 
+ 	kfunc_flags = btf_kfunc_id_set_contains(desc_btf, resolve_prog_type(env->prog), func_id);
+ 	if (!kfunc_flags) {
+-		verbose(env, "calling kernel function %s is not allowed\n",
+-			func_name);
+ 		return -EACCES;
+ 	}
+ 
+-	/* Prepare kfunc call metadata */
+-	memset(&meta, 0, sizeof(meta));
+-	meta.btf = desc_btf;
+-	meta.func_id = func_id;
+-	meta.kfunc_flags = *kfunc_flags;
+-	meta.func_proto = func_proto;
+-	meta.func_name = func_name;
++	memset(meta, 0, sizeof(*meta));
++	meta->btf = desc_btf;
++	meta->func_id = func_id;
++	meta->kfunc_flags = *kfunc_flags;
++	meta->func_proto = func_proto;
++	meta->func_name = func_name;
++
++	return 0;
++}
++
++static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
++			    int *insn_idx_p)
++{
++	const struct btf_type *t, *ptr_type;
++	u32 i, nargs, ptr_type_id, release_ref_obj_id;
++	struct bpf_reg_state *regs = cur_regs(env);
++	const char *func_name, *ptr_type_name;
++	bool sleepable, rcu_lock, rcu_unlock;
++	struct bpf_kfunc_call_arg_meta meta;
++	struct bpf_insn_aux_data *insn_aux;
++	int err, insn_idx = *insn_idx_p;
++	const struct btf_param *args;
++	const struct btf_type *ret_t;
++	struct btf *desc_btf;
++
++	/* skip for now, but return error when we find this in fixup_kfunc_call */
++	if (!insn->imm)
++		return 0;
++
++	err = fetch_kfunc_meta(env, insn, &meta, &func_name);
++	if (err == -EACCES && func_name)
++		verbose(env, "calling kernel function %s is not allowed\n", func_name);
++	if (err)
++		return err;
++	desc_btf = meta.btf;
++	insn_aux = &env->insn_aux_data[insn_idx];
+ 
+ 	if (is_kfunc_destructive(&meta) && !capable(CAP_SYS_BOOT)) {
+ 		verbose(env, "destructive kfunc calls require CAP_SYS_BOOT capability\n");
+@@ -9890,7 +9912,7 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
+ 		err = release_reference(env, regs[meta.release_regno].ref_obj_id);
+ 		if (err) {
+ 			verbose(env, "kfunc %s#%d reference has not been acquired before\n",
+-				func_name, func_id);
++				func_name, meta.func_id);
+ 			return err;
+ 		}
+ 	}
+@@ -9902,14 +9924,14 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
+ 		err = ref_convert_owning_non_owning(env, release_ref_obj_id);
+ 		if (err) {
+ 			verbose(env, "kfunc %s#%d conversion of owning ref to non-owning failed\n",
+-				func_name, func_id);
++				func_name, meta.func_id);
+ 			return err;
+ 		}
+ 
+ 		err = release_reference(env, release_ref_obj_id);
+ 		if (err) {
+ 			verbose(env, "kfunc %s#%d reference has not been acquired before\n",
+-				func_name, func_id);
++				func_name, meta.func_id);
+ 			return err;
+ 		}
+ 	}
+@@ -9919,7 +9941,7 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
+ 					set_rbtree_add_callback_state);
+ 		if (err) {
+ 			verbose(env, "kfunc %s#%d failed callback verification\n",
+-				func_name, func_id);
++				func_name, meta.func_id);
+ 			return err;
+ 		}
+ 	}
+@@ -9928,7 +9950,7 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
+ 		mark_reg_not_init(env, regs, caller_saved[i]);
+ 
+ 	/* Check return type */
+-	t = btf_type_skip_modifiers(desc_btf, func_proto->type, NULL);
++	t = btf_type_skip_modifiers(desc_btf, meta.func_proto->type, NULL);
+ 
+ 	if (is_kfunc_acquire(&meta) && !btf_type_is_struct_ptr(meta.btf, t)) {
+ 		/* Only exception is bpf_obj_new_impl */
+@@ -9977,13 +9999,9 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
+ 				regs[BPF_REG_0].btf = ret_btf;
+ 				regs[BPF_REG_0].btf_id = ret_btf_id;
+ 
+-				env->insn_aux_data[insn_idx].obj_new_size = ret_t->size;
+-				env->insn_aux_data[insn_idx].kptr_struct_meta =
++				insn_aux->obj_new_size = ret_t->size;
++				insn_aux->kptr_struct_meta =
+ 					btf_find_struct_meta(ret_btf, ret_btf_id);
+-			} else if (meta.func_id == special_kfunc_list[KF_bpf_obj_drop_impl]) {
+-				env->insn_aux_data[insn_idx].kptr_struct_meta =
+-					btf_find_struct_meta(meta.arg_obj_drop.btf,
+-							     meta.arg_obj_drop.btf_id);
+ 			} else if (meta.func_id == special_kfunc_list[KF_bpf_list_pop_front] ||
+ 				   meta.func_id == special_kfunc_list[KF_bpf_list_pop_back]) {
+ 				struct btf_field *field = meta.arg_list_head.field;
+@@ -10068,10 +10086,18 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
+ 
+ 		if (reg_may_point_to_spin_lock(&regs[BPF_REG_0]) && !regs[BPF_REG_0].id)
+ 			regs[BPF_REG_0].id = ++env->id_gen;
+-	} /* else { add_kfunc_call() ensures it is btf_type_is_void(t) } */
++	} else if (btf_type_is_void(t)) {
++		if (meta.btf == btf_vmlinux && btf_id_set_contains(&special_kfunc_set, meta.func_id)) {
++			if (meta.func_id == special_kfunc_list[KF_bpf_obj_drop_impl]) {
++				insn_aux->kptr_struct_meta =
++					btf_find_struct_meta(meta.arg_obj_drop.btf,
++							     meta.arg_obj_drop.btf_id);
++			}
++		}
++	}
+ 
+-	nargs = btf_type_vlen(func_proto);
+-	args = (const struct btf_param *)(func_proto + 1);
++	nargs = btf_type_vlen(meta.func_proto);
++	args = (const struct btf_param *)(meta.func_proto + 1);
+ 	for (i = 0; i < nargs; i++) {
+ 		u32 regno = i + 1;
+ 
+@@ -14225,10 +14251,11 @@ static int propagate_precision(struct bpf_verifier_env *env,
+ 		state_reg = state->regs;
+ 		for (i = 0; i < BPF_REG_FP; i++, state_reg++) {
+ 			if (state_reg->type != SCALAR_VALUE ||
+-			    !state_reg->precise)
++			    !state_reg->precise ||
++			    !(state_reg->live & REG_LIVE_READ))
+ 				continue;
+ 			if (env->log.level & BPF_LOG_LEVEL2)
+-				verbose(env, "frame %d: propagating r%d\n", i, fr);
++				verbose(env, "frame %d: propagating r%d\n", fr, i);
+ 			err = mark_chain_precision_frame(env, fr, i);
+ 			if (err < 0)
+ 				return err;
+@@ -14239,11 +14266,12 @@ static int propagate_precision(struct bpf_verifier_env *env,
+ 				continue;
+ 			state_reg = &state->stack[i].spilled_ptr;
+ 			if (state_reg->type != SCALAR_VALUE ||
+-			    !state_reg->precise)
++			    !state_reg->precise ||
++			    !(state_reg->live & REG_LIVE_READ))
+ 				continue;
+ 			if (env->log.level & BPF_LOG_LEVEL2)
+ 				verbose(env, "frame %d: propagating fp%d\n",
+-					(-i - 1) * BPF_REG_SIZE, fr);
++					fr, (-i - 1) * BPF_REG_SIZE);
+ 			err = mark_chain_precision_stack_frame(env, fr, i);
+ 			if (err < 0)
+ 				return err;
+@@ -16680,21 +16708,21 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
+ 			BUILD_BUG_ON(!__same_type(ops->map_lookup_elem,
+ 				     (void *(*)(struct bpf_map *map, void *key))NULL));
+ 			BUILD_BUG_ON(!__same_type(ops->map_delete_elem,
+-				     (int (*)(struct bpf_map *map, void *key))NULL));
++				     (long (*)(struct bpf_map *map, void *key))NULL));
+ 			BUILD_BUG_ON(!__same_type(ops->map_update_elem,
+-				     (int (*)(struct bpf_map *map, void *key, void *value,
++				     (long (*)(struct bpf_map *map, void *key, void *value,
+ 					      u64 flags))NULL));
+ 			BUILD_BUG_ON(!__same_type(ops->map_push_elem,
+-				     (int (*)(struct bpf_map *map, void *value,
++				     (long (*)(struct bpf_map *map, void *value,
+ 					      u64 flags))NULL));
+ 			BUILD_BUG_ON(!__same_type(ops->map_pop_elem,
+-				     (int (*)(struct bpf_map *map, void *value))NULL));
++				     (long (*)(struct bpf_map *map, void *value))NULL));
+ 			BUILD_BUG_ON(!__same_type(ops->map_peek_elem,
+-				     (int (*)(struct bpf_map *map, void *value))NULL));
++				     (long (*)(struct bpf_map *map, void *value))NULL));
+ 			BUILD_BUG_ON(!__same_type(ops->map_redirect,
+-				     (int (*)(struct bpf_map *map, u64 index, u64 flags))NULL));
++				     (long (*)(struct bpf_map *map, u64 index, u64 flags))NULL));
+ 			BUILD_BUG_ON(!__same_type(ops->map_for_each_callback,
+-				     (int (*)(struct bpf_map *map,
++				     (long (*)(struct bpf_map *map,
+ 					      bpf_callback_t callback_fn,
+ 					      void *callback_ctx,
+ 					      u64 flags))NULL));
+diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
+index dac42a2ad5883..09d2f4877d0f4 100644
+--- a/kernel/dma/swiotlb.c
++++ b/kernel/dma/swiotlb.c
+@@ -930,7 +930,9 @@ EXPORT_SYMBOL_GPL(is_swiotlb_active);
+ 
+ static int io_tlb_used_get(void *data, u64 *val)
+ {
+-	*val = mem_used(&io_tlb_default_mem);
++	struct io_tlb_mem *mem = data;
++
++	*val = mem_used(mem);
+ 	return 0;
+ }
+ DEFINE_DEBUGFS_ATTRIBUTE(fops_io_tlb_used, io_tlb_used_get, NULL, "%llu\n");
+@@ -943,7 +945,7 @@ static void swiotlb_create_debugfs_files(struct io_tlb_mem *mem,
+ 		return;
+ 
+ 	debugfs_create_ulong("io_tlb_nslabs", 0400, mem->debugfs, &mem->nslabs);
+-	debugfs_create_file("io_tlb_used", 0400, mem->debugfs, NULL,
++	debugfs_create_file("io_tlb_used", 0400, mem->debugfs, mem,
+ 			&fops_io_tlb_used);
+ }
+ 
+@@ -998,6 +1000,11 @@ static int rmem_swiotlb_device_init(struct reserved_mem *rmem,
+ 	/* Set Per-device io tlb area to one */
+ 	unsigned int nareas = 1;
+ 
++	if (PageHighMem(pfn_to_page(PHYS_PFN(rmem->base)))) {
++		dev_err(dev, "Restricted DMA pool must be accessible within the linear mapping.");
++		return -EINVAL;
++	}
++
+ 	/*
+ 	 * Since multiple devices can share the same pool, the private data,
+ 	 * io_tlb_mem struct, will be initialized by the first device attached
+@@ -1059,11 +1066,6 @@ static int __init rmem_swiotlb_setup(struct reserved_mem *rmem)
+ 	    of_get_flat_dt_prop(node, "no-map", NULL))
+ 		return -EINVAL;
+ 
+-	if (PageHighMem(pfn_to_page(PHYS_PFN(rmem->base)))) {
+-		pr_err("Restricted DMA pool must be accessible within the linear mapping.");
+-		return -EINVAL;
+-	}
+-
+ 	rmem->ops = &rmem_swiotlb_ops;
+ 	pr_info("Reserved memory: created restricted DMA pool at %pa, size %ld MiB\n",
+ 		&rmem->base, (unsigned long)rmem->size / SZ_1M);
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index 435815d3be3f8..68baa8194d9f8 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -9433,8 +9433,8 @@ __perf_event_account_interrupt(struct perf_event *event, int throttle)
+ 		hwc->interrupts = 1;
+ 	} else {
+ 		hwc->interrupts++;
+-		if (unlikely(throttle
+-			     && hwc->interrupts >= max_samples_per_tick)) {
++		if (unlikely(throttle &&
++			     hwc->interrupts > max_samples_per_tick)) {
+ 			__this_cpu_inc(perf_throttled_count);
+ 			tick_dep_set_cpu(smp_processor_id(), TICK_DEP_BIT_PERF_EVENTS);
+ 			hwc->interrupts = MAX_INTERRUPTS;
+diff --git a/kernel/kcsan/core.c b/kernel/kcsan/core.c
+index 54d077e1a2dc7..5a60cc52adc0c 100644
+--- a/kernel/kcsan/core.c
++++ b/kernel/kcsan/core.c
+@@ -337,11 +337,20 @@ static void delay_access(int type)
+  */
+ static __always_inline u64 read_instrumented_memory(const volatile void *ptr, size_t size)
+ {
++	/*
++	 * In the below we don't necessarily need the read of the location to
++	 * be atomic, and we don't use READ_ONCE(), since all we need for race
++	 * detection is to observe 2 different values.
++	 *
++	 * Furthermore, on certain architectures (such as arm64), READ_ONCE()
++	 * may turn into more complex instructions than a plain load that cannot
++	 * do unaligned accesses.
++	 */
+ 	switch (size) {
+-	case 1:  return READ_ONCE(*(const u8 *)ptr);
+-	case 2:  return READ_ONCE(*(const u16 *)ptr);
+-	case 4:  return READ_ONCE(*(const u32 *)ptr);
+-	case 8:  return READ_ONCE(*(const u64 *)ptr);
++	case 1:  return *(const volatile u8 *)ptr;
++	case 2:  return *(const volatile u16 *)ptr;
++	case 4:  return *(const volatile u32 *)ptr;
++	case 8:  return *(const volatile u64 *)ptr;
+ 	default: return 0; /* Ignore; we do not diff the values. */
+ 	}
+ }
+diff --git a/kernel/kheaders.c b/kernel/kheaders.c
+index 8f69772af77b4..42163c9e94e55 100644
+--- a/kernel/kheaders.c
++++ b/kernel/kheaders.c
+@@ -26,15 +26,15 @@ asm (
+ "	.popsection				\n"
+ );
+ 
+-extern char kernel_headers_data;
+-extern char kernel_headers_data_end;
++extern char kernel_headers_data[];
++extern char kernel_headers_data_end[];
+ 
+ static ssize_t
+ ikheaders_read(struct file *file,  struct kobject *kobj,
+ 	       struct bin_attribute *bin_attr,
+ 	       char *buf, loff_t off, size_t len)
+ {
+-	memcpy(buf, &kernel_headers_data + off, len);
++	memcpy(buf, &kernel_headers_data[off], len);
+ 	return len;
+ }
+ 
+@@ -48,8 +48,8 @@ static struct bin_attribute kheaders_attr __ro_after_init = {
+ 
+ static int __init ikheaders_init(void)
+ {
+-	kheaders_attr.size = (&kernel_headers_data_end -
+-			      &kernel_headers_data);
++	kheaders_attr.size = (kernel_headers_data_end -
++			      kernel_headers_data);
+ 	return sysfs_create_bin_file(kernel_kobj, &kheaders_attr);
+ }
+ 
+diff --git a/kernel/module/decompress.c b/kernel/module/decompress.c
+index bb79ac1a6d8f7..7ddc87bee2741 100644
+--- a/kernel/module/decompress.c
++++ b/kernel/module/decompress.c
+@@ -267,7 +267,7 @@ static ssize_t module_zstd_decompress(struct load_info *info,
+ 		zstd_dec.size = PAGE_SIZE;
+ 
+ 		ret = zstd_decompress_stream(dstream, &zstd_dec, &zstd_buf);
+-		kunmap(page);
++		kunmap_local(zstd_dec.dst);
+ 		retval = zstd_get_error_code(ret);
+ 		if (retval)
+ 			break;
+diff --git a/kernel/power/hibernate.c b/kernel/power/hibernate.c
+index 793c55a2becba..30d1274f03f62 100644
+--- a/kernel/power/hibernate.c
++++ b/kernel/power/hibernate.c
+@@ -64,6 +64,7 @@ enum {
+ static int hibernation_mode = HIBERNATION_SHUTDOWN;
+ 
+ bool freezer_test_done;
++bool snapshot_test;
+ 
+ static const struct platform_hibernation_ops *hibernation_ops;
+ 
+@@ -687,18 +688,22 @@ static int load_image_and_restore(void)
+ {
+ 	int error;
+ 	unsigned int flags;
++	fmode_t mode = FMODE_READ;
++
++	if (snapshot_test)
++		mode |= FMODE_EXCL;
+ 
+ 	pm_pr_dbg("Loading hibernation image.\n");
+ 
+ 	lock_device_hotplug();
+ 	error = create_basic_memory_bitmaps();
+ 	if (error) {
+-		swsusp_close(FMODE_READ | FMODE_EXCL);
++		swsusp_close(mode);
+ 		goto Unlock;
+ 	}
+ 
+ 	error = swsusp_read(&flags);
+-	swsusp_close(FMODE_READ | FMODE_EXCL);
++	swsusp_close(mode);
+ 	if (!error)
+ 		error = hibernation_restore(flags & SF_PLATFORM_MODE);
+ 
+@@ -716,7 +721,6 @@ static int load_image_and_restore(void)
+  */
+ int hibernate(void)
+ {
+-	bool snapshot_test = false;
+ 	unsigned int sleep_flags;
+ 	int error;
+ 
+@@ -744,6 +748,9 @@ int hibernate(void)
+ 	if (error)
+ 		goto Exit;
+ 
++	/* protected by system_transition_mutex */
++	snapshot_test = false;
++
+ 	lock_device_hotplug();
+ 	/* Allocate memory management structures */
+ 	error = create_basic_memory_bitmaps();
+@@ -940,6 +947,8 @@ static int software_resume(void)
+ 	 */
+ 	mutex_lock_nested(&system_transition_mutex, SINGLE_DEPTH_NESTING);
+ 
++	snapshot_test = false;
++
+ 	if (swsusp_resume_device)
+ 		goto Check_image;
+ 
+diff --git a/kernel/power/power.h b/kernel/power/power.h
+index b4f4339432096..b83c8d5e188de 100644
+--- a/kernel/power/power.h
++++ b/kernel/power/power.h
+@@ -59,6 +59,7 @@ asmlinkage int swsusp_save(void);
+ 
+ /* kernel/power/hibernate.c */
+ extern bool freezer_test_done;
++extern bool snapshot_test;
+ 
+ extern int hibernation_snapshot(int platform_mode);
+ extern int hibernation_restore(int platform_mode);
+diff --git a/kernel/power/swap.c b/kernel/power/swap.c
+index 36a1df48280c3..92e41ed292ada 100644
+--- a/kernel/power/swap.c
++++ b/kernel/power/swap.c
+@@ -1518,9 +1518,13 @@ int swsusp_check(void)
+ {
+ 	int error;
+ 	void *holder;
++	fmode_t mode = FMODE_READ;
++
++	if (snapshot_test)
++		mode |= FMODE_EXCL;
+ 
+ 	hib_resume_bdev = blkdev_get_by_dev(swsusp_resume_device,
+-					    FMODE_READ | FMODE_EXCL, &holder);
++					    mode, &holder);
+ 	if (!IS_ERR(hib_resume_bdev)) {
+ 		set_blocksize(hib_resume_bdev, PAGE_SIZE);
+ 		clear_page(swsusp_header);
+@@ -1547,7 +1551,7 @@ int swsusp_check(void)
+ 
+ put:
+ 		if (error)
+-			blkdev_put(hib_resume_bdev, FMODE_READ | FMODE_EXCL);
++			blkdev_put(hib_resume_bdev, mode);
+ 		else
+ 			pr_debug("Image signature found, resuming\n");
+ 	} else {
+diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
+index 7b95ee98a1a53..a565dc5c54440 100644
+--- a/kernel/rcu/tree.c
++++ b/kernel/rcu/tree.c
+@@ -640,6 +640,7 @@ void __rcu_irq_enter_check_tick(void)
+ 	}
+ 	raw_spin_unlock_rcu_node(rdp->mynode);
+ }
++NOKPROBE_SYMBOL(__rcu_irq_enter_check_tick);
+ #endif /* CONFIG_NO_HZ_FULL */
+ 
+ /*
+diff --git a/kernel/relay.c b/kernel/relay.c
+index 9aa70ae53d24d..a80fa01042e98 100644
+--- a/kernel/relay.c
++++ b/kernel/relay.c
+@@ -989,7 +989,8 @@ static size_t relay_file_read_start_pos(struct rchan_buf *buf)
+ 	size_t subbuf_size = buf->chan->subbuf_size;
+ 	size_t n_subbufs = buf->chan->n_subbufs;
+ 	size_t consumed = buf->subbufs_consumed % n_subbufs;
+-	size_t read_pos = consumed * subbuf_size + buf->bytes_consumed;
++	size_t read_pos = (consumed * subbuf_size + buf->bytes_consumed)
++			% (n_subbufs * subbuf_size);
+ 
+ 	read_subbuf = read_pos / subbuf_size;
+ 	padding = buf->padding[read_subbuf];
+diff --git a/kernel/sched/clock.c b/kernel/sched/clock.c
+index 5732fa75ebab2..b5cc2b53464de 100644
+--- a/kernel/sched/clock.c
++++ b/kernel/sched/clock.c
+@@ -300,6 +300,9 @@ noinstr u64 local_clock(void)
+ 	if (static_branch_likely(&__sched_clock_stable))
+ 		return sched_clock() + __sched_clock_offset;
+ 
++	if (!static_branch_likely(&sched_clock_running))
++		return sched_clock();
++
+ 	preempt_disable_notrace();
+ 	clock = sched_clock_local(this_scd());
+ 	preempt_enable_notrace();
+diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
+index 71b24371a6f77..ac8010f6f3a22 100644
+--- a/kernel/sched/deadline.c
++++ b/kernel/sched/deadline.c
+@@ -2246,6 +2246,7 @@ static struct rq *find_lock_later_rq(struct task_struct *task, struct rq *rq)
+ 				     !cpumask_test_cpu(later_rq->cpu, &task->cpus_mask) ||
+ 				     task_on_cpu(rq, task) ||
+ 				     !dl_task(task) ||
++				     is_migration_disabled(task) ||
+ 				     !task_on_rq_queued(task))) {
+ 				double_unlock_balance(rq, later_rq);
+ 				later_rq = NULL;
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 5f6587d94c1dd..ed89be0aa6503 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -6614,7 +6614,7 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p,
+ 		target = wake_affine_weight(sd, p, this_cpu, prev_cpu, sync);
+ 
+ 	schedstat_inc(p->stats.nr_wakeups_affine_attempts);
+-	if (target == nr_cpumask_bits)
++	if (target != this_cpu)
+ 		return prev_cpu;
+ 
+ 	schedstat_inc(sd->ttwu_move_affine);
+diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
+index 0a11f44adee57..4f5796dd26a56 100644
+--- a/kernel/sched/rt.c
++++ b/kernel/sched/rt.c
+@@ -2000,11 +2000,15 @@ static struct rq *find_lock_lowest_rq(struct task_struct *task, struct rq *rq)
+ 			 * the mean time, task could have
+ 			 * migrated already or had its affinity changed.
+ 			 * Also make sure that it wasn't scheduled on its rq.
++			 * It is possible the task was scheduled, set
++			 * "migrate_disabled" and then got preempted, so we must
++			 * check the task migration disable flag here too.
+ 			 */
+ 			if (unlikely(task_rq(task) != rq ||
+ 				     !cpumask_test_cpu(lowest_rq->cpu, &task->cpus_mask) ||
+ 				     task_on_cpu(rq, task) ||
+ 				     !rt_task(task) ||
++				     is_migration_disabled(task) ||
+ 				     !task_on_rq_queued(task))) {
+ 
+ 				double_unlock_balance(rq, lowest_rq);
+diff --git a/kernel/time/posix-cpu-timers.c b/kernel/time/posix-cpu-timers.c
+index 2f5e9b34022c5..e9c6f9d0e42ce 100644
+--- a/kernel/time/posix-cpu-timers.c
++++ b/kernel/time/posix-cpu-timers.c
+@@ -846,6 +846,8 @@ static u64 collect_timerqueue(struct timerqueue_head *head,
+ 			return expires;
+ 
+ 		ctmr->firing = 1;
++		/* See posix_cpu_timer_wait_running() */
++		rcu_assign_pointer(ctmr->handling, current);
+ 		cpu_timer_dequeue(ctmr);
+ 		list_add_tail(&ctmr->elist, firing);
+ 	}
+@@ -1161,7 +1163,49 @@ static void handle_posix_cpu_timers(struct task_struct *tsk);
+ #ifdef CONFIG_POSIX_CPU_TIMERS_TASK_WORK
+ static void posix_cpu_timers_work(struct callback_head *work)
+ {
++	struct posix_cputimers_work *cw = container_of(work, typeof(*cw), work);
++
++	mutex_lock(&cw->mutex);
+ 	handle_posix_cpu_timers(current);
++	mutex_unlock(&cw->mutex);
++}
++
++/*
++ * Invoked from the posix-timer core when a cancel operation failed because
++ * the timer is marked firing. The caller holds rcu_read_lock(), which
++ * protects the timer and the task which is expiring it from being freed.
++ */
++static void posix_cpu_timer_wait_running(struct k_itimer *timr)
++{
++	struct task_struct *tsk = rcu_dereference(timr->it.cpu.handling);
++
++	/* Has the handling task completed expiry already? */
++	if (!tsk)
++		return;
++
++	/* Ensure that the task cannot go away */
++	get_task_struct(tsk);
++	/* Now drop the RCU protection so the mutex can be locked */
++	rcu_read_unlock();
++	/* Wait on the expiry mutex */
++	mutex_lock(&tsk->posix_cputimers_work.mutex);
++	/* Release it immediately again. */
++	mutex_unlock(&tsk->posix_cputimers_work.mutex);
++	/* Drop the task reference. */
++	put_task_struct(tsk);
++	/* Relock RCU so the callsite is balanced */
++	rcu_read_lock();
++}
++
++static void posix_cpu_timer_wait_running_nsleep(struct k_itimer *timr)
++{
++	/* Ensure that timr->it.cpu.handling task cannot go away */
++	rcu_read_lock();
++	spin_unlock_irq(&timr->it_lock);
++	posix_cpu_timer_wait_running(timr);
++	rcu_read_unlock();
++	/* @timr is on stack and is valid */
++	spin_lock_irq(&timr->it_lock);
+ }
+ 
+ /*
+@@ -1177,6 +1221,7 @@ void clear_posix_cputimers_work(struct task_struct *p)
+ 	       sizeof(p->posix_cputimers_work.work));
+ 	init_task_work(&p->posix_cputimers_work.work,
+ 		       posix_cpu_timers_work);
++	mutex_init(&p->posix_cputimers_work.mutex);
+ 	p->posix_cputimers_work.scheduled = false;
+ }
+ 
+@@ -1255,6 +1300,18 @@ static inline void __run_posix_cpu_timers(struct task_struct *tsk)
+ 	lockdep_posixtimer_exit();
+ }
+ 
++static void posix_cpu_timer_wait_running(struct k_itimer *timr)
++{
++	cpu_relax();
++}
++
++static void posix_cpu_timer_wait_running_nsleep(struct k_itimer *timr)
++{
++	spin_unlock_irq(&timr->it_lock);
++	cpu_relax();
++	spin_lock_irq(&timr->it_lock);
++}
++
+ static inline bool posix_cpu_timers_work_scheduled(struct task_struct *tsk)
+ {
+ 	return false;
+@@ -1363,6 +1420,8 @@ static void handle_posix_cpu_timers(struct task_struct *tsk)
+ 		 */
+ 		if (likely(cpu_firing >= 0))
+ 			cpu_timer_fire(timer);
++		/* See posix_cpu_timer_wait_running() */
++		rcu_assign_pointer(timer->it.cpu.handling, NULL);
+ 		spin_unlock(&timer->it_lock);
+ 	}
+ }
+@@ -1497,23 +1556,16 @@ static int do_cpu_nanosleep(const clockid_t which_clock, int flags,
+ 		expires = cpu_timer_getexpires(&timer.it.cpu);
+ 		error = posix_cpu_timer_set(&timer, 0, &zero_it, &it);
+ 		if (!error) {
+-			/*
+-			 * Timer is now unarmed, deletion can not fail.
+-			 */
++			/* Timer is now unarmed, deletion can not fail. */
+ 			posix_cpu_timer_del(&timer);
++		} else {
++			while (error == TIMER_RETRY) {
++				posix_cpu_timer_wait_running_nsleep(&timer);
++				error = posix_cpu_timer_del(&timer);
++			}
+ 		}
+-		spin_unlock_irq(&timer.it_lock);
+ 
+-		while (error == TIMER_RETRY) {
+-			/*
+-			 * We need to handle case when timer was or is in the
+-			 * middle of firing. In other cases we already freed
+-			 * resources.
+-			 */
+-			spin_lock_irq(&timer.it_lock);
+-			error = posix_cpu_timer_del(&timer);
+-			spin_unlock_irq(&timer.it_lock);
+-		}
++		spin_unlock_irq(&timer.it_lock);
+ 
+ 		if ((it.it_value.tv_sec | it.it_value.tv_nsec) == 0) {
+ 			/*
+@@ -1623,6 +1675,7 @@ const struct k_clock clock_posix_cpu = {
+ 	.timer_del		= posix_cpu_timer_del,
+ 	.timer_get		= posix_cpu_timer_get,
+ 	.timer_rearm		= posix_cpu_timer_rearm,
++	.timer_wait_running	= posix_cpu_timer_wait_running,
+ };
+ 
+ const struct k_clock clock_process = {
+diff --git a/kernel/time/posix-timers.c b/kernel/time/posix-timers.c
+index 0c8a87a11b39d..808a247205a9a 100644
+--- a/kernel/time/posix-timers.c
++++ b/kernel/time/posix-timers.c
+@@ -846,6 +846,10 @@ static struct k_itimer *timer_wait_running(struct k_itimer *timer,
+ 	rcu_read_lock();
+ 	unlock_timer(timer, *flags);
+ 
++	/*
++	 * kc->timer_wait_running() might drop RCU lock. So @timer
++	 * cannot be touched anymore after the function returns!
++	 */
+ 	if (!WARN_ON_ONCE(!kc->timer_wait_running))
+ 		kc->timer_wait_running(timer);
+ 
+diff --git a/kernel/time/tick-common.c b/kernel/time/tick-common.c
+index 46789356f856e..65b8658da829e 100644
+--- a/kernel/time/tick-common.c
++++ b/kernel/time/tick-common.c
+@@ -218,9 +218,19 @@ static void tick_setup_device(struct tick_device *td,
+ 		 * this cpu:
+ 		 */
+ 		if (tick_do_timer_cpu == TICK_DO_TIMER_BOOT) {
++			ktime_t next_p;
++			u32 rem;
++
+ 			tick_do_timer_cpu = cpu;
+ 
+-			tick_next_period = ktime_get();
++			next_p = ktime_get();
++			div_u64_rem(next_p, TICK_NSEC, &rem);
++			if (rem) {
++				next_p -= rem;
++				next_p += TICK_NSEC;
++			}
++
++			tick_next_period = next_p;
+ #ifdef CONFIG_NO_HZ_FULL
+ 			/*
+ 			 * The boot CPU may be nohz_full, in which case set
+diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
+index b0e3c9205946f..a46506f7ec6d0 100644
+--- a/kernel/time/tick-sched.c
++++ b/kernel/time/tick-sched.c
+@@ -281,6 +281,11 @@ static bool check_tick_dependency(atomic_t *dep)
+ 		return true;
+ 	}
+ 
++	if (val & TICK_DEP_MASK_RCU_EXP) {
++		trace_tick_stop(0, TICK_DEP_MASK_RCU_EXP);
++		return true;
++	}
++
+ 	return false;
+ }
+ 
+@@ -527,7 +532,7 @@ void __init tick_nohz_full_setup(cpumask_var_t cpumask)
+ 	tick_nohz_full_running = true;
+ }
+ 
+-static int tick_nohz_cpu_down(unsigned int cpu)
++bool tick_nohz_cpu_hotpluggable(unsigned int cpu)
+ {
+ 	/*
+ 	 * The tick_do_timer_cpu CPU handles housekeeping duty (unbound
+@@ -535,8 +540,13 @@ static int tick_nohz_cpu_down(unsigned int cpu)
+ 	 * CPUs. It must remain online when nohz full is enabled.
+ 	 */
+ 	if (tick_nohz_full_running && tick_do_timer_cpu == cpu)
+-		return -EBUSY;
+-	return 0;
++		return false;
++	return true;
++}
++
++static int tick_nohz_cpu_down(unsigned int cpu)
++{
++	return tick_nohz_cpu_hotpluggable(cpu) ? 0 : -EBUSY;
+ }
+ 
+ void __init tick_nohz_init(void)
+diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
+index 5579ead449f25..09d594900ee0b 100644
+--- a/kernel/time/timekeeping.c
++++ b/kernel/time/timekeeping.c
+@@ -526,7 +526,7 @@ EXPORT_SYMBOL_GPL(ktime_get_raw_fast_ns);
+  * partially updated.  Since the tk->offs_boot update is a rare event, this
+  * should be a rare occurrence which postprocessing should be able to handle.
+  *
+- * The caveats vs. timestamp ordering as documented for ktime_get_fast_ns()
++ * The caveats vs. timestamp ordering as documented for ktime_get_mono_fast_ns()
+  * apply as well.
+  */
+ u64 notrace ktime_get_boot_fast_ns(void)
+@@ -576,7 +576,7 @@ static __always_inline u64 __ktime_get_real_fast(struct tk_fast *tkf, u64 *mono)
+ /**
+  * ktime_get_real_fast_ns: - NMI safe and fast access to clock realtime.
+  *
+- * See ktime_get_fast_ns() for documentation of the time stamp ordering.
++ * See ktime_get_mono_fast_ns() for documentation of the time stamp ordering.
+  */
+ u64 ktime_get_real_fast_ns(void)
+ {
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index 76a2d91eecad9..94e844cf854d4 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -1774,6 +1774,8 @@ static void rb_free_cpu_buffer(struct ring_buffer_per_cpu *cpu_buffer)
+ 	struct list_head *head = cpu_buffer->pages;
+ 	struct buffer_page *bpage, *tmp;
+ 
++	irq_work_sync(&cpu_buffer->irq_work.work);
++
+ 	free_buffer_page(cpu_buffer->reader_page);
+ 
+ 	if (head) {
+@@ -1880,6 +1882,8 @@ ring_buffer_free(struct trace_buffer *buffer)
+ 
+ 	cpuhp_state_remove_instance(CPUHP_TRACE_RB_PREPARE, &buffer->node);
+ 
++	irq_work_sync(&buffer->irq_work.work);
++
+ 	for_each_buffer_cpu(buffer, cpu)
+ 		rb_free_cpu_buffer(buffer->buffers[cpu]);
+ 
+@@ -5345,6 +5349,9 @@ void ring_buffer_reset_cpu(struct trace_buffer *buffer, int cpu)
+ }
+ EXPORT_SYMBOL_GPL(ring_buffer_reset_cpu);
+ 
++/* Flag to ensure proper resetting of atomic variables */
++#define RESET_BIT	(1 << 30)
++
+ /**
+  * ring_buffer_reset_online_cpus - reset a ring buffer per CPU buffer
+  * @buffer: The ring buffer to reset a per cpu buffer of
+@@ -5361,20 +5368,27 @@ void ring_buffer_reset_online_cpus(struct trace_buffer *buffer)
+ 	for_each_online_buffer_cpu(buffer, cpu) {
+ 		cpu_buffer = buffer->buffers[cpu];
+ 
+-		atomic_inc(&cpu_buffer->resize_disabled);
++		atomic_add(RESET_BIT, &cpu_buffer->resize_disabled);
+ 		atomic_inc(&cpu_buffer->record_disabled);
+ 	}
+ 
+ 	/* Make sure all commits have finished */
+ 	synchronize_rcu();
+ 
+-	for_each_online_buffer_cpu(buffer, cpu) {
++	for_each_buffer_cpu(buffer, cpu) {
+ 		cpu_buffer = buffer->buffers[cpu];
+ 
++		/*
++		 * If a CPU came online during the synchronize_rcu(), then
++		 * ignore it.
++		 */
++		if (!(atomic_read(&cpu_buffer->resize_disabled) & RESET_BIT))
++			continue;
++
+ 		reset_disabled_cpu_buffer(cpu_buffer);
+ 
+ 		atomic_dec(&cpu_buffer->record_disabled);
+-		atomic_dec(&cpu_buffer->resize_disabled);
++		atomic_sub(RESET_BIT, &cpu_buffer->resize_disabled);
+ 	}
+ 
+ 	mutex_unlock(&buffer->mutex);
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 36a6037823cd9..5909aaf2f4c08 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -9658,7 +9658,7 @@ init_tracer_tracefs(struct trace_array *tr, struct dentry *d_tracer)
+ 
+ 	tr->buffer_percent = 50;
+ 
+-	trace_create_file("buffer_percent", TRACE_MODE_READ, d_tracer,
++	trace_create_file("buffer_percent", TRACE_MODE_WRITE, d_tracer,
+ 			tr, &buffer_percent_fops);
+ 
+ 	create_trace_options_dir(tr);
+diff --git a/kernel/trace/trace_events_user.c b/kernel/trace/trace_events_user.c
+index 908e8a13c675b..625cab4b9d945 100644
+--- a/kernel/trace/trace_events_user.c
++++ b/kernel/trace/trace_events_user.c
+@@ -1398,6 +1398,9 @@ static ssize_t user_events_write_core(struct file *file, struct iov_iter *i)
+ 	if (unlikely(copy_from_iter(&idx, sizeof(idx), i) != sizeof(idx)))
+ 		return -EFAULT;
+ 
++	if (idx < 0)
++		return -EINVAL;
++
+ 	rcu_read_lock_sched();
+ 
+ 	refs = rcu_dereference_sched(info->refs);
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+index b8b541caed485..2be9b0ecf22ce 100644
+--- a/kernel/workqueue.c
++++ b/kernel/workqueue.c
+@@ -5002,10 +5002,16 @@ static void show_one_worker_pool(struct worker_pool *pool)
+ 	struct worker *worker;
+ 	bool first = true;
+ 	unsigned long flags;
++	unsigned long hung = 0;
+ 
+ 	raw_spin_lock_irqsave(&pool->lock, flags);
+ 	if (pool->nr_workers == pool->nr_idle)
+ 		goto next_pool;
++
++	/* How long the first pending work is waiting for a worker. */
++	if (!list_empty(&pool->worklist))
++		hung = jiffies_to_msecs(jiffies - pool->watchdog_ts) / 1000;
++
+ 	/*
+ 	 * Defer printing to avoid deadlocks in console drivers that
+ 	 * queue work while holding locks also taken in their write
+@@ -5014,9 +5020,7 @@ static void show_one_worker_pool(struct worker_pool *pool)
+ 	printk_deferred_enter();
+ 	pr_info("pool %d:", pool->id);
+ 	pr_cont_pool_info(pool);
+-	pr_cont(" hung=%us workers=%d",
+-		jiffies_to_msecs(jiffies - pool->watchdog_ts) / 1000,
+-		pool->nr_workers);
++	pr_cont(" hung=%lus workers=%d", hung, pool->nr_workers);
+ 	if (pool->manager)
+ 		pr_cont(" manager: %d",
+ 			task_pid_nr(pool->manager->task));
+diff --git a/lib/debugobjects.c b/lib/debugobjects.c
+index df86e649d8be0..003edc5ebd673 100644
+--- a/lib/debugobjects.c
++++ b/lib/debugobjects.c
+@@ -216,10 +216,6 @@ static struct debug_obj *__alloc_object(struct hlist_head *list)
+ 	return obj;
+ }
+ 
+-/*
+- * Allocate a new object. If the pool is empty, switch off the debugger.
+- * Must be called with interrupts disabled.
+- */
+ static struct debug_obj *
+ alloc_object(void *addr, struct debug_bucket *b, const struct debug_obj_descr *descr)
+ {
+@@ -552,36 +548,74 @@ static void debug_object_is_on_stack(void *addr, int onstack)
+ 	WARN_ON(1);
+ }
+ 
+-static void
+-__debug_object_init(void *addr, const struct debug_obj_descr *descr, int onstack)
++static struct debug_obj *lookup_object_or_alloc(void *addr, struct debug_bucket *b,
++						const struct debug_obj_descr *descr,
++						bool onstack, bool alloc_ifstatic)
+ {
+-	enum debug_obj_state state;
+-	bool check_stack = false;
+-	struct debug_bucket *db;
+-	struct debug_obj *obj;
+-	unsigned long flags;
++	struct debug_obj *obj = lookup_object(addr, b);
++	enum debug_obj_state state = ODEBUG_STATE_NONE;
++
++	if (likely(obj))
++		return obj;
++
++	/*
++	 * debug_object_init() unconditionally allocates untracked
++	 * objects. It does not matter whether it is a static object or
++	 * not.
++	 *
++	 * debug_object_assert_init() and debug_object_activate() allow
++	 * allocation only if the descriptor callback confirms that the
++	 * object is static and considered initialized. For non-static
++	 * objects the allocation needs to be done from the fixup callback.
++	 */
++	if (unlikely(alloc_ifstatic)) {
++		if (!descr->is_static_object || !descr->is_static_object(addr))
++			return ERR_PTR(-ENOENT);
++		/* Statically allocated objects are considered initialized */
++		state = ODEBUG_STATE_INIT;
++	}
++
++	obj = alloc_object(addr, b, descr);
++	if (likely(obj)) {
++		obj->state = state;
++		debug_object_is_on_stack(addr, onstack);
++		return obj;
++	}
++
++	/* Out of memory. Do the cleanup outside of the locked region */
++	debug_objects_enabled = 0;
++	return NULL;
++}
+ 
++static void debug_objects_fill_pool(void)
++{
+ 	/*
+ 	 * On RT enabled kernels the pool refill must happen in preemptible
+ 	 * context:
+ 	 */
+ 	if (!IS_ENABLED(CONFIG_PREEMPT_RT) || preemptible())
+ 		fill_pool();
++}
++
++static void
++__debug_object_init(void *addr, const struct debug_obj_descr *descr, int onstack)
++{
++	enum debug_obj_state state;
++	struct debug_bucket *db;
++	struct debug_obj *obj;
++	unsigned long flags;
++
++	debug_objects_fill_pool();
+ 
+ 	db = get_bucket((unsigned long) addr);
+ 
+ 	raw_spin_lock_irqsave(&db->lock, flags);
+ 
+-	obj = lookup_object(addr, db);
+-	if (!obj) {
+-		obj = alloc_object(addr, db, descr);
+-		if (!obj) {
+-			debug_objects_enabled = 0;
+-			raw_spin_unlock_irqrestore(&db->lock, flags);
+-			debug_objects_oom();
+-			return;
+-		}
+-		check_stack = true;
++	obj = lookup_object_or_alloc(addr, db, descr, onstack, false);
++	if (unlikely(!obj)) {
++		raw_spin_unlock_irqrestore(&db->lock, flags);
++		debug_objects_oom();
++		return;
+ 	}
+ 
+ 	switch (obj->state) {
+@@ -607,8 +641,6 @@ __debug_object_init(void *addr, const struct debug_obj_descr *descr, int onstack
+ 	}
+ 
+ 	raw_spin_unlock_irqrestore(&db->lock, flags);
+-	if (check_stack)
+-		debug_object_is_on_stack(addr, onstack);
+ }
+ 
+ /**
+@@ -648,24 +680,24 @@ EXPORT_SYMBOL_GPL(debug_object_init_on_stack);
+  */
+ int debug_object_activate(void *addr, const struct debug_obj_descr *descr)
+ {
++	struct debug_obj o = { .object = addr, .state = ODEBUG_STATE_NOTAVAILABLE, .descr = descr };
+ 	enum debug_obj_state state;
+ 	struct debug_bucket *db;
+ 	struct debug_obj *obj;
+ 	unsigned long flags;
+ 	int ret;
+-	struct debug_obj o = { .object = addr,
+-			       .state = ODEBUG_STATE_NOTAVAILABLE,
+-			       .descr = descr };
+ 
+ 	if (!debug_objects_enabled)
+ 		return 0;
+ 
++	debug_objects_fill_pool();
++
+ 	db = get_bucket((unsigned long) addr);
+ 
+ 	raw_spin_lock_irqsave(&db->lock, flags);
+ 
+-	obj = lookup_object(addr, db);
+-	if (obj) {
++	obj = lookup_object_or_alloc(addr, db, descr, false, true);
++	if (likely(!IS_ERR_OR_NULL(obj))) {
+ 		bool print_object = false;
+ 
+ 		switch (obj->state) {
+@@ -698,24 +730,16 @@ int debug_object_activate(void *addr, const struct debug_obj_descr *descr)
+ 
+ 	raw_spin_unlock_irqrestore(&db->lock, flags);
+ 
+-	/*
+-	 * We are here when a static object is activated. We
+-	 * let the type specific code confirm whether this is
+-	 * true or not. if true, we just make sure that the
+-	 * static object is tracked in the object tracker. If
+-	 * not, this must be a bug, so we try to fix it up.
+-	 */
+-	if (descr->is_static_object && descr->is_static_object(addr)) {
+-		/* track this static object */
+-		debug_object_init(addr, descr);
+-		debug_object_activate(addr, descr);
+-	} else {
+-		debug_print_object(&o, "activate");
+-		ret = debug_object_fixup(descr->fixup_activate, addr,
+-					ODEBUG_STATE_NOTAVAILABLE);
+-		return ret ? 0 : -EINVAL;
++	/* If NULL the allocation has hit OOM */
++	if (!obj) {
++		debug_objects_oom();
++		return 0;
+ 	}
+-	return 0;
++
++	/* Object is neither static nor tracked. It's not initialized */
++	debug_print_object(&o, "activate");
++	ret = debug_object_fixup(descr->fixup_activate, addr, ODEBUG_STATE_NOTAVAILABLE);
++	return ret ? 0 : -EINVAL;
+ }
+ EXPORT_SYMBOL_GPL(debug_object_activate);
+ 
+@@ -869,6 +893,7 @@ EXPORT_SYMBOL_GPL(debug_object_free);
+  */
+ void debug_object_assert_init(void *addr, const struct debug_obj_descr *descr)
+ {
++	struct debug_obj o = { .object = addr, .state = ODEBUG_STATE_NOTAVAILABLE, .descr = descr };
+ 	struct debug_bucket *db;
+ 	struct debug_obj *obj;
+ 	unsigned long flags;
+@@ -876,34 +901,25 @@ void debug_object_assert_init(void *addr, const struct debug_obj_descr *descr)
+ 	if (!debug_objects_enabled)
+ 		return;
+ 
++	debug_objects_fill_pool();
++
+ 	db = get_bucket((unsigned long) addr);
+ 
+ 	raw_spin_lock_irqsave(&db->lock, flags);
++	obj = lookup_object_or_alloc(addr, db, descr, false, true);
++	raw_spin_unlock_irqrestore(&db->lock, flags);
++	if (likely(!IS_ERR_OR_NULL(obj)))
++		return;
+ 
+-	obj = lookup_object(addr, db);
++	/* If NULL the allocation has hit OOM */
+ 	if (!obj) {
+-		struct debug_obj o = { .object = addr,
+-				       .state = ODEBUG_STATE_NOTAVAILABLE,
+-				       .descr = descr };
+-
+-		raw_spin_unlock_irqrestore(&db->lock, flags);
+-		/*
+-		 * Maybe the object is static, and we let the type specific
+-		 * code confirm. Track this static object if true, else invoke
+-		 * fixup.
+-		 */
+-		if (descr->is_static_object && descr->is_static_object(addr)) {
+-			/* Track this static object */
+-			debug_object_init(addr, descr);
+-		} else {
+-			debug_print_object(&o, "assert_init");
+-			debug_object_fixup(descr->fixup_assert_init, addr,
+-					   ODEBUG_STATE_NOTAVAILABLE);
+-		}
++		debug_objects_oom();
+ 		return;
+ 	}
+ 
+-	raw_spin_unlock_irqrestore(&db->lock, flags);
++	/* Object is neither tracked nor static. It's not initialized. */
++	debug_print_object(&o, "assert_init");
++	debug_object_fixup(descr->fixup_assert_init, addr, ODEBUG_STATE_NOTAVAILABLE);
+ }
+ EXPORT_SYMBOL_GPL(debug_object_assert_init);
+ 
+diff --git a/lib/kunit/debugfs.c b/lib/kunit/debugfs.c
+index de0ee2e03ed60..b08bb1fba106d 100644
+--- a/lib/kunit/debugfs.c
++++ b/lib/kunit/debugfs.c
+@@ -55,14 +55,24 @@ static int debugfs_print_results(struct seq_file *seq, void *v)
+ 	enum kunit_status success = kunit_suite_has_succeeded(suite);
+ 	struct kunit_case *test_case;
+ 
+-	if (!suite || !suite->log)
++	if (!suite)
+ 		return 0;
+ 
+-	seq_printf(seq, "%s", suite->log);
++	/* Print KTAP header so the debugfs log can be parsed as valid KTAP. */
++	seq_puts(seq, "KTAP version 1\n");
++	seq_puts(seq, "1..1\n");
++
++	/* Print suite header because it is not stored in the test logs. */
++	seq_puts(seq, KUNIT_SUBTEST_INDENT "KTAP version 1\n");
++	seq_printf(seq, KUNIT_SUBTEST_INDENT "# Subtest: %s\n", suite->name);
++	seq_printf(seq, KUNIT_SUBTEST_INDENT "1..%zd\n", kunit_suite_num_test_cases(suite));
+ 
+ 	kunit_suite_for_each_test_case(suite, test_case)
+ 		debugfs_print_result(seq, suite, test_case);
+ 
++	if (suite->log)
++		seq_printf(seq, "%s", suite->log);
++
+ 	seq_printf(seq, "%s %d %s\n",
+ 		   kunit_status_to_ok_not_ok(success), 1, suite->name);
+ 	return 0;
+diff --git a/lib/kunit/test.c b/lib/kunit/test.c
+index c9e15bb600584..10a5325785f0e 100644
+--- a/lib/kunit/test.c
++++ b/lib/kunit/test.c
+@@ -147,10 +147,18 @@ EXPORT_SYMBOL_GPL(kunit_suite_num_test_cases);
+ 
+ static void kunit_print_suite_start(struct kunit_suite *suite)
+ {
+-	kunit_log(KERN_INFO, suite, KUNIT_SUBTEST_INDENT "KTAP version 1\n");
+-	kunit_log(KERN_INFO, suite, KUNIT_SUBTEST_INDENT "# Subtest: %s",
++	/*
++	 * We do not log the test suite header as doing so would
++	 * mean debugfs display would consist of the test suite
++	 * header prior to individual test results.
++	 * Hence directly printk the suite status, and we will
++	 * separately seq_printf() the suite header for the debugfs
++	 * representation.
++	 */
++	pr_info(KUNIT_SUBTEST_INDENT "KTAP version 1\n");
++	pr_info(KUNIT_SUBTEST_INDENT "# Subtest: %s\n",
+ 		  suite->name);
+-	kunit_log(KERN_INFO, suite, KUNIT_SUBTEST_INDENT "1..%zd",
++	pr_info(KUNIT_SUBTEST_INDENT "1..%zd\n",
+ 		  kunit_suite_num_test_cases(suite));
+ }
+ 
+@@ -167,10 +175,9 @@ static void kunit_print_ok_not_ok(void *test_or_suite,
+ 
+ 	/*
+ 	 * We do not log the test suite results as doing so would
+-	 * mean debugfs display would consist of the test suite
+-	 * description and status prior to individual test results.
+-	 * Hence directly printk the suite status, and we will
+-	 * separately seq_printf() the suite status for the debugfs
++	 * mean debugfs display would consist of an incorrect test
++	 * number. Hence directly printk the suite result, and we will
++	 * separately seq_printf() the suite results for the debugfs
+ 	 * representation.
+ 	 */
+ 	if (suite)
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index 245038a9fe4ea..c4096c4af2535 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -4949,11 +4949,15 @@ static bool is_hugetlb_entry_hwpoisoned(pte_t pte)
+ 
+ static void
+ hugetlb_install_folio(struct vm_area_struct *vma, pte_t *ptep, unsigned long addr,
+-		     struct folio *new_folio)
++		      struct folio *new_folio, pte_t old)
+ {
++	pte_t newpte = make_huge_pte(vma, &new_folio->page, 1);
++
+ 	__folio_mark_uptodate(new_folio);
+ 	hugepage_add_new_anon_rmap(new_folio, vma, addr);
+-	set_huge_pte_at(vma->vm_mm, addr, ptep, make_huge_pte(vma, &new_folio->page, 1));
++	if (userfaultfd_wp(vma) && huge_pte_uffd_wp(old))
++		newpte = huge_pte_mkuffd_wp(newpte);
++	set_huge_pte_at(vma->vm_mm, addr, ptep, newpte);
+ 	hugetlb_count_add(pages_per_huge_page(hstate_vma(vma)), vma->vm_mm);
+ 	folio_set_hugetlb_migratable(new_folio);
+ }
+@@ -5028,14 +5032,12 @@ again:
+ 			 */
+ 			;
+ 		} else if (unlikely(is_hugetlb_entry_hwpoisoned(entry))) {
+-			bool uffd_wp = huge_pte_uffd_wp(entry);
+-
+-			if (!userfaultfd_wp(dst_vma) && uffd_wp)
++			if (!userfaultfd_wp(dst_vma))
+ 				entry = huge_pte_clear_uffd_wp(entry);
+ 			set_huge_pte_at(dst, addr, dst_pte, entry);
+ 		} else if (unlikely(is_hugetlb_entry_migration(entry))) {
+ 			swp_entry_t swp_entry = pte_to_swp_entry(entry);
+-			bool uffd_wp = huge_pte_uffd_wp(entry);
++			bool uffd_wp = pte_swp_uffd_wp(entry);
+ 
+ 			if (!is_readable_migration_entry(swp_entry) && cow) {
+ 				/*
+@@ -5046,10 +5048,10 @@ again:
+ 							swp_offset(swp_entry));
+ 				entry = swp_entry_to_pte(swp_entry);
+ 				if (userfaultfd_wp(src_vma) && uffd_wp)
+-					entry = huge_pte_mkuffd_wp(entry);
++					entry = pte_swp_mkuffd_wp(entry);
+ 				set_huge_pte_at(src, addr, src_pte, entry);
+ 			}
+-			if (!userfaultfd_wp(dst_vma) && uffd_wp)
++			if (!userfaultfd_wp(dst_vma))
+ 				entry = huge_pte_clear_uffd_wp(entry);
+ 			set_huge_pte_at(dst, addr, dst_pte, entry);
+ 		} else if (unlikely(is_pte_marker(entry))) {
+@@ -5109,7 +5111,8 @@ again:
+ 					/* huge_ptep of dst_pte won't change as in child */
+ 					goto again;
+ 				}
+-				hugetlb_install_folio(dst_vma, dst_pte, addr, new_folio);
++				hugetlb_install_folio(dst_vma, dst_pte, addr,
++						      new_folio, src_pte_old);
+ 				spin_unlock(src_ptl);
+ 				spin_unlock(dst_ptl);
+ 				continue;
+@@ -5127,6 +5130,9 @@ again:
+ 				entry = huge_pte_wrprotect(entry);
+ 			}
+ 
++			if (!userfaultfd_wp(dst_vma))
++				entry = huge_pte_clear_uffd_wp(entry);
++
+ 			set_huge_pte_at(dst, addr, dst_pte, entry);
+ 			hugetlb_count_add(npages, dst);
+ 		}
+diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
+index d1bcb0205327a..2f7ec2e1718ad 100644
+--- a/mm/kasan/hw_tags.c
++++ b/mm/kasan/hw_tags.c
+@@ -285,7 +285,7 @@ static void init_vmalloc_pages(const void *start, unsigned long size)
+ 	const void *addr;
+ 
+ 	for (addr = start; addr < start + size; addr += PAGE_SIZE) {
+-		struct page *page = virt_to_page(addr);
++		struct page *page = vmalloc_to_page(addr);
+ 
+ 		clear_highpage_kasan_tagged(page);
+ 	}
+@@ -297,7 +297,7 @@ void *__kasan_unpoison_vmalloc(const void *start, unsigned long size,
+ 	u8 tag;
+ 	unsigned long redzone_start, redzone_size;
+ 
+-	if (!kasan_vmalloc_enabled() || !is_vmalloc_or_module_addr(start)) {
++	if (!kasan_vmalloc_enabled()) {
+ 		if (flags & KASAN_VMALLOC_INIT)
+ 			init_vmalloc_pages(start, size);
+ 		return (void *)start;
+diff --git a/mm/mempolicy.c b/mm/mempolicy.c
+index 2068b594dc882..1756389a06094 100644
+--- a/mm/mempolicy.c
++++ b/mm/mempolicy.c
+@@ -808,8 +808,10 @@ static int mbind_range(struct vma_iterator *vmi, struct vm_area_struct *vma,
+ 		vmstart = vma->vm_start;
+ 	}
+ 
+-	if (mpol_equal(vma_policy(vma), new_pol))
++	if (mpol_equal(vma_policy(vma), new_pol)) {
++		*prev = vma;
+ 		return 0;
++	}
+ 
+ 	pgoff = vma->vm_pgoff + ((vmstart - vma->vm_start) >> PAGE_SHIFT);
+ 	merged = vma_merge(vmi, vma->vm_mm, *prev, vmstart, vmend, vma->vm_flags,
+diff --git a/mm/vmscan.c b/mm/vmscan.c
+index 9c1c5e8b24b8f..2bb7ce0a934a7 100644
+--- a/mm/vmscan.c
++++ b/mm/vmscan.c
+@@ -1911,6 +1911,16 @@ retry:
+ 			}
+ 		}
+ 
++		/*
++		 * Folio is unmapped now so it cannot be newly pinned anymore.
++		 * No point in trying to reclaim folio if it is pinned.
++		 * Furthermore we don't want to reclaim underlying fs metadata
++		 * if the folio is pinned and thus potentially modified by the
++		 * pinning process as that may upset the filesystem.
++		 */
++		if (folio_maybe_dma_pinned(folio))
++			goto activate_locked;
++
+ 		mapping = folio_mapping(folio);
+ 		if (folio_test_dirty(folio)) {
+ 			/*
+diff --git a/net/8021q/vlan_dev.c b/net/8021q/vlan_dev.c
+index 296d0145932f4..5920544e93e82 100644
+--- a/net/8021q/vlan_dev.c
++++ b/net/8021q/vlan_dev.c
+@@ -365,7 +365,7 @@ static int vlan_dev_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
+ 
+ 	switch (cmd) {
+ 	case SIOCSHWTSTAMP:
+-		if (!net_eq(dev_net(dev), &init_net))
++		if (!net_eq(dev_net(dev), dev_net(real_dev)))
+ 			break;
+ 		fallthrough;
+ 	case SIOCGMIIPHY:
+diff --git a/net/core/bpf_sk_storage.c b/net/core/bpf_sk_storage.c
+index bb378c33f542c..6a4b3e9313241 100644
+--- a/net/core/bpf_sk_storage.c
++++ b/net/core/bpf_sk_storage.c
+@@ -100,8 +100,8 @@ static void *bpf_fd_sk_storage_lookup_elem(struct bpf_map *map, void *key)
+ 	return ERR_PTR(err);
+ }
+ 
+-static int bpf_fd_sk_storage_update_elem(struct bpf_map *map, void *key,
+-					 void *value, u64 map_flags)
++static long bpf_fd_sk_storage_update_elem(struct bpf_map *map, void *key,
++					  void *value, u64 map_flags)
+ {
+ 	struct bpf_local_storage_data *sdata;
+ 	struct socket *sock;
+@@ -120,7 +120,7 @@ static int bpf_fd_sk_storage_update_elem(struct bpf_map *map, void *key,
+ 	return err;
+ }
+ 
+-static int bpf_fd_sk_storage_delete_elem(struct bpf_map *map, void *key)
++static long bpf_fd_sk_storage_delete_elem(struct bpf_map *map, void *key)
+ {
+ 	struct socket *sock;
+ 	int fd, err;
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index 4c0879798eb8a..2f9bb98170ab0 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -5162,6 +5162,9 @@ void __skb_tstamp_tx(struct sk_buff *orig_skb,
+ 			skb = alloc_skb(0, GFP_ATOMIC);
+ 	} else {
+ 		skb = skb_clone(orig_skb, GFP_ATOMIC);
++
++		if (skb_orphan_frags_rx(skb, GFP_ATOMIC))
++			return;
+ 	}
+ 	if (!skb)
+ 		return;
+diff --git a/net/core/sock_map.c b/net/core/sock_map.c
+index a68a7290a3b2b..a055139f410e2 100644
+--- a/net/core/sock_map.c
++++ b/net/core/sock_map.c
+@@ -437,7 +437,7 @@ static void sock_map_delete_from_link(struct bpf_map *map, struct sock *sk,
+ 	__sock_map_delete(stab, sk, link_raw);
+ }
+ 
+-static int sock_map_delete_elem(struct bpf_map *map, void *key)
++static long sock_map_delete_elem(struct bpf_map *map, void *key)
+ {
+ 	struct bpf_stab *stab = container_of(map, struct bpf_stab, map);
+ 	u32 i = *(u32 *)key;
+@@ -587,8 +587,8 @@ out:
+ 	return ret;
+ }
+ 
+-static int sock_map_update_elem(struct bpf_map *map, void *key,
+-				void *value, u64 flags)
++static long sock_map_update_elem(struct bpf_map *map, void *key,
++				 void *value, u64 flags)
+ {
+ 	struct sock *sk = (struct sock *)value;
+ 	int ret;
+@@ -916,7 +916,7 @@ static void sock_hash_delete_from_link(struct bpf_map *map, struct sock *sk,
+ 	raw_spin_unlock_bh(&bucket->lock);
+ }
+ 
+-static int sock_hash_delete_elem(struct bpf_map *map, void *key)
++static long sock_hash_delete_elem(struct bpf_map *map, void *key)
+ {
+ 	struct bpf_shtab *htab = container_of(map, struct bpf_shtab, map);
+ 	u32 hash, key_size = map->key_size;
+diff --git a/net/dccp/ipv6.c b/net/dccp/ipv6.c
+index b9d7c3dd1cb39..c0fd8f5f3b94e 100644
+--- a/net/dccp/ipv6.c
++++ b/net/dccp/ipv6.c
+@@ -783,6 +783,7 @@ lookup:
+ 
+ 	if (!xfrm6_policy_check(sk, XFRM_POLICY_IN, skb))
+ 		goto discard_and_relse;
++	nf_reset_ct(skb);
+ 
+ 	return __sk_receive_skb(sk, skb, 1, dh->dccph_doff * 4,
+ 				refcounted) ? -1 : 0;
+diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c
+index 4e4e308c3230a..89bd7872c6290 100644
+--- a/net/ipv4/ip_output.c
++++ b/net/ipv4/ip_output.c
+@@ -1570,9 +1570,19 @@ struct sk_buff *__ip_make_skb(struct sock *sk,
+ 	cork->dst = NULL;
+ 	skb_dst_set(skb, &rt->dst);
+ 
+-	if (iph->protocol == IPPROTO_ICMP)
+-		icmp_out_count(net, ((struct icmphdr *)
+-			skb_transport_header(skb))->type);
++	if (iph->protocol == IPPROTO_ICMP) {
++		u8 icmp_type;
++
++		/* For such sockets, transhdrlen is zero when do ip_append_data(),
++		 * so icmphdr does not in skb linear region and can not get icmp_type
++		 * by icmp_hdr(skb)->type.
++		 */
++		if (sk->sk_type == SOCK_RAW && !inet_sk(sk)->hdrincl)
++			icmp_type = fl4->fl4_icmp_type;
++		else
++			icmp_type = icmp_hdr(skb)->type;
++		icmp_out_count(net, icmp_type);
++	}
+ 
+ 	ip_cork_release(cork);
+ out:
+diff --git a/net/ipv6/ip6_input.c b/net/ipv6/ip6_input.c
+index e1ebf5e42ebe9..d94041bb42872 100644
+--- a/net/ipv6/ip6_input.c
++++ b/net/ipv6/ip6_input.c
+@@ -404,10 +404,6 @@ resubmit_final:
+ 			/* Only do this once for first final protocol */
+ 			have_final = true;
+ 
+-			/* Free reference early: we don't need it any more,
+-			   and it may hold ip_conntrack module loaded
+-			   indefinitely. */
+-			nf_reset_ct(skb);
+ 
+ 			skb_postpull_rcsum(skb, skb_network_header(skb),
+ 					   skb_network_header_len(skb));
+@@ -430,10 +426,12 @@ resubmit_final:
+ 				goto discard;
+ 			}
+ 		}
+-		if (!(ipprot->flags & INET6_PROTO_NOPOLICY) &&
+-		    !xfrm6_policy_check(NULL, XFRM_POLICY_IN, skb)) {
+-			SKB_DR_SET(reason, XFRM_POLICY);
+-			goto discard;
++		if (!(ipprot->flags & INET6_PROTO_NOPOLICY)) {
++			if (!xfrm6_policy_check(NULL, XFRM_POLICY_IN, skb)) {
++				SKB_DR_SET(reason, XFRM_POLICY);
++				goto discard;
++			}
++			nf_reset_ct(skb);
+ 		}
+ 
+ 		ret = INDIRECT_CALL_2(ipprot->handler, tcp_v6_rcv, udpv6_rcv,
+diff --git a/net/ipv6/raw.c b/net/ipv6/raw.c
+index a327aa481df48..9986e2d15c8be 100644
+--- a/net/ipv6/raw.c
++++ b/net/ipv6/raw.c
+@@ -193,10 +193,8 @@ static bool ipv6_raw_deliver(struct sk_buff *skb, int nexthdr)
+ 			struct sk_buff *clone = skb_clone(skb, GFP_ATOMIC);
+ 
+ 			/* Not releasing hash table! */
+-			if (clone) {
+-				nf_reset_ct(clone);
++			if (clone)
+ 				rawv6_rcv(sk, clone);
+-			}
+ 		}
+ 	}
+ 	rcu_read_unlock();
+@@ -389,6 +387,7 @@ int rawv6_rcv(struct sock *sk, struct sk_buff *skb)
+ 		kfree_skb_reason(skb, SKB_DROP_REASON_XFRM_POLICY);
+ 		return NET_RX_DROP;
+ 	}
++	nf_reset_ct(skb);
+ 
+ 	if (!rp->checksum)
+ 		skb->ip_summed = CHECKSUM_UNNECESSARY;
+diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
+index 1bf93b61aa06f..1e747241c7710 100644
+--- a/net/ipv6/tcp_ipv6.c
++++ b/net/ipv6/tcp_ipv6.c
+@@ -1722,6 +1722,8 @@ process:
+ 	if (drop_reason)
+ 		goto discard_and_relse;
+ 
++	nf_reset_ct(skb);
++
+ 	if (tcp_filter(sk, skb)) {
+ 		drop_reason = SKB_DROP_REASON_SOCKET_FILTER;
+ 		goto discard_and_relse;
+diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
+index a675acfb901d1..c519f21632656 100644
+--- a/net/ipv6/udp.c
++++ b/net/ipv6/udp.c
+@@ -704,6 +704,7 @@ static int udpv6_queue_rcv_one_skb(struct sock *sk, struct sk_buff *skb)
+ 		drop_reason = SKB_DROP_REASON_XFRM_POLICY;
+ 		goto drop;
+ 	}
++	nf_reset_ct(skb);
+ 
+ 	if (static_branch_unlikely(&udpv6_encap_needed_key) && up->encap_type) {
+ 		int (*encap_rcv)(struct sock *sk, struct sk_buff *skb);
+@@ -1027,6 +1028,7 @@ int __udp6_lib_rcv(struct sk_buff *skb, struct udp_table *udptable,
+ 
+ 	if (!xfrm6_policy_check(NULL, XFRM_POLICY_IN, skb))
+ 		goto discard;
++	nf_reset_ct(skb);
+ 
+ 	if (udp_lib_checksum_complete(skb))
+ 		goto csum_error;
+diff --git a/net/netfilter/nf_conntrack_bpf.c b/net/netfilter/nf_conntrack_bpf.c
+index cd99e6dc1f35c..34913521c385a 100644
+--- a/net/netfilter/nf_conntrack_bpf.c
++++ b/net/netfilter/nf_conntrack_bpf.c
+@@ -381,6 +381,7 @@ __bpf_kfunc struct nf_conn *bpf_ct_insert_entry(struct nf_conn___init *nfct_i)
+ 	struct nf_conn *nfct = (struct nf_conn *)nfct_i;
+ 	int err;
+ 
++	nfct->status |= IPS_CONFIRMED;
+ 	err = nf_conntrack_hash_check_insert(nfct);
+ 	if (err < 0) {
+ 		nf_conntrack_free(nfct);
+diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c
+index c6a6a6099b4e2..7ba6ab9b54b56 100644
+--- a/net/netfilter/nf_conntrack_core.c
++++ b/net/netfilter/nf_conntrack_core.c
+@@ -932,7 +932,6 @@ nf_conntrack_hash_check_insert(struct nf_conn *ct)
+ 		goto out;
+ 	}
+ 
+-	ct->status |= IPS_CONFIRMED;
+ 	smp_wmb();
+ 	/* The caller holds a reference to this object */
+ 	refcount_set(&ct->ct_general.use, 2);
+diff --git a/net/netfilter/nf_conntrack_netlink.c b/net/netfilter/nf_conntrack_netlink.c
+index bfc3aaa2c872b..6f3b23a6653cc 100644
+--- a/net/netfilter/nf_conntrack_netlink.c
++++ b/net/netfilter/nf_conntrack_netlink.c
+@@ -176,7 +176,12 @@ nla_put_failure:
+ static int ctnetlink_dump_timeout(struct sk_buff *skb, const struct nf_conn *ct,
+ 				  bool skip_zero)
+ {
+-	long timeout = nf_ct_expires(ct) / HZ;
++	long timeout;
++
++	if (nf_ct_is_confirmed(ct))
++		timeout = nf_ct_expires(ct) / HZ;
++	else
++		timeout = ct->timeout / HZ;
+ 
+ 	if (skip_zero && timeout == 0)
+ 		return 0;
+@@ -2253,9 +2258,6 @@ ctnetlink_create_conntrack(struct net *net,
+ 	if (!cda[CTA_TIMEOUT])
+ 		goto err1;
+ 
+-	timeout = (u64)ntohl(nla_get_be32(cda[CTA_TIMEOUT])) * HZ;
+-	__nf_ct_set_timeout(ct, timeout);
+-
+ 	rcu_read_lock();
+  	if (cda[CTA_HELP]) {
+ 		char *helpname = NULL;
+@@ -2316,6 +2318,12 @@ ctnetlink_create_conntrack(struct net *net,
+ 	nfct_seqadj_ext_add(ct);
+ 	nfct_synproxy_ext_add(ct);
+ 
++	/* we must add conntrack extensions before confirmation. */
++	ct->status |= IPS_CONFIRMED;
++
++	timeout = (u64)ntohl(nla_get_be32(cda[CTA_TIMEOUT])) * HZ;
++	__nf_ct_set_timeout(ct, timeout);
++
+ 	if (cda[CTA_STATUS]) {
+ 		err = ctnetlink_change_status(ct, cda);
+ 		if (err < 0)
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index e48ab8dfb5410..46f60648a57d1 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -5004,12 +5004,24 @@ static void nf_tables_unbind_set(const struct nft_ctx *ctx, struct nft_set *set,
+ 	}
+ }
+ 
++void nf_tables_activate_set(const struct nft_ctx *ctx, struct nft_set *set)
++{
++	if (nft_set_is_anonymous(set))
++		nft_clear(ctx->net, set);
++
++	set->use++;
++}
++EXPORT_SYMBOL_GPL(nf_tables_activate_set);
++
+ void nf_tables_deactivate_set(const struct nft_ctx *ctx, struct nft_set *set,
+ 			      struct nft_set_binding *binding,
+ 			      enum nft_trans_phase phase)
+ {
+ 	switch (phase) {
+ 	case NFT_TRANS_PREPARE:
++		if (nft_set_is_anonymous(set))
++			nft_deactivate_next(ctx->net, set);
++
+ 		set->use--;
+ 		return;
+ 	case NFT_TRANS_ABORT:
+@@ -8645,6 +8657,8 @@ static int nf_tables_validate(struct net *net)
+ 			if (nft_table_validate(net, table) < 0)
+ 				return -EAGAIN;
+ 		}
++
++		nft_validate_state_update(net, NFT_VALIDATE_SKIP);
+ 		break;
+ 	}
+ 
+@@ -9586,11 +9600,6 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
+ 	return 0;
+ }
+ 
+-static void nf_tables_cleanup(struct net *net)
+-{
+-	nft_validate_state_update(net, NFT_VALIDATE_SKIP);
+-}
+-
+ static int nf_tables_abort(struct net *net, struct sk_buff *skb,
+ 			   enum nfnl_abort_action action)
+ {
+@@ -9624,7 +9633,6 @@ static const struct nfnetlink_subsystem nf_tables_subsys = {
+ 	.cb		= nf_tables_cb,
+ 	.commit		= nf_tables_commit,
+ 	.abort		= nf_tables_abort,
+-	.cleanup	= nf_tables_cleanup,
+ 	.valid_genid	= nf_tables_valid_genid,
+ 	.owner		= THIS_MODULE,
+ };
+diff --git a/net/netfilter/nfnetlink.c b/net/netfilter/nfnetlink.c
+index 81c7737c803a6..ae7146475d17a 100644
+--- a/net/netfilter/nfnetlink.c
++++ b/net/netfilter/nfnetlink.c
+@@ -590,8 +590,6 @@ done:
+ 			goto replay_abort;
+ 		}
+ 	}
+-	if (ss->cleanup)
+-		ss->cleanup(net);
+ 
+ 	nfnl_err_deliver(&err_list, oskb);
+ 	kfree_skb(skb);
+diff --git a/net/netfilter/nft_dynset.c b/net/netfilter/nft_dynset.c
+index 274579b1696e0..bd19c7aec92ee 100644
+--- a/net/netfilter/nft_dynset.c
++++ b/net/netfilter/nft_dynset.c
+@@ -342,7 +342,7 @@ static void nft_dynset_activate(const struct nft_ctx *ctx,
+ {
+ 	struct nft_dynset *priv = nft_expr_priv(expr);
+ 
+-	priv->set->use++;
++	nf_tables_activate_set(ctx, priv->set);
+ }
+ 
+ static void nft_dynset_destroy(const struct nft_ctx *ctx,
+diff --git a/net/netfilter/nft_lookup.c b/net/netfilter/nft_lookup.c
+index cecf8ab90e58f..03ef4fdaa460b 100644
+--- a/net/netfilter/nft_lookup.c
++++ b/net/netfilter/nft_lookup.c
+@@ -167,7 +167,7 @@ static void nft_lookup_activate(const struct nft_ctx *ctx,
+ {
+ 	struct nft_lookup *priv = nft_expr_priv(expr);
+ 
+-	priv->set->use++;
++	nf_tables_activate_set(ctx, priv->set);
+ }
+ 
+ static void nft_lookup_destroy(const struct nft_ctx *ctx,
+diff --git a/net/netfilter/nft_objref.c b/net/netfilter/nft_objref.c
+index cb37169608bab..a48dd5b5d45b1 100644
+--- a/net/netfilter/nft_objref.c
++++ b/net/netfilter/nft_objref.c
+@@ -185,7 +185,7 @@ static void nft_objref_map_activate(const struct nft_ctx *ctx,
+ {
+ 	struct nft_objref_map *priv = nft_expr_priv(expr);
+ 
+-	priv->set->use++;
++	nf_tables_activate_set(ctx, priv->set);
+ }
+ 
+ static void nft_objref_map_destroy(const struct nft_ctx *ctx,
+diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
+index f365dfdd672d7..9b6eb28e6e94f 100644
+--- a/net/netlink/af_netlink.c
++++ b/net/netlink/af_netlink.c
+@@ -1742,7 +1742,8 @@ static int netlink_getsockopt(struct socket *sock, int level, int optname,
+ {
+ 	struct sock *sk = sock->sk;
+ 	struct netlink_sock *nlk = nlk_sk(sk);
+-	int len, val, err;
++	unsigned int flag;
++	int len, val;
+ 
+ 	if (level != SOL_NETLINK)
+ 		return -ENOPROTOOPT;
+@@ -1754,39 +1755,17 @@ static int netlink_getsockopt(struct socket *sock, int level, int optname,
+ 
+ 	switch (optname) {
+ 	case NETLINK_PKTINFO:
+-		if (len < sizeof(int))
+-			return -EINVAL;
+-		len = sizeof(int);
+-		val = nlk->flags & NETLINK_F_RECV_PKTINFO ? 1 : 0;
+-		if (put_user(len, optlen) ||
+-		    put_user(val, optval))
+-			return -EFAULT;
+-		err = 0;
++		flag = NETLINK_F_RECV_PKTINFO;
+ 		break;
+ 	case NETLINK_BROADCAST_ERROR:
+-		if (len < sizeof(int))
+-			return -EINVAL;
+-		len = sizeof(int);
+-		val = nlk->flags & NETLINK_F_BROADCAST_SEND_ERROR ? 1 : 0;
+-		if (put_user(len, optlen) ||
+-		    put_user(val, optval))
+-			return -EFAULT;
+-		err = 0;
++		flag = NETLINK_F_BROADCAST_SEND_ERROR;
+ 		break;
+ 	case NETLINK_NO_ENOBUFS:
+-		if (len < sizeof(int))
+-			return -EINVAL;
+-		len = sizeof(int);
+-		val = nlk->flags & NETLINK_F_RECV_NO_ENOBUFS ? 1 : 0;
+-		if (put_user(len, optlen) ||
+-		    put_user(val, optval))
+-			return -EFAULT;
+-		err = 0;
++		flag = NETLINK_F_RECV_NO_ENOBUFS;
+ 		break;
+ 	case NETLINK_LIST_MEMBERSHIPS: {
+-		int pos, idx, shift;
++		int pos, idx, shift, err = 0;
+ 
+-		err = 0;
+ 		netlink_lock_table();
+ 		for (pos = 0; pos * 8 < nlk->ngroups; pos += sizeof(u32)) {
+ 			if (len - pos < sizeof(u32))
+@@ -1803,40 +1782,32 @@ static int netlink_getsockopt(struct socket *sock, int level, int optname,
+ 		if (put_user(ALIGN(nlk->ngroups / 8, sizeof(u32)), optlen))
+ 			err = -EFAULT;
+ 		netlink_unlock_table();
+-		break;
++		return err;
+ 	}
+ 	case NETLINK_CAP_ACK:
+-		if (len < sizeof(int))
+-			return -EINVAL;
+-		len = sizeof(int);
+-		val = nlk->flags & NETLINK_F_CAP_ACK ? 1 : 0;
+-		if (put_user(len, optlen) ||
+-		    put_user(val, optval))
+-			return -EFAULT;
+-		err = 0;
++		flag = NETLINK_F_CAP_ACK;
+ 		break;
+ 	case NETLINK_EXT_ACK:
+-		if (len < sizeof(int))
+-			return -EINVAL;
+-		len = sizeof(int);
+-		val = nlk->flags & NETLINK_F_EXT_ACK ? 1 : 0;
+-		if (put_user(len, optlen) || put_user(val, optval))
+-			return -EFAULT;
+-		err = 0;
++		flag = NETLINK_F_EXT_ACK;
+ 		break;
+ 	case NETLINK_GET_STRICT_CHK:
+-		if (len < sizeof(int))
+-			return -EINVAL;
+-		len = sizeof(int);
+-		val = nlk->flags & NETLINK_F_STRICT_CHK ? 1 : 0;
+-		if (put_user(len, optlen) || put_user(val, optval))
+-			return -EFAULT;
+-		err = 0;
++		flag = NETLINK_F_STRICT_CHK;
+ 		break;
+ 	default:
+-		err = -ENOPROTOOPT;
++		return -ENOPROTOOPT;
+ 	}
+-	return err;
++
++	if (len < sizeof(int))
++		return -EINVAL;
++
++	len = sizeof(int);
++	val = nlk->flags & flag ? 1 : 0;
++
++	if (put_user(len, optlen) ||
++	    copy_to_user(optval, &val, len))
++		return -EFAULT;
++
++	return 0;
+ }
+ 
+ static void netlink_cmsg_recv_pktinfo(struct msghdr *msg, struct sk_buff *skb)
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index d4e76e2ae153e..ecd9fc27e360c 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -307,7 +307,8 @@ static void packet_cached_dev_reset(struct packet_sock *po)
+ 
+ static bool packet_use_direct_xmit(const struct packet_sock *po)
+ {
+-	return po->xmit == packet_direct_xmit;
++	/* Paired with WRITE_ONCE() in packet_setsockopt() */
++	return READ_ONCE(po->xmit) == packet_direct_xmit;
+ }
+ 
+ static u16 packet_pick_tx_queue(struct sk_buff *skb)
+@@ -2183,7 +2184,7 @@ static int packet_rcv(struct sk_buff *skb, struct net_device *dev,
+ 	sll = &PACKET_SKB_CB(skb)->sa.ll;
+ 	sll->sll_hatype = dev->type;
+ 	sll->sll_pkttype = skb->pkt_type;
+-	if (unlikely(po->origdev))
++	if (unlikely(packet_sock_flag(po, PACKET_SOCK_ORIGDEV)))
+ 		sll->sll_ifindex = orig_dev->ifindex;
+ 	else
+ 		sll->sll_ifindex = dev->ifindex;
+@@ -2460,7 +2461,7 @@ static int tpacket_rcv(struct sk_buff *skb, struct net_device *dev,
+ 	sll->sll_hatype = dev->type;
+ 	sll->sll_protocol = skb->protocol;
+ 	sll->sll_pkttype = skb->pkt_type;
+-	if (unlikely(po->origdev))
++	if (unlikely(packet_sock_flag(po, PACKET_SOCK_ORIGDEV)))
+ 		sll->sll_ifindex = orig_dev->ifindex;
+ 	else
+ 		sll->sll_ifindex = dev->ifindex;
+@@ -2867,7 +2868,8 @@ tpacket_error:
+ 		packet_inc_pending(&po->tx_ring);
+ 
+ 		status = TP_STATUS_SEND_REQUEST;
+-		err = po->xmit(skb);
++		/* Paired with WRITE_ONCE() in packet_setsockopt() */
++		err = READ_ONCE(po->xmit)(skb);
+ 		if (unlikely(err != 0)) {
+ 			if (err > 0)
+ 				err = net_xmit_errno(err);
+@@ -3070,7 +3072,8 @@ static int packet_snd(struct socket *sock, struct msghdr *msg, size_t len)
+ 		virtio_net_hdr_set_proto(skb, &vnet_hdr);
+ 	}
+ 
+-	err = po->xmit(skb);
++	/* Paired with WRITE_ONCE() in packet_setsockopt() */
++	err = READ_ONCE(po->xmit)(skb);
+ 	if (unlikely(err != 0)) {
+ 		if (err > 0)
+ 			err = net_xmit_errno(err);
+@@ -3511,7 +3514,7 @@ static int packet_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
+ 		memcpy(msg->msg_name, &PACKET_SKB_CB(skb)->sa, copy_len);
+ 	}
+ 
+-	if (pkt_sk(sk)->auxdata) {
++	if (packet_sock_flag(pkt_sk(sk), PACKET_SOCK_AUXDATA)) {
+ 		struct tpacket_auxdata aux;
+ 
+ 		aux.tp_status = TP_STATUS_USER;
+@@ -3897,9 +3900,7 @@ packet_setsockopt(struct socket *sock, int level, int optname, sockptr_t optval,
+ 		if (copy_from_sockptr(&val, optval, sizeof(val)))
+ 			return -EFAULT;
+ 
+-		lock_sock(sk);
+-		po->auxdata = !!val;
+-		release_sock(sk);
++		packet_sock_flag_set(po, PACKET_SOCK_AUXDATA, val);
+ 		return 0;
+ 	}
+ 	case PACKET_ORIGDEV:
+@@ -3911,9 +3912,7 @@ packet_setsockopt(struct socket *sock, int level, int optname, sockptr_t optval,
+ 		if (copy_from_sockptr(&val, optval, sizeof(val)))
+ 			return -EFAULT;
+ 
+-		lock_sock(sk);
+-		po->origdev = !!val;
+-		release_sock(sk);
++		packet_sock_flag_set(po, PACKET_SOCK_ORIGDEV, val);
+ 		return 0;
+ 	}
+ 	case PACKET_VNET_HDR:
+@@ -4007,7 +4006,8 @@ packet_setsockopt(struct socket *sock, int level, int optname, sockptr_t optval,
+ 		if (copy_from_sockptr(&val, optval, sizeof(val)))
+ 			return -EFAULT;
+ 
+-		po->xmit = val ? packet_direct_xmit : dev_queue_xmit;
++		/* Paired with all lockless reads of po->xmit */
++		WRITE_ONCE(po->xmit, val ? packet_direct_xmit : dev_queue_xmit);
+ 		return 0;
+ 	}
+ 	default:
+@@ -4058,10 +4058,10 @@ static int packet_getsockopt(struct socket *sock, int level, int optname,
+ 
+ 		break;
+ 	case PACKET_AUXDATA:
+-		val = po->auxdata;
++		val = packet_sock_flag(po, PACKET_SOCK_AUXDATA);
+ 		break;
+ 	case PACKET_ORIGDEV:
+-		val = po->origdev;
++		val = packet_sock_flag(po, PACKET_SOCK_ORIGDEV);
+ 		break;
+ 	case PACKET_VNET_HDR:
+ 		val = po->has_vnet_hdr;
+diff --git a/net/packet/diag.c b/net/packet/diag.c
+index 07812ae5ca073..d704c7bf51b20 100644
+--- a/net/packet/diag.c
++++ b/net/packet/diag.c
+@@ -23,9 +23,9 @@ static int pdiag_put_info(const struct packet_sock *po, struct sk_buff *nlskb)
+ 	pinfo.pdi_flags = 0;
+ 	if (po->running)
+ 		pinfo.pdi_flags |= PDI_RUNNING;
+-	if (po->auxdata)
++	if (packet_sock_flag(po, PACKET_SOCK_AUXDATA))
+ 		pinfo.pdi_flags |= PDI_AUXDATA;
+-	if (po->origdev)
++	if (packet_sock_flag(po, PACKET_SOCK_ORIGDEV))
+ 		pinfo.pdi_flags |= PDI_ORIGDEV;
+ 	if (po->has_vnet_hdr)
+ 		pinfo.pdi_flags |= PDI_VNETHDR;
+diff --git a/net/packet/internal.h b/net/packet/internal.h
+index 48af35b1aed25..3bae8ea7a36f5 100644
+--- a/net/packet/internal.h
++++ b/net/packet/internal.h
+@@ -116,10 +116,9 @@ struct packet_sock {
+ 	int			copy_thresh;
+ 	spinlock_t		bind_lock;
+ 	struct mutex		pg_vec_lock;
++	unsigned long		flags;
+ 	unsigned int		running;	/* bind_lock must be held */
+-	unsigned int		auxdata:1,	/* writer must hold sock lock */
+-				origdev:1,
+-				has_vnet_hdr:1,
++	unsigned int		has_vnet_hdr:1, /* writer must hold sock lock */
+ 				tp_loss:1,
+ 				tp_tx_has_off:1;
+ 	int			pressure;
+@@ -144,4 +143,25 @@ static inline struct packet_sock *pkt_sk(struct sock *sk)
+ 	return (struct packet_sock *)sk;
+ }
+ 
++enum packet_sock_flags {
++	PACKET_SOCK_ORIGDEV,
++	PACKET_SOCK_AUXDATA,
++};
++
++static inline void packet_sock_flag_set(struct packet_sock *po,
++					enum packet_sock_flags flag,
++					bool val)
++{
++	if (val)
++		set_bit(flag, &po->flags);
++	else
++		clear_bit(flag, &po->flags);
++}
++
++static inline bool packet_sock_flag(const struct packet_sock *po,
++				    enum packet_sock_flags flag)
++{
++	return test_bit(flag, &po->flags);
++}
++
+ #endif
+diff --git a/net/rxrpc/key.c b/net/rxrpc/key.c
+index 8d53aded09c42..33e8302a79e33 100644
+--- a/net/rxrpc/key.c
++++ b/net/rxrpc/key.c
+@@ -680,7 +680,7 @@ static long rxrpc_read(const struct key *key,
+ 			return -ENOPKG;
+ 		}
+ 
+-		if (WARN_ON((unsigned long)xdr - (unsigned long)oldxdr ==
++		if (WARN_ON((unsigned long)xdr - (unsigned long)oldxdr !=
+ 			    toksize))
+ 			return -EIO;
+ 	}
+diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
+index 35785a36c8029..3c3629c9e7b65 100644
+--- a/net/sched/cls_api.c
++++ b/net/sched/cls_api.c
+@@ -3211,6 +3211,7 @@ int tcf_exts_init_ex(struct tcf_exts *exts, struct net *net, int action,
+ #ifdef CONFIG_NET_CLS_ACT
+ 	exts->type = 0;
+ 	exts->nr_actions = 0;
++	exts->miss_cookie_node = NULL;
+ 	/* Note: we do not own yet a reference on net.
+ 	 * This reference might be taken later from tcf_exts_get_net().
+ 	 */
+diff --git a/net/sched/sch_fq.c b/net/sched/sch_fq.c
+index 48d14fb90ba02..f59a2cb2c803d 100644
+--- a/net/sched/sch_fq.c
++++ b/net/sched/sch_fq.c
+@@ -779,13 +779,17 @@ static int fq_resize(struct Qdisc *sch, u32 log)
+ 	return 0;
+ }
+ 
++static struct netlink_range_validation iq_range = {
++	.max = INT_MAX,
++};
++
+ static const struct nla_policy fq_policy[TCA_FQ_MAX + 1] = {
+ 	[TCA_FQ_UNSPEC]			= { .strict_start_type = TCA_FQ_TIMER_SLACK },
+ 
+ 	[TCA_FQ_PLIMIT]			= { .type = NLA_U32 },
+ 	[TCA_FQ_FLOW_PLIMIT]		= { .type = NLA_U32 },
+ 	[TCA_FQ_QUANTUM]		= { .type = NLA_U32 },
+-	[TCA_FQ_INITIAL_QUANTUM]	= { .type = NLA_U32 },
++	[TCA_FQ_INITIAL_QUANTUM]	= NLA_POLICY_FULL_RANGE(NLA_U32, &iq_range),
+ 	[TCA_FQ_RATE_ENABLE]		= { .type = NLA_U32 },
+ 	[TCA_FQ_FLOW_DEFAULT_RATE]	= { .type = NLA_U32 },
+ 	[TCA_FQ_FLOW_MAX_RATE]		= { .type = NLA_U32 },
+diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
+index fd7e1c630493e..d2ee566343083 100644
+--- a/net/sunrpc/clnt.c
++++ b/net/sunrpc/clnt.c
+@@ -2050,9 +2050,6 @@ call_bind_status(struct rpc_task *task)
+ 			status = -EOPNOTSUPP;
+ 			break;
+ 		}
+-		if (task->tk_rebind_retry == 0)
+-			break;
+-		task->tk_rebind_retry--;
+ 		rpc_delay(task, 3*HZ);
+ 		goto retry_timeout;
+ 	case -ENOBUFS:
+diff --git a/net/sunrpc/sched.c b/net/sunrpc/sched.c
+index be587a308e05a..c8321de341eea 100644
+--- a/net/sunrpc/sched.c
++++ b/net/sunrpc/sched.c
+@@ -817,7 +817,6 @@ rpc_init_task_statistics(struct rpc_task *task)
+ 	/* Initialize retry counters */
+ 	task->tk_garb_retry = 2;
+ 	task->tk_cred_retry = 2;
+-	task->tk_rebind_retry = 2;
+ 
+ 	/* starting timestamp */
+ 	task->tk_start = ktime_get();
+diff --git a/net/xdp/xsk_queue.h b/net/xdp/xsk_queue.h
+index bfb2a7e50c261..66c6f57c9c447 100644
+--- a/net/xdp/xsk_queue.h
++++ b/net/xdp/xsk_queue.h
+@@ -162,6 +162,7 @@ static inline bool xp_unaligned_validate_desc(struct xsk_buff_pool *pool,
+ 		return false;
+ 
+ 	if (base_addr >= pool->addrs_cnt || addr >= pool->addrs_cnt ||
++	    addr + desc->len > pool->addrs_cnt ||
+ 	    xp_desc_crosses_non_contig_pg(pool, addr, desc->len))
+ 		return false;
+ 
+diff --git a/net/xdp/xskmap.c b/net/xdp/xskmap.c
+index 771d0fa90ef58..3436a8efb8dc7 100644
+--- a/net/xdp/xskmap.c
++++ b/net/xdp/xskmap.c
+@@ -150,8 +150,8 @@ static void *xsk_map_lookup_elem_sys_only(struct bpf_map *map, void *key)
+ 	return ERR_PTR(-EOPNOTSUPP);
+ }
+ 
+-static int xsk_map_update_elem(struct bpf_map *map, void *key, void *value,
+-			       u64 map_flags)
++static long xsk_map_update_elem(struct bpf_map *map, void *key, void *value,
++				u64 map_flags)
+ {
+ 	struct xsk_map *m = container_of(map, struct xsk_map, map);
+ 	struct xdp_sock __rcu **map_entry;
+@@ -211,7 +211,7 @@ out:
+ 	return err;
+ }
+ 
+-static int xsk_map_delete_elem(struct bpf_map *map, void *key)
++static long xsk_map_delete_elem(struct bpf_map *map, void *key)
+ {
+ 	struct xsk_map *m = container_of(map, struct xsk_map, map);
+ 	struct xdp_sock __rcu **map_entry;
+@@ -231,7 +231,7 @@ static int xsk_map_delete_elem(struct bpf_map *map, void *key)
+ 	return 0;
+ }
+ 
+-static int xsk_map_redirect(struct bpf_map *map, u64 index, u64 flags)
++static long xsk_map_redirect(struct bpf_map *map, u64 index, u64 flags)
+ {
+ 	return __bpf_xdp_redirect_map(map, index, flags, 0,
+ 				      __xsk_map_lookup_elem);
+diff --git a/scripts/gdb/linux/clk.py b/scripts/gdb/linux/clk.py
+index 061aecfa294e6..7a01fdc3e8446 100644
+--- a/scripts/gdb/linux/clk.py
++++ b/scripts/gdb/linux/clk.py
+@@ -41,6 +41,8 @@ are cached and potentially out of date"""
+             self.show_subtree(child, level + 1)
+ 
+     def invoke(self, arg, from_tty):
++        if utils.gdb_eval_or_none("clk_root_list") is None:
++            raise gdb.GdbError("No clocks registered")
+         gdb.write("                                 enable  prepare  protect               \n")
+         gdb.write("   clock                          count    count    count        rate   \n")
+         gdb.write("------------------------------------------------------------------------\n")
+diff --git a/scripts/gdb/linux/constants.py.in b/scripts/gdb/linux/constants.py.in
+index 2efbec6b6b8db..08f0587d15ea1 100644
+--- a/scripts/gdb/linux/constants.py.in
++++ b/scripts/gdb/linux/constants.py.in
+@@ -39,6 +39,8 @@
+ 
+ import gdb
+ 
++LX_CONFIG(CONFIG_DEBUG_INFO_REDUCED)
++
+ /* linux/clk-provider.h */
+ if IS_BUILTIN(CONFIG_COMMON_CLK):
+     LX_GDBPARSED(CLK_GET_RATE_NOCACHE)
+diff --git a/scripts/gdb/linux/genpd.py b/scripts/gdb/linux/genpd.py
+index 39cd1abd85590..b53649c0a77a6 100644
+--- a/scripts/gdb/linux/genpd.py
++++ b/scripts/gdb/linux/genpd.py
+@@ -5,7 +5,7 @@
+ import gdb
+ import sys
+ 
+-from linux.utils import CachedType
++from linux.utils import CachedType, gdb_eval_or_none
+ from linux.lists import list_for_each_entry
+ 
+ generic_pm_domain_type = CachedType('struct generic_pm_domain')
+@@ -70,6 +70,8 @@ Output is similar to /sys/kernel/debug/pm_genpd/pm_genpd_summary'''
+             gdb.write('    %-50s  %s\n' % (kobj_path, rtpm_status_str(dev)))
+ 
+     def invoke(self, arg, from_tty):
++        if gdb_eval_or_none("&gpd_list") is None:
++            raise gdb.GdbError("No power domain(s) registered")
+         gdb.write('domain                          status          children\n');
+         gdb.write('    /device                                             runtime status\n');
+         gdb.write('----------------------------------------------------------------------\n');
+diff --git a/scripts/gdb/linux/timerlist.py b/scripts/gdb/linux/timerlist.py
+index 071d0dd5a6349..51def847f1ef9 100644
+--- a/scripts/gdb/linux/timerlist.py
++++ b/scripts/gdb/linux/timerlist.py
+@@ -73,7 +73,7 @@ def print_cpu(hrtimer_bases, cpu, max_clock_bases):
+     ts = cpus.per_cpu(tick_sched_ptr, cpu)
+ 
+     text = "cpu: {}\n".format(cpu)
+-    for i in xrange(max_clock_bases):
++    for i in range(max_clock_bases):
+         text += " clock {}:\n".format(i)
+         text += print_base(cpu_base['clock_base'][i])
+ 
+@@ -158,6 +158,8 @@ def pr_cpumask(mask):
+     num_bytes = (nr_cpu_ids + 7) / 8
+     buf = utils.read_memoryview(inf, bits, num_bytes).tobytes()
+     buf = binascii.b2a_hex(buf)
++    if type(buf) is not str:
++        buf=buf.decode()
+ 
+     chunks = []
+     i = num_bytes
+diff --git a/scripts/gdb/linux/utils.py b/scripts/gdb/linux/utils.py
+index 1553f68716cc2..7f36aee32ac66 100644
+--- a/scripts/gdb/linux/utils.py
++++ b/scripts/gdb/linux/utils.py
+@@ -88,7 +88,10 @@ def get_target_endianness():
+ 
+ 
+ def read_memoryview(inf, start, length):
+-    return memoryview(inf.read_memory(start, length))
++    m = inf.read_memory(start, length)
++    if type(m) is memoryview:
++        return m
++    return memoryview(m)
+ 
+ 
+ def read_u16(buffer, offset):
+diff --git a/scripts/gdb/vmlinux-gdb.py b/scripts/gdb/vmlinux-gdb.py
+index 3a5b44cd6bfe4..7e024c051e77f 100644
+--- a/scripts/gdb/vmlinux-gdb.py
++++ b/scripts/gdb/vmlinux-gdb.py
+@@ -22,6 +22,10 @@ except:
+     gdb.write("NOTE: gdb 7.2 or later required for Linux helper scripts to "
+               "work.\n")
+ else:
++    import linux.constants
++    if linux.constants.LX_CONFIG_DEBUG_INFO_REDUCED:
++        raise gdb.GdbError("Reduced debug information will prevent GDB "
++                           "from having complete types.\n")
+     import linux.utils
+     import linux.symbols
+     import linux.modules
+@@ -32,7 +36,6 @@ else:
+     import linux.lists
+     import linux.rbtree
+     import linux.proc
+-    import linux.constants
+     import linux.timerlist
+     import linux.clk
+     import linux.genpd
+diff --git a/security/integrity/ima/Kconfig b/security/integrity/ima/Kconfig
+index 39caeca474449..60a511c6b583e 100644
+--- a/security/integrity/ima/Kconfig
++++ b/security/integrity/ima/Kconfig
+@@ -8,7 +8,7 @@ config IMA
+ 	select CRYPTO_HMAC
+ 	select CRYPTO_SHA1
+ 	select CRYPTO_HASH_INFO
+-	select TCG_TPM if HAS_IOMEM && !UML
++	select TCG_TPM if HAS_IOMEM
+ 	select TCG_TIS if TCG_TPM && X86
+ 	select TCG_CRB if TCG_TPM && ACPI
+ 	select TCG_IBMVTPM if TCG_TPM && PPC_PSERIES
+diff --git a/security/selinux/Makefile b/security/selinux/Makefile
+index 7761624448826..0aecf9334ec31 100644
+--- a/security/selinux/Makefile
++++ b/security/selinux/Makefile
+@@ -23,8 +23,8 @@ ccflags-y := -I$(srctree)/security/selinux -I$(srctree)/security/selinux/include
+ $(addprefix $(obj)/,$(selinux-y)): $(obj)/flask.h
+ 
+ quiet_cmd_flask = GEN     $(obj)/flask.h $(obj)/av_permissions.h
+-      cmd_flask = scripts/selinux/genheaders/genheaders $(obj)/flask.h $(obj)/av_permissions.h
++      cmd_flask = $< $(obj)/flask.h $(obj)/av_permissions.h
+ 
+ targets += flask.h av_permissions.h
+-$(obj)/flask.h: $(src)/include/classmap.h FORCE
++$(obj)/flask.h $(obj)/av_permissions.h &: scripts/selinux/genheaders/genheaders FORCE
+ 	$(call if_changed,flask)
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index f70d6a33421d2..172ffc2c332b7 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -9428,6 +9428,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x8898, "HP EliteBook 845 G8 Notebook PC", ALC285_FIXUP_HP_LIMIT_INT_MIC_BOOST),
+ 	SND_PCI_QUIRK(0x103c, 0x88d0, "HP Pavilion 15-eh1xxx (mainboard 88D0)", ALC287_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8902, "HP OMEN 16", ALC285_FIXUP_HP_MUTE_LED),
++	SND_PCI_QUIRK(0x103c, 0x8919, "HP Pavilion Aero Laptop 13-be0xxx", ALC287_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x896d, "HP ZBook Firefly 16 G9", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x896e, "HP EliteBook x360 830 G9", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8971, "HP EliteBook 830 G9", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
+@@ -9478,6 +9479,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x8b8d, "HP", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8b8f, "HP", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8b92, "HP", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
++	SND_PCI_QUIRK(0x103c, 0x8b96, "HP", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
+ 	SND_PCI_QUIRK(0x103c, 0x8bf0, "HP", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x1043, 0x103e, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x103f, "ASUS TX300", ALC282_FIXUP_ASUS_TX300),
+@@ -9500,6 +9502,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1043, 0x1427, "Asus Zenbook UX31E", ALC269VB_FIXUP_ASUS_ZENBOOK),
+ 	SND_PCI_QUIRK(0x1043, 0x1517, "Asus Zenbook UX31A", ALC269VB_FIXUP_ASUS_ZENBOOK_UX31A),
+ 	SND_PCI_QUIRK(0x1043, 0x1662, "ASUS GV301QH", ALC294_FIXUP_ASUS_DUAL_SPK),
++	SND_PCI_QUIRK(0x1043, 0x1683, "ASUS UM3402YAR", ALC287_FIXUP_CS35L41_I2C_2),
+ 	SND_PCI_QUIRK(0x1043, 0x16b2, "ASUS GU603", ALC289_FIXUP_ASUS_GA401),
+ 	SND_PCI_QUIRK(0x1043, 0x16e3, "ASUS UX50", ALC269_FIXUP_STEREO_DMIC),
+ 	SND_PCI_QUIRK(0x1043, 0x1740, "ASUS UX430UA", ALC295_FIXUP_ASUS_DACS),
+@@ -9689,6 +9692,8 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x22f1, "Thinkpad", ALC287_FIXUP_CS35L41_I2C_2),
+ 	SND_PCI_QUIRK(0x17aa, 0x22f2, "Thinkpad", ALC287_FIXUP_CS35L41_I2C_2),
+ 	SND_PCI_QUIRK(0x17aa, 0x22f3, "Thinkpad", ALC287_FIXUP_CS35L41_I2C_2),
++	SND_PCI_QUIRK(0x17aa, 0x2316, "Thinkpad P1 Gen 6", ALC287_FIXUP_CS35L41_I2C_2),
++	SND_PCI_QUIRK(0x17aa, 0x2317, "Thinkpad P1 Gen 6", ALC287_FIXUP_CS35L41_I2C_2),
+ 	SND_PCI_QUIRK(0x17aa, 0x2318, "Thinkpad Z13 Gen2", ALC287_FIXUP_CS35L41_I2C_2),
+ 	SND_PCI_QUIRK(0x17aa, 0x2319, "Thinkpad Z16 Gen2", ALC287_FIXUP_CS35L41_I2C_2),
+ 	SND_PCI_QUIRK(0x17aa, 0x231a, "Thinkpad Z16 Gen2", ALC287_FIXUP_CS35L41_I2C_2),
+diff --git a/sound/soc/amd/ps/pci-ps.c b/sound/soc/amd/ps/pci-ps.c
+index e86f23d97584f..688a1d4643d91 100644
+--- a/sound/soc/amd/ps/pci-ps.c
++++ b/sound/soc/amd/ps/pci-ps.c
+@@ -91,7 +91,6 @@ static int acp63_init(void __iomem *acp_base, struct device *dev)
+ 		dev_err(dev, "ACP reset failed\n");
+ 		return ret;
+ 	}
+-	acp63_writel(0x03, acp_base + ACP_CLKMUX_SEL);
+ 	acp63_enable_interrupts(acp_base);
+ 	return 0;
+ }
+@@ -106,7 +105,6 @@ static int acp63_deinit(void __iomem *acp_base, struct device *dev)
+ 		dev_err(dev, "ACP reset failed\n");
+ 		return ret;
+ 	}
+-	acp63_writel(0, acp_base + ACP_CLKMUX_SEL);
+ 	acp63_writel(0, acp_base + ACP_CONTROL);
+ 	return 0;
+ }
+diff --git a/sound/soc/codecs/cs35l41.c b/sound/soc/codecs/cs35l41.c
+index c223d83e02cfb..f2b5032daa6ae 100644
+--- a/sound/soc/codecs/cs35l41.c
++++ b/sound/soc/codecs/cs35l41.c
+@@ -356,6 +356,19 @@ static const struct snd_kcontrol_new cs35l41_aud_controls[] = {
+ 	WM_ADSP_FW_CONTROL("DSP1", 0),
+ };
+ 
++static void cs35l41_boost_enable(struct cs35l41_private *cs35l41, unsigned int enable)
++{
++	switch (cs35l41->hw_cfg.bst_type) {
++	case CS35L41_INT_BOOST:
++		enable = enable ? CS35L41_BST_EN_DEFAULT : CS35L41_BST_DIS_FET_OFF;
++		regmap_update_bits(cs35l41->regmap, CS35L41_PWR_CTRL2, CS35L41_BST_EN_MASK,
++				enable << CS35L41_BST_EN_SHIFT);
++		break;
++	default:
++		break;
++	}
++}
++
+ static irqreturn_t cs35l41_irq(int irq, void *data)
+ {
+ 	struct cs35l41_private *cs35l41 = data;
+@@ -431,8 +444,7 @@ static irqreturn_t cs35l41_irq(int irq, void *data)
+ 
+ 	if (status[0] & CS35L41_BST_OVP_ERR) {
+ 		dev_crit_ratelimited(cs35l41->dev, "VBST Over Voltage error\n");
+-		regmap_update_bits(cs35l41->regmap, CS35L41_PWR_CTRL2,
+-				   CS35L41_BST_EN_MASK, 0);
++		cs35l41_boost_enable(cs35l41, 0);
+ 		regmap_write(cs35l41->regmap, CS35L41_IRQ1_STATUS1,
+ 			     CS35L41_BST_OVP_ERR);
+ 		regmap_write(cs35l41->regmap, CS35L41_PROTECT_REL_ERR_IGN, 0);
+@@ -441,16 +453,13 @@ static irqreturn_t cs35l41_irq(int irq, void *data)
+ 				   CS35L41_BST_OVP_ERR_RLS);
+ 		regmap_update_bits(cs35l41->regmap, CS35L41_PROTECT_REL_ERR_IGN,
+ 				   CS35L41_BST_OVP_ERR_RLS, 0);
+-		regmap_update_bits(cs35l41->regmap, CS35L41_PWR_CTRL2,
+-				   CS35L41_BST_EN_MASK,
+-				   CS35L41_BST_EN_DEFAULT << CS35L41_BST_EN_SHIFT);
++		cs35l41_boost_enable(cs35l41, 1);
+ 		ret = IRQ_HANDLED;
+ 	}
+ 
+ 	if (status[0] & CS35L41_BST_DCM_UVP_ERR) {
+ 		dev_crit_ratelimited(cs35l41->dev, "DCM VBST Under Voltage Error\n");
+-		regmap_update_bits(cs35l41->regmap, CS35L41_PWR_CTRL2,
+-				   CS35L41_BST_EN_MASK, 0);
++		cs35l41_boost_enable(cs35l41, 0);
+ 		regmap_write(cs35l41->regmap, CS35L41_IRQ1_STATUS1,
+ 			     CS35L41_BST_DCM_UVP_ERR);
+ 		regmap_write(cs35l41->regmap, CS35L41_PROTECT_REL_ERR_IGN, 0);
+@@ -459,16 +468,13 @@ static irqreturn_t cs35l41_irq(int irq, void *data)
+ 				   CS35L41_BST_UVP_ERR_RLS);
+ 		regmap_update_bits(cs35l41->regmap, CS35L41_PROTECT_REL_ERR_IGN,
+ 				   CS35L41_BST_UVP_ERR_RLS, 0);
+-		regmap_update_bits(cs35l41->regmap, CS35L41_PWR_CTRL2,
+-				   CS35L41_BST_EN_MASK,
+-				   CS35L41_BST_EN_DEFAULT << CS35L41_BST_EN_SHIFT);
++		cs35l41_boost_enable(cs35l41, 1);
+ 		ret = IRQ_HANDLED;
+ 	}
+ 
+ 	if (status[0] & CS35L41_BST_SHORT_ERR) {
+ 		dev_crit_ratelimited(cs35l41->dev, "LBST error: powering off!\n");
+-		regmap_update_bits(cs35l41->regmap, CS35L41_PWR_CTRL2,
+-				   CS35L41_BST_EN_MASK, 0);
++		cs35l41_boost_enable(cs35l41, 0);
+ 		regmap_write(cs35l41->regmap, CS35L41_IRQ1_STATUS1,
+ 			     CS35L41_BST_SHORT_ERR);
+ 		regmap_write(cs35l41->regmap, CS35L41_PROTECT_REL_ERR_IGN, 0);
+@@ -477,9 +483,7 @@ static irqreturn_t cs35l41_irq(int irq, void *data)
+ 				   CS35L41_BST_SHORT_ERR_RLS);
+ 		regmap_update_bits(cs35l41->regmap, CS35L41_PROTECT_REL_ERR_IGN,
+ 				   CS35L41_BST_SHORT_ERR_RLS, 0);
+-		regmap_update_bits(cs35l41->regmap, CS35L41_PWR_CTRL2,
+-				   CS35L41_BST_EN_MASK,
+-				   CS35L41_BST_EN_DEFAULT << CS35L41_BST_EN_SHIFT);
++		cs35l41_boost_enable(cs35l41, 1);
+ 		ret = IRQ_HANDLED;
+ 	}
+ 
+diff --git a/sound/soc/codecs/es8316.c b/sound/soc/codecs/es8316.c
+index 056c3082fe02c..f7d7a9c91e04c 100644
+--- a/sound/soc/codecs/es8316.c
++++ b/sound/soc/codecs/es8316.c
+@@ -842,12 +842,14 @@ static int es8316_i2c_probe(struct i2c_client *i2c_client)
+ 	es8316->irq = i2c_client->irq;
+ 	mutex_init(&es8316->lock);
+ 
+-	ret = devm_request_threaded_irq(dev, es8316->irq, NULL, es8316_irq,
+-					IRQF_TRIGGER_HIGH | IRQF_ONESHOT | IRQF_NO_AUTOEN,
+-					"es8316", es8316);
+-	if (ret) {
+-		dev_warn(dev, "Failed to get IRQ %d: %d\n", es8316->irq, ret);
+-		es8316->irq = -ENXIO;
++	if (es8316->irq > 0) {
++		ret = devm_request_threaded_irq(dev, es8316->irq, NULL, es8316_irq,
++						IRQF_TRIGGER_HIGH | IRQF_ONESHOT | IRQF_NO_AUTOEN,
++						"es8316", es8316);
++		if (ret) {
++			dev_warn(dev, "Failed to get IRQ %d: %d\n", es8316->irq, ret);
++			es8316->irq = -ENXIO;
++		}
+ 	}
+ 
+ 	return devm_snd_soc_register_component(&i2c_client->dev,
+diff --git a/sound/soc/codecs/wcd938x-sdw.c b/sound/soc/codecs/wcd938x-sdw.c
+index 33d1b5ffeaeba..402286dfaea44 100644
+--- a/sound/soc/codecs/wcd938x-sdw.c
++++ b/sound/soc/codecs/wcd938x-sdw.c
+@@ -161,6 +161,14 @@ EXPORT_SYMBOL_GPL(wcd938x_sdw_set_sdw_stream);
+ static int wcd9380_update_status(struct sdw_slave *slave,
+ 				 enum sdw_slave_status status)
+ {
++	struct wcd938x_sdw_priv *wcd = dev_get_drvdata(&slave->dev);
++
++	if (wcd->regmap && (status == SDW_SLAVE_ATTACHED)) {
++		/* Write out any cached changes that happened between probe and attach */
++		regcache_cache_only(wcd->regmap, false);
++		return regcache_sync(wcd->regmap);
++	}
++
+ 	return 0;
+ }
+ 
+@@ -177,20 +185,1014 @@ static int wcd9380_interrupt_callback(struct sdw_slave *slave,
+ {
+ 	struct wcd938x_sdw_priv *wcd = dev_get_drvdata(&slave->dev);
+ 	struct irq_domain *slave_irq = wcd->slave_irq;
+-	struct regmap *regmap = dev_get_regmap(&slave->dev, NULL);
+ 	u32 sts1, sts2, sts3;
+ 
+ 	do {
+ 		handle_nested_irq(irq_find_mapping(slave_irq, 0));
+-		regmap_read(regmap, WCD938X_DIGITAL_INTR_STATUS_0, &sts1);
+-		regmap_read(regmap, WCD938X_DIGITAL_INTR_STATUS_1, &sts2);
+-		regmap_read(regmap, WCD938X_DIGITAL_INTR_STATUS_2, &sts3);
++		regmap_read(wcd->regmap, WCD938X_DIGITAL_INTR_STATUS_0, &sts1);
++		regmap_read(wcd->regmap, WCD938X_DIGITAL_INTR_STATUS_1, &sts2);
++		regmap_read(wcd->regmap, WCD938X_DIGITAL_INTR_STATUS_2, &sts3);
+ 
+ 	} while (sts1 || sts2 || sts3);
+ 
+ 	return IRQ_HANDLED;
+ }
+ 
++static const struct reg_default wcd938x_defaults[] = {
++	{WCD938X_ANA_PAGE_REGISTER,                            0x00},
++	{WCD938X_ANA_BIAS,                                     0x00},
++	{WCD938X_ANA_RX_SUPPLIES,                              0x00},
++	{WCD938X_ANA_HPH,                                      0x0C},
++	{WCD938X_ANA_EAR,                                      0x00},
++	{WCD938X_ANA_EAR_COMPANDER_CTL,                        0x02},
++	{WCD938X_ANA_TX_CH1,                                   0x20},
++	{WCD938X_ANA_TX_CH2,                                   0x00},
++	{WCD938X_ANA_TX_CH3,                                   0x20},
++	{WCD938X_ANA_TX_CH4,                                   0x00},
++	{WCD938X_ANA_MICB1_MICB2_DSP_EN_LOGIC,                 0x00},
++	{WCD938X_ANA_MICB3_DSP_EN_LOGIC,                       0x00},
++	{WCD938X_ANA_MBHC_MECH,                                0x39},
++	{WCD938X_ANA_MBHC_ELECT,                               0x08},
++	{WCD938X_ANA_MBHC_ZDET,                                0x00},
++	{WCD938X_ANA_MBHC_RESULT_1,                            0x00},
++	{WCD938X_ANA_MBHC_RESULT_2,                            0x00},
++	{WCD938X_ANA_MBHC_RESULT_3,                            0x00},
++	{WCD938X_ANA_MBHC_BTN0,                                0x00},
++	{WCD938X_ANA_MBHC_BTN1,                                0x10},
++	{WCD938X_ANA_MBHC_BTN2,                                0x20},
++	{WCD938X_ANA_MBHC_BTN3,                                0x30},
++	{WCD938X_ANA_MBHC_BTN4,                                0x40},
++	{WCD938X_ANA_MBHC_BTN5,                                0x50},
++	{WCD938X_ANA_MBHC_BTN6,                                0x60},
++	{WCD938X_ANA_MBHC_BTN7,                                0x70},
++	{WCD938X_ANA_MICB1,                                    0x10},
++	{WCD938X_ANA_MICB2,                                    0x10},
++	{WCD938X_ANA_MICB2_RAMP,                               0x00},
++	{WCD938X_ANA_MICB3,                                    0x10},
++	{WCD938X_ANA_MICB4,                                    0x10},
++	{WCD938X_BIAS_CTL,                                     0x2A},
++	{WCD938X_BIAS_VBG_FINE_ADJ,                            0x55},
++	{WCD938X_LDOL_VDDCX_ADJUST,                            0x01},
++	{WCD938X_LDOL_DISABLE_LDOL,                            0x00},
++	{WCD938X_MBHC_CTL_CLK,                                 0x00},
++	{WCD938X_MBHC_CTL_ANA,                                 0x00},
++	{WCD938X_MBHC_CTL_SPARE_1,                             0x00},
++	{WCD938X_MBHC_CTL_SPARE_2,                             0x00},
++	{WCD938X_MBHC_CTL_BCS,                                 0x00},
++	{WCD938X_MBHC_MOISTURE_DET_FSM_STATUS,                 0x00},
++	{WCD938X_MBHC_TEST_CTL,                                0x00},
++	{WCD938X_LDOH_MODE,                                    0x2B},
++	{WCD938X_LDOH_BIAS,                                    0x68},
++	{WCD938X_LDOH_STB_LOADS,                               0x00},
++	{WCD938X_LDOH_SLOWRAMP,                                0x50},
++	{WCD938X_MICB1_TEST_CTL_1,                             0x1A},
++	{WCD938X_MICB1_TEST_CTL_2,                             0x00},
++	{WCD938X_MICB1_TEST_CTL_3,                             0xA4},
++	{WCD938X_MICB2_TEST_CTL_1,                             0x1A},
++	{WCD938X_MICB2_TEST_CTL_2,                             0x00},
++	{WCD938X_MICB2_TEST_CTL_3,                             0x24},
++	{WCD938X_MICB3_TEST_CTL_1,                             0x1A},
++	{WCD938X_MICB3_TEST_CTL_2,                             0x00},
++	{WCD938X_MICB3_TEST_CTL_3,                             0xA4},
++	{WCD938X_MICB4_TEST_CTL_1,                             0x1A},
++	{WCD938X_MICB4_TEST_CTL_2,                             0x00},
++	{WCD938X_MICB4_TEST_CTL_3,                             0xA4},
++	{WCD938X_TX_COM_ADC_VCM,                               0x39},
++	{WCD938X_TX_COM_BIAS_ATEST,                            0xE0},
++	{WCD938X_TX_COM_SPARE1,                                0x00},
++	{WCD938X_TX_COM_SPARE2,                                0x00},
++	{WCD938X_TX_COM_TXFE_DIV_CTL,                          0x22},
++	{WCD938X_TX_COM_TXFE_DIV_START,                        0x00},
++	{WCD938X_TX_COM_SPARE3,                                0x00},
++	{WCD938X_TX_COM_SPARE4,                                0x00},
++	{WCD938X_TX_1_2_TEST_EN,                               0xCC},
++	{WCD938X_TX_1_2_ADC_IB,                                0xE9},
++	{WCD938X_TX_1_2_ATEST_REFCTL,                          0x0A},
++	{WCD938X_TX_1_2_TEST_CTL,                              0x38},
++	{WCD938X_TX_1_2_TEST_BLK_EN1,                          0xFF},
++	{WCD938X_TX_1_2_TXFE1_CLKDIV,                          0x00},
++	{WCD938X_TX_1_2_SAR2_ERR,                              0x00},
++	{WCD938X_TX_1_2_SAR1_ERR,                              0x00},
++	{WCD938X_TX_3_4_TEST_EN,                               0xCC},
++	{WCD938X_TX_3_4_ADC_IB,                                0xE9},
++	{WCD938X_TX_3_4_ATEST_REFCTL,                          0x0A},
++	{WCD938X_TX_3_4_TEST_CTL,                              0x38},
++	{WCD938X_TX_3_4_TEST_BLK_EN3,                          0xFF},
++	{WCD938X_TX_3_4_TXFE3_CLKDIV,                          0x00},
++	{WCD938X_TX_3_4_SAR4_ERR,                              0x00},
++	{WCD938X_TX_3_4_SAR3_ERR,                              0x00},
++	{WCD938X_TX_3_4_TEST_BLK_EN2,                          0xFB},
++	{WCD938X_TX_3_4_TXFE2_CLKDIV,                          0x00},
++	{WCD938X_TX_3_4_SPARE1,                                0x00},
++	{WCD938X_TX_3_4_TEST_BLK_EN4,                          0xFB},
++	{WCD938X_TX_3_4_TXFE4_CLKDIV,                          0x00},
++	{WCD938X_TX_3_4_SPARE2,                                0x00},
++	{WCD938X_CLASSH_MODE_1,                                0x40},
++	{WCD938X_CLASSH_MODE_2,                                0x3A},
++	{WCD938X_CLASSH_MODE_3,                                0x00},
++	{WCD938X_CLASSH_CTRL_VCL_1,                            0x70},
++	{WCD938X_CLASSH_CTRL_VCL_2,                            0x82},
++	{WCD938X_CLASSH_CTRL_CCL_1,                            0x31},
++	{WCD938X_CLASSH_CTRL_CCL_2,                            0x80},
++	{WCD938X_CLASSH_CTRL_CCL_3,                            0x80},
++	{WCD938X_CLASSH_CTRL_CCL_4,                            0x51},
++	{WCD938X_CLASSH_CTRL_CCL_5,                            0x00},
++	{WCD938X_CLASSH_BUCK_TMUX_A_D,                         0x00},
++	{WCD938X_CLASSH_BUCK_SW_DRV_CNTL,                      0x77},
++	{WCD938X_CLASSH_SPARE,                                 0x00},
++	{WCD938X_FLYBACK_EN,                                   0x4E},
++	{WCD938X_FLYBACK_VNEG_CTRL_1,                          0x0B},
++	{WCD938X_FLYBACK_VNEG_CTRL_2,                          0x45},
++	{WCD938X_FLYBACK_VNEG_CTRL_3,                          0x74},
++	{WCD938X_FLYBACK_VNEG_CTRL_4,                          0x7F},
++	{WCD938X_FLYBACK_VNEG_CTRL_5,                          0x83},
++	{WCD938X_FLYBACK_VNEG_CTRL_6,                          0x98},
++	{WCD938X_FLYBACK_VNEG_CTRL_7,                          0xA9},
++	{WCD938X_FLYBACK_VNEG_CTRL_8,                          0x68},
++	{WCD938X_FLYBACK_VNEG_CTRL_9,                          0x64},
++	{WCD938X_FLYBACK_VNEGDAC_CTRL_1,                       0xED},
++	{WCD938X_FLYBACK_VNEGDAC_CTRL_2,                       0xF0},
++	{WCD938X_FLYBACK_VNEGDAC_CTRL_3,                       0xA6},
++	{WCD938X_FLYBACK_CTRL_1,                               0x65},
++	{WCD938X_FLYBACK_TEST_CTL,                             0x00},
++	{WCD938X_RX_AUX_SW_CTL,                                0x00},
++	{WCD938X_RX_PA_AUX_IN_CONN,                            0x01},
++	{WCD938X_RX_TIMER_DIV,                                 0x32},
++	{WCD938X_RX_OCP_CTL,                                   0x1F},
++	{WCD938X_RX_OCP_COUNT,                                 0x77},
++	{WCD938X_RX_BIAS_EAR_DAC,                              0xA0},
++	{WCD938X_RX_BIAS_EAR_AMP,                              0xAA},
++	{WCD938X_RX_BIAS_HPH_LDO,                              0xA9},
++	{WCD938X_RX_BIAS_HPH_PA,                               0xAA},
++	{WCD938X_RX_BIAS_HPH_RDACBUFF_CNP2,                    0x8A},
++	{WCD938X_RX_BIAS_HPH_RDAC_LDO,                         0x88},
++	{WCD938X_RX_BIAS_HPH_CNP1,                             0x82},
++	{WCD938X_RX_BIAS_HPH_LOWPOWER,                         0x82},
++	{WCD938X_RX_BIAS_AUX_DAC,                              0xA0},
++	{WCD938X_RX_BIAS_AUX_AMP,                              0xAA},
++	{WCD938X_RX_BIAS_VNEGDAC_BLEEDER,                      0x50},
++	{WCD938X_RX_BIAS_MISC,                                 0x00},
++	{WCD938X_RX_BIAS_BUCK_RST,                             0x08},
++	{WCD938X_RX_BIAS_BUCK_VREF_ERRAMP,                     0x44},
++	{WCD938X_RX_BIAS_FLYB_ERRAMP,                          0x40},
++	{WCD938X_RX_BIAS_FLYB_BUFF,                            0xAA},
++	{WCD938X_RX_BIAS_FLYB_MID_RST,                         0x14},
++	{WCD938X_HPH_L_STATUS,                                 0x04},
++	{WCD938X_HPH_R_STATUS,                                 0x04},
++	{WCD938X_HPH_CNP_EN,                                   0x80},
++	{WCD938X_HPH_CNP_WG_CTL,                               0x9A},
++	{WCD938X_HPH_CNP_WG_TIME,                              0x14},
++	{WCD938X_HPH_OCP_CTL,                                  0x28},
++	{WCD938X_HPH_AUTO_CHOP,                                0x16},
++	{WCD938X_HPH_CHOP_CTL,                                 0x83},
++	{WCD938X_HPH_PA_CTL1,                                  0x46},
++	{WCD938X_HPH_PA_CTL2,                                  0x50},
++	{WCD938X_HPH_L_EN,                                     0x80},
++	{WCD938X_HPH_L_TEST,                                   0xE0},
++	{WCD938X_HPH_L_ATEST,                                  0x50},
++	{WCD938X_HPH_R_EN,                                     0x80},
++	{WCD938X_HPH_R_TEST,                                   0xE0},
++	{WCD938X_HPH_R_ATEST,                                  0x54},
++	{WCD938X_HPH_RDAC_CLK_CTL1,                            0x99},
++	{WCD938X_HPH_RDAC_CLK_CTL2,                            0x9B},
++	{WCD938X_HPH_RDAC_LDO_CTL,                             0x33},
++	{WCD938X_HPH_RDAC_CHOP_CLK_LP_CTL,                     0x00},
++	{WCD938X_HPH_REFBUFF_UHQA_CTL,                         0x68},
++	{WCD938X_HPH_REFBUFF_LP_CTL,                           0x0E},
++	{WCD938X_HPH_L_DAC_CTL,                                0x20},
++	{WCD938X_HPH_R_DAC_CTL,                                0x20},
++	{WCD938X_HPH_SURGE_HPHLR_SURGE_COMP_SEL,               0x55},
++	{WCD938X_HPH_SURGE_HPHLR_SURGE_EN,                     0x19},
++	{WCD938X_HPH_SURGE_HPHLR_SURGE_MISC1,                  0xA0},
++	{WCD938X_HPH_SURGE_HPHLR_SURGE_STATUS,                 0x00},
++	{WCD938X_EAR_EAR_EN_REG,                               0x22},
++	{WCD938X_EAR_EAR_PA_CON,                               0x44},
++	{WCD938X_EAR_EAR_SP_CON,                               0xDB},
++	{WCD938X_EAR_EAR_DAC_CON,                              0x80},
++	{WCD938X_EAR_EAR_CNP_FSM_CON,                          0xB2},
++	{WCD938X_EAR_TEST_CTL,                                 0x00},
++	{WCD938X_EAR_STATUS_REG_1,                             0x00},
++	{WCD938X_EAR_STATUS_REG_2,                             0x08},
++	{WCD938X_ANA_NEW_PAGE_REGISTER,                        0x00},
++	{WCD938X_HPH_NEW_ANA_HPH2,                             0x00},
++	{WCD938X_HPH_NEW_ANA_HPH3,                             0x00},
++	{WCD938X_SLEEP_CTL,                                    0x16},
++	{WCD938X_SLEEP_WATCHDOG_CTL,                           0x00},
++	{WCD938X_MBHC_NEW_ELECT_REM_CLAMP_CTL,                 0x00},
++	{WCD938X_MBHC_NEW_CTL_1,                               0x02},
++	{WCD938X_MBHC_NEW_CTL_2,                               0x05},
++	{WCD938X_MBHC_NEW_PLUG_DETECT_CTL,                     0xE9},
++	{WCD938X_MBHC_NEW_ZDET_ANA_CTL,                        0x0F},
++	{WCD938X_MBHC_NEW_ZDET_RAMP_CTL,                       0x00},
++	{WCD938X_MBHC_NEW_FSM_STATUS,                          0x00},
++	{WCD938X_MBHC_NEW_ADC_RESULT,                          0x00},
++	{WCD938X_TX_NEW_AMIC_MUX_CFG,                          0x00},
++	{WCD938X_AUX_AUXPA,                                    0x00},
++	{WCD938X_LDORXTX_MODE,                                 0x0C},
++	{WCD938X_LDORXTX_CONFIG,                               0x10},
++	{WCD938X_DIE_CRACK_DIE_CRK_DET_EN,                     0x00},
++	{WCD938X_DIE_CRACK_DIE_CRK_DET_OUT,                    0x00},
++	{WCD938X_HPH_NEW_INT_RDAC_GAIN_CTL,                    0x40},
++	{WCD938X_HPH_NEW_INT_RDAC_HD2_CTL_L,                   0x81},
++	{WCD938X_HPH_NEW_INT_RDAC_VREF_CTL,                    0x10},
++	{WCD938X_HPH_NEW_INT_RDAC_OVERRIDE_CTL,                0x00},
++	{WCD938X_HPH_NEW_INT_RDAC_HD2_CTL_R,                   0x81},
++	{WCD938X_HPH_NEW_INT_PA_MISC1,                         0x22},
++	{WCD938X_HPH_NEW_INT_PA_MISC2,                         0x00},
++	{WCD938X_HPH_NEW_INT_PA_RDAC_MISC,                     0x00},
++	{WCD938X_HPH_NEW_INT_HPH_TIMER1,                       0xFE},
++	{WCD938X_HPH_NEW_INT_HPH_TIMER2,                       0x02},
++	{WCD938X_HPH_NEW_INT_HPH_TIMER3,                       0x4E},
++	{WCD938X_HPH_NEW_INT_HPH_TIMER4,                       0x54},
++	{WCD938X_HPH_NEW_INT_PA_RDAC_MISC2,                    0x00},
++	{WCD938X_HPH_NEW_INT_PA_RDAC_MISC3,                    0x00},
++	{WCD938X_HPH_NEW_INT_RDAC_HD2_CTL_L_NEW,               0x90},
++	{WCD938X_HPH_NEW_INT_RDAC_HD2_CTL_R_NEW,               0x90},
++	{WCD938X_RX_NEW_INT_HPH_RDAC_BIAS_LOHIFI,              0x62},
++	{WCD938X_RX_NEW_INT_HPH_RDAC_BIAS_ULP,                 0x01},
++	{WCD938X_RX_NEW_INT_HPH_RDAC_LDO_LP,                   0x11},
++	{WCD938X_MBHC_NEW_INT_MOISTURE_DET_DC_CTRL,            0x57},
++	{WCD938X_MBHC_NEW_INT_MOISTURE_DET_POLLING_CTRL,       0x01},
++	{WCD938X_MBHC_NEW_INT_MECH_DET_CURRENT,                0x00},
++	{WCD938X_MBHC_NEW_INT_SPARE_2,                         0x00},
++	{WCD938X_EAR_INT_NEW_EAR_CHOPPER_CON,                  0xA8},
++	{WCD938X_EAR_INT_NEW_CNP_VCM_CON1,                     0x42},
++	{WCD938X_EAR_INT_NEW_CNP_VCM_CON2,                     0x22},
++	{WCD938X_EAR_INT_NEW_EAR_DYNAMIC_BIAS,                 0x00},
++	{WCD938X_AUX_INT_EN_REG,                               0x00},
++	{WCD938X_AUX_INT_PA_CTRL,                              0x06},
++	{WCD938X_AUX_INT_SP_CTRL,                              0xD2},
++	{WCD938X_AUX_INT_DAC_CTRL,                             0x80},
++	{WCD938X_AUX_INT_CLK_CTRL,                             0x50},
++	{WCD938X_AUX_INT_TEST_CTRL,                            0x00},
++	{WCD938X_AUX_INT_STATUS_REG,                           0x00},
++	{WCD938X_AUX_INT_MISC,                                 0x00},
++	{WCD938X_LDORXTX_INT_BIAS,                             0x6E},
++	{WCD938X_LDORXTX_INT_STB_LOADS_DTEST,                  0x50},
++	{WCD938X_LDORXTX_INT_TEST0,                            0x1C},
++	{WCD938X_LDORXTX_INT_STARTUP_TIMER,                    0xFF},
++	{WCD938X_LDORXTX_INT_TEST1,                            0x1F},
++	{WCD938X_LDORXTX_INT_STATUS,                           0x00},
++	{WCD938X_SLEEP_INT_WATCHDOG_CTL_1,                     0x0A},
++	{WCD938X_SLEEP_INT_WATCHDOG_CTL_2,                     0x0A},
++	{WCD938X_DIE_CRACK_INT_DIE_CRK_DET_INT1,               0x02},
++	{WCD938X_DIE_CRACK_INT_DIE_CRK_DET_INT2,               0x60},
++	{WCD938X_TX_COM_NEW_INT_TXFE_DIVSTOP_L2,               0xFF},
++	{WCD938X_TX_COM_NEW_INT_TXFE_DIVSTOP_L1,               0x7F},
++	{WCD938X_TX_COM_NEW_INT_TXFE_DIVSTOP_L0,               0x3F},
++	{WCD938X_TX_COM_NEW_INT_TXFE_DIVSTOP_ULP1P2M,          0x1F},
++	{WCD938X_TX_COM_NEW_INT_TXFE_DIVSTOP_ULP0P6M,          0x0F},
++	{WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG1_L2L1,          0xD7},
++	{WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG1_L0,            0xC8},
++	{WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG1_ULP,           0xC6},
++	{WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG2MAIN_L2L1,      0xD5},
++	{WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG2MAIN_L0,        0xCA},
++	{WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG2MAIN_ULP,       0x05},
++	{WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG2CASC_L2L1L0,    0xA5},
++	{WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG2CASC_ULP,       0x13},
++	{WCD938X_TX_COM_NEW_INT_TXADC_SCBIAS_L2L1,             0x88},
++	{WCD938X_TX_COM_NEW_INT_TXADC_SCBIAS_L0ULP,            0x42},
++	{WCD938X_TX_COM_NEW_INT_TXADC_INT_L2,                  0xFF},
++	{WCD938X_TX_COM_NEW_INT_TXADC_INT_L1,                  0x64},
++	{WCD938X_TX_COM_NEW_INT_TXADC_INT_L0,                  0x64},
++	{WCD938X_TX_COM_NEW_INT_TXADC_INT_ULP,                 0x77},
++	{WCD938X_DIGITAL_PAGE_REGISTER,                        0x00},
++	{WCD938X_DIGITAL_CHIP_ID0,                             0x00},
++	{WCD938X_DIGITAL_CHIP_ID1,                             0x00},
++	{WCD938X_DIGITAL_CHIP_ID2,                             0x0D},
++	{WCD938X_DIGITAL_CHIP_ID3,                             0x01},
++	{WCD938X_DIGITAL_SWR_TX_CLK_RATE,                      0x00},
++	{WCD938X_DIGITAL_CDC_RST_CTL,                          0x03},
++	{WCD938X_DIGITAL_TOP_CLK_CFG,                          0x00},
++	{WCD938X_DIGITAL_CDC_ANA_CLK_CTL,                      0x00},
++	{WCD938X_DIGITAL_CDC_DIG_CLK_CTL,                      0xF0},
++	{WCD938X_DIGITAL_SWR_RST_EN,                           0x00},
++	{WCD938X_DIGITAL_CDC_PATH_MODE,                        0x55},
++	{WCD938X_DIGITAL_CDC_RX_RST,                           0x00},
++	{WCD938X_DIGITAL_CDC_RX0_CTL,                          0xFC},
++	{WCD938X_DIGITAL_CDC_RX1_CTL,                          0xFC},
++	{WCD938X_DIGITAL_CDC_RX2_CTL,                          0xFC},
++	{WCD938X_DIGITAL_CDC_TX_ANA_MODE_0_1,                  0x00},
++	{WCD938X_DIGITAL_CDC_TX_ANA_MODE_2_3,                  0x00},
++	{WCD938X_DIGITAL_CDC_COMP_CTL_0,                       0x00},
++	{WCD938X_DIGITAL_CDC_ANA_TX_CLK_CTL,                   0x1E},
++	{WCD938X_DIGITAL_CDC_HPH_DSM_A1_0,                     0x00},
++	{WCD938X_DIGITAL_CDC_HPH_DSM_A1_1,                     0x01},
++	{WCD938X_DIGITAL_CDC_HPH_DSM_A2_0,                     0x63},
++	{WCD938X_DIGITAL_CDC_HPH_DSM_A2_1,                     0x04},
++	{WCD938X_DIGITAL_CDC_HPH_DSM_A3_0,                     0xAC},
++	{WCD938X_DIGITAL_CDC_HPH_DSM_A3_1,                     0x04},
++	{WCD938X_DIGITAL_CDC_HPH_DSM_A4_0,                     0x1A},
++	{WCD938X_DIGITAL_CDC_HPH_DSM_A4_1,                     0x03},
++	{WCD938X_DIGITAL_CDC_HPH_DSM_A5_0,                     0xBC},
++	{WCD938X_DIGITAL_CDC_HPH_DSM_A5_1,                     0x02},
++	{WCD938X_DIGITAL_CDC_HPH_DSM_A6_0,                     0xC7},
++	{WCD938X_DIGITAL_CDC_HPH_DSM_A7_0,                     0xF8},
++	{WCD938X_DIGITAL_CDC_HPH_DSM_C_0,                      0x47},
++	{WCD938X_DIGITAL_CDC_HPH_DSM_C_1,                      0x43},
++	{WCD938X_DIGITAL_CDC_HPH_DSM_C_2,                      0xB1},
++	{WCD938X_DIGITAL_CDC_HPH_DSM_C_3,                      0x17},
++	{WCD938X_DIGITAL_CDC_HPH_DSM_R1,                       0x4D},
++	{WCD938X_DIGITAL_CDC_HPH_DSM_R2,                       0x29},
++	{WCD938X_DIGITAL_CDC_HPH_DSM_R3,                       0x34},
++	{WCD938X_DIGITAL_CDC_HPH_DSM_R4,                       0x59},
++	{WCD938X_DIGITAL_CDC_HPH_DSM_R5,                       0x66},
++	{WCD938X_DIGITAL_CDC_HPH_DSM_R6,                       0x87},
++	{WCD938X_DIGITAL_CDC_HPH_DSM_R7,                       0x64},
++	{WCD938X_DIGITAL_CDC_AUX_DSM_A1_0,                     0x00},
++	{WCD938X_DIGITAL_CDC_AUX_DSM_A1_1,                     0x01},
++	{WCD938X_DIGITAL_CDC_AUX_DSM_A2_0,                     0x96},
++	{WCD938X_DIGITAL_CDC_AUX_DSM_A2_1,                     0x09},
++	{WCD938X_DIGITAL_CDC_AUX_DSM_A3_0,                     0xAB},
++	{WCD938X_DIGITAL_CDC_AUX_DSM_A3_1,                     0x05},
++	{WCD938X_DIGITAL_CDC_AUX_DSM_A4_0,                     0x1C},
++	{WCD938X_DIGITAL_CDC_AUX_DSM_A4_1,                     0x02},
++	{WCD938X_DIGITAL_CDC_AUX_DSM_A5_0,                     0x17},
++	{WCD938X_DIGITAL_CDC_AUX_DSM_A5_1,                     0x02},
++	{WCD938X_DIGITAL_CDC_AUX_DSM_A6_0,                     0xAA},
++	{WCD938X_DIGITAL_CDC_AUX_DSM_A7_0,                     0xE3},
++	{WCD938X_DIGITAL_CDC_AUX_DSM_C_0,                      0x69},
++	{WCD938X_DIGITAL_CDC_AUX_DSM_C_1,                      0x54},
++	{WCD938X_DIGITAL_CDC_AUX_DSM_C_2,                      0x02},
++	{WCD938X_DIGITAL_CDC_AUX_DSM_C_3,                      0x15},
++	{WCD938X_DIGITAL_CDC_AUX_DSM_R1,                       0xA4},
++	{WCD938X_DIGITAL_CDC_AUX_DSM_R2,                       0xB5},
++	{WCD938X_DIGITAL_CDC_AUX_DSM_R3,                       0x86},
++	{WCD938X_DIGITAL_CDC_AUX_DSM_R4,                       0x85},
++	{WCD938X_DIGITAL_CDC_AUX_DSM_R5,                       0xAA},
++	{WCD938X_DIGITAL_CDC_AUX_DSM_R6,                       0xE2},
++	{WCD938X_DIGITAL_CDC_AUX_DSM_R7,                       0x62},
++	{WCD938X_DIGITAL_CDC_HPH_GAIN_RX_0,                    0x55},
++	{WCD938X_DIGITAL_CDC_HPH_GAIN_RX_1,                    0xA9},
++	{WCD938X_DIGITAL_CDC_HPH_GAIN_DSD_0,                   0x3D},
++	{WCD938X_DIGITAL_CDC_HPH_GAIN_DSD_1,                   0x2E},
++	{WCD938X_DIGITAL_CDC_HPH_GAIN_DSD_2,                   0x01},
++	{WCD938X_DIGITAL_CDC_AUX_GAIN_DSD_0,                   0x00},
++	{WCD938X_DIGITAL_CDC_AUX_GAIN_DSD_1,                   0xFC},
++	{WCD938X_DIGITAL_CDC_AUX_GAIN_DSD_2,                   0x01},
++	{WCD938X_DIGITAL_CDC_HPH_GAIN_CTL,                     0x00},
++	{WCD938X_DIGITAL_CDC_AUX_GAIN_CTL,                     0x00},
++	{WCD938X_DIGITAL_CDC_EAR_PATH_CTL,                     0x00},
++	{WCD938X_DIGITAL_CDC_SWR_CLH,                          0x00},
++	{WCD938X_DIGITAL_SWR_CLH_BYP,                          0x00},
++	{WCD938X_DIGITAL_CDC_TX0_CTL,                          0x68},
++	{WCD938X_DIGITAL_CDC_TX1_CTL,                          0x68},
++	{WCD938X_DIGITAL_CDC_TX2_CTL,                          0x68},
++	{WCD938X_DIGITAL_CDC_TX_RST,                           0x00},
++	{WCD938X_DIGITAL_CDC_REQ_CTL,                          0x01},
++	{WCD938X_DIGITAL_CDC_RST,                              0x00},
++	{WCD938X_DIGITAL_CDC_AMIC_CTL,                         0x0F},
++	{WCD938X_DIGITAL_CDC_DMIC_CTL,                         0x04},
++	{WCD938X_DIGITAL_CDC_DMIC1_CTL,                        0x01},
++	{WCD938X_DIGITAL_CDC_DMIC2_CTL,                        0x01},
++	{WCD938X_DIGITAL_CDC_DMIC3_CTL,                        0x01},
++	{WCD938X_DIGITAL_CDC_DMIC4_CTL,                        0x01},
++	{WCD938X_DIGITAL_EFUSE_PRG_CTL,                        0x00},
++	{WCD938X_DIGITAL_EFUSE_CTL,                            0x2B},
++	{WCD938X_DIGITAL_CDC_DMIC_RATE_1_2,                    0x11},
++	{WCD938X_DIGITAL_CDC_DMIC_RATE_3_4,                    0x11},
++	{WCD938X_DIGITAL_PDM_WD_CTL0,                          0x00},
++	{WCD938X_DIGITAL_PDM_WD_CTL1,                          0x00},
++	{WCD938X_DIGITAL_PDM_WD_CTL2,                          0x00},
++	{WCD938X_DIGITAL_INTR_MODE,                            0x00},
++	{WCD938X_DIGITAL_INTR_MASK_0,                          0xFF},
++	{WCD938X_DIGITAL_INTR_MASK_1,                          0xFF},
++	{WCD938X_DIGITAL_INTR_MASK_2,                          0x3F},
++	{WCD938X_DIGITAL_INTR_STATUS_0,                        0x00},
++	{WCD938X_DIGITAL_INTR_STATUS_1,                        0x00},
++	{WCD938X_DIGITAL_INTR_STATUS_2,                        0x00},
++	{WCD938X_DIGITAL_INTR_CLEAR_0,                         0x00},
++	{WCD938X_DIGITAL_INTR_CLEAR_1,                         0x00},
++	{WCD938X_DIGITAL_INTR_CLEAR_2,                         0x00},
++	{WCD938X_DIGITAL_INTR_LEVEL_0,                         0x00},
++	{WCD938X_DIGITAL_INTR_LEVEL_1,                         0x00},
++	{WCD938X_DIGITAL_INTR_LEVEL_2,                         0x00},
++	{WCD938X_DIGITAL_INTR_SET_0,                           0x00},
++	{WCD938X_DIGITAL_INTR_SET_1,                           0x00},
++	{WCD938X_DIGITAL_INTR_SET_2,                           0x00},
++	{WCD938X_DIGITAL_INTR_TEST_0,                          0x00},
++	{WCD938X_DIGITAL_INTR_TEST_1,                          0x00},
++	{WCD938X_DIGITAL_INTR_TEST_2,                          0x00},
++	{WCD938X_DIGITAL_TX_MODE_DBG_EN,                       0x00},
++	{WCD938X_DIGITAL_TX_MODE_DBG_0_1,                      0x00},
++	{WCD938X_DIGITAL_TX_MODE_DBG_2_3,                      0x00},
++	{WCD938X_DIGITAL_LB_IN_SEL_CTL,                        0x00},
++	{WCD938X_DIGITAL_LOOP_BACK_MODE,                       0x00},
++	{WCD938X_DIGITAL_SWR_DAC_TEST,                         0x00},
++	{WCD938X_DIGITAL_SWR_HM_TEST_RX_0,                     0x40},
++	{WCD938X_DIGITAL_SWR_HM_TEST_TX_0,                     0x40},
++	{WCD938X_DIGITAL_SWR_HM_TEST_RX_1,                     0x00},
++	{WCD938X_DIGITAL_SWR_HM_TEST_TX_1,                     0x00},
++	{WCD938X_DIGITAL_SWR_HM_TEST_TX_2,                     0x00},
++	{WCD938X_DIGITAL_SWR_HM_TEST_0,                        0x00},
++	{WCD938X_DIGITAL_SWR_HM_TEST_1,                        0x00},
++	{WCD938X_DIGITAL_PAD_CTL_SWR_0,                        0x8F},
++	{WCD938X_DIGITAL_PAD_CTL_SWR_1,                        0x06},
++	{WCD938X_DIGITAL_I2C_CTL,                              0x00},
++	{WCD938X_DIGITAL_CDC_TX_TANGGU_SW_MODE,                0x00},
++	{WCD938X_DIGITAL_EFUSE_TEST_CTL_0,                     0x00},
++	{WCD938X_DIGITAL_EFUSE_TEST_CTL_1,                     0x00},
++	{WCD938X_DIGITAL_EFUSE_T_DATA_0,                       0x00},
++	{WCD938X_DIGITAL_EFUSE_T_DATA_1,                       0x00},
++	{WCD938X_DIGITAL_PAD_CTL_PDM_RX0,                      0xF1},
++	{WCD938X_DIGITAL_PAD_CTL_PDM_RX1,                      0xF1},
++	{WCD938X_DIGITAL_PAD_CTL_PDM_TX0,                      0xF1},
++	{WCD938X_DIGITAL_PAD_CTL_PDM_TX1,                      0xF1},
++	{WCD938X_DIGITAL_PAD_CTL_PDM_TX2,                      0xF1},
++	{WCD938X_DIGITAL_PAD_INP_DIS_0,                        0x00},
++	{WCD938X_DIGITAL_PAD_INP_DIS_1,                        0x00},
++	{WCD938X_DIGITAL_DRIVE_STRENGTH_0,                     0x00},
++	{WCD938X_DIGITAL_DRIVE_STRENGTH_1,                     0x00},
++	{WCD938X_DIGITAL_DRIVE_STRENGTH_2,                     0x00},
++	{WCD938X_DIGITAL_RX_DATA_EDGE_CTL,                     0x1F},
++	{WCD938X_DIGITAL_TX_DATA_EDGE_CTL,                     0x80},
++	{WCD938X_DIGITAL_GPIO_MODE,                            0x00},
++	{WCD938X_DIGITAL_PIN_CTL_OE,                           0x00},
++	{WCD938X_DIGITAL_PIN_CTL_DATA_0,                       0x00},
++	{WCD938X_DIGITAL_PIN_CTL_DATA_1,                       0x00},
++	{WCD938X_DIGITAL_PIN_STATUS_0,                         0x00},
++	{WCD938X_DIGITAL_PIN_STATUS_1,                         0x00},
++	{WCD938X_DIGITAL_DIG_DEBUG_CTL,                        0x00},
++	{WCD938X_DIGITAL_DIG_DEBUG_EN,                         0x00},
++	{WCD938X_DIGITAL_ANA_CSR_DBG_ADD,                      0x00},
++	{WCD938X_DIGITAL_ANA_CSR_DBG_CTL,                      0x48},
++	{WCD938X_DIGITAL_SSP_DBG,                              0x00},
++	{WCD938X_DIGITAL_MODE_STATUS_0,                        0x00},
++	{WCD938X_DIGITAL_MODE_STATUS_1,                        0x00},
++	{WCD938X_DIGITAL_SPARE_0,                              0x00},
++	{WCD938X_DIGITAL_SPARE_1,                              0x00},
++	{WCD938X_DIGITAL_SPARE_2,                              0x00},
++	{WCD938X_DIGITAL_EFUSE_REG_0,                          0x00},
++	{WCD938X_DIGITAL_EFUSE_REG_1,                          0xFF},
++	{WCD938X_DIGITAL_EFUSE_REG_2,                          0xFF},
++	{WCD938X_DIGITAL_EFUSE_REG_3,                          0xFF},
++	{WCD938X_DIGITAL_EFUSE_REG_4,                          0xFF},
++	{WCD938X_DIGITAL_EFUSE_REG_5,                          0xFF},
++	{WCD938X_DIGITAL_EFUSE_REG_6,                          0xFF},
++	{WCD938X_DIGITAL_EFUSE_REG_7,                          0xFF},
++	{WCD938X_DIGITAL_EFUSE_REG_8,                          0xFF},
++	{WCD938X_DIGITAL_EFUSE_REG_9,                          0xFF},
++	{WCD938X_DIGITAL_EFUSE_REG_10,                         0xFF},
++	{WCD938X_DIGITAL_EFUSE_REG_11,                         0xFF},
++	{WCD938X_DIGITAL_EFUSE_REG_12,                         0xFF},
++	{WCD938X_DIGITAL_EFUSE_REG_13,                         0xFF},
++	{WCD938X_DIGITAL_EFUSE_REG_14,                         0xFF},
++	{WCD938X_DIGITAL_EFUSE_REG_15,                         0xFF},
++	{WCD938X_DIGITAL_EFUSE_REG_16,                         0xFF},
++	{WCD938X_DIGITAL_EFUSE_REG_17,                         0xFF},
++	{WCD938X_DIGITAL_EFUSE_REG_18,                         0xFF},
++	{WCD938X_DIGITAL_EFUSE_REG_19,                         0xFF},
++	{WCD938X_DIGITAL_EFUSE_REG_20,                         0x0E},
++	{WCD938X_DIGITAL_EFUSE_REG_21,                         0x00},
++	{WCD938X_DIGITAL_EFUSE_REG_22,                         0x00},
++	{WCD938X_DIGITAL_EFUSE_REG_23,                         0xF8},
++	{WCD938X_DIGITAL_EFUSE_REG_24,                         0x16},
++	{WCD938X_DIGITAL_EFUSE_REG_25,                         0x00},
++	{WCD938X_DIGITAL_EFUSE_REG_26,                         0x00},
++	{WCD938X_DIGITAL_EFUSE_REG_27,                         0x00},
++	{WCD938X_DIGITAL_EFUSE_REG_28,                         0x00},
++	{WCD938X_DIGITAL_EFUSE_REG_29,                         0x00},
++	{WCD938X_DIGITAL_EFUSE_REG_30,                         0x00},
++	{WCD938X_DIGITAL_EFUSE_REG_31,                         0x00},
++	{WCD938X_DIGITAL_TX_REQ_FB_CTL_0,                      0x88},
++	{WCD938X_DIGITAL_TX_REQ_FB_CTL_1,                      0x88},
++	{WCD938X_DIGITAL_TX_REQ_FB_CTL_2,                      0x88},
++	{WCD938X_DIGITAL_TX_REQ_FB_CTL_3,                      0x88},
++	{WCD938X_DIGITAL_TX_REQ_FB_CTL_4,                      0x88},
++	{WCD938X_DIGITAL_DEM_BYPASS_DATA0,                     0x55},
++	{WCD938X_DIGITAL_DEM_BYPASS_DATA1,                     0x55},
++	{WCD938X_DIGITAL_DEM_BYPASS_DATA2,                     0x55},
++	{WCD938X_DIGITAL_DEM_BYPASS_DATA3,                     0x01},
++};
++
++static bool wcd938x_rdwr_register(struct device *dev, unsigned int reg)
++{
++	switch (reg) {
++	case WCD938X_ANA_PAGE_REGISTER:
++	case WCD938X_ANA_BIAS:
++	case WCD938X_ANA_RX_SUPPLIES:
++	case WCD938X_ANA_HPH:
++	case WCD938X_ANA_EAR:
++	case WCD938X_ANA_EAR_COMPANDER_CTL:
++	case WCD938X_ANA_TX_CH1:
++	case WCD938X_ANA_TX_CH2:
++	case WCD938X_ANA_TX_CH3:
++	case WCD938X_ANA_TX_CH4:
++	case WCD938X_ANA_MICB1_MICB2_DSP_EN_LOGIC:
++	case WCD938X_ANA_MICB3_DSP_EN_LOGIC:
++	case WCD938X_ANA_MBHC_MECH:
++	case WCD938X_ANA_MBHC_ELECT:
++	case WCD938X_ANA_MBHC_ZDET:
++	case WCD938X_ANA_MBHC_BTN0:
++	case WCD938X_ANA_MBHC_BTN1:
++	case WCD938X_ANA_MBHC_BTN2:
++	case WCD938X_ANA_MBHC_BTN3:
++	case WCD938X_ANA_MBHC_BTN4:
++	case WCD938X_ANA_MBHC_BTN5:
++	case WCD938X_ANA_MBHC_BTN6:
++	case WCD938X_ANA_MBHC_BTN7:
++	case WCD938X_ANA_MICB1:
++	case WCD938X_ANA_MICB2:
++	case WCD938X_ANA_MICB2_RAMP:
++	case WCD938X_ANA_MICB3:
++	case WCD938X_ANA_MICB4:
++	case WCD938X_BIAS_CTL:
++	case WCD938X_BIAS_VBG_FINE_ADJ:
++	case WCD938X_LDOL_VDDCX_ADJUST:
++	case WCD938X_LDOL_DISABLE_LDOL:
++	case WCD938X_MBHC_CTL_CLK:
++	case WCD938X_MBHC_CTL_ANA:
++	case WCD938X_MBHC_CTL_SPARE_1:
++	case WCD938X_MBHC_CTL_SPARE_2:
++	case WCD938X_MBHC_CTL_BCS:
++	case WCD938X_MBHC_TEST_CTL:
++	case WCD938X_LDOH_MODE:
++	case WCD938X_LDOH_BIAS:
++	case WCD938X_LDOH_STB_LOADS:
++	case WCD938X_LDOH_SLOWRAMP:
++	case WCD938X_MICB1_TEST_CTL_1:
++	case WCD938X_MICB1_TEST_CTL_2:
++	case WCD938X_MICB1_TEST_CTL_3:
++	case WCD938X_MICB2_TEST_CTL_1:
++	case WCD938X_MICB2_TEST_CTL_2:
++	case WCD938X_MICB2_TEST_CTL_3:
++	case WCD938X_MICB3_TEST_CTL_1:
++	case WCD938X_MICB3_TEST_CTL_2:
++	case WCD938X_MICB3_TEST_CTL_3:
++	case WCD938X_MICB4_TEST_CTL_1:
++	case WCD938X_MICB4_TEST_CTL_2:
++	case WCD938X_MICB4_TEST_CTL_3:
++	case WCD938X_TX_COM_ADC_VCM:
++	case WCD938X_TX_COM_BIAS_ATEST:
++	case WCD938X_TX_COM_SPARE1:
++	case WCD938X_TX_COM_SPARE2:
++	case WCD938X_TX_COM_TXFE_DIV_CTL:
++	case WCD938X_TX_COM_TXFE_DIV_START:
++	case WCD938X_TX_COM_SPARE3:
++	case WCD938X_TX_COM_SPARE4:
++	case WCD938X_TX_1_2_TEST_EN:
++	case WCD938X_TX_1_2_ADC_IB:
++	case WCD938X_TX_1_2_ATEST_REFCTL:
++	case WCD938X_TX_1_2_TEST_CTL:
++	case WCD938X_TX_1_2_TEST_BLK_EN1:
++	case WCD938X_TX_1_2_TXFE1_CLKDIV:
++	case WCD938X_TX_3_4_TEST_EN:
++	case WCD938X_TX_3_4_ADC_IB:
++	case WCD938X_TX_3_4_ATEST_REFCTL:
++	case WCD938X_TX_3_4_TEST_CTL:
++	case WCD938X_TX_3_4_TEST_BLK_EN3:
++	case WCD938X_TX_3_4_TXFE3_CLKDIV:
++	case WCD938X_TX_3_4_TEST_BLK_EN2:
++	case WCD938X_TX_3_4_TXFE2_CLKDIV:
++	case WCD938X_TX_3_4_SPARE1:
++	case WCD938X_TX_3_4_TEST_BLK_EN4:
++	case WCD938X_TX_3_4_TXFE4_CLKDIV:
++	case WCD938X_TX_3_4_SPARE2:
++	case WCD938X_CLASSH_MODE_1:
++	case WCD938X_CLASSH_MODE_2:
++	case WCD938X_CLASSH_MODE_3:
++	case WCD938X_CLASSH_CTRL_VCL_1:
++	case WCD938X_CLASSH_CTRL_VCL_2:
++	case WCD938X_CLASSH_CTRL_CCL_1:
++	case WCD938X_CLASSH_CTRL_CCL_2:
++	case WCD938X_CLASSH_CTRL_CCL_3:
++	case WCD938X_CLASSH_CTRL_CCL_4:
++	case WCD938X_CLASSH_CTRL_CCL_5:
++	case WCD938X_CLASSH_BUCK_TMUX_A_D:
++	case WCD938X_CLASSH_BUCK_SW_DRV_CNTL:
++	case WCD938X_CLASSH_SPARE:
++	case WCD938X_FLYBACK_EN:
++	case WCD938X_FLYBACK_VNEG_CTRL_1:
++	case WCD938X_FLYBACK_VNEG_CTRL_2:
++	case WCD938X_FLYBACK_VNEG_CTRL_3:
++	case WCD938X_FLYBACK_VNEG_CTRL_4:
++	case WCD938X_FLYBACK_VNEG_CTRL_5:
++	case WCD938X_FLYBACK_VNEG_CTRL_6:
++	case WCD938X_FLYBACK_VNEG_CTRL_7:
++	case WCD938X_FLYBACK_VNEG_CTRL_8:
++	case WCD938X_FLYBACK_VNEG_CTRL_9:
++	case WCD938X_FLYBACK_VNEGDAC_CTRL_1:
++	case WCD938X_FLYBACK_VNEGDAC_CTRL_2:
++	case WCD938X_FLYBACK_VNEGDAC_CTRL_3:
++	case WCD938X_FLYBACK_CTRL_1:
++	case WCD938X_FLYBACK_TEST_CTL:
++	case WCD938X_RX_AUX_SW_CTL:
++	case WCD938X_RX_PA_AUX_IN_CONN:
++	case WCD938X_RX_TIMER_DIV:
++	case WCD938X_RX_OCP_CTL:
++	case WCD938X_RX_OCP_COUNT:
++	case WCD938X_RX_BIAS_EAR_DAC:
++	case WCD938X_RX_BIAS_EAR_AMP:
++	case WCD938X_RX_BIAS_HPH_LDO:
++	case WCD938X_RX_BIAS_HPH_PA:
++	case WCD938X_RX_BIAS_HPH_RDACBUFF_CNP2:
++	case WCD938X_RX_BIAS_HPH_RDAC_LDO:
++	case WCD938X_RX_BIAS_HPH_CNP1:
++	case WCD938X_RX_BIAS_HPH_LOWPOWER:
++	case WCD938X_RX_BIAS_AUX_DAC:
++	case WCD938X_RX_BIAS_AUX_AMP:
++	case WCD938X_RX_BIAS_VNEGDAC_BLEEDER:
++	case WCD938X_RX_BIAS_MISC:
++	case WCD938X_RX_BIAS_BUCK_RST:
++	case WCD938X_RX_BIAS_BUCK_VREF_ERRAMP:
++	case WCD938X_RX_BIAS_FLYB_ERRAMP:
++	case WCD938X_RX_BIAS_FLYB_BUFF:
++	case WCD938X_RX_BIAS_FLYB_MID_RST:
++	case WCD938X_HPH_CNP_EN:
++	case WCD938X_HPH_CNP_WG_CTL:
++	case WCD938X_HPH_CNP_WG_TIME:
++	case WCD938X_HPH_OCP_CTL:
++	case WCD938X_HPH_AUTO_CHOP:
++	case WCD938X_HPH_CHOP_CTL:
++	case WCD938X_HPH_PA_CTL1:
++	case WCD938X_HPH_PA_CTL2:
++	case WCD938X_HPH_L_EN:
++	case WCD938X_HPH_L_TEST:
++	case WCD938X_HPH_L_ATEST:
++	case WCD938X_HPH_R_EN:
++	case WCD938X_HPH_R_TEST:
++	case WCD938X_HPH_R_ATEST:
++	case WCD938X_HPH_RDAC_CLK_CTL1:
++	case WCD938X_HPH_RDAC_CLK_CTL2:
++	case WCD938X_HPH_RDAC_LDO_CTL:
++	case WCD938X_HPH_RDAC_CHOP_CLK_LP_CTL:
++	case WCD938X_HPH_REFBUFF_UHQA_CTL:
++	case WCD938X_HPH_REFBUFF_LP_CTL:
++	case WCD938X_HPH_L_DAC_CTL:
++	case WCD938X_HPH_R_DAC_CTL:
++	case WCD938X_HPH_SURGE_HPHLR_SURGE_COMP_SEL:
++	case WCD938X_HPH_SURGE_HPHLR_SURGE_EN:
++	case WCD938X_HPH_SURGE_HPHLR_SURGE_MISC1:
++	case WCD938X_EAR_EAR_EN_REG:
++	case WCD938X_EAR_EAR_PA_CON:
++	case WCD938X_EAR_EAR_SP_CON:
++	case WCD938X_EAR_EAR_DAC_CON:
++	case WCD938X_EAR_EAR_CNP_FSM_CON:
++	case WCD938X_EAR_TEST_CTL:
++	case WCD938X_ANA_NEW_PAGE_REGISTER:
++	case WCD938X_HPH_NEW_ANA_HPH2:
++	case WCD938X_HPH_NEW_ANA_HPH3:
++	case WCD938X_SLEEP_CTL:
++	case WCD938X_SLEEP_WATCHDOG_CTL:
++	case WCD938X_MBHC_NEW_ELECT_REM_CLAMP_CTL:
++	case WCD938X_MBHC_NEW_CTL_1:
++	case WCD938X_MBHC_NEW_CTL_2:
++	case WCD938X_MBHC_NEW_PLUG_DETECT_CTL:
++	case WCD938X_MBHC_NEW_ZDET_ANA_CTL:
++	case WCD938X_MBHC_NEW_ZDET_RAMP_CTL:
++	case WCD938X_TX_NEW_AMIC_MUX_CFG:
++	case WCD938X_AUX_AUXPA:
++	case WCD938X_LDORXTX_MODE:
++	case WCD938X_LDORXTX_CONFIG:
++	case WCD938X_DIE_CRACK_DIE_CRK_DET_EN:
++	case WCD938X_HPH_NEW_INT_RDAC_GAIN_CTL:
++	case WCD938X_HPH_NEW_INT_RDAC_HD2_CTL_L:
++	case WCD938X_HPH_NEW_INT_RDAC_VREF_CTL:
++	case WCD938X_HPH_NEW_INT_RDAC_OVERRIDE_CTL:
++	case WCD938X_HPH_NEW_INT_RDAC_HD2_CTL_R:
++	case WCD938X_HPH_NEW_INT_PA_MISC1:
++	case WCD938X_HPH_NEW_INT_PA_MISC2:
++	case WCD938X_HPH_NEW_INT_PA_RDAC_MISC:
++	case WCD938X_HPH_NEW_INT_HPH_TIMER1:
++	case WCD938X_HPH_NEW_INT_HPH_TIMER2:
++	case WCD938X_HPH_NEW_INT_HPH_TIMER3:
++	case WCD938X_HPH_NEW_INT_HPH_TIMER4:
++	case WCD938X_HPH_NEW_INT_PA_RDAC_MISC2:
++	case WCD938X_HPH_NEW_INT_PA_RDAC_MISC3:
++	case WCD938X_HPH_NEW_INT_RDAC_HD2_CTL_L_NEW:
++	case WCD938X_HPH_NEW_INT_RDAC_HD2_CTL_R_NEW:
++	case WCD938X_RX_NEW_INT_HPH_RDAC_BIAS_LOHIFI:
++	case WCD938X_RX_NEW_INT_HPH_RDAC_BIAS_ULP:
++	case WCD938X_RX_NEW_INT_HPH_RDAC_LDO_LP:
++	case WCD938X_MBHC_NEW_INT_MOISTURE_DET_DC_CTRL:
++	case WCD938X_MBHC_NEW_INT_MOISTURE_DET_POLLING_CTRL:
++	case WCD938X_MBHC_NEW_INT_MECH_DET_CURRENT:
++	case WCD938X_MBHC_NEW_INT_SPARE_2:
++	case WCD938X_EAR_INT_NEW_EAR_CHOPPER_CON:
++	case WCD938X_EAR_INT_NEW_CNP_VCM_CON1:
++	case WCD938X_EAR_INT_NEW_CNP_VCM_CON2:
++	case WCD938X_EAR_INT_NEW_EAR_DYNAMIC_BIAS:
++	case WCD938X_AUX_INT_EN_REG:
++	case WCD938X_AUX_INT_PA_CTRL:
++	case WCD938X_AUX_INT_SP_CTRL:
++	case WCD938X_AUX_INT_DAC_CTRL:
++	case WCD938X_AUX_INT_CLK_CTRL:
++	case WCD938X_AUX_INT_TEST_CTRL:
++	case WCD938X_AUX_INT_MISC:
++	case WCD938X_LDORXTX_INT_BIAS:
++	case WCD938X_LDORXTX_INT_STB_LOADS_DTEST:
++	case WCD938X_LDORXTX_INT_TEST0:
++	case WCD938X_LDORXTX_INT_STARTUP_TIMER:
++	case WCD938X_LDORXTX_INT_TEST1:
++	case WCD938X_SLEEP_INT_WATCHDOG_CTL_1:
++	case WCD938X_SLEEP_INT_WATCHDOG_CTL_2:
++	case WCD938X_DIE_CRACK_INT_DIE_CRK_DET_INT1:
++	case WCD938X_DIE_CRACK_INT_DIE_CRK_DET_INT2:
++	case WCD938X_TX_COM_NEW_INT_TXFE_DIVSTOP_L2:
++	case WCD938X_TX_COM_NEW_INT_TXFE_DIVSTOP_L1:
++	case WCD938X_TX_COM_NEW_INT_TXFE_DIVSTOP_L0:
++	case WCD938X_TX_COM_NEW_INT_TXFE_DIVSTOP_ULP1P2M:
++	case WCD938X_TX_COM_NEW_INT_TXFE_DIVSTOP_ULP0P6M:
++	case WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG1_L2L1:
++	case WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG1_L0:
++	case WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG1_ULP:
++	case WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG2MAIN_L2L1:
++	case WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG2MAIN_L0:
++	case WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG2MAIN_ULP:
++	case WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG2CASC_L2L1L0:
++	case WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG2CASC_ULP:
++	case WCD938X_TX_COM_NEW_INT_TXADC_SCBIAS_L2L1:
++	case WCD938X_TX_COM_NEW_INT_TXADC_SCBIAS_L0ULP:
++	case WCD938X_TX_COM_NEW_INT_TXADC_INT_L2:
++	case WCD938X_TX_COM_NEW_INT_TXADC_INT_L1:
++	case WCD938X_TX_COM_NEW_INT_TXADC_INT_L0:
++	case WCD938X_TX_COM_NEW_INT_TXADC_INT_ULP:
++	case WCD938X_DIGITAL_PAGE_REGISTER:
++	case WCD938X_DIGITAL_SWR_TX_CLK_RATE:
++	case WCD938X_DIGITAL_CDC_RST_CTL:
++	case WCD938X_DIGITAL_TOP_CLK_CFG:
++	case WCD938X_DIGITAL_CDC_ANA_CLK_CTL:
++	case WCD938X_DIGITAL_CDC_DIG_CLK_CTL:
++	case WCD938X_DIGITAL_SWR_RST_EN:
++	case WCD938X_DIGITAL_CDC_PATH_MODE:
++	case WCD938X_DIGITAL_CDC_RX_RST:
++	case WCD938X_DIGITAL_CDC_RX0_CTL:
++	case WCD938X_DIGITAL_CDC_RX1_CTL:
++	case WCD938X_DIGITAL_CDC_RX2_CTL:
++	case WCD938X_DIGITAL_CDC_TX_ANA_MODE_0_1:
++	case WCD938X_DIGITAL_CDC_TX_ANA_MODE_2_3:
++	case WCD938X_DIGITAL_CDC_COMP_CTL_0:
++	case WCD938X_DIGITAL_CDC_ANA_TX_CLK_CTL:
++	case WCD938X_DIGITAL_CDC_HPH_DSM_A1_0:
++	case WCD938X_DIGITAL_CDC_HPH_DSM_A1_1:
++	case WCD938X_DIGITAL_CDC_HPH_DSM_A2_0:
++	case WCD938X_DIGITAL_CDC_HPH_DSM_A2_1:
++	case WCD938X_DIGITAL_CDC_HPH_DSM_A3_0:
++	case WCD938X_DIGITAL_CDC_HPH_DSM_A3_1:
++	case WCD938X_DIGITAL_CDC_HPH_DSM_A4_0:
++	case WCD938X_DIGITAL_CDC_HPH_DSM_A4_1:
++	case WCD938X_DIGITAL_CDC_HPH_DSM_A5_0:
++	case WCD938X_DIGITAL_CDC_HPH_DSM_A5_1:
++	case WCD938X_DIGITAL_CDC_HPH_DSM_A6_0:
++	case WCD938X_DIGITAL_CDC_HPH_DSM_A7_0:
++	case WCD938X_DIGITAL_CDC_HPH_DSM_C_0:
++	case WCD938X_DIGITAL_CDC_HPH_DSM_C_1:
++	case WCD938X_DIGITAL_CDC_HPH_DSM_C_2:
++	case WCD938X_DIGITAL_CDC_HPH_DSM_C_3:
++	case WCD938X_DIGITAL_CDC_HPH_DSM_R1:
++	case WCD938X_DIGITAL_CDC_HPH_DSM_R2:
++	case WCD938X_DIGITAL_CDC_HPH_DSM_R3:
++	case WCD938X_DIGITAL_CDC_HPH_DSM_R4:
++	case WCD938X_DIGITAL_CDC_HPH_DSM_R5:
++	case WCD938X_DIGITAL_CDC_HPH_DSM_R6:
++	case WCD938X_DIGITAL_CDC_HPH_DSM_R7:
++	case WCD938X_DIGITAL_CDC_AUX_DSM_A1_0:
++	case WCD938X_DIGITAL_CDC_AUX_DSM_A1_1:
++	case WCD938X_DIGITAL_CDC_AUX_DSM_A2_0:
++	case WCD938X_DIGITAL_CDC_AUX_DSM_A2_1:
++	case WCD938X_DIGITAL_CDC_AUX_DSM_A3_0:
++	case WCD938X_DIGITAL_CDC_AUX_DSM_A3_1:
++	case WCD938X_DIGITAL_CDC_AUX_DSM_A4_0:
++	case WCD938X_DIGITAL_CDC_AUX_DSM_A4_1:
++	case WCD938X_DIGITAL_CDC_AUX_DSM_A5_0:
++	case WCD938X_DIGITAL_CDC_AUX_DSM_A5_1:
++	case WCD938X_DIGITAL_CDC_AUX_DSM_A6_0:
++	case WCD938X_DIGITAL_CDC_AUX_DSM_A7_0:
++	case WCD938X_DIGITAL_CDC_AUX_DSM_C_0:
++	case WCD938X_DIGITAL_CDC_AUX_DSM_C_1:
++	case WCD938X_DIGITAL_CDC_AUX_DSM_C_2:
++	case WCD938X_DIGITAL_CDC_AUX_DSM_C_3:
++	case WCD938X_DIGITAL_CDC_AUX_DSM_R1:
++	case WCD938X_DIGITAL_CDC_AUX_DSM_R2:
++	case WCD938X_DIGITAL_CDC_AUX_DSM_R3:
++	case WCD938X_DIGITAL_CDC_AUX_DSM_R4:
++	case WCD938X_DIGITAL_CDC_AUX_DSM_R5:
++	case WCD938X_DIGITAL_CDC_AUX_DSM_R6:
++	case WCD938X_DIGITAL_CDC_AUX_DSM_R7:
++	case WCD938X_DIGITAL_CDC_HPH_GAIN_RX_0:
++	case WCD938X_DIGITAL_CDC_HPH_GAIN_RX_1:
++	case WCD938X_DIGITAL_CDC_HPH_GAIN_DSD_0:
++	case WCD938X_DIGITAL_CDC_HPH_GAIN_DSD_1:
++	case WCD938X_DIGITAL_CDC_HPH_GAIN_DSD_2:
++	case WCD938X_DIGITAL_CDC_AUX_GAIN_DSD_0:
++	case WCD938X_DIGITAL_CDC_AUX_GAIN_DSD_1:
++	case WCD938X_DIGITAL_CDC_AUX_GAIN_DSD_2:
++	case WCD938X_DIGITAL_CDC_HPH_GAIN_CTL:
++	case WCD938X_DIGITAL_CDC_AUX_GAIN_CTL:
++	case WCD938X_DIGITAL_CDC_EAR_PATH_CTL:
++	case WCD938X_DIGITAL_CDC_SWR_CLH:
++	case WCD938X_DIGITAL_SWR_CLH_BYP:
++	case WCD938X_DIGITAL_CDC_TX0_CTL:
++	case WCD938X_DIGITAL_CDC_TX1_CTL:
++	case WCD938X_DIGITAL_CDC_TX2_CTL:
++	case WCD938X_DIGITAL_CDC_TX_RST:
++	case WCD938X_DIGITAL_CDC_REQ_CTL:
++	case WCD938X_DIGITAL_CDC_RST:
++	case WCD938X_DIGITAL_CDC_AMIC_CTL:
++	case WCD938X_DIGITAL_CDC_DMIC_CTL:
++	case WCD938X_DIGITAL_CDC_DMIC1_CTL:
++	case WCD938X_DIGITAL_CDC_DMIC2_CTL:
++	case WCD938X_DIGITAL_CDC_DMIC3_CTL:
++	case WCD938X_DIGITAL_CDC_DMIC4_CTL:
++	case WCD938X_DIGITAL_EFUSE_PRG_CTL:
++	case WCD938X_DIGITAL_EFUSE_CTL:
++	case WCD938X_DIGITAL_CDC_DMIC_RATE_1_2:
++	case WCD938X_DIGITAL_CDC_DMIC_RATE_3_4:
++	case WCD938X_DIGITAL_PDM_WD_CTL0:
++	case WCD938X_DIGITAL_PDM_WD_CTL1:
++	case WCD938X_DIGITAL_PDM_WD_CTL2:
++	case WCD938X_DIGITAL_INTR_MODE:
++	case WCD938X_DIGITAL_INTR_MASK_0:
++	case WCD938X_DIGITAL_INTR_MASK_1:
++	case WCD938X_DIGITAL_INTR_MASK_2:
++	case WCD938X_DIGITAL_INTR_CLEAR_0:
++	case WCD938X_DIGITAL_INTR_CLEAR_1:
++	case WCD938X_DIGITAL_INTR_CLEAR_2:
++	case WCD938X_DIGITAL_INTR_LEVEL_0:
++	case WCD938X_DIGITAL_INTR_LEVEL_1:
++	case WCD938X_DIGITAL_INTR_LEVEL_2:
++	case WCD938X_DIGITAL_INTR_SET_0:
++	case WCD938X_DIGITAL_INTR_SET_1:
++	case WCD938X_DIGITAL_INTR_SET_2:
++	case WCD938X_DIGITAL_INTR_TEST_0:
++	case WCD938X_DIGITAL_INTR_TEST_1:
++	case WCD938X_DIGITAL_INTR_TEST_2:
++	case WCD938X_DIGITAL_TX_MODE_DBG_EN:
++	case WCD938X_DIGITAL_TX_MODE_DBG_0_1:
++	case WCD938X_DIGITAL_TX_MODE_DBG_2_3:
++	case WCD938X_DIGITAL_LB_IN_SEL_CTL:
++	case WCD938X_DIGITAL_LOOP_BACK_MODE:
++	case WCD938X_DIGITAL_SWR_DAC_TEST:
++	case WCD938X_DIGITAL_SWR_HM_TEST_RX_0:
++	case WCD938X_DIGITAL_SWR_HM_TEST_TX_0:
++	case WCD938X_DIGITAL_SWR_HM_TEST_RX_1:
++	case WCD938X_DIGITAL_SWR_HM_TEST_TX_1:
++	case WCD938X_DIGITAL_SWR_HM_TEST_TX_2:
++	case WCD938X_DIGITAL_PAD_CTL_SWR_0:
++	case WCD938X_DIGITAL_PAD_CTL_SWR_1:
++	case WCD938X_DIGITAL_I2C_CTL:
++	case WCD938X_DIGITAL_CDC_TX_TANGGU_SW_MODE:
++	case WCD938X_DIGITAL_EFUSE_TEST_CTL_0:
++	case WCD938X_DIGITAL_EFUSE_TEST_CTL_1:
++	case WCD938X_DIGITAL_PAD_CTL_PDM_RX0:
++	case WCD938X_DIGITAL_PAD_CTL_PDM_RX1:
++	case WCD938X_DIGITAL_PAD_CTL_PDM_TX0:
++	case WCD938X_DIGITAL_PAD_CTL_PDM_TX1:
++	case WCD938X_DIGITAL_PAD_CTL_PDM_TX2:
++	case WCD938X_DIGITAL_PAD_INP_DIS_0:
++	case WCD938X_DIGITAL_PAD_INP_DIS_1:
++	case WCD938X_DIGITAL_DRIVE_STRENGTH_0:
++	case WCD938X_DIGITAL_DRIVE_STRENGTH_1:
++	case WCD938X_DIGITAL_DRIVE_STRENGTH_2:
++	case WCD938X_DIGITAL_RX_DATA_EDGE_CTL:
++	case WCD938X_DIGITAL_TX_DATA_EDGE_CTL:
++	case WCD938X_DIGITAL_GPIO_MODE:
++	case WCD938X_DIGITAL_PIN_CTL_OE:
++	case WCD938X_DIGITAL_PIN_CTL_DATA_0:
++	case WCD938X_DIGITAL_PIN_CTL_DATA_1:
++	case WCD938X_DIGITAL_DIG_DEBUG_CTL:
++	case WCD938X_DIGITAL_DIG_DEBUG_EN:
++	case WCD938X_DIGITAL_ANA_CSR_DBG_ADD:
++	case WCD938X_DIGITAL_ANA_CSR_DBG_CTL:
++	case WCD938X_DIGITAL_SSP_DBG:
++	case WCD938X_DIGITAL_SPARE_0:
++	case WCD938X_DIGITAL_SPARE_1:
++	case WCD938X_DIGITAL_SPARE_2:
++	case WCD938X_DIGITAL_TX_REQ_FB_CTL_0:
++	case WCD938X_DIGITAL_TX_REQ_FB_CTL_1:
++	case WCD938X_DIGITAL_TX_REQ_FB_CTL_2:
++	case WCD938X_DIGITAL_TX_REQ_FB_CTL_3:
++	case WCD938X_DIGITAL_TX_REQ_FB_CTL_4:
++	case WCD938X_DIGITAL_DEM_BYPASS_DATA0:
++	case WCD938X_DIGITAL_DEM_BYPASS_DATA1:
++	case WCD938X_DIGITAL_DEM_BYPASS_DATA2:
++	case WCD938X_DIGITAL_DEM_BYPASS_DATA3:
++		return true;
++	}
++
++	return false;
++}
++
++static bool wcd938x_readonly_register(struct device *dev, unsigned int reg)
++{
++	switch (reg) {
++	case WCD938X_ANA_MBHC_RESULT_1:
++	case WCD938X_ANA_MBHC_RESULT_2:
++	case WCD938X_ANA_MBHC_RESULT_3:
++	case WCD938X_MBHC_MOISTURE_DET_FSM_STATUS:
++	case WCD938X_TX_1_2_SAR2_ERR:
++	case WCD938X_TX_1_2_SAR1_ERR:
++	case WCD938X_TX_3_4_SAR4_ERR:
++	case WCD938X_TX_3_4_SAR3_ERR:
++	case WCD938X_HPH_L_STATUS:
++	case WCD938X_HPH_R_STATUS:
++	case WCD938X_HPH_SURGE_HPHLR_SURGE_STATUS:
++	case WCD938X_EAR_STATUS_REG_1:
++	case WCD938X_EAR_STATUS_REG_2:
++	case WCD938X_MBHC_NEW_FSM_STATUS:
++	case WCD938X_MBHC_NEW_ADC_RESULT:
++	case WCD938X_DIE_CRACK_DIE_CRK_DET_OUT:
++	case WCD938X_AUX_INT_STATUS_REG:
++	case WCD938X_LDORXTX_INT_STATUS:
++	case WCD938X_DIGITAL_CHIP_ID0:
++	case WCD938X_DIGITAL_CHIP_ID1:
++	case WCD938X_DIGITAL_CHIP_ID2:
++	case WCD938X_DIGITAL_CHIP_ID3:
++	case WCD938X_DIGITAL_INTR_STATUS_0:
++	case WCD938X_DIGITAL_INTR_STATUS_1:
++	case WCD938X_DIGITAL_INTR_STATUS_2:
++	case WCD938X_DIGITAL_INTR_CLEAR_0:
++	case WCD938X_DIGITAL_INTR_CLEAR_1:
++	case WCD938X_DIGITAL_INTR_CLEAR_2:
++	case WCD938X_DIGITAL_SWR_HM_TEST_0:
++	case WCD938X_DIGITAL_SWR_HM_TEST_1:
++	case WCD938X_DIGITAL_EFUSE_T_DATA_0:
++	case WCD938X_DIGITAL_EFUSE_T_DATA_1:
++	case WCD938X_DIGITAL_PIN_STATUS_0:
++	case WCD938X_DIGITAL_PIN_STATUS_1:
++	case WCD938X_DIGITAL_MODE_STATUS_0:
++	case WCD938X_DIGITAL_MODE_STATUS_1:
++	case WCD938X_DIGITAL_EFUSE_REG_0:
++	case WCD938X_DIGITAL_EFUSE_REG_1:
++	case WCD938X_DIGITAL_EFUSE_REG_2:
++	case WCD938X_DIGITAL_EFUSE_REG_3:
++	case WCD938X_DIGITAL_EFUSE_REG_4:
++	case WCD938X_DIGITAL_EFUSE_REG_5:
++	case WCD938X_DIGITAL_EFUSE_REG_6:
++	case WCD938X_DIGITAL_EFUSE_REG_7:
++	case WCD938X_DIGITAL_EFUSE_REG_8:
++	case WCD938X_DIGITAL_EFUSE_REG_9:
++	case WCD938X_DIGITAL_EFUSE_REG_10:
++	case WCD938X_DIGITAL_EFUSE_REG_11:
++	case WCD938X_DIGITAL_EFUSE_REG_12:
++	case WCD938X_DIGITAL_EFUSE_REG_13:
++	case WCD938X_DIGITAL_EFUSE_REG_14:
++	case WCD938X_DIGITAL_EFUSE_REG_15:
++	case WCD938X_DIGITAL_EFUSE_REG_16:
++	case WCD938X_DIGITAL_EFUSE_REG_17:
++	case WCD938X_DIGITAL_EFUSE_REG_18:
++	case WCD938X_DIGITAL_EFUSE_REG_19:
++	case WCD938X_DIGITAL_EFUSE_REG_20:
++	case WCD938X_DIGITAL_EFUSE_REG_21:
++	case WCD938X_DIGITAL_EFUSE_REG_22:
++	case WCD938X_DIGITAL_EFUSE_REG_23:
++	case WCD938X_DIGITAL_EFUSE_REG_24:
++	case WCD938X_DIGITAL_EFUSE_REG_25:
++	case WCD938X_DIGITAL_EFUSE_REG_26:
++	case WCD938X_DIGITAL_EFUSE_REG_27:
++	case WCD938X_DIGITAL_EFUSE_REG_28:
++	case WCD938X_DIGITAL_EFUSE_REG_29:
++	case WCD938X_DIGITAL_EFUSE_REG_30:
++	case WCD938X_DIGITAL_EFUSE_REG_31:
++		return true;
++	}
++	return false;
++}
++
++static bool wcd938x_readable_register(struct device *dev, unsigned int reg)
++{
++	bool ret;
++
++	ret = wcd938x_readonly_register(dev, reg);
++	if (!ret)
++		return wcd938x_rdwr_register(dev, reg);
++
++	return ret;
++}
++
++static bool wcd938x_writeable_register(struct device *dev, unsigned int reg)
++{
++	return wcd938x_rdwr_register(dev, reg);
++}
++
++static bool wcd938x_volatile_register(struct device *dev, unsigned int reg)
++{
++	if (reg <= WCD938X_BASE_ADDRESS)
++		return false;
++
++	if (reg == WCD938X_DIGITAL_SWR_TX_CLK_RATE)
++		return true;
++
++	if (wcd938x_readonly_register(dev, reg))
++		return true;
++
++	return false;
++}
++
++static const struct regmap_config wcd938x_regmap_config = {
++	.name = "wcd938x_csr",
++	.reg_bits = 32,
++	.val_bits = 8,
++	.cache_type = REGCACHE_RBTREE,
++	.reg_defaults = wcd938x_defaults,
++	.num_reg_defaults = ARRAY_SIZE(wcd938x_defaults),
++	.max_register = WCD938X_MAX_REGISTER,
++	.readable_reg = wcd938x_readable_register,
++	.writeable_reg = wcd938x_writeable_register,
++	.volatile_reg = wcd938x_volatile_register,
++	.can_multi_write = true,
++};
++
+ static const struct sdw_slave_ops wcd9380_slave_ops = {
+ 	.update_status = wcd9380_update_status,
+ 	.interrupt_callback = wcd9380_interrupt_callback,
+@@ -261,6 +1263,16 @@ static int wcd9380_probe(struct sdw_slave *pdev,
+ 		wcd->ch_info = &wcd938x_sdw_rx_ch_info[0];
+ 	}
+ 
++	if (wcd->is_tx) {
++		wcd->regmap = devm_regmap_init_sdw(pdev, &wcd938x_regmap_config);
++		if (IS_ERR(wcd->regmap))
++			return dev_err_probe(dev, PTR_ERR(wcd->regmap),
++					     "Regmap init failed\n");
++
++		/* Start in cache-only until device is enumerated */
++		regcache_cache_only(wcd->regmap, true);
++	};
++
+ 	pm_runtime_set_autosuspend_delay(dev, 3000);
+ 	pm_runtime_use_autosuspend(dev);
+ 	pm_runtime_mark_last_busy(dev);
+@@ -278,22 +1290,23 @@ MODULE_DEVICE_TABLE(sdw, wcd9380_slave_id);
+ 
+ static int __maybe_unused wcd938x_sdw_runtime_suspend(struct device *dev)
+ {
+-	struct regmap *regmap = dev_get_regmap(dev, NULL);
++	struct wcd938x_sdw_priv *wcd = dev_get_drvdata(dev);
+ 
+-	if (regmap) {
+-		regcache_cache_only(regmap, true);
+-		regcache_mark_dirty(regmap);
++	if (wcd->regmap) {
++		regcache_cache_only(wcd->regmap, true);
++		regcache_mark_dirty(wcd->regmap);
+ 	}
++
+ 	return 0;
+ }
+ 
+ static int __maybe_unused wcd938x_sdw_runtime_resume(struct device *dev)
+ {
+-	struct regmap *regmap = dev_get_regmap(dev, NULL);
++	struct wcd938x_sdw_priv *wcd = dev_get_drvdata(dev);
+ 
+-	if (regmap) {
+-		regcache_cache_only(regmap, false);
+-		regcache_sync(regmap);
++	if (wcd->regmap) {
++		regcache_cache_only(wcd->regmap, false);
++		regcache_sync(wcd->regmap);
+ 	}
+ 
+ 	pm_runtime_mark_last_busy(dev);
+diff --git a/sound/soc/codecs/wcd938x.c b/sound/soc/codecs/wcd938x.c
+index fcac763b04d1b..d34f13758aca0 100644
+--- a/sound/soc/codecs/wcd938x.c
++++ b/sound/soc/codecs/wcd938x.c
+@@ -273,1001 +273,6 @@ static struct wcd_mbhc_field wcd_mbhc_fields[WCD_MBHC_REG_FUNC_MAX] = {
+ 	WCD_MBHC_FIELD(WCD_MBHC_ELECT_ISRC_EN, WCD938X_ANA_MBHC_ZDET, 0x02),
+ };
+ 
+-static const struct reg_default wcd938x_defaults[] = {
+-	{WCD938X_ANA_PAGE_REGISTER,                            0x00},
+-	{WCD938X_ANA_BIAS,                                     0x00},
+-	{WCD938X_ANA_RX_SUPPLIES,                              0x00},
+-	{WCD938X_ANA_HPH,                                      0x0C},
+-	{WCD938X_ANA_EAR,                                      0x00},
+-	{WCD938X_ANA_EAR_COMPANDER_CTL,                        0x02},
+-	{WCD938X_ANA_TX_CH1,                                   0x20},
+-	{WCD938X_ANA_TX_CH2,                                   0x00},
+-	{WCD938X_ANA_TX_CH3,                                   0x20},
+-	{WCD938X_ANA_TX_CH4,                                   0x00},
+-	{WCD938X_ANA_MICB1_MICB2_DSP_EN_LOGIC,                 0x00},
+-	{WCD938X_ANA_MICB3_DSP_EN_LOGIC,                       0x00},
+-	{WCD938X_ANA_MBHC_MECH,                                0x39},
+-	{WCD938X_ANA_MBHC_ELECT,                               0x08},
+-	{WCD938X_ANA_MBHC_ZDET,                                0x00},
+-	{WCD938X_ANA_MBHC_RESULT_1,                            0x00},
+-	{WCD938X_ANA_MBHC_RESULT_2,                            0x00},
+-	{WCD938X_ANA_MBHC_RESULT_3,                            0x00},
+-	{WCD938X_ANA_MBHC_BTN0,                                0x00},
+-	{WCD938X_ANA_MBHC_BTN1,                                0x10},
+-	{WCD938X_ANA_MBHC_BTN2,                                0x20},
+-	{WCD938X_ANA_MBHC_BTN3,                                0x30},
+-	{WCD938X_ANA_MBHC_BTN4,                                0x40},
+-	{WCD938X_ANA_MBHC_BTN5,                                0x50},
+-	{WCD938X_ANA_MBHC_BTN6,                                0x60},
+-	{WCD938X_ANA_MBHC_BTN7,                                0x70},
+-	{WCD938X_ANA_MICB1,                                    0x10},
+-	{WCD938X_ANA_MICB2,                                    0x10},
+-	{WCD938X_ANA_MICB2_RAMP,                               0x00},
+-	{WCD938X_ANA_MICB3,                                    0x10},
+-	{WCD938X_ANA_MICB4,                                    0x10},
+-	{WCD938X_BIAS_CTL,                                     0x2A},
+-	{WCD938X_BIAS_VBG_FINE_ADJ,                            0x55},
+-	{WCD938X_LDOL_VDDCX_ADJUST,                            0x01},
+-	{WCD938X_LDOL_DISABLE_LDOL,                            0x00},
+-	{WCD938X_MBHC_CTL_CLK,                                 0x00},
+-	{WCD938X_MBHC_CTL_ANA,                                 0x00},
+-	{WCD938X_MBHC_CTL_SPARE_1,                             0x00},
+-	{WCD938X_MBHC_CTL_SPARE_2,                             0x00},
+-	{WCD938X_MBHC_CTL_BCS,                                 0x00},
+-	{WCD938X_MBHC_MOISTURE_DET_FSM_STATUS,                 0x00},
+-	{WCD938X_MBHC_TEST_CTL,                                0x00},
+-	{WCD938X_LDOH_MODE,                                    0x2B},
+-	{WCD938X_LDOH_BIAS,                                    0x68},
+-	{WCD938X_LDOH_STB_LOADS,                               0x00},
+-	{WCD938X_LDOH_SLOWRAMP,                                0x50},
+-	{WCD938X_MICB1_TEST_CTL_1,                             0x1A},
+-	{WCD938X_MICB1_TEST_CTL_2,                             0x00},
+-	{WCD938X_MICB1_TEST_CTL_3,                             0xA4},
+-	{WCD938X_MICB2_TEST_CTL_1,                             0x1A},
+-	{WCD938X_MICB2_TEST_CTL_2,                             0x00},
+-	{WCD938X_MICB2_TEST_CTL_3,                             0x24},
+-	{WCD938X_MICB3_TEST_CTL_1,                             0x1A},
+-	{WCD938X_MICB3_TEST_CTL_2,                             0x00},
+-	{WCD938X_MICB3_TEST_CTL_3,                             0xA4},
+-	{WCD938X_MICB4_TEST_CTL_1,                             0x1A},
+-	{WCD938X_MICB4_TEST_CTL_2,                             0x00},
+-	{WCD938X_MICB4_TEST_CTL_3,                             0xA4},
+-	{WCD938X_TX_COM_ADC_VCM,                               0x39},
+-	{WCD938X_TX_COM_BIAS_ATEST,                            0xE0},
+-	{WCD938X_TX_COM_SPARE1,                                0x00},
+-	{WCD938X_TX_COM_SPARE2,                                0x00},
+-	{WCD938X_TX_COM_TXFE_DIV_CTL,                          0x22},
+-	{WCD938X_TX_COM_TXFE_DIV_START,                        0x00},
+-	{WCD938X_TX_COM_SPARE3,                                0x00},
+-	{WCD938X_TX_COM_SPARE4,                                0x00},
+-	{WCD938X_TX_1_2_TEST_EN,                               0xCC},
+-	{WCD938X_TX_1_2_ADC_IB,                                0xE9},
+-	{WCD938X_TX_1_2_ATEST_REFCTL,                          0x0A},
+-	{WCD938X_TX_1_2_TEST_CTL,                              0x38},
+-	{WCD938X_TX_1_2_TEST_BLK_EN1,                          0xFF},
+-	{WCD938X_TX_1_2_TXFE1_CLKDIV,                          0x00},
+-	{WCD938X_TX_1_2_SAR2_ERR,                              0x00},
+-	{WCD938X_TX_1_2_SAR1_ERR,                              0x00},
+-	{WCD938X_TX_3_4_TEST_EN,                               0xCC},
+-	{WCD938X_TX_3_4_ADC_IB,                                0xE9},
+-	{WCD938X_TX_3_4_ATEST_REFCTL,                          0x0A},
+-	{WCD938X_TX_3_4_TEST_CTL,                              0x38},
+-	{WCD938X_TX_3_4_TEST_BLK_EN3,                          0xFF},
+-	{WCD938X_TX_3_4_TXFE3_CLKDIV,                          0x00},
+-	{WCD938X_TX_3_4_SAR4_ERR,                              0x00},
+-	{WCD938X_TX_3_4_SAR3_ERR,                              0x00},
+-	{WCD938X_TX_3_4_TEST_BLK_EN2,                          0xFB},
+-	{WCD938X_TX_3_4_TXFE2_CLKDIV,                          0x00},
+-	{WCD938X_TX_3_4_SPARE1,                                0x00},
+-	{WCD938X_TX_3_4_TEST_BLK_EN4,                          0xFB},
+-	{WCD938X_TX_3_4_TXFE4_CLKDIV,                          0x00},
+-	{WCD938X_TX_3_4_SPARE2,                                0x00},
+-	{WCD938X_CLASSH_MODE_1,                                0x40},
+-	{WCD938X_CLASSH_MODE_2,                                0x3A},
+-	{WCD938X_CLASSH_MODE_3,                                0x00},
+-	{WCD938X_CLASSH_CTRL_VCL_1,                            0x70},
+-	{WCD938X_CLASSH_CTRL_VCL_2,                            0x82},
+-	{WCD938X_CLASSH_CTRL_CCL_1,                            0x31},
+-	{WCD938X_CLASSH_CTRL_CCL_2,                            0x80},
+-	{WCD938X_CLASSH_CTRL_CCL_3,                            0x80},
+-	{WCD938X_CLASSH_CTRL_CCL_4,                            0x51},
+-	{WCD938X_CLASSH_CTRL_CCL_5,                            0x00},
+-	{WCD938X_CLASSH_BUCK_TMUX_A_D,                         0x00},
+-	{WCD938X_CLASSH_BUCK_SW_DRV_CNTL,                      0x77},
+-	{WCD938X_CLASSH_SPARE,                                 0x00},
+-	{WCD938X_FLYBACK_EN,                                   0x4E},
+-	{WCD938X_FLYBACK_VNEG_CTRL_1,                          0x0B},
+-	{WCD938X_FLYBACK_VNEG_CTRL_2,                          0x45},
+-	{WCD938X_FLYBACK_VNEG_CTRL_3,                          0x74},
+-	{WCD938X_FLYBACK_VNEG_CTRL_4,                          0x7F},
+-	{WCD938X_FLYBACK_VNEG_CTRL_5,                          0x83},
+-	{WCD938X_FLYBACK_VNEG_CTRL_6,                          0x98},
+-	{WCD938X_FLYBACK_VNEG_CTRL_7,                          0xA9},
+-	{WCD938X_FLYBACK_VNEG_CTRL_8,                          0x68},
+-	{WCD938X_FLYBACK_VNEG_CTRL_9,                          0x64},
+-	{WCD938X_FLYBACK_VNEGDAC_CTRL_1,                       0xED},
+-	{WCD938X_FLYBACK_VNEGDAC_CTRL_2,                       0xF0},
+-	{WCD938X_FLYBACK_VNEGDAC_CTRL_3,                       0xA6},
+-	{WCD938X_FLYBACK_CTRL_1,                               0x65},
+-	{WCD938X_FLYBACK_TEST_CTL,                             0x00},
+-	{WCD938X_RX_AUX_SW_CTL,                                0x00},
+-	{WCD938X_RX_PA_AUX_IN_CONN,                            0x01},
+-	{WCD938X_RX_TIMER_DIV,                                 0x32},
+-	{WCD938X_RX_OCP_CTL,                                   0x1F},
+-	{WCD938X_RX_OCP_COUNT,                                 0x77},
+-	{WCD938X_RX_BIAS_EAR_DAC,                              0xA0},
+-	{WCD938X_RX_BIAS_EAR_AMP,                              0xAA},
+-	{WCD938X_RX_BIAS_HPH_LDO,                              0xA9},
+-	{WCD938X_RX_BIAS_HPH_PA,                               0xAA},
+-	{WCD938X_RX_BIAS_HPH_RDACBUFF_CNP2,                    0x8A},
+-	{WCD938X_RX_BIAS_HPH_RDAC_LDO,                         0x88},
+-	{WCD938X_RX_BIAS_HPH_CNP1,                             0x82},
+-	{WCD938X_RX_BIAS_HPH_LOWPOWER,                         0x82},
+-	{WCD938X_RX_BIAS_AUX_DAC,                              0xA0},
+-	{WCD938X_RX_BIAS_AUX_AMP,                              0xAA},
+-	{WCD938X_RX_BIAS_VNEGDAC_BLEEDER,                      0x50},
+-	{WCD938X_RX_BIAS_MISC,                                 0x00},
+-	{WCD938X_RX_BIAS_BUCK_RST,                             0x08},
+-	{WCD938X_RX_BIAS_BUCK_VREF_ERRAMP,                     0x44},
+-	{WCD938X_RX_BIAS_FLYB_ERRAMP,                          0x40},
+-	{WCD938X_RX_BIAS_FLYB_BUFF,                            0xAA},
+-	{WCD938X_RX_BIAS_FLYB_MID_RST,                         0x14},
+-	{WCD938X_HPH_L_STATUS,                                 0x04},
+-	{WCD938X_HPH_R_STATUS,                                 0x04},
+-	{WCD938X_HPH_CNP_EN,                                   0x80},
+-	{WCD938X_HPH_CNP_WG_CTL,                               0x9A},
+-	{WCD938X_HPH_CNP_WG_TIME,                              0x14},
+-	{WCD938X_HPH_OCP_CTL,                                  0x28},
+-	{WCD938X_HPH_AUTO_CHOP,                                0x16},
+-	{WCD938X_HPH_CHOP_CTL,                                 0x83},
+-	{WCD938X_HPH_PA_CTL1,                                  0x46},
+-	{WCD938X_HPH_PA_CTL2,                                  0x50},
+-	{WCD938X_HPH_L_EN,                                     0x80},
+-	{WCD938X_HPH_L_TEST,                                   0xE0},
+-	{WCD938X_HPH_L_ATEST,                                  0x50},
+-	{WCD938X_HPH_R_EN,                                     0x80},
+-	{WCD938X_HPH_R_TEST,                                   0xE0},
+-	{WCD938X_HPH_R_ATEST,                                  0x54},
+-	{WCD938X_HPH_RDAC_CLK_CTL1,                            0x99},
+-	{WCD938X_HPH_RDAC_CLK_CTL2,                            0x9B},
+-	{WCD938X_HPH_RDAC_LDO_CTL,                             0x33},
+-	{WCD938X_HPH_RDAC_CHOP_CLK_LP_CTL,                     0x00},
+-	{WCD938X_HPH_REFBUFF_UHQA_CTL,                         0x68},
+-	{WCD938X_HPH_REFBUFF_LP_CTL,                           0x0E},
+-	{WCD938X_HPH_L_DAC_CTL,                                0x20},
+-	{WCD938X_HPH_R_DAC_CTL,                                0x20},
+-	{WCD938X_HPH_SURGE_HPHLR_SURGE_COMP_SEL,               0x55},
+-	{WCD938X_HPH_SURGE_HPHLR_SURGE_EN,                     0x19},
+-	{WCD938X_HPH_SURGE_HPHLR_SURGE_MISC1,                  0xA0},
+-	{WCD938X_HPH_SURGE_HPHLR_SURGE_STATUS,                 0x00},
+-	{WCD938X_EAR_EAR_EN_REG,                               0x22},
+-	{WCD938X_EAR_EAR_PA_CON,                               0x44},
+-	{WCD938X_EAR_EAR_SP_CON,                               0xDB},
+-	{WCD938X_EAR_EAR_DAC_CON,                              0x80},
+-	{WCD938X_EAR_EAR_CNP_FSM_CON,                          0xB2},
+-	{WCD938X_EAR_TEST_CTL,                                 0x00},
+-	{WCD938X_EAR_STATUS_REG_1,                             0x00},
+-	{WCD938X_EAR_STATUS_REG_2,                             0x08},
+-	{WCD938X_ANA_NEW_PAGE_REGISTER,                        0x00},
+-	{WCD938X_HPH_NEW_ANA_HPH2,                             0x00},
+-	{WCD938X_HPH_NEW_ANA_HPH3,                             0x00},
+-	{WCD938X_SLEEP_CTL,                                    0x16},
+-	{WCD938X_SLEEP_WATCHDOG_CTL,                           0x00},
+-	{WCD938X_MBHC_NEW_ELECT_REM_CLAMP_CTL,                 0x00},
+-	{WCD938X_MBHC_NEW_CTL_1,                               0x02},
+-	{WCD938X_MBHC_NEW_CTL_2,                               0x05},
+-	{WCD938X_MBHC_NEW_PLUG_DETECT_CTL,                     0xE9},
+-	{WCD938X_MBHC_NEW_ZDET_ANA_CTL,                        0x0F},
+-	{WCD938X_MBHC_NEW_ZDET_RAMP_CTL,                       0x00},
+-	{WCD938X_MBHC_NEW_FSM_STATUS,                          0x00},
+-	{WCD938X_MBHC_NEW_ADC_RESULT,                          0x00},
+-	{WCD938X_TX_NEW_AMIC_MUX_CFG,                          0x00},
+-	{WCD938X_AUX_AUXPA,                                    0x00},
+-	{WCD938X_LDORXTX_MODE,                                 0x0C},
+-	{WCD938X_LDORXTX_CONFIG,                               0x10},
+-	{WCD938X_DIE_CRACK_DIE_CRK_DET_EN,                     0x00},
+-	{WCD938X_DIE_CRACK_DIE_CRK_DET_OUT,                    0x00},
+-	{WCD938X_HPH_NEW_INT_RDAC_GAIN_CTL,                    0x40},
+-	{WCD938X_HPH_NEW_INT_RDAC_HD2_CTL_L,                   0x81},
+-	{WCD938X_HPH_NEW_INT_RDAC_VREF_CTL,                    0x10},
+-	{WCD938X_HPH_NEW_INT_RDAC_OVERRIDE_CTL,                0x00},
+-	{WCD938X_HPH_NEW_INT_RDAC_HD2_CTL_R,                   0x81},
+-	{WCD938X_HPH_NEW_INT_PA_MISC1,                         0x22},
+-	{WCD938X_HPH_NEW_INT_PA_MISC2,                         0x00},
+-	{WCD938X_HPH_NEW_INT_PA_RDAC_MISC,                     0x00},
+-	{WCD938X_HPH_NEW_INT_HPH_TIMER1,                       0xFE},
+-	{WCD938X_HPH_NEW_INT_HPH_TIMER2,                       0x02},
+-	{WCD938X_HPH_NEW_INT_HPH_TIMER3,                       0x4E},
+-	{WCD938X_HPH_NEW_INT_HPH_TIMER4,                       0x54},
+-	{WCD938X_HPH_NEW_INT_PA_RDAC_MISC2,                    0x00},
+-	{WCD938X_HPH_NEW_INT_PA_RDAC_MISC3,                    0x00},
+-	{WCD938X_HPH_NEW_INT_RDAC_HD2_CTL_L_NEW,               0x90},
+-	{WCD938X_HPH_NEW_INT_RDAC_HD2_CTL_R_NEW,               0x90},
+-	{WCD938X_RX_NEW_INT_HPH_RDAC_BIAS_LOHIFI,              0x62},
+-	{WCD938X_RX_NEW_INT_HPH_RDAC_BIAS_ULP,                 0x01},
+-	{WCD938X_RX_NEW_INT_HPH_RDAC_LDO_LP,                   0x11},
+-	{WCD938X_MBHC_NEW_INT_MOISTURE_DET_DC_CTRL,            0x57},
+-	{WCD938X_MBHC_NEW_INT_MOISTURE_DET_POLLING_CTRL,       0x01},
+-	{WCD938X_MBHC_NEW_INT_MECH_DET_CURRENT,                0x00},
+-	{WCD938X_MBHC_NEW_INT_SPARE_2,                         0x00},
+-	{WCD938X_EAR_INT_NEW_EAR_CHOPPER_CON,                  0xA8},
+-	{WCD938X_EAR_INT_NEW_CNP_VCM_CON1,                     0x42},
+-	{WCD938X_EAR_INT_NEW_CNP_VCM_CON2,                     0x22},
+-	{WCD938X_EAR_INT_NEW_EAR_DYNAMIC_BIAS,                 0x00},
+-	{WCD938X_AUX_INT_EN_REG,                               0x00},
+-	{WCD938X_AUX_INT_PA_CTRL,                              0x06},
+-	{WCD938X_AUX_INT_SP_CTRL,                              0xD2},
+-	{WCD938X_AUX_INT_DAC_CTRL,                             0x80},
+-	{WCD938X_AUX_INT_CLK_CTRL,                             0x50},
+-	{WCD938X_AUX_INT_TEST_CTRL,                            0x00},
+-	{WCD938X_AUX_INT_STATUS_REG,                           0x00},
+-	{WCD938X_AUX_INT_MISC,                                 0x00},
+-	{WCD938X_LDORXTX_INT_BIAS,                             0x6E},
+-	{WCD938X_LDORXTX_INT_STB_LOADS_DTEST,                  0x50},
+-	{WCD938X_LDORXTX_INT_TEST0,                            0x1C},
+-	{WCD938X_LDORXTX_INT_STARTUP_TIMER,                    0xFF},
+-	{WCD938X_LDORXTX_INT_TEST1,                            0x1F},
+-	{WCD938X_LDORXTX_INT_STATUS,                           0x00},
+-	{WCD938X_SLEEP_INT_WATCHDOG_CTL_1,                     0x0A},
+-	{WCD938X_SLEEP_INT_WATCHDOG_CTL_2,                     0x0A},
+-	{WCD938X_DIE_CRACK_INT_DIE_CRK_DET_INT1,               0x02},
+-	{WCD938X_DIE_CRACK_INT_DIE_CRK_DET_INT2,               0x60},
+-	{WCD938X_TX_COM_NEW_INT_TXFE_DIVSTOP_L2,               0xFF},
+-	{WCD938X_TX_COM_NEW_INT_TXFE_DIVSTOP_L1,               0x7F},
+-	{WCD938X_TX_COM_NEW_INT_TXFE_DIVSTOP_L0,               0x3F},
+-	{WCD938X_TX_COM_NEW_INT_TXFE_DIVSTOP_ULP1P2M,          0x1F},
+-	{WCD938X_TX_COM_NEW_INT_TXFE_DIVSTOP_ULP0P6M,          0x0F},
+-	{WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG1_L2L1,          0xD7},
+-	{WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG1_L0,            0xC8},
+-	{WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG1_ULP,           0xC6},
+-	{WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG2MAIN_L2L1,      0xD5},
+-	{WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG2MAIN_L0,        0xCA},
+-	{WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG2MAIN_ULP,       0x05},
+-	{WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG2CASC_L2L1L0,    0xA5},
+-	{WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG2CASC_ULP,       0x13},
+-	{WCD938X_TX_COM_NEW_INT_TXADC_SCBIAS_L2L1,             0x88},
+-	{WCD938X_TX_COM_NEW_INT_TXADC_SCBIAS_L0ULP,            0x42},
+-	{WCD938X_TX_COM_NEW_INT_TXADC_INT_L2,                  0xFF},
+-	{WCD938X_TX_COM_NEW_INT_TXADC_INT_L1,                  0x64},
+-	{WCD938X_TX_COM_NEW_INT_TXADC_INT_L0,                  0x64},
+-	{WCD938X_TX_COM_NEW_INT_TXADC_INT_ULP,                 0x77},
+-	{WCD938X_DIGITAL_PAGE_REGISTER,                        0x00},
+-	{WCD938X_DIGITAL_CHIP_ID0,                             0x00},
+-	{WCD938X_DIGITAL_CHIP_ID1,                             0x00},
+-	{WCD938X_DIGITAL_CHIP_ID2,                             0x0D},
+-	{WCD938X_DIGITAL_CHIP_ID3,                             0x01},
+-	{WCD938X_DIGITAL_SWR_TX_CLK_RATE,                      0x00},
+-	{WCD938X_DIGITAL_CDC_RST_CTL,                          0x03},
+-	{WCD938X_DIGITAL_TOP_CLK_CFG,                          0x00},
+-	{WCD938X_DIGITAL_CDC_ANA_CLK_CTL,                      0x00},
+-	{WCD938X_DIGITAL_CDC_DIG_CLK_CTL,                      0xF0},
+-	{WCD938X_DIGITAL_SWR_RST_EN,                           0x00},
+-	{WCD938X_DIGITAL_CDC_PATH_MODE,                        0x55},
+-	{WCD938X_DIGITAL_CDC_RX_RST,                           0x00},
+-	{WCD938X_DIGITAL_CDC_RX0_CTL,                          0xFC},
+-	{WCD938X_DIGITAL_CDC_RX1_CTL,                          0xFC},
+-	{WCD938X_DIGITAL_CDC_RX2_CTL,                          0xFC},
+-	{WCD938X_DIGITAL_CDC_TX_ANA_MODE_0_1,                  0x00},
+-	{WCD938X_DIGITAL_CDC_TX_ANA_MODE_2_3,                  0x00},
+-	{WCD938X_DIGITAL_CDC_COMP_CTL_0,                       0x00},
+-	{WCD938X_DIGITAL_CDC_ANA_TX_CLK_CTL,                   0x1E},
+-	{WCD938X_DIGITAL_CDC_HPH_DSM_A1_0,                     0x00},
+-	{WCD938X_DIGITAL_CDC_HPH_DSM_A1_1,                     0x01},
+-	{WCD938X_DIGITAL_CDC_HPH_DSM_A2_0,                     0x63},
+-	{WCD938X_DIGITAL_CDC_HPH_DSM_A2_1,                     0x04},
+-	{WCD938X_DIGITAL_CDC_HPH_DSM_A3_0,                     0xAC},
+-	{WCD938X_DIGITAL_CDC_HPH_DSM_A3_1,                     0x04},
+-	{WCD938X_DIGITAL_CDC_HPH_DSM_A4_0,                     0x1A},
+-	{WCD938X_DIGITAL_CDC_HPH_DSM_A4_1,                     0x03},
+-	{WCD938X_DIGITAL_CDC_HPH_DSM_A5_0,                     0xBC},
+-	{WCD938X_DIGITAL_CDC_HPH_DSM_A5_1,                     0x02},
+-	{WCD938X_DIGITAL_CDC_HPH_DSM_A6_0,                     0xC7},
+-	{WCD938X_DIGITAL_CDC_HPH_DSM_A7_0,                     0xF8},
+-	{WCD938X_DIGITAL_CDC_HPH_DSM_C_0,                      0x47},
+-	{WCD938X_DIGITAL_CDC_HPH_DSM_C_1,                      0x43},
+-	{WCD938X_DIGITAL_CDC_HPH_DSM_C_2,                      0xB1},
+-	{WCD938X_DIGITAL_CDC_HPH_DSM_C_3,                      0x17},
+-	{WCD938X_DIGITAL_CDC_HPH_DSM_R1,                       0x4D},
+-	{WCD938X_DIGITAL_CDC_HPH_DSM_R2,                       0x29},
+-	{WCD938X_DIGITAL_CDC_HPH_DSM_R3,                       0x34},
+-	{WCD938X_DIGITAL_CDC_HPH_DSM_R4,                       0x59},
+-	{WCD938X_DIGITAL_CDC_HPH_DSM_R5,                       0x66},
+-	{WCD938X_DIGITAL_CDC_HPH_DSM_R6,                       0x87},
+-	{WCD938X_DIGITAL_CDC_HPH_DSM_R7,                       0x64},
+-	{WCD938X_DIGITAL_CDC_AUX_DSM_A1_0,                     0x00},
+-	{WCD938X_DIGITAL_CDC_AUX_DSM_A1_1,                     0x01},
+-	{WCD938X_DIGITAL_CDC_AUX_DSM_A2_0,                     0x96},
+-	{WCD938X_DIGITAL_CDC_AUX_DSM_A2_1,                     0x09},
+-	{WCD938X_DIGITAL_CDC_AUX_DSM_A3_0,                     0xAB},
+-	{WCD938X_DIGITAL_CDC_AUX_DSM_A3_1,                     0x05},
+-	{WCD938X_DIGITAL_CDC_AUX_DSM_A4_0,                     0x1C},
+-	{WCD938X_DIGITAL_CDC_AUX_DSM_A4_1,                     0x02},
+-	{WCD938X_DIGITAL_CDC_AUX_DSM_A5_0,                     0x17},
+-	{WCD938X_DIGITAL_CDC_AUX_DSM_A5_1,                     0x02},
+-	{WCD938X_DIGITAL_CDC_AUX_DSM_A6_0,                     0xAA},
+-	{WCD938X_DIGITAL_CDC_AUX_DSM_A7_0,                     0xE3},
+-	{WCD938X_DIGITAL_CDC_AUX_DSM_C_0,                      0x69},
+-	{WCD938X_DIGITAL_CDC_AUX_DSM_C_1,                      0x54},
+-	{WCD938X_DIGITAL_CDC_AUX_DSM_C_2,                      0x02},
+-	{WCD938X_DIGITAL_CDC_AUX_DSM_C_3,                      0x15},
+-	{WCD938X_DIGITAL_CDC_AUX_DSM_R1,                       0xA4},
+-	{WCD938X_DIGITAL_CDC_AUX_DSM_R2,                       0xB5},
+-	{WCD938X_DIGITAL_CDC_AUX_DSM_R3,                       0x86},
+-	{WCD938X_DIGITAL_CDC_AUX_DSM_R4,                       0x85},
+-	{WCD938X_DIGITAL_CDC_AUX_DSM_R5,                       0xAA},
+-	{WCD938X_DIGITAL_CDC_AUX_DSM_R6,                       0xE2},
+-	{WCD938X_DIGITAL_CDC_AUX_DSM_R7,                       0x62},
+-	{WCD938X_DIGITAL_CDC_HPH_GAIN_RX_0,                    0x55},
+-	{WCD938X_DIGITAL_CDC_HPH_GAIN_RX_1,                    0xA9},
+-	{WCD938X_DIGITAL_CDC_HPH_GAIN_DSD_0,                   0x3D},
+-	{WCD938X_DIGITAL_CDC_HPH_GAIN_DSD_1,                   0x2E},
+-	{WCD938X_DIGITAL_CDC_HPH_GAIN_DSD_2,                   0x01},
+-	{WCD938X_DIGITAL_CDC_AUX_GAIN_DSD_0,                   0x00},
+-	{WCD938X_DIGITAL_CDC_AUX_GAIN_DSD_1,                   0xFC},
+-	{WCD938X_DIGITAL_CDC_AUX_GAIN_DSD_2,                   0x01},
+-	{WCD938X_DIGITAL_CDC_HPH_GAIN_CTL,                     0x00},
+-	{WCD938X_DIGITAL_CDC_AUX_GAIN_CTL,                     0x00},
+-	{WCD938X_DIGITAL_CDC_EAR_PATH_CTL,                     0x00},
+-	{WCD938X_DIGITAL_CDC_SWR_CLH,                          0x00},
+-	{WCD938X_DIGITAL_SWR_CLH_BYP,                          0x00},
+-	{WCD938X_DIGITAL_CDC_TX0_CTL,                          0x68},
+-	{WCD938X_DIGITAL_CDC_TX1_CTL,                          0x68},
+-	{WCD938X_DIGITAL_CDC_TX2_CTL,                          0x68},
+-	{WCD938X_DIGITAL_CDC_TX_RST,                           0x00},
+-	{WCD938X_DIGITAL_CDC_REQ_CTL,                          0x01},
+-	{WCD938X_DIGITAL_CDC_RST,                              0x00},
+-	{WCD938X_DIGITAL_CDC_AMIC_CTL,                         0x0F},
+-	{WCD938X_DIGITAL_CDC_DMIC_CTL,                         0x04},
+-	{WCD938X_DIGITAL_CDC_DMIC1_CTL,                        0x01},
+-	{WCD938X_DIGITAL_CDC_DMIC2_CTL,                        0x01},
+-	{WCD938X_DIGITAL_CDC_DMIC3_CTL,                        0x01},
+-	{WCD938X_DIGITAL_CDC_DMIC4_CTL,                        0x01},
+-	{WCD938X_DIGITAL_EFUSE_PRG_CTL,                        0x00},
+-	{WCD938X_DIGITAL_EFUSE_CTL,                            0x2B},
+-	{WCD938X_DIGITAL_CDC_DMIC_RATE_1_2,                    0x11},
+-	{WCD938X_DIGITAL_CDC_DMIC_RATE_3_4,                    0x11},
+-	{WCD938X_DIGITAL_PDM_WD_CTL0,                          0x00},
+-	{WCD938X_DIGITAL_PDM_WD_CTL1,                          0x00},
+-	{WCD938X_DIGITAL_PDM_WD_CTL2,                          0x00},
+-	{WCD938X_DIGITAL_INTR_MODE,                            0x00},
+-	{WCD938X_DIGITAL_INTR_MASK_0,                          0xFF},
+-	{WCD938X_DIGITAL_INTR_MASK_1,                          0xFF},
+-	{WCD938X_DIGITAL_INTR_MASK_2,                          0x3F},
+-	{WCD938X_DIGITAL_INTR_STATUS_0,                        0x00},
+-	{WCD938X_DIGITAL_INTR_STATUS_1,                        0x00},
+-	{WCD938X_DIGITAL_INTR_STATUS_2,                        0x00},
+-	{WCD938X_DIGITAL_INTR_CLEAR_0,                         0x00},
+-	{WCD938X_DIGITAL_INTR_CLEAR_1,                         0x00},
+-	{WCD938X_DIGITAL_INTR_CLEAR_2,                         0x00},
+-	{WCD938X_DIGITAL_INTR_LEVEL_0,                         0x00},
+-	{WCD938X_DIGITAL_INTR_LEVEL_1,                         0x00},
+-	{WCD938X_DIGITAL_INTR_LEVEL_2,                         0x00},
+-	{WCD938X_DIGITAL_INTR_SET_0,                           0x00},
+-	{WCD938X_DIGITAL_INTR_SET_1,                           0x00},
+-	{WCD938X_DIGITAL_INTR_SET_2,                           0x00},
+-	{WCD938X_DIGITAL_INTR_TEST_0,                          0x00},
+-	{WCD938X_DIGITAL_INTR_TEST_1,                          0x00},
+-	{WCD938X_DIGITAL_INTR_TEST_2,                          0x00},
+-	{WCD938X_DIGITAL_TX_MODE_DBG_EN,                       0x00},
+-	{WCD938X_DIGITAL_TX_MODE_DBG_0_1,                      0x00},
+-	{WCD938X_DIGITAL_TX_MODE_DBG_2_3,                      0x00},
+-	{WCD938X_DIGITAL_LB_IN_SEL_CTL,                        0x00},
+-	{WCD938X_DIGITAL_LOOP_BACK_MODE,                       0x00},
+-	{WCD938X_DIGITAL_SWR_DAC_TEST,                         0x00},
+-	{WCD938X_DIGITAL_SWR_HM_TEST_RX_0,                     0x40},
+-	{WCD938X_DIGITAL_SWR_HM_TEST_TX_0,                     0x40},
+-	{WCD938X_DIGITAL_SWR_HM_TEST_RX_1,                     0x00},
+-	{WCD938X_DIGITAL_SWR_HM_TEST_TX_1,                     0x00},
+-	{WCD938X_DIGITAL_SWR_HM_TEST_TX_2,                     0x00},
+-	{WCD938X_DIGITAL_SWR_HM_TEST_0,                        0x00},
+-	{WCD938X_DIGITAL_SWR_HM_TEST_1,                        0x00},
+-	{WCD938X_DIGITAL_PAD_CTL_SWR_0,                        0x8F},
+-	{WCD938X_DIGITAL_PAD_CTL_SWR_1,                        0x06},
+-	{WCD938X_DIGITAL_I2C_CTL,                              0x00},
+-	{WCD938X_DIGITAL_CDC_TX_TANGGU_SW_MODE,                0x00},
+-	{WCD938X_DIGITAL_EFUSE_TEST_CTL_0,                     0x00},
+-	{WCD938X_DIGITAL_EFUSE_TEST_CTL_1,                     0x00},
+-	{WCD938X_DIGITAL_EFUSE_T_DATA_0,                       0x00},
+-	{WCD938X_DIGITAL_EFUSE_T_DATA_1,                       0x00},
+-	{WCD938X_DIGITAL_PAD_CTL_PDM_RX0,                      0xF1},
+-	{WCD938X_DIGITAL_PAD_CTL_PDM_RX1,                      0xF1},
+-	{WCD938X_DIGITAL_PAD_CTL_PDM_TX0,                      0xF1},
+-	{WCD938X_DIGITAL_PAD_CTL_PDM_TX1,                      0xF1},
+-	{WCD938X_DIGITAL_PAD_CTL_PDM_TX2,                      0xF1},
+-	{WCD938X_DIGITAL_PAD_INP_DIS_0,                        0x00},
+-	{WCD938X_DIGITAL_PAD_INP_DIS_1,                        0x00},
+-	{WCD938X_DIGITAL_DRIVE_STRENGTH_0,                     0x00},
+-	{WCD938X_DIGITAL_DRIVE_STRENGTH_1,                     0x00},
+-	{WCD938X_DIGITAL_DRIVE_STRENGTH_2,                     0x00},
+-	{WCD938X_DIGITAL_RX_DATA_EDGE_CTL,                     0x1F},
+-	{WCD938X_DIGITAL_TX_DATA_EDGE_CTL,                     0x80},
+-	{WCD938X_DIGITAL_GPIO_MODE,                            0x00},
+-	{WCD938X_DIGITAL_PIN_CTL_OE,                           0x00},
+-	{WCD938X_DIGITAL_PIN_CTL_DATA_0,                       0x00},
+-	{WCD938X_DIGITAL_PIN_CTL_DATA_1,                       0x00},
+-	{WCD938X_DIGITAL_PIN_STATUS_0,                         0x00},
+-	{WCD938X_DIGITAL_PIN_STATUS_1,                         0x00},
+-	{WCD938X_DIGITAL_DIG_DEBUG_CTL,                        0x00},
+-	{WCD938X_DIGITAL_DIG_DEBUG_EN,                         0x00},
+-	{WCD938X_DIGITAL_ANA_CSR_DBG_ADD,                      0x00},
+-	{WCD938X_DIGITAL_ANA_CSR_DBG_CTL,                      0x48},
+-	{WCD938X_DIGITAL_SSP_DBG,                              0x00},
+-	{WCD938X_DIGITAL_MODE_STATUS_0,                        0x00},
+-	{WCD938X_DIGITAL_MODE_STATUS_1,                        0x00},
+-	{WCD938X_DIGITAL_SPARE_0,                              0x00},
+-	{WCD938X_DIGITAL_SPARE_1,                              0x00},
+-	{WCD938X_DIGITAL_SPARE_2,                              0x00},
+-	{WCD938X_DIGITAL_EFUSE_REG_0,                          0x00},
+-	{WCD938X_DIGITAL_EFUSE_REG_1,                          0xFF},
+-	{WCD938X_DIGITAL_EFUSE_REG_2,                          0xFF},
+-	{WCD938X_DIGITAL_EFUSE_REG_3,                          0xFF},
+-	{WCD938X_DIGITAL_EFUSE_REG_4,                          0xFF},
+-	{WCD938X_DIGITAL_EFUSE_REG_5,                          0xFF},
+-	{WCD938X_DIGITAL_EFUSE_REG_6,                          0xFF},
+-	{WCD938X_DIGITAL_EFUSE_REG_7,                          0xFF},
+-	{WCD938X_DIGITAL_EFUSE_REG_8,                          0xFF},
+-	{WCD938X_DIGITAL_EFUSE_REG_9,                          0xFF},
+-	{WCD938X_DIGITAL_EFUSE_REG_10,                         0xFF},
+-	{WCD938X_DIGITAL_EFUSE_REG_11,                         0xFF},
+-	{WCD938X_DIGITAL_EFUSE_REG_12,                         0xFF},
+-	{WCD938X_DIGITAL_EFUSE_REG_13,                         0xFF},
+-	{WCD938X_DIGITAL_EFUSE_REG_14,                         0xFF},
+-	{WCD938X_DIGITAL_EFUSE_REG_15,                         0xFF},
+-	{WCD938X_DIGITAL_EFUSE_REG_16,                         0xFF},
+-	{WCD938X_DIGITAL_EFUSE_REG_17,                         0xFF},
+-	{WCD938X_DIGITAL_EFUSE_REG_18,                         0xFF},
+-	{WCD938X_DIGITAL_EFUSE_REG_19,                         0xFF},
+-	{WCD938X_DIGITAL_EFUSE_REG_20,                         0x0E},
+-	{WCD938X_DIGITAL_EFUSE_REG_21,                         0x00},
+-	{WCD938X_DIGITAL_EFUSE_REG_22,                         0x00},
+-	{WCD938X_DIGITAL_EFUSE_REG_23,                         0xF8},
+-	{WCD938X_DIGITAL_EFUSE_REG_24,                         0x16},
+-	{WCD938X_DIGITAL_EFUSE_REG_25,                         0x00},
+-	{WCD938X_DIGITAL_EFUSE_REG_26,                         0x00},
+-	{WCD938X_DIGITAL_EFUSE_REG_27,                         0x00},
+-	{WCD938X_DIGITAL_EFUSE_REG_28,                         0x00},
+-	{WCD938X_DIGITAL_EFUSE_REG_29,                         0x00},
+-	{WCD938X_DIGITAL_EFUSE_REG_30,                         0x00},
+-	{WCD938X_DIGITAL_EFUSE_REG_31,                         0x00},
+-	{WCD938X_DIGITAL_TX_REQ_FB_CTL_0,                      0x88},
+-	{WCD938X_DIGITAL_TX_REQ_FB_CTL_1,                      0x88},
+-	{WCD938X_DIGITAL_TX_REQ_FB_CTL_2,                      0x88},
+-	{WCD938X_DIGITAL_TX_REQ_FB_CTL_3,                      0x88},
+-	{WCD938X_DIGITAL_TX_REQ_FB_CTL_4,                      0x88},
+-	{WCD938X_DIGITAL_DEM_BYPASS_DATA0,                     0x55},
+-	{WCD938X_DIGITAL_DEM_BYPASS_DATA1,                     0x55},
+-	{WCD938X_DIGITAL_DEM_BYPASS_DATA2,                     0x55},
+-	{WCD938X_DIGITAL_DEM_BYPASS_DATA3,                     0x01},
+-};
+-
+-static bool wcd938x_rdwr_register(struct device *dev, unsigned int reg)
+-{
+-	switch (reg) {
+-	case WCD938X_ANA_PAGE_REGISTER:
+-	case WCD938X_ANA_BIAS:
+-	case WCD938X_ANA_RX_SUPPLIES:
+-	case WCD938X_ANA_HPH:
+-	case WCD938X_ANA_EAR:
+-	case WCD938X_ANA_EAR_COMPANDER_CTL:
+-	case WCD938X_ANA_TX_CH1:
+-	case WCD938X_ANA_TX_CH2:
+-	case WCD938X_ANA_TX_CH3:
+-	case WCD938X_ANA_TX_CH4:
+-	case WCD938X_ANA_MICB1_MICB2_DSP_EN_LOGIC:
+-	case WCD938X_ANA_MICB3_DSP_EN_LOGIC:
+-	case WCD938X_ANA_MBHC_MECH:
+-	case WCD938X_ANA_MBHC_ELECT:
+-	case WCD938X_ANA_MBHC_ZDET:
+-	case WCD938X_ANA_MBHC_BTN0:
+-	case WCD938X_ANA_MBHC_BTN1:
+-	case WCD938X_ANA_MBHC_BTN2:
+-	case WCD938X_ANA_MBHC_BTN3:
+-	case WCD938X_ANA_MBHC_BTN4:
+-	case WCD938X_ANA_MBHC_BTN5:
+-	case WCD938X_ANA_MBHC_BTN6:
+-	case WCD938X_ANA_MBHC_BTN7:
+-	case WCD938X_ANA_MICB1:
+-	case WCD938X_ANA_MICB2:
+-	case WCD938X_ANA_MICB2_RAMP:
+-	case WCD938X_ANA_MICB3:
+-	case WCD938X_ANA_MICB4:
+-	case WCD938X_BIAS_CTL:
+-	case WCD938X_BIAS_VBG_FINE_ADJ:
+-	case WCD938X_LDOL_VDDCX_ADJUST:
+-	case WCD938X_LDOL_DISABLE_LDOL:
+-	case WCD938X_MBHC_CTL_CLK:
+-	case WCD938X_MBHC_CTL_ANA:
+-	case WCD938X_MBHC_CTL_SPARE_1:
+-	case WCD938X_MBHC_CTL_SPARE_2:
+-	case WCD938X_MBHC_CTL_BCS:
+-	case WCD938X_MBHC_TEST_CTL:
+-	case WCD938X_LDOH_MODE:
+-	case WCD938X_LDOH_BIAS:
+-	case WCD938X_LDOH_STB_LOADS:
+-	case WCD938X_LDOH_SLOWRAMP:
+-	case WCD938X_MICB1_TEST_CTL_1:
+-	case WCD938X_MICB1_TEST_CTL_2:
+-	case WCD938X_MICB1_TEST_CTL_3:
+-	case WCD938X_MICB2_TEST_CTL_1:
+-	case WCD938X_MICB2_TEST_CTL_2:
+-	case WCD938X_MICB2_TEST_CTL_3:
+-	case WCD938X_MICB3_TEST_CTL_1:
+-	case WCD938X_MICB3_TEST_CTL_2:
+-	case WCD938X_MICB3_TEST_CTL_3:
+-	case WCD938X_MICB4_TEST_CTL_1:
+-	case WCD938X_MICB4_TEST_CTL_2:
+-	case WCD938X_MICB4_TEST_CTL_3:
+-	case WCD938X_TX_COM_ADC_VCM:
+-	case WCD938X_TX_COM_BIAS_ATEST:
+-	case WCD938X_TX_COM_SPARE1:
+-	case WCD938X_TX_COM_SPARE2:
+-	case WCD938X_TX_COM_TXFE_DIV_CTL:
+-	case WCD938X_TX_COM_TXFE_DIV_START:
+-	case WCD938X_TX_COM_SPARE3:
+-	case WCD938X_TX_COM_SPARE4:
+-	case WCD938X_TX_1_2_TEST_EN:
+-	case WCD938X_TX_1_2_ADC_IB:
+-	case WCD938X_TX_1_2_ATEST_REFCTL:
+-	case WCD938X_TX_1_2_TEST_CTL:
+-	case WCD938X_TX_1_2_TEST_BLK_EN1:
+-	case WCD938X_TX_1_2_TXFE1_CLKDIV:
+-	case WCD938X_TX_3_4_TEST_EN:
+-	case WCD938X_TX_3_4_ADC_IB:
+-	case WCD938X_TX_3_4_ATEST_REFCTL:
+-	case WCD938X_TX_3_4_TEST_CTL:
+-	case WCD938X_TX_3_4_TEST_BLK_EN3:
+-	case WCD938X_TX_3_4_TXFE3_CLKDIV:
+-	case WCD938X_TX_3_4_TEST_BLK_EN2:
+-	case WCD938X_TX_3_4_TXFE2_CLKDIV:
+-	case WCD938X_TX_3_4_SPARE1:
+-	case WCD938X_TX_3_4_TEST_BLK_EN4:
+-	case WCD938X_TX_3_4_TXFE4_CLKDIV:
+-	case WCD938X_TX_3_4_SPARE2:
+-	case WCD938X_CLASSH_MODE_1:
+-	case WCD938X_CLASSH_MODE_2:
+-	case WCD938X_CLASSH_MODE_3:
+-	case WCD938X_CLASSH_CTRL_VCL_1:
+-	case WCD938X_CLASSH_CTRL_VCL_2:
+-	case WCD938X_CLASSH_CTRL_CCL_1:
+-	case WCD938X_CLASSH_CTRL_CCL_2:
+-	case WCD938X_CLASSH_CTRL_CCL_3:
+-	case WCD938X_CLASSH_CTRL_CCL_4:
+-	case WCD938X_CLASSH_CTRL_CCL_5:
+-	case WCD938X_CLASSH_BUCK_TMUX_A_D:
+-	case WCD938X_CLASSH_BUCK_SW_DRV_CNTL:
+-	case WCD938X_CLASSH_SPARE:
+-	case WCD938X_FLYBACK_EN:
+-	case WCD938X_FLYBACK_VNEG_CTRL_1:
+-	case WCD938X_FLYBACK_VNEG_CTRL_2:
+-	case WCD938X_FLYBACK_VNEG_CTRL_3:
+-	case WCD938X_FLYBACK_VNEG_CTRL_4:
+-	case WCD938X_FLYBACK_VNEG_CTRL_5:
+-	case WCD938X_FLYBACK_VNEG_CTRL_6:
+-	case WCD938X_FLYBACK_VNEG_CTRL_7:
+-	case WCD938X_FLYBACK_VNEG_CTRL_8:
+-	case WCD938X_FLYBACK_VNEG_CTRL_9:
+-	case WCD938X_FLYBACK_VNEGDAC_CTRL_1:
+-	case WCD938X_FLYBACK_VNEGDAC_CTRL_2:
+-	case WCD938X_FLYBACK_VNEGDAC_CTRL_3:
+-	case WCD938X_FLYBACK_CTRL_1:
+-	case WCD938X_FLYBACK_TEST_CTL:
+-	case WCD938X_RX_AUX_SW_CTL:
+-	case WCD938X_RX_PA_AUX_IN_CONN:
+-	case WCD938X_RX_TIMER_DIV:
+-	case WCD938X_RX_OCP_CTL:
+-	case WCD938X_RX_OCP_COUNT:
+-	case WCD938X_RX_BIAS_EAR_DAC:
+-	case WCD938X_RX_BIAS_EAR_AMP:
+-	case WCD938X_RX_BIAS_HPH_LDO:
+-	case WCD938X_RX_BIAS_HPH_PA:
+-	case WCD938X_RX_BIAS_HPH_RDACBUFF_CNP2:
+-	case WCD938X_RX_BIAS_HPH_RDAC_LDO:
+-	case WCD938X_RX_BIAS_HPH_CNP1:
+-	case WCD938X_RX_BIAS_HPH_LOWPOWER:
+-	case WCD938X_RX_BIAS_AUX_DAC:
+-	case WCD938X_RX_BIAS_AUX_AMP:
+-	case WCD938X_RX_BIAS_VNEGDAC_BLEEDER:
+-	case WCD938X_RX_BIAS_MISC:
+-	case WCD938X_RX_BIAS_BUCK_RST:
+-	case WCD938X_RX_BIAS_BUCK_VREF_ERRAMP:
+-	case WCD938X_RX_BIAS_FLYB_ERRAMP:
+-	case WCD938X_RX_BIAS_FLYB_BUFF:
+-	case WCD938X_RX_BIAS_FLYB_MID_RST:
+-	case WCD938X_HPH_CNP_EN:
+-	case WCD938X_HPH_CNP_WG_CTL:
+-	case WCD938X_HPH_CNP_WG_TIME:
+-	case WCD938X_HPH_OCP_CTL:
+-	case WCD938X_HPH_AUTO_CHOP:
+-	case WCD938X_HPH_CHOP_CTL:
+-	case WCD938X_HPH_PA_CTL1:
+-	case WCD938X_HPH_PA_CTL2:
+-	case WCD938X_HPH_L_EN:
+-	case WCD938X_HPH_L_TEST:
+-	case WCD938X_HPH_L_ATEST:
+-	case WCD938X_HPH_R_EN:
+-	case WCD938X_HPH_R_TEST:
+-	case WCD938X_HPH_R_ATEST:
+-	case WCD938X_HPH_RDAC_CLK_CTL1:
+-	case WCD938X_HPH_RDAC_CLK_CTL2:
+-	case WCD938X_HPH_RDAC_LDO_CTL:
+-	case WCD938X_HPH_RDAC_CHOP_CLK_LP_CTL:
+-	case WCD938X_HPH_REFBUFF_UHQA_CTL:
+-	case WCD938X_HPH_REFBUFF_LP_CTL:
+-	case WCD938X_HPH_L_DAC_CTL:
+-	case WCD938X_HPH_R_DAC_CTL:
+-	case WCD938X_HPH_SURGE_HPHLR_SURGE_COMP_SEL:
+-	case WCD938X_HPH_SURGE_HPHLR_SURGE_EN:
+-	case WCD938X_HPH_SURGE_HPHLR_SURGE_MISC1:
+-	case WCD938X_EAR_EAR_EN_REG:
+-	case WCD938X_EAR_EAR_PA_CON:
+-	case WCD938X_EAR_EAR_SP_CON:
+-	case WCD938X_EAR_EAR_DAC_CON:
+-	case WCD938X_EAR_EAR_CNP_FSM_CON:
+-	case WCD938X_EAR_TEST_CTL:
+-	case WCD938X_ANA_NEW_PAGE_REGISTER:
+-	case WCD938X_HPH_NEW_ANA_HPH2:
+-	case WCD938X_HPH_NEW_ANA_HPH3:
+-	case WCD938X_SLEEP_CTL:
+-	case WCD938X_SLEEP_WATCHDOG_CTL:
+-	case WCD938X_MBHC_NEW_ELECT_REM_CLAMP_CTL:
+-	case WCD938X_MBHC_NEW_CTL_1:
+-	case WCD938X_MBHC_NEW_CTL_2:
+-	case WCD938X_MBHC_NEW_PLUG_DETECT_CTL:
+-	case WCD938X_MBHC_NEW_ZDET_ANA_CTL:
+-	case WCD938X_MBHC_NEW_ZDET_RAMP_CTL:
+-	case WCD938X_TX_NEW_AMIC_MUX_CFG:
+-	case WCD938X_AUX_AUXPA:
+-	case WCD938X_LDORXTX_MODE:
+-	case WCD938X_LDORXTX_CONFIG:
+-	case WCD938X_DIE_CRACK_DIE_CRK_DET_EN:
+-	case WCD938X_HPH_NEW_INT_RDAC_GAIN_CTL:
+-	case WCD938X_HPH_NEW_INT_RDAC_HD2_CTL_L:
+-	case WCD938X_HPH_NEW_INT_RDAC_VREF_CTL:
+-	case WCD938X_HPH_NEW_INT_RDAC_OVERRIDE_CTL:
+-	case WCD938X_HPH_NEW_INT_RDAC_HD2_CTL_R:
+-	case WCD938X_HPH_NEW_INT_PA_MISC1:
+-	case WCD938X_HPH_NEW_INT_PA_MISC2:
+-	case WCD938X_HPH_NEW_INT_PA_RDAC_MISC:
+-	case WCD938X_HPH_NEW_INT_HPH_TIMER1:
+-	case WCD938X_HPH_NEW_INT_HPH_TIMER2:
+-	case WCD938X_HPH_NEW_INT_HPH_TIMER3:
+-	case WCD938X_HPH_NEW_INT_HPH_TIMER4:
+-	case WCD938X_HPH_NEW_INT_PA_RDAC_MISC2:
+-	case WCD938X_HPH_NEW_INT_PA_RDAC_MISC3:
+-	case WCD938X_HPH_NEW_INT_RDAC_HD2_CTL_L_NEW:
+-	case WCD938X_HPH_NEW_INT_RDAC_HD2_CTL_R_NEW:
+-	case WCD938X_RX_NEW_INT_HPH_RDAC_BIAS_LOHIFI:
+-	case WCD938X_RX_NEW_INT_HPH_RDAC_BIAS_ULP:
+-	case WCD938X_RX_NEW_INT_HPH_RDAC_LDO_LP:
+-	case WCD938X_MBHC_NEW_INT_MOISTURE_DET_DC_CTRL:
+-	case WCD938X_MBHC_NEW_INT_MOISTURE_DET_POLLING_CTRL:
+-	case WCD938X_MBHC_NEW_INT_MECH_DET_CURRENT:
+-	case WCD938X_MBHC_NEW_INT_SPARE_2:
+-	case WCD938X_EAR_INT_NEW_EAR_CHOPPER_CON:
+-	case WCD938X_EAR_INT_NEW_CNP_VCM_CON1:
+-	case WCD938X_EAR_INT_NEW_CNP_VCM_CON2:
+-	case WCD938X_EAR_INT_NEW_EAR_DYNAMIC_BIAS:
+-	case WCD938X_AUX_INT_EN_REG:
+-	case WCD938X_AUX_INT_PA_CTRL:
+-	case WCD938X_AUX_INT_SP_CTRL:
+-	case WCD938X_AUX_INT_DAC_CTRL:
+-	case WCD938X_AUX_INT_CLK_CTRL:
+-	case WCD938X_AUX_INT_TEST_CTRL:
+-	case WCD938X_AUX_INT_MISC:
+-	case WCD938X_LDORXTX_INT_BIAS:
+-	case WCD938X_LDORXTX_INT_STB_LOADS_DTEST:
+-	case WCD938X_LDORXTX_INT_TEST0:
+-	case WCD938X_LDORXTX_INT_STARTUP_TIMER:
+-	case WCD938X_LDORXTX_INT_TEST1:
+-	case WCD938X_SLEEP_INT_WATCHDOG_CTL_1:
+-	case WCD938X_SLEEP_INT_WATCHDOG_CTL_2:
+-	case WCD938X_DIE_CRACK_INT_DIE_CRK_DET_INT1:
+-	case WCD938X_DIE_CRACK_INT_DIE_CRK_DET_INT2:
+-	case WCD938X_TX_COM_NEW_INT_TXFE_DIVSTOP_L2:
+-	case WCD938X_TX_COM_NEW_INT_TXFE_DIVSTOP_L1:
+-	case WCD938X_TX_COM_NEW_INT_TXFE_DIVSTOP_L0:
+-	case WCD938X_TX_COM_NEW_INT_TXFE_DIVSTOP_ULP1P2M:
+-	case WCD938X_TX_COM_NEW_INT_TXFE_DIVSTOP_ULP0P6M:
+-	case WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG1_L2L1:
+-	case WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG1_L0:
+-	case WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG1_ULP:
+-	case WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG2MAIN_L2L1:
+-	case WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG2MAIN_L0:
+-	case WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG2MAIN_ULP:
+-	case WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG2CASC_L2L1L0:
+-	case WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG2CASC_ULP:
+-	case WCD938X_TX_COM_NEW_INT_TXADC_SCBIAS_L2L1:
+-	case WCD938X_TX_COM_NEW_INT_TXADC_SCBIAS_L0ULP:
+-	case WCD938X_TX_COM_NEW_INT_TXADC_INT_L2:
+-	case WCD938X_TX_COM_NEW_INT_TXADC_INT_L1:
+-	case WCD938X_TX_COM_NEW_INT_TXADC_INT_L0:
+-	case WCD938X_TX_COM_NEW_INT_TXADC_INT_ULP:
+-	case WCD938X_DIGITAL_PAGE_REGISTER:
+-	case WCD938X_DIGITAL_SWR_TX_CLK_RATE:
+-	case WCD938X_DIGITAL_CDC_RST_CTL:
+-	case WCD938X_DIGITAL_TOP_CLK_CFG:
+-	case WCD938X_DIGITAL_CDC_ANA_CLK_CTL:
+-	case WCD938X_DIGITAL_CDC_DIG_CLK_CTL:
+-	case WCD938X_DIGITAL_SWR_RST_EN:
+-	case WCD938X_DIGITAL_CDC_PATH_MODE:
+-	case WCD938X_DIGITAL_CDC_RX_RST:
+-	case WCD938X_DIGITAL_CDC_RX0_CTL:
+-	case WCD938X_DIGITAL_CDC_RX1_CTL:
+-	case WCD938X_DIGITAL_CDC_RX2_CTL:
+-	case WCD938X_DIGITAL_CDC_TX_ANA_MODE_0_1:
+-	case WCD938X_DIGITAL_CDC_TX_ANA_MODE_2_3:
+-	case WCD938X_DIGITAL_CDC_COMP_CTL_0:
+-	case WCD938X_DIGITAL_CDC_ANA_TX_CLK_CTL:
+-	case WCD938X_DIGITAL_CDC_HPH_DSM_A1_0:
+-	case WCD938X_DIGITAL_CDC_HPH_DSM_A1_1:
+-	case WCD938X_DIGITAL_CDC_HPH_DSM_A2_0:
+-	case WCD938X_DIGITAL_CDC_HPH_DSM_A2_1:
+-	case WCD938X_DIGITAL_CDC_HPH_DSM_A3_0:
+-	case WCD938X_DIGITAL_CDC_HPH_DSM_A3_1:
+-	case WCD938X_DIGITAL_CDC_HPH_DSM_A4_0:
+-	case WCD938X_DIGITAL_CDC_HPH_DSM_A4_1:
+-	case WCD938X_DIGITAL_CDC_HPH_DSM_A5_0:
+-	case WCD938X_DIGITAL_CDC_HPH_DSM_A5_1:
+-	case WCD938X_DIGITAL_CDC_HPH_DSM_A6_0:
+-	case WCD938X_DIGITAL_CDC_HPH_DSM_A7_0:
+-	case WCD938X_DIGITAL_CDC_HPH_DSM_C_0:
+-	case WCD938X_DIGITAL_CDC_HPH_DSM_C_1:
+-	case WCD938X_DIGITAL_CDC_HPH_DSM_C_2:
+-	case WCD938X_DIGITAL_CDC_HPH_DSM_C_3:
+-	case WCD938X_DIGITAL_CDC_HPH_DSM_R1:
+-	case WCD938X_DIGITAL_CDC_HPH_DSM_R2:
+-	case WCD938X_DIGITAL_CDC_HPH_DSM_R3:
+-	case WCD938X_DIGITAL_CDC_HPH_DSM_R4:
+-	case WCD938X_DIGITAL_CDC_HPH_DSM_R5:
+-	case WCD938X_DIGITAL_CDC_HPH_DSM_R6:
+-	case WCD938X_DIGITAL_CDC_HPH_DSM_R7:
+-	case WCD938X_DIGITAL_CDC_AUX_DSM_A1_0:
+-	case WCD938X_DIGITAL_CDC_AUX_DSM_A1_1:
+-	case WCD938X_DIGITAL_CDC_AUX_DSM_A2_0:
+-	case WCD938X_DIGITAL_CDC_AUX_DSM_A2_1:
+-	case WCD938X_DIGITAL_CDC_AUX_DSM_A3_0:
+-	case WCD938X_DIGITAL_CDC_AUX_DSM_A3_1:
+-	case WCD938X_DIGITAL_CDC_AUX_DSM_A4_0:
+-	case WCD938X_DIGITAL_CDC_AUX_DSM_A4_1:
+-	case WCD938X_DIGITAL_CDC_AUX_DSM_A5_0:
+-	case WCD938X_DIGITAL_CDC_AUX_DSM_A5_1:
+-	case WCD938X_DIGITAL_CDC_AUX_DSM_A6_0:
+-	case WCD938X_DIGITAL_CDC_AUX_DSM_A7_0:
+-	case WCD938X_DIGITAL_CDC_AUX_DSM_C_0:
+-	case WCD938X_DIGITAL_CDC_AUX_DSM_C_1:
+-	case WCD938X_DIGITAL_CDC_AUX_DSM_C_2:
+-	case WCD938X_DIGITAL_CDC_AUX_DSM_C_3:
+-	case WCD938X_DIGITAL_CDC_AUX_DSM_R1:
+-	case WCD938X_DIGITAL_CDC_AUX_DSM_R2:
+-	case WCD938X_DIGITAL_CDC_AUX_DSM_R3:
+-	case WCD938X_DIGITAL_CDC_AUX_DSM_R4:
+-	case WCD938X_DIGITAL_CDC_AUX_DSM_R5:
+-	case WCD938X_DIGITAL_CDC_AUX_DSM_R6:
+-	case WCD938X_DIGITAL_CDC_AUX_DSM_R7:
+-	case WCD938X_DIGITAL_CDC_HPH_GAIN_RX_0:
+-	case WCD938X_DIGITAL_CDC_HPH_GAIN_RX_1:
+-	case WCD938X_DIGITAL_CDC_HPH_GAIN_DSD_0:
+-	case WCD938X_DIGITAL_CDC_HPH_GAIN_DSD_1:
+-	case WCD938X_DIGITAL_CDC_HPH_GAIN_DSD_2:
+-	case WCD938X_DIGITAL_CDC_AUX_GAIN_DSD_0:
+-	case WCD938X_DIGITAL_CDC_AUX_GAIN_DSD_1:
+-	case WCD938X_DIGITAL_CDC_AUX_GAIN_DSD_2:
+-	case WCD938X_DIGITAL_CDC_HPH_GAIN_CTL:
+-	case WCD938X_DIGITAL_CDC_AUX_GAIN_CTL:
+-	case WCD938X_DIGITAL_CDC_EAR_PATH_CTL:
+-	case WCD938X_DIGITAL_CDC_SWR_CLH:
+-	case WCD938X_DIGITAL_SWR_CLH_BYP:
+-	case WCD938X_DIGITAL_CDC_TX0_CTL:
+-	case WCD938X_DIGITAL_CDC_TX1_CTL:
+-	case WCD938X_DIGITAL_CDC_TX2_CTL:
+-	case WCD938X_DIGITAL_CDC_TX_RST:
+-	case WCD938X_DIGITAL_CDC_REQ_CTL:
+-	case WCD938X_DIGITAL_CDC_RST:
+-	case WCD938X_DIGITAL_CDC_AMIC_CTL:
+-	case WCD938X_DIGITAL_CDC_DMIC_CTL:
+-	case WCD938X_DIGITAL_CDC_DMIC1_CTL:
+-	case WCD938X_DIGITAL_CDC_DMIC2_CTL:
+-	case WCD938X_DIGITAL_CDC_DMIC3_CTL:
+-	case WCD938X_DIGITAL_CDC_DMIC4_CTL:
+-	case WCD938X_DIGITAL_EFUSE_PRG_CTL:
+-	case WCD938X_DIGITAL_EFUSE_CTL:
+-	case WCD938X_DIGITAL_CDC_DMIC_RATE_1_2:
+-	case WCD938X_DIGITAL_CDC_DMIC_RATE_3_4:
+-	case WCD938X_DIGITAL_PDM_WD_CTL0:
+-	case WCD938X_DIGITAL_PDM_WD_CTL1:
+-	case WCD938X_DIGITAL_PDM_WD_CTL2:
+-	case WCD938X_DIGITAL_INTR_MODE:
+-	case WCD938X_DIGITAL_INTR_MASK_0:
+-	case WCD938X_DIGITAL_INTR_MASK_1:
+-	case WCD938X_DIGITAL_INTR_MASK_2:
+-	case WCD938X_DIGITAL_INTR_CLEAR_0:
+-	case WCD938X_DIGITAL_INTR_CLEAR_1:
+-	case WCD938X_DIGITAL_INTR_CLEAR_2:
+-	case WCD938X_DIGITAL_INTR_LEVEL_0:
+-	case WCD938X_DIGITAL_INTR_LEVEL_1:
+-	case WCD938X_DIGITAL_INTR_LEVEL_2:
+-	case WCD938X_DIGITAL_INTR_SET_0:
+-	case WCD938X_DIGITAL_INTR_SET_1:
+-	case WCD938X_DIGITAL_INTR_SET_2:
+-	case WCD938X_DIGITAL_INTR_TEST_0:
+-	case WCD938X_DIGITAL_INTR_TEST_1:
+-	case WCD938X_DIGITAL_INTR_TEST_2:
+-	case WCD938X_DIGITAL_TX_MODE_DBG_EN:
+-	case WCD938X_DIGITAL_TX_MODE_DBG_0_1:
+-	case WCD938X_DIGITAL_TX_MODE_DBG_2_3:
+-	case WCD938X_DIGITAL_LB_IN_SEL_CTL:
+-	case WCD938X_DIGITAL_LOOP_BACK_MODE:
+-	case WCD938X_DIGITAL_SWR_DAC_TEST:
+-	case WCD938X_DIGITAL_SWR_HM_TEST_RX_0:
+-	case WCD938X_DIGITAL_SWR_HM_TEST_TX_0:
+-	case WCD938X_DIGITAL_SWR_HM_TEST_RX_1:
+-	case WCD938X_DIGITAL_SWR_HM_TEST_TX_1:
+-	case WCD938X_DIGITAL_SWR_HM_TEST_TX_2:
+-	case WCD938X_DIGITAL_PAD_CTL_SWR_0:
+-	case WCD938X_DIGITAL_PAD_CTL_SWR_1:
+-	case WCD938X_DIGITAL_I2C_CTL:
+-	case WCD938X_DIGITAL_CDC_TX_TANGGU_SW_MODE:
+-	case WCD938X_DIGITAL_EFUSE_TEST_CTL_0:
+-	case WCD938X_DIGITAL_EFUSE_TEST_CTL_1:
+-	case WCD938X_DIGITAL_PAD_CTL_PDM_RX0:
+-	case WCD938X_DIGITAL_PAD_CTL_PDM_RX1:
+-	case WCD938X_DIGITAL_PAD_CTL_PDM_TX0:
+-	case WCD938X_DIGITAL_PAD_CTL_PDM_TX1:
+-	case WCD938X_DIGITAL_PAD_CTL_PDM_TX2:
+-	case WCD938X_DIGITAL_PAD_INP_DIS_0:
+-	case WCD938X_DIGITAL_PAD_INP_DIS_1:
+-	case WCD938X_DIGITAL_DRIVE_STRENGTH_0:
+-	case WCD938X_DIGITAL_DRIVE_STRENGTH_1:
+-	case WCD938X_DIGITAL_DRIVE_STRENGTH_2:
+-	case WCD938X_DIGITAL_RX_DATA_EDGE_CTL:
+-	case WCD938X_DIGITAL_TX_DATA_EDGE_CTL:
+-	case WCD938X_DIGITAL_GPIO_MODE:
+-	case WCD938X_DIGITAL_PIN_CTL_OE:
+-	case WCD938X_DIGITAL_PIN_CTL_DATA_0:
+-	case WCD938X_DIGITAL_PIN_CTL_DATA_1:
+-	case WCD938X_DIGITAL_DIG_DEBUG_CTL:
+-	case WCD938X_DIGITAL_DIG_DEBUG_EN:
+-	case WCD938X_DIGITAL_ANA_CSR_DBG_ADD:
+-	case WCD938X_DIGITAL_ANA_CSR_DBG_CTL:
+-	case WCD938X_DIGITAL_SSP_DBG:
+-	case WCD938X_DIGITAL_SPARE_0:
+-	case WCD938X_DIGITAL_SPARE_1:
+-	case WCD938X_DIGITAL_SPARE_2:
+-	case WCD938X_DIGITAL_TX_REQ_FB_CTL_0:
+-	case WCD938X_DIGITAL_TX_REQ_FB_CTL_1:
+-	case WCD938X_DIGITAL_TX_REQ_FB_CTL_2:
+-	case WCD938X_DIGITAL_TX_REQ_FB_CTL_3:
+-	case WCD938X_DIGITAL_TX_REQ_FB_CTL_4:
+-	case WCD938X_DIGITAL_DEM_BYPASS_DATA0:
+-	case WCD938X_DIGITAL_DEM_BYPASS_DATA1:
+-	case WCD938X_DIGITAL_DEM_BYPASS_DATA2:
+-	case WCD938X_DIGITAL_DEM_BYPASS_DATA3:
+-		return true;
+-	}
+-
+-	return false;
+-}
+-
+-static bool wcd938x_readonly_register(struct device *dev, unsigned int reg)
+-{
+-	switch (reg) {
+-	case WCD938X_ANA_MBHC_RESULT_1:
+-	case WCD938X_ANA_MBHC_RESULT_2:
+-	case WCD938X_ANA_MBHC_RESULT_3:
+-	case WCD938X_MBHC_MOISTURE_DET_FSM_STATUS:
+-	case WCD938X_TX_1_2_SAR2_ERR:
+-	case WCD938X_TX_1_2_SAR1_ERR:
+-	case WCD938X_TX_3_4_SAR4_ERR:
+-	case WCD938X_TX_3_4_SAR3_ERR:
+-	case WCD938X_HPH_L_STATUS:
+-	case WCD938X_HPH_R_STATUS:
+-	case WCD938X_HPH_SURGE_HPHLR_SURGE_STATUS:
+-	case WCD938X_EAR_STATUS_REG_1:
+-	case WCD938X_EAR_STATUS_REG_2:
+-	case WCD938X_MBHC_NEW_FSM_STATUS:
+-	case WCD938X_MBHC_NEW_ADC_RESULT:
+-	case WCD938X_DIE_CRACK_DIE_CRK_DET_OUT:
+-	case WCD938X_AUX_INT_STATUS_REG:
+-	case WCD938X_LDORXTX_INT_STATUS:
+-	case WCD938X_DIGITAL_CHIP_ID0:
+-	case WCD938X_DIGITAL_CHIP_ID1:
+-	case WCD938X_DIGITAL_CHIP_ID2:
+-	case WCD938X_DIGITAL_CHIP_ID3:
+-	case WCD938X_DIGITAL_INTR_STATUS_0:
+-	case WCD938X_DIGITAL_INTR_STATUS_1:
+-	case WCD938X_DIGITAL_INTR_STATUS_2:
+-	case WCD938X_DIGITAL_INTR_CLEAR_0:
+-	case WCD938X_DIGITAL_INTR_CLEAR_1:
+-	case WCD938X_DIGITAL_INTR_CLEAR_2:
+-	case WCD938X_DIGITAL_SWR_HM_TEST_0:
+-	case WCD938X_DIGITAL_SWR_HM_TEST_1:
+-	case WCD938X_DIGITAL_EFUSE_T_DATA_0:
+-	case WCD938X_DIGITAL_EFUSE_T_DATA_1:
+-	case WCD938X_DIGITAL_PIN_STATUS_0:
+-	case WCD938X_DIGITAL_PIN_STATUS_1:
+-	case WCD938X_DIGITAL_MODE_STATUS_0:
+-	case WCD938X_DIGITAL_MODE_STATUS_1:
+-	case WCD938X_DIGITAL_EFUSE_REG_0:
+-	case WCD938X_DIGITAL_EFUSE_REG_1:
+-	case WCD938X_DIGITAL_EFUSE_REG_2:
+-	case WCD938X_DIGITAL_EFUSE_REG_3:
+-	case WCD938X_DIGITAL_EFUSE_REG_4:
+-	case WCD938X_DIGITAL_EFUSE_REG_5:
+-	case WCD938X_DIGITAL_EFUSE_REG_6:
+-	case WCD938X_DIGITAL_EFUSE_REG_7:
+-	case WCD938X_DIGITAL_EFUSE_REG_8:
+-	case WCD938X_DIGITAL_EFUSE_REG_9:
+-	case WCD938X_DIGITAL_EFUSE_REG_10:
+-	case WCD938X_DIGITAL_EFUSE_REG_11:
+-	case WCD938X_DIGITAL_EFUSE_REG_12:
+-	case WCD938X_DIGITAL_EFUSE_REG_13:
+-	case WCD938X_DIGITAL_EFUSE_REG_14:
+-	case WCD938X_DIGITAL_EFUSE_REG_15:
+-	case WCD938X_DIGITAL_EFUSE_REG_16:
+-	case WCD938X_DIGITAL_EFUSE_REG_17:
+-	case WCD938X_DIGITAL_EFUSE_REG_18:
+-	case WCD938X_DIGITAL_EFUSE_REG_19:
+-	case WCD938X_DIGITAL_EFUSE_REG_20:
+-	case WCD938X_DIGITAL_EFUSE_REG_21:
+-	case WCD938X_DIGITAL_EFUSE_REG_22:
+-	case WCD938X_DIGITAL_EFUSE_REG_23:
+-	case WCD938X_DIGITAL_EFUSE_REG_24:
+-	case WCD938X_DIGITAL_EFUSE_REG_25:
+-	case WCD938X_DIGITAL_EFUSE_REG_26:
+-	case WCD938X_DIGITAL_EFUSE_REG_27:
+-	case WCD938X_DIGITAL_EFUSE_REG_28:
+-	case WCD938X_DIGITAL_EFUSE_REG_29:
+-	case WCD938X_DIGITAL_EFUSE_REG_30:
+-	case WCD938X_DIGITAL_EFUSE_REG_31:
+-		return true;
+-	}
+-	return false;
+-}
+-
+-static bool wcd938x_readable_register(struct device *dev, unsigned int reg)
+-{
+-	bool ret;
+-
+-	ret = wcd938x_readonly_register(dev, reg);
+-	if (!ret)
+-		return wcd938x_rdwr_register(dev, reg);
+-
+-	return ret;
+-}
+-
+-static bool wcd938x_writeable_register(struct device *dev, unsigned int reg)
+-{
+-	return wcd938x_rdwr_register(dev, reg);
+-}
+-
+-static bool wcd938x_volatile_register(struct device *dev, unsigned int reg)
+-{
+-	if (reg <= WCD938X_BASE_ADDRESS)
+-		return false;
+-
+-	if (reg == WCD938X_DIGITAL_SWR_TX_CLK_RATE)
+-		return true;
+-
+-	if (wcd938x_readonly_register(dev, reg))
+-		return true;
+-
+-	return false;
+-}
+-
+-static struct regmap_config wcd938x_regmap_config = {
+-	.name = "wcd938x_csr",
+-	.reg_bits = 32,
+-	.val_bits = 8,
+-	.cache_type = REGCACHE_RBTREE,
+-	.reg_defaults = wcd938x_defaults,
+-	.num_reg_defaults = ARRAY_SIZE(wcd938x_defaults),
+-	.max_register = WCD938X_MAX_REGISTER,
+-	.readable_reg = wcd938x_readable_register,
+-	.writeable_reg = wcd938x_writeable_register,
+-	.volatile_reg = wcd938x_volatile_register,
+-	.can_multi_write = true,
+-};
+-
+ static const struct regmap_irq wcd938x_irqs[WCD938X_NUM_IRQS] = {
+ 	REGMAP_IRQ_REG(WCD938X_IRQ_MBHC_BUTTON_PRESS_DET, 0, 0x01),
+ 	REGMAP_IRQ_REG(WCD938X_IRQ_MBHC_BUTTON_RELEASE_DET, 0, 0x02),
+@@ -4412,10 +3417,10 @@ static int wcd938x_bind(struct device *dev)
+ 		return -EINVAL;
+ 	}
+ 
+-	wcd938x->regmap = devm_regmap_init_sdw(wcd938x->tx_sdw_dev, &wcd938x_regmap_config);
+-	if (IS_ERR(wcd938x->regmap)) {
+-		dev_err(dev, "%s: tx csr regmap not found\n", __func__);
+-		return PTR_ERR(wcd938x->regmap);
++	wcd938x->regmap = dev_get_regmap(&wcd938x->tx_sdw_dev->dev, NULL);
++	if (!wcd938x->regmap) {
++		dev_err(dev, "could not get TX device regmap\n");
++		return -EINVAL;
+ 	}
+ 
+ 	ret = wcd938x_irq_init(wcd938x, dev);
+diff --git a/sound/soc/codecs/wcd938x.h b/sound/soc/codecs/wcd938x.h
+index ea82039e78435..74b1498fec38b 100644
+--- a/sound/soc/codecs/wcd938x.h
++++ b/sound/soc/codecs/wcd938x.h
+@@ -663,6 +663,7 @@ struct wcd938x_sdw_priv {
+ 	bool is_tx;
+ 	struct wcd938x_priv *wcd938x;
+ 	struct irq_domain *slave_irq;
++	struct regmap *regmap;
+ };
+ 
+ #if IS_ENABLED(CONFIG_SND_SOC_WCD938X_SDW)
+diff --git a/sound/soc/fsl/fsl_mqs.c b/sound/soc/fsl/fsl_mqs.c
+index 4922e6795b73f..32d20d351bbf7 100644
+--- a/sound/soc/fsl/fsl_mqs.c
++++ b/sound/soc/fsl/fsl_mqs.c
+@@ -210,10 +210,10 @@ static int fsl_mqs_probe(struct platform_device *pdev)
+ 		}
+ 
+ 		mqs_priv->regmap = syscon_node_to_regmap(gpr_np);
++		of_node_put(gpr_np);
+ 		if (IS_ERR(mqs_priv->regmap)) {
+ 			dev_err(&pdev->dev, "failed to get gpr regmap\n");
+-			ret = PTR_ERR(mqs_priv->regmap);
+-			goto err_free_gpr_np;
++			return PTR_ERR(mqs_priv->regmap);
+ 		}
+ 	} else {
+ 		regs = devm_platform_ioremap_resource(pdev, 0);
+@@ -242,8 +242,7 @@ static int fsl_mqs_probe(struct platform_device *pdev)
+ 	if (IS_ERR(mqs_priv->mclk)) {
+ 		dev_err(&pdev->dev, "failed to get the clock: %ld\n",
+ 			PTR_ERR(mqs_priv->mclk));
+-		ret = PTR_ERR(mqs_priv->mclk);
+-		goto err_free_gpr_np;
++		return PTR_ERR(mqs_priv->mclk);
+ 	}
+ 
+ 	dev_set_drvdata(&pdev->dev, mqs_priv);
+@@ -252,13 +251,9 @@ static int fsl_mqs_probe(struct platform_device *pdev)
+ 	ret = devm_snd_soc_register_component(&pdev->dev, &soc_codec_fsl_mqs,
+ 			&fsl_mqs_dai, 1);
+ 	if (ret)
+-		goto err_free_gpr_np;
+-	return 0;
+-
+-err_free_gpr_np:
+-	of_node_put(gpr_np);
++		return ret;
+ 
+-	return ret;
++	return 0;
+ }
+ 
+ static int fsl_mqs_remove(struct platform_device *pdev)
+diff --git a/sound/soc/mediatek/common/mtk-soundcard-driver.c b/sound/soc/mediatek/common/mtk-soundcard-driver.c
+index 7c55c2cb1f214..738093451ccbf 100644
+--- a/sound/soc/mediatek/common/mtk-soundcard-driver.c
++++ b/sound/soc/mediatek/common/mtk-soundcard-driver.c
+@@ -47,20 +47,26 @@ int parse_dai_link_info(struct snd_soc_card *card)
+ 	/* Loop over all the dai link sub nodes */
+ 	for_each_available_child_of_node(dev->of_node, sub_node) {
+ 		if (of_property_read_string(sub_node, "link-name",
+-					    &dai_link_name))
++					    &dai_link_name)) {
++			of_node_put(sub_node);
+ 			return -EINVAL;
++		}
+ 
+ 		for_each_card_prelinks(card, i, dai_link) {
+ 			if (!strcmp(dai_link_name, dai_link->name))
+ 				break;
+ 		}
+ 
+-		if (i >= card->num_links)
++		if (i >= card->num_links) {
++			of_node_put(sub_node);
+ 			return -EINVAL;
++		}
+ 
+ 		ret = set_card_codec_info(card, sub_node, dai_link);
+-		if (ret < 0)
++		if (ret < 0) {
++			of_node_put(sub_node);
+ 			return ret;
++		}
+ 	}
+ 
+ 	return 0;
+diff --git a/sound/soc/soc-compress.c b/sound/soc/soc-compress.c
+index e7aa6f360cabe..d649b0cf4744f 100644
+--- a/sound/soc/soc-compress.c
++++ b/sound/soc/soc-compress.c
+@@ -622,6 +622,9 @@ int snd_soc_new_compress(struct snd_soc_pcm_runtime *rtd, int num)
+ 			return ret;
+ 		}
+ 
++		/* inherit atomicity from DAI link */
++		be_pcm->nonatomic = rtd->dai_link->nonatomic;
++
+ 		rtd->pcm = be_pcm;
+ 		rtd->fe_compr = 1;
+ 		if (rtd->dai_link->dpcm_playback)
+diff --git a/sound/usb/quirks-table.h b/sound/usb/quirks-table.h
+index 271884e350035..efb4a3311cc59 100644
+--- a/sound/usb/quirks-table.h
++++ b/sound/usb/quirks-table.h
+@@ -3884,6 +3884,64 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 	}
+ },
+ 
++{
++	/*
++	 * PIONEER DJ DDJ-800
++	 * PCM is 6 channels out, 6 channels in @ 44.1 fixed
++	 * The Feedback for the output is the input
++	 */
++	USB_DEVICE_VENDOR_SPEC(0x2b73, 0x0029),
++		.driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
++		.ifnum = QUIRK_ANY_INTERFACE,
++		.type = QUIRK_COMPOSITE,
++		.data = (const struct snd_usb_audio_quirk[]) {
++			{
++				.ifnum = 0,
++				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
++				.data = &(const struct audioformat) {
++					.formats = SNDRV_PCM_FMTBIT_S24_3LE,
++					.channels = 6,
++					.iface = 0,
++					.altsetting = 1,
++					.altset_idx = 1,
++					.endpoint = 0x01,
++					.ep_attr = USB_ENDPOINT_XFER_ISOC|
++						USB_ENDPOINT_SYNC_ASYNC,
++					.rates = SNDRV_PCM_RATE_44100,
++					.rate_min = 44100,
++					.rate_max = 44100,
++					.nr_rates = 1,
++					.rate_table = (unsigned int[]) { 44100 }
++				}
++			},
++			{
++				.ifnum = 0,
++				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
++				.data = &(const struct audioformat) {
++					.formats = SNDRV_PCM_FMTBIT_S24_3LE,
++					.channels = 6,
++					.iface = 0,
++					.altsetting = 1,
++					.altset_idx = 1,
++					.endpoint = 0x82,
++					.ep_idx = 1,
++					.ep_attr = USB_ENDPOINT_XFER_ISOC|
++						USB_ENDPOINT_SYNC_ASYNC|
++					USB_ENDPOINT_USAGE_IMPLICIT_FB,
++					.rates = SNDRV_PCM_RATE_44100,
++					.rate_min = 44100,
++					.rate_max = 44100,
++					.nr_rates = 1,
++					.rate_table = (unsigned int[]) { 44100 }
++				}
++			},
++			{
++				.ifnum = -1
++			}
++		}
++	}
++},
++
+ /*
+  * MacroSilicon MS2100/MS2106 based AV capture cards
+  *
+diff --git a/tools/arch/x86/kcpuid/cpuid.csv b/tools/arch/x86/kcpuid/cpuid.csv
+index 4f1c4b0c29e98..9914bdf4fc9ec 100644
+--- a/tools/arch/x86/kcpuid/cpuid.csv
++++ b/tools/arch/x86/kcpuid/cpuid.csv
+@@ -184,8 +184,8 @@
+ 	 7,    0,  EBX,     27, avx512er, AVX512 Exponent Reciproca instr
+ 	 7,    0,  EBX,     28, avx512cd, AVX512 Conflict Detection instr
+ 	 7,    0,  EBX,     29, sha, Intel Secure Hash Algorithm Extensions instr
+-	 7,    0,  EBX,     26, avx512bw, AVX512 Byte & Word instr
+-	 7,    0,  EBX,     28, avx512vl, AVX512 Vector Length Extentions (VL)
++	 7,    0,  EBX,     30, avx512bw, AVX512 Byte & Word instr
++	 7,    0,  EBX,     31, avx512vl, AVX512 Vector Length Extentions (VL)
+ 	 7,    0,  ECX,      0, prefetchwt1, X
+ 	 7,    0,  ECX,      1, avx512vbmi, AVX512 Vector Byte Manipulation Instructions
+ 	 7,    0,  ECX,      2, umip, User-mode Instruction Prevention
+diff --git a/tools/bpf/bpftool/json_writer.c b/tools/bpf/bpftool/json_writer.c
+index 7fea83bedf488..bca5dd0a59e34 100644
+--- a/tools/bpf/bpftool/json_writer.c
++++ b/tools/bpf/bpftool/json_writer.c
+@@ -80,9 +80,6 @@ static void jsonw_puts(json_writer_t *self, const char *str)
+ 		case '"':
+ 			fputs("\\\"", self->out);
+ 			break;
+-		case '\'':
+-			fputs("\\\'", self->out);
+-			break;
+ 		default:
+ 			putc(*str, self->out);
+ 		}
+diff --git a/tools/bpf/bpftool/xlated_dumper.c b/tools/bpf/bpftool/xlated_dumper.c
+index 6fe3134ae45d4..3daa05d9bbb73 100644
+--- a/tools/bpf/bpftool/xlated_dumper.c
++++ b/tools/bpf/bpftool/xlated_dumper.c
+@@ -372,8 +372,15 @@ void dump_xlated_for_graph(struct dump_data *dd, void *buf_start, void *buf_end,
+ 	struct bpf_insn *insn_start = buf_start;
+ 	struct bpf_insn *insn_end = buf_end;
+ 	struct bpf_insn *cur = insn_start;
++	bool double_insn = false;
+ 
+ 	for (; cur <= insn_end; cur++) {
++		if (double_insn) {
++			double_insn = false;
++			continue;
++		}
++		double_insn = cur->code == (BPF_LD | BPF_IMM | BPF_DW);
++
+ 		printf("% 4d: ", (int)(cur - insn_start + start_idx));
+ 		print_bpf_insn(&cbs, cur, true);
+ 		if (cur != insn_end)
+diff --git a/tools/lib/bpf/bpf_tracing.h b/tools/lib/bpf/bpf_tracing.h
+index 6db88f41fa0df..2cd888733b1c9 100644
+--- a/tools/lib/bpf/bpf_tracing.h
++++ b/tools/lib/bpf/bpf_tracing.h
+@@ -204,6 +204,7 @@ struct pt_regs___s390 {
+ #define __PT_PARM2_SYSCALL_REG __PT_PARM2_REG
+ #define __PT_PARM3_SYSCALL_REG __PT_PARM3_REG
+ #define __PT_PARM4_SYSCALL_REG __PT_PARM4_REG
++#define __PT_PARM5_SYSCALL_REG uregs[4]
+ #define __PT_PARM6_SYSCALL_REG uregs[5]
+ #define __PT_PARM7_SYSCALL_REG uregs[6]
+ 
+diff --git a/tools/lib/bpf/gen_loader.c b/tools/lib/bpf/gen_loader.c
+index 23f5c46708f8f..b74c82bb831e6 100644
+--- a/tools/lib/bpf/gen_loader.c
++++ b/tools/lib/bpf/gen_loader.c
+@@ -804,11 +804,13 @@ static void emit_relo_ksym_btf(struct bpf_gen *gen, struct ksym_relo_desc *relo,
+ 		return;
+ 	/* try to copy from existing ldimm64 insn */
+ 	if (kdesc->ref > 1) {
+-		move_blob2blob(gen, insn + offsetof(struct bpf_insn, imm), 4,
+-			       kdesc->insn + offsetof(struct bpf_insn, imm));
+ 		move_blob2blob(gen, insn + sizeof(struct bpf_insn) + offsetof(struct bpf_insn, imm), 4,
+ 			       kdesc->insn + sizeof(struct bpf_insn) + offsetof(struct bpf_insn, imm));
+-		/* jump over src_reg adjustment if imm is not 0, reuse BPF_REG_0 from move_blob2blob */
++		move_blob2blob(gen, insn + offsetof(struct bpf_insn, imm), 4,
++			       kdesc->insn + offsetof(struct bpf_insn, imm));
++		/* jump over src_reg adjustment if imm (btf_id) is not 0, reuse BPF_REG_0 from move_blob2blob
++		 * If btf_id is zero, clear BPF_PSEUDO_BTF_ID flag in src_reg of ld_imm64 insn
++		 */
+ 		emit(gen, BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 3));
+ 		goto clear_src_reg;
+ 	}
+@@ -831,7 +833,7 @@ static void emit_relo_ksym_btf(struct bpf_gen *gen, struct ksym_relo_desc *relo,
+ 	emit(gen, BPF_STX_MEM(BPF_W, BPF_REG_8, BPF_REG_7,
+ 			      sizeof(struct bpf_insn) + offsetof(struct bpf_insn, imm)));
+ 	/* skip src_reg adjustment */
+-	emit(gen, BPF_JMP_IMM(BPF_JSGE, BPF_REG_7, 0, 3));
++	emit(gen, BPF_JMP_IMM(BPF_JA, 0, 0, 3));
+ clear_src_reg:
+ 	/* clear bpf_object__relocate_data's src_reg assignment, otherwise we get a verifier failure */
+ 	reg_mask = src_reg_mask();
+diff --git a/tools/lib/bpf/netlink.c b/tools/lib/bpf/netlink.c
+index 1653e7a8b0a11..84dd5fa149058 100644
+--- a/tools/lib/bpf/netlink.c
++++ b/tools/lib/bpf/netlink.c
+@@ -468,8 +468,13 @@ int bpf_xdp_query(int ifindex, int xdp_flags, struct bpf_xdp_query_opts *opts)
+ 		return 0;
+ 
+ 	err = libbpf_netlink_resolve_genl_family_id("netdev", sizeof("netdev"), &id);
+-	if (err < 0)
++	if (err < 0) {
++		if (err == -ENOENT) {
++			opts->feature_flags = 0;
++			goto skip_feature_flags;
++		}
+ 		return libbpf_err(err);
++	}
+ 
+ 	memset(&req, 0, sizeof(req));
+ 	req.nh.nlmsg_len = NLMSG_LENGTH(GENL_HDRLEN);
+@@ -489,6 +494,7 @@ int bpf_xdp_query(int ifindex, int xdp_flags, struct bpf_xdp_query_opts *opts)
+ 
+ 	opts->feature_flags = md.flags;
+ 
++skip_feature_flags:
+ 	return 0;
+ }
+ 
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index f937be1afe65c..f62b85832ae01 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -2976,17 +2976,6 @@ static int update_cfi_state(struct instruction *insn,
+ 				break;
+ 			}
+ 
+-			if (!cfi->drap && op->src.reg == CFI_SP &&
+-			    op->dest.reg == CFI_BP && cfa->base == CFI_SP &&
+-			    check_reg_frame_pos(&regs[CFI_BP], -cfa->offset + op->src.offset)) {
+-
+-				/* lea disp(%rsp), %rbp */
+-				cfa->base = CFI_BP;
+-				cfa->offset -= op->src.offset;
+-				cfi->bp_scratch = false;
+-				break;
+-			}
+-
+ 			if (op->src.reg == CFI_SP && cfa->base == CFI_SP) {
+ 
+ 				/* drap: lea disp(%rsp), %drap */
+diff --git a/tools/perf/util/auxtrace.c b/tools/perf/util/auxtrace.c
+index 498ff7f24463b..b2a5e5397badf 100644
+--- a/tools/perf/util/auxtrace.c
++++ b/tools/perf/util/auxtrace.c
+@@ -2449,6 +2449,7 @@ static int find_entire_kern_cb(void *arg, const char *name __maybe_unused,
+ 			       char type, u64 start)
+ {
+ 	struct sym_args *args = arg;
++	u64 size;
+ 
+ 	if (!kallsyms__is_function(type))
+ 		return 0;
+@@ -2458,7 +2459,9 @@ static int find_entire_kern_cb(void *arg, const char *name __maybe_unused,
+ 		args->start = start;
+ 	}
+ 	/* Don't know exactly where the kernel ends, so we add a page */
+-	args->size = round_up(start, page_size) + page_size - args->start;
++	size = round_up(start, page_size) + page_size - args->start;
++	if (size > args->size)
++		args->size = size;
+ 
+ 	return 0;
+ }
+diff --git a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
+index 0ac860c8dd2b8..7145c5890de02 100644
+--- a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
++++ b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
+@@ -1998,6 +1998,8 @@ static void intel_pt_calc_cbr(struct intel_pt_decoder *decoder)
+ 
+ 	decoder->cbr = cbr;
+ 	decoder->cbr_cyc_to_tsc = decoder->max_non_turbo_ratio_fp / cbr;
++	decoder->cyc_ref_timestamp = decoder->timestamp;
++	decoder->cycle_cnt = 0;
+ 
+ 	intel_pt_mtc_cyc_cnt_cbr(decoder);
+ }
+diff --git a/tools/testing/selftests/bpf/network_helpers.c b/tools/testing/selftests/bpf/network_helpers.c
+index 01de33191226b..596caa1765820 100644
+--- a/tools/testing/selftests/bpf/network_helpers.c
++++ b/tools/testing/selftests/bpf/network_helpers.c
+@@ -95,7 +95,7 @@ static int __start_server(int type, int protocol, const struct sockaddr *addr,
+ 	if (reuseport &&
+ 	    setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &on, sizeof(on))) {
+ 		log_err("Failed to set SO_REUSEPORT");
+-		return -1;
++		goto error_close;
+ 	}
+ 
+ 	if (bind(fd, addr, addrlen) < 0) {
+diff --git a/tools/testing/selftests/bpf/prog_tests/align.c b/tools/testing/selftests/bpf/prog_tests/align.c
+index 4666f88f2bb4f..8baebb41541dc 100644
+--- a/tools/testing/selftests/bpf/prog_tests/align.c
++++ b/tools/testing/selftests/bpf/prog_tests/align.c
+@@ -575,14 +575,14 @@ static struct bpf_align_test tests[] = {
+ 			/* New unknown value in R7 is (4n), >= 76 */
+ 			{14, "R7_w=scalar(umin=76,umax=1096,var_off=(0x0; 0x7fc))"},
+ 			/* Adding it to packet pointer gives nice bounds again */
+-			{16, "R5_w=pkt(id=3,off=0,r=0,umin=2,umax=1082,var_off=(0x2; 0xfffffffc)"},
++			{16, "R5_w=pkt(id=3,off=0,r=0,umin=2,umax=1082,var_off=(0x2; 0x7fc)"},
+ 			/* At the time the word size load is performed from R5,
+ 			 * its total fixed offset is NET_IP_ALIGN + reg->off (0)
+ 			 * which is 2.  Then the variable offset is (4n+2), so
+ 			 * the total offset is 4-byte aligned and meets the
+ 			 * load's requirements.
+ 			 */
+-			{20, "R5=pkt(id=3,off=0,r=4,umin=2,umax=1082,var_off=(0x2; 0xfffffffc)"},
++			{20, "R5=pkt(id=3,off=0,r=4,umin=2,umax=1082,var_off=(0x2; 0x7fc)"},
+ 		},
+ 	},
+ };
+diff --git a/tools/testing/selftests/bpf/prog_tests/cg_storage_multi.c b/tools/testing/selftests/bpf/prog_tests/cg_storage_multi.c
+index 621c572221918..63ee892bc7573 100644
+--- a/tools/testing/selftests/bpf/prog_tests/cg_storage_multi.c
++++ b/tools/testing/selftests/bpf/prog_tests/cg_storage_multi.c
+@@ -56,8 +56,9 @@ static bool assert_storage_noexist(struct bpf_map *map, const void *key)
+ 
+ static bool connect_send(const char *cgroup_path)
+ {
+-	bool res = true;
+ 	int server_fd = -1, client_fd = -1;
++	char message[] = "message";
++	bool res = true;
+ 
+ 	if (join_cgroup(cgroup_path))
+ 		goto out_clean;
+@@ -70,7 +71,10 @@ static bool connect_send(const char *cgroup_path)
+ 	if (client_fd < 0)
+ 		goto out_clean;
+ 
+-	if (send(client_fd, "message", strlen("message"), 0) < 0)
++	if (send(client_fd, &message, sizeof(message), 0) < 0)
++		goto out_clean;
++
++	if (read(server_fd, &message, sizeof(message)) < 0)
+ 		goto out_clean;
+ 
+ 	res = false;
+diff --git a/tools/testing/selftests/bpf/prog_tests/decap_sanity.c b/tools/testing/selftests/bpf/prog_tests/decap_sanity.c
+index 2853883b7cbb2..5c0ebe6ba8667 100644
+--- a/tools/testing/selftests/bpf/prog_tests/decap_sanity.c
++++ b/tools/testing/selftests/bpf/prog_tests/decap_sanity.c
+@@ -10,14 +10,6 @@
+ #include "network_helpers.h"
+ #include "decap_sanity.skel.h"
+ 
+-#define SYS(fmt, ...)						\
+-	({							\
+-		char cmd[1024];					\
+-		snprintf(cmd, sizeof(cmd), fmt, ##__VA_ARGS__);	\
+-		if (!ASSERT_OK(system(cmd), cmd))		\
+-			goto fail;				\
+-	})
+-
+ #define NS_TEST "decap_sanity_ns"
+ #define IPV6_IFACE_ADDR "face::1"
+ #define UDP_TEST_PORT 7777
+@@ -37,9 +29,9 @@ void test_decap_sanity(void)
+ 	if (!ASSERT_OK_PTR(skel, "skel open_and_load"))
+ 		return;
+ 
+-	SYS("ip netns add %s", NS_TEST);
+-	SYS("ip -net %s -6 addr add %s/128 dev lo nodad", NS_TEST, IPV6_IFACE_ADDR);
+-	SYS("ip -net %s link set dev lo up", NS_TEST);
++	SYS(fail, "ip netns add %s", NS_TEST);
++	SYS(fail, "ip -net %s -6 addr add %s/128 dev lo nodad", NS_TEST, IPV6_IFACE_ADDR);
++	SYS(fail, "ip -net %s link set dev lo up", NS_TEST);
+ 
+ 	nstoken = open_netns(NS_TEST);
+ 	if (!ASSERT_OK_PTR(nstoken, "open_netns"))
+@@ -80,6 +72,6 @@ fail:
+ 		bpf_tc_hook_destroy(&qdisc_hook);
+ 		close_netns(nstoken);
+ 	}
+-	system("ip netns del " NS_TEST " &> /dev/null");
++	SYS_NOFAIL("ip netns del " NS_TEST " &> /dev/null");
+ 	decap_sanity__destroy(skel);
+ }
+diff --git a/tools/testing/selftests/bpf/prog_tests/empty_skb.c b/tools/testing/selftests/bpf/prog_tests/empty_skb.c
+index 32dd731e9070c..3b77d8a422dbf 100644
+--- a/tools/testing/selftests/bpf/prog_tests/empty_skb.c
++++ b/tools/testing/selftests/bpf/prog_tests/empty_skb.c
+@@ -4,11 +4,6 @@
+ #include <net/if.h>
+ #include "empty_skb.skel.h"
+ 
+-#define SYS(cmd) ({ \
+-	if (!ASSERT_OK(system(cmd), (cmd))) \
+-		goto out; \
+-})
+-
+ void test_empty_skb(void)
+ {
+ 	LIBBPF_OPTS(bpf_test_run_opts, tattr);
+@@ -93,18 +88,18 @@ void test_empty_skb(void)
+ 		},
+ 	};
+ 
+-	SYS("ip netns add empty_skb");
++	SYS(out, "ip netns add empty_skb");
+ 	tok = open_netns("empty_skb");
+-	SYS("ip link add veth0 type veth peer veth1");
+-	SYS("ip link set dev veth0 up");
+-	SYS("ip link set dev veth1 up");
+-	SYS("ip addr add 10.0.0.1/8 dev veth0");
+-	SYS("ip addr add 10.0.0.2/8 dev veth1");
++	SYS(out, "ip link add veth0 type veth peer veth1");
++	SYS(out, "ip link set dev veth0 up");
++	SYS(out, "ip link set dev veth1 up");
++	SYS(out, "ip addr add 10.0.0.1/8 dev veth0");
++	SYS(out, "ip addr add 10.0.0.2/8 dev veth1");
+ 	veth_ifindex = if_nametoindex("veth0");
+ 
+-	SYS("ip link add ipip0 type ipip local 10.0.0.1 remote 10.0.0.2");
+-	SYS("ip link set ipip0 up");
+-	SYS("ip addr add 192.168.1.1/16 dev ipip0");
++	SYS(out, "ip link add ipip0 type ipip local 10.0.0.1 remote 10.0.0.2");
++	SYS(out, "ip link set ipip0 up");
++	SYS(out, "ip addr add 192.168.1.1/16 dev ipip0");
+ 	ipip_ifindex = if_nametoindex("ipip0");
+ 
+ 	bpf_obj = empty_skb__open_and_load();
+@@ -142,5 +137,5 @@ out:
+ 		empty_skb__destroy(bpf_obj);
+ 	if (tok)
+ 		close_netns(tok);
+-	system("ip netns del empty_skb");
++	SYS_NOFAIL("ip netns del empty_skb");
+ }
+diff --git a/tools/testing/selftests/bpf/prog_tests/fib_lookup.c b/tools/testing/selftests/bpf/prog_tests/fib_lookup.c
+index 61ccddccf485e..a1e7121058118 100644
+--- a/tools/testing/selftests/bpf/prog_tests/fib_lookup.c
++++ b/tools/testing/selftests/bpf/prog_tests/fib_lookup.c
+@@ -8,14 +8,6 @@
+ #include "network_helpers.h"
+ #include "fib_lookup.skel.h"
+ 
+-#define SYS(fmt, ...)						\
+-	({							\
+-		char cmd[1024];					\
+-		snprintf(cmd, sizeof(cmd), fmt, ##__VA_ARGS__);	\
+-		if (!ASSERT_OK(system(cmd), cmd))		\
+-			goto fail;				\
+-	})
+-
+ #define NS_TEST			"fib_lookup_ns"
+ #define IPV6_IFACE_ADDR		"face::face"
+ #define IPV6_NUD_FAILED_ADDR	"face::1"
+@@ -59,16 +51,24 @@ static int setup_netns(void)
+ {
+ 	int err;
+ 
+-	SYS("ip link add veth1 type veth peer name veth2");
+-	SYS("ip link set dev veth1 up");
++	SYS(fail, "ip link add veth1 type veth peer name veth2");
++	SYS(fail, "ip link set dev veth1 up");
++
++	err = write_sysctl("/proc/sys/net/ipv4/neigh/veth1/gc_stale_time", "900");
++	if (!ASSERT_OK(err, "write_sysctl(net.ipv4.neigh.veth1.gc_stale_time)"))
++		goto fail;
++
++	err = write_sysctl("/proc/sys/net/ipv6/neigh/veth1/gc_stale_time", "900");
++	if (!ASSERT_OK(err, "write_sysctl(net.ipv6.neigh.veth1.gc_stale_time)"))
++		goto fail;
+ 
+-	SYS("ip addr add %s/64 dev veth1 nodad", IPV6_IFACE_ADDR);
+-	SYS("ip neigh add %s dev veth1 nud failed", IPV6_NUD_FAILED_ADDR);
+-	SYS("ip neigh add %s dev veth1 lladdr %s nud stale", IPV6_NUD_STALE_ADDR, DMAC);
++	SYS(fail, "ip addr add %s/64 dev veth1 nodad", IPV6_IFACE_ADDR);
++	SYS(fail, "ip neigh add %s dev veth1 nud failed", IPV6_NUD_FAILED_ADDR);
++	SYS(fail, "ip neigh add %s dev veth1 lladdr %s nud stale", IPV6_NUD_STALE_ADDR, DMAC);
+ 
+-	SYS("ip addr add %s/24 dev veth1 nodad", IPV4_IFACE_ADDR);
+-	SYS("ip neigh add %s dev veth1 nud failed", IPV4_NUD_FAILED_ADDR);
+-	SYS("ip neigh add %s dev veth1 lladdr %s nud stale", IPV4_NUD_STALE_ADDR, DMAC);
++	SYS(fail, "ip addr add %s/24 dev veth1", IPV4_IFACE_ADDR);
++	SYS(fail, "ip neigh add %s dev veth1 nud failed", IPV4_NUD_FAILED_ADDR);
++	SYS(fail, "ip neigh add %s dev veth1 lladdr %s nud stale", IPV4_NUD_STALE_ADDR, DMAC);
+ 
+ 	err = write_sysctl("/proc/sys/net/ipv4/conf/veth1/forwarding", "1");
+ 	if (!ASSERT_OK(err, "write_sysctl(net.ipv4.conf.veth1.forwarding)"))
+@@ -140,7 +140,7 @@ void test_fib_lookup(void)
+ 		return;
+ 	prog_fd = bpf_program__fd(skel->progs.fib_lookup);
+ 
+-	SYS("ip netns add %s", NS_TEST);
++	SYS(fail, "ip netns add %s", NS_TEST);
+ 
+ 	nstoken = open_netns(NS_TEST);
+ 	if (!ASSERT_OK_PTR(nstoken, "open_netns"))
+@@ -166,7 +166,7 @@ void test_fib_lookup(void)
+ 		if (!ASSERT_OK(err, "bpf_prog_test_run_opts"))
+ 			continue;
+ 
+-		ASSERT_EQ(tests[i].expected_ret, skel->bss->fib_lookup_ret,
++		ASSERT_EQ(skel->bss->fib_lookup_ret, tests[i].expected_ret,
+ 			  "fib_lookup_ret");
+ 
+ 		ret = memcmp(tests[i].dmac, fib_params->dmac, sizeof(tests[i].dmac));
+@@ -182,6 +182,6 @@ void test_fib_lookup(void)
+ fail:
+ 	if (nstoken)
+ 		close_netns(nstoken);
+-	system("ip netns del " NS_TEST " &> /dev/null");
++	SYS_NOFAIL("ip netns del " NS_TEST " &> /dev/null");
+ 	fib_lookup__destroy(skel);
+ }
+diff --git a/tools/testing/selftests/bpf/prog_tests/get_stackid_cannot_attach.c b/tools/testing/selftests/bpf/prog_tests/get_stackid_cannot_attach.c
+index 5308de1ed478e..2715c68301f52 100644
+--- a/tools/testing/selftests/bpf/prog_tests/get_stackid_cannot_attach.c
++++ b/tools/testing/selftests/bpf/prog_tests/get_stackid_cannot_attach.c
+@@ -65,6 +65,7 @@ void test_get_stackid_cannot_attach(void)
+ 	skel->links.oncpu = bpf_program__attach_perf_event(skel->progs.oncpu,
+ 							   pmu_fd);
+ 	ASSERT_OK_PTR(skel->links.oncpu, "attach_perf_event_callchain");
++	bpf_link__destroy(skel->links.oncpu);
+ 	close(pmu_fd);
+ 
+ 	/* add exclude_callchain_kernel, attach should fail */
+diff --git a/tools/testing/selftests/bpf/prog_tests/perf_event_stackmap.c b/tools/testing/selftests/bpf/prog_tests/perf_event_stackmap.c
+index 33144c9432aeb..f4aad35afae16 100644
+--- a/tools/testing/selftests/bpf/prog_tests/perf_event_stackmap.c
++++ b/tools/testing/selftests/bpf/prog_tests/perf_event_stackmap.c
+@@ -63,7 +63,8 @@ void test_perf_event_stackmap(void)
+ 			PERF_SAMPLE_BRANCH_NO_FLAGS |
+ 			PERF_SAMPLE_BRANCH_NO_CYCLES |
+ 			PERF_SAMPLE_BRANCH_CALL_STACK,
+-		.sample_period = 5000,
++		.freq = 1,
++		.sample_freq = read_perf_max_sample_freq(),
+ 		.size = sizeof(struct perf_event_attr),
+ 	};
+ 	struct perf_event_stackmap *skel;
+diff --git a/tools/testing/selftests/bpf/prog_tests/stacktrace_build_id_nmi.c b/tools/testing/selftests/bpf/prog_tests/stacktrace_build_id_nmi.c
+index f4ea1a215ce4d..704f7f6c3704a 100644
+--- a/tools/testing/selftests/bpf/prog_tests/stacktrace_build_id_nmi.c
++++ b/tools/testing/selftests/bpf/prog_tests/stacktrace_build_id_nmi.c
+@@ -2,21 +2,6 @@
+ #include <test_progs.h>
+ #include "test_stacktrace_build_id.skel.h"
+ 
+-static __u64 read_perf_max_sample_freq(void)
+-{
+-	__u64 sample_freq = 5000; /* fallback to 5000 on error */
+-	FILE *f;
+-	__u32 duration = 0;
+-
+-	f = fopen("/proc/sys/kernel/perf_event_max_sample_rate", "r");
+-	if (f == NULL)
+-		return sample_freq;
+-	CHECK(fscanf(f, "%llu", &sample_freq) != 1, "Get max sample rate",
+-		  "return default value: 5000,err %d\n", -errno);
+-	fclose(f);
+-	return sample_freq;
+-}
+-
+ void test_stacktrace_build_id_nmi(void)
+ {
+ 	int control_map_fd, stackid_hmap_fd, stackmap_fd;
+diff --git a/tools/testing/selftests/bpf/prog_tests/tc_redirect.c b/tools/testing/selftests/bpf/prog_tests/tc_redirect.c
+index bca5e6839ac48..6ee22c3b251ad 100644
+--- a/tools/testing/selftests/bpf/prog_tests/tc_redirect.c
++++ b/tools/testing/selftests/bpf/prog_tests/tc_redirect.c
+@@ -137,24 +137,16 @@ static int get_ifaddr(const char *name, char *ifaddr)
+ 	return 0;
+ }
+ 
+-#define SYS(fmt, ...)						\
+-	({							\
+-		char cmd[1024];					\
+-		snprintf(cmd, sizeof(cmd), fmt, ##__VA_ARGS__);	\
+-		if (!ASSERT_OK(system(cmd), cmd))		\
+-			goto fail;				\
+-	})
+-
+ static int netns_setup_links_and_routes(struct netns_setup_result *result)
+ {
+ 	struct nstoken *nstoken = NULL;
+ 	char veth_src_fwd_addr[IFADDR_STR_LEN+1] = {};
+ 
+-	SYS("ip link add veth_src type veth peer name veth_src_fwd");
+-	SYS("ip link add veth_dst type veth peer name veth_dst_fwd");
++	SYS(fail, "ip link add veth_src type veth peer name veth_src_fwd");
++	SYS(fail, "ip link add veth_dst type veth peer name veth_dst_fwd");
+ 
+-	SYS("ip link set veth_dst_fwd address " MAC_DST_FWD);
+-	SYS("ip link set veth_dst address " MAC_DST);
++	SYS(fail, "ip link set veth_dst_fwd address " MAC_DST_FWD);
++	SYS(fail, "ip link set veth_dst address " MAC_DST);
+ 
+ 	if (get_ifaddr("veth_src_fwd", veth_src_fwd_addr))
+ 		goto fail;
+@@ -175,27 +167,27 @@ static int netns_setup_links_and_routes(struct netns_setup_result *result)
+ 	if (!ASSERT_GT(result->ifindex_veth_dst_fwd, 0, "ifindex_veth_dst_fwd"))
+ 		goto fail;
+ 
+-	SYS("ip link set veth_src netns " NS_SRC);
+-	SYS("ip link set veth_src_fwd netns " NS_FWD);
+-	SYS("ip link set veth_dst_fwd netns " NS_FWD);
+-	SYS("ip link set veth_dst netns " NS_DST);
++	SYS(fail, "ip link set veth_src netns " NS_SRC);
++	SYS(fail, "ip link set veth_src_fwd netns " NS_FWD);
++	SYS(fail, "ip link set veth_dst_fwd netns " NS_FWD);
++	SYS(fail, "ip link set veth_dst netns " NS_DST);
+ 
+ 	/** setup in 'src' namespace */
+ 	nstoken = open_netns(NS_SRC);
+ 	if (!ASSERT_OK_PTR(nstoken, "setns src"))
+ 		goto fail;
+ 
+-	SYS("ip addr add " IP4_SRC "/32 dev veth_src");
+-	SYS("ip addr add " IP6_SRC "/128 dev veth_src nodad");
+-	SYS("ip link set dev veth_src up");
++	SYS(fail, "ip addr add " IP4_SRC "/32 dev veth_src");
++	SYS(fail, "ip addr add " IP6_SRC "/128 dev veth_src nodad");
++	SYS(fail, "ip link set dev veth_src up");
+ 
+-	SYS("ip route add " IP4_DST "/32 dev veth_src scope global");
+-	SYS("ip route add " IP4_NET "/16 dev veth_src scope global");
+-	SYS("ip route add " IP6_DST "/128 dev veth_src scope global");
++	SYS(fail, "ip route add " IP4_DST "/32 dev veth_src scope global");
++	SYS(fail, "ip route add " IP4_NET "/16 dev veth_src scope global");
++	SYS(fail, "ip route add " IP6_DST "/128 dev veth_src scope global");
+ 
+-	SYS("ip neigh add " IP4_DST " dev veth_src lladdr %s",
++	SYS(fail, "ip neigh add " IP4_DST " dev veth_src lladdr %s",
+ 	    veth_src_fwd_addr);
+-	SYS("ip neigh add " IP6_DST " dev veth_src lladdr %s",
++	SYS(fail, "ip neigh add " IP6_DST " dev veth_src lladdr %s",
+ 	    veth_src_fwd_addr);
+ 
+ 	close_netns(nstoken);
+@@ -209,15 +201,15 @@ static int netns_setup_links_and_routes(struct netns_setup_result *result)
+ 	 * needs v4 one in order to start ARP probing. IP4_NET route is added
+ 	 * to the endpoints so that the ARP processing will reply.
+ 	 */
+-	SYS("ip addr add " IP4_SLL "/32 dev veth_src_fwd");
+-	SYS("ip addr add " IP4_DLL "/32 dev veth_dst_fwd");
+-	SYS("ip link set dev veth_src_fwd up");
+-	SYS("ip link set dev veth_dst_fwd up");
++	SYS(fail, "ip addr add " IP4_SLL "/32 dev veth_src_fwd");
++	SYS(fail, "ip addr add " IP4_DLL "/32 dev veth_dst_fwd");
++	SYS(fail, "ip link set dev veth_src_fwd up");
++	SYS(fail, "ip link set dev veth_dst_fwd up");
+ 
+-	SYS("ip route add " IP4_SRC "/32 dev veth_src_fwd scope global");
+-	SYS("ip route add " IP6_SRC "/128 dev veth_src_fwd scope global");
+-	SYS("ip route add " IP4_DST "/32 dev veth_dst_fwd scope global");
+-	SYS("ip route add " IP6_DST "/128 dev veth_dst_fwd scope global");
++	SYS(fail, "ip route add " IP4_SRC "/32 dev veth_src_fwd scope global");
++	SYS(fail, "ip route add " IP6_SRC "/128 dev veth_src_fwd scope global");
++	SYS(fail, "ip route add " IP4_DST "/32 dev veth_dst_fwd scope global");
++	SYS(fail, "ip route add " IP6_DST "/128 dev veth_dst_fwd scope global");
+ 
+ 	close_netns(nstoken);
+ 
+@@ -226,16 +218,16 @@ static int netns_setup_links_and_routes(struct netns_setup_result *result)
+ 	if (!ASSERT_OK_PTR(nstoken, "setns dst"))
+ 		goto fail;
+ 
+-	SYS("ip addr add " IP4_DST "/32 dev veth_dst");
+-	SYS("ip addr add " IP6_DST "/128 dev veth_dst nodad");
+-	SYS("ip link set dev veth_dst up");
++	SYS(fail, "ip addr add " IP4_DST "/32 dev veth_dst");
++	SYS(fail, "ip addr add " IP6_DST "/128 dev veth_dst nodad");
++	SYS(fail, "ip link set dev veth_dst up");
+ 
+-	SYS("ip route add " IP4_SRC "/32 dev veth_dst scope global");
+-	SYS("ip route add " IP4_NET "/16 dev veth_dst scope global");
+-	SYS("ip route add " IP6_SRC "/128 dev veth_dst scope global");
++	SYS(fail, "ip route add " IP4_SRC "/32 dev veth_dst scope global");
++	SYS(fail, "ip route add " IP4_NET "/16 dev veth_dst scope global");
++	SYS(fail, "ip route add " IP6_SRC "/128 dev veth_dst scope global");
+ 
+-	SYS("ip neigh add " IP4_SRC " dev veth_dst lladdr " MAC_DST_FWD);
+-	SYS("ip neigh add " IP6_SRC " dev veth_dst lladdr " MAC_DST_FWD);
++	SYS(fail, "ip neigh add " IP4_SRC " dev veth_dst lladdr " MAC_DST_FWD);
++	SYS(fail, "ip neigh add " IP6_SRC " dev veth_dst lladdr " MAC_DST_FWD);
+ 
+ 	close_netns(nstoken);
+ 
+@@ -375,7 +367,7 @@ done:
+ 
+ static int test_ping(int family, const char *addr)
+ {
+-	SYS("ip netns exec " NS_SRC " %s " PING_ARGS " %s > /dev/null", ping_command(family), addr);
++	SYS(fail, "ip netns exec " NS_SRC " %s " PING_ARGS " %s > /dev/null", ping_command(family), addr);
+ 	return 0;
+ fail:
+ 	return -1;
+@@ -953,7 +945,7 @@ static int tun_open(char *name)
+ 	if (!ASSERT_OK(err, "ioctl TUNSETIFF"))
+ 		goto fail;
+ 
+-	SYS("ip link set dev %s up", name);
++	SYS(fail, "ip link set dev %s up", name);
+ 
+ 	return fd;
+ fail:
+@@ -1076,23 +1068,23 @@ static void test_tc_redirect_peer_l3(struct netns_setup_result *setup_result)
+ 	XGRESS_FILTER_ADD(&qdisc_veth_dst_fwd, BPF_TC_EGRESS, skel->progs.tc_chk, 0);
+ 
+ 	/* Setup route and neigh tables */
+-	SYS("ip -netns " NS_SRC " addr add dev tun_src " IP4_TUN_SRC "/24");
+-	SYS("ip -netns " NS_FWD " addr add dev tun_fwd " IP4_TUN_FWD "/24");
++	SYS(fail, "ip -netns " NS_SRC " addr add dev tun_src " IP4_TUN_SRC "/24");
++	SYS(fail, "ip -netns " NS_FWD " addr add dev tun_fwd " IP4_TUN_FWD "/24");
+ 
+-	SYS("ip -netns " NS_SRC " addr add dev tun_src " IP6_TUN_SRC "/64 nodad");
+-	SYS("ip -netns " NS_FWD " addr add dev tun_fwd " IP6_TUN_FWD "/64 nodad");
++	SYS(fail, "ip -netns " NS_SRC " addr add dev tun_src " IP6_TUN_SRC "/64 nodad");
++	SYS(fail, "ip -netns " NS_FWD " addr add dev tun_fwd " IP6_TUN_FWD "/64 nodad");
+ 
+-	SYS("ip -netns " NS_SRC " route del " IP4_DST "/32 dev veth_src scope global");
+-	SYS("ip -netns " NS_SRC " route add " IP4_DST "/32 via " IP4_TUN_FWD
++	SYS(fail, "ip -netns " NS_SRC " route del " IP4_DST "/32 dev veth_src scope global");
++	SYS(fail, "ip -netns " NS_SRC " route add " IP4_DST "/32 via " IP4_TUN_FWD
+ 	    " dev tun_src scope global");
+-	SYS("ip -netns " NS_DST " route add " IP4_TUN_SRC "/32 dev veth_dst scope global");
+-	SYS("ip -netns " NS_SRC " route del " IP6_DST "/128 dev veth_src scope global");
+-	SYS("ip -netns " NS_SRC " route add " IP6_DST "/128 via " IP6_TUN_FWD
++	SYS(fail, "ip -netns " NS_DST " route add " IP4_TUN_SRC "/32 dev veth_dst scope global");
++	SYS(fail, "ip -netns " NS_SRC " route del " IP6_DST "/128 dev veth_src scope global");
++	SYS(fail, "ip -netns " NS_SRC " route add " IP6_DST "/128 via " IP6_TUN_FWD
+ 	    " dev tun_src scope global");
+-	SYS("ip -netns " NS_DST " route add " IP6_TUN_SRC "/128 dev veth_dst scope global");
++	SYS(fail, "ip -netns " NS_DST " route add " IP6_TUN_SRC "/128 dev veth_dst scope global");
+ 
+-	SYS("ip -netns " NS_DST " neigh add " IP4_TUN_SRC " dev veth_dst lladdr " MAC_DST_FWD);
+-	SYS("ip -netns " NS_DST " neigh add " IP6_TUN_SRC " dev veth_dst lladdr " MAC_DST_FWD);
++	SYS(fail, "ip -netns " NS_DST " neigh add " IP4_TUN_SRC " dev veth_dst lladdr " MAC_DST_FWD);
++	SYS(fail, "ip -netns " NS_DST " neigh add " IP6_TUN_SRC " dev veth_dst lladdr " MAC_DST_FWD);
+ 
+ 	if (!ASSERT_OK(set_forwarding(false), "disable forwarding"))
+ 		goto fail;
+diff --git a/tools/testing/selftests/bpf/prog_tests/test_ima.c b/tools/testing/selftests/bpf/prog_tests/test_ima.c
+index b13feceb38f1a..810b14981c2eb 100644
+--- a/tools/testing/selftests/bpf/prog_tests/test_ima.c
++++ b/tools/testing/selftests/bpf/prog_tests/test_ima.c
+@@ -70,7 +70,7 @@ void test_test_ima(void)
+ 	u64 bin_true_sample;
+ 	char cmd[256];
+ 
+-	int err, duration = 0;
++	int err, duration = 0, fresh_digest_idx = 0;
+ 	struct ima *skel = NULL;
+ 
+ 	skel = ima__open_and_load();
+@@ -129,7 +129,15 @@ void test_test_ima(void)
+ 	/*
+ 	 * Test #3
+ 	 * - Goal: confirm that bpf_ima_inode_hash() returns a non-fresh digest
+-	 * - Expected result: 2 samples (/bin/true: non-fresh, fresh)
++	 * - Expected result:
++	 *   1 sample (/bin/true: fresh) if commit 62622dab0a28 applied
++	 *   2 samples (/bin/true: non-fresh, fresh) if commit 62622dab0a28 is
++	 *     not applied
++	 *
++	 * If commit 62622dab0a28 ("ima: return IMA digest value only when
++	 * IMA_COLLECTED flag is set") is applied, bpf_ima_inode_hash() refuses
++	 * to give a non-fresh digest, hence the correct result is 1 instead of
++	 * 2.
+ 	 */
+ 	test_init(skel->bss);
+ 
+@@ -144,13 +152,18 @@ void test_test_ima(void)
+ 		goto close_clean;
+ 
+ 	err = ring_buffer__consume(ringbuf);
+-	ASSERT_EQ(err, 2, "num_samples_or_err");
+-	ASSERT_NEQ(ima_hash_from_bpf[0], 0, "ima_hash");
+-	ASSERT_NEQ(ima_hash_from_bpf[1], 0, "ima_hash");
+-	ASSERT_EQ(ima_hash_from_bpf[0], bin_true_sample, "sample_equal_or_err");
++	ASSERT_GE(err, 1, "num_samples_or_err");
++	if (err == 2) {
++		ASSERT_NEQ(ima_hash_from_bpf[0], 0, "ima_hash");
++		ASSERT_EQ(ima_hash_from_bpf[0], bin_true_sample,
++			  "sample_equal_or_err");
++		fresh_digest_idx = 1;
++	}
++
++	ASSERT_NEQ(ima_hash_from_bpf[fresh_digest_idx], 0, "ima_hash");
+ 	/* IMA refreshed the digest. */
+-	ASSERT_NEQ(ima_hash_from_bpf[1], bin_true_sample,
+-		   "sample_different_or_err");
++	ASSERT_NEQ(ima_hash_from_bpf[fresh_digest_idx], bin_true_sample,
++		   "sample_equal_or_err");
+ 
+ 	/*
+ 	 * Test #4
+diff --git a/tools/testing/selftests/bpf/prog_tests/test_tunnel.c b/tools/testing/selftests/bpf/prog_tests/test_tunnel.c
+index 07ad457f33709..47f1d482fe390 100644
+--- a/tools/testing/selftests/bpf/prog_tests/test_tunnel.c
++++ b/tools/testing/selftests/bpf/prog_tests/test_tunnel.c
+@@ -91,30 +91,15 @@
+ 
+ #define PING_ARGS "-i 0.01 -c 3 -w 10 -q"
+ 
+-#define SYS(fmt, ...)						\
+-	({							\
+-		char cmd[1024];					\
+-		snprintf(cmd, sizeof(cmd), fmt, ##__VA_ARGS__);	\
+-		if (!ASSERT_OK(system(cmd), cmd))		\
+-			goto fail;				\
+-	})
+-
+-#define SYS_NOFAIL(fmt, ...)					\
+-	({							\
+-		char cmd[1024];					\
+-		snprintf(cmd, sizeof(cmd), fmt, ##__VA_ARGS__);	\
+-		system(cmd);					\
+-	})
+-
+ static int config_device(void)
+ {
+-	SYS("ip netns add at_ns0");
+-	SYS("ip link add veth0 address " MAC_VETH1 " type veth peer name veth1");
+-	SYS("ip link set veth0 netns at_ns0");
+-	SYS("ip addr add " IP4_ADDR1_VETH1 "/24 dev veth1");
+-	SYS("ip link set dev veth1 up mtu 1500");
+-	SYS("ip netns exec at_ns0 ip addr add " IP4_ADDR_VETH0 "/24 dev veth0");
+-	SYS("ip netns exec at_ns0 ip link set dev veth0 up mtu 1500");
++	SYS(fail, "ip netns add at_ns0");
++	SYS(fail, "ip link add veth0 address " MAC_VETH1 " type veth peer name veth1");
++	SYS(fail, "ip link set veth0 netns at_ns0");
++	SYS(fail, "ip addr add " IP4_ADDR1_VETH1 "/24 dev veth1");
++	SYS(fail, "ip link set dev veth1 up mtu 1500");
++	SYS(fail, "ip netns exec at_ns0 ip addr add " IP4_ADDR_VETH0 "/24 dev veth0");
++	SYS(fail, "ip netns exec at_ns0 ip link set dev veth0 up mtu 1500");
+ 
+ 	return 0;
+ fail:
+@@ -132,23 +117,23 @@ static void cleanup(void)
+ static int add_vxlan_tunnel(void)
+ {
+ 	/* at_ns0 namespace */
+-	SYS("ip netns exec at_ns0 ip link add dev %s type vxlan external gbp dstport 4789",
++	SYS(fail, "ip netns exec at_ns0 ip link add dev %s type vxlan external gbp dstport 4789",
+ 	    VXLAN_TUNL_DEV0);
+-	SYS("ip netns exec at_ns0 ip link set dev %s address %s up",
++	SYS(fail, "ip netns exec at_ns0 ip link set dev %s address %s up",
+ 	    VXLAN_TUNL_DEV0, MAC_TUNL_DEV0);
+-	SYS("ip netns exec at_ns0 ip addr add dev %s %s/24",
++	SYS(fail, "ip netns exec at_ns0 ip addr add dev %s %s/24",
+ 	    VXLAN_TUNL_DEV0, IP4_ADDR_TUNL_DEV0);
+-	SYS("ip netns exec at_ns0 ip neigh add %s lladdr %s dev %s",
++	SYS(fail, "ip netns exec at_ns0 ip neigh add %s lladdr %s dev %s",
+ 	    IP4_ADDR_TUNL_DEV1, MAC_TUNL_DEV1, VXLAN_TUNL_DEV0);
+-	SYS("ip netns exec at_ns0 ip neigh add %s lladdr %s dev veth0",
++	SYS(fail, "ip netns exec at_ns0 ip neigh add %s lladdr %s dev veth0",
+ 	    IP4_ADDR2_VETH1, MAC_VETH1);
+ 
+ 	/* root namespace */
+-	SYS("ip link add dev %s type vxlan external gbp dstport 4789",
++	SYS(fail, "ip link add dev %s type vxlan external gbp dstport 4789",
+ 	    VXLAN_TUNL_DEV1);
+-	SYS("ip link set dev %s address %s up", VXLAN_TUNL_DEV1, MAC_TUNL_DEV1);
+-	SYS("ip addr add dev %s %s/24", VXLAN_TUNL_DEV1, IP4_ADDR_TUNL_DEV1);
+-	SYS("ip neigh add %s lladdr %s dev %s",
++	SYS(fail, "ip link set dev %s address %s up", VXLAN_TUNL_DEV1, MAC_TUNL_DEV1);
++	SYS(fail, "ip addr add dev %s %s/24", VXLAN_TUNL_DEV1, IP4_ADDR_TUNL_DEV1);
++	SYS(fail, "ip neigh add %s lladdr %s dev %s",
+ 	    IP4_ADDR_TUNL_DEV0, MAC_TUNL_DEV0, VXLAN_TUNL_DEV1);
+ 
+ 	return 0;
+@@ -165,26 +150,26 @@ static void delete_vxlan_tunnel(void)
+ 
+ static int add_ip6vxlan_tunnel(void)
+ {
+-	SYS("ip netns exec at_ns0 ip -6 addr add %s/96 dev veth0",
++	SYS(fail, "ip netns exec at_ns0 ip -6 addr add %s/96 dev veth0",
+ 	    IP6_ADDR_VETH0);
+-	SYS("ip netns exec at_ns0 ip link set dev veth0 up");
+-	SYS("ip -6 addr add %s/96 dev veth1", IP6_ADDR1_VETH1);
+-	SYS("ip -6 addr add %s/96 dev veth1", IP6_ADDR2_VETH1);
+-	SYS("ip link set dev veth1 up");
++	SYS(fail, "ip netns exec at_ns0 ip link set dev veth0 up");
++	SYS(fail, "ip -6 addr add %s/96 dev veth1", IP6_ADDR1_VETH1);
++	SYS(fail, "ip -6 addr add %s/96 dev veth1", IP6_ADDR2_VETH1);
++	SYS(fail, "ip link set dev veth1 up");
+ 
+ 	/* at_ns0 namespace */
+-	SYS("ip netns exec at_ns0 ip link add dev %s type vxlan external dstport 4789",
++	SYS(fail, "ip netns exec at_ns0 ip link add dev %s type vxlan external dstport 4789",
+ 	    IP6VXLAN_TUNL_DEV0);
+-	SYS("ip netns exec at_ns0 ip addr add dev %s %s/24",
++	SYS(fail, "ip netns exec at_ns0 ip addr add dev %s %s/24",
+ 	    IP6VXLAN_TUNL_DEV0, IP4_ADDR_TUNL_DEV0);
+-	SYS("ip netns exec at_ns0 ip link set dev %s address %s up",
++	SYS(fail, "ip netns exec at_ns0 ip link set dev %s address %s up",
+ 	    IP6VXLAN_TUNL_DEV0, MAC_TUNL_DEV0);
+ 
+ 	/* root namespace */
+-	SYS("ip link add dev %s type vxlan external dstport 4789",
++	SYS(fail, "ip link add dev %s type vxlan external dstport 4789",
+ 	    IP6VXLAN_TUNL_DEV1);
+-	SYS("ip addr add dev %s %s/24", IP6VXLAN_TUNL_DEV1, IP4_ADDR_TUNL_DEV1);
+-	SYS("ip link set dev %s address %s up",
++	SYS(fail, "ip addr add dev %s %s/24", IP6VXLAN_TUNL_DEV1, IP4_ADDR_TUNL_DEV1);
++	SYS(fail, "ip link set dev %s address %s up",
+ 	    IP6VXLAN_TUNL_DEV1, MAC_TUNL_DEV1);
+ 
+ 	return 0;
+@@ -205,7 +190,7 @@ static void delete_ip6vxlan_tunnel(void)
+ 
+ static int test_ping(int family, const char *addr)
+ {
+-	SYS("%s %s %s > /dev/null", ping_command(family), PING_ARGS, addr);
++	SYS(fail, "%s %s %s > /dev/null", ping_command(family), PING_ARGS, addr);
+ 	return 0;
+ fail:
+ 	return -1;
+diff --git a/tools/testing/selftests/bpf/prog_tests/xdp_bonding.c b/tools/testing/selftests/bpf/prog_tests/xdp_bonding.c
+index 5e3a26b15ec62..d19f79048ff6f 100644
+--- a/tools/testing/selftests/bpf/prog_tests/xdp_bonding.c
++++ b/tools/testing/selftests/bpf/prog_tests/xdp_bonding.c
+@@ -141,41 +141,33 @@ static const char * const xmit_policy_names[] = {
+ static int bonding_setup(struct skeletons *skeletons, int mode, int xmit_policy,
+ 			 int bond_both_attach)
+ {
+-#define SYS(fmt, ...)						\
+-	({							\
+-		char cmd[1024];					\
+-		snprintf(cmd, sizeof(cmd), fmt, ##__VA_ARGS__);	\
+-		if (!ASSERT_OK(system(cmd), cmd))		\
+-			return -1;				\
+-	})
+-
+-	SYS("ip netns add ns_dst");
+-	SYS("ip link add veth1_1 type veth peer name veth2_1 netns ns_dst");
+-	SYS("ip link add veth1_2 type veth peer name veth2_2 netns ns_dst");
+-
+-	SYS("ip link add bond1 type bond mode %s xmit_hash_policy %s",
++	SYS(fail, "ip netns add ns_dst");
++	SYS(fail, "ip link add veth1_1 type veth peer name veth2_1 netns ns_dst");
++	SYS(fail, "ip link add veth1_2 type veth peer name veth2_2 netns ns_dst");
++
++	SYS(fail, "ip link add bond1 type bond mode %s xmit_hash_policy %s",
+ 	    mode_names[mode], xmit_policy_names[xmit_policy]);
+-	SYS("ip link set bond1 up address " BOND1_MAC_STR " addrgenmode none");
+-	SYS("ip -netns ns_dst link add bond2 type bond mode %s xmit_hash_policy %s",
++	SYS(fail, "ip link set bond1 up address " BOND1_MAC_STR " addrgenmode none");
++	SYS(fail, "ip -netns ns_dst link add bond2 type bond mode %s xmit_hash_policy %s",
+ 	    mode_names[mode], xmit_policy_names[xmit_policy]);
+-	SYS("ip -netns ns_dst link set bond2 up address " BOND2_MAC_STR " addrgenmode none");
++	SYS(fail, "ip -netns ns_dst link set bond2 up address " BOND2_MAC_STR " addrgenmode none");
+ 
+-	SYS("ip link set veth1_1 master bond1");
++	SYS(fail, "ip link set veth1_1 master bond1");
+ 	if (bond_both_attach == BOND_BOTH_AND_ATTACH) {
+-		SYS("ip link set veth1_2 master bond1");
++		SYS(fail, "ip link set veth1_2 master bond1");
+ 	} else {
+-		SYS("ip link set veth1_2 up addrgenmode none");
++		SYS(fail, "ip link set veth1_2 up addrgenmode none");
+ 
+ 		if (xdp_attach(skeletons, skeletons->xdp_dummy->progs.xdp_dummy_prog, "veth1_2"))
+ 			return -1;
+ 	}
+ 
+-	SYS("ip -netns ns_dst link set veth2_1 master bond2");
++	SYS(fail, "ip -netns ns_dst link set veth2_1 master bond2");
+ 
+ 	if (bond_both_attach == BOND_BOTH_AND_ATTACH)
+-		SYS("ip -netns ns_dst link set veth2_2 master bond2");
++		SYS(fail, "ip -netns ns_dst link set veth2_2 master bond2");
+ 	else
+-		SYS("ip -netns ns_dst link set veth2_2 up addrgenmode none");
++		SYS(fail, "ip -netns ns_dst link set veth2_2 up addrgenmode none");
+ 
+ 	/* Load a dummy program on sending side as with veth peer needs to have a
+ 	 * XDP program loaded as well.
+@@ -194,8 +186,8 @@ static int bonding_setup(struct skeletons *skeletons, int mode, int xmit_policy,
+ 	}
+ 
+ 	return 0;
+-
+-#undef SYS
++fail:
++	return -1;
+ }
+ 
+ static void bonding_cleanup(struct skeletons *skeletons)
+diff --git a/tools/testing/selftests/bpf/prog_tests/xdp_do_redirect.c b/tools/testing/selftests/bpf/prog_tests/xdp_do_redirect.c
+index 8251a0fc6ee94..fde13010980d7 100644
+--- a/tools/testing/selftests/bpf/prog_tests/xdp_do_redirect.c
++++ b/tools/testing/selftests/bpf/prog_tests/xdp_do_redirect.c
+@@ -12,14 +12,6 @@
+ #include <uapi/linux/netdev.h>
+ #include "test_xdp_do_redirect.skel.h"
+ 
+-#define SYS(fmt, ...)						\
+-	({							\
+-		char cmd[1024];					\
+-		snprintf(cmd, sizeof(cmd), fmt, ##__VA_ARGS__);	\
+-		if (!ASSERT_OK(system(cmd), cmd))		\
+-			goto out;				\
+-	})
+-
+ struct udp_packet {
+ 	struct ethhdr eth;
+ 	struct ipv6hdr iph;
+@@ -127,19 +119,19 @@ void test_xdp_do_redirect(void)
+ 	 * iface and NUM_PKTS-2 in the TC hook. We match the packets on the UDP
+ 	 * payload.
+ 	 */
+-	SYS("ip netns add testns");
++	SYS(out, "ip netns add testns");
+ 	nstoken = open_netns("testns");
+ 	if (!ASSERT_OK_PTR(nstoken, "setns"))
+ 		goto out;
+ 
+-	SYS("ip link add veth_src type veth peer name veth_dst");
+-	SYS("ip link set dev veth_src address 00:11:22:33:44:55");
+-	SYS("ip link set dev veth_dst address 66:77:88:99:aa:bb");
+-	SYS("ip link set dev veth_src up");
+-	SYS("ip link set dev veth_dst up");
+-	SYS("ip addr add dev veth_src fc00::1/64");
+-	SYS("ip addr add dev veth_dst fc00::2/64");
+-	SYS("ip neigh add fc00::2 dev veth_src lladdr 66:77:88:99:aa:bb");
++	SYS(out, "ip link add veth_src type veth peer name veth_dst");
++	SYS(out, "ip link set dev veth_src address 00:11:22:33:44:55");
++	SYS(out, "ip link set dev veth_dst address 66:77:88:99:aa:bb");
++	SYS(out, "ip link set dev veth_src up");
++	SYS(out, "ip link set dev veth_dst up");
++	SYS(out, "ip addr add dev veth_src fc00::1/64");
++	SYS(out, "ip addr add dev veth_dst fc00::2/64");
++	SYS(out, "ip neigh add fc00::2 dev veth_src lladdr 66:77:88:99:aa:bb");
+ 
+ 	/* We enable forwarding in the test namespace because that will cause
+ 	 * the packets that go through the kernel stack (with XDP_PASS) to be
+@@ -152,7 +144,7 @@ void test_xdp_do_redirect(void)
+ 	 * code didn't have this, so we keep the test behaviour to make sure the
+ 	 * bug doesn't resurface.
+ 	 */
+-	SYS("sysctl -qw net.ipv6.conf.all.forwarding=1");
++	SYS(out, "sysctl -qw net.ipv6.conf.all.forwarding=1");
+ 
+ 	ifindex_src = if_nametoindex("veth_src");
+ 	ifindex_dst = if_nametoindex("veth_dst");
+@@ -250,6 +242,6 @@ out_tc:
+ out:
+ 	if (nstoken)
+ 		close_netns(nstoken);
+-	system("ip netns del testns");
++	SYS_NOFAIL("ip netns del testns");
+ 	test_xdp_do_redirect__destroy(skel);
+ }
+diff --git a/tools/testing/selftests/bpf/prog_tests/xdp_metadata.c b/tools/testing/selftests/bpf/prog_tests/xdp_metadata.c
+index 8c5e98da9ae9f..626c461fa34d8 100644
+--- a/tools/testing/selftests/bpf/prog_tests/xdp_metadata.c
++++ b/tools/testing/selftests/bpf/prog_tests/xdp_metadata.c
+@@ -34,11 +34,6 @@
+ #define PREFIX_LEN "8"
+ #define FAMILY AF_INET
+ 
+-#define SYS(cmd) ({ \
+-	if (!ASSERT_OK(system(cmd), (cmd))) \
+-		goto out; \
+-})
+-
+ struct xsk {
+ 	void *umem_area;
+ 	struct xsk_umem *umem;
+@@ -300,16 +295,16 @@ void test_xdp_metadata(void)
+ 
+ 	/* Setup new networking namespace, with a veth pair. */
+ 
+-	SYS("ip netns add xdp_metadata");
++	SYS(out, "ip netns add xdp_metadata");
+ 	tok = open_netns("xdp_metadata");
+-	SYS("ip link add numtxqueues 1 numrxqueues 1 " TX_NAME
++	SYS(out, "ip link add numtxqueues 1 numrxqueues 1 " TX_NAME
+ 	    " type veth peer " RX_NAME " numtxqueues 1 numrxqueues 1");
+-	SYS("ip link set dev " TX_NAME " address 00:00:00:00:00:01");
+-	SYS("ip link set dev " RX_NAME " address 00:00:00:00:00:02");
+-	SYS("ip link set dev " TX_NAME " up");
+-	SYS("ip link set dev " RX_NAME " up");
+-	SYS("ip addr add " TX_ADDR "/" PREFIX_LEN " dev " TX_NAME);
+-	SYS("ip addr add " RX_ADDR "/" PREFIX_LEN " dev " RX_NAME);
++	SYS(out, "ip link set dev " TX_NAME " address 00:00:00:00:00:01");
++	SYS(out, "ip link set dev " RX_NAME " address 00:00:00:00:00:02");
++	SYS(out, "ip link set dev " TX_NAME " up");
++	SYS(out, "ip link set dev " RX_NAME " up");
++	SYS(out, "ip addr add " TX_ADDR "/" PREFIX_LEN " dev " TX_NAME);
++	SYS(out, "ip addr add " RX_ADDR "/" PREFIX_LEN " dev " RX_NAME);
+ 
+ 	rx_ifindex = if_nametoindex(RX_NAME);
+ 	tx_ifindex = if_nametoindex(TX_NAME);
+@@ -407,5 +402,5 @@ out:
+ 	xdp_metadata__destroy(bpf_obj);
+ 	if (tok)
+ 		close_netns(tok);
+-	system("ip netns del xdp_metadata");
++	SYS_NOFAIL("ip netns del xdp_metadata");
+ }
+diff --git a/tools/testing/selftests/bpf/prog_tests/xdp_synproxy.c b/tools/testing/selftests/bpf/prog_tests/xdp_synproxy.c
+index c72083885b6d7..8b50a992d233b 100644
+--- a/tools/testing/selftests/bpf/prog_tests/xdp_synproxy.c
++++ b/tools/testing/selftests/bpf/prog_tests/xdp_synproxy.c
+@@ -8,11 +8,6 @@
+ 
+ #define CMD_OUT_BUF_SIZE 1023
+ 
+-#define SYS(cmd) ({ \
+-	if (!ASSERT_OK(system(cmd), (cmd))) \
+-		goto out; \
+-})
+-
+ #define SYS_OUT(cmd, ...) ({ \
+ 	char buf[1024]; \
+ 	snprintf(buf, sizeof(buf), (cmd), ##__VA_ARGS__); \
+@@ -69,37 +64,37 @@ static void test_synproxy(bool xdp)
+ 	char buf[CMD_OUT_BUF_SIZE];
+ 	size_t size;
+ 
+-	SYS("ip netns add synproxy");
++	SYS(out, "ip netns add synproxy");
+ 
+-	SYS("ip link add tmp0 type veth peer name tmp1");
+-	SYS("ip link set tmp1 netns synproxy");
+-	SYS("ip link set tmp0 up");
+-	SYS("ip addr replace 198.18.0.1/24 dev tmp0");
++	SYS(out, "ip link add tmp0 type veth peer name tmp1");
++	SYS(out, "ip link set tmp1 netns synproxy");
++	SYS(out, "ip link set tmp0 up");
++	SYS(out, "ip addr replace 198.18.0.1/24 dev tmp0");
+ 
+ 	/* When checksum offload is enabled, the XDP program sees wrong
+ 	 * checksums and drops packets.
+ 	 */
+-	SYS("ethtool -K tmp0 tx off");
++	SYS(out, "ethtool -K tmp0 tx off");
+ 	if (xdp)
+ 		/* Workaround required for veth. */
+-		SYS("ip link set tmp0 xdp object xdp_dummy.bpf.o section xdp 2> /dev/null");
++		SYS(out, "ip link set tmp0 xdp object xdp_dummy.bpf.o section xdp 2> /dev/null");
+ 
+ 	ns = open_netns("synproxy");
+ 	if (!ASSERT_OK_PTR(ns, "setns"))
+ 		goto out;
+ 
+-	SYS("ip link set lo up");
+-	SYS("ip link set tmp1 up");
+-	SYS("ip addr replace 198.18.0.2/24 dev tmp1");
+-	SYS("sysctl -w net.ipv4.tcp_syncookies=2");
+-	SYS("sysctl -w net.ipv4.tcp_timestamps=1");
+-	SYS("sysctl -w net.netfilter.nf_conntrack_tcp_loose=0");
+-	SYS("iptables-legacy -t raw -I PREROUTING \
++	SYS(out, "ip link set lo up");
++	SYS(out, "ip link set tmp1 up");
++	SYS(out, "ip addr replace 198.18.0.2/24 dev tmp1");
++	SYS(out, "sysctl -w net.ipv4.tcp_syncookies=2");
++	SYS(out, "sysctl -w net.ipv4.tcp_timestamps=1");
++	SYS(out, "sysctl -w net.netfilter.nf_conntrack_tcp_loose=0");
++	SYS(out, "iptables-legacy -t raw -I PREROUTING \
+ 	    -i tmp1 -p tcp -m tcp --syn --dport 8080 -j CT --notrack");
+-	SYS("iptables-legacy -t filter -A INPUT \
++	SYS(out, "iptables-legacy -t filter -A INPUT \
+ 	    -i tmp1 -p tcp -m tcp --dport 8080 -m state --state INVALID,UNTRACKED \
+ 	    -j SYNPROXY --sack-perm --timestamp --wscale 7 --mss 1460");
+-	SYS("iptables-legacy -t filter -A INPUT \
++	SYS(out, "iptables-legacy -t filter -A INPUT \
+ 	    -i tmp1 -m state --state INVALID -j DROP");
+ 
+ 	ctrl_file = SYS_OUT("./xdp_synproxy --iface tmp1 --ports 8080 \
+@@ -170,8 +165,8 @@ out:
+ 	if (ns)
+ 		close_netns(ns);
+ 
+-	system("ip link del tmp0");
+-	system("ip netns del synproxy");
++	SYS_NOFAIL("ip link del tmp0");
++	SYS_NOFAIL("ip netns del synproxy");
+ }
+ 
+ void test_xdp_synproxy(void)
+diff --git a/tools/testing/selftests/bpf/prog_tests/xfrm_info.c b/tools/testing/selftests/bpf/prog_tests/xfrm_info.c
+index 8b03c9bb48625..d37f5394e199e 100644
+--- a/tools/testing/selftests/bpf/prog_tests/xfrm_info.c
++++ b/tools/testing/selftests/bpf/prog_tests/xfrm_info.c
+@@ -69,21 +69,6 @@
+     "proto esp aead 'rfc4106(gcm(aes))' " \
+     "0xe4d8f4b4da1df18a3510b3781496daa82488b713 128 mode tunnel "
+ 
+-#define SYS(fmt, ...)						\
+-	({							\
+-		char cmd[1024];					\
+-		snprintf(cmd, sizeof(cmd), fmt, ##__VA_ARGS__);	\
+-		if (!ASSERT_OK(system(cmd), cmd))		\
+-			goto fail;				\
+-	})
+-
+-#define SYS_NOFAIL(fmt, ...)					\
+-	({							\
+-		char cmd[1024];					\
+-		snprintf(cmd, sizeof(cmd), fmt, ##__VA_ARGS__);	\
+-		system(cmd);					\
+-	})
+-
+ static int attach_tc_prog(struct bpf_tc_hook *hook, int igr_fd, int egr_fd)
+ {
+ 	LIBBPF_OPTS(bpf_tc_opts, opts1, .handle = 1, .priority = 1,
+@@ -126,23 +111,23 @@ static void cleanup(void)
+ 
+ static int config_underlay(void)
+ {
+-	SYS("ip netns add " NS0);
+-	SYS("ip netns add " NS1);
+-	SYS("ip netns add " NS2);
++	SYS(fail, "ip netns add " NS0);
++	SYS(fail, "ip netns add " NS1);
++	SYS(fail, "ip netns add " NS2);
+ 
+ 	/* NS0 <-> NS1 [veth01 <-> veth10] */
+-	SYS("ip link add veth01 netns " NS0 " type veth peer name veth10 netns " NS1);
+-	SYS("ip -net " NS0 " addr add " IP4_ADDR_VETH01 "/24 dev veth01");
+-	SYS("ip -net " NS0 " link set dev veth01 up");
+-	SYS("ip -net " NS1 " addr add " IP4_ADDR_VETH10 "/24 dev veth10");
+-	SYS("ip -net " NS1 " link set dev veth10 up");
++	SYS(fail, "ip link add veth01 netns " NS0 " type veth peer name veth10 netns " NS1);
++	SYS(fail, "ip -net " NS0 " addr add " IP4_ADDR_VETH01 "/24 dev veth01");
++	SYS(fail, "ip -net " NS0 " link set dev veth01 up");
++	SYS(fail, "ip -net " NS1 " addr add " IP4_ADDR_VETH10 "/24 dev veth10");
++	SYS(fail, "ip -net " NS1 " link set dev veth10 up");
+ 
+ 	/* NS0 <-> NS2 [veth02 <-> veth20] */
+-	SYS("ip link add veth02 netns " NS0 " type veth peer name veth20 netns " NS2);
+-	SYS("ip -net " NS0 " addr add " IP4_ADDR_VETH02 "/24 dev veth02");
+-	SYS("ip -net " NS0 " link set dev veth02 up");
+-	SYS("ip -net " NS2 " addr add " IP4_ADDR_VETH20 "/24 dev veth20");
+-	SYS("ip -net " NS2 " link set dev veth20 up");
++	SYS(fail, "ip link add veth02 netns " NS0 " type veth peer name veth20 netns " NS2);
++	SYS(fail, "ip -net " NS0 " addr add " IP4_ADDR_VETH02 "/24 dev veth02");
++	SYS(fail, "ip -net " NS0 " link set dev veth02 up");
++	SYS(fail, "ip -net " NS2 " addr add " IP4_ADDR_VETH20 "/24 dev veth20");
++	SYS(fail, "ip -net " NS2 " link set dev veth20 up");
+ 
+ 	return 0;
+ fail:
+@@ -153,20 +138,20 @@ static int setup_xfrm_tunnel_ns(const char *ns, const char *ipv4_local,
+ 				const char *ipv4_remote, int if_id)
+ {
+ 	/* State: local -> remote */
+-	SYS("ip -net %s xfrm state add src %s dst %s spi 1 "
++	SYS(fail, "ip -net %s xfrm state add src %s dst %s spi 1 "
+ 	    ESP_DUMMY_PARAMS "if_id %d", ns, ipv4_local, ipv4_remote, if_id);
+ 
+ 	/* State: local <- remote */
+-	SYS("ip -net %s xfrm state add src %s dst %s spi 1 "
++	SYS(fail, "ip -net %s xfrm state add src %s dst %s spi 1 "
+ 	    ESP_DUMMY_PARAMS "if_id %d", ns, ipv4_remote, ipv4_local, if_id);
+ 
+ 	/* Policy: local -> remote */
+-	SYS("ip -net %s xfrm policy add dir out src 0.0.0.0/0 dst 0.0.0.0/0 "
++	SYS(fail, "ip -net %s xfrm policy add dir out src 0.0.0.0/0 dst 0.0.0.0/0 "
+ 	    "if_id %d tmpl src %s dst %s proto esp mode tunnel if_id %d", ns,
+ 	    if_id, ipv4_local, ipv4_remote, if_id);
+ 
+ 	/* Policy: local <- remote */
+-	SYS("ip -net %s xfrm policy add dir in src 0.0.0.0/0 dst 0.0.0.0/0 "
++	SYS(fail, "ip -net %s xfrm policy add dir in src 0.0.0.0/0 dst 0.0.0.0/0 "
+ 	    "if_id %d tmpl src %s dst %s proto esp mode tunnel if_id %d", ns,
+ 	    if_id, ipv4_remote, ipv4_local, if_id);
+ 
+@@ -274,16 +259,16 @@ static int config_overlay(void)
+ 	if (!ASSERT_OK(setup_xfrmi_external_dev(NS0), "xfrmi"))
+ 		goto fail;
+ 
+-	SYS("ip -net " NS0 " addr add 192.168.1.100/24 dev ipsec0");
+-	SYS("ip -net " NS0 " link set dev ipsec0 up");
++	SYS(fail, "ip -net " NS0 " addr add 192.168.1.100/24 dev ipsec0");
++	SYS(fail, "ip -net " NS0 " link set dev ipsec0 up");
+ 
+-	SYS("ip -net " NS1 " link add ipsec0 type xfrm if_id %d", IF_ID_1);
+-	SYS("ip -net " NS1 " addr add 192.168.1.200/24 dev ipsec0");
+-	SYS("ip -net " NS1 " link set dev ipsec0 up");
++	SYS(fail, "ip -net " NS1 " link add ipsec0 type xfrm if_id %d", IF_ID_1);
++	SYS(fail, "ip -net " NS1 " addr add 192.168.1.200/24 dev ipsec0");
++	SYS(fail, "ip -net " NS1 " link set dev ipsec0 up");
+ 
+-	SYS("ip -net " NS2 " link add ipsec0 type xfrm if_id %d", IF_ID_2);
+-	SYS("ip -net " NS2 " addr add 192.168.1.200/24 dev ipsec0");
+-	SYS("ip -net " NS2 " link set dev ipsec0 up");
++	SYS(fail, "ip -net " NS2 " link add ipsec0 type xfrm if_id %d", IF_ID_2);
++	SYS(fail, "ip -net " NS2 " addr add 192.168.1.200/24 dev ipsec0");
++	SYS(fail, "ip -net " NS2 " link set dev ipsec0 up");
+ 
+ 	return 0;
+ fail:
+@@ -294,7 +279,7 @@ static int test_xfrm_ping(struct xfrm_info *skel, u32 if_id)
+ {
+ 	skel->bss->req_if_id = if_id;
+ 
+-	SYS("ping -i 0.01 -c 3 -w 10 -q 192.168.1.200 > /dev/null");
++	SYS(fail, "ping -i 0.01 -c 3 -w 10 -q 192.168.1.200 > /dev/null");
+ 
+ 	if (!ASSERT_EQ(skel->bss->resp_if_id, if_id, "if_id"))
+ 		goto fail;
+diff --git a/tools/testing/selftests/bpf/test_progs.h b/tools/testing/selftests/bpf/test_progs.h
+index d5d51ec97ec87..9fbdc57c5b570 100644
+--- a/tools/testing/selftests/bpf/test_progs.h
++++ b/tools/testing/selftests/bpf/test_progs.h
+@@ -376,6 +376,21 @@ int test__join_cgroup(const char *path);
+ 	___ok;								\
+ })
+ 
++#define SYS(goto_label, fmt, ...)					\
++	({								\
++		char cmd[1024];						\
++		snprintf(cmd, sizeof(cmd), fmt, ##__VA_ARGS__);		\
++		if (!ASSERT_OK(system(cmd), cmd))			\
++			goto goto_label;				\
++	})
++
++#define SYS_NOFAIL(fmt, ...)						\
++	({								\
++		char cmd[1024];						\
++		snprintf(cmd, sizeof(cmd), fmt, ##__VA_ARGS__);		\
++		system(cmd);						\
++	})
++
+ static inline __u64 ptr_to_u64(const void *ptr)
+ {
+ 	return (__u64) (unsigned long) ptr;
+diff --git a/tools/testing/selftests/bpf/test_xsk.sh b/tools/testing/selftests/bpf/test_xsk.sh
+index b077cf58f8258..377fb157a57c5 100755
+--- a/tools/testing/selftests/bpf/test_xsk.sh
++++ b/tools/testing/selftests/bpf/test_xsk.sh
+@@ -116,6 +116,7 @@ setup_vethPairs() {
+ 	ip link add ${VETH0} numtxqueues 4 numrxqueues 4 type veth peer name ${VETH1} numtxqueues 4 numrxqueues 4
+ 	if [ -f /proc/net/if_inet6 ]; then
+ 		echo 1 > /proc/sys/net/ipv6/conf/${VETH0}/disable_ipv6
++		echo 1 > /proc/sys/net/ipv6/conf/${VETH1}/disable_ipv6
+ 	fi
+ 	if [[ $verbose -eq 1 ]]; then
+ 	        echo "setting up ${VETH1}"
+diff --git a/tools/testing/selftests/bpf/testing_helpers.c b/tools/testing/selftests/bpf/testing_helpers.c
+index 6c44153755e66..3fe98ffed38a6 100644
+--- a/tools/testing/selftests/bpf/testing_helpers.c
++++ b/tools/testing/selftests/bpf/testing_helpers.c
+@@ -229,3 +229,23 @@ int bpf_test_load_program(enum bpf_prog_type type, const struct bpf_insn *insns,
+ 
+ 	return bpf_prog_load(type, NULL, license, insns, insns_cnt, &opts);
+ }
++
++__u64 read_perf_max_sample_freq(void)
++{
++	__u64 sample_freq = 5000; /* fallback to 5000 on error */
++	FILE *f;
++
++	f = fopen("/proc/sys/kernel/perf_event_max_sample_rate", "r");
++	if (f == NULL) {
++		printf("Failed to open /proc/sys/kernel/perf_event_max_sample_rate: err %d\n"
++		       "return default value: 5000\n", -errno);
++		return sample_freq;
++	}
++	if (fscanf(f, "%llu", &sample_freq) != 1) {
++		printf("Failed to parse /proc/sys/kernel/perf_event_max_sample_rate: err %d\n"
++		       "return default value: 5000\n", -errno);
++	}
++
++	fclose(f);
++	return sample_freq;
++}
+diff --git a/tools/testing/selftests/bpf/testing_helpers.h b/tools/testing/selftests/bpf/testing_helpers.h
+index 6ec00bf79cb55..eb8790f928e4c 100644
+--- a/tools/testing/selftests/bpf/testing_helpers.h
++++ b/tools/testing/selftests/bpf/testing_helpers.h
+@@ -20,3 +20,5 @@ struct test_filter_set;
+ int parse_test_list(const char *s,
+ 		    struct test_filter_set *test_set,
+ 		    bool is_glob_pattern);
++
++__u64 read_perf_max_sample_freq(void);
+diff --git a/tools/testing/selftests/bpf/xskxceiver.c b/tools/testing/selftests/bpf/xskxceiver.c
+index a17655107a94c..b6910df710d1e 100644
+--- a/tools/testing/selftests/bpf/xskxceiver.c
++++ b/tools/testing/selftests/bpf/xskxceiver.c
+@@ -631,7 +631,6 @@ static struct pkt_stream *pkt_stream_generate(struct xsk_umem_info *umem, u32 nb
+ 	if (!pkt_stream)
+ 		exit_with_error(ENOMEM);
+ 
+-	pkt_stream->nb_pkts = nb_pkts;
+ 	for (i = 0; i < nb_pkts; i++) {
+ 		pkt_set(umem, &pkt_stream->pkts[i], (i % umem->num_frames) * umem->frame_size,
+ 			pkt_len);
+@@ -1124,7 +1123,14 @@ static int validate_rx_dropped(struct ifobject *ifobject)
+ 	if (err)
+ 		return TEST_FAILURE;
+ 
+-	if (stats.rx_dropped == ifobject->pkt_stream->nb_pkts / 2)
++	/* The receiver calls getsockopt after receiving the last (valid)
++	 * packet which is not the final packet sent in this test (valid and
++	 * invalid packets are sent in alternating fashion with the final
++	 * packet being invalid). Since the last packet may or may not have
++	 * been dropped already, both outcomes must be allowed.
++	 */
++	if (stats.rx_dropped == ifobject->pkt_stream->nb_pkts / 2 ||
++	    stats.rx_dropped == ifobject->pkt_stream->nb_pkts / 2 - 1)
+ 		return TEST_PASS;
+ 
+ 	return TEST_FAILURE;
+@@ -1635,6 +1641,7 @@ static void testapp_single_pkt(struct test_spec *test)
+ 
+ static void testapp_invalid_desc(struct test_spec *test)
+ {
++	u64 umem_size = test->ifobj_tx->umem->num_frames * test->ifobj_tx->umem->frame_size;
+ 	struct pkt pkts[] = {
+ 		/* Zero packet address allowed */
+ 		{0, PKT_SIZE, 0, true},
+@@ -1645,9 +1652,9 @@ static void testapp_invalid_desc(struct test_spec *test)
+ 		/* Packet too large */
+ 		{0x2000, XSK_UMEM__INVALID_FRAME_SIZE, 0, false},
+ 		/* After umem ends */
+-		{UMEM_SIZE, PKT_SIZE, 0, false},
++		{umem_size, PKT_SIZE, 0, false},
+ 		/* Straddle the end of umem */
+-		{UMEM_SIZE - PKT_SIZE / 2, PKT_SIZE, 0, false},
++		{umem_size - PKT_SIZE / 2, PKT_SIZE, 0, false},
+ 		/* Straddle a page boundrary */
+ 		{0x3000 - PKT_SIZE / 2, PKT_SIZE, 0, false},
+ 		/* Straddle a 2K boundrary */
+@@ -1665,8 +1672,8 @@ static void testapp_invalid_desc(struct test_spec *test)
+ 	}
+ 
+ 	if (test->ifobj_tx->shared_umem) {
+-		pkts[4].addr += UMEM_SIZE;
+-		pkts[5].addr += UMEM_SIZE;
++		pkts[4].addr += umem_size;
++		pkts[5].addr += umem_size;
+ 	}
+ 
+ 	pkt_stream_generate_custom(test, pkts, ARRAY_SIZE(pkts));
+diff --git a/tools/testing/selftests/bpf/xskxceiver.h b/tools/testing/selftests/bpf/xskxceiver.h
+index 3e8ec7d8ec32a..03ed33a279774 100644
+--- a/tools/testing/selftests/bpf/xskxceiver.h
++++ b/tools/testing/selftests/bpf/xskxceiver.h
+@@ -53,7 +53,6 @@
+ #define THREAD_TMOUT 3
+ #define DEFAULT_PKT_CNT (4 * 1024)
+ #define DEFAULT_UMEM_BUFFERS (DEFAULT_PKT_CNT / 4)
+-#define UMEM_SIZE (DEFAULT_UMEM_BUFFERS * XSK_UMEM__DEFAULT_FRAME_SIZE)
+ #define RX_FULL_RXQSIZE 32
+ #define UMEM_HEADROOM_TEST_SIZE 128
+ #define XSK_UMEM__INVALID_FRAME_SIZE (XSK_UMEM__DEFAULT_FRAME_SIZE + 1)
+diff --git a/tools/testing/selftests/clone3/clone3.c b/tools/testing/selftests/clone3/clone3.c
+index 4fce46afe6db8..e495f895a2cdd 100644
+--- a/tools/testing/selftests/clone3/clone3.c
++++ b/tools/testing/selftests/clone3/clone3.c
+@@ -129,7 +129,7 @@ int main(int argc, char *argv[])
+ 	uid_t uid = getuid();
+ 
+ 	ksft_print_header();
+-	ksft_set_plan(17);
++	ksft_set_plan(18);
+ 	test_clone3_supported();
+ 
+ 	/* Just a simple clone3() should return 0.*/
+@@ -198,5 +198,5 @@ int main(int argc, char *argv[])
+ 	/* Do a clone3() in a new time namespace */
+ 	test_clone3(CLONE_NEWTIME, 0, 0, CLONE3_ARGS_NO_TEST);
+ 
+-	return !ksft_get_fail_cnt() ? ksft_exit_pass() : ksft_exit_fail();
++	ksft_finished();
+ }
+diff --git a/tools/testing/selftests/powerpc/pmu/sampling_tests/mmcra_thresh_marked_sample_test.c b/tools/testing/selftests/powerpc/pmu/sampling_tests/mmcra_thresh_marked_sample_test.c
+index 022cc1655eb52..75527876ad3c1 100644
+--- a/tools/testing/selftests/powerpc/pmu/sampling_tests/mmcra_thresh_marked_sample_test.c
++++ b/tools/testing/selftests/powerpc/pmu/sampling_tests/mmcra_thresh_marked_sample_test.c
+@@ -63,9 +63,9 @@ static int mmcra_thresh_marked_sample(void)
+ 			get_mmcra_thd_stop(get_reg_value(intr_regs, "MMCRA"), 4));
+ 	FAIL_IF(EV_CODE_EXTRACT(event.attr.config, marked) !=
+ 			get_mmcra_marked(get_reg_value(intr_regs, "MMCRA"), 4));
+-	FAIL_IF(EV_CODE_EXTRACT(event.attr.config, sample >> 2) !=
++	FAIL_IF((EV_CODE_EXTRACT(event.attr.config, sample) >> 2) !=
+ 			get_mmcra_rand_samp_elig(get_reg_value(intr_regs, "MMCRA"), 4));
+-	FAIL_IF(EV_CODE_EXTRACT(event.attr.config, sample & 0x3) !=
++	FAIL_IF((EV_CODE_EXTRACT(event.attr.config, sample) & 0x3) !=
+ 			get_mmcra_sample_mode(get_reg_value(intr_regs, "MMCRA"), 4));
+ 	FAIL_IF(EV_CODE_EXTRACT(event.attr.config, sm) !=
+ 			get_mmcra_sm(get_reg_value(intr_regs, "MMCRA"), 4));
+diff --git a/tools/testing/selftests/resctrl/cache.c b/tools/testing/selftests/resctrl/cache.c
+index 68ff856d36f0b..0485863a169f2 100644
+--- a/tools/testing/selftests/resctrl/cache.c
++++ b/tools/testing/selftests/resctrl/cache.c
+@@ -244,10 +244,12 @@ int cat_val(struct resctrl_val_param *param)
+ 	while (1) {
+ 		if (!strncmp(resctrl_val, CAT_STR, sizeof(CAT_STR))) {
+ 			ret = param->setup(1, param);
+-			if (ret) {
++			if (ret == END_OF_TESTS) {
+ 				ret = 0;
+ 				break;
+ 			}
++			if (ret < 0)
++				break;
+ 			ret = reset_enable_llc_perf(bm_pid, param->cpu_no);
+ 			if (ret)
+ 				break;
+diff --git a/tools/testing/selftests/resctrl/cat_test.c b/tools/testing/selftests/resctrl/cat_test.c
+index 1c5e90c632548..2d3c7c77ab6cb 100644
+--- a/tools/testing/selftests/resctrl/cat_test.c
++++ b/tools/testing/selftests/resctrl/cat_test.c
+@@ -40,7 +40,7 @@ static int cat_setup(int num, ...)
+ 
+ 	/* Run NUM_OF_RUNS times */
+ 	if (p->num_of_runs >= NUM_OF_RUNS)
+-		return -1;
++		return END_OF_TESTS;
+ 
+ 	if (p->num_of_runs == 0) {
+ 		sprintf(schemata, "%lx", p->mask);
+diff --git a/tools/testing/selftests/resctrl/cmt_test.c b/tools/testing/selftests/resctrl/cmt_test.c
+index 8968e36db99d7..3b0454e7fc826 100644
+--- a/tools/testing/selftests/resctrl/cmt_test.c
++++ b/tools/testing/selftests/resctrl/cmt_test.c
+@@ -32,7 +32,7 @@ static int cmt_setup(int num, ...)
+ 
+ 	/* Run NUM_OF_RUNS times */
+ 	if (p->num_of_runs >= NUM_OF_RUNS)
+-		return -1;
++		return END_OF_TESTS;
+ 
+ 	p->num_of_runs++;
+ 
+diff --git a/tools/testing/selftests/resctrl/fill_buf.c b/tools/testing/selftests/resctrl/fill_buf.c
+index 56ccbeae0638d..c20d0a7ecbe63 100644
+--- a/tools/testing/selftests/resctrl/fill_buf.c
++++ b/tools/testing/selftests/resctrl/fill_buf.c
+@@ -68,6 +68,8 @@ static void *malloc_and_init_memory(size_t s)
+ 	size_t s64;
+ 
+ 	void *p = memalign(PAGE_SIZE, s);
++	if (!p)
++		return NULL;
+ 
+ 	p64 = (uint64_t *)p;
+ 	s64 = s / sizeof(uint64_t);
+diff --git a/tools/testing/selftests/resctrl/mba_test.c b/tools/testing/selftests/resctrl/mba_test.c
+index 1a1bdb6180cf2..97dc98c0c9497 100644
+--- a/tools/testing/selftests/resctrl/mba_test.c
++++ b/tools/testing/selftests/resctrl/mba_test.c
+@@ -28,6 +28,7 @@ static int mba_setup(int num, ...)
+ 	struct resctrl_val_param *p;
+ 	char allocation_str[64];
+ 	va_list param;
++	int ret;
+ 
+ 	va_start(param, num);
+ 	p = va_arg(param, struct resctrl_val_param *);
+@@ -41,11 +42,15 @@ static int mba_setup(int num, ...)
+ 		return 0;
+ 
+ 	if (allocation < ALLOCATION_MIN || allocation > ALLOCATION_MAX)
+-		return -1;
++		return END_OF_TESTS;
+ 
+ 	sprintf(allocation_str, "%d", allocation);
+ 
+-	write_schemata(p->ctrlgrp, allocation_str, p->cpu_no, p->resctrl_val);
++	ret = write_schemata(p->ctrlgrp, allocation_str, p->cpu_no,
++			     p->resctrl_val);
++	if (ret < 0)
++		return ret;
++
+ 	allocation -= ALLOCATION_STEP;
+ 
+ 	return 0;
+diff --git a/tools/testing/selftests/resctrl/mbm_test.c b/tools/testing/selftests/resctrl/mbm_test.c
+index 8392e5c55ed02..280187628054d 100644
+--- a/tools/testing/selftests/resctrl/mbm_test.c
++++ b/tools/testing/selftests/resctrl/mbm_test.c
+@@ -95,7 +95,7 @@ static int mbm_setup(int num, ...)
+ 
+ 	/* Run NUM_OF_RUNS times */
+ 	if (num_of_runs++ >= NUM_OF_RUNS)
+-		return -1;
++		return END_OF_TESTS;
+ 
+ 	va_start(param, num);
+ 	p = va_arg(param, struct resctrl_val_param *);
+diff --git a/tools/testing/selftests/resctrl/resctrl.h b/tools/testing/selftests/resctrl/resctrl.h
+index f0ded31fb3c7c..f44fa2de4d986 100644
+--- a/tools/testing/selftests/resctrl/resctrl.h
++++ b/tools/testing/selftests/resctrl/resctrl.h
+@@ -37,6 +37,8 @@
+ #define ARCH_INTEL     1
+ #define ARCH_AMD       2
+ 
++#define END_OF_TESTS	1
++
+ #define PARENT_EXIT(err_msg)			\
+ 	do {					\
+ 		perror(err_msg);		\
+diff --git a/tools/testing/selftests/resctrl/resctrl_val.c b/tools/testing/selftests/resctrl/resctrl_val.c
+index b32b96356ec70..00864242d76c6 100644
+--- a/tools/testing/selftests/resctrl/resctrl_val.c
++++ b/tools/testing/selftests/resctrl/resctrl_val.c
+@@ -734,29 +734,24 @@ int resctrl_val(char **benchmark_cmd, struct resctrl_val_param *param)
+ 
+ 	/* Test runs until the callback setup() tells the test to stop. */
+ 	while (1) {
++		ret = param->setup(1, param);
++		if (ret == END_OF_TESTS) {
++			ret = 0;
++			break;
++		}
++		if (ret < 0)
++			break;
++
+ 		if (!strncmp(resctrl_val, MBM_STR, sizeof(MBM_STR)) ||
+ 		    !strncmp(resctrl_val, MBA_STR, sizeof(MBA_STR))) {
+-			ret = param->setup(1, param);
+-			if (ret) {
+-				ret = 0;
+-				break;
+-			}
+-
+ 			ret = measure_vals(param, &bw_resc_start);
+ 			if (ret)
+ 				break;
+ 		} else if (!strncmp(resctrl_val, CMT_STR, sizeof(CMT_STR))) {
+-			ret = param->setup(1, param);
+-			if (ret) {
+-				ret = 0;
+-				break;
+-			}
+ 			sleep(1);
+ 			ret = measure_cache_vals(param, bm_pid);
+ 			if (ret)
+ 				break;
+-		} else {
+-			break;
+ 		}
+ 	}
+ 
+diff --git a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/fq.json b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/fq.json
+index 8acb904d14193..3593fb8f79ad3 100644
+--- a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/fq.json
++++ b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/fq.json
+@@ -114,6 +114,28 @@
+             "$IP link del dev $DUMMY type dummy"
+         ]
+     },
++    {
++        "id": "10f7",
++        "name": "Create FQ with invalid initial_quantum setting",
++        "category": [
++            "qdisc",
++            "fq"
++        ],
++        "plugins": {
++            "requires": "nsPlugin"
++        },
++        "setup": [
++            "$IP link add dev $DUMMY type dummy || /bin/true"
++        ],
++        "cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root fq initial_quantum 0x80000000",
++        "expExitCode": "2",
++        "verifyCmd": "$TC qdisc show dev $DUMMY",
++        "matchPattern": "qdisc fq 1: root.*initial_quantum 2048Mb",
++        "matchCount": "0",
++        "teardown": [
++            "$IP link del dev $DUMMY type dummy"
++        ]
++    },
+     {
+         "id": "9398",
+         "name": "Create FQ with maxrate setting",
+diff --git a/tools/testing/selftests/user_events/ftrace_test.c b/tools/testing/selftests/user_events/ftrace_test.c
+index 404a2713dcae8..1bc26e6476fc3 100644
+--- a/tools/testing/selftests/user_events/ftrace_test.c
++++ b/tools/testing/selftests/user_events/ftrace_test.c
+@@ -294,6 +294,11 @@ TEST_F(user, write_events) {
+ 	ASSERT_NE(-1, writev(self->data_fd, (const struct iovec *)io, 3));
+ 	after = trace_bytes();
+ 	ASSERT_GT(after, before);
++
++	/* Negative index should fail with EINVAL */
++	reg.write_index = -1;
++	ASSERT_EQ(-1, writev(self->data_fd, (const struct iovec *)io, 3));
++	ASSERT_EQ(EINVAL, errno);
+ }
+ 
+ TEST_F(user, write_fault) {
+diff --git a/tools/testing/vsock/.gitignore b/tools/testing/vsock/.gitignore
+index 87ca2731cff94..a8adcfdc292ba 100644
+--- a/tools/testing/vsock/.gitignore
++++ b/tools/testing/vsock/.gitignore
+@@ -2,3 +2,4 @@
+ *.d
+ vsock_test
+ vsock_diag_test
++vsock_perf
+diff --git a/tools/tracing/rtla/src/timerlat_aa.c b/tools/tracing/rtla/src/timerlat_aa.c
+index ec4e0f4b0e6cd..1843fff66da5b 100644
+--- a/tools/tracing/rtla/src/timerlat_aa.c
++++ b/tools/tracing/rtla/src/timerlat_aa.c
+@@ -548,7 +548,7 @@ static void timerlat_thread_analysis(struct timerlat_aa_data *taa_data, int cpu,
+ 	exp_irq_ts = taa_data->timer_irq_start_time - taa_data->timer_irq_start_delay;
+ 
+ 	if (exp_irq_ts < taa_data->prev_irq_timstamp + taa_data->prev_irq_duration)
+-		printf("  Previous IRQ interference:	\t	up to %9.2f us",
++		printf("  Previous IRQ interference:	\t\t up to  %9.2f us\n",
+ 			ns_to_usf(taa_data->prev_irq_duration));
+ 
+ 	/*
+diff --git a/tools/verification/rv/src/rv.c b/tools/verification/rv/src/rv.c
+index e601cd9c411e1..1ddb855328165 100644
+--- a/tools/verification/rv/src/rv.c
++++ b/tools/verification/rv/src/rv.c
+@@ -74,7 +74,7 @@ static void rv_list(int argc, char **argv)
+ static void rv_mon(int argc, char **argv)
+ {
+ 	char *monitor_name;
+-	int i, run;
++	int i, run = 0;
+ 
+ 	static const char *const usage[] = {
+ 		"",


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [gentoo-commits] proj/linux-patches:6.3 commit in: /
@ 2023-05-11 16:15 Mike Pagano
  0 siblings, 0 replies; 23+ messages in thread
From: Mike Pagano @ 2023-05-11 16:15 UTC (permalink / raw
  To: gentoo-commits

commit:     8642c56064afcb9f7db671bc0bab71679edab6ad
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu May 11 16:14:34 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu May 11 16:14:34 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=8642c560

Remove redundant patch

Removed:
1520_nf-tables-make-deleted-anon-sets-inactive.patch

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                        |   4 -
 ...nf-tables-make-deleted-anon-sets-inactive.patch | 121 ---------------------
 2 files changed, 125 deletions(-)

diff --git a/0000_README b/0000_README
index d1663380..e13e9146 100644
--- a/0000_README
+++ b/0000_README
@@ -59,10 +59,6 @@ Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.
 
-Patch:  1520_fs-enable-link-security-restrictions-by-default.patch
-From:   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/patch/?id=c1592a89942e9678f7d9c8030efa777c0d57edab
-Desc:   netfilter: nf_tables: deactivate anonymous set from preparation phase
-
 Patch:  1700_sparc-address-warray-bound-warnings.patch
 From:		https://github.com/KSPP/linux/issues/109
 Desc:		Address -Warray-bounds warnings 

diff --git a/1520_nf-tables-make-deleted-anon-sets-inactive.patch b/1520_nf-tables-make-deleted-anon-sets-inactive.patch
deleted file mode 100644
index cd75de5c..00000000
--- a/1520_nf-tables-make-deleted-anon-sets-inactive.patch
+++ /dev/null
@@ -1,121 +0,0 @@
-From c1592a89942e9678f7d9c8030efa777c0d57edab Mon Sep 17 00:00:00 2001
-From: Pablo Neira Ayuso <pablo@netfilter.org>
-Date: Tue, 2 May 2023 10:25:24 +0200
-Subject: netfilter: nf_tables: deactivate anonymous set from preparation phase
-
-Toggle deleted anonymous sets as inactive in the next generation, so
-users cannot perform any update on it. Clear the generation bitmask
-in case the transaction is aborted.
-
-The following KASAN splat shows a set element deletion for a bound
-anonymous set that has been already removed in the same transaction.
-
-[   64.921510] ==================================================================
-[   64.923123] BUG: KASAN: wild-memory-access in nf_tables_commit+0xa24/0x1490 [nf_tables]
-[   64.924745] Write of size 8 at addr dead000000000122 by task test/890
-[   64.927903] CPU: 3 PID: 890 Comm: test Not tainted 6.3.0+ #253
-[   64.931120] Call Trace:
-[   64.932699]  <TASK>
-[   64.934292]  dump_stack_lvl+0x33/0x50
-[   64.935908]  ? nf_tables_commit+0xa24/0x1490 [nf_tables]
-[   64.937551]  kasan_report+0xda/0x120
-[   64.939186]  ? nf_tables_commit+0xa24/0x1490 [nf_tables]
-[   64.940814]  nf_tables_commit+0xa24/0x1490 [nf_tables]
-[   64.942452]  ? __kasan_slab_alloc+0x2d/0x60
-[   64.944070]  ? nf_tables_setelem_notify+0x190/0x190 [nf_tables]
-[   64.945710]  ? kasan_set_track+0x21/0x30
-[   64.947323]  nfnetlink_rcv_batch+0x709/0xd90 [nfnetlink]
-[   64.948898]  ? nfnetlink_rcv_msg+0x480/0x480 [nfnetlink]
-
-Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
----
- include/net/netfilter/nf_tables.h |  1 +
- net/netfilter/nf_tables_api.c     | 12 ++++++++++++
- net/netfilter/nft_dynset.c        |  2 +-
- net/netfilter/nft_lookup.c        |  2 +-
- net/netfilter/nft_objref.c        |  2 +-
- 5 files changed, 16 insertions(+), 3 deletions(-)
-
-diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
-index 3ed21d2d56590..2e24ea1d744c2 100644
---- a/include/net/netfilter/nf_tables.h
-+++ b/include/net/netfilter/nf_tables.h
-@@ -619,6 +619,7 @@ struct nft_set_binding {
- };
- 
- enum nft_trans_phase;
-+void nf_tables_activate_set(const struct nft_ctx *ctx, struct nft_set *set);
- void nf_tables_deactivate_set(const struct nft_ctx *ctx, struct nft_set *set,
- 			      struct nft_set_binding *binding,
- 			      enum nft_trans_phase phase);
-diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
-index 8b6c61a2196cb..59fb8320ab4d7 100644
---- a/net/netfilter/nf_tables_api.c
-+++ b/net/netfilter/nf_tables_api.c
-@@ -5127,12 +5127,24 @@ static void nf_tables_unbind_set(const struct nft_ctx *ctx, struct nft_set *set,
- 	}
- }
- 
-+void nf_tables_activate_set(const struct nft_ctx *ctx, struct nft_set *set)
-+{
-+	if (nft_set_is_anonymous(set))
-+		nft_clear(ctx->net, set);
-+
-+	set->use++;
-+}
-+EXPORT_SYMBOL_GPL(nf_tables_activate_set);
-+
- void nf_tables_deactivate_set(const struct nft_ctx *ctx, struct nft_set *set,
- 			      struct nft_set_binding *binding,
- 			      enum nft_trans_phase phase)
- {
- 	switch (phase) {
- 	case NFT_TRANS_PREPARE:
-+		if (nft_set_is_anonymous(set))
-+			nft_deactivate_next(ctx->net, set);
-+
- 		set->use--;
- 		return;
- 	case NFT_TRANS_ABORT:
-diff --git a/net/netfilter/nft_dynset.c b/net/netfilter/nft_dynset.c
-index 274579b1696e0..bd19c7aec92ee 100644
---- a/net/netfilter/nft_dynset.c
-+++ b/net/netfilter/nft_dynset.c
-@@ -342,7 +342,7 @@ static void nft_dynset_activate(const struct nft_ctx *ctx,
- {
- 	struct nft_dynset *priv = nft_expr_priv(expr);
- 
--	priv->set->use++;
-+	nf_tables_activate_set(ctx, priv->set);
- }
- 
- static void nft_dynset_destroy(const struct nft_ctx *ctx,
-diff --git a/net/netfilter/nft_lookup.c b/net/netfilter/nft_lookup.c
-index cecf8ab90e58f..03ef4fdaa460b 100644
---- a/net/netfilter/nft_lookup.c
-+++ b/net/netfilter/nft_lookup.c
-@@ -167,7 +167,7 @@ static void nft_lookup_activate(const struct nft_ctx *ctx,
- {
- 	struct nft_lookup *priv = nft_expr_priv(expr);
- 
--	priv->set->use++;
-+	nf_tables_activate_set(ctx, priv->set);
- }
- 
- static void nft_lookup_destroy(const struct nft_ctx *ctx,
-diff --git a/net/netfilter/nft_objref.c b/net/netfilter/nft_objref.c
-index cb37169608bab..a48dd5b5d45b1 100644
---- a/net/netfilter/nft_objref.c
-+++ b/net/netfilter/nft_objref.c
-@@ -185,7 +185,7 @@ static void nft_objref_map_activate(const struct nft_ctx *ctx,
- {
- 	struct nft_objref_map *priv = nft_expr_priv(expr);
- 
--	priv->set->use++;
-+	nf_tables_activate_set(ctx, priv->set);
- }
- 
- static void nft_objref_map_destroy(const struct nft_ctx *ctx,
--- 
-cgit 
-


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [gentoo-commits] proj/linux-patches:6.3 commit in: /
@ 2023-05-17 13:16 Mike Pagano
  0 siblings, 0 replies; 23+ messages in thread
From: Mike Pagano @ 2023-05-17 13:16 UTC (permalink / raw
  To: gentoo-commits

commit:     86e2eaa41820c9111c8ab06f9e1c7e3756e79126
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed May 17 13:16:34 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed May 17 13:16:34 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=86e2eaa4

Linux patch 6.3.3

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README            |     4 +
 1002_linux-6.3.3.patch | 13003 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 13007 insertions(+)

diff --git a/0000_README b/0000_README
index e13e9146..20b6f8fe 100644
--- a/0000_README
+++ b/0000_README
@@ -51,6 +51,10 @@ Patch:  1001_linux-6.3.2.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.3.2
 
+Patch:  1002_linux-6.3.3.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.3.3
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1002_linux-6.3.3.patch b/1002_linux-6.3.3.patch
new file mode 100644
index 00000000..92fc3526
--- /dev/null
+++ b/1002_linux-6.3.3.patch
@@ -0,0 +1,13003 @@
+diff --git a/Documentation/devicetree/bindings/perf/riscv,pmu.yaml b/Documentation/devicetree/bindings/perf/riscv,pmu.yaml
+index a55a4d047d3fd..c8448de2f2a07 100644
+--- a/Documentation/devicetree/bindings/perf/riscv,pmu.yaml
++++ b/Documentation/devicetree/bindings/perf/riscv,pmu.yaml
+@@ -91,7 +91,6 @@ properties:
+ 
+ dependencies:
+   "riscv,event-to-mhpmevent": [ "riscv,event-to-mhpmcounters" ]
+-  "riscv,event-to-mhpmcounters": [ "riscv,event-to-mhpmevent" ]
+ 
+ required:
+   - compatible
+diff --git a/Makefile b/Makefile
+index 80cdc03e25aa3..a3108cf700a07 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 3
+-SUBLEVEL = 2
++SUBLEVEL = 3
+ EXTRAVERSION =
+ NAME = Hurr durr I'ma ninja sloth
+ 
+diff --git a/arch/arm/boot/dts/aspeed-bmc-asrock-e3c246d4i.dts b/arch/arm/boot/dts/aspeed-bmc-asrock-e3c246d4i.dts
+index 9b4cf5ebe6d5f..c62aff908ab48 100644
+--- a/arch/arm/boot/dts/aspeed-bmc-asrock-e3c246d4i.dts
++++ b/arch/arm/boot/dts/aspeed-bmc-asrock-e3c246d4i.dts
+@@ -63,7 +63,7 @@
+ 		status = "okay";
+ 		m25p,fast-read;
+ 		label = "bmc";
+-		spi-max-frequency = <100000000>; /* 100 MHz */
++		spi-max-frequency = <50000000>; /* 50 MHz */
+ #include "openbmc-flash-layout.dtsi"
+ 	};
+ };
+diff --git a/arch/arm/boot/dts/aspeed-bmc-asrock-romed8hm3.dts b/arch/arm/boot/dts/aspeed-bmc-asrock-romed8hm3.dts
+index ff4c07c69af1c..4554abf0c7cdf 100644
+--- a/arch/arm/boot/dts/aspeed-bmc-asrock-romed8hm3.dts
++++ b/arch/arm/boot/dts/aspeed-bmc-asrock-romed8hm3.dts
+@@ -31,7 +31,7 @@
+ 		};
+ 
+ 		system-fault {
+-			gpios = <&gpio ASPEED_GPIO(Z, 2) GPIO_ACTIVE_LOW>;
++			gpios = <&gpio ASPEED_GPIO(Z, 2) GPIO_ACTIVE_HIGH>;
+ 			panic-indicator;
+ 		};
+ 	};
+@@ -51,7 +51,7 @@
+ 		status = "okay";
+ 		m25p,fast-read;
+ 		label = "bmc";
+-		spi-max-frequency = <100000000>; /* 100 MHz */
++		spi-max-frequency = <50000000>; /* 50 MHz */
+ #include "openbmc-flash-layout-64.dtsi"
+ 	};
+ };
+diff --git a/arch/arm/boot/dts/exynos4412-itop-elite.dts b/arch/arm/boot/dts/exynos4412-itop-elite.dts
+index b596e997e451a..6260da187e92c 100644
+--- a/arch/arm/boot/dts/exynos4412-itop-elite.dts
++++ b/arch/arm/boot/dts/exynos4412-itop-elite.dts
+@@ -182,7 +182,7 @@
+ 		compatible = "wlf,wm8960";
+ 		reg = <0x1a>;
+ 		clocks = <&pmu_system_controller 0>;
+-		clock-names = "MCLK1";
++		clock-names = "mclk";
+ 		wlf,shared-lrclk;
+ 		#sound-dai-cells = <0>;
+ 	};
+diff --git a/arch/arm/boot/dts/s5pv210.dtsi b/arch/arm/boot/dts/s5pv210.dtsi
+index 12e90a1cc6a14..1a9e4a96b2ff7 100644
+--- a/arch/arm/boot/dts/s5pv210.dtsi
++++ b/arch/arm/boot/dts/s5pv210.dtsi
+@@ -566,7 +566,7 @@
+ 				interrupts = <29>;
+ 				clocks = <&clocks CLK_CSIS>,
+ 						<&clocks SCLK_CSIS>;
+-				clock-names = "clk_csis",
++				clock-names = "csis",
+ 						"sclk_csis";
+ 				bus-width = <4>;
+ 				status = "disabled";
+diff --git a/arch/arm64/kernel/cpu-reset.S b/arch/arm64/kernel/cpu-reset.S
+index 6b752fe897451..c87445dde6745 100644
+--- a/arch/arm64/kernel/cpu-reset.S
++++ b/arch/arm64/kernel/cpu-reset.S
+@@ -14,7 +14,7 @@
+ #include <asm/virt.h>
+ 
+ .text
+-.pushsection    .idmap.text, "awx"
++.pushsection    .idmap.text, "a"
+ 
+ /*
+  * cpu_soft_restart(el2_switch, entry, arg0, arg1, arg2)
+diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
+index b98970907226b..e92caebff46a0 100644
+--- a/arch/arm64/kernel/head.S
++++ b/arch/arm64/kernel/head.S
+@@ -150,8 +150,8 @@ CPU_BE( tbz	x19, #SCTLR_ELx_EE_SHIFT, 1f	)
+ 	pre_disable_mmu_workaround
+ 	msr	sctlr_el2, x19
+ 	b	3f
+-	pre_disable_mmu_workaround
+-2:	msr	sctlr_el1, x19
++2:	pre_disable_mmu_workaround
++	msr	sctlr_el1, x19
+ 3:	isb
+ 	mov	x19, xzr
+ 	ret
+diff --git a/arch/arm64/kernel/sleep.S b/arch/arm64/kernel/sleep.S
+index 2ae7cff1953aa..2aa5129d82537 100644
+--- a/arch/arm64/kernel/sleep.S
++++ b/arch/arm64/kernel/sleep.S
+@@ -97,7 +97,7 @@ SYM_FUNC_START(__cpu_suspend_enter)
+ 	ret
+ SYM_FUNC_END(__cpu_suspend_enter)
+ 
+-	.pushsection ".idmap.text", "awx"
++	.pushsection ".idmap.text", "a"
+ SYM_CODE_START(cpu_resume)
+ 	mov	x0, xzr
+ 	bl	init_kernel_el
+diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
+index 91410f4880900..c2cb437821ca4 100644
+--- a/arch/arm64/mm/proc.S
++++ b/arch/arm64/mm/proc.S
+@@ -167,7 +167,7 @@ alternative_else_nop_endif
+ SYM_FUNC_END(cpu_do_resume)
+ #endif
+ 
+-	.pushsection ".idmap.text", "awx"
++	.pushsection ".idmap.text", "a"
+ 
+ .macro	__idmap_cpu_set_reserved_ttbr1, tmp1, tmp2
+ 	adrp	\tmp1, reserved_pg_dir
+@@ -201,7 +201,7 @@ SYM_FUNC_END(idmap_cpu_replace_ttbr1)
+ 
+ #define KPTI_NG_PTE_FLAGS	(PTE_ATTRINDX(MT_NORMAL) | SWAPPER_PTE_FLAGS)
+ 
+-	.pushsection ".idmap.text", "awx"
++	.pushsection ".idmap.text", "a"
+ 
+ 	.macro	kpti_mk_tbl_ng, type, num_entries
+ 	add	end_\type\()p, cur_\type\()p, #\num_entries * 8
+@@ -400,7 +400,7 @@ SYM_FUNC_END(idmap_kpti_install_ng_mappings)
+  * Output:
+  *	Return in x0 the value of the SCTLR_EL1 register.
+  */
+-	.pushsection ".idmap.text", "awx"
++	.pushsection ".idmap.text", "a"
+ SYM_FUNC_START(__cpu_setup)
+ 	tlbi	vmalle1				// Invalidate local TLB
+ 	dsb	nsh
+diff --git a/arch/parisc/include/asm/pgtable.h b/arch/parisc/include/asm/pgtable.h
+index e2950f5db7c9c..e715df5385d6e 100644
+--- a/arch/parisc/include/asm/pgtable.h
++++ b/arch/parisc/include/asm/pgtable.h
+@@ -413,12 +413,12 @@ extern void paging_init (void);
+  *   For the 64bit version, the offset is extended by 32bit.
+  */
+ #define __swp_type(x)                     ((x).val & 0x1f)
+-#define __swp_offset(x)                   ( (((x).val >> 6) &  0x7) | \
+-					  (((x).val >> 8) & ~0x7) )
++#define __swp_offset(x)                   ( (((x).val >> 5) & 0x7) | \
++					  (((x).val >> 10) << 3) )
+ #define __swp_entry(type, offset)         ((swp_entry_t) { \
+ 					    ((type) & 0x1f) | \
+-					    ((offset &  0x7) << 6) | \
+-					    ((offset & ~0x7) << 8) })
++					    ((offset & 0x7) << 5) | \
++					    ((offset >> 3) << 10) })
+ #define __pte_to_swp_entry(pte)		((swp_entry_t) { pte_val(pte) })
+ #define __swp_entry_to_pte(x)		((pte_t) { (x).val })
+ 
+diff --git a/arch/riscv/kernel/Makefile b/arch/riscv/kernel/Makefile
+index 4cf303a779ab9..8d02b9d05738d 100644
+--- a/arch/riscv/kernel/Makefile
++++ b/arch/riscv/kernel/Makefile
+@@ -9,6 +9,7 @@ CFLAGS_REMOVE_patch.o	= $(CC_FLAGS_FTRACE)
+ CFLAGS_REMOVE_sbi.o	= $(CC_FLAGS_FTRACE)
+ endif
+ CFLAGS_syscall_table.o	+= $(call cc-option,-Wno-override-init,)
++CFLAGS_compat_syscall_table.o += $(call cc-option,-Wno-override-init,)
+ 
+ ifdef CONFIG_KEXEC
+ AFLAGS_kexec_relocate.o := -mcmodel=medany $(call cc-option,-mno-relax)
+diff --git a/arch/riscv/mm/pageattr.c b/arch/riscv/mm/pageattr.c
+index 86c56616e5dea..ea3d61de065b3 100644
+--- a/arch/riscv/mm/pageattr.c
++++ b/arch/riscv/mm/pageattr.c
+@@ -217,18 +217,26 @@ bool kernel_page_present(struct page *page)
+ 	pgd = pgd_offset_k(addr);
+ 	if (!pgd_present(*pgd))
+ 		return false;
++	if (pgd_leaf(*pgd))
++		return true;
+ 
+ 	p4d = p4d_offset(pgd, addr);
+ 	if (!p4d_present(*p4d))
+ 		return false;
++	if (p4d_leaf(*p4d))
++		return true;
+ 
+ 	pud = pud_offset(p4d, addr);
+ 	if (!pud_present(*pud))
+ 		return false;
++	if (pud_leaf(*pud))
++		return true;
+ 
+ 	pmd = pmd_offset(pud, addr);
+ 	if (!pmd_present(*pmd))
+ 		return false;
++	if (pmd_leaf(*pmd))
++		return true;
+ 
+ 	pte = pte_offset_kernel(pmd, addr);
+ 	return pte_present(*pte);
+diff --git a/arch/s390/boot/vmem.c b/arch/s390/boot/vmem.c
+index 4d1d0d8e99cb2..a354d8bc1f0f0 100644
+--- a/arch/s390/boot/vmem.c
++++ b/arch/s390/boot/vmem.c
+@@ -10,6 +10,10 @@
+ #include "decompressor.h"
+ #include "boot.h"
+ 
++#ifdef CONFIG_PROC_FS
++atomic_long_t __bootdata_preserved(direct_pages_count[PG_DIRECT_MAP_MAX]);
++#endif
++
+ #define init_mm			(*(struct mm_struct *)vmlinux.init_mm_off)
+ #define swapper_pg_dir		vmlinux.swapper_pg_dir_off
+ #define invalid_pg_dir		vmlinux.invalid_pg_dir_off
+@@ -29,7 +33,7 @@ unsigned long __bootdata(pgalloc_low);
+ 
+ enum populate_mode {
+ 	POPULATE_NONE,
+-	POPULATE_ONE2ONE,
++	POPULATE_DIRECT,
+ 	POPULATE_ABS_LOWCORE,
+ };
+ 
+@@ -102,7 +106,7 @@ static unsigned long _pa(unsigned long addr, enum populate_mode mode)
+ 	switch (mode) {
+ 	case POPULATE_NONE:
+ 		return -1;
+-	case POPULATE_ONE2ONE:
++	case POPULATE_DIRECT:
+ 		return addr;
+ 	case POPULATE_ABS_LOWCORE:
+ 		return __abs_lowcore_pa(addr);
+@@ -126,7 +130,7 @@ static bool can_large_pmd(pmd_t *pm_dir, unsigned long addr, unsigned long end)
+ static void pgtable_pte_populate(pmd_t *pmd, unsigned long addr, unsigned long end,
+ 				 enum populate_mode mode)
+ {
+-	unsigned long next;
++	unsigned long pages = 0;
+ 	pte_t *pte, entry;
+ 
+ 	pte = pte_offset_kernel(pmd, addr);
+@@ -135,14 +139,17 @@ static void pgtable_pte_populate(pmd_t *pmd, unsigned long addr, unsigned long e
+ 			entry = __pte(_pa(addr, mode));
+ 			entry = set_pte_bit(entry, PAGE_KERNEL_EXEC);
+ 			set_pte(pte, entry);
++			pages++;
+ 		}
+ 	}
++	if (mode == POPULATE_DIRECT)
++		update_page_count(PG_DIRECT_MAP_4K, pages);
+ }
+ 
+ static void pgtable_pmd_populate(pud_t *pud, unsigned long addr, unsigned long end,
+ 				 enum populate_mode mode)
+ {
+-	unsigned long next;
++	unsigned long next, pages = 0;
+ 	pmd_t *pmd, entry;
+ 	pte_t *pte;
+ 
+@@ -154,6 +161,7 @@ static void pgtable_pmd_populate(pud_t *pud, unsigned long addr, unsigned long e
+ 				entry = __pmd(_pa(addr, mode));
+ 				entry = set_pmd_bit(entry, SEGMENT_KERNEL_EXEC);
+ 				set_pmd(pmd, entry);
++				pages++;
+ 				continue;
+ 			}
+ 			pte = boot_pte_alloc();
+@@ -163,12 +171,14 @@ static void pgtable_pmd_populate(pud_t *pud, unsigned long addr, unsigned long e
+ 		}
+ 		pgtable_pte_populate(pmd, addr, next, mode);
+ 	}
++	if (mode == POPULATE_DIRECT)
++		update_page_count(PG_DIRECT_MAP_1M, pages);
+ }
+ 
+ static void pgtable_pud_populate(p4d_t *p4d, unsigned long addr, unsigned long end,
+ 				 enum populate_mode mode)
+ {
+-	unsigned long next;
++	unsigned long next, pages = 0;
+ 	pud_t *pud, entry;
+ 	pmd_t *pmd;
+ 
+@@ -180,6 +190,7 @@ static void pgtable_pud_populate(p4d_t *p4d, unsigned long addr, unsigned long e
+ 				entry = __pud(_pa(addr, mode));
+ 				entry = set_pud_bit(entry, REGION3_KERNEL_EXEC);
+ 				set_pud(pud, entry);
++				pages++;
+ 				continue;
+ 			}
+ 			pmd = boot_crst_alloc(_SEGMENT_ENTRY_EMPTY);
+@@ -189,6 +200,8 @@ static void pgtable_pud_populate(p4d_t *p4d, unsigned long addr, unsigned long e
+ 		}
+ 		pgtable_pmd_populate(pud, addr, next, mode);
+ 	}
++	if (mode == POPULATE_DIRECT)
++		update_page_count(PG_DIRECT_MAP_2G, pages);
+ }
+ 
+ static void pgtable_p4d_populate(pgd_t *pgd, unsigned long addr, unsigned long end,
+@@ -251,9 +264,9 @@ void setup_vmem(unsigned long asce_limit)
+ 	 * the lowcore and create the identity mapping only afterwards.
+ 	 */
+ 	pgtable_populate_init();
+-	pgtable_populate(0, sizeof(struct lowcore), POPULATE_ONE2ONE);
++	pgtable_populate(0, sizeof(struct lowcore), POPULATE_DIRECT);
+ 	for_each_mem_detect_usable_block(i, &start, &end)
+-		pgtable_populate(start, end, POPULATE_ONE2ONE);
++		pgtable_populate(start, end, POPULATE_DIRECT);
+ 	pgtable_populate(__abs_lowcore, __abs_lowcore + sizeof(struct lowcore),
+ 			 POPULATE_ABS_LOWCORE);
+ 	pgtable_populate(__memcpy_real_area, __memcpy_real_area + PAGE_SIZE,
+diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
+index 2c70b4d1263d2..acbe1ac2d5716 100644
+--- a/arch/s390/include/asm/pgtable.h
++++ b/arch/s390/include/asm/pgtable.h
+@@ -34,7 +34,7 @@ enum {
+ 	PG_DIRECT_MAP_MAX
+ };
+ 
+-extern atomic_long_t direct_pages_count[PG_DIRECT_MAP_MAX];
++extern atomic_long_t __bootdata_preserved(direct_pages_count[PG_DIRECT_MAP_MAX]);
+ 
+ static inline void update_page_count(int level, long count)
+ {
+diff --git a/arch/s390/kernel/uv.c b/arch/s390/kernel/uv.c
+index 9f18a4af9c131..cb2ee06df286c 100644
+--- a/arch/s390/kernel/uv.c
++++ b/arch/s390/kernel/uv.c
+@@ -192,21 +192,10 @@ static int expected_page_refs(struct page *page)
+ 	return res;
+ }
+ 
+-static int make_secure_pte(pte_t *ptep, unsigned long addr,
+-			   struct page *exp_page, struct uv_cb_header *uvcb)
++static int make_page_secure(struct page *page, struct uv_cb_header *uvcb)
+ {
+-	pte_t entry = READ_ONCE(*ptep);
+-	struct page *page;
+ 	int expected, cc = 0;
+ 
+-	if (!pte_present(entry))
+-		return -ENXIO;
+-	if (pte_val(entry) & _PAGE_INVALID)
+-		return -ENXIO;
+-
+-	page = pte_page(entry);
+-	if (page != exp_page)
+-		return -ENXIO;
+ 	if (PageWriteback(page))
+ 		return -EAGAIN;
+ 	expected = expected_page_refs(page);
+@@ -304,17 +293,18 @@ again:
+ 		goto out;
+ 
+ 	rc = -ENXIO;
+-	page = follow_page(vma, uaddr, FOLL_WRITE);
+-	if (IS_ERR_OR_NULL(page))
+-		goto out;
+-
+-	lock_page(page);
+ 	ptep = get_locked_pte(gmap->mm, uaddr, &ptelock);
+-	if (should_export_before_import(uvcb, gmap->mm))
+-		uv_convert_from_secure(page_to_phys(page));
+-	rc = make_secure_pte(ptep, uaddr, page, uvcb);
++	if (pte_present(*ptep) && !(pte_val(*ptep) & _PAGE_INVALID) && pte_write(*ptep)) {
++		page = pte_page(*ptep);
++		rc = -EAGAIN;
++		if (trylock_page(page)) {
++			if (should_export_before_import(uvcb, gmap->mm))
++				uv_convert_from_secure(page_to_phys(page));
++			rc = make_page_secure(page, uvcb);
++			unlock_page(page);
++		}
++	}
+ 	pte_unmap_unlock(ptep, ptelock);
+-	unlock_page(page);
+ out:
+ 	mmap_read_unlock(gmap->mm);
+ 
+diff --git a/arch/s390/kvm/pv.c b/arch/s390/kvm/pv.c
+index e032ebbf51b97..3ce5f4351156a 100644
+--- a/arch/s390/kvm/pv.c
++++ b/arch/s390/kvm/pv.c
+@@ -314,6 +314,11 @@ int kvm_s390_pv_set_aside(struct kvm *kvm, u16 *rc, u16 *rrc)
+ 	 */
+ 	if (kvm->arch.pv.set_aside)
+ 		return -EINVAL;
++
++	/* Guest with segment type ASCE, refuse to destroy asynchronously */
++	if ((kvm->arch.gmap->asce & _ASCE_TYPE_MASK) == _ASCE_TYPE_SEGMENT)
++		return -EINVAL;
++
+ 	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
+ 	if (!priv)
+ 		return -ENOMEM;
+diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
+index 5a716bdcba05b..2267cf9819b2f 100644
+--- a/arch/s390/mm/gmap.c
++++ b/arch/s390/mm/gmap.c
+@@ -2833,6 +2833,9 @@ EXPORT_SYMBOL_GPL(s390_unlist_old_asce);
+  * s390_replace_asce - Try to replace the current ASCE of a gmap with a copy
+  * @gmap: the gmap whose ASCE needs to be replaced
+  *
++ * If the ASCE is a SEGMENT type then this function will return -EINVAL,
++ * otherwise the pointers in the host_to_guest radix tree will keep pointing
++ * to the wrong pages, causing use-after-free and memory corruption.
+  * If the allocation of the new top level page table fails, the ASCE is not
+  * replaced.
+  * In any case, the old ASCE is always removed from the gmap CRST list.
+@@ -2847,6 +2850,10 @@ int s390_replace_asce(struct gmap *gmap)
+ 
+ 	s390_unlist_old_asce(gmap);
+ 
++	/* Replacing segment type ASCEs would cause serious issues */
++	if ((gmap->asce & _ASCE_TYPE_MASK) == _ASCE_TYPE_SEGMENT)
++		return -EINVAL;
++
+ 	page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
+ 	if (!page)
+ 		return -ENOMEM;
+diff --git a/arch/s390/mm/pageattr.c b/arch/s390/mm/pageattr.c
+index 85195c18b2e82..7be699b4974ad 100644
+--- a/arch/s390/mm/pageattr.c
++++ b/arch/s390/mm/pageattr.c
+@@ -41,7 +41,7 @@ void __storage_key_init_range(unsigned long start, unsigned long end)
+ }
+ 
+ #ifdef CONFIG_PROC_FS
+-atomic_long_t direct_pages_count[PG_DIRECT_MAP_MAX];
++atomic_long_t __bootdata_preserved(direct_pages_count[PG_DIRECT_MAP_MAX]);
+ 
+ void arch_report_meminfo(struct seq_file *m)
+ {
+diff --git a/arch/sh/Kconfig.debug b/arch/sh/Kconfig.debug
+index 10290e5c1f438..c449e7c1b20ff 100644
+--- a/arch/sh/Kconfig.debug
++++ b/arch/sh/Kconfig.debug
+@@ -15,7 +15,7 @@ config SH_STANDARD_BIOS
+ 
+ config STACK_DEBUG
+ 	bool "Check for stack overflows"
+-	depends on DEBUG_KERNEL
++	depends on DEBUG_KERNEL && PRINTK
+ 	help
+ 	  This option will cause messages to be printed if free stack space
+ 	  drops below a certain limit. Saying Y here will add overhead to
+diff --git a/arch/sh/kernel/head_32.S b/arch/sh/kernel/head_32.S
+index 4adbd4ade3194..b603b7968b388 100644
+--- a/arch/sh/kernel/head_32.S
++++ b/arch/sh/kernel/head_32.S
+@@ -64,7 +64,7 @@ ENTRY(_stext)
+ 	ldc	r0, r6_bank
+ #endif
+ 
+-#ifdef CONFIG_OF_FLATTREE
++#ifdef CONFIG_OF_EARLY_FLATTREE
+ 	mov	r4, r12		! Store device tree blob pointer in r12
+ #endif
+ 	
+@@ -315,7 +315,7 @@ ENTRY(_stext)
+ 10:		
+ #endif
+ 
+-#ifdef CONFIG_OF_FLATTREE
++#ifdef CONFIG_OF_EARLY_FLATTREE
+ 	mov.l	8f, r0		! Make flat device tree available early.
+ 	jsr	@r0
+ 	 mov	r12, r4
+@@ -346,7 +346,7 @@ ENTRY(stack_start)
+ 5:	.long	start_kernel
+ 6:	.long	cpu_init
+ 7:	.long	init_thread_union
+-#if defined(CONFIG_OF_FLATTREE)
++#if defined(CONFIG_OF_EARLY_FLATTREE)
+ 8:	.long	sh_fdt_init
+ #endif
+ 
+diff --git a/arch/sh/kernel/nmi_debug.c b/arch/sh/kernel/nmi_debug.c
+index 11777867c6f5f..a212b645b4cf8 100644
+--- a/arch/sh/kernel/nmi_debug.c
++++ b/arch/sh/kernel/nmi_debug.c
+@@ -49,7 +49,7 @@ static int __init nmi_debug_setup(char *str)
+ 	register_die_notifier(&nmi_debug_nb);
+ 
+ 	if (*str != '=')
+-		return 0;
++		return 1;
+ 
+ 	for (p = str + 1; *p; p = sep + 1) {
+ 		sep = strchr(p, ',');
+@@ -70,6 +70,6 @@ static int __init nmi_debug_setup(char *str)
+ 			break;
+ 	}
+ 
+-	return 0;
++	return 1;
+ }
+ __setup("nmi_debug", nmi_debug_setup);
+diff --git a/arch/sh/kernel/setup.c b/arch/sh/kernel/setup.c
+index 1fcb6659822a3..af977ec4ca5e5 100644
+--- a/arch/sh/kernel/setup.c
++++ b/arch/sh/kernel/setup.c
+@@ -244,7 +244,7 @@ void __init __weak plat_early_device_setup(void)
+ {
+ }
+ 
+-#ifdef CONFIG_OF_FLATTREE
++#ifdef CONFIG_OF_EARLY_FLATTREE
+ void __ref sh_fdt_init(phys_addr_t dt_phys)
+ {
+ 	static int done = 0;
+@@ -326,7 +326,7 @@ void __init setup_arch(char **cmdline_p)
+ 	/* Let earlyprintk output early console messages */
+ 	sh_early_platform_driver_probe("earlyprintk", 1, 1);
+ 
+-#ifdef CONFIG_OF_FLATTREE
++#ifdef CONFIG_OF_EARLY_FLATTREE
+ #ifdef CONFIG_USE_BUILTIN_DTB
+ 	unflatten_and_copy_device_tree();
+ #else
+diff --git a/arch/sh/math-emu/sfp-util.h b/arch/sh/math-emu/sfp-util.h
+index 784f541344f36..bda50762b3d33 100644
+--- a/arch/sh/math-emu/sfp-util.h
++++ b/arch/sh/math-emu/sfp-util.h
+@@ -67,7 +67,3 @@
+   } while (0)
+ 
+ #define abort()	return 0
+-
+-#define __BYTE_ORDER __LITTLE_ENDIAN
+-
+-
+diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
+index d096b04bf80e8..9d248703cbddc 100644
+--- a/arch/x86/events/core.c
++++ b/arch/x86/events/core.c
+@@ -1703,10 +1703,8 @@ int x86_pmu_handle_irq(struct pt_regs *regs)
+ 
+ 		perf_sample_data_init(&data, 0, event->hw.last_period);
+ 
+-		if (has_branch_stack(event)) {
+-			data.br_stack = &cpuc->lbr_stack;
+-			data.sample_flags |= PERF_SAMPLE_BRANCH_STACK;
+-		}
++		if (has_branch_stack(event))
++			perf_sample_save_brstack(&data, event, &cpuc->lbr_stack);
+ 
+ 		if (perf_event_overflow(event, &data, regs))
+ 			x86_pmu_stop(event, 0);
+diff --git a/arch/x86/kernel/amd_nb.c b/arch/x86/kernel/amd_nb.c
+index 4266b64631a46..7e331e8f36929 100644
+--- a/arch/x86/kernel/amd_nb.c
++++ b/arch/x86/kernel/amd_nb.c
+@@ -36,6 +36,7 @@
+ #define PCI_DEVICE_ID_AMD_19H_M50H_DF_F4 0x166e
+ #define PCI_DEVICE_ID_AMD_19H_M60H_DF_F4 0x14e4
+ #define PCI_DEVICE_ID_AMD_19H_M70H_DF_F4 0x14f4
++#define PCI_DEVICE_ID_AMD_19H_M78H_DF_F4 0x12fc
+ 
+ /* Protect the PCI config register pairs used for SMN. */
+ static DEFINE_MUTEX(smn_mutex);
+@@ -79,6 +80,7 @@ static const struct pci_device_id amd_nb_misc_ids[] = {
+ 	{ PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_19H_M50H_DF_F3) },
+ 	{ PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_19H_M60H_DF_F3) },
+ 	{ PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_19H_M70H_DF_F3) },
++	{ PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_19H_M78H_DF_F3) },
+ 	{}
+ };
+ 
+diff --git a/arch/x86/kvm/kvm_cache_regs.h b/arch/x86/kvm/kvm_cache_regs.h
+index 4c91f626c0580..e50d353b5c1c4 100644
+--- a/arch/x86/kvm/kvm_cache_regs.h
++++ b/arch/x86/kvm/kvm_cache_regs.h
+@@ -4,7 +4,7 @@
+ 
+ #include <linux/kvm_host.h>
+ 
+-#define KVM_POSSIBLE_CR0_GUEST_BITS X86_CR0_TS
++#define KVM_POSSIBLE_CR0_GUEST_BITS	(X86_CR0_TS | X86_CR0_WP)
+ #define KVM_POSSIBLE_CR4_GUEST_BITS				  \
+ 	(X86_CR4_PVI | X86_CR4_DE | X86_CR4_PCE | X86_CR4_OSFXSR  \
+ 	 | X86_CR4_OSXMMEXCPT | X86_CR4_PGE | X86_CR4_TSD | X86_CR4_FSGSBASE)
+diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
+index 168c46fd8dd18..0f38b78ab04b7 100644
+--- a/arch/x86/kvm/mmu.h
++++ b/arch/x86/kvm/mmu.h
+@@ -113,6 +113,8 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,
+ bool kvm_can_do_async_pf(struct kvm_vcpu *vcpu);
+ int kvm_handle_page_fault(struct kvm_vcpu *vcpu, u64 error_code,
+ 				u64 fault_address, char *insn, int insn_len);
++void __kvm_mmu_refresh_passthrough_bits(struct kvm_vcpu *vcpu,
++					struct kvm_mmu *mmu);
+ 
+ int kvm_mmu_load(struct kvm_vcpu *vcpu);
+ void kvm_mmu_unload(struct kvm_vcpu *vcpu);
+@@ -153,6 +155,24 @@ static inline void kvm_mmu_load_pgd(struct kvm_vcpu *vcpu)
+ 					  vcpu->arch.mmu->root_role.level);
+ }
+ 
++static inline void kvm_mmu_refresh_passthrough_bits(struct kvm_vcpu *vcpu,
++						    struct kvm_mmu *mmu)
++{
++	/*
++	 * When EPT is enabled, KVM may passthrough CR0.WP to the guest, i.e.
++	 * @mmu's snapshot of CR0.WP and thus all related paging metadata may
++	 * be stale.  Refresh CR0.WP and the metadata on-demand when checking
++	 * for permission faults.  Exempt nested MMUs, i.e. MMUs for shadowing
++	 * nEPT and nNPT, as CR0.WP is ignored in both cases.  Note, KVM does
++	 * need to refresh nested_mmu, a.k.a. the walker used to translate L2
++	 * GVAs to GPAs, as that "MMU" needs to honor L2's CR0.WP.
++	 */
++	if (!tdp_enabled || mmu == &vcpu->arch.guest_mmu)
++		return;
++
++	__kvm_mmu_refresh_passthrough_bits(vcpu, mmu);
++}
++
+ /*
+  * Check if a given access (described through the I/D, W/R and U/S bits of a
+  * page fault error code pfec) causes a permission fault with the given PTE
+@@ -184,8 +204,12 @@ static inline u8 permission_fault(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
+ 	u64 implicit_access = access & PFERR_IMPLICIT_ACCESS;
+ 	bool not_smap = ((rflags & X86_EFLAGS_AC) | implicit_access) == X86_EFLAGS_AC;
+ 	int index = (pfec + (not_smap << PFERR_RSVD_BIT)) >> 1;
+-	bool fault = (mmu->permissions[index] >> pte_access) & 1;
+ 	u32 errcode = PFERR_PRESENT_MASK;
++	bool fault;
++
++	kvm_mmu_refresh_passthrough_bits(vcpu, mmu);
++
++	fault = (mmu->permissions[index] >> pte_access) & 1;
+ 
+ 	WARN_ON(pfec & (PFERR_PK_MASK | PFERR_RSVD_MASK));
+ 	if (unlikely(mmu->pkru_mask)) {
+diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
+index c8ebe542c565f..d3812de54b02c 100644
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -242,6 +242,20 @@ static struct kvm_mmu_role_regs vcpu_to_role_regs(struct kvm_vcpu *vcpu)
+ 	return regs;
+ }
+ 
++static unsigned long get_guest_cr3(struct kvm_vcpu *vcpu)
++{
++	return kvm_read_cr3(vcpu);
++}
++
++static inline unsigned long kvm_mmu_get_guest_pgd(struct kvm_vcpu *vcpu,
++						  struct kvm_mmu *mmu)
++{
++	if (IS_ENABLED(CONFIG_RETPOLINE) && mmu->get_guest_pgd == get_guest_cr3)
++		return kvm_read_cr3(vcpu);
++
++	return mmu->get_guest_pgd(vcpu);
++}
++
+ static inline bool kvm_available_flush_tlb_with_range(void)
+ {
+ 	return kvm_x86_ops.tlb_remote_flush_with_range;
+@@ -3731,7 +3745,7 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
+ 	int quadrant, i, r;
+ 	hpa_t root;
+ 
+-	root_pgd = mmu->get_guest_pgd(vcpu);
++	root_pgd = kvm_mmu_get_guest_pgd(vcpu, mmu);
+ 	root_gfn = root_pgd >> PAGE_SHIFT;
+ 
+ 	if (mmu_check_root(vcpu, root_gfn))
+@@ -4181,7 +4195,7 @@ static bool kvm_arch_setup_async_pf(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
+ 	arch.token = alloc_apf_token(vcpu);
+ 	arch.gfn = gfn;
+ 	arch.direct_map = vcpu->arch.mmu->root_role.direct;
+-	arch.cr3 = vcpu->arch.mmu->get_guest_pgd(vcpu);
++	arch.cr3 = kvm_mmu_get_guest_pgd(vcpu, vcpu->arch.mmu);
+ 
+ 	return kvm_setup_async_pf(vcpu, cr2_or_gpa,
+ 				  kvm_vcpu_gfn_to_hva(vcpu, gfn), &arch);
+@@ -4200,7 +4214,7 @@ void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu, struct kvm_async_pf *work)
+ 		return;
+ 
+ 	if (!vcpu->arch.mmu->root_role.direct &&
+-	      work->arch.cr3 != vcpu->arch.mmu->get_guest_pgd(vcpu))
++	      work->arch.cr3 != kvm_mmu_get_guest_pgd(vcpu, vcpu->arch.mmu))
+ 		return;
+ 
+ 	kvm_mmu_do_page_fault(vcpu, work->cr2_or_gpa, 0, true);
+@@ -4604,11 +4618,6 @@ void kvm_mmu_new_pgd(struct kvm_vcpu *vcpu, gpa_t new_pgd)
+ }
+ EXPORT_SYMBOL_GPL(kvm_mmu_new_pgd);
+ 
+-static unsigned long get_cr3(struct kvm_vcpu *vcpu)
+-{
+-	return kvm_read_cr3(vcpu);
+-}
+-
+ static bool sync_mmio_spte(struct kvm_vcpu *vcpu, u64 *sptep, gfn_t gfn,
+ 			   unsigned int access)
+ {
+@@ -5112,6 +5121,21 @@ kvm_calc_cpu_role(struct kvm_vcpu *vcpu, const struct kvm_mmu_role_regs *regs)
+ 	return role;
+ }
+ 
++void __kvm_mmu_refresh_passthrough_bits(struct kvm_vcpu *vcpu,
++					struct kvm_mmu *mmu)
++{
++	const bool cr0_wp = !!kvm_read_cr0_bits(vcpu, X86_CR0_WP);
++
++	BUILD_BUG_ON((KVM_MMU_CR0_ROLE_BITS & KVM_POSSIBLE_CR0_GUEST_BITS) != X86_CR0_WP);
++	BUILD_BUG_ON((KVM_MMU_CR4_ROLE_BITS & KVM_POSSIBLE_CR4_GUEST_BITS));
++
++	if (is_cr0_wp(mmu) == cr0_wp)
++		return;
++
++	mmu->cpu_role.base.cr0_wp = cr0_wp;
++	reset_guest_paging_metadata(vcpu, mmu);
++}
++
+ static inline int kvm_mmu_get_tdp_level(struct kvm_vcpu *vcpu)
+ {
+ 	/* tdp_root_level is architecture forced level, use it if nonzero */
+@@ -5159,7 +5183,7 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu,
+ 	context->page_fault = kvm_tdp_page_fault;
+ 	context->sync_page = nonpaging_sync_page;
+ 	context->invlpg = NULL;
+-	context->get_guest_pgd = get_cr3;
++	context->get_guest_pgd = get_guest_cr3;
+ 	context->get_pdptr = kvm_pdptr_read;
+ 	context->inject_page_fault = kvm_inject_page_fault;
+ 
+@@ -5309,7 +5333,7 @@ static void init_kvm_softmmu(struct kvm_vcpu *vcpu,
+ 
+ 	kvm_init_shadow_mmu(vcpu, cpu_role);
+ 
+-	context->get_guest_pgd     = get_cr3;
++	context->get_guest_pgd     = get_guest_cr3;
+ 	context->get_pdptr         = kvm_pdptr_read;
+ 	context->inject_page_fault = kvm_inject_page_fault;
+ }
+@@ -5323,7 +5347,7 @@ static void init_kvm_nested_mmu(struct kvm_vcpu *vcpu,
+ 		return;
+ 
+ 	g_context->cpu_role.as_u64   = new_mode.as_u64;
+-	g_context->get_guest_pgd     = get_cr3;
++	g_context->get_guest_pgd     = get_guest_cr3;
+ 	g_context->get_pdptr         = kvm_pdptr_read;
+ 	g_context->inject_page_fault = kvm_inject_page_fault;
+ 
+diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
+index 57f0b75c80f9d..2ea2861bbb3c1 100644
+--- a/arch/x86/kvm/mmu/paging_tmpl.h
++++ b/arch/x86/kvm/mmu/paging_tmpl.h
+@@ -324,7 +324,7 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker,
+ 	trace_kvm_mmu_pagetable_walk(addr, access);
+ retry_walk:
+ 	walker->level = mmu->cpu_role.base.level;
+-	pte           = mmu->get_guest_pgd(vcpu);
++	pte           = kvm_mmu_get_guest_pgd(vcpu, mmu);
+ 	have_ad       = PT_HAVE_ACCESSED_DIRTY(mmu);
+ 
+ #if PTTYPE == 64
+diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
+index 612e6c70ce2e7..f4aa170b5b972 100644
+--- a/arch/x86/kvm/pmu.c
++++ b/arch/x86/kvm/pmu.c
+@@ -540,9 +540,9 @@ int kvm_pmu_rdpmc(struct kvm_vcpu *vcpu, unsigned idx, u64 *data)
+ 	if (!pmc)
+ 		return 1;
+ 
+-	if (!(kvm_read_cr4(vcpu) & X86_CR4_PCE) &&
++	if (!(kvm_read_cr4_bits(vcpu, X86_CR4_PCE)) &&
+ 	    (static_call(kvm_x86_get_cpl)(vcpu) != 0) &&
+-	    (kvm_read_cr0(vcpu) & X86_CR0_PE))
++	    (kvm_read_cr0_bits(vcpu, X86_CR0_PE)))
+ 		return 1;
+ 
+ 	*data = pmc_read_counter(pmc) & mask;
+diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
+index 768487611db78..89fa35fba3d86 100644
+--- a/arch/x86/kvm/vmx/nested.c
++++ b/arch/x86/kvm/vmx/nested.c
+@@ -4483,7 +4483,7 @@ static void load_vmcs12_host_state(struct kvm_vcpu *vcpu,
+ 	 * CR0_GUEST_HOST_MASK is already set in the original vmcs01
+ 	 * (KVM doesn't change it);
+ 	 */
+-	vcpu->arch.cr0_guest_owned_bits = KVM_POSSIBLE_CR0_GUEST_BITS;
++	vcpu->arch.cr0_guest_owned_bits = vmx_l1_guest_owned_cr0_bits();
+ 	vmx_set_cr0(vcpu, vmcs12->host_cr0);
+ 
+ 	/* Same as above - no reason to call set_cr4_guest_host_mask().  */
+@@ -4634,7 +4634,7 @@ static void nested_vmx_restore_host_state(struct kvm_vcpu *vcpu)
+ 	 */
+ 	vmx_set_efer(vcpu, nested_vmx_get_vmcs01_guest_efer(vmx));
+ 
+-	vcpu->arch.cr0_guest_owned_bits = KVM_POSSIBLE_CR0_GUEST_BITS;
++	vcpu->arch.cr0_guest_owned_bits = vmx_l1_guest_owned_cr0_bits();
+ 	vmx_set_cr0(vcpu, vmcs_readl(CR0_READ_SHADOW));
+ 
+ 	vcpu->arch.cr4_guest_owned_bits = ~vmcs_readl(CR4_GUEST_HOST_MASK);
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index dd92361f41b3f..8ead0916e252e 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -4773,7 +4773,7 @@ static void init_vmcs(struct vcpu_vmx *vmx)
+ 	/* 22.2.1, 20.8.1 */
+ 	vm_entry_controls_set(vmx, vmx_vmentry_ctrl());
+ 
+-	vmx->vcpu.arch.cr0_guest_owned_bits = KVM_POSSIBLE_CR0_GUEST_BITS;
++	vmx->vcpu.arch.cr0_guest_owned_bits = vmx_l1_guest_owned_cr0_bits();
+ 	vmcs_writel(CR0_GUEST_HOST_MASK, ~vmx->vcpu.arch.cr0_guest_owned_bits);
+ 
+ 	set_cr4_guest_host_mask(vmx);
+@@ -5500,7 +5500,7 @@ static int handle_cr(struct kvm_vcpu *vcpu)
+ 		break;
+ 	case 3: /* lmsw */
+ 		val = (exit_qualification >> LMSW_SOURCE_DATA_SHIFT) & 0x0f;
+-		trace_kvm_cr_write(0, (kvm_read_cr0(vcpu) & ~0xful) | val);
++		trace_kvm_cr_write(0, (kvm_read_cr0_bits(vcpu, ~0xful) | val));
+ 		kvm_lmsw(vcpu, val);
+ 
+ 		return kvm_skip_emulated_instruction(vcpu);
+@@ -7558,7 +7558,7 @@ static u8 vmx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
+ 	if (!kvm_arch_has_noncoherent_dma(vcpu->kvm))
+ 		return (MTRR_TYPE_WRBACK << VMX_EPT_MT_EPTE_SHIFT) | VMX_EPT_IPAT_BIT;
+ 
+-	if (kvm_read_cr0(vcpu) & X86_CR0_CD) {
++	if (kvm_read_cr0_bits(vcpu, X86_CR0_CD)) {
+ 		if (kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_CD_NW_CLEARED))
+ 			cache = MTRR_TYPE_WRBACK;
+ 		else
+diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
+index 2acdc54bc34b1..423e9d3c9c408 100644
+--- a/arch/x86/kvm/vmx/vmx.h
++++ b/arch/x86/kvm/vmx/vmx.h
+@@ -640,6 +640,24 @@ BUILD_CONTROLS_SHADOW(tertiary_exec, TERTIARY_VM_EXEC_CONTROL, 64)
+ 				(1 << VCPU_EXREG_EXIT_INFO_1) | \
+ 				(1 << VCPU_EXREG_EXIT_INFO_2))
+ 
++static inline unsigned long vmx_l1_guest_owned_cr0_bits(void)
++{
++	unsigned long bits = KVM_POSSIBLE_CR0_GUEST_BITS;
++
++	/*
++	 * CR0.WP needs to be intercepted when KVM is shadowing legacy paging
++	 * in order to construct shadow PTEs with the correct protections.
++	 * Note!  CR0.WP technically can be passed through to the guest if
++	 * paging is disabled, but checking CR0.PG would generate a cyclical
++	 * dependency of sorts due to forcing the caller to ensure CR0 holds
++	 * the correct value prior to determining which CR0 bits can be owned
++	 * by L1.  Keep it simple and limit the optimization to EPT.
++	 */
++	if (!enable_ept)
++		bits &= ~X86_CR0_WP;
++	return bits;
++}
++
+ static __always_inline struct kvm_vmx *to_kvm_vmx(struct kvm *kvm)
+ {
+ 	return container_of(kvm, struct kvm_vmx, kvm);
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 3d852ce849206..999b2db0737be 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -906,6 +906,18 @@ EXPORT_SYMBOL_GPL(load_pdptrs);
+ 
+ void kvm_post_set_cr0(struct kvm_vcpu *vcpu, unsigned long old_cr0, unsigned long cr0)
+ {
++	/*
++	 * CR0.WP is incorporated into the MMU role, but only for non-nested,
++	 * indirect shadow MMUs.  If TDP is enabled, the MMU's metadata needs
++	 * to be updated, e.g. so that emulating guest translations does the
++	 * right thing, but there's no need to unload the root as CR0.WP
++	 * doesn't affect SPTEs.
++	 */
++	if (tdp_enabled && (cr0 ^ old_cr0) == X86_CR0_WP) {
++		kvm_init_mmu(vcpu);
++		return;
++	}
++
+ 	if ((cr0 ^ old_cr0) & X86_CR0_PG) {
+ 		kvm_clear_async_pf_completion_queue(vcpu);
+ 		kvm_async_pf_hash_reset(vcpu);
+diff --git a/arch/x86/lib/clear_page_64.S b/arch/x86/lib/clear_page_64.S
+index ecbfb4dd3b019..faa4cdc747a3e 100644
+--- a/arch/x86/lib/clear_page_64.S
++++ b/arch/x86/lib/clear_page_64.S
+@@ -142,8 +142,8 @@ SYM_FUNC_START(clear_user_rep_good)
+ 	and $7, %edx
+ 	jz .Lrep_good_exit
+ 
+-.Lrep_good_bytes:
+ 	mov %edx, %ecx
++.Lrep_good_bytes:
+ 	rep stosb
+ 
+ .Lrep_good_exit:
+diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S
+index 5f61c65322bea..22fc313c65004 100644
+--- a/arch/x86/lib/retpoline.S
++++ b/arch/x86/lib/retpoline.S
+@@ -144,8 +144,8 @@ SYM_CODE_END(__x86_indirect_jump_thunk_array)
+  */
+ 	.align 64
+ 	.skip 63, 0xcc
+-SYM_FUNC_START_NOALIGN(zen_untrain_ret);
+-
++SYM_START(zen_untrain_ret, SYM_L_GLOBAL, SYM_A_NONE)
++	ANNOTATE_NOENDBR
+ 	/*
+ 	 * As executed from zen_untrain_ret, this is:
+ 	 *
+diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
+index bd50b55bdb613..75bad5d60c9f4 100644
+--- a/block/blk-cgroup.c
++++ b/block/blk-cgroup.c
+@@ -528,6 +528,9 @@ restart:
+ 	list_for_each_entry_safe(blkg, n, &q->blkg_list, q_node) {
+ 		struct blkcg *blkcg = blkg->blkcg;
+ 
++		if (hlist_unhashed(&blkg->blkcg_node))
++			continue;
++
+ 		spin_lock(&blkcg->lock);
+ 		blkg_destroy(blkg);
+ 		spin_unlock(&blkcg->lock);
+diff --git a/crypto/algapi.c b/crypto/algapi.c
+index 9de0677b3643d..60b98d2c400e3 100644
+--- a/crypto/algapi.c
++++ b/crypto/algapi.c
+@@ -963,6 +963,9 @@ EXPORT_SYMBOL_GPL(crypto_enqueue_request);
+ void crypto_enqueue_request_head(struct crypto_queue *queue,
+ 				 struct crypto_async_request *request)
+ {
++	if (unlikely(queue->qlen >= queue->max_qlen))
++		queue->backlog = queue->backlog->prev;
++
+ 	queue->qlen++;
+ 	list_add(&request->list, &queue->list);
+ }
+diff --git a/crypto/crypto_engine.c b/crypto/crypto_engine.c
+index 21f7916151145..74fcc08970411 100644
+--- a/crypto/crypto_engine.c
++++ b/crypto/crypto_engine.c
+@@ -129,9 +129,6 @@ start_request:
+ 	if (!engine->retry_support)
+ 		engine->cur_req = async_req;
+ 
+-	if (backlog)
+-		crypto_request_complete(backlog, -EINPROGRESS);
+-
+ 	if (engine->busy)
+ 		was_busy = true;
+ 	else
+@@ -217,6 +214,9 @@ req_err_2:
+ 	crypto_request_complete(async_req, ret);
+ 
+ retry:
++	if (backlog)
++		crypto_request_complete(backlog, -EINPROGRESS);
++
+ 	/* If retry mechanism is supported, send new requests to engine */
+ 	if (engine->retry_support) {
+ 		spin_lock_irqsave(&engine->queue_lock, flags);
+diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
+index 604c1a13c76ef..41c35ab2c25a1 100644
+--- a/drivers/block/ublk_drv.c
++++ b/drivers/block/ublk_drv.c
+@@ -128,6 +128,7 @@ struct ublk_queue {
+ 	unsigned long io_addr;	/* mapped vm address */
+ 	unsigned int max_io_sz;
+ 	bool force_abort;
++	bool timeout;
+ 	unsigned short nr_io_ready;	/* how many ios setup */
+ 	struct ublk_device *dev;
+ 	struct ublk_io ios[];
+@@ -900,6 +901,22 @@ static void ublk_queue_cmd(struct ublk_queue *ubq, struct request *rq)
+ 	}
+ }
+ 
++static enum blk_eh_timer_return ublk_timeout(struct request *rq)
++{
++	struct ublk_queue *ubq = rq->mq_hctx->driver_data;
++
++	if (ubq->flags & UBLK_F_UNPRIVILEGED_DEV) {
++		if (!ubq->timeout) {
++			send_sig(SIGKILL, ubq->ubq_daemon, 0);
++			ubq->timeout = true;
++		}
++
++		return BLK_EH_DONE;
++	}
++
++	return BLK_EH_RESET_TIMER;
++}
++
+ static blk_status_t ublk_queue_rq(struct blk_mq_hw_ctx *hctx,
+ 		const struct blk_mq_queue_data *bd)
+ {
+@@ -959,6 +976,7 @@ static const struct blk_mq_ops ublk_mq_ops = {
+ 	.queue_rq       = ublk_queue_rq,
+ 	.init_hctx	= ublk_init_hctx,
+ 	.init_request   = ublk_init_rq,
++	.timeout	= ublk_timeout,
+ };
+ 
+ static int ublk_ch_open(struct inode *inode, struct file *filp)
+@@ -1721,6 +1739,18 @@ static int ublk_ctrl_add_dev(struct io_uring_cmd *cmd)
+ 	else if (!(info.flags & UBLK_F_UNPRIVILEGED_DEV))
+ 		return -EPERM;
+ 
++	/*
++	 * unprivileged device can't be trusted, but RECOVERY and
++	 * RECOVERY_REISSUE still may hang error handling, so can't
++	 * support recovery features for unprivileged ublk now
++	 *
++	 * TODO: provide forward progress for RECOVERY handler, so that
++	 * unprivileged device can benefit from it
++	 */
++	if (info.flags & UBLK_F_UNPRIVILEGED_DEV)
++		info.flags &= ~(UBLK_F_USER_RECOVERY_REISSUE |
++				UBLK_F_USER_RECOVERY);
++
+ 	/* the created device is always owned by current user */
+ 	ublk_store_owner_uid_gid(&info.owner_uid, &info.owner_gid);
+ 
+@@ -1989,6 +2019,7 @@ static void ublk_queue_reinit(struct ublk_device *ub, struct ublk_queue *ubq)
+ 	put_task_struct(ubq->ubq_daemon);
+ 	/* We have to reset it to NULL, otherwise ub won't accept new FETCH_REQ */
+ 	ubq->ubq_daemon = NULL;
++	ubq->timeout = false;
+ 
+ 	for (i = 0; i < ubq->q_depth; i++) {
+ 		struct ublk_io *io = &ubq->ios[i];
+diff --git a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c
+index 83c6dfad77e1b..16966cc94e247 100644
+--- a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c
++++ b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c
+@@ -151,7 +151,7 @@ static int sun8i_ss_setup_ivs(struct skcipher_request *areq)
+ 		}
+ 		rctx->p_iv[i] = a;
+ 		/* we need to setup all others IVs only in the decrypt way */
+-		if (rctx->op_dir & SS_ENCRYPTION)
++		if (rctx->op_dir == SS_ENCRYPTION)
+ 			return 0;
+ 		todo = min(len, sg_dma_len(sg));
+ 		len -= todo;
+diff --git a/drivers/crypto/ccp/psp-dev.c b/drivers/crypto/ccp/psp-dev.c
+index c9c741ac84421..949a3fa0b94a9 100644
+--- a/drivers/crypto/ccp/psp-dev.c
++++ b/drivers/crypto/ccp/psp-dev.c
+@@ -42,6 +42,9 @@ static irqreturn_t psp_irq_handler(int irq, void *data)
+ 	/* Read the interrupt status: */
+ 	status = ioread32(psp->io_regs + psp->vdata->intsts_reg);
+ 
++	/* Clear the interrupt status by writing the same value we read. */
++	iowrite32(status, psp->io_regs + psp->vdata->intsts_reg);
++
+ 	/* invoke subdevice interrupt handlers */
+ 	if (status) {
+ 		if (psp->sev_irq_handler)
+@@ -51,9 +54,6 @@ static irqreturn_t psp_irq_handler(int irq, void *data)
+ 			psp->tee_irq_handler(irq, psp->tee_irq_data, status);
+ 	}
+ 
+-	/* Clear the interrupt status by writing the same value we read. */
+-	iowrite32(status, psp->io_regs + psp->vdata->intsts_reg);
+-
+ 	return IRQ_HANDLED;
+ }
+ 
+diff --git a/drivers/edac/qcom_edac.c b/drivers/edac/qcom_edac.c
+index 3256254c3722c..a7158b570ddd0 100644
+--- a/drivers/edac/qcom_edac.c
++++ b/drivers/edac/qcom_edac.c
+@@ -76,6 +76,8 @@
+ #define DRP0_INTERRUPT_ENABLE           BIT(6)
+ #define SB_DB_DRP_INTERRUPT_ENABLE      0x3
+ 
++#define ECC_POLL_MSEC			5000
++
+ enum {
+ 	LLCC_DRAM_CE = 0,
+ 	LLCC_DRAM_UE,
+@@ -285,8 +287,7 @@ dump_syn_reg(struct edac_device_ctl_info *edev_ctl, int err_type, u32 bank)
+ 	return ret;
+ }
+ 
+-static irqreturn_t
+-llcc_ecc_irq_handler(int irq, void *edev_ctl)
++static irqreturn_t llcc_ecc_irq_handler(int irq, void *edev_ctl)
+ {
+ 	struct edac_device_ctl_info *edac_dev_ctl = edev_ctl;
+ 	struct llcc_drv_data *drv = edac_dev_ctl->dev->platform_data;
+@@ -332,6 +333,11 @@ llcc_ecc_irq_handler(int irq, void *edev_ctl)
+ 	return irq_rc;
+ }
+ 
++static void llcc_ecc_check(struct edac_device_ctl_info *edev_ctl)
++{
++	llcc_ecc_irq_handler(0, edev_ctl);
++}
++
+ static int qcom_llcc_edac_probe(struct platform_device *pdev)
+ {
+ 	struct llcc_drv_data *llcc_driv_data = pdev->dev.platform_data;
+@@ -359,29 +365,31 @@ static int qcom_llcc_edac_probe(struct platform_device *pdev)
+ 	edev_ctl->ctl_name = "llcc";
+ 	edev_ctl->panic_on_ue = LLCC_ERP_PANIC_ON_UE;
+ 
+-	rc = edac_device_add_device(edev_ctl);
+-	if (rc)
+-		goto out_mem;
+-
+-	platform_set_drvdata(pdev, edev_ctl);
+-
+-	/* Request for ecc irq */
++	/* Check if LLCC driver has passed ECC IRQ */
+ 	ecc_irq = llcc_driv_data->ecc_irq;
+-	if (ecc_irq < 0) {
+-		rc = -ENODEV;
+-		goto out_dev;
+-	}
+-	rc = devm_request_irq(dev, ecc_irq, llcc_ecc_irq_handler,
++	if (ecc_irq > 0) {
++		/* Use interrupt mode if IRQ is available */
++		rc = devm_request_irq(dev, ecc_irq, llcc_ecc_irq_handler,
+ 			      IRQF_TRIGGER_HIGH, "llcc_ecc", edev_ctl);
+-	if (rc)
+-		goto out_dev;
++		if (!rc) {
++			edac_op_state = EDAC_OPSTATE_INT;
++			goto irq_done;
++		}
++	}
+ 
+-	return rc;
++	/* Fall back to polling mode otherwise */
++	edev_ctl->poll_msec = ECC_POLL_MSEC;
++	edev_ctl->edac_check = llcc_ecc_check;
++	edac_op_state = EDAC_OPSTATE_POLL;
+ 
+-out_dev:
+-	edac_device_del_device(edev_ctl->dev);
+-out_mem:
+-	edac_device_free_ctl_info(edev_ctl);
++irq_done:
++	rc = edac_device_add_device(edev_ctl);
++	if (rc) {
++		edac_device_free_ctl_info(edev_ctl);
++		return rc;
++	}
++
++	platform_set_drvdata(pdev, edev_ctl);
+ 
+ 	return rc;
+ }
+diff --git a/drivers/firewire/net.c b/drivers/firewire/net.c
+index af22be84034bb..538bd677c254a 100644
+--- a/drivers/firewire/net.c
++++ b/drivers/firewire/net.c
+@@ -706,21 +706,22 @@ static void fwnet_receive_packet(struct fw_card *card, struct fw_request *r,
+ 	int rcode;
+ 
+ 	if (destination == IEEE1394_ALL_NODES) {
+-		kfree(r);
+-
+-		return;
+-	}
+-
+-	if (offset != dev->handler.offset)
++		// Although the response to the broadcast packet is not necessarily required, the
++		// fw_send_response() function should still be called to maintain the reference
++		// counting of the object. In the case, the call of function just releases the
++		// object as a result to decrease the reference counting.
++		rcode = RCODE_COMPLETE;
++	} else if (offset != dev->handler.offset) {
+ 		rcode = RCODE_ADDRESS_ERROR;
+-	else if (tcode != TCODE_WRITE_BLOCK_REQUEST)
++	} else if (tcode != TCODE_WRITE_BLOCK_REQUEST) {
+ 		rcode = RCODE_TYPE_ERROR;
+-	else if (fwnet_incoming_packet(dev, payload, length,
+-				       source, generation, false) != 0) {
++	} else if (fwnet_incoming_packet(dev, payload, length,
++					 source, generation, false) != 0) {
+ 		dev_err(&dev->netdev->dev, "incoming packet failure\n");
+ 		rcode = RCODE_CONFLICT_ERROR;
+-	} else
++	} else {
+ 		rcode = RCODE_COMPLETE;
++	}
+ 
+ 	fw_send_response(card, r, rcode);
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+index 08eced097bd8e..2eb2c66843a88 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+@@ -1276,7 +1276,7 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
+ 		r = drm_sched_job_add_dependency(&leader->base, fence);
+ 		if (r) {
+ 			dma_fence_put(fence);
+-			goto error_cleanup;
++			return r;
+ 		}
+ 	}
+ 
+@@ -1303,7 +1303,8 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
+ 	}
+ 	if (r) {
+ 		r = -EAGAIN;
+-		goto error_unlock;
++		mutex_unlock(&p->adev->notifier_lock);
++		return r;
+ 	}
+ 
+ 	p->fence = dma_fence_get(&leader->base.s_fence->finished);
+@@ -1350,14 +1351,6 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
+ 	mutex_unlock(&p->adev->notifier_lock);
+ 	mutex_unlock(&p->bo_list->bo_list_mutex);
+ 	return 0;
+-
+-error_unlock:
+-	mutex_unlock(&p->adev->notifier_lock);
+-
+-error_cleanup:
+-	for (i = 0; i < p->gang_size; ++i)
+-		drm_sched_job_cleanup(&p->jobs[i]->base);
+-	return r;
+ }
+ 
+ /* Cleanup the parser structure */
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index 6f715fb930bb4..aa46726dfdb01 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -4482,7 +4482,11 @@ static int amdgpu_device_recover_vram(struct amdgpu_device *adev)
+ 	dev_info(adev->dev, "recover vram bo from shadow start\n");
+ 	mutex_lock(&adev->shadow_list_lock);
+ 	list_for_each_entry(vmbo, &adev->shadow_list, shadow_list) {
+-		shadow = &vmbo->bo;
++		/* If vm is compute context or adev is APU, shadow will be NULL */
++		if (!vmbo->shadow)
++			continue;
++		shadow = vmbo->shadow;
++
+ 		/* No need to recover an evicted BO */
+ 		if (shadow->tbo.resource->mem_type != TTM_PL_TT ||
+ 		    shadow->tbo.resource->start == AMDGPU_BO_INVALID_OFFSET ||
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
+index 35ed46b9249c1..8a0a4464ea830 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
+@@ -686,9 +686,11 @@ int amdgpu_gfx_ras_late_init(struct amdgpu_device *adev, struct ras_common_if *r
+ 		if (r)
+ 			return r;
+ 
+-		r = amdgpu_irq_get(adev, &adev->gfx.cp_ecc_error_irq, 0);
+-		if (r)
+-			goto late_fini;
++		if (adev->gfx.cp_ecc_error_irq.funcs) {
++			r = amdgpu_irq_get(adev, &adev->gfx.cp_ecc_error_irq, 0);
++			if (r)
++				goto late_fini;
++		}
+ 	} else {
+ 		amdgpu_ras_feature_enable_on_boot(adev, ras_block, 0);
+ 	}
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
+index e9b45089a28a6..863b2a34b2d64 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
+@@ -38,6 +38,7 @@ static int amdgpu_sched_process_priority_override(struct amdgpu_device *adev,
+ {
+ 	struct fd f = fdget(fd);
+ 	struct amdgpu_fpriv *fpriv;
++	struct amdgpu_ctx_mgr *mgr;
+ 	struct amdgpu_ctx *ctx;
+ 	uint32_t id;
+ 	int r;
+@@ -51,8 +52,11 @@ static int amdgpu_sched_process_priority_override(struct amdgpu_device *adev,
+ 		return r;
+ 	}
+ 
+-	idr_for_each_entry(&fpriv->ctx_mgr.ctx_handles, ctx, id)
++	mgr = &fpriv->ctx_mgr;
++	mutex_lock(&mgr->lock);
++	idr_for_each_entry(&mgr->ctx_handles, ctx, id)
+ 		amdgpu_ctx_priority_override(ctx, priority);
++	mutex_unlock(&mgr->lock);
+ 
+ 	fdput(f);
+ 	return 0;
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
+index ecf8ceb53311a..7609d206012fa 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
+@@ -1313,13 +1313,6 @@ static int gfx_v11_0_sw_init(void *handle)
+ 	if (r)
+ 		return r;
+ 
+-	/* ECC error */
+-	r = amdgpu_irq_add_id(adev, SOC21_IH_CLIENTID_GRBM_CP,
+-				  GFX_11_0_0__SRCID__CP_ECC_ERROR,
+-				  &adev->gfx.cp_ecc_error_irq);
+-	if (r)
+-		return r;
+-
+ 	/* FED error */
+ 	r = amdgpu_irq_add_id(adev, SOC21_IH_CLIENTID_GFX,
+ 				  GFX_11_0_0__SRCID__RLC_GC_FED_INTERRUPT,
+@@ -4442,7 +4435,6 @@ static int gfx_v11_0_hw_fini(void *handle)
+ 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+ 	int r;
+ 
+-	amdgpu_irq_put(adev, &adev->gfx.cp_ecc_error_irq, 0);
+ 	amdgpu_irq_put(adev, &adev->gfx.priv_reg_irq, 0);
+ 	amdgpu_irq_put(adev, &adev->gfx.priv_inst_irq, 0);
+ 
+@@ -5882,36 +5874,6 @@ static void gfx_v11_0_set_compute_eop_interrupt_state(struct amdgpu_device *adev
+ 	}
+ }
+ 
+-#define CP_ME1_PIPE_INST_ADDR_INTERVAL  0x1
+-#define SET_ECC_ME_PIPE_STATE(reg_addr, state) \
+-	do { \
+-		uint32_t tmp = RREG32_SOC15_IP(GC, reg_addr); \
+-		tmp = REG_SET_FIELD(tmp, CP_ME1_PIPE0_INT_CNTL, CP_ECC_ERROR_INT_ENABLE, state); \
+-		WREG32_SOC15_IP(GC, reg_addr, tmp); \
+-	} while (0)
+-
+-static int gfx_v11_0_set_cp_ecc_error_state(struct amdgpu_device *adev,
+-							struct amdgpu_irq_src *source,
+-							unsigned type,
+-							enum amdgpu_interrupt_state state)
+-{
+-	uint32_t ecc_irq_state = 0;
+-	uint32_t pipe0_int_cntl_addr = 0;
+-	int i = 0;
+-
+-	ecc_irq_state = (state == AMDGPU_IRQ_STATE_ENABLE) ? 1 : 0;
+-
+-	pipe0_int_cntl_addr = SOC15_REG_OFFSET(GC, 0, regCP_ME1_PIPE0_INT_CNTL);
+-
+-	WREG32_FIELD15_PREREG(GC, 0, CP_INT_CNTL_RING0, CP_ECC_ERROR_INT_ENABLE, ecc_irq_state);
+-
+-	for (i = 0; i < adev->gfx.mec.num_pipe_per_mec; i++)
+-		SET_ECC_ME_PIPE_STATE(pipe0_int_cntl_addr + i * CP_ME1_PIPE_INST_ADDR_INTERVAL,
+-					ecc_irq_state);
+-
+-	return 0;
+-}
+-
+ static int gfx_v11_0_set_eop_interrupt_state(struct amdgpu_device *adev,
+ 					    struct amdgpu_irq_src *src,
+ 					    unsigned type,
+@@ -6329,11 +6291,6 @@ static const struct amdgpu_irq_src_funcs gfx_v11_0_priv_inst_irq_funcs = {
+ 	.process = gfx_v11_0_priv_inst_irq,
+ };
+ 
+-static const struct amdgpu_irq_src_funcs gfx_v11_0_cp_ecc_error_irq_funcs = {
+-	.set = gfx_v11_0_set_cp_ecc_error_state,
+-	.process = amdgpu_gfx_cp_ecc_error_irq,
+-};
+-
+ static const struct amdgpu_irq_src_funcs gfx_v11_0_rlc_gc_fed_irq_funcs = {
+ 	.process = gfx_v11_0_rlc_gc_fed_irq,
+ };
+@@ -6349,9 +6306,6 @@ static void gfx_v11_0_set_irq_funcs(struct amdgpu_device *adev)
+ 	adev->gfx.priv_inst_irq.num_types = 1;
+ 	adev->gfx.priv_inst_irq.funcs = &gfx_v11_0_priv_inst_irq_funcs;
+ 
+-	adev->gfx.cp_ecc_error_irq.num_types = 1; /* CP ECC error */
+-	adev->gfx.cp_ecc_error_irq.funcs = &gfx_v11_0_cp_ecc_error_irq_funcs;
+-
+ 	adev->gfx.rlc_gc_fed_irq.num_types = 1; /* 0x80 FED error */
+ 	adev->gfx.rlc_gc_fed_irq.funcs = &gfx_v11_0_rlc_gc_fed_irq_funcs;
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+index ae09fc1cfe6b7..c54d05bdc2d8c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+@@ -3751,7 +3751,8 @@ static int gfx_v9_0_hw_fini(void *handle)
+ {
+ 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+ 
+-	amdgpu_irq_put(adev, &adev->gfx.cp_ecc_error_irq, 0);
++	if (amdgpu_ras_is_supported(adev, AMDGPU_RAS_BLOCK__GFX))
++		amdgpu_irq_put(adev, &adev->gfx.cp_ecc_error_irq, 0);
+ 	amdgpu_irq_put(adev, &adev->gfx.priv_reg_irq, 0);
+ 	amdgpu_irq_put(adev, &adev->gfx.priv_inst_irq, 0);
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c
+index ab2556ca984e1..be16e627e54b9 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c
+@@ -1161,7 +1161,6 @@ static int gmc_v10_0_hw_fini(void *handle)
+ 		return 0;
+ 	}
+ 
+-	amdgpu_irq_put(adev, &adev->gmc.ecc_irq, 0);
+ 	amdgpu_irq_put(adev, &adev->gmc.vm_fault, 0);
+ 
+ 	return 0;
+diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v11_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v11_0.c
+index af7b3ba1ca000..abdb8d8421b1b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gmc_v11_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gmc_v11_0.c
+@@ -954,7 +954,6 @@ static int gmc_v11_0_hw_fini(void *handle)
+ 		return 0;
+ 	}
+ 
+-	amdgpu_irq_put(adev, &adev->gmc.ecc_irq, 0);
+ 	amdgpu_irq_put(adev, &adev->gmc.vm_fault, 0);
+ 	gmc_v11_0_gart_disable(adev);
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+index b06170c00dfca..83d22dd8b8715 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+@@ -1987,7 +1987,6 @@ static int gmc_v9_0_hw_fini(void *handle)
+ 	if (adev->mmhub.funcs->update_power_gating)
+ 		adev->mmhub.funcs->update_power_gating(adev, false);
+ 
+-	amdgpu_irq_put(adev, &adev->gmc.ecc_irq, 0);
+ 	amdgpu_irq_put(adev, &adev->gmc.vm_fault, 0);
+ 
+ 	return 0;
+diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c
+index a1b751d9ac064..323d68b2124fa 100644
+--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c
+@@ -54,6 +54,7 @@ static int jpeg_v3_0_early_init(void *handle)
+ 
+ 	switch (adev->ip_versions[UVD_HWIP][0]) {
+ 	case IP_VERSION(3, 1, 1):
++	case IP_VERSION(3, 1, 2):
+ 		break;
+ 	default:
+ 		harvest = RREG32_SOC15(JPEG, 0, mmCC_UVD_HARVESTING);
+diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
+index b5affba221569..8b8ddf0502661 100644
+--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
+@@ -1903,9 +1903,11 @@ static int sdma_v4_0_hw_fini(void *handle)
+ 		return 0;
+ 	}
+ 
+-	for (i = 0; i < adev->sdma.num_instances; i++) {
+-		amdgpu_irq_put(adev, &adev->sdma.ecc_irq,
+-			       AMDGPU_SDMA_IRQ_INSTANCE0 + i);
++	if (amdgpu_ras_is_supported(adev, AMDGPU_RAS_BLOCK__SDMA)) {
++		for (i = 0; i < adev->sdma.num_instances; i++) {
++			amdgpu_irq_put(adev, &adev->sdma.ecc_irq,
++				       AMDGPU_SDMA_IRQ_INSTANCE0 + i);
++		}
+ 	}
+ 
+ 	sdma_v4_0_ctx_switch_enable(adev, false);
+diff --git a/drivers/gpu/drm/amd/amdgpu/soc21.c b/drivers/gpu/drm/amd/amdgpu/soc21.c
+index c82b3a7ea5f08..20cf861bfda11 100644
+--- a/drivers/gpu/drm/amd/amdgpu/soc21.c
++++ b/drivers/gpu/drm/amd/amdgpu/soc21.c
+@@ -778,7 +778,7 @@ static int soc21_common_early_init(void *handle)
+ 			AMD_PG_SUPPORT_VCN_DPG |
+ 			AMD_PG_SUPPORT_GFX_PG |
+ 			AMD_PG_SUPPORT_JPEG;
+-		adev->external_rev_id = adev->rev_id + 0x1;
++		adev->external_rev_id = adev->rev_id + 0x80;
+ 		break;
+ 
+ 	default:
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 62af874f26e01..f54d670ab3abc 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -2274,7 +2274,7 @@ static int dm_late_init(void *handle)
+ 		struct dc_link *edp_links[MAX_NUM_EDP];
+ 		int edp_num;
+ 
+-		get_edp_links(adev->dm.dc, edp_links, &edp_num);
++		dc_get_edp_links(adev->dm.dc, edp_links, &edp_num);
+ 		for (i = 0; i < edp_num; i++) {
+ 			if (!dmub_init_abm_config(adev->dm.dc->res_pool, params, i))
+ 				return -EINVAL;
+@@ -7876,6 +7876,13 @@ static void amdgpu_dm_commit_cursors(struct drm_atomic_state *state)
+ 			handle_cursor_update(plane, old_plane_state);
+ }
+ 
++static inline uint32_t get_mem_type(struct drm_framebuffer *fb)
++{
++	struct amdgpu_bo *abo = gem_to_amdgpu_bo(fb->obj[0]);
++
++	return abo->tbo.resource ? abo->tbo.resource->mem_type : 0;
++}
++
+ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
+ 				    struct dc_state *dc_state,
+ 				    struct drm_device *dev,
+@@ -7950,6 +7957,8 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
+ 			continue;
+ 
+ 		dc_plane = dm_new_plane_state->dc_state;
++		if (!dc_plane)
++			continue;
+ 
+ 		bundle->surface_updates[planes_count].surface = dc_plane;
+ 		if (new_pcrtc_state->color_mgmt_changed) {
+@@ -8016,11 +8025,13 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
+ 
+ 		/*
+ 		 * Only allow immediate flips for fast updates that don't
+-		 * change FB pitch, DCC state, rotation or mirroing.
++		 * change memory domain, FB pitch, DCC state, rotation or
++		 * mirroring.
+ 		 */
+ 		bundle->flip_addrs[planes_count].flip_immediate =
+ 			crtc->state->async_flip &&
+-			acrtc_state->update_type == UPDATE_TYPE_FAST;
++			acrtc_state->update_type == UPDATE_TYPE_FAST &&
++			get_mem_type(old_plane_state->fb) == get_mem_type(fb);
+ 
+ 		timestamp_ns = ktime_get_ns();
+ 		bundle->flip_addrs[planes_count].flip_timestamp_in_us = div_u64(timestamp_ns, 1000);
+@@ -8533,6 +8544,9 @@ static void amdgpu_dm_atomic_commit_tail(struct drm_atomic_state *state)
+ 		struct amdgpu_crtc *acrtc = to_amdgpu_crtc(dm_new_con_state->base.crtc);
+ 		struct amdgpu_dm_connector *aconnector = to_amdgpu_dm_connector(connector);
+ 
++		if (!adev->dm.hdcp_workqueue)
++			continue;
++
+ 		pr_debug("[HDCP_DM] -------------- i : %x ----------\n", i);
+ 
+ 		if (!connector)
+@@ -8581,6 +8595,9 @@ static void amdgpu_dm_atomic_commit_tail(struct drm_atomic_state *state)
+ 		struct amdgpu_crtc *acrtc = to_amdgpu_crtc(dm_new_con_state->base.crtc);
+ 		struct amdgpu_dm_connector *aconnector = to_amdgpu_dm_connector(connector);
+ 
++		if (!adev->dm.hdcp_workqueue)
++			continue;
++
+ 		new_crtc_state = NULL;
+ 		old_crtc_state = NULL;
+ 
+@@ -9601,8 +9618,9 @@ static int dm_update_plane_state(struct dc *dc,
+ 			return -EINVAL;
+ 		}
+ 
++		if (dm_old_plane_state->dc_state)
++			dc_plane_state_release(dm_old_plane_state->dc_state);
+ 
+-		dc_plane_state_release(dm_old_plane_state->dc_state);
+ 		dm_new_plane_state->dc_state = NULL;
+ 
+ 		*lock_and_validation_needed = true;
+@@ -10150,6 +10168,7 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
+ 		ret = compute_mst_dsc_configs_for_state(state, dm_state->context, vars);
+ 		if (ret) {
+ 			DRM_DEBUG_DRIVER("compute_mst_dsc_configs_for_state() failed\n");
++			ret = -EINVAL;
+ 			goto fail;
+ 		}
+ 
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
+index 09a3efa517da9..4a5dae578d970 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
+@@ -724,7 +724,7 @@ static ssize_t dp_phy_test_pattern_debugfs_write(struct file *f, const char __us
+ 	for (i = 0; i < (unsigned int)(link_training_settings.link_settings.lane_count); i++)
+ 		link_training_settings.hw_lane_settings[i] = link->cur_lane_setting[i];
+ 
+-	dc_link_set_test_pattern(
++	dc_link_dp_set_test_pattern(
+ 		link,
+ 		test_pattern,
+ 		DP_TEST_PATTERN_COLOR_SPACE_RGB,
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+index 8dc442f90eafa..3da519957f6c8 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+@@ -385,13 +385,17 @@ static int dm_dp_mst_get_modes(struct drm_connector *connector)
+ 		if (aconnector->dc_sink && connector->state) {
+ 			struct drm_device *dev = connector->dev;
+ 			struct amdgpu_device *adev = drm_to_adev(dev);
+-			struct hdcp_workqueue *hdcp_work = adev->dm.hdcp_workqueue;
+-			struct hdcp_workqueue *hdcp_w = &hdcp_work[aconnector->dc_link->link_index];
+ 
+-			connector->state->hdcp_content_type =
+-			hdcp_w->hdcp_content_type[connector->index];
+-			connector->state->content_protection =
+-			hdcp_w->content_protection[connector->index];
++			if (adev->dm.hdcp_workqueue) {
++				struct hdcp_workqueue *hdcp_work = adev->dm.hdcp_workqueue;
++				struct hdcp_workqueue *hdcp_w =
++					&hdcp_work[aconnector->dc_link->link_index];
++
++				connector->state->hdcp_content_type =
++				hdcp_w->hdcp_content_type[connector->index];
++				connector->state->content_protection =
++				hdcp_w->content_protection[connector->index];
++			}
+ 		}
+ #endif
+ 
+@@ -1410,6 +1414,7 @@ int pre_validate_dsc(struct drm_atomic_state *state,
+ 	ret = pre_compute_mst_dsc_configs_for_state(state, local_dc_state, vars);
+ 	if (ret != 0) {
+ 		DRM_INFO_ONCE("pre_compute_mst_dsc_configs_for_state() failed\n");
++		ret = -EINVAL;
+ 		goto clean_exit;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/clk_mgr.c
+index 69691daf4dbbd..73a45ec27f90d 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/clk_mgr.c
+@@ -104,7 +104,7 @@ void clk_mgr_exit_optimized_pwr_state(const struct dc *dc, struct clk_mgr *clk_m
+ 	int edp_num;
+ 	unsigned int panel_inst;
+ 
+-	get_edp_links(dc, edp_links, &edp_num);
++	dc_get_edp_links(dc, edp_links, &edp_num);
+ 	if (dc->hwss.exit_optimized_pwr_state)
+ 		dc->hwss.exit_optimized_pwr_state(dc, dc->current_state);
+ 
+@@ -129,7 +129,7 @@ void clk_mgr_optimize_pwr_state(const struct dc *dc, struct clk_mgr *clk_mgr)
+ 	int edp_num;
+ 	unsigned int panel_inst;
+ 
+-	get_edp_links(dc, edp_links, &edp_num);
++	dc_get_edp_links(dc, edp_links, &edp_num);
+ 	if (edp_num) {
+ 		for (panel_inst = 0; panel_inst < edp_num; panel_inst++) {
+ 			edp_link = edp_links[panel_inst];
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c
+index 61768bf726f8c..31ee81ebb4ea2 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c
+@@ -805,6 +805,8 @@ void dcn32_clk_mgr_construct(
+ 		struct pp_smu_funcs *pp_smu,
+ 		struct dccg *dccg)
+ {
++	struct clk_log_info log_info = {0};
++
+ 	clk_mgr->base.ctx = ctx;
+ 	clk_mgr->base.funcs = &dcn32_funcs;
+ 	if (ASICREV_IS_GC_11_0_2(clk_mgr->base.ctx->asic_id.hw_internal_rev)) {
+@@ -838,6 +840,7 @@ void dcn32_clk_mgr_construct(
+ 			clk_mgr->base.clks.ref_dtbclk_khz = 268750;
+ 	}
+ 
++
+ 	/* integer part is now VCO frequency in kHz */
+ 	clk_mgr->base.dentist_vco_freq_khz = dcn32_get_vco_frequency_from_reg(clk_mgr);
+ 
+@@ -845,6 +848,8 @@ void dcn32_clk_mgr_construct(
+ 	if (clk_mgr->base.dentist_vco_freq_khz == 0)
+ 		clk_mgr->base.dentist_vco_freq_khz = 4300000; /* Updated as per HW docs */
+ 
++	dcn32_dump_clk_registers(&clk_mgr->base.boot_snapshot, &clk_mgr->base, &log_info);
++
+ 	if (ctx->dc->debug.disable_dtb_ref_clk_switch &&
+ 			clk_mgr->base.clks.ref_dtbclk_khz != clk_mgr->base.boot_snapshot.dtbclk) {
+ 		clk_mgr->base.clks.ref_dtbclk_khz = clk_mgr->base.boot_snapshot.dtbclk;
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index 1c218c5266509..d406d7b74c6c3 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -53,7 +53,6 @@
+ #include "link_encoder.h"
+ #include "link_enc_cfg.h"
+ 
+-#include "dc_link.h"
+ #include "link.h"
+ #include "dm_helpers.h"
+ #include "mem_input.h"
+@@ -1298,7 +1297,7 @@ static void detect_edp_presence(struct dc *dc)
+ 	int i;
+ 	int edp_num;
+ 
+-	get_edp_links(dc, edp_links, &edp_num);
++	dc_get_edp_links(dc, edp_links, &edp_num);
+ 	if (!edp_num)
+ 		return;
+ 
+@@ -4317,157 +4316,6 @@ bool dc_is_dmcu_initialized(struct dc *dc)
+ 	return false;
+ }
+ 
+-bool dc_is_oem_i2c_device_present(
+-	struct dc *dc,
+-	size_t slave_address)
+-{
+-	if (dc->res_pool->oem_device)
+-		return dce_i2c_oem_device_present(
+-			dc->res_pool,
+-			dc->res_pool->oem_device,
+-			slave_address);
+-
+-	return false;
+-}
+-
+-bool dc_submit_i2c(
+-		struct dc *dc,
+-		uint32_t link_index,
+-		struct i2c_command *cmd)
+-{
+-
+-	struct dc_link *link = dc->links[link_index];
+-	struct ddc_service *ddc = link->ddc;
+-	return dce_i2c_submit_command(
+-		dc->res_pool,
+-		ddc->ddc_pin,
+-		cmd);
+-}
+-
+-bool dc_submit_i2c_oem(
+-		struct dc *dc,
+-		struct i2c_command *cmd)
+-{
+-	struct ddc_service *ddc = dc->res_pool->oem_device;
+-	if (ddc)
+-		return dce_i2c_submit_command(
+-			dc->res_pool,
+-			ddc->ddc_pin,
+-			cmd);
+-
+-	return false;
+-}
+-
+-static bool link_add_remote_sink_helper(struct dc_link *dc_link, struct dc_sink *sink)
+-{
+-	if (dc_link->sink_count >= MAX_SINKS_PER_LINK) {
+-		BREAK_TO_DEBUGGER();
+-		return false;
+-	}
+-
+-	dc_sink_retain(sink);
+-
+-	dc_link->remote_sinks[dc_link->sink_count] = sink;
+-	dc_link->sink_count++;
+-
+-	return true;
+-}
+-
+-/*
+- * dc_link_add_remote_sink() - Create a sink and attach it to an existing link
+- *
+- * EDID length is in bytes
+- */
+-struct dc_sink *dc_link_add_remote_sink(
+-		struct dc_link *link,
+-		const uint8_t *edid,
+-		int len,
+-		struct dc_sink_init_data *init_data)
+-{
+-	struct dc_sink *dc_sink;
+-	enum dc_edid_status edid_status;
+-
+-	if (len > DC_MAX_EDID_BUFFER_SIZE) {
+-		dm_error("Max EDID buffer size breached!\n");
+-		return NULL;
+-	}
+-
+-	if (!init_data) {
+-		BREAK_TO_DEBUGGER();
+-		return NULL;
+-	}
+-
+-	if (!init_data->link) {
+-		BREAK_TO_DEBUGGER();
+-		return NULL;
+-	}
+-
+-	dc_sink = dc_sink_create(init_data);
+-
+-	if (!dc_sink)
+-		return NULL;
+-
+-	memmove(dc_sink->dc_edid.raw_edid, edid, len);
+-	dc_sink->dc_edid.length = len;
+-
+-	if (!link_add_remote_sink_helper(
+-			link,
+-			dc_sink))
+-		goto fail_add_sink;
+-
+-	edid_status = dm_helpers_parse_edid_caps(
+-			link,
+-			&dc_sink->dc_edid,
+-			&dc_sink->edid_caps);
+-
+-	/*
+-	 * Treat device as no EDID device if EDID
+-	 * parsing fails
+-	 */
+-	if (edid_status != EDID_OK && edid_status != EDID_PARTIAL_VALID) {
+-		dc_sink->dc_edid.length = 0;
+-		dm_error("Bad EDID, status%d!\n", edid_status);
+-	}
+-
+-	return dc_sink;
+-
+-fail_add_sink:
+-	dc_sink_release(dc_sink);
+-	return NULL;
+-}
+-
+-/*
+- * dc_link_remove_remote_sink() - Remove a remote sink from a dc_link
+- *
+- * Note that this just removes the struct dc_sink - it doesn't
+- * program hardware or alter other members of dc_link
+- */
+-void dc_link_remove_remote_sink(struct dc_link *link, struct dc_sink *sink)
+-{
+-	int i;
+-
+-	if (!link->sink_count) {
+-		BREAK_TO_DEBUGGER();
+-		return;
+-	}
+-
+-	for (i = 0; i < link->sink_count; i++) {
+-		if (link->remote_sinks[i] == sink) {
+-			dc_sink_release(sink);
+-			link->remote_sinks[i] = NULL;
+-
+-			/* shrink array to remove empty place */
+-			while (i < link->sink_count - 1) {
+-				link->remote_sinks[i] = link->remote_sinks[i+1];
+-				i++;
+-			}
+-			link->remote_sinks[i] = NULL;
+-			link->sink_count--;
+-			return;
+-		}
+-	}
+-}
+-
+ void get_clock_requirements_for_state(struct dc_state *state, struct AsicStateEx *info)
+ {
+ 	info->displayClock				= (unsigned int)state->bw_ctx.bw.dcn.clk.dispclk_khz;
+@@ -4990,7 +4838,7 @@ void dc_notify_vsync_int_state(struct dc *dc, struct dc_stream_state *stream, bo
+ 		return;
+ 	}
+ 
+-	get_edp_links(dc, edp_links, &edp_num);
++	dc_get_edp_links(dc, edp_links, &edp_num);
+ 
+ 	/* Determine panel inst */
+ 	for (i = 0; i < edp_num; i++) {
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_exports.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_exports.c
+index a951e10416ee6..862cb0f93b7d9 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_exports.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_exports.c
+@@ -34,6 +34,49 @@
+  * in this file which calls link functions.
+  */
+ #include "link.h"
++#include "dce/dce_i2c.h"
++struct dc_link *dc_get_link_at_index(struct dc *dc, uint32_t link_index)
++{
++	return dc->links[link_index];
++}
++
++void dc_get_edp_links(const struct dc *dc,
++		struct dc_link **edp_links,
++		int *edp_num)
++{
++	int i;
++
++	*edp_num = 0;
++	for (i = 0; i < dc->link_count; i++) {
++		// report any eDP links, even unconnected DDI's
++		if (!dc->links[i])
++			continue;
++		if (dc->links[i]->connector_signal == SIGNAL_TYPE_EDP) {
++			edp_links[*edp_num] = dc->links[i];
++			if (++(*edp_num) == MAX_NUM_EDP)
++				return;
++		}
++	}
++}
++
++bool dc_get_edp_link_panel_inst(const struct dc *dc,
++		const struct dc_link *link,
++		unsigned int *inst_out)
++{
++	struct dc_link *edp_links[MAX_NUM_EDP];
++	int edp_num, i;
++
++	*inst_out = 0;
++	if (link->connector_signal != SIGNAL_TYPE_EDP)
++		return false;
++	dc_get_edp_links(dc, edp_links, &edp_num);
++	for (i = 0; i < edp_num; i++) {
++		if (link == edp_links[i])
++			break;
++		(*inst_out)++;
++	}
++	return true;
++}
+ 
+ bool dc_link_detect(struct dc_link *link, enum dc_detect_reason reason)
+ {
+@@ -101,3 +144,47 @@ bool dc_link_update_dsc_config(struct pipe_ctx *pipe_ctx)
+ {
+ 	return link_update_dsc_config(pipe_ctx);
+ }
++
++bool dc_is_oem_i2c_device_present(
++	struct dc *dc,
++	size_t slave_address)
++{
++	if (dc->res_pool->oem_device)
++		return dce_i2c_oem_device_present(
++			dc->res_pool,
++			dc->res_pool->oem_device,
++			slave_address);
++
++	return false;
++}
++
++bool dc_submit_i2c(
++		struct dc *dc,
++		uint32_t link_index,
++		struct i2c_command *cmd)
++{
++
++	struct dc_link *link = dc->links[link_index];
++	struct ddc_service *ddc = link->ddc;
++
++	return dce_i2c_submit_command(
++		dc->res_pool,
++		ddc->ddc_pin,
++		cmd);
++}
++
++bool dc_submit_i2c_oem(
++		struct dc *dc,
++		struct i2c_command *cmd)
++{
++	struct ddc_service *ddc = dc->res_pool->oem_device;
++
++	if (ddc)
++		return dce_i2c_submit_command(
++			dc->res_pool,
++			ddc->ddc_pin,
++			cmd);
++
++	return false;
++}
++
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+index d9f2ef242b0fb..0ae6dcc403a4b 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+@@ -1707,6 +1707,9 @@ bool dc_remove_plane_from_context(
+ 	struct dc_stream_status *stream_status = NULL;
+ 	struct resource_pool *pool = dc->res_pool;
+ 
++	if (!plane_state)
++		return true;
++
+ 	for (i = 0; i < context->stream_count; i++)
+ 		if (context->streams[i] == stream) {
+ 			stream_status = &context->stream_status[i];
+diff --git a/drivers/gpu/drm/amd/display/dc/dc.h b/drivers/gpu/drm/amd/display/dc/dc.h
+index 1fde433786894..3fb868f2f6f5b 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc.h
++++ b/drivers/gpu/drm/amd/display/dc/dc.h
+@@ -795,6 +795,7 @@ struct dc_debug_options {
+ 	unsigned int force_odm_combine; //bit vector based on otg inst
+ 	unsigned int seamless_boot_odm_combine;
+ 	unsigned int force_odm_combine_4to1; //bit vector based on otg inst
++	int minimum_z8_residency_time;
+ 	bool disable_z9_mpc;
+ 	unsigned int force_fclk_khz;
+ 	bool enable_tri_buf;
+@@ -1379,8 +1380,159 @@ struct dc_plane_state *dc_get_surface_for_mpcc(struct dc *dc,
+ uint32_t dc_get_opp_for_plane(struct dc *dc, struct dc_plane_state *plane);
+ 
+ /* Link Interfaces */
+-/* TODO: remove this after resolving external dependencies */
+-#include "dc_link.h"
++/*
++ * A link contains one or more sinks and their connected status.
++ * The currently active signal type (HDMI, DP-SST, DP-MST) is also reported.
++ */
++struct dc_link {
++	struct dc_sink *remote_sinks[MAX_SINKS_PER_LINK];
++	unsigned int sink_count;
++	struct dc_sink *local_sink;
++	unsigned int link_index;
++	enum dc_connection_type type;
++	enum signal_type connector_signal;
++	enum dc_irq_source irq_source_hpd;
++	enum dc_irq_source irq_source_hpd_rx;/* aka DP Short Pulse  */
++
++	bool is_hpd_filter_disabled;
++	bool dp_ss_off;
++
++	/**
++	 * @link_state_valid:
++	 *
++	 * If there is no link and local sink, this variable should be set to
++	 * false. Otherwise, it should be set to true; usually, the function
++	 * core_link_enable_stream sets this field to true.
++	 */
++	bool link_state_valid;
++	bool aux_access_disabled;
++	bool sync_lt_in_progress;
++	bool skip_stream_reenable;
++	bool is_internal_display;
++	/** @todo Rename. Flag an endpoint as having a programmable mapping to a DIG encoder. */
++	bool is_dig_mapping_flexible;
++	bool hpd_status; /* HPD status of link without physical HPD pin. */
++	bool is_hpd_pending; /* Indicates a new received hpd */
++	bool is_automated; /* Indicates automated testing */
++
++	bool edp_sink_present;
++
++	struct dp_trace dp_trace;
++
++	/* caps is the same as reported_link_cap. link_traing use
++	 * reported_link_cap. Will clean up.  TODO
++	 */
++	struct dc_link_settings reported_link_cap;
++	struct dc_link_settings verified_link_cap;
++	struct dc_link_settings cur_link_settings;
++	struct dc_lane_settings cur_lane_setting[LANE_COUNT_DP_MAX];
++	struct dc_link_settings preferred_link_setting;
++	/* preferred_training_settings are override values that
++	 * come from DM. DM is responsible for the memory
++	 * management of the override pointers.
++	 */
++	struct dc_link_training_overrides preferred_training_settings;
++	struct dp_audio_test_data audio_test_data;
++
++	uint8_t ddc_hw_inst;
++
++	uint8_t hpd_src;
++
++	uint8_t link_enc_hw_inst;
++	/* DIG link encoder ID. Used as index in link encoder resource pool.
++	 * For links with fixed mapping to DIG, this is not changed after dc_link
++	 * object creation.
++	 */
++	enum engine_id eng_id;
++
++	bool test_pattern_enabled;
++	union compliance_test_state compliance_test_state;
++
++	void *priv;
++
++	struct ddc_service *ddc;
++
++	bool aux_mode;
++
++	/* Private to DC core */
++
++	const struct dc *dc;
++
++	struct dc_context *ctx;
++
++	struct panel_cntl *panel_cntl;
++	struct link_encoder *link_enc;
++	struct graphics_object_id link_id;
++	/* Endpoint type distinguishes display endpoints which do not have entries
++	 * in the BIOS connector table from those that do. Helps when tracking link
++	 * encoder to display endpoint assignments.
++	 */
++	enum display_endpoint_type ep_type;
++	union ddi_channel_mapping ddi_channel_mapping;
++	struct connector_device_tag_info device_tag;
++	struct dpcd_caps dpcd_caps;
++	uint32_t dongle_max_pix_clk;
++	unsigned short chip_caps;
++	unsigned int dpcd_sink_count;
++#if defined(CONFIG_DRM_AMD_DC_HDCP)
++	struct hdcp_caps hdcp_caps;
++#endif
++	enum edp_revision edp_revision;
++	union dpcd_sink_ext_caps dpcd_sink_ext_caps;
++
++	struct psr_settings psr_settings;
++
++	/* Drive settings read from integrated info table */
++	struct dc_lane_settings bios_forced_drive_settings;
++
++	/* Vendor specific LTTPR workaround variables */
++	uint8_t vendor_specific_lttpr_link_rate_wa;
++	bool apply_vendor_specific_lttpr_link_rate_wa;
++
++	/* MST record stream using this link */
++	struct link_flags {
++		bool dp_keep_receiver_powered;
++		bool dp_skip_DID2;
++		bool dp_skip_reset_segment;
++		bool dp_skip_fs_144hz;
++		bool dp_mot_reset_segment;
++		/* Some USB4 docks do not handle turning off MST DSC once it has been enabled. */
++		bool dpia_mst_dsc_always_on;
++		/* Forced DPIA into TBT3 compatibility mode. */
++		bool dpia_forced_tbt3_mode;
++		bool dongle_mode_timing_override;
++	} wa_flags;
++	struct link_mst_stream_allocation_table mst_stream_alloc_table;
++
++	struct dc_link_status link_status;
++	struct dprx_states dprx_states;
++
++	struct gpio *hpd_gpio;
++	enum dc_link_fec_state fec_state;
++	bool link_powered_externally;	// Used to bypass hardware sequencing delays when panel is powered down forcibly
++
++	struct dc_panel_config panel_config;
++	struct phy_state phy_state;
++	// BW ALLOCATON USB4 ONLY
++	struct dc_dpia_bw_alloc dpia_bw_alloc_config;
++};
++
++/* Return an enumerated dc_link.
++ * dc_link order is constant and determined at
++ * boot time.  They cannot be created or destroyed.
++ * Use dc_get_caps() to get number of links.
++ */
++struct dc_link *dc_get_link_at_index(struct dc *dc, uint32_t link_index);
++
++/* Return instance id of the edp link. Inst 0 is primary edp link. */
++bool dc_get_edp_link_panel_inst(const struct dc *dc,
++		const struct dc_link *link,
++		unsigned int *inst_out);
++
++/* Return an array of link pointers to edp links. */
++void dc_get_edp_links(const struct dc *dc,
++		struct dc_link **edp_links,
++		int *edp_num);
+ 
+ /* The function initiates detection handshake over the given link. It first
+  * determines if there are display connections over the link. If so it initiates
+@@ -1404,6 +1556,38 @@ uint32_t dc_get_opp_for_plane(struct dc *dc, struct dc_plane_state *plane);
+  */
+ bool dc_link_detect(struct dc_link *link, enum dc_detect_reason reason);
+ 
++struct dc_sink_init_data;
++
++/* When link connection type is dc_connection_mst_branch, remote sink can be
++ * added to the link. The interface creates a remote sink and associates it with
++ * current link. The sink will be retained by link until remove remote sink is
++ * called.
++ *
++ * @dc_link - link the remote sink will be added to.
++ * @edid - byte array of EDID raw data.
++ * @len - size of the edid in byte
++ * @init_data -
++ */
++struct dc_sink *dc_link_add_remote_sink(
++		struct dc_link *dc_link,
++		const uint8_t *edid,
++		int len,
++		struct dc_sink_init_data *init_data);
++
++/* Remove remote sink from a link with dc_connection_mst_branch connection type.
++ * @link - link the sink should be removed from
++ * @sink - sink to be removed.
++ */
++void dc_link_remove_remote_sink(
++	struct dc_link *link,
++	struct dc_sink *sink);
++
++/* Enable HPD interrupt handler for a given link */
++void dc_link_enable_hpd(const struct dc_link *link);
++
++/* Disable HPD interrupt handler for a given link */
++void dc_link_disable_hpd(const struct dc_link *link);
++
+ /* determine if there is a sink connected to the link
+  *
+  * @type - dc_connection_single if connected, dc_connection_none otherwise.
+@@ -1417,15 +1601,119 @@ bool dc_link_detect(struct dc_link *link, enum dc_detect_reason reason);
+ bool dc_link_detect_connection_type(struct dc_link *link,
+ 		enum dc_connection_type *type);
+ 
++/* query current hpd pin value
++ * return - true HPD is asserted (HPD high), false otherwise (HPD low)
++ *
++ */
++bool dc_link_get_hpd_state(struct dc_link *dc_link);
++
+ /* Getter for cached link status from given link */
+ const struct dc_link_status *dc_link_get_status(const struct dc_link *link);
+ 
++/* enable/disable hardware HPD filter.
++ *
++ * @link - The link the HPD pin is associated with.
++ * @enable = true - enable hardware HPD filter. HPD event will only queued to irq
++ * handler once after no HPD change has been detected within dc default HPD
++ * filtering interval since last HPD event. i.e if display keeps toggling hpd
++ * pulses within default HPD interval, no HPD event will be received until HPD
++ * toggles have stopped. Then HPD event will be queued to irq handler once after
++ * dc default HPD filtering interval since last HPD event.
++ *
++ * @enable = false - disable hardware HPD filter. HPD event will be queued
++ * immediately to irq handler after no HPD change has been detected within
++ * IRQ_HPD (aka HPD short pulse) interval (i.e 2ms).
++ */
++void dc_link_enable_hpd_filter(struct dc_link *link, bool enable);
++
++/* submit i2c read/write payloads through ddc channel
++ * @link_index - index to a link with ddc in i2c mode
++ * @cmd - i2c command structure
++ * return - true if success, false otherwise.
++ */
++bool dc_submit_i2c(
++		struct dc *dc,
++		uint32_t link_index,
++		struct i2c_command *cmd);
++
++/* submit i2c read/write payloads through oem channel
++ * @link_index - index to a link with ddc in i2c mode
++ * @cmd - i2c command structure
++ * return - true if success, false otherwise.
++ */
++bool dc_submit_i2c_oem(
++		struct dc *dc,
++		struct i2c_command *cmd);
++
++enum aux_return_code_type;
++/* Attempt to transfer the given aux payload. This function does not perform
++ * retries or handle error states. The reply is returned in the payload->reply
++ * and the result through operation_result. Returns the number of bytes
++ * transferred,or -1 on a failure.
++ */
++int dc_link_aux_transfer_raw(struct ddc_service *ddc,
++		struct aux_payload *payload,
++		enum aux_return_code_type *operation_result);
++
++bool dc_is_oem_i2c_device_present(
++	struct dc *dc,
++	size_t slave_address
++);
++
+ #ifdef CONFIG_DRM_AMD_DC_HDCP
++
+ /* return true if the connected receiver supports the hdcp version */
+ bool dc_link_is_hdcp14(struct dc_link *link, enum signal_type signal);
+ bool dc_link_is_hdcp22(struct dc_link *link, enum signal_type signal);
+ #endif
+ 
++/* Notify DC about DP RX Interrupt (aka DP IRQ_HPD).
++ *
++ * TODO - When defer_handling is true the function will have a different purpose.
++ * It no longer does complete hpd rx irq handling. We should create a separate
++ * interface specifically for this case.
++ *
++ * Return:
++ * true - Downstream port status changed. DM should call DC to do the
++ * detection.
++ * false - no change in Downstream port status. No further action required
++ * from DM.
++ */
++bool dc_link_handle_hpd_rx_irq(struct dc_link *dc_link,
++		union hpd_irq_data *hpd_irq_dpcd_data, bool *out_link_loss,
++		bool defer_handling, bool *has_left_work);
++/* handle DP specs define test automation sequence*/
++void dc_link_dp_handle_automated_test(struct dc_link *link);
++
++/* handle DP Link loss sequence and try to recover RX link loss with best
++ * effort
++ */
++void dc_link_dp_handle_link_loss(struct dc_link *link);
++
++/* Determine if hpd rx irq should be handled or ignored
++ * return true - hpd rx irq should be handled.
++ * return false - it is safe to ignore hpd rx irq event
++ */
++bool dc_link_dp_allow_hpd_rx_irq(const struct dc_link *link);
++
++/* Determine if link loss is indicated with a given hpd_irq_dpcd_data.
++ * @link - link the hpd irq data associated with
++ * @hpd_irq_dpcd_data - input hpd irq data
++ * return - true if hpd irq data indicates a link lost
++ */
++bool dc_link_check_link_loss_status(struct dc_link *link,
++		union hpd_irq_data *hpd_irq_dpcd_data);
++
++/* Read hpd rx irq data from a given link
++ * @link - link where the hpd irq data should be read from
++ * @irq_data - output hpd irq data
++ * return - DC_OK if hpd irq data is read successfully, otherwise hpd irq data
++ * read has failed.
++ */
++enum dc_status dc_link_dp_read_hpd_rx_irq_data(
++	struct dc_link *link,
++	union hpd_irq_data *irq_data);
++
+ /* The function clears recorded DP RX states in the link. DM should call this
+  * function when it is resuming from S3 power state to previously connected links.
+  *
+@@ -1493,6 +1781,268 @@ void dc_restore_link_res_map(const struct dc *dc, uint32_t *map);
+  * interface i.e stream_update->dsc_config
+  */
+ bool dc_link_update_dsc_config(struct pipe_ctx *pipe_ctx);
++
++/* translate a raw link rate data to bandwidth in kbps */
++uint32_t dc_link_bw_kbps_from_raw_frl_link_rate_data(uint8_t bw);
++
++/* determine the optimal bandwidth given link and required bw.
++ * @link - current detected link
++ * @req_bw - requested bandwidth in kbps
++ * @link_settings - returned most optimal link settings that can fit the
++ * requested bandwidth
++ * return - false if link can't support requested bandwidth, true if link
++ * settings is found.
++ */
++bool dc_link_decide_edp_link_settings(struct dc_link *link,
++		struct dc_link_settings *link_settings,
++		uint32_t req_bw);
++
++/* return the max dp link settings can be driven by the link without considering
++ * connected RX device and its capability
++ */
++bool dc_link_dp_get_max_link_enc_cap(const struct dc_link *link,
++		struct dc_link_settings *max_link_enc_cap);
++
++/* determine when the link is driving MST mode, what DP link channel coding
++ * format will be used. The decision will remain unchanged until next HPD event.
++ *
++ * @link -  a link with DP RX connection
++ * return - if stream is committed to this link with MST signal type, type of
++ * channel coding format dc will choose.
++ */
++enum dp_link_encoding dc_link_dp_mst_decide_link_encoding_format(
++		const struct dc_link *link);
++
++/* get max dp link settings the link can enable with all things considered. (i.e
++ * TX/RX/Cable capabilities and dp override policies.
++ *
++ * @link - a link with DP RX connection
++ * return - max dp link settings the link can enable.
++ *
++ */
++const struct dc_link_settings *dc_link_get_link_cap(const struct dc_link *link);
++
++/* Check if a RX (ex. DP sink, MST hub, passive or active dongle) is connected
++ * to a link with dp connector signal type.
++ * @link - a link with dp connector signal type
++ * return - true if connected, false otherwise
++ */
++bool dc_link_is_dp_sink_present(struct dc_link *link);
++
++/* Force DP lane settings update to main-link video signal and notify the change
++ * to DP RX via DPCD. This is a debug interface used for video signal integrity
++ * tuning purpose. The interface assumes link has already been enabled with DP
++ * signal.
++ *
++ * @lt_settings - a container structure with desired hw_lane_settings
++ */
++void dc_link_set_drive_settings(struct dc *dc,
++				struct link_training_settings *lt_settings,
++				const struct dc_link *link);
++
++/* Enable a test pattern in Link or PHY layer in an active link for compliance
++ * test or debugging purpose. The test pattern will remain until next un-plug.
++ *
++ * @link - active link with DP signal output enabled.
++ * @test_pattern - desired test pattern to output.
++ * NOTE: set to DP_TEST_PATTERN_VIDEO_MODE to disable previous test pattern.
++ * @test_pattern_color_space - for video test pattern choose a desired color
++ * space.
++ * @p_link_settings - For PHY pattern choose a desired link settings
++ * @p_custom_pattern - some test pattern will require a custom input to
++ * customize some pattern details. Otherwise keep it to NULL.
++ * @cust_pattern_size - size of the custom pattern input.
++ *
++ */
++bool dc_link_dp_set_test_pattern(
++	struct dc_link *link,
++	enum dp_test_pattern test_pattern,
++	enum dp_test_pattern_color_space test_pattern_color_space,
++	const struct link_training_settings *p_link_settings,
++	const unsigned char *p_custom_pattern,
++	unsigned int cust_pattern_size);
++
++/* Force DP link settings to always use a specific value until reboot to a
++ * specific link. If link has already been enabled, the interface will also
++ * switch to desired link settings immediately. This is a debug interface to
++ * generic dp issue trouble shooting.
++ */
++void dc_link_set_preferred_link_settings(struct dc *dc,
++		struct dc_link_settings *link_setting,
++		struct dc_link *link);
++
++/* Force DP link to customize a specific link training behavior by overriding to
++ * standard DP specs defined protocol. This is a debug interface to trouble shoot
++ * display specific link training issues or apply some display specific
++ * workaround in link training.
++ *
++ * @link_settings - if not NULL, force preferred link settings to the link.
++ * @lt_override - a set of override pointers. If any pointer is none NULL, dc
++ * will apply this particular override in future link training. If NULL is
++ * passed in, dc resets previous overrides.
++ * NOTE: DM must keep the memory from override pointers until DM resets preferred
++ * training settings.
++ */
++void dc_link_set_preferred_training_settings(struct dc *dc,
++		struct dc_link_settings *link_setting,
++		struct dc_link_training_overrides *lt_overrides,
++		struct dc_link *link,
++		bool skip_immediate_retrain);
++
++/* return - true if FEC is supported with connected DP RX, false otherwise */
++bool dc_link_is_fec_supported(const struct dc_link *link);
++
++/* query FEC enablement policy to determine if FEC will be enabled by dc during
++ * link enablement.
++ * return - true if FEC should be enabled, false otherwise.
++ */
++bool dc_link_should_enable_fec(const struct dc_link *link);
++
++/* determine lttpr mode the current link should be enabled with a specific link
++ * settings.
++ */
++enum lttpr_mode dc_link_decide_lttpr_mode(struct dc_link *link,
++		struct dc_link_settings *link_setting);
++
++/* Force DP RX to update its power state.
++ * NOTE: this interface doesn't update dp main-link. Calling this function will
++ * cause DP TX main-link and DP RX power states out of sync. DM has to restore
++ * RX power state back upon finish DM specific execution requiring DP RX in a
++ * specific power state.
++ * @on - true to set DP RX in D0 power state, false to set DP RX in D3 power
++ * state.
++ */
++void dc_link_dp_receiver_power_ctrl(struct dc_link *link, bool on);
++
++/* Force link to read base dp receiver caps from dpcd 000h - 00Fh and overwrite
++ * current value read from extended receiver cap from 02200h - 0220Fh.
++ * Some DP RX has problems of providing accurate DP receiver caps from extended
++ * field, this interface is a workaround to revert link back to use base caps.
++ */
++void dc_link_overwrite_extended_receiver_cap(
++		struct dc_link *link);
++
++void dc_link_edp_panel_backlight_power_on(struct dc_link *link,
++		bool wait_for_hpd);
++
++/* Set backlight level of an embedded panel (eDP, LVDS).
++ * backlight_pwm_u16_16 is unsigned 32 bit with 16 bit integer
++ * and 16 bit fractional, where 1.0 is max backlight value.
++ */
++bool dc_link_set_backlight_level(const struct dc_link *dc_link,
++		uint32_t backlight_pwm_u16_16,
++		uint32_t frame_ramp);
++
++/* Set/get nits-based backlight level of an embedded panel (eDP, LVDS). */
++bool dc_link_set_backlight_level_nits(struct dc_link *link,
++		bool isHDR,
++		uint32_t backlight_millinits,
++		uint32_t transition_time_in_ms);
++
++bool dc_link_get_backlight_level_nits(struct dc_link *link,
++		uint32_t *backlight_millinits,
++		uint32_t *backlight_millinits_peak);
++
++int dc_link_get_backlight_level(const struct dc_link *dc_link);
++
++int dc_link_get_target_backlight_pwm(const struct dc_link *link);
++
++bool dc_link_set_psr_allow_active(struct dc_link *dc_link, const bool *enable,
++		bool wait, bool force_static, const unsigned int *power_opts);
++
++bool dc_link_get_psr_state(const struct dc_link *dc_link, enum dc_psr_state *state);
++
++bool dc_link_setup_psr(struct dc_link *dc_link,
++		const struct dc_stream_state *stream, struct psr_config *psr_config,
++		struct psr_context *psr_context);
++
++/* On eDP links this function call will stall until T12 has elapsed.
++ * If the panel is not in power off state, this function will return
++ * immediately.
++ */
++bool dc_link_wait_for_t12(struct dc_link *link);
++
++/* Determine if dp trace has been initialized to reflect upto date result *
++ * return - true if trace is initialized and has valid data. False dp trace
++ * doesn't have valid result.
++ */
++bool dc_dp_trace_is_initialized(struct dc_link *link);
++
++/* Query a dp trace flag to indicate if the current dp trace data has been
++ * logged before
++ */
++bool dc_dp_trace_is_logged(struct dc_link *link,
++		bool in_detection);
++
++/* Set dp trace flag to indicate whether DM has already logged the current dp
++ * trace data. DM can set is_logged to true upon logging and check
++ * dc_dp_trace_is_logged before logging to avoid logging the same result twice.
++ */
++void dc_dp_trace_set_is_logged_flag(struct dc_link *link,
++		bool in_detection,
++		bool is_logged);
++
++/* Obtain driver time stamp for last dp link training end. The time stamp is
++ * formatted based on dm_get_timestamp DM function.
++ * @in_detection - true to get link training end time stamp of last link
++ * training in detection sequence. false to get link training end time stamp
++ * of last link training in commit (dpms) sequence
++ */
++unsigned long long dc_dp_trace_get_lt_end_timestamp(struct dc_link *link,
++		bool in_detection);
++
++/* Get how many link training attempts dc has done with latest sequence.
++ * @in_detection - true to get link training count of last link
++ * training in detection sequence. false to get link training count of last link
++ * training in commit (dpms) sequence
++ */
++struct dp_trace_lt_counts *dc_dp_trace_get_lt_counts(struct dc_link *link,
++		bool in_detection);
++
++/* Get how many link loss has happened since last link training attempts */
++unsigned int dc_dp_trace_get_link_loss_count(struct dc_link *link);
++
++/*
++ *  USB4 DPIA BW ALLOCATION PUBLIC FUNCTIONS
++ */
++/*
++ * Send a request from DP-Tx requesting to allocate BW remotely after
++ * allocating it locally. This will get processed by CM and a CB function
++ * will be called.
++ *
++ * @link: pointer to the dc_link struct instance
++ * @req_bw: The requested bw in Kbyte to allocated
++ *
++ * return: none
++ */
++void dc_link_set_usb4_req_bw_req(struct dc_link *link, int req_bw);
++
++/*
++ * Handle function for when the status of the Request above is complete.
++ * We will find out the result of allocating on CM and update structs.
++ *
++ * @link: pointer to the dc_link struct instance
++ * @bw: Allocated or Estimated BW depending on the result
++ * @result: Response type
++ *
++ * return: none
++ */
++void dc_link_handle_usb4_bw_alloc_response(struct dc_link *link,
++		uint8_t bw, uint8_t result);
++
++/*
++ * Handle the USB4 BW Allocation related functionality here:
++ * Plug => Try to allocate max bw from timing parameters supported by the sink
++ * Unplug => de-allocate bw
++ *
++ * @link: pointer to the dc_link struct instance
++ * @peak_bw: Peak bw used by the link/sink
++ *
++ * return: allocated bw else return 0
++ */
++int dc_link_dp_dpia_handle_usb4_bandwidth_allocation_for_link(
++		struct dc_link *link, int peak_bw);
++
+ /* Sink Interfaces - A sink corresponds to a display output device */
+ 
+ struct dc_container_id {
+diff --git a/drivers/gpu/drm/amd/display/dc/dc_dp_types.h b/drivers/gpu/drm/amd/display/dc/dc_dp_types.h
+index 809a1851f1965..4bccce94d83bb 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc_dp_types.h
++++ b/drivers/gpu/drm/amd/display/dc/dc_dp_types.h
+@@ -1261,4 +1261,111 @@ union dpcd_sink_ext_caps {
+ 	} bits;
+ 	uint8_t raw;
+ };
++
++enum dc_link_fec_state {
++	dc_link_fec_not_ready,
++	dc_link_fec_ready,
++	dc_link_fec_enabled
++};
++
++union dpcd_psr_configuration {
++	struct {
++		unsigned char ENABLE                    : 1;
++		unsigned char TRANSMITTER_ACTIVE_IN_PSR : 1;
++		unsigned char CRC_VERIFICATION          : 1;
++		unsigned char FRAME_CAPTURE_INDICATION  : 1;
++		/* For eDP 1.4, PSR v2*/
++		unsigned char LINE_CAPTURE_INDICATION   : 1;
++		/* For eDP 1.4, PSR v2*/
++		unsigned char IRQ_HPD_WITH_CRC_ERROR    : 1;
++		unsigned char ENABLE_PSR2               : 1;
++		unsigned char EARLY_TRANSPORT_ENABLE    : 1;
++	} bits;
++	unsigned char raw;
++};
++
++union dpcd_alpm_configuration {
++	struct {
++		unsigned char ENABLE                    : 1;
++		unsigned char IRQ_HPD_ENABLE            : 1;
++		unsigned char RESERVED                  : 6;
++	} bits;
++	unsigned char raw;
++};
++
++union dpcd_sink_active_vtotal_control_mode {
++	struct {
++		unsigned char ENABLE                    : 1;
++		unsigned char RESERVED                  : 7;
++	} bits;
++	unsigned char raw;
++};
++
++union psr_error_status {
++	struct {
++		unsigned char LINK_CRC_ERROR        :1;
++		unsigned char RFB_STORAGE_ERROR     :1;
++		unsigned char VSC_SDP_ERROR         :1;
++		unsigned char RESERVED              :5;
++	} bits;
++	unsigned char raw;
++};
++
++union psr_sink_psr_status {
++	struct {
++	unsigned char SINK_SELF_REFRESH_STATUS  :3;
++	unsigned char RESERVED                  :5;
++	} bits;
++	unsigned char raw;
++};
++
++struct edp_trace_power_timestamps {
++	uint64_t poweroff;
++	uint64_t poweron;
++};
++
++struct dp_trace_lt_counts {
++	unsigned int total;
++	unsigned int fail;
++};
++
++enum link_training_result {
++	LINK_TRAINING_SUCCESS,
++	LINK_TRAINING_CR_FAIL_LANE0,
++	LINK_TRAINING_CR_FAIL_LANE1,
++	LINK_TRAINING_CR_FAIL_LANE23,
++	/* CR DONE bit is cleared during EQ step */
++	LINK_TRAINING_EQ_FAIL_CR,
++	/* CR DONE bit is cleared but LANE0_CR_DONE is set during EQ step */
++	LINK_TRAINING_EQ_FAIL_CR_PARTIAL,
++	/* other failure during EQ step */
++	LINK_TRAINING_EQ_FAIL_EQ,
++	LINK_TRAINING_LQA_FAIL,
++	/* one of the CR,EQ or symbol lock is dropped */
++	LINK_TRAINING_LINK_LOSS,
++	/* Abort link training (because sink unplugged) */
++	LINK_TRAINING_ABORT,
++	DP_128b_132b_LT_FAILED,
++	DP_128b_132b_MAX_LOOP_COUNT_REACHED,
++	DP_128b_132b_CHANNEL_EQ_DONE_TIMEOUT,
++	DP_128b_132b_CDS_DONE_TIMEOUT,
++};
++
++struct dp_trace_lt {
++	struct dp_trace_lt_counts counts;
++	struct dp_trace_timestamps {
++		unsigned long long start;
++		unsigned long long end;
++	} timestamps;
++	enum link_training_result result;
++	bool is_logged;
++};
++
++struct dp_trace {
++	struct dp_trace_lt detect_lt_trace;
++	struct dp_trace_lt commit_lt_trace;
++	unsigned int link_loss_count;
++	bool is_initialized;
++	struct edp_trace_power_timestamps edp_trace_power_timestamps;
++};
+ #endif /* DC_DP_TYPES_H */
+diff --git a/drivers/gpu/drm/amd/display/dc/dc_hw_types.h b/drivers/gpu/drm/amd/display/dc/dc_hw_types.h
+index cc3d6fb393640..a583a72845fe8 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc_hw_types.h
++++ b/drivers/gpu/drm/amd/display/dc/dc_hw_types.h
+@@ -1085,5 +1085,19 @@ struct tg_color {
+ 	uint16_t color_b_cb;
+ };
+ 
++enum symclk_state {
++	SYMCLK_OFF_TX_OFF,
++	SYMCLK_ON_TX_ON,
++	SYMCLK_ON_TX_OFF,
++};
++
++struct phy_state {
++	struct {
++		uint8_t otg		: 1;
++		uint8_t reserved	: 7;
++	} symclk_ref_cnts;
++	enum symclk_state symclk_state;
++};
++
+ #endif /* DC_HW_TYPES_H */
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dc_types.h b/drivers/gpu/drm/amd/display/dc/dc_types.h
+index 27d0242d6cbd4..cba65766ef47b 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc_types.h
++++ b/drivers/gpu/drm/amd/display/dc/dc_types.h
+@@ -38,6 +38,7 @@
+ #include "dc_hw_types.h"
+ #include "dal_types.h"
+ #include "grph_object_defs.h"
++#include "grph_object_ctrl_defs.h"
+ 
+ #ifdef CONFIG_DRM_AMD_DC_HDCP
+ #include "dm_cp_psp.h"
+@@ -982,4 +983,114 @@ struct hdcp_caps {
+ 	union hdcp_bcaps bcaps;
+ };
+ #endif
++
++/* DP MST stream allocation (payload bandwidth number) */
++struct link_mst_stream_allocation {
++	/* DIG front */
++	const struct stream_encoder *stream_enc;
++	/* HPO DP Stream Encoder */
++	const struct hpo_dp_stream_encoder *hpo_dp_stream_enc;
++	/* associate DRM payload table with DC stream encoder */
++	uint8_t vcp_id;
++	/* number of slots required for the DP stream in transport packet */
++	uint8_t slot_count;
++};
++
++#define MAX_CONTROLLER_NUM 6
++
++/* DP MST stream allocation table */
++struct link_mst_stream_allocation_table {
++	/* number of DP video streams */
++	int stream_count;
++	/* array of stream allocations */
++	struct link_mst_stream_allocation stream_allocations[MAX_CONTROLLER_NUM];
++};
++
++/* PSR feature flags */
++struct psr_settings {
++	bool psr_feature_enabled;		// PSR is supported by sink
++	bool psr_allow_active;			// PSR is currently active
++	enum dc_psr_version psr_version;		// Internal PSR version, determined based on DPCD
++	bool psr_vtotal_control_support;	// Vtotal control is supported by sink
++	unsigned long long psr_dirty_rects_change_timestamp_ns;	// for delay of enabling PSR-SU
++
++	/* These parameters are calculated in Driver,
++	 * based on display timing and Sink capabilities.
++	 * If VBLANK region is too small and Sink takes a long time
++	 * to set up RFB, it may take an extra frame to enter PSR state.
++	 */
++	bool psr_frame_capture_indication_req;
++	unsigned int psr_sdp_transmit_line_num_deadline;
++	uint8_t force_ffu_mode;
++	unsigned int psr_power_opt;
++};
++
++/* To split out "global" and "per-panel" config settings.
++ * Add a struct dc_panel_config under dc_link
++ */
++struct dc_panel_config {
++	/* extra panel power sequence parameters */
++	struct pps {
++		unsigned int extra_t3_ms;
++		unsigned int extra_t7_ms;
++		unsigned int extra_delay_backlight_off;
++		unsigned int extra_post_t7_ms;
++		unsigned int extra_pre_t11_ms;
++		unsigned int extra_t12_ms;
++		unsigned int extra_post_OUI_ms;
++	} pps;
++	/* nit brightness */
++	struct nits_brightness {
++		unsigned int peak; /* nits */
++		unsigned int max_avg; /* nits */
++		unsigned int min; /* 1/10000 nits */
++		unsigned int max_nonboost_brightness_millinits;
++		unsigned int min_brightness_millinits;
++	} nits_brightness;
++	/* PSR */
++	struct psr {
++		bool disable_psr;
++		bool disallow_psrsu;
++		bool rc_disable;
++		bool rc_allow_static_screen;
++		bool rc_allow_fullscreen_VPB;
++	} psr;
++	/* ABM */
++	struct varib {
++		unsigned int varibright_feature_enable;
++		unsigned int def_varibright_level;
++		unsigned int abm_config_setting;
++	} varib;
++	/* edp DSC */
++	struct dsc {
++		bool disable_dsc_edp;
++		unsigned int force_dsc_edp_policy;
++	} dsc;
++	/* eDP ILR */
++	struct ilr {
++		bool optimize_edp_link_rate; /* eDP ILR */
++	} ilr;
++};
++
++/*
++ *  USB4 DPIA BW ALLOCATION STRUCTS
++ */
++struct dc_dpia_bw_alloc {
++	int sink_verified_bw;  // The Verified BW that sink can allocated and use that has been verified already
++	int sink_allocated_bw; // The Actual Allocated BW that sink currently allocated
++	int sink_max_bw;       // The Max BW that sink can require/support
++	int estimated_bw;      // The estimated available BW for this DPIA
++	int bw_granularity;    // BW Granularity
++	bool bw_alloc_enabled; // The BW Alloc Mode Support is turned ON for all 3:  DP-Tx & Dpia & CM
++	bool response_ready;   // Response ready from the CM side
++};
++
++#define MAX_SINKS_PER_LINK 4
++
++enum dc_hpd_enable_select {
++	HPD_EN_FOR_ALL_EDP = 0,
++	HPD_EN_FOR_PRIMARY_EDP_ONLY,
++	HPD_EN_FOR_SECONDARY_EDP_ONLY,
++};
++
+ #endif /* DC_TYPES_H_ */
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dmub_abm.c b/drivers/gpu/drm/amd/display/dc/dce/dmub_abm.c
+index fb0dec4ed3a6c..9fc48208c2e42 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dmub_abm.c
++++ b/drivers/gpu/drm/amd/display/dc/dce/dmub_abm.c
+@@ -148,7 +148,7 @@ static bool dmub_abm_set_level(struct abm *abm, uint32_t level)
+ 	int edp_num;
+ 	uint8_t panel_mask = 0;
+ 
+-	get_edp_links(dc->dc, edp_links, &edp_num);
++	dc_get_edp_links(dc->dc, edp_links, &edp_num);
+ 
+ 	for (i = 0; i < edp_num; i++) {
+ 		if (edp_links[i]->link_status.link_active)
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dmub_psr.h b/drivers/gpu/drm/amd/display/dc/dce/dmub_psr.h
+index 74005b9d352a2..289e42070ece9 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dmub_psr.h
++++ b/drivers/gpu/drm/amd/display/dc/dce/dmub_psr.h
+@@ -26,8 +26,9 @@
+ #ifndef _DMUB_PSR_H_
+ #define _DMUB_PSR_H_
+ 
+-#include "os_types.h"
+-#include "dc_link.h"
++#include "dc_types.h"
++struct dc_link;
++struct dmub_psr_funcs;
+ 
+ struct dmub_psr {
+ 	struct dc_context *ctx;
+diff --git a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
+index 0d4d3d586166d..cb3bb5402c52b 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
++++ b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
+@@ -1739,7 +1739,7 @@ void dce110_enable_accelerated_mode(struct dc *dc, struct dc_state *context)
+ 
+ 
+ 	get_edp_links_with_sink(dc, edp_links_with_sink, &edp_with_sink_num);
+-	get_edp_links(dc, edp_links, &edp_num);
++	dc_get_edp_links(dc, edp_links, &edp_num);
+ 
+ 	if (hws->funcs.init_pipes)
+ 		hws->funcs.init_pipes(dc, context);
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+index a1a29c508394e..e255519cb5ccc 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+@@ -726,11 +726,15 @@ void dcn10_hubp_pg_control(
+ 	}
+ }
+ 
+-static void power_on_plane(
++static void power_on_plane_resources(
+ 	struct dce_hwseq *hws,
+ 	int plane_id)
+ {
+ 	DC_LOGGER_INIT(hws->ctx->logger);
++
++	if (hws->funcs.dpp_root_clock_control)
++		hws->funcs.dpp_root_clock_control(hws, plane_id, true);
++
+ 	if (REG(DC_IP_REQUEST_CNTL)) {
+ 		REG_SET(DC_IP_REQUEST_CNTL, 0,
+ 				IP_REQUEST_EN, 1);
+@@ -1237,11 +1241,15 @@ void dcn10_plane_atomic_power_down(struct dc *dc,
+ 			hws->funcs.hubp_pg_control(hws, hubp->inst, false);
+ 
+ 		dpp->funcs->dpp_reset(dpp);
++
+ 		REG_SET(DC_IP_REQUEST_CNTL, 0,
+ 				IP_REQUEST_EN, 0);
+ 		DC_LOG_DEBUG(
+ 				"Power gated front end %d\n", hubp->inst);
+ 	}
++
++	if (hws->funcs.dpp_root_clock_control)
++		hws->funcs.dpp_root_clock_control(hws, dpp->inst, false);
+ }
+ 
+ /* disable HW used by plane.
+@@ -1638,7 +1646,7 @@ void dcn10_power_down_on_boot(struct dc *dc)
+ 	int edp_num;
+ 	int i = 0;
+ 
+-	get_edp_links(dc, edp_links, &edp_num);
++	dc_get_edp_links(dc, edp_links, &edp_num);
+ 	if (edp_num)
+ 		edp_link = edp_links[0];
+ 
+@@ -2462,7 +2470,7 @@ static void dcn10_enable_plane(
+ 
+ 	undo_DEGVIDCN10_253_wa(dc);
+ 
+-	power_on_plane(dc->hwseq,
++	power_on_plane_resources(dc->hwseq,
+ 		pipe_ctx->plane_res.hubp->inst);
+ 
+ 	/* enable DCFCLK current DCHUB */
+@@ -3385,7 +3393,9 @@ static bool dcn10_can_pipe_disable_cursor(struct pipe_ctx *pipe_ctx)
+ 	for (test_pipe = pipe_ctx->top_pipe; test_pipe;
+ 	     test_pipe = test_pipe->top_pipe) {
+ 		// Skip invisible layer and pipe-split plane on same layer
+-		if (!test_pipe->plane_state->visible || test_pipe->plane_state->layer_index == cur_layer)
++		if (!test_pipe->plane_state ||
++		    !test_pipe->plane_state->visible ||
++		    test_pipe->plane_state->layer_index == cur_layer)
+ 			continue;
+ 
+ 		r2 = test_pipe->plane_res.scl_data.recout;
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
+index b83873a3a534a..0789129727e49 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
+@@ -1121,11 +1121,15 @@ void dcn20_blank_pixel_data(
+ }
+ 
+ 
+-static void dcn20_power_on_plane(
++static void dcn20_power_on_plane_resources(
+ 	struct dce_hwseq *hws,
+ 	struct pipe_ctx *pipe_ctx)
+ {
+ 	DC_LOGGER_INIT(hws->ctx->logger);
++
++	if (hws->funcs.dpp_root_clock_control)
++		hws->funcs.dpp_root_clock_control(hws, pipe_ctx->plane_res.dpp->inst, true);
++
+ 	if (REG(DC_IP_REQUEST_CNTL)) {
+ 		REG_SET(DC_IP_REQUEST_CNTL, 0,
+ 				IP_REQUEST_EN, 1);
+@@ -1149,7 +1153,7 @@ static void dcn20_enable_plane(struct dc *dc, struct pipe_ctx *pipe_ctx,
+ 	//if (dc->debug.sanity_checks) {
+ 	//	dcn10_verify_allow_pstate_change_high(dc);
+ 	//}
+-	dcn20_power_on_plane(dc->hwseq, pipe_ctx);
++	dcn20_power_on_plane_resources(dc->hwseq, pipe_ctx);
+ 
+ 	/* enable DCFCLK current DCHUB */
+ 	pipe_ctx->plane_res.hubp->funcs->hubp_clk_cntl(pipe_ctx->plane_res.hubp, true);
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dio_stream_encoder.c b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dio_stream_encoder.c
+index 5f9079d3943a6..9d08127d209b8 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dio_stream_encoder.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dio_stream_encoder.c
+@@ -28,6 +28,7 @@
+ #include "dcn30_dio_stream_encoder.h"
+ #include "reg_helper.h"
+ #include "hw_shared.h"
++#include "dc.h"
+ #include "core_types.h"
+ #include <linux/delay.h>
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hwseq.c
+index df787fcf8e86e..b4df540c0c61e 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hwseq.c
+@@ -567,7 +567,7 @@ void dcn30_init_hw(struct dc *dc)
+ 		struct dc_link *edp_links[MAX_NUM_EDP];
+ 		struct dc_link *edp_link = NULL;
+ 
+-		get_edp_links(dc, edp_links, &edp_num);
++		dc_get_edp_links(dc, edp_links, &edp_num);
+ 		if (edp_num)
+ 			edp_link = edp_links[0];
+ 		if (edp_link && edp_link->link_enc->funcs->is_dig_enabled &&
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_dccg.c b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_dccg.c
+index 7f34418e63081..7d2b982506fd7 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_dccg.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_dccg.c
+@@ -66,17 +66,8 @@ void dccg31_update_dpp_dto(struct dccg *dccg, int dpp_inst, int req_dppclk)
+ 		REG_UPDATE(DPPCLK_DTO_CTRL,
+ 				DPPCLK_DTO_ENABLE[dpp_inst], 1);
+ 	} else {
+-		//DTO must be enabled to generate a 0Hz clock output
+-		if (dccg->ctx->dc->debug.root_clock_optimization.bits.dpp) {
+-			REG_UPDATE(DPPCLK_DTO_CTRL,
+-					DPPCLK_DTO_ENABLE[dpp_inst], 1);
+-			REG_SET_2(DPPCLK_DTO_PARAM[dpp_inst], 0,
+-					DPPCLK0_DTO_PHASE, 0,
+-					DPPCLK0_DTO_MODULO, 1);
+-		} else {
+-			REG_UPDATE(DPPCLK_DTO_CTRL,
+-					DPPCLK_DTO_ENABLE[dpp_inst], 0);
+-		}
++		REG_UPDATE(DPPCLK_DTO_CTRL,
++				DPPCLK_DTO_ENABLE[dpp_inst], 0);
+ 	}
+ 	dccg->pipe_dppclk_khz[dpp_inst] = req_dppclk;
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_hpo_dp_link_encoder.c b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_hpo_dp_link_encoder.c
+index 0b317ed31f918..5b7ad38f85e08 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_hpo_dp_link_encoder.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_hpo_dp_link_encoder.c
+@@ -26,7 +26,6 @@
+ #include "dc_bios_types.h"
+ #include "dcn31_hpo_dp_link_encoder.h"
+ #include "reg_helper.h"
+-#include "dc_link.h"
+ #include "stream_encoder.h"
+ 
+ #define DC_LOGGER \
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_hpo_dp_stream_encoder.c b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_hpo_dp_stream_encoder.c
+index d76f55a12eb41..0278bae50a9d6 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_hpo_dp_stream_encoder.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_hpo_dp_stream_encoder.c
+@@ -26,7 +26,7 @@
+ #include "dc_bios_types.h"
+ #include "dcn31_hpo_dp_stream_encoder.h"
+ #include "reg_helper.h"
+-#include "dc_link.h"
++#include "dc.h"
+ 
+ #define DC_LOGGER \
+ 		enc3->base.ctx->logger
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_dccg.c b/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_dccg.c
+index 0b769ee714058..081ce168f6211 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_dccg.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_dccg.c
+@@ -289,8 +289,31 @@ static void dccg314_set_valid_pixel_rate(
+ 	dccg314_set_dtbclk_dto(dccg, &dto_params);
+ }
+ 
++static void dccg314_dpp_root_clock_control(
++		struct dccg *dccg,
++		unsigned int dpp_inst,
++		bool clock_on)
++{
++	struct dcn_dccg *dccg_dcn = TO_DCN_DCCG(dccg);
++
++	if (clock_on) {
++		/* turn off the DTO and leave phase/modulo at max */
++		REG_UPDATE(DPPCLK_DTO_CTRL, DPPCLK_DTO_ENABLE[dpp_inst], 0);
++		REG_SET_2(DPPCLK_DTO_PARAM[dpp_inst], 0,
++			  DPPCLK0_DTO_PHASE, 0xFF,
++			  DPPCLK0_DTO_MODULO, 0xFF);
++	} else {
++		/* turn on the DTO to generate a 0hz clock */
++		REG_UPDATE(DPPCLK_DTO_CTRL, DPPCLK_DTO_ENABLE[dpp_inst], 1);
++		REG_SET_2(DPPCLK_DTO_PARAM[dpp_inst], 0,
++			  DPPCLK0_DTO_PHASE, 0,
++			  DPPCLK0_DTO_MODULO, 1);
++	}
++}
++
+ static const struct dccg_funcs dccg314_funcs = {
+ 	.update_dpp_dto = dccg31_update_dpp_dto,
++	.dpp_root_clock_control = dccg314_dpp_root_clock_control,
+ 	.get_dccg_ref_freq = dccg31_get_dccg_ref_freq,
+ 	.dccg_init = dccg31_init,
+ 	.set_dpstreamclk = dccg314_set_dpstreamclk,
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_hwseq.c
+index 575d3501c848a..0d180f13803d2 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_hwseq.c
+@@ -390,6 +390,16 @@ void dcn314_set_pixels_per_cycle(struct pipe_ctx *pipe_ctx)
+ 				pix_per_cycle);
+ }
+ 
++void dcn314_dpp_root_clock_control(struct dce_hwseq *hws, unsigned int dpp_inst, bool clock_on)
++{
++	if (!hws->ctx->dc->debug.root_clock_optimization.bits.dpp)
++		return;
++
++	if (hws->ctx->dc->res_pool->dccg->funcs->dpp_root_clock_control)
++		hws->ctx->dc->res_pool->dccg->funcs->dpp_root_clock_control(
++			hws->ctx->dc->res_pool->dccg, dpp_inst, clock_on);
++}
++
+ void dcn314_hubp_pg_control(struct dce_hwseq *hws, unsigned int hubp_inst, bool power_on)
+ {
+ 	struct dc_context *ctx = hws->ctx;
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_hwseq.h b/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_hwseq.h
+index c419d3dbdfee6..c786d5e6a428e 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_hwseq.h
++++ b/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_hwseq.h
+@@ -43,4 +43,6 @@ void dcn314_set_pixels_per_cycle(struct pipe_ctx *pipe_ctx);
+ 
+ void dcn314_hubp_pg_control(struct dce_hwseq *hws, unsigned int hubp_inst, bool power_on);
+ 
++void dcn314_dpp_root_clock_control(struct dce_hwseq *hws, unsigned int dpp_inst, bool clock_on);
++
+ #endif /* __DC_HWSS_DCN314_H__ */
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_init.c b/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_init.c
+index 343f4d9dd5e34..5267e901a35c1 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_init.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_init.c
+@@ -137,6 +137,7 @@ static const struct hwseq_private_funcs dcn314_private_funcs = {
+ 	.plane_atomic_disable = dcn20_plane_atomic_disable,
+ 	.plane_atomic_power_down = dcn10_plane_atomic_power_down,
+ 	.enable_power_gating_plane = dcn314_enable_power_gating_plane,
++	.dpp_root_clock_control = dcn314_dpp_root_clock_control,
+ 	.hubp_pg_control = dcn314_hubp_pg_control,
+ 	.program_all_writeback_pipes_in_tree = dcn30_program_all_writeback_pipes_in_tree,
+ 	.update_odm = dcn314_update_odm,
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_resource.c b/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_resource.c
+index 9ffba4c6fe550..30129fb9c27a9 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_resource.c
+@@ -887,6 +887,7 @@ static const struct dc_plane_cap plane_cap = {
+ static const struct dc_debug_options debug_defaults_drv = {
+ 	.disable_z10 = false,
+ 	.enable_z9_disable_interface = true,
++	.minimum_z8_residency_time = 2000,
+ 	.psr_skip_crtc_disable = true,
+ 	.disable_dmcu = true,
+ 	.force_abm_enable = false,
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hpo_dp_link_encoder.c b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hpo_dp_link_encoder.c
+index 4dbad8d4b4fc2..8af01f579690f 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hpo_dp_link_encoder.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hpo_dp_link_encoder.c
+@@ -26,7 +26,6 @@
+ #include "dcn31/dcn31_hpo_dp_link_encoder.h"
+ #include "dcn32_hpo_dp_link_encoder.h"
+ #include "reg_helper.h"
+-#include "dc_link.h"
+ #include "stream_encoder.h"
+ 
+ #define DC_LOGGER \
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hwseq.c
+index 9d14045cccd63..823f29c292d05 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hwseq.c
+@@ -828,7 +828,7 @@ void dcn32_init_hw(struct dc *dc)
+ 		struct dc_link *edp_links[MAX_NUM_EDP];
+ 		struct dc_link *edp_link;
+ 
+-		get_edp_links(dc, edp_links, &edp_num);
++		dc_get_edp_links(dc, edp_links, &edp_num);
+ 		if (edp_num) {
+ 			for (i = 0; i < edp_num; i++) {
+ 				edp_link = edp_links[i];
+@@ -912,6 +912,7 @@ void dcn32_init_hw(struct dc *dc)
+ 	if (dc->ctx->dmub_srv) {
+ 		dc_dmub_srv_query_caps_cmd(dc->ctx->dmub_srv->dmub);
+ 		dc->caps.dmub_caps.psr = dc->ctx->dmub_srv->dmub->feature_caps.psr;
++		dc->caps.dmub_caps.mclk_sw = dc->ctx->dmub_srv->dmub->feature_caps.fw_assisted_mclk_switch;
+ 	}
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c
+index 4b7abb4af6235..a518243792dfa 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c
+@@ -2021,7 +2021,7 @@ int dcn32_populate_dml_pipes_from_context(
+ 	// In general cases we want to keep the dram clock change requirement
+ 	// (prefer configs that support MCLK switch). Only override to false
+ 	// for SubVP
+-	if (subvp_in_use)
++	if (context->bw_ctx.bw.dcn.clk.fw_based_mclk_switching || subvp_in_use)
+ 		context->bw_ctx.dml.soc.dram_clock_change_requirement_final = false;
+ 	else
+ 		context->bw_ctx.dml.soc.dram_clock_change_requirement_final = true;
+@@ -2077,6 +2077,14 @@ static struct resource_funcs dcn32_res_pool_funcs = {
+ 	.restore_mall_state = dcn32_restore_mall_state,
+ };
+ 
++static uint32_t read_pipe_fuses(struct dc_context *ctx)
++{
++	uint32_t value = REG_READ(CC_DC_PIPE_DIS);
++	/* DCN32 support max 4 pipes */
++	value = value & 0xf;
++	return value;
++}
++
+ 
+ static bool dcn32_resource_construct(
+ 	uint8_t num_virtual_links,
+@@ -2119,7 +2127,7 @@ static bool dcn32_resource_construct(
+ 	pool->base.res_cap = &res_cap_dcn32;
+ 	/* max number of pipes for ASIC before checking for pipe fuses */
+ 	num_pipes  = pool->base.res_cap->num_timing_generator;
+-	pipe_fuses = REG_READ(CC_DC_PIPE_DIS);
++	pipe_fuses = read_pipe_fuses(ctx);
+ 
+ 	for (i = 0; i < pool->base.res_cap->num_timing_generator; i++)
+ 		if (pipe_fuses & 1 << i)
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c b/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c
+index 55f918b440771..1a805dcdf534b 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c
+@@ -1626,6 +1626,14 @@ static struct resource_funcs dcn321_res_pool_funcs = {
+ 	.restore_mall_state = dcn32_restore_mall_state,
+ };
+ 
++static uint32_t read_pipe_fuses(struct dc_context *ctx)
++{
++	uint32_t value = REG_READ(CC_DC_PIPE_DIS);
++	/* DCN321 support max 4 pipes */
++	value = value & 0xf;
++	return value;
++}
++
+ 
+ static bool dcn321_resource_construct(
+ 	uint8_t num_virtual_links,
+@@ -1668,7 +1676,7 @@ static bool dcn321_resource_construct(
+ 	pool->base.res_cap = &res_cap_dcn321;
+ 	/* max number of pipes for ASIC before checking for pipe fuses */
+ 	num_pipes  = pool->base.res_cap->num_timing_generator;
+-	pipe_fuses = REG_READ(CC_DC_PIPE_DIS);
++	pipe_fuses = read_pipe_fuses(ctx);
+ 
+ 	for (i = 0; i < pool->base.res_cap->num_timing_generator; i++)
+ 		if (pipe_fuses & 1 << i)
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn20/dcn20_fpu.c b/drivers/gpu/drm/amd/display/dc/dml/dcn20/dcn20_fpu.c
+index d3ba65efe1d2e..f3cfc144e3587 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn20/dcn20_fpu.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn20/dcn20_fpu.c
+@@ -973,7 +973,8 @@ static enum dcn_zstate_support_state  decide_zstate_support(struct dc *dc, struc
+ 	else if (context->stream_count == 1 &&  context->streams[0]->signal == SIGNAL_TYPE_EDP) {
+ 		struct dc_link *link = context->streams[0]->sink->link;
+ 		struct dc_stream_status *stream_status = &context->stream_status[0];
+-		bool allow_z8 = context->bw_ctx.dml.vba.StutterPeriod > 1000.0;
++		int minmum_z8_residency = dc->debug.minimum_z8_residency_time > 0 ? dc->debug.minimum_z8_residency_time : 1000;
++		bool allow_z8 = context->bw_ctx.dml.vba.StutterPeriod > (double)minmum_z8_residency;
+ 		bool is_pwrseq0 = link->link_index == 0;
+ 
+ 		if (dc_extended_blank_supported(dc)) {
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn30/dcn30_fpu.c b/drivers/gpu/drm/amd/display/dc/dml/dcn30/dcn30_fpu.c
+index 4fa6363647937..fdfb19337ea6e 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn30/dcn30_fpu.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn30/dcn30_fpu.c
+@@ -368,7 +368,9 @@ void dcn30_fpu_update_soc_for_wm_a(struct dc *dc, struct dc_state *context)
+ 	dc_assert_fp_enabled();
+ 
+ 	if (dc->clk_mgr->bw_params->wm_table.nv_entries[WM_A].valid) {
+-		context->bw_ctx.dml.soc.dram_clock_change_latency_us = dc->clk_mgr->bw_params->wm_table.nv_entries[WM_A].dml_input.pstate_latency_us;
++		if (!context->bw_ctx.bw.dcn.clk.fw_based_mclk_switching ||
++				context->bw_ctx.dml.soc.dram_clock_change_latency_us == 0)
++			context->bw_ctx.dml.soc.dram_clock_change_latency_us = dc->clk_mgr->bw_params->wm_table.nv_entries[WM_A].dml_input.pstate_latency_us;
+ 		context->bw_ctx.dml.soc.sr_enter_plus_exit_time_us = dc->clk_mgr->bw_params->wm_table.nv_entries[WM_A].dml_input.sr_enter_plus_exit_time_us;
+ 		context->bw_ctx.dml.soc.sr_exit_time_us = dc->clk_mgr->bw_params->wm_table.nv_entries[WM_A].dml_input.sr_exit_time_us;
+ 	}
+@@ -520,6 +522,20 @@ void dcn30_fpu_calculate_wm_and_dlg(
+ 		pipe_idx++;
+ 	}
+ 
++	// WA: restrict FPO to use first non-strobe mode (NV24 BW issue)
++	if (context->bw_ctx.bw.dcn.clk.fw_based_mclk_switching &&
++			dc->dml.soc.num_chans <= 4 &&
++			context->bw_ctx.dml.vba.DRAMSpeed <= 1700 &&
++			context->bw_ctx.dml.vba.DRAMSpeed >= 1500) {
++
++		for (i = 0; i < dc->dml.soc.num_states; i++) {
++			if (dc->dml.soc.clock_limits[i].dram_speed_mts > 1700) {
++				context->bw_ctx.dml.vba.DRAMSpeed = dc->dml.soc.clock_limits[i].dram_speed_mts;
++				break;
++			}
++		}
++	}
++
+ 	dcn20_calculate_dlg_params(dc, context, pipes, pipe_cnt, vlevel);
+ 
+ 	if (!pstate_en)
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c b/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c
+index c3d75e56410cc..899105da04335 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c
+@@ -25,7 +25,6 @@
+ 
+ #ifdef CONFIG_DRM_AMD_DC_DCN
+ #include "dc.h"
+-#include "dc_link.h"
+ #include "../display_mode_lib.h"
+ #include "display_mode_vba_30.h"
+ #include "../dml_inline_defs.h"
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn31/display_mode_vba_31.c b/drivers/gpu/drm/amd/display/dc/dml/dcn31/display_mode_vba_31.c
+index 27f488405335f..2b57f5b2362a4 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn31/display_mode_vba_31.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn31/display_mode_vba_31.c
+@@ -24,7 +24,6 @@
+  */
+ 
+ #include "dc.h"
+-#include "dc_link.h"
+ #include "../display_mode_lib.h"
+ #include "../dcn30/display_mode_vba_30.h"
+ #include "display_mode_vba_31.h"
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn314/dcn314_fpu.c b/drivers/gpu/drm/amd/display/dc/dml/dcn314/dcn314_fpu.c
+index acda3e1babd4a..3bbc46a673355 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn314/dcn314_fpu.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn314/dcn314_fpu.c
+@@ -149,8 +149,8 @@ struct _vcs_dpi_soc_bounding_box_st dcn3_14_soc = {
+ 	.num_states = 5,
+ 	.sr_exit_time_us = 16.5,
+ 	.sr_enter_plus_exit_time_us = 18.5,
+-	.sr_exit_z8_time_us = 210.0,
+-	.sr_enter_plus_exit_z8_time_us = 310.0,
++	.sr_exit_z8_time_us = 268.0,
++	.sr_enter_plus_exit_z8_time_us = 393.0,
+ 	.writeback_latency_us = 12.0,
+ 	.dram_channel_width_bytes = 4,
+ 	.round_trip_ping_latency_dcfclk_cycles = 106,
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn314/display_mode_vba_314.c b/drivers/gpu/drm/amd/display/dc/dml/dcn314/display_mode_vba_314.c
+index c843b394aeb4a..461ab6d2030e2 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn314/display_mode_vba_314.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn314/display_mode_vba_314.c
+@@ -27,7 +27,6 @@
+ #define UNIT_TEST 0
+ #if !UNIT_TEST
+ #include "dc.h"
+-#include "dc_link.h"
+ #endif
+ #include "../display_mode_lib.h"
+ #include "display_mode_vba_314.h"
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_32.c b/drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_32.c
+index 3b2a014ccf8f5..02d99b6bfe5ec 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_32.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_32.c
+@@ -24,7 +24,6 @@
+  */
+ 
+ #include "dc.h"
+-#include "dc_link.h"
+ #include "../display_mode_lib.h"
+ #include "display_mode_vba_32.h"
+ #include "../dml_inline_defs.h"
+@@ -810,7 +809,8 @@ static void DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerforman
+ 					v->SwathHeightY[k],
+ 					v->SwathHeightC[k],
+ 					TWait,
+-					v->DRAMSpeedPerState[mode_lib->vba.VoltageLevel] <= MEM_STROBE_FREQ_MHZ ?
++					(v->DRAMSpeedPerState[mode_lib->vba.VoltageLevel] <= MEM_STROBE_FREQ_MHZ ||
++						v->DCFCLKPerState[mode_lib->vba.VoltageLevel] <= MIN_DCFCLK_FREQ_MHZ) ?
+ 							mode_lib->vba.ip.min_prefetch_in_strobe_us : 0,
+ 					/* Output */
+ 					&v->DSTXAfterScaler[k],
+@@ -3309,7 +3309,7 @@ void dml32_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l
+ 							v->swath_width_chroma_ub_this_state[k],
+ 							v->SwathHeightYThisState[k],
+ 							v->SwathHeightCThisState[k], v->TWait,
+-							v->DRAMSpeedPerState[i] <= MEM_STROBE_FREQ_MHZ ?
++							(v->DRAMSpeedPerState[i] <= MEM_STROBE_FREQ_MHZ || v->DCFCLKState[i][j] <= MIN_DCFCLK_FREQ_MHZ) ?
+ 									mode_lib->vba.ip.min_prefetch_in_strobe_us : 0,
+ 
+ 							/* Output */
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_32.h b/drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_32.h
+index 500b3dd6052d9..d98e36a9a09cc 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_32.h
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_32.h
+@@ -53,6 +53,7 @@
+ #define BPP_BLENDED_PIPE 0xffffffff
+ 
+ #define MEM_STROBE_FREQ_MHZ 1600
++#define MIN_DCFCLK_FREQ_MHZ 200
+ #define MEM_STROBE_MAX_DELIVERY_TIME_US 60.0
+ 
+ struct display_mode_lib;
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn321/dcn321_fpu.c b/drivers/gpu/drm/amd/display/dc/dml/dcn321/dcn321_fpu.c
+index b80cef70fa60f..383a409a3f54c 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn321/dcn321_fpu.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn321/dcn321_fpu.c
+@@ -106,16 +106,16 @@ struct _vcs_dpi_soc_bounding_box_st dcn3_21_soc = {
+ 	.clock_limits = {
+ 		{
+ 			.state = 0,
+-			.dcfclk_mhz = 1564.0,
+-			.fabricclk_mhz = 400.0,
+-			.dispclk_mhz = 2150.0,
+-			.dppclk_mhz = 2150.0,
++			.dcfclk_mhz = 1434.0,
++			.fabricclk_mhz = 2250.0,
++			.dispclk_mhz = 1720.0,
++			.dppclk_mhz = 1720.0,
+ 			.phyclk_mhz = 810.0,
+ 			.phyclk_d18_mhz = 667.0,
+-			.phyclk_d32_mhz = 625.0,
++			.phyclk_d32_mhz = 313.0,
+ 			.socclk_mhz = 1200.0,
+-			.dscclk_mhz = 716.667,
+-			.dram_speed_mts = 1600.0,
++			.dscclk_mhz = 573.333,
++			.dram_speed_mts = 16000.0,
+ 			.dtbclk_mhz = 1564.0,
+ 		},
+ 	},
+@@ -125,14 +125,14 @@ struct _vcs_dpi_soc_bounding_box_st dcn3_21_soc = {
+ 	.sr_exit_z8_time_us = 285.0,
+ 	.sr_enter_plus_exit_z8_time_us = 320,
+ 	.writeback_latency_us = 12.0,
+-	.round_trip_ping_latency_dcfclk_cycles = 263,
++	.round_trip_ping_latency_dcfclk_cycles = 207,
+ 	.urgent_latency_pixel_data_only_us = 4,
+ 	.urgent_latency_pixel_mixed_with_vm_data_us = 4,
+ 	.urgent_latency_vm_data_only_us = 4,
+-	.fclk_change_latency_us = 20,
+-	.usr_retraining_latency_us = 2,
+-	.smn_latency_us = 2,
+-	.mall_allocated_for_dcn_mbytes = 64,
++	.fclk_change_latency_us = 7,
++	.usr_retraining_latency_us = 0,
++	.smn_latency_us = 0,
++	.mall_allocated_for_dcn_mbytes = 32,
+ 	.urgent_out_of_order_return_per_channel_pixel_only_bytes = 4096,
+ 	.urgent_out_of_order_return_per_channel_pixel_and_vm_bytes = 4096,
+ 	.urgent_out_of_order_return_per_channel_vm_only_bytes = 4096,
+diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/dccg.h b/drivers/gpu/drm/amd/display/dc/inc/hw/dccg.h
+index ce006762f2571..ad6acd1b34e1d 100644
+--- a/drivers/gpu/drm/amd/display/dc/inc/hw/dccg.h
++++ b/drivers/gpu/drm/amd/display/dc/inc/hw/dccg.h
+@@ -148,18 +148,21 @@ struct dccg_funcs {
+ 		struct dccg *dccg,
+ 		int inst);
+ 
+-void (*set_pixel_rate_div)(
+-        struct dccg *dccg,
+-        uint32_t otg_inst,
+-        enum pixel_rate_div k1,
+-        enum pixel_rate_div k2);
+-
+-void (*set_valid_pixel_rate)(
+-        struct dccg *dccg,
+-	int ref_dtbclk_khz,
+-        int otg_inst,
+-        int pixclk_khz);
++	void (*set_pixel_rate_div)(struct dccg *dccg,
++			uint32_t otg_inst,
++			enum pixel_rate_div k1,
++			enum pixel_rate_div k2);
+ 
++	void (*set_valid_pixel_rate)(
++			struct dccg *dccg,
++			int ref_dtbclk_khz,
++			int otg_inst,
++			int pixclk_khz);
++
++	void (*dpp_root_clock_control)(
++			struct dccg *dccg,
++			unsigned int dpp_inst,
++			bool clock_on);
+ };
+ 
+ #endif //__DAL_DCCG_H__
+diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/hw_shared.h b/drivers/gpu/drm/amd/display/dc/inc/hw/hw_shared.h
+index a819f0f97c5f3..b95ae9596c3b1 100644
+--- a/drivers/gpu/drm/amd/display/dc/inc/hw/hw_shared.h
++++ b/drivers/gpu/drm/amd/display/dc/inc/hw/hw_shared.h
+@@ -275,20 +275,6 @@ enum dc_lut_mode {
+ 	LUT_RAM_B
+ };
+ 
+-enum symclk_state {
+-	SYMCLK_OFF_TX_OFF,
+-	SYMCLK_ON_TX_ON,
+-	SYMCLK_ON_TX_OFF,
+-};
+-
+-struct phy_state {
+-	struct {
+-		uint8_t otg		: 1;
+-		uint8_t reserved	: 7;
+-	} symclk_ref_cnts;
+-	enum symclk_state symclk_state;
+-};
+-
+ /**
+  * speakersToChannels
+  *
+diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/link_encoder.h b/drivers/gpu/drm/amd/display/dc/inc/hw/link_encoder.h
+index ec572a9e40547..dbe7afa9d3a22 100644
+--- a/drivers/gpu/drm/amd/display/dc/inc/hw/link_encoder.h
++++ b/drivers/gpu/drm/amd/display/dc/inc/hw/link_encoder.h
+@@ -75,58 +75,6 @@ struct encoder_feature_support {
+ 	bool fec_supported;
+ };
+ 
+-union dpcd_psr_configuration {
+-	struct {
+-		unsigned char ENABLE                    : 1;
+-		unsigned char TRANSMITTER_ACTIVE_IN_PSR : 1;
+-		unsigned char CRC_VERIFICATION          : 1;
+-		unsigned char FRAME_CAPTURE_INDICATION  : 1;
+-		/* For eDP 1.4, PSR v2*/
+-		unsigned char LINE_CAPTURE_INDICATION   : 1;
+-		/* For eDP 1.4, PSR v2*/
+-		unsigned char IRQ_HPD_WITH_CRC_ERROR    : 1;
+-		unsigned char ENABLE_PSR2               : 1;
+-		/* For eDP 1.5, PSR v2 w/ early transport */
+-		unsigned char EARLY_TRANSPORT_ENABLE    : 1;
+-	} bits;
+-	unsigned char raw;
+-};
+-
+-union dpcd_alpm_configuration {
+-	struct {
+-		unsigned char ENABLE                    : 1;
+-		unsigned char IRQ_HPD_ENABLE            : 1;
+-		unsigned char RESERVED                  : 6;
+-	} bits;
+-	unsigned char raw;
+-};
+-
+-union dpcd_sink_active_vtotal_control_mode {
+-	struct {
+-		unsigned char ENABLE                    : 1;
+-		unsigned char RESERVED                  : 7;
+-	} bits;
+-	unsigned char raw;
+-};
+-
+-union psr_error_status {
+-	struct {
+-		unsigned char LINK_CRC_ERROR        :1;
+-		unsigned char RFB_STORAGE_ERROR     :1;
+-		unsigned char VSC_SDP_ERROR         :1;
+-		unsigned char RESERVED              :5;
+-	} bits;
+-	unsigned char raw;
+-};
+-
+-union psr_sink_psr_status {
+-	struct {
+-	unsigned char SINK_SELF_REFRESH_STATUS  :3;
+-	unsigned char RESERVED                  :5;
+-	} bits;
+-	unsigned char raw;
+-};
+-
+ struct link_encoder {
+ 	const struct link_encoder_funcs *funcs;
+ 	int32_t aux_channel_offset;
+diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/stream_encoder.h b/drivers/gpu/drm/amd/display/dc/inc/hw/stream_encoder.h
+index bb5ad70d42662..c4fbbf08ef868 100644
+--- a/drivers/gpu/drm/amd/display/dc/inc/hw/stream_encoder.h
++++ b/drivers/gpu/drm/amd/display/dc/inc/hw/stream_encoder.h
+@@ -30,7 +30,6 @@
+ 
+ #include "audio_types.h"
+ #include "hw_shared.h"
+-#include "dc_link.h"
+ 
+ struct dc_bios;
+ struct dc_context;
+diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw_sequencer_private.h b/drivers/gpu/drm/amd/display/dc/inc/hw_sequencer_private.h
+index a4d61bb724b67..39bd53b790201 100644
+--- a/drivers/gpu/drm/amd/display/dc/inc/hw_sequencer_private.h
++++ b/drivers/gpu/drm/amd/display/dc/inc/hw_sequencer_private.h
+@@ -115,6 +115,10 @@ struct hwseq_private_funcs {
+ 	void (*plane_atomic_disable)(struct dc *dc, struct pipe_ctx *pipe_ctx);
+ 	void (*enable_power_gating_plane)(struct dce_hwseq *hws,
+ 		bool enable);
++	void (*dpp_root_clock_control)(
++			struct dce_hwseq *hws,
++			unsigned int dpp_inst,
++			bool clock_on);
+ 	void (*dpp_pg_control)(struct dce_hwseq *hws,
+ 			unsigned int dpp_inst,
+ 			bool power_on);
+diff --git a/drivers/gpu/drm/amd/display/dc/inc/link.h b/drivers/gpu/drm/amd/display/dc/inc/link.h
+index e70fa00592236..6a346a41f07b2 100644
+--- a/drivers/gpu/drm/amd/display/dc/inc/link.h
++++ b/drivers/gpu/drm/amd/display/dc/inc/link.h
+@@ -38,7 +38,6 @@
+  * into this file and prefix it with "link_".
+  */
+ #include "core_types.h"
+-#include "dc_link.h"
+ 
+ struct link_init_data {
+ 	const struct dc *dc;
+diff --git a/drivers/gpu/drm/amd/display/dc/link/accessories/link_dp_cts.c b/drivers/gpu/drm/amd/display/dc/link/accessories/link_dp_cts.c
+index 942300e0bd929..7f36d733bfcab 100644
+--- a/drivers/gpu/drm/amd/display/dc/link/accessories/link_dp_cts.c
++++ b/drivers/gpu/drm/amd/display/dc/link/accessories/link_dp_cts.c
+@@ -1027,20 +1027,3 @@ void dc_link_set_preferred_training_settings(struct dc *dc,
+ 	if (skip_immediate_retrain == false)
+ 		dc_link_set_preferred_link_settings(dc, &link->preferred_link_setting, link);
+ }
+-
+-void dc_link_set_test_pattern(struct dc_link *link,
+-		enum dp_test_pattern test_pattern,
+-		enum dp_test_pattern_color_space test_pattern_color_space,
+-		const struct link_training_settings *p_link_settings,
+-		const unsigned char *p_custom_pattern,
+-		unsigned int cust_pattern_size)
+-{
+-	if (link != NULL)
+-		dc_link_dp_set_test_pattern(
+-			link,
+-			test_pattern,
+-			test_pattern_color_space,
+-			p_link_settings,
+-			p_custom_pattern,
+-			cust_pattern_size);
+-}
+diff --git a/drivers/gpu/drm/amd/display/dc/link/link_detection.c b/drivers/gpu/drm/amd/display/dc/link/link_detection.c
+index f70025ef7b69e..9839ec222875a 100644
+--- a/drivers/gpu/drm/amd/display/dc/link/link_detection.c
++++ b/drivers/gpu/drm/amd/display/dc/link/link_detection.c
+@@ -1329,3 +1329,102 @@ const struct dc_link_status *link_get_status(const struct dc_link *link)
+ 	return &link->link_status;
+ }
+ 
++
++static bool link_add_remote_sink_helper(struct dc_link *dc_link, struct dc_sink *sink)
++{
++	if (dc_link->sink_count >= MAX_SINKS_PER_LINK) {
++		BREAK_TO_DEBUGGER();
++		return false;
++	}
++
++	dc_sink_retain(sink);
++
++	dc_link->remote_sinks[dc_link->sink_count] = sink;
++	dc_link->sink_count++;
++
++	return true;
++}
++
++struct dc_sink *dc_link_add_remote_sink(
++		struct dc_link *link,
++		const uint8_t *edid,
++		int len,
++		struct dc_sink_init_data *init_data)
++{
++	struct dc_sink *dc_sink;
++	enum dc_edid_status edid_status;
++
++	if (len > DC_MAX_EDID_BUFFER_SIZE) {
++		dm_error("Max EDID buffer size breached!\n");
++		return NULL;
++	}
++
++	if (!init_data) {
++		BREAK_TO_DEBUGGER();
++		return NULL;
++	}
++
++	if (!init_data->link) {
++		BREAK_TO_DEBUGGER();
++		return NULL;
++	}
++
++	dc_sink = dc_sink_create(init_data);
++
++	if (!dc_sink)
++		return NULL;
++
++	memmove(dc_sink->dc_edid.raw_edid, edid, len);
++	dc_sink->dc_edid.length = len;
++
++	if (!link_add_remote_sink_helper(
++			link,
++			dc_sink))
++		goto fail_add_sink;
++
++	edid_status = dm_helpers_parse_edid_caps(
++			link,
++			&dc_sink->dc_edid,
++			&dc_sink->edid_caps);
++
++	/*
++	 * Treat device as no EDID device if EDID
++	 * parsing fails
++	 */
++	if (edid_status != EDID_OK && edid_status != EDID_PARTIAL_VALID) {
++		dc_sink->dc_edid.length = 0;
++		dm_error("Bad EDID, status%d!\n", edid_status);
++	}
++
++	return dc_sink;
++
++fail_add_sink:
++	dc_sink_release(dc_sink);
++	return NULL;
++}
++
++void dc_link_remove_remote_sink(struct dc_link *link, struct dc_sink *sink)
++{
++	int i;
++
++	if (!link->sink_count) {
++		BREAK_TO_DEBUGGER();
++		return;
++	}
++
++	for (i = 0; i < link->sink_count; i++) {
++		if (link->remote_sinks[i] == sink) {
++			dc_sink_release(sink);
++			link->remote_sinks[i] = NULL;
++
++			/* shrink array to remove empty place */
++			while (i < link->sink_count - 1) {
++				link->remote_sinks[i] = link->remote_sinks[i+1];
++				i++;
++			}
++			link->remote_sinks[i] = NULL;
++			link->sink_count--;
++			return;
++		}
++	}
++}
+diff --git a/drivers/gpu/drm/amd/display/dc/link/link_factory.c b/drivers/gpu/drm/amd/display/dc/link/link_factory.c
+index aeb26a4d539e9..8aaf14afa4271 100644
+--- a/drivers/gpu/drm/amd/display/dc/link/link_factory.c
++++ b/drivers/gpu/drm/amd/display/dc/link/link_factory.c
+@@ -274,14 +274,18 @@ static bool dc_link_construct_phy(struct dc_link *link,
+ 				link->irq_source_hpd = DC_IRQ_SOURCE_INVALID;
+ 
+ 			switch (link->dc->config.allow_edp_hotplug_detection) {
+-			case 1: // only the 1st eDP handles hotplug
++			case HPD_EN_FOR_ALL_EDP:
++				link->irq_source_hpd_rx =
++						dal_irq_get_rx_source(link->hpd_gpio);
++				break;
++			case HPD_EN_FOR_PRIMARY_EDP_ONLY:
+ 				if (link->link_index == 0)
+ 					link->irq_source_hpd_rx =
+ 						dal_irq_get_rx_source(link->hpd_gpio);
+ 				else
+ 					link->irq_source_hpd = DC_IRQ_SOURCE_INVALID;
+ 				break;
+-			case 2: // only the 2nd eDP handles hotplug
++			case HPD_EN_FOR_SECONDARY_EDP_ONLY:
+ 				if (link->link_index == 1)
+ 					link->irq_source_hpd_rx =
+ 						dal_irq_get_rx_source(link->hpd_gpio);
+@@ -289,6 +293,7 @@ static bool dc_link_construct_phy(struct dc_link *link,
+ 					link->irq_source_hpd = DC_IRQ_SOURCE_INVALID;
+ 				break;
+ 			default:
++				link->irq_source_hpd = DC_IRQ_SOURCE_INVALID;
+ 				break;
+ 			}
+ 		}
+diff --git a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_dpia.c b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_dpia.c
+index 32f48a48e9dde..cbfa9343ffaf9 100644
+--- a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_dpia.c
++++ b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_dpia.c
+@@ -26,7 +26,6 @@
+ 
+ #include "dc.h"
+ #include "inc/core_status.h"
+-#include "dc_link.h"
+ #include "dpcd_defs.h"
+ 
+ #include "link_dp_dpia.h"
+diff --git a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_dpia_bw.c b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_dpia_bw.c
+index f69e681b3b5bf..1ecf1d8573592 100644
+--- a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_dpia_bw.c
++++ b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_dpia_bw.c
+@@ -27,7 +27,6 @@
+ //				USB4 DPIA BANDWIDTH ALLOCATION LOGIC
+ /*********************************************************************/
+ #include "dc.h"
+-#include "dc_link.h"
+ #include "link_dp_dpia_bw.h"
+ #include "drm_dp_helper_dc.h"
+ #include "link_dpcd.h"
+diff --git a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training_dpia.c b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training_dpia.c
+index e60da0532c539..4ded5f9cdecc6 100644
+--- a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training_dpia.c
++++ b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training_dpia.c
+@@ -29,7 +29,6 @@
+ #include "link_dp_training_dpia.h"
+ #include "dc.h"
+ #include "inc/core_status.h"
+-#include "dc_link.h"
+ #include "dpcd_defs.h"
+ 
+ #include "link_dp_dpia.h"
+diff --git a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn32.c b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn32.c
+index a76da0131addd..9c20516be066c 100644
+--- a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn32.c
++++ b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn32.c
+@@ -130,12 +130,13 @@ void dmub_dcn32_reset(struct dmub_srv *dmub)
+ 	REG_WRITE(DMCUB_INBOX1_WPTR, 0);
+ 	REG_WRITE(DMCUB_OUTBOX1_RPTR, 0);
+ 	REG_WRITE(DMCUB_OUTBOX1_WPTR, 0);
++	REG_WRITE(DMCUB_OUTBOX0_RPTR, 0);
++	REG_WRITE(DMCUB_OUTBOX0_WPTR, 0);
+ 	REG_WRITE(DMCUB_SCRATCH0, 0);
+ }
+ 
+ void dmub_dcn32_reset_release(struct dmub_srv *dmub)
+ {
+-	REG_WRITE(DMCUB_GPINT_DATAIN1, 0);
+ 	REG_UPDATE(MMHUBBUB_SOFT_RESET, DMUIF_SOFT_RESET, 0);
+ 	REG_WRITE(DMCUB_SCRATCH15, dmub->psp_version & 0x001100FF);
+ 	REG_UPDATE_2(DMCUB_CNTL, DMCUB_ENABLE, 1, DMCUB_TRACEPORT_EN, 1);
+diff --git a/drivers/gpu/drm/amd/display/include/link_service_types.h b/drivers/gpu/drm/amd/display/include/link_service_types.h
+index 18b9173d5a962..cd870af5fd250 100644
+--- a/drivers/gpu/drm/amd/display/include/link_service_types.h
++++ b/drivers/gpu/drm/amd/display/include/link_service_types.h
+@@ -34,10 +34,6 @@
+ struct ddc;
+ struct irq_manager;
+ 
+-enum {
+-	MAX_CONTROLLER_NUM = 6
+-};
+-
+ enum dp_power_state {
+ 	DP_POWER_STATE_D0 = 1,
+ 	DP_POWER_STATE_D3
+@@ -60,28 +56,6 @@ enum {
+ 	DATA_EFFICIENCY_128b_132b_x10000 = 9646, /* 96.71% data efficiency x 99.75% downspread factor */
+ };
+ 
+-enum link_training_result {
+-	LINK_TRAINING_SUCCESS,
+-	LINK_TRAINING_CR_FAIL_LANE0,
+-	LINK_TRAINING_CR_FAIL_LANE1,
+-	LINK_TRAINING_CR_FAIL_LANE23,
+-	/* CR DONE bit is cleared during EQ step */
+-	LINK_TRAINING_EQ_FAIL_CR,
+-	/* CR DONE bit is cleared but LANE0_CR_DONE is set during EQ step */
+-	LINK_TRAINING_EQ_FAIL_CR_PARTIAL,
+-	/* other failure during EQ step */
+-	LINK_TRAINING_EQ_FAIL_EQ,
+-	LINK_TRAINING_LQA_FAIL,
+-	/* one of the CR,EQ or symbol lock is dropped */
+-	LINK_TRAINING_LINK_LOSS,
+-	/* Abort link training (because sink unplugged) */
+-	LINK_TRAINING_ABORT,
+-	DP_128b_132b_LT_FAILED,
+-	DP_128b_132b_MAX_LOOP_COUNT_REACHED,
+-	DP_128b_132b_CHANNEL_EQ_DONE_TIMEOUT,
+-	DP_128b_132b_CDS_DONE_TIMEOUT,
+-};
+-
+ enum lttpr_mode {
+ 	LTTPR_MODE_UNKNOWN,
+ 	LTTPR_MODE_NON_LTTPR,
+diff --git a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
+index 6e79d3352d0bb..bdacc0a7b61d5 100644
+--- a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
++++ b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
+@@ -36,6 +36,8 @@
+ #define amdgpu_dpm_enable_bapm(adev, e) \
+ 		((adev)->powerplay.pp_funcs->enable_bapm((adev)->powerplay.pp_handle, (e)))
+ 
++#define amdgpu_dpm_is_legacy_dpm(adev) ((adev)->powerplay.pp_handle == (adev))
++
+ int amdgpu_dpm_get_sclk(struct amdgpu_device *adev, bool low)
+ {
+ 	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+@@ -1432,15 +1434,24 @@ int amdgpu_dpm_get_smu_prv_buf_details(struct amdgpu_device *adev,
+ 
+ int amdgpu_dpm_is_overdrive_supported(struct amdgpu_device *adev)
+ {
+-	struct pp_hwmgr *hwmgr = adev->powerplay.pp_handle;
+-	struct smu_context *smu = adev->powerplay.pp_handle;
++	if (is_support_sw_smu(adev)) {
++		struct smu_context *smu = adev->powerplay.pp_handle;
+ 
+-	if ((is_support_sw_smu(adev) && smu->od_enabled) ||
+-	    (is_support_sw_smu(adev) && smu->is_apu) ||
+-		(!is_support_sw_smu(adev) && hwmgr->od_enabled))
+-		return true;
++		return (smu->od_enabled || smu->is_apu);
++	} else {
++		struct pp_hwmgr *hwmgr;
+ 
+-	return false;
++		/*
++		 * dpm on some legacy asics don't carry od_enabled member
++		 * as its pp_handle is casted directly from adev.
++		 */
++		if (amdgpu_dpm_is_legacy_dpm(adev))
++			return false;
++
++		hwmgr = (struct pp_hwmgr *)adev->powerplay.pp_handle;
++
++		return hwmgr->od_enabled;
++	}
+ }
+ 
+ int amdgpu_dpm_set_pp_table(struct amdgpu_device *adev,
+diff --git a/drivers/gpu/drm/bridge/lontium-lt8912b.c b/drivers/gpu/drm/bridge/lontium-lt8912b.c
+index b40baced13316..13c131ade2683 100644
+--- a/drivers/gpu/drm/bridge/lontium-lt8912b.c
++++ b/drivers/gpu/drm/bridge/lontium-lt8912b.c
+@@ -504,7 +504,6 @@ static int lt8912_attach_dsi(struct lt8912 *lt)
+ 	dsi->format = MIPI_DSI_FMT_RGB888;
+ 
+ 	dsi->mode_flags = MIPI_DSI_MODE_VIDEO |
+-			  MIPI_DSI_MODE_VIDEO_BURST |
+ 			  MIPI_DSI_MODE_LPM |
+ 			  MIPI_DSI_MODE_NO_EOT_PACKET;
+ 
+diff --git a/drivers/gpu/drm/i915/display/icl_dsi.c b/drivers/gpu/drm/i915/display/icl_dsi.c
+index fc0eaf40dc941..6dd9425220218 100644
+--- a/drivers/gpu/drm/i915/display/icl_dsi.c
++++ b/drivers/gpu/drm/i915/display/icl_dsi.c
+@@ -1211,7 +1211,7 @@ static void gen11_dsi_powerup_panel(struct intel_encoder *encoder)
+ 
+ 	/* panel power on related mipi dsi vbt sequences */
+ 	intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_POWER_ON);
+-	intel_dsi_msleep(intel_dsi, intel_dsi->panel_on_delay);
++	msleep(intel_dsi->panel_on_delay);
+ 	intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_DEASSERT_RESET);
+ 	intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_INIT_OTP);
+ 	intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_DISPLAY_ON);
+diff --git a/drivers/gpu/drm/i915/display/intel_dsi_vbt.c b/drivers/gpu/drm/i915/display/intel_dsi_vbt.c
+index 2cbc1292ab382..f102c13cb9590 100644
+--- a/drivers/gpu/drm/i915/display/intel_dsi_vbt.c
++++ b/drivers/gpu/drm/i915/display/intel_dsi_vbt.c
+@@ -762,17 +762,6 @@ void intel_dsi_vbt_exec_sequence(struct intel_dsi *intel_dsi,
+ 		gpiod_set_value_cansleep(intel_dsi->gpio_backlight, 0);
+ }
+ 
+-void intel_dsi_msleep(struct intel_dsi *intel_dsi, int msec)
+-{
+-	struct intel_connector *connector = intel_dsi->attached_connector;
+-
+-	/* For v3 VBTs in vid-mode the delays are part of the VBT sequences */
+-	if (is_vid_mode(intel_dsi) && connector->panel.vbt.dsi.seq_version >= 3)
+-		return;
+-
+-	msleep(msec);
+-}
+-
+ void intel_dsi_log_params(struct intel_dsi *intel_dsi)
+ {
+ 	struct drm_i915_private *i915 = to_i915(intel_dsi->base.base.dev);
+diff --git a/drivers/gpu/drm/i915/display/intel_dsi_vbt.h b/drivers/gpu/drm/i915/display/intel_dsi_vbt.h
+index dc642c1fe7efd..468d873fab1ae 100644
+--- a/drivers/gpu/drm/i915/display/intel_dsi_vbt.h
++++ b/drivers/gpu/drm/i915/display/intel_dsi_vbt.h
+@@ -16,7 +16,6 @@ void intel_dsi_vbt_gpio_init(struct intel_dsi *intel_dsi, bool panel_is_on);
+ void intel_dsi_vbt_gpio_cleanup(struct intel_dsi *intel_dsi);
+ void intel_dsi_vbt_exec_sequence(struct intel_dsi *intel_dsi,
+ 				 enum mipi_seq seq_id);
+-void intel_dsi_msleep(struct intel_dsi *intel_dsi, int msec);
+ void intel_dsi_log_params(struct intel_dsi *intel_dsi);
+ 
+ #endif /* __INTEL_DSI_VBT_H__ */
+diff --git a/drivers/gpu/drm/i915/display/skl_scaler.c b/drivers/gpu/drm/i915/display/skl_scaler.c
+index 473d53610b924..0e7e014fcc717 100644
+--- a/drivers/gpu/drm/i915/display/skl_scaler.c
++++ b/drivers/gpu/drm/i915/display/skl_scaler.c
+@@ -111,6 +111,8 @@ skl_update_scaler(struct intel_crtc_state *crtc_state, bool force_detach,
+ 	struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
+ 	const struct drm_display_mode *adjusted_mode =
+ 		&crtc_state->hw.adjusted_mode;
++	int pipe_src_w = drm_rect_width(&crtc_state->pipe_src);
++	int pipe_src_h = drm_rect_height(&crtc_state->pipe_src);
+ 	int min_src_w, min_src_h, min_dst_w, min_dst_h;
+ 	int max_src_w, max_src_h, max_dst_w, max_dst_h;
+ 
+@@ -207,6 +209,21 @@ skl_update_scaler(struct intel_crtc_state *crtc_state, bool force_detach,
+ 		return -EINVAL;
+ 	}
+ 
++	/*
++	 * The pipe scaler does not use all the bits of PIPESRC, at least
++	 * on the earlier platforms. So even when we're scaling a plane
++	 * the *pipe* source size must not be too large. For simplicity
++	 * we assume the limits match the scaler source size limits. Might
++	 * not be 100% accurate on all platforms, but good enough for now.
++	 */
++	if (pipe_src_w > max_src_w || pipe_src_h > max_src_h) {
++		drm_dbg_kms(&dev_priv->drm,
++			    "scaler_user index %u.%u: pipe src size %ux%u "
++			    "is out of scaler range\n",
++			    crtc->pipe, scaler_user, pipe_src_w, pipe_src_h);
++		return -EINVAL;
++	}
++
+ 	/* mark this plane as a scaler user in crtc_state */
+ 	scaler_state->scaler_users |= (1 << scaler_user);
+ 	drm_dbg_kms(&dev_priv->drm, "scaler_user index %u.%u: "
+diff --git a/drivers/gpu/drm/i915/display/vlv_dsi.c b/drivers/gpu/drm/i915/display/vlv_dsi.c
+index 2289f6b1b4eb5..37efeab52581c 100644
+--- a/drivers/gpu/drm/i915/display/vlv_dsi.c
++++ b/drivers/gpu/drm/i915/display/vlv_dsi.c
+@@ -783,7 +783,6 @@ static void intel_dsi_pre_enable(struct intel_atomic_state *state,
+ {
+ 	struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder);
+ 	struct intel_crtc *crtc = to_intel_crtc(pipe_config->uapi.crtc);
+-	struct intel_connector *connector = to_intel_connector(conn_state->connector);
+ 	struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
+ 	enum pipe pipe = crtc->pipe;
+ 	enum port port;
+@@ -831,21 +830,10 @@ static void intel_dsi_pre_enable(struct intel_atomic_state *state,
+ 	if (!IS_GEMINILAKE(dev_priv))
+ 		intel_dsi_prepare(encoder, pipe_config);
+ 
++	/* Give the panel time to power-on and then deassert its reset */
+ 	intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_POWER_ON);
+-
+-	/*
+-	 * Give the panel time to power-on and then deassert its reset.
+-	 * Depending on the VBT MIPI sequences version the deassert-seq
+-	 * may contain the necessary delay, intel_dsi_msleep() will skip
+-	 * the delay in that case. If there is no deassert-seq, then an
+-	 * unconditional msleep is used to give the panel time to power-on.
+-	 */
+-	if (connector->panel.vbt.dsi.sequence[MIPI_SEQ_DEASSERT_RESET]) {
+-		intel_dsi_msleep(intel_dsi, intel_dsi->panel_on_delay);
+-		intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_DEASSERT_RESET);
+-	} else {
+-		msleep(intel_dsi->panel_on_delay);
+-	}
++	msleep(intel_dsi->panel_on_delay);
++	intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_DEASSERT_RESET);
+ 
+ 	if (IS_GEMINILAKE(dev_priv)) {
+ 		glk_cold_boot = glk_dsi_enable_io(encoder);
+@@ -879,7 +867,7 @@ static void intel_dsi_pre_enable(struct intel_atomic_state *state,
+ 		msleep(20); /* XXX */
+ 		for_each_dsi_port(port, intel_dsi->ports)
+ 			dpi_send_cmd(intel_dsi, TURN_ON, false, port);
+-		intel_dsi_msleep(intel_dsi, 100);
++		msleep(100);
+ 
+ 		intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_DISPLAY_ON);
+ 
+@@ -1007,7 +995,7 @@ static void intel_dsi_post_disable(struct intel_atomic_state *state,
+ 	/* Assert reset */
+ 	intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_ASSERT_RESET);
+ 
+-	intel_dsi_msleep(intel_dsi, intel_dsi->panel_off_delay);
++	msleep(intel_dsi->panel_off_delay);
+ 	intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_POWER_OFF);
+ 
+ 	intel_dsi->panel_power_off_time = ktime_get_boottime();
+diff --git a/drivers/gpu/drm/i915/gt/intel_gt_regs.h b/drivers/gpu/drm/i915/gt/intel_gt_regs.h
+index be0f6e305c881..e54891b0e2f43 100644
+--- a/drivers/gpu/drm/i915/gt/intel_gt_regs.h
++++ b/drivers/gpu/drm/i915/gt/intel_gt_regs.h
+@@ -1145,6 +1145,8 @@
+ #define   ENABLE_SMALLPL			REG_BIT(15)
+ #define   SC_DISABLE_POWER_OPTIMIZATION_EBB	REG_BIT(9)
+ #define   GEN11_SAMPLER_ENABLE_HEADLESS_MSG	REG_BIT(5)
++#define   MTL_DISABLE_SAMPLER_SC_OOO		REG_BIT(3)
++#define   GEN11_INDIRECT_STATE_BASE_ADDR_OVERRIDE	REG_BIT(0)
+ 
+ #define GEN9_HALF_SLICE_CHICKEN7		MCR_REG(0xe194)
+ #define   DG2_DISABLE_ROUND_ENABLE_ALLOW_FOR_SSLA	REG_BIT(15)
+@@ -1171,7 +1173,9 @@
+ #define   THREAD_EX_ARB_MODE_RR_AFTER_DEP	REG_FIELD_PREP(THREAD_EX_ARB_MODE, 0x2)
+ 
+ #define HSW_ROW_CHICKEN3			_MMIO(0xe49c)
++#define GEN9_ROW_CHICKEN3			MCR_REG(0xe49c)
+ #define   HSW_ROW_CHICKEN3_L3_GLOBAL_ATOMICS_DISABLE	(1 << 6)
++#define   MTL_DISABLE_FIX_FOR_EOT_FLUSH		REG_BIT(9)
+ 
+ #define GEN8_ROW_CHICKEN			MCR_REG(0xe4f0)
+ #define   FLOW_CONTROL_ENABLE			REG_BIT(15)
+diff --git a/drivers/gpu/drm/i915/gt/intel_workarounds.c b/drivers/gpu/drm/i915/gt/intel_workarounds.c
+index 485c5cc5d0f96..14f92a8082857 100644
+--- a/drivers/gpu/drm/i915/gt/intel_workarounds.c
++++ b/drivers/gpu/drm/i915/gt/intel_workarounds.c
+@@ -3015,6 +3015,39 @@ general_render_compute_wa_init(struct intel_engine_cs *engine, struct i915_wa_li
+ 
+ 	add_render_compute_tuning_settings(i915, wal);
+ 
++	if (GRAPHICS_VER(i915) >= 11) {
++		/* This is not a Wa (although referred to as
++		 * WaSetInidrectStateOverride in places), this allows
++		 * applications that reference sampler states through
++		 * the BindlessSamplerStateBaseAddress to have their
++		 * border color relative to DynamicStateBaseAddress
++		 * rather than BindlessSamplerStateBaseAddress.
++		 *
++		 * Otherwise SAMPLER_STATE border colors have to be
++		 * copied in multiple heaps (DynamicStateBaseAddress &
++		 * BindlessSamplerStateBaseAddress)
++		 *
++		 * BSpec: 46052
++		 */
++		wa_mcr_masked_en(wal,
++				 GEN10_SAMPLER_MODE,
++				 GEN11_INDIRECT_STATE_BASE_ADDR_OVERRIDE);
++	}
++
++	if (IS_MTL_GRAPHICS_STEP(i915, M, STEP_B0, STEP_FOREVER) ||
++	    IS_MTL_GRAPHICS_STEP(i915, P, STEP_B0, STEP_FOREVER))
++		/* Wa_14017856879 */
++		wa_mcr_masked_en(wal, GEN9_ROW_CHICKEN3, MTL_DISABLE_FIX_FOR_EOT_FLUSH);
++
++	if (IS_MTL_GRAPHICS_STEP(i915, M, STEP_A0, STEP_B0) ||
++	    IS_MTL_GRAPHICS_STEP(i915, P, STEP_A0, STEP_B0))
++		/*
++		 * Wa_14017066071
++		 * Wa_14017654203
++		 */
++		wa_mcr_masked_en(wal, GEN10_SAMPLER_MODE,
++				 MTL_DISABLE_SAMPLER_SC_OOO);
++
+ 	if (IS_MTL_GRAPHICS_STEP(i915, M, STEP_A0, STEP_B0) ||
+ 	    IS_MTL_GRAPHICS_STEP(i915, P, STEP_A0, STEP_B0) ||
+ 	    IS_DG2_GRAPHICS_STEP(i915, G10, STEP_B0, STEP_FOREVER) ||
+diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc.c b/drivers/gpu/drm/i915/gt/uc/intel_uc.c
+index de7f987cf6111..6648691bd6450 100644
+--- a/drivers/gpu/drm/i915/gt/uc/intel_uc.c
++++ b/drivers/gpu/drm/i915/gt/uc/intel_uc.c
+@@ -83,15 +83,15 @@ static int __intel_uc_reset_hw(struct intel_uc *uc)
+ 
+ static void __confirm_options(struct intel_uc *uc)
+ {
+-	struct drm_i915_private *i915 = uc_to_gt(uc)->i915;
++	struct intel_gt *gt = uc_to_gt(uc);
++	struct drm_i915_private *i915 = gt->i915;
+ 
+-	drm_dbg(&i915->drm,
+-		"enable_guc=%d (guc:%s submission:%s huc:%s slpc:%s)\n",
+-		i915->params.enable_guc,
+-		str_yes_no(intel_uc_wants_guc(uc)),
+-		str_yes_no(intel_uc_wants_guc_submission(uc)),
+-		str_yes_no(intel_uc_wants_huc(uc)),
+-		str_yes_no(intel_uc_wants_guc_slpc(uc)));
++	gt_dbg(gt, "enable_guc=%d (guc:%s submission:%s huc:%s slpc:%s)\n",
++	       i915->params.enable_guc,
++	       str_yes_no(intel_uc_wants_guc(uc)),
++	       str_yes_no(intel_uc_wants_guc_submission(uc)),
++	       str_yes_no(intel_uc_wants_huc(uc)),
++	       str_yes_no(intel_uc_wants_guc_slpc(uc)));
+ 
+ 	if (i915->params.enable_guc == 0) {
+ 		GEM_BUG_ON(intel_uc_wants_guc(uc));
+@@ -102,26 +102,22 @@ static void __confirm_options(struct intel_uc *uc)
+ 	}
+ 
+ 	if (!intel_uc_supports_guc(uc))
+-		drm_info(&i915->drm,
+-			 "Incompatible option enable_guc=%d - %s\n",
+-			 i915->params.enable_guc, "GuC is not supported!");
++		gt_info(gt,  "Incompatible option enable_guc=%d - %s\n",
++			i915->params.enable_guc, "GuC is not supported!");
+ 
+ 	if (i915->params.enable_guc & ENABLE_GUC_LOAD_HUC &&
+ 	    !intel_uc_supports_huc(uc))
+-		drm_info(&i915->drm,
+-			 "Incompatible option enable_guc=%d - %s\n",
+-			 i915->params.enable_guc, "HuC is not supported!");
++		gt_info(gt, "Incompatible option enable_guc=%d - %s\n",
++			i915->params.enable_guc, "HuC is not supported!");
+ 
+ 	if (i915->params.enable_guc & ENABLE_GUC_SUBMISSION &&
+ 	    !intel_uc_supports_guc_submission(uc))
+-		drm_info(&i915->drm,
+-			 "Incompatible option enable_guc=%d - %s\n",
+-			 i915->params.enable_guc, "GuC submission is N/A");
++		gt_info(gt, "Incompatible option enable_guc=%d - %s\n",
++			i915->params.enable_guc, "GuC submission is N/A");
+ 
+ 	if (i915->params.enable_guc & ~ENABLE_GUC_MASK)
+-		drm_info(&i915->drm,
+-			 "Incompatible option enable_guc=%d - %s\n",
+-			 i915->params.enable_guc, "undocumented flag");
++		gt_info(gt, "Incompatible option enable_guc=%d - %s\n",
++			i915->params.enable_guc, "undocumented flag");
+ }
+ 
+ void intel_uc_init_early(struct intel_uc *uc)
+@@ -549,10 +545,8 @@ static int __uc_init_hw(struct intel_uc *uc)
+ 
+ 	intel_gsc_uc_load_start(&uc->gsc);
+ 
+-	gt_info(gt, "GuC submission %s\n",
+-		str_enabled_disabled(intel_uc_uses_guc_submission(uc)));
+-	gt_info(gt, "GuC SLPC %s\n",
+-		str_enabled_disabled(intel_uc_uses_guc_slpc(uc)));
++	guc_info(guc, "submission %s\n", str_enabled_disabled(intel_uc_uses_guc_submission(uc)));
++	guc_info(guc, "SLPC %s\n", str_enabled_disabled(intel_uc_uses_guc_slpc(uc)));
+ 
+ 	return 0;
+ 
+diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c b/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c
+index 65672ff826054..22786d9116fd0 100644
+--- a/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c
++++ b/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c
+@@ -11,6 +11,7 @@
+ #include <drm/drm_print.h>
+ 
+ #include "gem/i915_gem_lmem.h"
++#include "gt/intel_gt_print.h"
+ #include "intel_uc_fw.h"
+ #include "intel_uc_fw_abi.h"
+ #include "i915_drv.h"
+@@ -44,11 +45,10 @@ void intel_uc_fw_change_status(struct intel_uc_fw *uc_fw,
+ 			       enum intel_uc_fw_status status)
+ {
+ 	uc_fw->__status =  status;
+-	drm_dbg(&__uc_fw_to_gt(uc_fw)->i915->drm,
+-		"%s firmware -> %s\n",
+-		intel_uc_fw_type_repr(uc_fw->type),
+-		status == INTEL_UC_FIRMWARE_SELECTED ?
+-		uc_fw->file_selected.path : intel_uc_fw_status_repr(status));
++	gt_dbg(__uc_fw_to_gt(uc_fw), "%s firmware -> %s\n",
++	       intel_uc_fw_type_repr(uc_fw->type),
++	       status == INTEL_UC_FIRMWARE_SELECTED ?
++	       uc_fw->file_selected.path : intel_uc_fw_status_repr(status));
+ }
+ #endif
+ 
+@@ -562,15 +562,14 @@ static int check_ccs_header(struct intel_gt *gt,
+ 			    const struct firmware *fw,
+ 			    struct intel_uc_fw *uc_fw)
+ {
+-	struct drm_i915_private *i915 = gt->i915;
+ 	struct uc_css_header *css;
+ 	size_t size;
+ 
+ 	/* Check the size of the blob before examining buffer contents */
+ 	if (unlikely(fw->size < sizeof(struct uc_css_header))) {
+-		drm_warn(&i915->drm, "%s firmware %s: invalid size: %zu < %zu\n",
+-			 intel_uc_fw_type_repr(uc_fw->type), uc_fw->file_selected.path,
+-			 fw->size, sizeof(struct uc_css_header));
++		gt_warn(gt, "%s firmware %s: invalid size: %zu < %zu\n",
++			intel_uc_fw_type_repr(uc_fw->type), uc_fw->file_selected.path,
++			fw->size, sizeof(struct uc_css_header));
+ 		return -ENODATA;
+ 	}
+ 
+@@ -580,10 +579,9 @@ static int check_ccs_header(struct intel_gt *gt,
+ 	size = (css->header_size_dw - css->key_size_dw - css->modulus_size_dw -
+ 		css->exponent_size_dw) * sizeof(u32);
+ 	if (unlikely(size != sizeof(struct uc_css_header))) {
+-		drm_warn(&i915->drm,
+-			 "%s firmware %s: unexpected header size: %zu != %zu\n",
+-			 intel_uc_fw_type_repr(uc_fw->type), uc_fw->file_selected.path,
+-			 fw->size, sizeof(struct uc_css_header));
++		gt_warn(gt, "%s firmware %s: unexpected header size: %zu != %zu\n",
++			intel_uc_fw_type_repr(uc_fw->type), uc_fw->file_selected.path,
++			fw->size, sizeof(struct uc_css_header));
+ 		return -EPROTO;
+ 	}
+ 
+@@ -596,18 +594,18 @@ static int check_ccs_header(struct intel_gt *gt,
+ 	/* At least, it should have header, uCode and RSA. Size of all three. */
+ 	size = sizeof(struct uc_css_header) + uc_fw->ucode_size + uc_fw->rsa_size;
+ 	if (unlikely(fw->size < size)) {
+-		drm_warn(&i915->drm, "%s firmware %s: invalid size: %zu < %zu\n",
+-			 intel_uc_fw_type_repr(uc_fw->type), uc_fw->file_selected.path,
+-			 fw->size, size);
++		gt_warn(gt, "%s firmware %s: invalid size: %zu < %zu\n",
++			intel_uc_fw_type_repr(uc_fw->type), uc_fw->file_selected.path,
++			fw->size, size);
+ 		return -ENOEXEC;
+ 	}
+ 
+ 	/* Sanity check whether this fw is not larger than whole WOPCM memory */
+ 	size = __intel_uc_fw_get_upload_size(uc_fw);
+ 	if (unlikely(size >= gt->wopcm.size)) {
+-		drm_warn(&i915->drm, "%s firmware %s: invalid size: %zu > %zu\n",
+-			 intel_uc_fw_type_repr(uc_fw->type), uc_fw->file_selected.path,
+-			 size, (size_t)gt->wopcm.size);
++		gt_warn(gt, "%s firmware %s: invalid size: %zu > %zu\n",
++			intel_uc_fw_type_repr(uc_fw->type), uc_fw->file_selected.path,
++			size, (size_t)gt->wopcm.size);
+ 		return -E2BIG;
+ 	}
+ 
+@@ -624,9 +622,10 @@ static bool is_ver_8bit(struct intel_uc_fw_ver *ver)
+ 	return ver->major < 0xFF && ver->minor < 0xFF && ver->patch < 0xFF;
+ }
+ 
+-static bool guc_check_version_range(struct intel_uc_fw *uc_fw)
++static int guc_check_version_range(struct intel_uc_fw *uc_fw)
+ {
+ 	struct intel_guc *guc = container_of(uc_fw, struct intel_guc, fw);
++	struct intel_gt *gt = __uc_fw_to_gt(uc_fw);
+ 
+ 	/*
+ 	 * GuC version number components are defined as being 8-bits.
+@@ -635,24 +634,24 @@ static bool guc_check_version_range(struct intel_uc_fw *uc_fw)
+ 	 */
+ 
+ 	if (!is_ver_8bit(&uc_fw->file_selected.ver)) {
+-		drm_warn(&__uc_fw_to_gt(uc_fw)->i915->drm, "%s firmware: invalid file version: 0x%02X:%02X:%02X\n",
+-			 intel_uc_fw_type_repr(uc_fw->type),
+-			 uc_fw->file_selected.ver.major,
+-			 uc_fw->file_selected.ver.minor,
+-			 uc_fw->file_selected.ver.patch);
+-		return false;
++		gt_warn(gt, "%s firmware: invalid file version: 0x%02X:%02X:%02X\n",
++			intel_uc_fw_type_repr(uc_fw->type),
++			uc_fw->file_selected.ver.major,
++			uc_fw->file_selected.ver.minor,
++			uc_fw->file_selected.ver.patch);
++		return -EINVAL;
+ 	}
+ 
+ 	if (!is_ver_8bit(&guc->submission_version)) {
+-		drm_warn(&__uc_fw_to_gt(uc_fw)->i915->drm, "%s firmware: invalid submit version: 0x%02X:%02X:%02X\n",
+-			 intel_uc_fw_type_repr(uc_fw->type),
+-			 guc->submission_version.major,
+-			 guc->submission_version.minor,
+-			 guc->submission_version.patch);
+-		return false;
++		gt_warn(gt, "%s firmware: invalid submit version: 0x%02X:%02X:%02X\n",
++			intel_uc_fw_type_repr(uc_fw->type),
++			guc->submission_version.major,
++			guc->submission_version.minor,
++			guc->submission_version.patch);
++		return -EINVAL;
+ 	}
+ 
+-	return true;
++	return i915_inject_probe_error(gt->i915, -EINVAL);
+ }
+ 
+ static int check_fw_header(struct intel_gt *gt,
+@@ -687,10 +686,9 @@ static int try_firmware_load(struct intel_uc_fw *uc_fw, const struct firmware **
+ 		return err;
+ 
+ 	if ((*fw)->size > INTEL_UC_RSVD_GGTT_PER_FW) {
+-		drm_err(&gt->i915->drm,
+-			"%s firmware %s: size (%zuKB) exceeds max supported size (%uKB)\n",
+-			intel_uc_fw_type_repr(uc_fw->type), uc_fw->file_selected.path,
+-			(*fw)->size / SZ_1K, INTEL_UC_RSVD_GGTT_PER_FW / SZ_1K);
++		gt_err(gt, "%s firmware %s: size (%zuKB) exceeds max supported size (%uKB)\n",
++		       intel_uc_fw_type_repr(uc_fw->type), uc_fw->file_selected.path,
++		       (*fw)->size / SZ_1K, INTEL_UC_RSVD_GGTT_PER_FW / SZ_1K);
+ 
+ 		/* try to find another blob to load */
+ 		release_firmware(*fw);
+@@ -762,16 +760,19 @@ int intel_uc_fw_fetch(struct intel_uc_fw *uc_fw)
+ 	if (err)
+ 		goto fail;
+ 
+-	if (uc_fw->type == INTEL_UC_FW_TYPE_GUC && !guc_check_version_range(uc_fw))
+-		goto fail;
++	if (uc_fw->type == INTEL_UC_FW_TYPE_GUC) {
++		err = guc_check_version_range(uc_fw);
++		if (err)
++			goto fail;
++	}
+ 
+ 	if (uc_fw->file_wanted.ver.major && uc_fw->file_selected.ver.major) {
+ 		/* Check the file's major version was as it claimed */
+ 		if (uc_fw->file_selected.ver.major != uc_fw->file_wanted.ver.major) {
+-			drm_notice(&i915->drm, "%s firmware %s: unexpected version: %u.%u != %u.%u\n",
+-				   intel_uc_fw_type_repr(uc_fw->type), uc_fw->file_selected.path,
+-				   uc_fw->file_selected.ver.major, uc_fw->file_selected.ver.minor,
+-				   uc_fw->file_wanted.ver.major, uc_fw->file_wanted.ver.minor);
++			gt_notice(gt, "%s firmware %s: unexpected version: %u.%u != %u.%u\n",
++				  intel_uc_fw_type_repr(uc_fw->type), uc_fw->file_selected.path,
++				  uc_fw->file_selected.ver.major, uc_fw->file_selected.ver.minor,
++				  uc_fw->file_wanted.ver.major, uc_fw->file_wanted.ver.minor);
+ 			if (!intel_uc_fw_is_overridden(uc_fw)) {
+ 				err = -ENOEXEC;
+ 				goto fail;
+@@ -786,16 +787,14 @@ int intel_uc_fw_fetch(struct intel_uc_fw *uc_fw)
+ 		/* Preserve the version that was really wanted */
+ 		memcpy(&uc_fw->file_wanted, &file_ideal, sizeof(uc_fw->file_wanted));
+ 
+-		drm_notice(&i915->drm,
+-			   "%s firmware %s (%d.%d) is recommended, but only %s (%d.%d) was found\n",
+-			   intel_uc_fw_type_repr(uc_fw->type),
+-			   uc_fw->file_wanted.path,
+-			   uc_fw->file_wanted.ver.major, uc_fw->file_wanted.ver.minor,
+-			   uc_fw->file_selected.path,
+-			   uc_fw->file_selected.ver.major, uc_fw->file_selected.ver.minor);
+-		drm_info(&i915->drm,
+-			 "Consider updating your linux-firmware pkg or downloading from %s\n",
+-			 INTEL_UC_FIRMWARE_URL);
++		gt_notice(gt, "%s firmware %s (%d.%d) is recommended, but only %s (%d.%d) was found\n",
++			  intel_uc_fw_type_repr(uc_fw->type),
++			  uc_fw->file_wanted.path,
++			  uc_fw->file_wanted.ver.major, uc_fw->file_wanted.ver.minor,
++			  uc_fw->file_selected.path,
++			  uc_fw->file_selected.ver.major, uc_fw->file_selected.ver.minor);
++		gt_info(gt, "Consider updating your linux-firmware pkg or downloading from %s\n",
++			INTEL_UC_FIRMWARE_URL);
+ 	}
+ 
+ 	if (HAS_LMEM(i915)) {
+@@ -823,10 +822,10 @@ fail:
+ 				  INTEL_UC_FIRMWARE_MISSING :
+ 				  INTEL_UC_FIRMWARE_ERROR);
+ 
+-	i915_probe_error(i915, "%s firmware %s: fetch failed with error %d\n",
+-			 intel_uc_fw_type_repr(uc_fw->type), uc_fw->file_selected.path, err);
+-	drm_info(&i915->drm, "%s firmware(s) can be downloaded from %s\n",
+-		 intel_uc_fw_type_repr(uc_fw->type), INTEL_UC_FIRMWARE_URL);
++	gt_probe_error(gt, "%s firmware %s: fetch failed %pe\n",
++		       intel_uc_fw_type_repr(uc_fw->type), uc_fw->file_selected.path, ERR_PTR(err));
++	gt_info(gt, "%s firmware(s) can be downloaded from %s\n",
++		intel_uc_fw_type_repr(uc_fw->type), INTEL_UC_FIRMWARE_URL);
+ 
+ 	release_firmware(fw);		/* OK even if fw is NULL */
+ 	return err;
+@@ -932,9 +931,9 @@ static int uc_fw_xfer(struct intel_uc_fw *uc_fw, u32 dst_offset, u32 dma_flags)
+ 	/* Wait for DMA to finish */
+ 	ret = intel_wait_for_register_fw(uncore, DMA_CTRL, START_DMA, 0, 100);
+ 	if (ret)
+-		drm_err(&gt->i915->drm, "DMA for %s fw failed, DMA_CTRL=%u\n",
+-			intel_uc_fw_type_repr(uc_fw->type),
+-			intel_uncore_read_fw(uncore, DMA_CTRL));
++		gt_err(gt, "DMA for %s fw failed, DMA_CTRL=%u\n",
++		       intel_uc_fw_type_repr(uc_fw->type),
++		       intel_uncore_read_fw(uncore, DMA_CTRL));
+ 
+ 	/* Disable the bits once DMA is over */
+ 	intel_uncore_write_fw(uncore, DMA_CTRL, _MASKED_BIT_DISABLE(dma_flags));
+@@ -950,9 +949,8 @@ int intel_uc_fw_mark_load_failed(struct intel_uc_fw *uc_fw, int err)
+ 
+ 	GEM_BUG_ON(!intel_uc_fw_is_loadable(uc_fw));
+ 
+-	i915_probe_error(gt->i915, "Failed to load %s firmware %s (%d)\n",
+-			 intel_uc_fw_type_repr(uc_fw->type), uc_fw->file_selected.path,
+-			 err);
++	gt_probe_error(gt, "Failed to load %s firmware %s %pe\n",
++		       intel_uc_fw_type_repr(uc_fw->type), uc_fw->file_selected.path, ERR_PTR(err));
+ 	intel_uc_fw_change_status(uc_fw, INTEL_UC_FIRMWARE_LOAD_FAIL);
+ 
+ 	return err;
+@@ -1078,15 +1076,15 @@ int intel_uc_fw_init(struct intel_uc_fw *uc_fw)
+ 
+ 	err = i915_gem_object_pin_pages_unlocked(uc_fw->obj);
+ 	if (err) {
+-		DRM_DEBUG_DRIVER("%s fw pin-pages err=%d\n",
+-				 intel_uc_fw_type_repr(uc_fw->type), err);
++		gt_dbg(__uc_fw_to_gt(uc_fw), "%s fw pin-pages failed %pe\n",
++		       intel_uc_fw_type_repr(uc_fw->type), ERR_PTR(err));
+ 		goto out;
+ 	}
+ 
+ 	err = uc_fw_rsa_data_create(uc_fw);
+ 	if (err) {
+-		DRM_DEBUG_DRIVER("%s fw rsa data creation failed, err=%d\n",
+-				 intel_uc_fw_type_repr(uc_fw->type), err);
++		gt_dbg(__uc_fw_to_gt(uc_fw), "%s fw rsa data creation failed %pe\n",
++		       intel_uc_fw_type_repr(uc_fw->type), ERR_PTR(err));
+ 		goto out_unpin;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/i915/i915_pci.c b/drivers/gpu/drm/i915/i915_pci.c
+index a8d942b16223f..125f7ef1252c3 100644
+--- a/drivers/gpu/drm/i915/i915_pci.c
++++ b/drivers/gpu/drm/i915/i915_pci.c
+@@ -1135,6 +1135,8 @@ static const struct intel_gt_definition xelpmp_extra_gt[] = {
+ static const struct intel_device_info mtl_info = {
+ 	XE_HP_FEATURES,
+ 	XE_LPDP_FEATURES,
++	.__runtime.cpu_transcoder_mask = BIT(TRANSCODER_A) | BIT(TRANSCODER_B) |
++			       BIT(TRANSCODER_C) | BIT(TRANSCODER_D),
+ 	/*
+ 	 * Real graphics IP version will be obtained from hardware GMD_ID
+ 	 * register.  Value provided here is just for sanity checking.
+diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
+index 747b53b567a09..1f9382c6c81c6 100644
+--- a/drivers/gpu/drm/i915/i915_reg.h
++++ b/drivers/gpu/drm/i915/i915_reg.h
+@@ -7596,8 +7596,8 @@ enum skl_power_gate {
+ 
+ #define _PLANE_CSC_RY_GY_1(pipe)	_PIPE(pipe, _PLANE_CSC_RY_GY_1_A, \
+ 					      _PLANE_CSC_RY_GY_1_B)
+-#define _PLANE_CSC_RY_GY_2(pipe)	_PIPE(pipe, _PLANE_INPUT_CSC_RY_GY_2_A, \
+-					      _PLANE_INPUT_CSC_RY_GY_2_B)
++#define _PLANE_CSC_RY_GY_2(pipe)	_PIPE(pipe, _PLANE_CSC_RY_GY_2_A, \
++					      _PLANE_CSC_RY_GY_2_B)
+ #define PLANE_CSC_COEFF(pipe, plane, index)	_MMIO_PLANE(plane, \
+ 							    _PLANE_CSC_RY_GY_1(pipe) +  (index) * 4, \
+ 							    _PLANE_CSC_RY_GY_2(pipe) + (index) * 4)
+diff --git a/drivers/gpu/drm/i915/i915_reg_defs.h b/drivers/gpu/drm/i915/i915_reg_defs.h
+index be43580a69793..983c5aa3045b3 100644
+--- a/drivers/gpu/drm/i915/i915_reg_defs.h
++++ b/drivers/gpu/drm/i915/i915_reg_defs.h
+@@ -119,6 +119,35 @@
+  */
+ #define _PICK_EVEN(__index, __a, __b) ((__a) + (__index) * ((__b) - (__a)))
+ 
++/*
++ * Like _PICK_EVEN(), but supports 2 ranges of evenly spaced address offsets.
++ * @__c_index corresponds to the index in which the second range starts to be
++ * used. Using math interval notation, the first range is used for indexes [ 0,
++ * @__c_index), while the second range is used for [ @__c_index, ... ). Example:
++ *
++ * #define _FOO_A			0xf000
++ * #define _FOO_B			0xf004
++ * #define _FOO_C			0xf008
++ * #define _SUPER_FOO_A			0xa000
++ * #define _SUPER_FOO_B			0xa100
++ * #define FOO(x)			_MMIO(_PICK_EVEN_2RANGES(x, 3,		\
++ *					      _FOO_A, _FOO_B,			\
++ *					      _SUPER_FOO_A, _SUPER_FOO_B))
++ *
++ * This expands to:
++ *	0: 0xf000,
++ *	1: 0xf004,
++ *	2: 0xf008,
++ *	3: 0xa000,
++ *	4: 0xa100,
++ *	5: 0xa200,
++ *	...
++ */
++#define _PICK_EVEN_2RANGES(__index, __c_index, __a, __b, __c, __d)		\
++	(BUILD_BUG_ON_ZERO(!__is_constexpr(__c_index)) +			\
++	 ((__index) < (__c_index) ? _PICK_EVEN(__index, __a, __b) :		\
++				   _PICK_EVEN((__index) - (__c_index), __c, __d)))
++
+ /*
+  * Given the arbitrary numbers in varargs, pick the 0-based __index'th number.
+  *
+diff --git a/drivers/gpu/drm/msm/adreno/adreno_device.c b/drivers/gpu/drm/msm/adreno/adreno_device.c
+index cd009d56d35d5..ed1e0c650bb1a 100644
+--- a/drivers/gpu/drm/msm/adreno/adreno_device.c
++++ b/drivers/gpu/drm/msm/adreno/adreno_device.c
+@@ -440,20 +440,21 @@ struct msm_gpu *adreno_load_gpu(struct drm_device *dev)
+ 
+ 	ret = pm_runtime_get_sync(&pdev->dev);
+ 	if (ret < 0) {
+-		pm_runtime_put_sync(&pdev->dev);
++		pm_runtime_put_noidle(&pdev->dev);
+ 		DRM_DEV_ERROR(dev->dev, "Couldn't power up the GPU: %d\n", ret);
+-		return NULL;
++		goto err_disable_rpm;
+ 	}
+ 
+ 	mutex_lock(&gpu->lock);
+ 	ret = msm_gpu_hw_init(gpu);
+ 	mutex_unlock(&gpu->lock);
+-	pm_runtime_put_autosuspend(&pdev->dev);
+ 	if (ret) {
+ 		DRM_DEV_ERROR(dev->dev, "gpu hw init failed: %d\n", ret);
+-		return NULL;
++		goto err_put_rpm;
+ 	}
+ 
++	pm_runtime_put_autosuspend(&pdev->dev);
++
+ #ifdef CONFIG_DEBUG_FS
+ 	if (gpu->funcs->debugfs_init) {
+ 		gpu->funcs->debugfs_init(gpu, dev->primary);
+@@ -462,6 +463,13 @@ struct msm_gpu *adreno_load_gpu(struct drm_device *dev)
+ #endif
+ 
+ 	return gpu;
++
++err_put_rpm:
++	pm_runtime_put_sync_suspend(&pdev->dev);
++err_disable_rpm:
++	pm_runtime_disable(&pdev->dev);
++
++	return NULL;
+ }
+ 
+ static int find_chipid(struct device *dev, struct adreno_rev *rev)
+diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
+index 9ded384acba46..73c597565f99c 100644
+--- a/drivers/gpu/drm/msm/msm_drv.c
++++ b/drivers/gpu/drm/msm/msm_drv.c
+@@ -51,6 +51,8 @@
+ #define MSM_VERSION_MINOR	10
+ #define MSM_VERSION_PATCHLEVEL	0
+ 
++static void msm_deinit_vram(struct drm_device *ddev);
++
+ static const struct drm_mode_config_funcs mode_config_funcs = {
+ 	.fb_create = msm_framebuffer_create,
+ 	.output_poll_changed = drm_fb_helper_output_poll_changed,
+@@ -242,7 +244,8 @@ static int msm_drm_uninit(struct device *dev)
+ 		msm_fbdev_free(ddev);
+ #endif
+ 
+-	msm_disp_snapshot_destroy(ddev);
++	if (kms)
++		msm_disp_snapshot_destroy(ddev);
+ 
+ 	drm_mode_config_cleanup(ddev);
+ 
+@@ -250,19 +253,16 @@ static int msm_drm_uninit(struct device *dev)
+ 		drm_bridge_remove(priv->bridges[i]);
+ 	priv->num_bridges = 0;
+ 
+-	pm_runtime_get_sync(dev);
+-	msm_irq_uninstall(ddev);
+-	pm_runtime_put_sync(dev);
++	if (kms) {
++		pm_runtime_get_sync(dev);
++		msm_irq_uninstall(ddev);
++		pm_runtime_put_sync(dev);
++	}
+ 
+ 	if (kms && kms->funcs)
+ 		kms->funcs->destroy(kms);
+ 
+-	if (priv->vram.paddr) {
+-		unsigned long attrs = DMA_ATTR_NO_KERNEL_MAPPING;
+-		drm_mm_takedown(&priv->vram.mm);
+-		dma_free_attrs(dev, priv->vram.size, NULL,
+-			       priv->vram.paddr, attrs);
+-	}
++	msm_deinit_vram(ddev);
+ 
+ 	component_unbind_all(dev, ddev);
+ 
+@@ -400,6 +400,19 @@ static int msm_init_vram(struct drm_device *dev)
+ 	return ret;
+ }
+ 
++static void msm_deinit_vram(struct drm_device *ddev)
++{
++	struct msm_drm_private *priv = ddev->dev_private;
++	unsigned long attrs = DMA_ATTR_NO_KERNEL_MAPPING;
++
++	if (!priv->vram.paddr)
++		return;
++
++	drm_mm_takedown(&priv->vram.mm);
++	dma_free_attrs(ddev->dev, priv->vram.size, NULL, priv->vram.paddr,
++			attrs);
++}
++
+ static int msm_drm_init(struct device *dev, const struct drm_driver *drv)
+ {
+ 	struct msm_drm_private *priv = dev_get_drvdata(dev);
+@@ -419,6 +432,10 @@ static int msm_drm_init(struct device *dev, const struct drm_driver *drv)
+ 	priv->dev = ddev;
+ 
+ 	priv->wq = alloc_ordered_workqueue("msm", 0);
++	if (!priv->wq) {
++		ret = -ENOMEM;
++		goto err_put_dev;
++	}
+ 
+ 	INIT_LIST_HEAD(&priv->objects);
+ 	mutex_init(&priv->obj_lock);
+@@ -441,12 +458,12 @@ static int msm_drm_init(struct device *dev, const struct drm_driver *drv)
+ 
+ 	ret = msm_init_vram(ddev);
+ 	if (ret)
+-		return ret;
++		goto err_cleanup_mode_config;
+ 
+ 	/* Bind all our sub-components: */
+ 	ret = component_bind_all(dev, ddev);
+ 	if (ret)
+-		return ret;
++		goto err_deinit_vram;
+ 
+ 	dma_set_max_seg_size(dev, UINT_MAX);
+ 
+@@ -541,6 +558,17 @@ static int msm_drm_init(struct device *dev, const struct drm_driver *drv)
+ 
+ err_msm_uninit:
+ 	msm_drm_uninit(dev);
++
++	return ret;
++
++err_deinit_vram:
++	msm_deinit_vram(ddev);
++err_cleanup_mode_config:
++	drm_mode_config_cleanup(ddev);
++	destroy_workqueue(priv->wq);
++err_put_dev:
++	drm_dev_put(ddev);
++
+ 	return ret;
+ }
+ 
+diff --git a/drivers/gpu/drm/panel/panel-orisetech-otm8009a.c b/drivers/gpu/drm/panel/panel-orisetech-otm8009a.c
+index b4729a94c34a8..898b892f11439 100644
+--- a/drivers/gpu/drm/panel/panel-orisetech-otm8009a.c
++++ b/drivers/gpu/drm/panel/panel-orisetech-otm8009a.c
+@@ -471,7 +471,7 @@ static int otm8009a_probe(struct mipi_dsi_device *dsi)
+ 		       DRM_MODE_CONNECTOR_DSI);
+ 
+ 	ctx->bl_dev = devm_backlight_device_register(dev, dev_name(dev),
+-						     dsi->host->dev, ctx,
++						     dev, ctx,
+ 						     &otm8009a_backlight_ops,
+ 						     NULL);
+ 	if (IS_ERR(ctx->bl_dev)) {
+diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
+index 9312d611db8e5..0c6a82c665c1d 100644
+--- a/drivers/hid/wacom_wac.c
++++ b/drivers/hid/wacom_wac.c
+@@ -1308,6 +1308,9 @@ static void wacom_intuos_pro2_bt_pen(struct wacom_wac *wacom)
+ 
+ 	struct input_dev *pen_input = wacom->pen_input;
+ 	unsigned char *data = wacom->data;
++	int number_of_valid_frames = 0;
++	int time_interval = 15000000;
++	ktime_t time_packet_received = ktime_get();
+ 	int i;
+ 
+ 	if (wacom->features.type == INTUOSP2_BT ||
+@@ -1328,12 +1331,30 @@ static void wacom_intuos_pro2_bt_pen(struct wacom_wac *wacom)
+ 		wacom->id[0] |= (wacom->serial[0] >> 32) & 0xFFFFF;
+ 	}
+ 
++	/* number of valid frames */
+ 	for (i = 0; i < pen_frames; i++) {
+ 		unsigned char *frame = &data[i*pen_frame_len + 1];
+ 		bool valid = frame[0] & 0x80;
++
++		if (valid)
++			number_of_valid_frames++;
++	}
++
++	if (number_of_valid_frames) {
++		if (wacom->hid_data.time_delayed)
++			time_interval = ktime_get() - wacom->hid_data.time_delayed;
++		time_interval /= number_of_valid_frames;
++		wacom->hid_data.time_delayed = time_packet_received;
++	}
++
++	for (i = 0; i < number_of_valid_frames; i++) {
++		unsigned char *frame = &data[i*pen_frame_len + 1];
++		bool valid = frame[0] & 0x80;
+ 		bool prox = frame[0] & 0x40;
+ 		bool range = frame[0] & 0x20;
+ 		bool invert = frame[0] & 0x10;
++		int frames_number_reversed = number_of_valid_frames - i - 1;
++		int event_timestamp = time_packet_received - frames_number_reversed * time_interval;
+ 
+ 		if (!valid)
+ 			continue;
+@@ -1346,6 +1367,7 @@ static void wacom_intuos_pro2_bt_pen(struct wacom_wac *wacom)
+ 			wacom->tool[0] = 0;
+ 			wacom->id[0] = 0;
+ 			wacom->serial[0] = 0;
++			wacom->hid_data.time_delayed = 0;
+ 			return;
+ 		}
+ 
+@@ -1382,6 +1404,7 @@ static void wacom_intuos_pro2_bt_pen(struct wacom_wac *wacom)
+ 						 get_unaligned_le16(&frame[11]));
+ 			}
+ 		}
++
+ 		if (wacom->tool[0]) {
+ 			input_report_abs(pen_input, ABS_PRESSURE, get_unaligned_le16(&frame[5]));
+ 			if (wacom->features.type == INTUOSP2_BT ||
+@@ -1405,6 +1428,9 @@ static void wacom_intuos_pro2_bt_pen(struct wacom_wac *wacom)
+ 
+ 		wacom->shared->stylus_in_proximity = prox;
+ 
++		/* add timestamp to unpack the frames */
++		input_set_timestamp(pen_input, event_timestamp);
++
+ 		input_sync(pen_input);
+ 	}
+ }
+@@ -1895,6 +1921,7 @@ static void wacom_map_usage(struct input_dev *input, struct hid_usage *usage,
+ 	int fmax = field->logical_maximum;
+ 	unsigned int equivalent_usage = wacom_equivalent_usage(usage->hid);
+ 	int resolution_code = code;
++	int resolution = hidinput_calc_abs_res(field, resolution_code);
+ 
+ 	if (equivalent_usage == HID_DG_TWIST) {
+ 		resolution_code = ABS_RZ;
+@@ -1915,8 +1942,15 @@ static void wacom_map_usage(struct input_dev *input, struct hid_usage *usage,
+ 	switch (type) {
+ 	case EV_ABS:
+ 		input_set_abs_params(input, code, fmin, fmax, fuzz, 0);
+-		input_abs_set_res(input, code,
+-				  hidinput_calc_abs_res(field, resolution_code));
++
++		/* older tablet may miss physical usage */
++		if ((code == ABS_X || code == ABS_Y) && !resolution) {
++			resolution = WACOM_INTUOS_RES;
++			hid_warn(input,
++				 "Wacom usage (%d) missing resolution \n",
++				 code);
++		}
++		input_abs_set_res(input, code, resolution);
+ 		break;
+ 	case EV_KEY:
+ 	case EV_MSC:
+diff --git a/drivers/hid/wacom_wac.h b/drivers/hid/wacom_wac.h
+index 16f221388563d..1a40bb8c5810c 100644
+--- a/drivers/hid/wacom_wac.h
++++ b/drivers/hid/wacom_wac.h
+@@ -324,6 +324,7 @@ struct hid_data {
+ 	int ps_connected;
+ 	bool pad_input_event_flag;
+ 	unsigned short sequence_number;
++	int time_delayed;
+ };
+ 
+ struct wacom_remote_data {
+diff --git a/drivers/i2c/busses/i2c-gxp.c b/drivers/i2c/busses/i2c-gxp.c
+index d4b55d989a268..8ea3fb5e4c7f7 100644
+--- a/drivers/i2c/busses/i2c-gxp.c
++++ b/drivers/i2c/busses/i2c-gxp.c
+@@ -353,7 +353,6 @@ static void gxp_i2c_chk_data_ack(struct gxp_i2c_drvdata *drvdata)
+ 	writew(value, drvdata->base + GXP_I2CMCMD);
+ }
+ 
+-#if IS_ENABLED(CONFIG_I2C_SLAVE)
+ static bool gxp_i2c_slave_irq_handler(struct gxp_i2c_drvdata *drvdata)
+ {
+ 	u8 value;
+@@ -437,7 +436,6 @@ static bool gxp_i2c_slave_irq_handler(struct gxp_i2c_drvdata *drvdata)
+ 
+ 	return true;
+ }
+-#endif
+ 
+ static irqreturn_t gxp_i2c_irq_handler(int irq, void *_drvdata)
+ {
+diff --git a/drivers/i2c/busses/i2c-tegra.c b/drivers/i2c/busses/i2c-tegra.c
+index 6aab84c8d22b4..157066f06a32d 100644
+--- a/drivers/i2c/busses/i2c-tegra.c
++++ b/drivers/i2c/busses/i2c-tegra.c
+@@ -242,9 +242,10 @@ struct tegra_i2c_hw_feature {
+  * @is_dvc: identifies the DVC I2C controller, has a different register layout
+  * @is_vi: identifies the VI I2C controller, has a different register layout
+  * @msg_complete: transfer completion notifier
++ * @msg_buf_remaining: size of unsent data in the message buffer
++ * @msg_len: length of message in current transfer
+  * @msg_err: error code for completed message
+  * @msg_buf: pointer to current message data
+- * @msg_buf_remaining: size of unsent data in the message buffer
+  * @msg_read: indicates that the transfer is a read access
+  * @timings: i2c timings information like bus frequency
+  * @multimaster_mode: indicates that I2C controller is in multi-master mode
+@@ -277,6 +278,7 @@ struct tegra_i2c_dev {
+ 
+ 	struct completion msg_complete;
+ 	size_t msg_buf_remaining;
++	unsigned int msg_len;
+ 	int msg_err;
+ 	u8 *msg_buf;
+ 
+@@ -1169,7 +1171,7 @@ static void tegra_i2c_push_packet_header(struct tegra_i2c_dev *i2c_dev,
+ 	else
+ 		i2c_writel(i2c_dev, packet_header, I2C_TX_FIFO);
+ 
+-	packet_header = msg->len - 1;
++	packet_header = i2c_dev->msg_len - 1;
+ 
+ 	if (i2c_dev->dma_mode && !i2c_dev->msg_read)
+ 		*dma_buf++ = packet_header;
+@@ -1242,20 +1244,32 @@ static int tegra_i2c_xfer_msg(struct tegra_i2c_dev *i2c_dev,
+ 		return err;
+ 
+ 	i2c_dev->msg_buf = msg->buf;
++	i2c_dev->msg_len = msg->len;
+ 
+-	/* The condition true implies smbus block read and len is already read */
+-	if (msg->flags & I2C_M_RECV_LEN && end_state != MSG_END_CONTINUE)
+-		i2c_dev->msg_buf = msg->buf + 1;
+-
+-	i2c_dev->msg_buf_remaining = msg->len;
+ 	i2c_dev->msg_err = I2C_ERR_NONE;
+ 	i2c_dev->msg_read = !!(msg->flags & I2C_M_RD);
+ 	reinit_completion(&i2c_dev->msg_complete);
+ 
++	/*
++	 * For SMBUS block read command, read only 1 byte in the first transfer.
++	 * Adjust that 1 byte for the next transfer in the msg buffer and msg
++	 * length.
++	 */
++	if (msg->flags & I2C_M_RECV_LEN) {
++		if (end_state == MSG_END_CONTINUE) {
++			i2c_dev->msg_len = 1;
++		} else {
++			i2c_dev->msg_buf += 1;
++			i2c_dev->msg_len -= 1;
++		}
++	}
++
++	i2c_dev->msg_buf_remaining = i2c_dev->msg_len;
++
+ 	if (i2c_dev->msg_read)
+-		xfer_size = msg->len;
++		xfer_size = i2c_dev->msg_len;
+ 	else
+-		xfer_size = msg->len + I2C_PACKET_HEADER_SIZE;
++		xfer_size = i2c_dev->msg_len + I2C_PACKET_HEADER_SIZE;
+ 
+ 	xfer_size = ALIGN(xfer_size, BYTES_PER_FIFO_WORD);
+ 
+@@ -1295,7 +1309,7 @@ static int tegra_i2c_xfer_msg(struct tegra_i2c_dev *i2c_dev,
+ 	if (!i2c_dev->msg_read) {
+ 		if (i2c_dev->dma_mode) {
+ 			memcpy(i2c_dev->dma_buf + I2C_PACKET_HEADER_SIZE,
+-			       msg->buf, msg->len);
++			       msg->buf, i2c_dev->msg_len);
+ 
+ 			dma_sync_single_for_device(i2c_dev->dma_dev,
+ 						   i2c_dev->dma_phys,
+@@ -1352,7 +1366,7 @@ static int tegra_i2c_xfer_msg(struct tegra_i2c_dev *i2c_dev,
+ 						i2c_dev->dma_phys,
+ 						xfer_size, DMA_FROM_DEVICE);
+ 
+-			memcpy(i2c_dev->msg_buf, i2c_dev->dma_buf, msg->len);
++			memcpy(i2c_dev->msg_buf, i2c_dev->dma_buf, i2c_dev->msg_len);
+ 		}
+ 	}
+ 
+@@ -1408,8 +1422,8 @@ static int tegra_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msgs[],
+ 			ret = tegra_i2c_xfer_msg(i2c_dev, &msgs[i], MSG_END_CONTINUE);
+ 			if (ret)
+ 				break;
+-			/* Set the read byte as msg len */
+-			msgs[i].len = msgs[i].buf[0];
++			/* Set the msg length from first byte */
++			msgs[i].len += msgs[i].buf[0];
+ 			dev_dbg(i2c_dev->dev, "reading %d bytes\n", msgs[i].len);
+ 		}
+ 		ret = tegra_i2c_xfer_msg(i2c_dev, &msgs[i], end_type);
+diff --git a/drivers/infiniband/sw/rxe/rxe.c b/drivers/infiniband/sw/rxe/rxe.c
+index a3f05fdd9fac2..7a7e713de52db 100644
+--- a/drivers/infiniband/sw/rxe/rxe.c
++++ b/drivers/infiniband/sw/rxe/rxe.c
+@@ -160,6 +160,8 @@ void rxe_set_mtu(struct rxe_dev *rxe, unsigned int ndev_mtu)
+ 
+ 	port->attr.active_mtu = mtu;
+ 	port->mtu_cap = ib_mtu_enum_to_int(mtu);
++
++	rxe_info_dev(rxe, "Set mtu to %d", port->mtu_cap);
+ }
+ 
+ /* called by ifc layer to create new rxe device.
+@@ -179,7 +181,7 @@ static int rxe_newlink(const char *ibdev_name, struct net_device *ndev)
+ 	int err = 0;
+ 
+ 	if (is_vlan_dev(ndev)) {
+-		pr_err("rxe creation allowed on top of a real device only\n");
++		rxe_err("rxe creation allowed on top of a real device only");
+ 		err = -EPERM;
+ 		goto err;
+ 	}
+@@ -187,14 +189,14 @@ static int rxe_newlink(const char *ibdev_name, struct net_device *ndev)
+ 	rxe = rxe_get_dev_from_net(ndev);
+ 	if (rxe) {
+ 		ib_device_put(&rxe->ib_dev);
+-		rxe_dbg(rxe, "already configured on %s\n", ndev->name);
++		rxe_err_dev(rxe, "already configured on %s", ndev->name);
+ 		err = -EEXIST;
+ 		goto err;
+ 	}
+ 
+ 	err = rxe_net_add(ibdev_name, ndev);
+ 	if (err) {
+-		pr_debug("failed to add %s\n", ndev->name);
++		rxe_err("failed to add %s\n", ndev->name);
+ 		goto err;
+ 	}
+ err:
+diff --git a/drivers/infiniband/sw/rxe/rxe.h b/drivers/infiniband/sw/rxe/rxe.h
+index 2415f3704f576..bd8a8ea4ea8fd 100644
+--- a/drivers/infiniband/sw/rxe/rxe.h
++++ b/drivers/infiniband/sw/rxe/rxe.h
+@@ -38,7 +38,8 @@
+ 
+ #define RXE_ROCE_V2_SPORT		(0xc000)
+ 
+-#define rxe_dbg(rxe, fmt, ...) ibdev_dbg(&(rxe)->ib_dev,		\
++#define rxe_dbg(fmt, ...) pr_debug("%s: " fmt "\n", __func__, ##__VA_ARGS__)
++#define rxe_dbg_dev(rxe, fmt, ...) ibdev_dbg(&(rxe)->ib_dev,		\
+ 		"%s: " fmt, __func__, ##__VA_ARGS__)
+ #define rxe_dbg_uc(uc, fmt, ...) ibdev_dbg((uc)->ibuc.device,		\
+ 		"uc#%d %s: " fmt, (uc)->elem.index, __func__, ##__VA_ARGS__)
+@@ -57,6 +58,48 @@
+ #define rxe_dbg_mw(mw, fmt, ...) ibdev_dbg((mw)->ibmw.device,		\
+ 		"mw#%d %s:  " fmt, (mw)->elem.index, __func__, ##__VA_ARGS__)
+ 
++#define rxe_err(fmt, ...) pr_err_ratelimited("%s: " fmt "\n", __func__, \
++					##__VA_ARGS__)
++#define rxe_err_dev(rxe, fmt, ...) ibdev_err_ratelimited(&(rxe)->ib_dev, \
++		"%s: " fmt, __func__, ##__VA_ARGS__)
++#define rxe_err_uc(uc, fmt, ...) ibdev_err_ratelimited((uc)->ibuc.device, \
++		"uc#%d %s: " fmt, (uc)->elem.index, __func__, ##__VA_ARGS__)
++#define rxe_err_pd(pd, fmt, ...) ibdev_err_ratelimited((pd)->ibpd.device, \
++		"pd#%d %s: " fmt, (pd)->elem.index, __func__, ##__VA_ARGS__)
++#define rxe_err_ah(ah, fmt, ...) ibdev_err_ratelimited((ah)->ibah.device, \
++		"ah#%d %s: " fmt, (ah)->elem.index, __func__, ##__VA_ARGS__)
++#define rxe_err_srq(srq, fmt, ...) ibdev_err_ratelimited((srq)->ibsrq.device, \
++		"srq#%d %s: " fmt, (srq)->elem.index, __func__, ##__VA_ARGS__)
++#define rxe_err_qp(qp, fmt, ...) ibdev_err_ratelimited((qp)->ibqp.device, \
++		"qp#%d %s: " fmt, (qp)->elem.index, __func__, ##__VA_ARGS__)
++#define rxe_err_cq(cq, fmt, ...) ibdev_err_ratelimited((cq)->ibcq.device, \
++		"cq#%d %s: " fmt, (cq)->elem.index, __func__, ##__VA_ARGS__)
++#define rxe_err_mr(mr, fmt, ...) ibdev_err_ratelimited((mr)->ibmr.device, \
++		"mr#%d %s:  " fmt, (mr)->elem.index, __func__, ##__VA_ARGS__)
++#define rxe_err_mw(mw, fmt, ...) ibdev_err_ratelimited((mw)->ibmw.device, \
++		"mw#%d %s:  " fmt, (mw)->elem.index, __func__, ##__VA_ARGS__)
++
++#define rxe_info(fmt, ...) pr_info_ratelimited("%s: " fmt "\n", __func__, \
++					##__VA_ARGS__)
++#define rxe_info_dev(rxe, fmt, ...) ibdev_info_ratelimited(&(rxe)->ib_dev, \
++		"%s: " fmt, __func__, ##__VA_ARGS__)
++#define rxe_info_uc(uc, fmt, ...) ibdev_info_ratelimited((uc)->ibuc.device, \
++		"uc#%d %s: " fmt, (uc)->elem.index, __func__, ##__VA_ARGS__)
++#define rxe_info_pd(pd, fmt, ...) ibdev_info_ratelimited((pd)->ibpd.device, \
++		"pd#%d %s: " fmt, (pd)->elem.index, __func__, ##__VA_ARGS__)
++#define rxe_info_ah(ah, fmt, ...) ibdev_info_ratelimited((ah)->ibah.device, \
++		"ah#%d %s: " fmt, (ah)->elem.index, __func__, ##__VA_ARGS__)
++#define rxe_info_srq(srq, fmt, ...) ibdev_info_ratelimited((srq)->ibsrq.device, \
++		"srq#%d %s: " fmt, (srq)->elem.index, __func__, ##__VA_ARGS__)
++#define rxe_info_qp(qp, fmt, ...) ibdev_info_ratelimited((qp)->ibqp.device, \
++		"qp#%d %s: " fmt, (qp)->elem.index, __func__, ##__VA_ARGS__)
++#define rxe_info_cq(cq, fmt, ...) ibdev_info_ratelimited((cq)->ibcq.device, \
++		"cq#%d %s: " fmt, (cq)->elem.index, __func__, ##__VA_ARGS__)
++#define rxe_info_mr(mr, fmt, ...) ibdev_info_ratelimited((mr)->ibmr.device, \
++		"mr#%d %s:  " fmt, (mr)->elem.index, __func__, ##__VA_ARGS__)
++#define rxe_info_mw(mw, fmt, ...) ibdev_info_ratelimited((mw)->ibmw.device, \
++		"mw#%d %s:  " fmt, (mw)->elem.index, __func__, ##__VA_ARGS__)
++
+ /* responder states */
+ enum resp_states {
+ 	RESPST_NONE,
+diff --git a/drivers/infiniband/sw/rxe/rxe_cq.c b/drivers/infiniband/sw/rxe/rxe_cq.c
+index faf49c50bbaba..519ddec29b4ba 100644
+--- a/drivers/infiniband/sw/rxe/rxe_cq.c
++++ b/drivers/infiniband/sw/rxe/rxe_cq.c
+@@ -14,12 +14,12 @@ int rxe_cq_chk_attr(struct rxe_dev *rxe, struct rxe_cq *cq,
+ 	int count;
+ 
+ 	if (cqe <= 0) {
+-		rxe_dbg(rxe, "cqe(%d) <= 0\n", cqe);
++		rxe_dbg_dev(rxe, "cqe(%d) <= 0\n", cqe);
+ 		goto err1;
+ 	}
+ 
+ 	if (cqe > rxe->attr.max_cqe) {
+-		rxe_dbg(rxe, "cqe(%d) > max_cqe(%d)\n",
++		rxe_dbg_dev(rxe, "cqe(%d) > max_cqe(%d)\n",
+ 				cqe, rxe->attr.max_cqe);
+ 		goto err1;
+ 	}
+@@ -50,7 +50,7 @@ int rxe_cq_from_init(struct rxe_dev *rxe, struct rxe_cq *cq, int cqe,
+ 	cq->queue = rxe_queue_init(rxe, &cqe,
+ 			sizeof(struct rxe_cqe), type);
+ 	if (!cq->queue) {
+-		rxe_dbg(rxe, "unable to create cq\n");
++		rxe_dbg_dev(rxe, "unable to create cq\n");
+ 		return -ENOMEM;
+ 	}
+ 
+diff --git a/drivers/infiniband/sw/rxe/rxe_icrc.c b/drivers/infiniband/sw/rxe/rxe_icrc.c
+index 71bc2c1895888..fdf5f08cd8f17 100644
+--- a/drivers/infiniband/sw/rxe/rxe_icrc.c
++++ b/drivers/infiniband/sw/rxe/rxe_icrc.c
+@@ -21,7 +21,7 @@ int rxe_icrc_init(struct rxe_dev *rxe)
+ 
+ 	tfm = crypto_alloc_shash("crc32", 0, 0);
+ 	if (IS_ERR(tfm)) {
+-		rxe_dbg(rxe, "failed to init crc32 algorithm err: %ld\n",
++		rxe_dbg_dev(rxe, "failed to init crc32 algorithm err: %ld\n",
+ 			       PTR_ERR(tfm));
+ 		return PTR_ERR(tfm);
+ 	}
+@@ -51,7 +51,7 @@ static __be32 rxe_crc32(struct rxe_dev *rxe, __be32 crc, void *next, size_t len)
+ 	*(__be32 *)shash_desc_ctx(shash) = crc;
+ 	err = crypto_shash_update(shash, next, len);
+ 	if (unlikely(err)) {
+-		rxe_dbg(rxe, "failed crc calculation, err: %d\n", err);
++		rxe_dbg_dev(rxe, "failed crc calculation, err: %d\n", err);
+ 		return (__force __be32)crc32_le((__force u32)crc, next, len);
+ 	}
+ 
+diff --git a/drivers/infiniband/sw/rxe/rxe_mmap.c b/drivers/infiniband/sw/rxe/rxe_mmap.c
+index a47d72dbc5376..6b7f2bd698799 100644
+--- a/drivers/infiniband/sw/rxe/rxe_mmap.c
++++ b/drivers/infiniband/sw/rxe/rxe_mmap.c
+@@ -79,7 +79,7 @@ int rxe_mmap(struct ib_ucontext *context, struct vm_area_struct *vma)
+ 
+ 		/* Don't allow a mmap larger than the object. */
+ 		if (size > ip->info.size) {
+-			rxe_dbg(rxe, "mmap region is larger than the object!\n");
++			rxe_dbg_dev(rxe, "mmap region is larger than the object!\n");
+ 			spin_unlock_bh(&rxe->pending_lock);
+ 			ret = -EINVAL;
+ 			goto done;
+@@ -87,7 +87,7 @@ int rxe_mmap(struct ib_ucontext *context, struct vm_area_struct *vma)
+ 
+ 		goto found_it;
+ 	}
+-	rxe_dbg(rxe, "unable to find pending mmap info\n");
++	rxe_dbg_dev(rxe, "unable to find pending mmap info\n");
+ 	spin_unlock_bh(&rxe->pending_lock);
+ 	ret = -EINVAL;
+ 	goto done;
+@@ -98,7 +98,7 @@ found_it:
+ 
+ 	ret = remap_vmalloc_range(vma, ip->obj, 0);
+ 	if (ret) {
+-		rxe_dbg(rxe, "err %d from remap_vmalloc_range\n", ret);
++		rxe_dbg_dev(rxe, "err %d from remap_vmalloc_range\n", ret);
+ 		goto done;
+ 	}
+ 
+diff --git a/drivers/infiniband/sw/rxe/rxe_net.c b/drivers/infiniband/sw/rxe/rxe_net.c
+index e02e1624bcf4d..a2ace42e95366 100644
+--- a/drivers/infiniband/sw/rxe/rxe_net.c
++++ b/drivers/infiniband/sw/rxe/rxe_net.c
+@@ -596,7 +596,7 @@ static int rxe_notify(struct notifier_block *not_blk,
+ 		rxe_port_down(rxe);
+ 		break;
+ 	case NETDEV_CHANGEMTU:
+-		rxe_dbg(rxe, "%s changed mtu to %d\n", ndev->name, ndev->mtu);
++		rxe_dbg_dev(rxe, "%s changed mtu to %d\n", ndev->name, ndev->mtu);
+ 		rxe_set_mtu(rxe, ndev->mtu);
+ 		break;
+ 	case NETDEV_CHANGE:
+@@ -608,7 +608,7 @@ static int rxe_notify(struct notifier_block *not_blk,
+ 	case NETDEV_CHANGENAME:
+ 	case NETDEV_FEAT_CHANGE:
+ 	default:
+-		rxe_dbg(rxe, "ignoring netdev event = %ld for %s\n",
++		rxe_dbg_dev(rxe, "ignoring netdev event = %ld for %s\n",
+ 			event, ndev->name);
+ 		break;
+ 	}
+diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c
+index 13283ec06f95e..d5de5ba6940f1 100644
+--- a/drivers/infiniband/sw/rxe/rxe_qp.c
++++ b/drivers/infiniband/sw/rxe/rxe_qp.c
+@@ -19,33 +19,33 @@ static int rxe_qp_chk_cap(struct rxe_dev *rxe, struct ib_qp_cap *cap,
+ 			  int has_srq)
+ {
+ 	if (cap->max_send_wr > rxe->attr.max_qp_wr) {
+-		rxe_dbg(rxe, "invalid send wr = %u > %d\n",
++		rxe_dbg_dev(rxe, "invalid send wr = %u > %d\n",
+ 			 cap->max_send_wr, rxe->attr.max_qp_wr);
+ 		goto err1;
+ 	}
+ 
+ 	if (cap->max_send_sge > rxe->attr.max_send_sge) {
+-		rxe_dbg(rxe, "invalid send sge = %u > %d\n",
++		rxe_dbg_dev(rxe, "invalid send sge = %u > %d\n",
+ 			 cap->max_send_sge, rxe->attr.max_send_sge);
+ 		goto err1;
+ 	}
+ 
+ 	if (!has_srq) {
+ 		if (cap->max_recv_wr > rxe->attr.max_qp_wr) {
+-			rxe_dbg(rxe, "invalid recv wr = %u > %d\n",
++			rxe_dbg_dev(rxe, "invalid recv wr = %u > %d\n",
+ 				 cap->max_recv_wr, rxe->attr.max_qp_wr);
+ 			goto err1;
+ 		}
+ 
+ 		if (cap->max_recv_sge > rxe->attr.max_recv_sge) {
+-			rxe_dbg(rxe, "invalid recv sge = %u > %d\n",
++			rxe_dbg_dev(rxe, "invalid recv sge = %u > %d\n",
+ 				 cap->max_recv_sge, rxe->attr.max_recv_sge);
+ 			goto err1;
+ 		}
+ 	}
+ 
+ 	if (cap->max_inline_data > rxe->max_inline_data) {
+-		rxe_dbg(rxe, "invalid max inline data = %u > %d\n",
++		rxe_dbg_dev(rxe, "invalid max inline data = %u > %d\n",
+ 			 cap->max_inline_data, rxe->max_inline_data);
+ 		goto err1;
+ 	}
+@@ -73,7 +73,7 @@ int rxe_qp_chk_init(struct rxe_dev *rxe, struct ib_qp_init_attr *init)
+ 	}
+ 
+ 	if (!init->recv_cq || !init->send_cq) {
+-		rxe_dbg(rxe, "missing cq\n");
++		rxe_dbg_dev(rxe, "missing cq\n");
+ 		goto err1;
+ 	}
+ 
+@@ -82,14 +82,14 @@ int rxe_qp_chk_init(struct rxe_dev *rxe, struct ib_qp_init_attr *init)
+ 
+ 	if (init->qp_type == IB_QPT_GSI) {
+ 		if (!rdma_is_port_valid(&rxe->ib_dev, port_num)) {
+-			rxe_dbg(rxe, "invalid port = %d\n", port_num);
++			rxe_dbg_dev(rxe, "invalid port = %d\n", port_num);
+ 			goto err1;
+ 		}
+ 
+ 		port = &rxe->port;
+ 
+ 		if (init->qp_type == IB_QPT_GSI && port->qp_gsi_index) {
+-			rxe_dbg(rxe, "GSI QP exists for port %d\n", port_num);
++			rxe_dbg_dev(rxe, "GSI QP exists for port %d\n", port_num);
+ 			goto err1;
+ 		}
+ 	}
+diff --git a/drivers/infiniband/sw/rxe/rxe_srq.c b/drivers/infiniband/sw/rxe/rxe_srq.c
+index 82e37a41ced40..27ca82ec0826b 100644
+--- a/drivers/infiniband/sw/rxe/rxe_srq.c
++++ b/drivers/infiniband/sw/rxe/rxe_srq.c
+@@ -13,13 +13,13 @@ int rxe_srq_chk_init(struct rxe_dev *rxe, struct ib_srq_init_attr *init)
+ 	struct ib_srq_attr *attr = &init->attr;
+ 
+ 	if (attr->max_wr > rxe->attr.max_srq_wr) {
+-		rxe_dbg(rxe, "max_wr(%d) > max_srq_wr(%d)\n",
++		rxe_dbg_dev(rxe, "max_wr(%d) > max_srq_wr(%d)\n",
+ 			attr->max_wr, rxe->attr.max_srq_wr);
+ 		goto err1;
+ 	}
+ 
+ 	if (attr->max_wr <= 0) {
+-		rxe_dbg(rxe, "max_wr(%d) <= 0\n", attr->max_wr);
++		rxe_dbg_dev(rxe, "max_wr(%d) <= 0\n", attr->max_wr);
+ 		goto err1;
+ 	}
+ 
+@@ -27,7 +27,7 @@ int rxe_srq_chk_init(struct rxe_dev *rxe, struct ib_srq_init_attr *init)
+ 		attr->max_wr = RXE_MIN_SRQ_WR;
+ 
+ 	if (attr->max_sge > rxe->attr.max_srq_sge) {
+-		rxe_dbg(rxe, "max_sge(%d) > max_srq_sge(%d)\n",
++		rxe_dbg_dev(rxe, "max_sge(%d) > max_srq_sge(%d)\n",
+ 			attr->max_sge, rxe->attr.max_srq_sge);
+ 		goto err1;
+ 	}
+diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
+index 6803ac76ae572..a40a6d0581500 100644
+--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
++++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
+@@ -1093,7 +1093,7 @@ int rxe_register_device(struct rxe_dev *rxe, const char *ibdev_name)
+ 
+ 	err = ib_register_device(dev, ibdev_name, NULL);
+ 	if (err)
+-		rxe_dbg(rxe, "failed with error %d\n", err);
++		rxe_dbg_dev(rxe, "failed with error %d\n", err);
+ 
+ 	/*
+ 	 * Note that rxe may be invalid at this point if another thread
+diff --git a/drivers/irqchip/irq-loongson-eiointc.c b/drivers/irqchip/irq-loongson-eiointc.c
+index d15fd38c17568..90181c42840b4 100644
+--- a/drivers/irqchip/irq-loongson-eiointc.c
++++ b/drivers/irqchip/irq-loongson-eiointc.c
+@@ -280,9 +280,6 @@ static void acpi_set_vec_parent(int node, struct irq_domain *parent, struct acpi
+ {
+ 	int i;
+ 
+-	if (cpu_has_flatmode)
+-		node = cpu_to_node(node * CORES_PER_EIO_NODE);
+-
+ 	for (i = 0; i < MAX_IO_PICS; i++) {
+ 		if (node == vec_group[i].node) {
+ 			vec_group[i].parent = parent;
+@@ -343,19 +340,27 @@ static int __init pch_pic_parse_madt(union acpi_subtable_headers *header,
+ 	if (parent)
+ 		return pch_pic_acpi_init(parent, pchpic_entry);
+ 
+-	return -EINVAL;
++	return 0;
+ }
+ 
+ static int __init pch_msi_parse_madt(union acpi_subtable_headers *header,
+ 					const unsigned long end)
+ {
++	struct irq_domain *parent;
+ 	struct acpi_madt_msi_pic *pchmsi_entry = (struct acpi_madt_msi_pic *)header;
+-	struct irq_domain *parent = acpi_get_vec_parent(eiointc_priv[nr_pics - 1]->node, msi_group);
++	int node;
++
++	if (cpu_has_flatmode)
++		node = cpu_to_node(eiointc_priv[nr_pics - 1]->node * CORES_PER_EIO_NODE);
++	else
++		node = eiointc_priv[nr_pics - 1]->node;
++
++	parent = acpi_get_vec_parent(node, msi_group);
+ 
+ 	if (parent)
+ 		return pch_msi_acpi_init(parent, pchmsi_entry);
+ 
+-	return -EINVAL;
++	return 0;
+ }
+ 
+ static int __init acpi_cascade_irqdomain_init(void)
+@@ -379,6 +384,7 @@ int __init eiointc_acpi_init(struct irq_domain *parent,
+ 	int i, ret, parent_irq;
+ 	unsigned long node_map;
+ 	struct eiointc_priv *priv;
++	int node;
+ 
+ 	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
+ 	if (!priv)
+@@ -416,13 +422,19 @@ int __init eiointc_acpi_init(struct irq_domain *parent,
+ 	parent_irq = irq_create_mapping(parent, acpi_eiointc->cascade);
+ 	irq_set_chained_handler_and_data(parent_irq, eiointc_irq_dispatch, priv);
+ 
+-	register_syscore_ops(&eiointc_syscore_ops);
+-	cpuhp_setup_state_nocalls(CPUHP_AP_IRQ_LOONGARCH_STARTING,
++	if (nr_pics == 1) {
++		register_syscore_ops(&eiointc_syscore_ops);
++		cpuhp_setup_state_nocalls(CPUHP_AP_IRQ_LOONGARCH_STARTING,
+ 				  "irqchip/loongarch/intc:starting",
+ 				  eiointc_router_init, NULL);
++	}
+ 
+-	acpi_set_vec_parent(acpi_eiointc->node, priv->eiointc_domain, pch_group);
+-	acpi_set_vec_parent(acpi_eiointc->node, priv->eiointc_domain, msi_group);
++	if (cpu_has_flatmode)
++		node = cpu_to_node(acpi_eiointc->node * CORES_PER_EIO_NODE);
++	else
++		node = acpi_eiointc->node;
++	acpi_set_vec_parent(node, priv->eiointc_domain, pch_group);
++	acpi_set_vec_parent(node, priv->eiointc_domain, msi_group);
+ 	ret = acpi_cascade_irqdomain_init();
+ 
+ 	return ret;
+diff --git a/drivers/irqchip/irq-loongson-pch-pic.c b/drivers/irqchip/irq-loongson-pch-pic.c
+index 437f1af693d01..e5fe4d50be056 100644
+--- a/drivers/irqchip/irq-loongson-pch-pic.c
++++ b/drivers/irqchip/irq-loongson-pch-pic.c
+@@ -311,7 +311,8 @@ static int pch_pic_init(phys_addr_t addr, unsigned long size, int vec_base,
+ 	pch_pic_handle[nr_pics] = domain_handle;
+ 	pch_pic_priv[nr_pics++] = priv;
+ 
+-	register_syscore_ops(&pch_pic_syscore_ops);
++	if (nr_pics == 1)
++		register_syscore_ops(&pch_pic_syscore_ops);
+ 
+ 	return 0;
+ 
+@@ -403,6 +404,9 @@ int __init pch_pic_acpi_init(struct irq_domain *parent,
+ 	int ret, vec_base;
+ 	struct fwnode_handle *domain_handle;
+ 
++	if (find_pch_pic(acpi_pchpic->gsi_base) >= 0)
++		return 0;
++
+ 	vec_base = acpi_pchpic->gsi_base - GSI_MIN_PCH_IRQ;
+ 
+ 	domain_handle = irq_domain_alloc_fwnode(&acpi_pchpic->address);
+diff --git a/drivers/mtd/spi-nor/core.c b/drivers/mtd/spi-nor/core.c
+index 71ea5b2e10140..bcc181c425de6 100644
+--- a/drivers/mtd/spi-nor/core.c
++++ b/drivers/mtd/spi-nor/core.c
+@@ -2470,6 +2470,9 @@ static void spi_nor_init_flags(struct spi_nor *nor)
+ 
+ 	if (flags & NO_CHIP_ERASE)
+ 		nor->flags |= SNOR_F_NO_OP_CHIP_ERASE;
++
++	if (flags & SPI_NOR_RWW)
++		nor->flags |= SNOR_F_RWW;
+ }
+ 
+ /**
+@@ -2979,6 +2982,9 @@ static void spi_nor_set_mtd_info(struct spi_nor *nor)
+ 		mtd->name = dev_name(dev);
+ 	mtd->type = MTD_NORFLASH;
+ 	mtd->flags = MTD_CAP_NORFLASH;
++	/* Unset BIT_WRITEABLE to enable JFFS2 write buffer for ECC'd NOR */
++	if (nor->flags & SNOR_F_ECC)
++		mtd->flags &= ~MTD_BIT_WRITEABLE;
+ 	if (nor->info->flags & SPI_NOR_NO_ERASE)
+ 		mtd->flags |= MTD_NO_ERASE;
+ 	else
+diff --git a/drivers/mtd/spi-nor/core.h b/drivers/mtd/spi-nor/core.h
+index e0cc42a4a0c84..6eece1754ec0a 100644
+--- a/drivers/mtd/spi-nor/core.h
++++ b/drivers/mtd/spi-nor/core.h
+@@ -130,6 +130,8 @@ enum spi_nor_option_flags {
+ 	SNOR_F_IO_MODE_EN_VOLATILE = BIT(11),
+ 	SNOR_F_SOFT_RESET	= BIT(12),
+ 	SNOR_F_SWP_IS_VOLATILE	= BIT(13),
++	SNOR_F_RWW		= BIT(14),
++	SNOR_F_ECC		= BIT(15),
+ };
+ 
+ struct spi_nor_read_command {
+@@ -459,6 +461,7 @@ struct spi_nor_fixups {
+  *   NO_CHIP_ERASE:           chip does not support chip erase.
+  *   SPI_NOR_NO_FR:           can't do fastread.
+  *   SPI_NOR_QUAD_PP:         flash supports Quad Input Page Program.
++ *   SPI_NOR_RWW:             flash supports reads while write.
+  *
+  * @no_sfdp_flags:  flags that indicate support that can be discovered via SFDP.
+  *                  Used when SFDP tables are not defined in the flash. These
+@@ -509,6 +512,7 @@ struct flash_info {
+ #define NO_CHIP_ERASE			BIT(7)
+ #define SPI_NOR_NO_FR			BIT(8)
+ #define SPI_NOR_QUAD_PP			BIT(9)
++#define SPI_NOR_RWW			BIT(10)
+ 
+ 	u8 no_sfdp_flags;
+ #define SPI_NOR_SKIP_SFDP		BIT(0)
+diff --git a/drivers/mtd/spi-nor/debugfs.c b/drivers/mtd/spi-nor/debugfs.c
+index fc7ad203df128..e11536fffe0f4 100644
+--- a/drivers/mtd/spi-nor/debugfs.c
++++ b/drivers/mtd/spi-nor/debugfs.c
+@@ -25,6 +25,8 @@ static const char *const snor_f_names[] = {
+ 	SNOR_F_NAME(IO_MODE_EN_VOLATILE),
+ 	SNOR_F_NAME(SOFT_RESET),
+ 	SNOR_F_NAME(SWP_IS_VOLATILE),
++	SNOR_F_NAME(RWW),
++	SNOR_F_NAME(ECC),
+ };
+ #undef SNOR_F_NAME
+ 
+diff --git a/drivers/mtd/spi-nor/spansion.c b/drivers/mtd/spi-nor/spansion.c
+index 12a256c0ef4c6..27a3634762ad3 100644
+--- a/drivers/mtd/spi-nor/spansion.c
++++ b/drivers/mtd/spi-nor/spansion.c
+@@ -218,6 +218,17 @@ static int cypress_nor_set_page_size(struct spi_nor *nor)
+ 	return 0;
+ }
+ 
++static void cypress_nor_ecc_init(struct spi_nor *nor)
++{
++	/*
++	 * Programming is supported only in 16-byte ECC data unit granularity.
++	 * Byte-programming, bit-walking, or multiple program operations to the
++	 * same ECC data unit without an erase are not allowed.
++	 */
++	nor->params->writesize = 16;
++	nor->flags |= SNOR_F_ECC;
++}
++
+ static int
+ s25hx_t_post_bfpt_fixup(struct spi_nor *nor,
+ 			const struct sfdp_parameter_header *bfpt_header,
+@@ -255,13 +266,10 @@ static void s25hx_t_post_sfdp_fixup(struct spi_nor *nor)
+ 
+ static void s25hx_t_late_init(struct spi_nor *nor)
+ {
+-	struct spi_nor_flash_parameter *params = nor->params;
+-
+ 	/* Fast Read 4B requires mode cycles */
+-	params->reads[SNOR_CMD_READ_FAST].num_mode_clocks = 8;
++	nor->params->reads[SNOR_CMD_READ_FAST].num_mode_clocks = 8;
+ 
+-	/* The writesize should be ECC data unit size */
+-	params->writesize = 16;
++	cypress_nor_ecc_init(nor);
+ }
+ 
+ static struct spi_nor_fixups s25hx_t_fixups = {
+@@ -324,7 +332,7 @@ static int s28hx_t_post_bfpt_fixup(struct spi_nor *nor,
+ static void s28hx_t_late_init(struct spi_nor *nor)
+ {
+ 	nor->params->octal_dtr_enable = cypress_nor_octal_dtr_enable;
+-	nor->params->writesize = 16;
++	cypress_nor_ecc_init(nor);
+ }
+ 
+ static const struct spi_nor_fixups s28hx_t_fixups = {
+diff --git a/drivers/net/dsa/mt7530.c b/drivers/net/dsa/mt7530.c
+index 02410ac439b76..0c81f5830b90a 100644
+--- a/drivers/net/dsa/mt7530.c
++++ b/drivers/net/dsa/mt7530.c
+@@ -446,9 +446,9 @@ mt7530_pad_clk_setup(struct dsa_switch *ds, phy_interface_t interface)
+ 		else
+ 			ssc_delta = 0x87;
+ 		if (priv->id == ID_MT7621) {
+-			/* PLL frequency: 150MHz: 1.2GBit */
++			/* PLL frequency: 125MHz: 1.0GBit */
+ 			if (xtal == HWTRAP_XTAL_40MHZ)
+-				ncpo1 = 0x0780;
++				ncpo1 = 0x0640;
+ 			if (xtal == HWTRAP_XTAL_25MHZ)
+ 				ncpo1 = 0x0a00;
+ 		} else { /* PLL frequency: 250MHz: 2.0Gbit */
+@@ -1008,9 +1008,9 @@ mt753x_cpu_port_enable(struct dsa_switch *ds, int port)
+ 	mt7530_write(priv, MT7530_PVC_P(port),
+ 		     PORT_SPEC_TAG);
+ 
+-	/* Disable flooding by default */
+-	mt7530_rmw(priv, MT7530_MFC, BC_FFP_MASK | UNM_FFP_MASK | UNU_FFP_MASK,
+-		   BC_FFP(BIT(port)) | UNM_FFP(BIT(port)) | UNU_FFP(BIT(port)));
++	/* Enable flooding on the CPU port */
++	mt7530_set(priv, MT7530_MFC, BC_FFP(BIT(port)) | UNM_FFP(BIT(port)) |
++		   UNU_FFP(BIT(port)));
+ 
+ 	/* Set CPU port number */
+ 	if (priv->id == ID_MT7621)
+@@ -2307,12 +2307,69 @@ mt7530_setup(struct dsa_switch *ds)
+ 	return 0;
+ }
+ 
++static int
++mt7531_setup_common(struct dsa_switch *ds)
++{
++	struct mt7530_priv *priv = ds->priv;
++	struct dsa_port *cpu_dp;
++	int ret, i;
++
++	/* BPDU to CPU port */
++	dsa_switch_for_each_cpu_port(cpu_dp, ds) {
++		mt7530_rmw(priv, MT7531_CFC, MT7531_CPU_PMAP_MASK,
++			   BIT(cpu_dp->index));
++		break;
++	}
++	mt7530_rmw(priv, MT753X_BPC, MT753X_BPDU_PORT_FW_MASK,
++		   MT753X_BPDU_CPU_ONLY);
++
++	/* Enable and reset MIB counters */
++	mt7530_mib_reset(ds);
++
++	/* Disable flooding on all ports */
++	mt7530_clear(priv, MT7530_MFC, BC_FFP_MASK | UNM_FFP_MASK |
++		     UNU_FFP_MASK);
++
++	for (i = 0; i < MT7530_NUM_PORTS; i++) {
++		/* Disable forwarding by default on all ports */
++		mt7530_rmw(priv, MT7530_PCR_P(i), PCR_MATRIX_MASK,
++			   PCR_MATRIX_CLR);
++
++		/* Disable learning by default on all ports */
++		mt7530_set(priv, MT7530_PSC_P(i), SA_DIS);
++
++		mt7530_set(priv, MT7531_DBG_CNT(i), MT7531_DIS_CLR);
++
++		if (dsa_is_cpu_port(ds, i)) {
++			ret = mt753x_cpu_port_enable(ds, i);
++			if (ret)
++				return ret;
++		} else {
++			mt7530_port_disable(ds, i);
++
++			/* Set default PVID to 0 on all user ports */
++			mt7530_rmw(priv, MT7530_PPBV1_P(i), G0_PORT_VID_MASK,
++				   G0_PORT_VID_DEF);
++		}
++
++		/* Enable consistent egress tag */
++		mt7530_rmw(priv, MT7530_PVC_P(i), PVC_EG_TAG_MASK,
++			   PVC_EG_TAG(MT7530_VLAN_EG_CONSISTENT));
++	}
++
++	/* Flush the FDB table */
++	ret = mt7530_fdb_cmd(priv, MT7530_FDB_FLUSH, NULL);
++	if (ret < 0)
++		return ret;
++
++	return 0;
++}
++
+ static int
+ mt7531_setup(struct dsa_switch *ds)
+ {
+ 	struct mt7530_priv *priv = ds->priv;
+ 	struct mt7530_dummy_poll p;
+-	struct dsa_port *cpu_dp;
+ 	u32 val, id;
+ 	int ret, i;
+ 
+@@ -2390,44 +2447,7 @@ mt7531_setup(struct dsa_switch *ds)
+ 	mt7531_ind_c45_phy_write(priv, MT753X_CTRL_PHY_ADDR, MDIO_MMD_VEND2,
+ 				 CORE_PLL_GROUP4, val);
+ 
+-	/* BPDU to CPU port */
+-	dsa_switch_for_each_cpu_port(cpu_dp, ds) {
+-		mt7530_rmw(priv, MT7531_CFC, MT7531_CPU_PMAP_MASK,
+-			   BIT(cpu_dp->index));
+-		break;
+-	}
+-	mt7530_rmw(priv, MT753X_BPC, MT753X_BPDU_PORT_FW_MASK,
+-		   MT753X_BPDU_CPU_ONLY);
+-
+-	/* Enable and reset MIB counters */
+-	mt7530_mib_reset(ds);
+-
+-	for (i = 0; i < MT7530_NUM_PORTS; i++) {
+-		/* Disable forwarding by default on all ports */
+-		mt7530_rmw(priv, MT7530_PCR_P(i), PCR_MATRIX_MASK,
+-			   PCR_MATRIX_CLR);
+-
+-		/* Disable learning by default on all ports */
+-		mt7530_set(priv, MT7530_PSC_P(i), SA_DIS);
+-
+-		mt7530_set(priv, MT7531_DBG_CNT(i), MT7531_DIS_CLR);
+-
+-		if (dsa_is_cpu_port(ds, i)) {
+-			ret = mt753x_cpu_port_enable(ds, i);
+-			if (ret)
+-				return ret;
+-		} else {
+-			mt7530_port_disable(ds, i);
+-
+-			/* Set default PVID to 0 on all user ports */
+-			mt7530_rmw(priv, MT7530_PPBV1_P(i), G0_PORT_VID_MASK,
+-				   G0_PORT_VID_DEF);
+-		}
+-
+-		/* Enable consistent egress tag */
+-		mt7530_rmw(priv, MT7530_PVC_P(i), PVC_EG_TAG_MASK,
+-			   PVC_EG_TAG(MT7530_VLAN_EG_CONSISTENT));
+-	}
++	mt7531_setup_common(ds);
+ 
+ 	/* Setup VLAN ID 0 for VLAN-unaware bridges */
+ 	ret = mt7530_setup_vlan0(priv);
+@@ -2437,11 +2457,6 @@ mt7531_setup(struct dsa_switch *ds)
+ 	ds->assisted_learning_on_cpu_port = true;
+ 	ds->mtu_enforcement_ingress = true;
+ 
+-	/* Flush the FDB table */
+-	ret = mt7530_fdb_cmd(priv, MT7530_FDB_FLUSH, NULL);
+-	if (ret < 0)
+-		return ret;
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
+index 7108f745fbf01..902f407213404 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.c
++++ b/drivers/net/dsa/mv88e6xxx/chip.c
+@@ -5182,6 +5182,7 @@ static const struct mv88e6xxx_ops mv88e6321_ops = {
+ 	.set_cpu_port = mv88e6095_g1_set_cpu_port,
+ 	.set_egress_port = mv88e6095_g1_set_egress_port,
+ 	.watchdog_ops = &mv88e6390_watchdog_ops,
++	.mgmt_rsvd2cpu = mv88e6352_g2_mgmt_rsvd2cpu,
+ 	.reset = mv88e6352_g1_reset,
+ 	.vtu_getnext = mv88e6185_g1_vtu_getnext,
+ 	.vtu_loadpurge = mv88e6185_g1_vtu_loadpurge,
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc_qos.c b/drivers/net/ethernet/freescale/enetc/enetc_qos.c
+index 130ebf6853e61..83c27bbbc6edf 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc_qos.c
++++ b/drivers/net/ethernet/freescale/enetc/enetc_qos.c
+@@ -1247,7 +1247,7 @@ static int enetc_psfp_parse_clsflower(struct enetc_ndev_priv *priv,
+ 		int index;
+ 
+ 		index = enetc_get_free_index(priv);
+-		if (sfi->handle < 0) {
++		if (index < 0) {
+ 			NL_SET_ERR_MSG_MOD(extack, "No Stream Filter resource!");
+ 			err = -ENOSPC;
+ 			goto free_fmi;
+diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
+index 160c1b3525f5b..42ec6ca3bf035 100644
+--- a/drivers/net/ethernet/freescale/fec_main.c
++++ b/drivers/net/ethernet/freescale/fec_main.c
+@@ -3798,7 +3798,8 @@ static int fec_enet_txq_xmit_frame(struct fec_enet_private *fep,
+ 	entries_free = fec_enet_get_free_txdesc_num(txq);
+ 	if (entries_free < MAX_SKB_FRAGS + 1) {
+ 		netdev_err(fep->netdev, "NOT enough BD for SG!\n");
+-		return NETDEV_TX_OK;
++		xdp_return_frame(frame);
++		return NETDEV_TX_BUSY;
+ 	}
+ 
+ 	/* Fill in a Tx ring entry */
+@@ -3856,6 +3857,7 @@ static int fec_enet_xdp_xmit(struct net_device *dev,
+ 	struct fec_enet_private *fep = netdev_priv(dev);
+ 	struct fec_enet_priv_tx_q *txq;
+ 	int cpu = smp_processor_id();
++	unsigned int sent_frames = 0;
+ 	struct netdev_queue *nq;
+ 	unsigned int queue;
+ 	int i;
+@@ -3866,8 +3868,11 @@ static int fec_enet_xdp_xmit(struct net_device *dev,
+ 
+ 	__netif_tx_lock(nq, cpu);
+ 
+-	for (i = 0; i < num_frames; i++)
+-		fec_enet_txq_xmit_frame(fep, txq, frames[i]);
++	for (i = 0; i < num_frames; i++) {
++		if (fec_enet_txq_xmit_frame(fep, txq, frames[i]) != 0)
++			break;
++		sent_frames++;
++	}
+ 
+ 	/* Make sure the update to bdp and tx_skbuff are performed. */
+ 	wmb();
+@@ -3877,7 +3882,7 @@ static int fec_enet_xdp_xmit(struct net_device *dev,
+ 
+ 	__netif_tx_unlock(nq);
+ 
+-	return num_frames;
++	return sent_frames;
+ }
+ 
+ static const struct net_device_ops fec_netdev_ops = {
+diff --git a/drivers/net/ethernet/intel/ice/ice_tc_lib.c b/drivers/net/ethernet/intel/ice/ice_tc_lib.c
+index 76f29a5bf8d73..d1a31f236d26a 100644
+--- a/drivers/net/ethernet/intel/ice/ice_tc_lib.c
++++ b/drivers/net/ethernet/intel/ice/ice_tc_lib.c
+@@ -693,17 +693,18 @@ ice_eswitch_add_tc_fltr(struct ice_vsi *vsi, struct ice_tc_flower_fltr *fltr)
+ 	 * results into order of switch rule evaluation.
+ 	 */
+ 	rule_info.priority = 7;
++	rule_info.flags_info.act_valid = true;
+ 
+ 	if (fltr->direction == ICE_ESWITCH_FLTR_INGRESS) {
+ 		rule_info.sw_act.flag |= ICE_FLTR_RX;
+ 		rule_info.sw_act.src = hw->pf_id;
+ 		rule_info.rx = true;
++		rule_info.flags_info.act = ICE_SINGLE_ACT_LB_ENABLE;
+ 	} else {
+ 		rule_info.sw_act.flag |= ICE_FLTR_TX;
+ 		rule_info.sw_act.src = vsi->idx;
+ 		rule_info.rx = false;
+ 		rule_info.flags_info.act = ICE_SINGLE_ACT_LAN_ENABLE;
+-		rule_info.flags_info.act_valid = true;
+ 	}
+ 
+ 	/* specify the cookie as filter_rule_id */
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
+index f8156fe4b1dc4..0ee943db3dc92 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
+@@ -1035,9 +1035,6 @@ static void ixgbe_free_q_vector(struct ixgbe_adapter *adapter, int v_idx)
+ 	adapter->q_vector[v_idx] = NULL;
+ 	__netif_napi_del(&q_vector->napi);
+ 
+-	if (static_key_enabled(&ixgbe_xdp_locking_key))
+-		static_branch_dec(&ixgbe_xdp_locking_key);
+-
+ 	/*
+ 	 * after a call to __netif_napi_del() napi may still be used and
+ 	 * ixgbe_get_stats64() might access the rings on this vector,
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+index 773c35fecacef..d7c247e46dfcc 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+@@ -6495,6 +6495,10 @@ static int ixgbe_sw_init(struct ixgbe_adapter *adapter,
+ 	set_bit(0, adapter->fwd_bitmask);
+ 	set_bit(__IXGBE_DOWN, &adapter->state);
+ 
++	/* enable locking for XDP_TX if we have more CPUs than queues */
++	if (nr_cpu_ids > IXGBE_MAX_XDP_QS)
++		static_branch_enable(&ixgbe_xdp_locking_key);
++
+ 	return 0;
+ }
+ 
+@@ -10290,8 +10294,6 @@ static int ixgbe_xdp_setup(struct net_device *dev, struct bpf_prog *prog)
+ 	 */
+ 	if (nr_cpu_ids > IXGBE_MAX_XDP_QS * 2)
+ 		return -ENOMEM;
+-	else if (nr_cpu_ids > IXGBE_MAX_XDP_QS)
+-		static_branch_inc(&ixgbe_xdp_locking_key);
+ 
+ 	old_prog = xchg(&adapter->xdp_prog, prog);
+ 	need_reset = (!!prog != !!old_prog);
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
+index 724df6398bbe2..bd77152bb8d7c 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
+@@ -1231,6 +1231,14 @@ static inline void link_status_user_format(u64 lstat,
+ 	linfo->an = FIELD_GET(RESP_LINKSTAT_AN, lstat);
+ 	linfo->fec = FIELD_GET(RESP_LINKSTAT_FEC, lstat);
+ 	linfo->lmac_type_id = FIELD_GET(RESP_LINKSTAT_LMAC_TYPE, lstat);
++
++	if (linfo->lmac_type_id >= LMAC_MODE_MAX) {
++		dev_err(&cgx->pdev->dev, "Unknown lmac_type_id %d reported by firmware on cgx port%d:%d",
++			linfo->lmac_type_id, cgx->cgx_id, lmac_id);
++		strncpy(linfo->lmac_type, "Unknown", LMACTYPE_STR_LEN - 1);
++		return;
++	}
++
+ 	lmac_string = cgx_lmactype_string[linfo->lmac_type_id];
+ 	strncpy(linfo->lmac_type, lmac_string, LMACTYPE_STR_LEN - 1);
+ }
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.c b/drivers/net/ethernet/marvell/octeontx2/af/mbox.c
+index 2898931d5260a..9690ac01f02c8 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.c
+@@ -157,7 +157,7 @@ EXPORT_SYMBOL(otx2_mbox_init);
+  */
+ int otx2_mbox_regions_init(struct otx2_mbox *mbox, void **hwbase,
+ 			   struct pci_dev *pdev, void *reg_base,
+-			   int direction, int ndevs)
++			   int direction, int ndevs, unsigned long *pf_bmap)
+ {
+ 	struct otx2_mbox_dev *mdev;
+ 	int devid, err;
+@@ -169,6 +169,9 @@ int otx2_mbox_regions_init(struct otx2_mbox *mbox, void **hwbase,
+ 	mbox->hwbase = hwbase[0];
+ 
+ 	for (devid = 0; devid < ndevs; devid++) {
++		if (!test_bit(devid, pf_bmap))
++			continue;
++
+ 		mdev = &mbox->dev[devid];
+ 		mdev->mbase = hwbase[devid];
+ 		mdev->hwbase = hwbase[devid];
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+index 5727d67e0259c..26636a4d7dcc6 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
++++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+@@ -96,9 +96,10 @@ void otx2_mbox_destroy(struct otx2_mbox *mbox);
+ int otx2_mbox_init(struct otx2_mbox *mbox, void __force *hwbase,
+ 		   struct pci_dev *pdev, void __force *reg_base,
+ 		   int direction, int ndevs);
++
+ int otx2_mbox_regions_init(struct otx2_mbox *mbox, void __force **hwbase,
+ 			   struct pci_dev *pdev, void __force *reg_base,
+-			   int direction, int ndevs);
++			   int direction, int ndevs, unsigned long *bmap);
+ void otx2_mbox_msg_send(struct otx2_mbox *mbox, int devid);
+ int otx2_mbox_wait_for_rsp(struct otx2_mbox *mbox, int devid);
+ int otx2_mbox_busy_poll_for_rsp(struct otx2_mbox *mbox, int devid);
+@@ -245,9 +246,9 @@ M(NPC_MCAM_READ_BASE_RULE, 0x6011, npc_read_base_steer_rule,            \
+ M(NPC_MCAM_GET_STATS, 0x6012, npc_mcam_entry_stats,                     \
+ 				   npc_mcam_get_stats_req,              \
+ 				   npc_mcam_get_stats_rsp)              \
+-M(NPC_GET_SECRET_KEY, 0x6013, npc_get_secret_key,                     \
+-				   npc_get_secret_key_req,              \
+-				   npc_get_secret_key_rsp)              \
++M(NPC_GET_FIELD_HASH_INFO, 0x6013, npc_get_field_hash_info,                     \
++				   npc_get_field_hash_info_req,              \
++				   npc_get_field_hash_info_rsp)              \
+ M(NPC_GET_FIELD_STATUS, 0x6014, npc_get_field_status,                     \
+ 				   npc_get_field_status_req,              \
+ 				   npc_get_field_status_rsp)              \
+@@ -1524,14 +1525,20 @@ struct npc_mcam_get_stats_rsp {
+ 	u8 stat_ena; /* enabled */
+ };
+ 
+-struct npc_get_secret_key_req {
++struct npc_get_field_hash_info_req {
+ 	struct mbox_msghdr hdr;
+ 	u8 intf;
+ };
+ 
+-struct npc_get_secret_key_rsp {
++struct npc_get_field_hash_info_rsp {
+ 	struct mbox_msghdr hdr;
+ 	u64 secret_key[3];
++#define NPC_MAX_HASH 2
++#define NPC_MAX_HASH_MASK 2
++	/* NPC_AF_INTF(0..1)_HASH(0..1)_MASK(0..1) */
++	u64 hash_mask[NPC_MAX_INTF][NPC_MAX_HASH][NPC_MAX_HASH_MASK];
++	/* NPC_AF_INTF(0..1)_HASH(0..1)_RESULT_CTRL */
++	u64 hash_ctrl[NPC_MAX_INTF][NPC_MAX_HASH];
+ };
+ 
+ enum ptp_op {
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mcs.c b/drivers/net/ethernet/marvell/octeontx2/af/mcs.c
+index f68a6a0e3aa41..c43f19dfbd744 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/mcs.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/mcs.c
+@@ -473,6 +473,8 @@ void mcs_flowid_entry_write(struct mcs *mcs, u64 *data, u64 *mask, int flow_id,
+ 		for (reg_id = 0; reg_id < 4; reg_id++) {
+ 			reg = MCSX_CPM_RX_SLAVE_FLOWID_TCAM_DATAX(reg_id, flow_id);
+ 			mcs_reg_write(mcs, reg, data[reg_id]);
++		}
++		for (reg_id = 0; reg_id < 4; reg_id++) {
+ 			reg = MCSX_CPM_RX_SLAVE_FLOWID_TCAM_MASKX(reg_id, flow_id);
+ 			mcs_reg_write(mcs, reg, mask[reg_id]);
+ 		}
+@@ -480,6 +482,8 @@ void mcs_flowid_entry_write(struct mcs *mcs, u64 *data, u64 *mask, int flow_id,
+ 		for (reg_id = 0; reg_id < 4; reg_id++) {
+ 			reg = MCSX_CPM_TX_SLAVE_FLOWID_TCAM_DATAX(reg_id, flow_id);
+ 			mcs_reg_write(mcs, reg, data[reg_id]);
++		}
++		for (reg_id = 0; reg_id < 4; reg_id++) {
+ 			reg = MCSX_CPM_TX_SLAVE_FLOWID_TCAM_MASKX(reg_id, flow_id);
+ 			mcs_reg_write(mcs, reg, mask[reg_id]);
+ 		}
+@@ -494,6 +498,9 @@ int mcs_install_flowid_bypass_entry(struct mcs *mcs)
+ 
+ 	/* Flow entry */
+ 	flow_id = mcs->hw->tcam_entries - MCS_RSRC_RSVD_CNT;
++	__set_bit(flow_id, mcs->rx.flow_ids.bmap);
++	__set_bit(flow_id, mcs->tx.flow_ids.bmap);
++
+ 	for (reg_id = 0; reg_id < 4; reg_id++) {
+ 		reg = MCSX_CPM_RX_SLAVE_FLOWID_TCAM_MASKX(reg_id, flow_id);
+ 		mcs_reg_write(mcs, reg, GENMASK_ULL(63, 0));
+@@ -504,6 +511,8 @@ int mcs_install_flowid_bypass_entry(struct mcs *mcs)
+ 	}
+ 	/* secy */
+ 	secy_id = mcs->hw->secy_entries - MCS_RSRC_RSVD_CNT;
++	__set_bit(secy_id, mcs->rx.secy.bmap);
++	__set_bit(secy_id, mcs->tx.secy.bmap);
+ 
+ 	/* Set validate frames to NULL and enable control port */
+ 	plcy = 0x7ull;
+@@ -528,6 +537,7 @@ int mcs_install_flowid_bypass_entry(struct mcs *mcs)
+ 	/* Enable Flowid entry */
+ 	mcs_ena_dis_flowid_entry(mcs, flow_id, MCS_RX, true);
+ 	mcs_ena_dis_flowid_entry(mcs, flow_id, MCS_TX, true);
++
+ 	return 0;
+ }
+ 
+@@ -926,60 +936,42 @@ static void mcs_tx_misc_intr_handler(struct mcs *mcs, u64 intr)
+ 	mcs_add_intr_wq_entry(mcs, &event);
+ }
+ 
+-static void mcs_bbe_intr_handler(struct mcs *mcs, u64 intr, enum mcs_direction dir)
++void cn10kb_mcs_bbe_intr_handler(struct mcs *mcs, u64 intr,
++				 enum mcs_direction dir)
+ {
+-	struct mcs_intr_event event = { 0 };
+-	int i;
++	u64 val, reg;
++	int lmac;
+ 
+-	if (!(intr & MCS_BBE_INT_MASK))
++	if (!(intr & 0x6ULL))
+ 		return;
+ 
+-	event.mcs_id = mcs->mcs_id;
+-	event.pcifunc = mcs->pf_map[0];
++	if (intr & BIT_ULL(1))
++		reg = (dir == MCS_RX) ? MCSX_BBE_RX_SLAVE_DFIFO_OVERFLOW_0 :
++					MCSX_BBE_TX_SLAVE_DFIFO_OVERFLOW_0;
++	else
++		reg = (dir == MCS_RX) ? MCSX_BBE_RX_SLAVE_PLFIFO_OVERFLOW_0 :
++					MCSX_BBE_TX_SLAVE_PLFIFO_OVERFLOW_0;
++	val = mcs_reg_read(mcs, reg);
+ 
+-	for (i = 0; i < MCS_MAX_BBE_INT; i++) {
+-		if (!(intr & BIT_ULL(i)))
++	/* policy/data over flow occurred */
++	for (lmac = 0; lmac < mcs->hw->lmac_cnt; lmac++) {
++		if (!(val & BIT_ULL(lmac)))
+ 			continue;
+-
+-		/* Lower nibble denotes data fifo overflow interrupts and
+-		 * upper nibble indicates policy fifo overflow interrupts.
+-		 */
+-		if (intr & 0xFULL)
+-			event.intr_mask = (dir == MCS_RX) ?
+-					  MCS_BBE_RX_DFIFO_OVERFLOW_INT :
+-					  MCS_BBE_TX_DFIFO_OVERFLOW_INT;
+-		else
+-			event.intr_mask = (dir == MCS_RX) ?
+-					  MCS_BBE_RX_PLFIFO_OVERFLOW_INT :
+-					  MCS_BBE_TX_PLFIFO_OVERFLOW_INT;
+-
+-		/* Notify the lmac_id info which ran into BBE fatal error */
+-		event.lmac_id = i & 0x3ULL;
+-		mcs_add_intr_wq_entry(mcs, &event);
++		dev_warn(mcs->dev, "BEE:Policy or data overflow occurred on lmac:%d\n", lmac);
+ 	}
+ }
+ 
+-static void mcs_pab_intr_handler(struct mcs *mcs, u64 intr, enum mcs_direction dir)
++void cn10kb_mcs_pab_intr_handler(struct mcs *mcs, u64 intr,
++				 enum mcs_direction dir)
+ {
+-	struct mcs_intr_event event = { 0 };
+-	int i;
++	int lmac;
+ 
+-	if (!(intr & MCS_PAB_INT_MASK))
++	if (!(intr & 0xFFFFFULL))
+ 		return;
+ 
+-	event.mcs_id = mcs->mcs_id;
+-	event.pcifunc = mcs->pf_map[0];
+-
+-	for (i = 0; i < MCS_MAX_PAB_INT; i++) {
+-		if (!(intr & BIT_ULL(i)))
+-			continue;
+-
+-		event.intr_mask = (dir == MCS_RX) ? MCS_PAB_RX_CHAN_OVERFLOW_INT :
+-				  MCS_PAB_TX_CHAN_OVERFLOW_INT;
+-
+-		/* Notify the lmac_id info which ran into PAB fatal error */
+-		event.lmac_id = i;
+-		mcs_add_intr_wq_entry(mcs, &event);
++	for (lmac = 0; lmac < mcs->hw->lmac_cnt; lmac++) {
++		if (intr & BIT_ULL(lmac))
++			dev_warn(mcs->dev, "PAB: overflow occurred on lmac:%d\n", lmac);
+ 	}
+ }
+ 
+@@ -988,9 +980,8 @@ static irqreturn_t mcs_ip_intr_handler(int irq, void *mcs_irq)
+ 	struct mcs *mcs = (struct mcs *)mcs_irq;
+ 	u64 intr, cpm_intr, bbe_intr, pab_intr;
+ 
+-	/* Disable and clear the interrupt */
++	/* Disable  the interrupt */
+ 	mcs_reg_write(mcs, MCSX_IP_INT_ENA_W1C, BIT_ULL(0));
+-	mcs_reg_write(mcs, MCSX_IP_INT, BIT_ULL(0));
+ 
+ 	/* Check which block has interrupt*/
+ 	intr = mcs_reg_read(mcs, MCSX_TOP_SLAVE_INT_SUM);
+@@ -1037,7 +1028,7 @@ static irqreturn_t mcs_ip_intr_handler(int irq, void *mcs_irq)
+ 	/* BBE RX */
+ 	if (intr & MCS_BBE_RX_INT_ENA) {
+ 		bbe_intr = mcs_reg_read(mcs, MCSX_BBE_RX_SLAVE_BBE_INT);
+-		mcs_bbe_intr_handler(mcs, bbe_intr, MCS_RX);
++		mcs->mcs_ops->mcs_bbe_intr_handler(mcs, bbe_intr, MCS_RX);
+ 
+ 		/* Clear the interrupt */
+ 		mcs_reg_write(mcs, MCSX_BBE_RX_SLAVE_BBE_INT_INTR_RW, 0);
+@@ -1047,7 +1038,7 @@ static irqreturn_t mcs_ip_intr_handler(int irq, void *mcs_irq)
+ 	/* BBE TX */
+ 	if (intr & MCS_BBE_TX_INT_ENA) {
+ 		bbe_intr = mcs_reg_read(mcs, MCSX_BBE_TX_SLAVE_BBE_INT);
+-		mcs_bbe_intr_handler(mcs, bbe_intr, MCS_TX);
++		mcs->mcs_ops->mcs_bbe_intr_handler(mcs, bbe_intr, MCS_TX);
+ 
+ 		/* Clear the interrupt */
+ 		mcs_reg_write(mcs, MCSX_BBE_TX_SLAVE_BBE_INT_INTR_RW, 0);
+@@ -1057,7 +1048,7 @@ static irqreturn_t mcs_ip_intr_handler(int irq, void *mcs_irq)
+ 	/* PAB RX */
+ 	if (intr & MCS_PAB_RX_INT_ENA) {
+ 		pab_intr = mcs_reg_read(mcs, MCSX_PAB_RX_SLAVE_PAB_INT);
+-		mcs_pab_intr_handler(mcs, pab_intr, MCS_RX);
++		mcs->mcs_ops->mcs_pab_intr_handler(mcs, pab_intr, MCS_RX);
+ 
+ 		/* Clear the interrupt */
+ 		mcs_reg_write(mcs, MCSX_PAB_RX_SLAVE_PAB_INT_INTR_RW, 0);
+@@ -1067,14 +1058,15 @@ static irqreturn_t mcs_ip_intr_handler(int irq, void *mcs_irq)
+ 	/* PAB TX */
+ 	if (intr & MCS_PAB_TX_INT_ENA) {
+ 		pab_intr = mcs_reg_read(mcs, MCSX_PAB_TX_SLAVE_PAB_INT);
+-		mcs_pab_intr_handler(mcs, pab_intr, MCS_TX);
++		mcs->mcs_ops->mcs_pab_intr_handler(mcs, pab_intr, MCS_TX);
+ 
+ 		/* Clear the interrupt */
+ 		mcs_reg_write(mcs, MCSX_PAB_TX_SLAVE_PAB_INT_INTR_RW, 0);
+ 		mcs_reg_write(mcs, MCSX_PAB_TX_SLAVE_PAB_INT, pab_intr);
+ 	}
+ 
+-	/* Enable the interrupt */
++	/* Clear and enable the interrupt */
++	mcs_reg_write(mcs, MCSX_IP_INT, BIT_ULL(0));
+ 	mcs_reg_write(mcs, MCSX_IP_INT_ENA_W1S, BIT_ULL(0));
+ 
+ 	return IRQ_HANDLED;
+@@ -1156,7 +1148,7 @@ static int mcs_register_interrupts(struct mcs *mcs)
+ 		return ret;
+ 	}
+ 
+-	ret = request_irq(pci_irq_vector(mcs->pdev, MCS_INT_VEC_IP),
++	ret = request_irq(pci_irq_vector(mcs->pdev, mcs->hw->ip_vec),
+ 			  mcs_ip_intr_handler, 0, "MCS_IP", mcs);
+ 	if (ret) {
+ 		dev_err(mcs->dev, "MCS IP irq registration failed\n");
+@@ -1175,11 +1167,11 @@ static int mcs_register_interrupts(struct mcs *mcs)
+ 	mcs_reg_write(mcs, MCSX_CPM_TX_SLAVE_TX_INT_ENB, 0x7ULL);
+ 	mcs_reg_write(mcs, MCSX_CPM_RX_SLAVE_RX_INT_ENB, 0x7FULL);
+ 
+-	mcs_reg_write(mcs, MCSX_BBE_RX_SLAVE_BBE_INT_ENB, 0xff);
+-	mcs_reg_write(mcs, MCSX_BBE_TX_SLAVE_BBE_INT_ENB, 0xff);
++	mcs_reg_write(mcs, MCSX_BBE_RX_SLAVE_BBE_INT_ENB, 0xFFULL);
++	mcs_reg_write(mcs, MCSX_BBE_TX_SLAVE_BBE_INT_ENB, 0xFFULL);
+ 
+-	mcs_reg_write(mcs, MCSX_PAB_RX_SLAVE_PAB_INT_ENB, 0xff);
+-	mcs_reg_write(mcs, MCSX_PAB_TX_SLAVE_PAB_INT_ENB, 0xff);
++	mcs_reg_write(mcs, MCSX_PAB_RX_SLAVE_PAB_INT_ENB, 0xFFFFFULL);
++	mcs_reg_write(mcs, MCSX_PAB_TX_SLAVE_PAB_INT_ENB, 0xFFFFFULL);
+ 
+ 	mcs->tx_sa_active = alloc_mem(mcs, mcs->hw->sc_entries);
+ 	if (!mcs->tx_sa_active) {
+@@ -1190,7 +1182,7 @@ static int mcs_register_interrupts(struct mcs *mcs)
+ 	return ret;
+ 
+ free_irq:
+-	free_irq(pci_irq_vector(mcs->pdev, MCS_INT_VEC_IP), mcs);
++	free_irq(pci_irq_vector(mcs->pdev, mcs->hw->ip_vec), mcs);
+ exit:
+ 	pci_free_irq_vectors(mcs->pdev);
+ 	mcs->num_vec = 0;
+@@ -1325,8 +1317,11 @@ void mcs_reset_port(struct mcs *mcs, u8 port_id, u8 reset)
+ void mcs_set_lmac_mode(struct mcs *mcs, int lmac_id, u8 mode)
+ {
+ 	u64 reg;
++	int id = lmac_id * 2;
+ 
+-	reg = MCSX_MCS_TOP_SLAVE_CHANNEL_CFG(lmac_id * 2);
++	reg = MCSX_MCS_TOP_SLAVE_CHANNEL_CFG(id);
++	mcs_reg_write(mcs, reg, (u64)mode);
++	reg = MCSX_MCS_TOP_SLAVE_CHANNEL_CFG((id + 1));
+ 	mcs_reg_write(mcs, reg, (u64)mode);
+ }
+ 
+@@ -1484,6 +1479,7 @@ void cn10kb_mcs_set_hw_capabilities(struct mcs *mcs)
+ 	hw->lmac_cnt = 20;		/* lmacs/ports per mcs block */
+ 	hw->mcs_x2p_intf = 5;		/* x2p clabration intf */
+ 	hw->mcs_blks = 1;		/* MCS blocks */
++	hw->ip_vec = MCS_CN10KB_INT_VEC_IP; /* IP vector */
+ }
+ 
+ static struct mcs_ops cn10kb_mcs_ops = {
+@@ -1492,6 +1488,8 @@ static struct mcs_ops cn10kb_mcs_ops = {
+ 	.mcs_tx_sa_mem_map_write	= cn10kb_mcs_tx_sa_mem_map_write,
+ 	.mcs_rx_sa_mem_map_write	= cn10kb_mcs_rx_sa_mem_map_write,
+ 	.mcs_flowid_secy_map		= cn10kb_mcs_flowid_secy_map,
++	.mcs_bbe_intr_handler		= cn10kb_mcs_bbe_intr_handler,
++	.mcs_pab_intr_handler		= cn10kb_mcs_pab_intr_handler,
+ };
+ 
+ static int mcs_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+@@ -1592,7 +1590,7 @@ static void mcs_remove(struct pci_dev *pdev)
+ 
+ 	/* Set MCS to external bypass */
+ 	mcs_set_external_bypass(mcs, true);
+-	free_irq(pci_irq_vector(pdev, MCS_INT_VEC_IP), mcs);
++	free_irq(pci_irq_vector(pdev, mcs->hw->ip_vec), mcs);
+ 	pci_free_irq_vectors(pdev);
+ 	pci_release_regions(pdev);
+ 	pci_disable_device(pdev);
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mcs.h b/drivers/net/ethernet/marvell/octeontx2/af/mcs.h
+index 64dc2b80e15dd..0f89dcb764654 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/mcs.h
++++ b/drivers/net/ethernet/marvell/octeontx2/af/mcs.h
+@@ -43,24 +43,15 @@
+ /* Reserved resources for default bypass entry */
+ #define MCS_RSRC_RSVD_CNT		1
+ 
+-/* MCS Interrupt Vector Enumeration */
+-enum mcs_int_vec_e {
+-	MCS_INT_VEC_MIL_RX_GBL		= 0x0,
+-	MCS_INT_VEC_MIL_RX_LMACX	= 0x1,
+-	MCS_INT_VEC_MIL_TX_LMACX	= 0x5,
+-	MCS_INT_VEC_HIL_RX_GBL		= 0x9,
+-	MCS_INT_VEC_HIL_RX_LMACX	= 0xa,
+-	MCS_INT_VEC_HIL_TX_GBL		= 0xe,
+-	MCS_INT_VEC_HIL_TX_LMACX	= 0xf,
+-	MCS_INT_VEC_IP			= 0x13,
+-	MCS_INT_VEC_CNT			= 0x14,
+-};
++/* MCS Interrupt Vector */
++#define MCS_CNF10KB_INT_VEC_IP	0x13
++#define MCS_CN10KB_INT_VEC_IP	0x53
+ 
+ #define MCS_MAX_BBE_INT			8ULL
+ #define MCS_BBE_INT_MASK		0xFFULL
+ 
+-#define MCS_MAX_PAB_INT			4ULL
+-#define MCS_PAB_INT_MASK		0xFULL
++#define MCS_MAX_PAB_INT		8ULL
++#define MCS_PAB_INT_MASK	0xFULL
+ 
+ #define MCS_BBE_RX_INT_ENA		BIT_ULL(0)
+ #define MCS_BBE_TX_INT_ENA		BIT_ULL(1)
+@@ -137,6 +128,7 @@ struct hwinfo {
+ 	u8 lmac_cnt;
+ 	u8 mcs_blks;
+ 	unsigned long	lmac_bmap; /* bitmap of enabled mcs lmac */
++	u16 ip_vec;
+ };
+ 
+ struct mcs {
+@@ -165,6 +157,8 @@ struct mcs_ops {
+ 	void	(*mcs_tx_sa_mem_map_write)(struct mcs *mcs, struct mcs_tx_sc_sa_map *map);
+ 	void	(*mcs_rx_sa_mem_map_write)(struct mcs *mcs, struct mcs_rx_sc_sa_map *map);
+ 	void	(*mcs_flowid_secy_map)(struct mcs *mcs, struct secy_mem_map *map, int dir);
++	void	(*mcs_bbe_intr_handler)(struct mcs *mcs, u64 intr, enum mcs_direction dir);
++	void	(*mcs_pab_intr_handler)(struct mcs *mcs, u64 intr, enum mcs_direction dir);
+ };
+ 
+ extern struct pci_driver mcs_driver;
+@@ -219,6 +213,8 @@ void cn10kb_mcs_tx_sa_mem_map_write(struct mcs *mcs, struct mcs_tx_sc_sa_map *ma
+ void cn10kb_mcs_flowid_secy_map(struct mcs *mcs, struct secy_mem_map *map, int dir);
+ void cn10kb_mcs_rx_sa_mem_map_write(struct mcs *mcs, struct mcs_rx_sc_sa_map *map);
+ void cn10kb_mcs_parser_cfg(struct mcs *mcs);
++void cn10kb_mcs_pab_intr_handler(struct mcs *mcs, u64 intr, enum mcs_direction dir);
++void cn10kb_mcs_bbe_intr_handler(struct mcs *mcs, u64 intr, enum mcs_direction dir);
+ 
+ /* CNF10K-B APIs */
+ struct mcs_ops *cnf10kb_get_mac_ops(void);
+@@ -229,6 +225,8 @@ void cnf10kb_mcs_rx_sa_mem_map_write(struct mcs *mcs, struct mcs_rx_sc_sa_map *m
+ void cnf10kb_mcs_parser_cfg(struct mcs *mcs);
+ void cnf10kb_mcs_tx_pn_thresh_reached_handler(struct mcs *mcs);
+ void cnf10kb_mcs_tx_pn_wrapped_handler(struct mcs *mcs);
++void cnf10kb_mcs_bbe_intr_handler(struct mcs *mcs, u64 intr, enum mcs_direction dir);
++void cnf10kb_mcs_pab_intr_handler(struct mcs *mcs, u64 intr, enum mcs_direction dir);
+ 
+ /* Stats APIs */
+ void mcs_get_sc_stats(struct mcs *mcs, struct mcs_sc_stats *stats, int id, int dir);
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mcs_cnf10kb.c b/drivers/net/ethernet/marvell/octeontx2/af/mcs_cnf10kb.c
+index 7b62054144286..9f9b904ab2cd0 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/mcs_cnf10kb.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/mcs_cnf10kb.c
+@@ -13,6 +13,8 @@ static struct mcs_ops cnf10kb_mcs_ops   = {
+ 	.mcs_tx_sa_mem_map_write	= cnf10kb_mcs_tx_sa_mem_map_write,
+ 	.mcs_rx_sa_mem_map_write	= cnf10kb_mcs_rx_sa_mem_map_write,
+ 	.mcs_flowid_secy_map		= cnf10kb_mcs_flowid_secy_map,
++	.mcs_bbe_intr_handler		= cnf10kb_mcs_bbe_intr_handler,
++	.mcs_pab_intr_handler		= cnf10kb_mcs_pab_intr_handler,
+ };
+ 
+ struct mcs_ops *cnf10kb_get_mac_ops(void)
+@@ -31,6 +33,7 @@ void cnf10kb_mcs_set_hw_capabilities(struct mcs *mcs)
+ 	hw->lmac_cnt = 4;		/* lmacs/ports per mcs block */
+ 	hw->mcs_x2p_intf = 1;		/* x2p clabration intf */
+ 	hw->mcs_blks = 7;		/* MCS blocks */
++	hw->ip_vec = MCS_CNF10KB_INT_VEC_IP; /* IP vector */
+ }
+ 
+ void cnf10kb_mcs_parser_cfg(struct mcs *mcs)
+@@ -212,3 +215,63 @@ void cnf10kb_mcs_tx_pn_wrapped_handler(struct mcs *mcs)
+ 		mcs_add_intr_wq_entry(mcs, &event);
+ 	}
+ }
++
++void cnf10kb_mcs_bbe_intr_handler(struct mcs *mcs, u64 intr,
++				  enum mcs_direction dir)
++{
++	struct mcs_intr_event event = { 0 };
++	int i;
++
++	if (!(intr & MCS_BBE_INT_MASK))
++		return;
++
++	event.mcs_id = mcs->mcs_id;
++	event.pcifunc = mcs->pf_map[0];
++
++	for (i = 0; i < MCS_MAX_BBE_INT; i++) {
++		if (!(intr & BIT_ULL(i)))
++			continue;
++
++		/* Lower nibble denotes data fifo overflow interrupts and
++		 * upper nibble indicates policy fifo overflow interrupts.
++		 */
++		if (intr & 0xFULL)
++			event.intr_mask = (dir == MCS_RX) ?
++					  MCS_BBE_RX_DFIFO_OVERFLOW_INT :
++					  MCS_BBE_TX_DFIFO_OVERFLOW_INT;
++		else
++			event.intr_mask = (dir == MCS_RX) ?
++					  MCS_BBE_RX_PLFIFO_OVERFLOW_INT :
++					  MCS_BBE_TX_PLFIFO_OVERFLOW_INT;
++
++		/* Notify the lmac_id info which ran into BBE fatal error */
++		event.lmac_id = i & 0x3ULL;
++		mcs_add_intr_wq_entry(mcs, &event);
++	}
++}
++
++void cnf10kb_mcs_pab_intr_handler(struct mcs *mcs, u64 intr,
++				  enum mcs_direction dir)
++{
++	struct mcs_intr_event event = { 0 };
++	int i;
++
++	if (!(intr & MCS_PAB_INT_MASK))
++		return;
++
++	event.mcs_id = mcs->mcs_id;
++	event.pcifunc = mcs->pf_map[0];
++
++	for (i = 0; i < MCS_MAX_PAB_INT; i++) {
++		if (!(intr & BIT_ULL(i)))
++			continue;
++
++		event.intr_mask = (dir == MCS_RX) ?
++				  MCS_PAB_RX_CHAN_OVERFLOW_INT :
++				  MCS_PAB_TX_CHAN_OVERFLOW_INT;
++
++		/* Notify the lmac_id info which ran into PAB fatal error */
++		event.lmac_id = i;
++		mcs_add_intr_wq_entry(mcs, &event);
++	}
++}
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mcs_reg.h b/drivers/net/ethernet/marvell/octeontx2/af/mcs_reg.h
+index c95a8b8f5eaf7..f3ab01fc363c8 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/mcs_reg.h
++++ b/drivers/net/ethernet/marvell/octeontx2/af/mcs_reg.h
+@@ -97,6 +97,7 @@
+ #define MCSX_PEX_TX_SLAVE_VLAN_CFGX(a)          (0x46f8ull + (a) * 0x8ull)
+ #define MCSX_PEX_TX_SLAVE_CUSTOM_TAG_REL_MODE_SEL(a)	(0x788ull + (a) * 0x8ull)
+ #define MCSX_PEX_TX_SLAVE_PORT_CONFIG(a)		(0x4738ull + (a) * 0x8ull)
++#define MCSX_PEX_RX_SLAVE_PORT_CFGX(a)		(0x3b98ull + (a) * 0x8ull)
+ #define MCSX_PEX_RX_SLAVE_RULE_ETYPE_CFGX(a) ({	\
+ 	u64 offset;					\
+ 							\
+@@ -275,7 +276,10 @@
+ #define MCSX_BBE_RX_SLAVE_CAL_ENTRY			0x180ull
+ #define MCSX_BBE_RX_SLAVE_CAL_LEN			0x188ull
+ #define MCSX_PAB_RX_SLAVE_FIFO_SKID_CFGX(a)		(0x290ull + (a) * 0x40ull)
+-
++#define MCSX_BBE_RX_SLAVE_DFIFO_OVERFLOW_0		0xe20
++#define MCSX_BBE_TX_SLAVE_DFIFO_OVERFLOW_0		0x1298
++#define MCSX_BBE_RX_SLAVE_PLFIFO_OVERFLOW_0		0xe40
++#define MCSX_BBE_TX_SLAVE_PLFIFO_OVERFLOW_0		0x12b8
+ #define MCSX_BBE_RX_SLAVE_BBE_INT ({	\
+ 	u64 offset;			\
+ 					\
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c b/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c
+index eb25e458266ca..dfd23580e3b8e 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c
+@@ -11,6 +11,7 @@
+ 
+ #include "mcs.h"
+ #include "rvu.h"
++#include "mcs_reg.h"
+ #include "lmac_common.h"
+ 
+ #define M(_name, _id, _fn_name, _req_type, _rsp_type)			\
+@@ -32,6 +33,42 @@ static struct _req_type __maybe_unused					\
+ MBOX_UP_MCS_MESSAGES
+ #undef M
+ 
++void rvu_mcs_ptp_cfg(struct rvu *rvu, u8 rpm_id, u8 lmac_id, bool ena)
++{
++	struct mcs *mcs;
++	u64 cfg;
++	u8 port;
++
++	if (!rvu->mcs_blk_cnt)
++		return;
++
++	/* When ptp is enabled, RPM appends 8B header for all
++	 * RX packets. MCS PEX need to configure to skip 8B
++	 * during packet parsing.
++	 */
++
++	/* CNF10K-B */
++	if (rvu->mcs_blk_cnt > 1) {
++		mcs = mcs_get_pdata(rpm_id);
++		cfg = mcs_reg_read(mcs, MCSX_PEX_RX_SLAVE_PEX_CONFIGURATION);
++		if (ena)
++			cfg |= BIT_ULL(lmac_id);
++		else
++			cfg &= ~BIT_ULL(lmac_id);
++		mcs_reg_write(mcs, MCSX_PEX_RX_SLAVE_PEX_CONFIGURATION, cfg);
++		return;
++	}
++	/* CN10KB */
++	mcs = mcs_get_pdata(0);
++	port = (rpm_id * rvu->hw->lmac_per_cgx) + lmac_id;
++	cfg = mcs_reg_read(mcs, MCSX_PEX_RX_SLAVE_PORT_CFGX(port));
++	if (ena)
++		cfg |= BIT_ULL(0);
++	else
++		cfg &= ~BIT_ULL(0);
++	mcs_reg_write(mcs, MCSX_PEX_RX_SLAVE_PORT_CFGX(port), cfg);
++}
++
+ int rvu_mbox_handler_mcs_set_lmac_mode(struct rvu *rvu,
+ 				       struct mcs_set_lmac_mode *req,
+ 				       struct msg_rsp *rsp)
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+index 8683ce57ed3fb..9f673bda9dbdd 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+@@ -2282,7 +2282,7 @@ static inline void rvu_afvf_mbox_up_handler(struct work_struct *work)
+ }
+ 
+ static int rvu_get_mbox_regions(struct rvu *rvu, void **mbox_addr,
+-				int num, int type)
++				int num, int type, unsigned long *pf_bmap)
+ {
+ 	struct rvu_hwinfo *hw = rvu->hw;
+ 	int region;
+@@ -2294,6 +2294,9 @@ static int rvu_get_mbox_regions(struct rvu *rvu, void **mbox_addr,
+ 	 */
+ 	if (type == TYPE_AFVF) {
+ 		for (region = 0; region < num; region++) {
++			if (!test_bit(region, pf_bmap))
++				continue;
++
+ 			if (hw->cap.per_pf_mbox_regs) {
+ 				bar4 = rvu_read64(rvu, BLKADDR_RVUM,
+ 						  RVU_AF_PFX_BAR4_ADDR(0)) +
+@@ -2315,6 +2318,9 @@ static int rvu_get_mbox_regions(struct rvu *rvu, void **mbox_addr,
+ 	 * RVU_AF_PF_BAR4_ADDR register.
+ 	 */
+ 	for (region = 0; region < num; region++) {
++		if (!test_bit(region, pf_bmap))
++			continue;
++
+ 		if (hw->cap.per_pf_mbox_regs) {
+ 			bar4 = rvu_read64(rvu, BLKADDR_RVUM,
+ 					  RVU_AF_PFX_BAR4_ADDR(region));
+@@ -2343,20 +2349,41 @@ static int rvu_mbox_init(struct rvu *rvu, struct mbox_wq_info *mw,
+ 	int err = -EINVAL, i, dir, dir_up;
+ 	void __iomem *reg_base;
+ 	struct rvu_work *mwork;
++	unsigned long *pf_bmap;
+ 	void **mbox_regions;
+ 	const char *name;
++	u64 cfg;
+ 
+-	mbox_regions = kcalloc(num, sizeof(void *), GFP_KERNEL);
+-	if (!mbox_regions)
++	pf_bmap = bitmap_zalloc(num, GFP_KERNEL);
++	if (!pf_bmap)
+ 		return -ENOMEM;
+ 
++	/* RVU VFs */
++	if (type == TYPE_AFVF)
++		bitmap_set(pf_bmap, 0, num);
++
++	if (type == TYPE_AFPF) {
++		/* Mark enabled PFs in bitmap */
++		for (i = 0; i < num; i++) {
++			cfg = rvu_read64(rvu, BLKADDR_RVUM, RVU_PRIV_PFX_CFG(i));
++			if (cfg & BIT_ULL(20))
++				set_bit(i, pf_bmap);
++		}
++	}
++
++	mbox_regions = kcalloc(num, sizeof(void *), GFP_KERNEL);
++	if (!mbox_regions) {
++		err = -ENOMEM;
++		goto free_bitmap;
++	}
++
+ 	switch (type) {
+ 	case TYPE_AFPF:
+ 		name = "rvu_afpf_mailbox";
+ 		dir = MBOX_DIR_AFPF;
+ 		dir_up = MBOX_DIR_AFPF_UP;
+ 		reg_base = rvu->afreg_base;
+-		err = rvu_get_mbox_regions(rvu, mbox_regions, num, TYPE_AFPF);
++		err = rvu_get_mbox_regions(rvu, mbox_regions, num, TYPE_AFPF, pf_bmap);
+ 		if (err)
+ 			goto free_regions;
+ 		break;
+@@ -2365,7 +2392,7 @@ static int rvu_mbox_init(struct rvu *rvu, struct mbox_wq_info *mw,
+ 		dir = MBOX_DIR_PFVF;
+ 		dir_up = MBOX_DIR_PFVF_UP;
+ 		reg_base = rvu->pfreg_base;
+-		err = rvu_get_mbox_regions(rvu, mbox_regions, num, TYPE_AFVF);
++		err = rvu_get_mbox_regions(rvu, mbox_regions, num, TYPE_AFVF, pf_bmap);
+ 		if (err)
+ 			goto free_regions;
+ 		break;
+@@ -2396,16 +2423,19 @@ static int rvu_mbox_init(struct rvu *rvu, struct mbox_wq_info *mw,
+ 	}
+ 
+ 	err = otx2_mbox_regions_init(&mw->mbox, mbox_regions, rvu->pdev,
+-				     reg_base, dir, num);
++				     reg_base, dir, num, pf_bmap);
+ 	if (err)
+ 		goto exit;
+ 
+ 	err = otx2_mbox_regions_init(&mw->mbox_up, mbox_regions, rvu->pdev,
+-				     reg_base, dir_up, num);
++				     reg_base, dir_up, num, pf_bmap);
+ 	if (err)
+ 		goto exit;
+ 
+ 	for (i = 0; i < num; i++) {
++		if (!test_bit(i, pf_bmap))
++			continue;
++
+ 		mwork = &mw->mbox_wrk[i];
+ 		mwork->rvu = rvu;
+ 		INIT_WORK(&mwork->work, mbox_handler);
+@@ -2414,8 +2444,7 @@ static int rvu_mbox_init(struct rvu *rvu, struct mbox_wq_info *mw,
+ 		mwork->rvu = rvu;
+ 		INIT_WORK(&mwork->work, mbox_up_handler);
+ 	}
+-	kfree(mbox_regions);
+-	return 0;
++	goto free_regions;
+ 
+ exit:
+ 	destroy_workqueue(mw->mbox_wq);
+@@ -2424,6 +2453,8 @@ unmap_regions:
+ 		iounmap((void __iomem *)mbox_regions[num]);
+ free_regions:
+ 	kfree(mbox_regions);
++free_bitmap:
++	bitmap_free(pf_bmap);
+ 	return err;
+ }
+ 
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+index ef721caeac49b..d655bf04a483d 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+@@ -920,6 +920,7 @@ int rvu_get_hwvf(struct rvu *rvu, int pcifunc);
+ /* CN10K MCS */
+ int rvu_mcs_init(struct rvu *rvu);
+ int rvu_mcs_flr_handler(struct rvu *rvu, u16 pcifunc);
++void rvu_mcs_ptp_cfg(struct rvu *rvu, u8 rpm_id, u8 lmac_id, bool ena);
+ void rvu_mcs_exit(struct rvu *rvu);
+ 
+ #endif /* RVU_H */
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
+index 438b212fb54a7..83b342fa8d753 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
+@@ -773,6 +773,8 @@ static int rvu_cgx_ptp_rx_cfg(struct rvu *rvu, u16 pcifunc, bool enable)
+ 	/* This flag is required to clean up CGX conf if app gets killed */
+ 	pfvf->hw_rx_tstamp_en = enable;
+ 
++	/* Inform MCS about 8B RX header */
++	rvu_mcs_ptp_cfg(rvu, cgx_id, lmac_id, enable);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c
+index 4ad9ff025c964..0e74c5a2231e6 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c
+@@ -60,13 +60,14 @@ static int rvu_get_lmtaddr(struct rvu *rvu, u16 pcifunc,
+ 			   u64 iova, u64 *lmt_addr)
+ {
+ 	u64 pa, val, pf;
+-	int err;
++	int err = 0;
+ 
+ 	if (!iova) {
+ 		dev_err(rvu->dev, "%s Requested Null address for transulation\n", __func__);
+ 		return -EINVAL;
+ 	}
+ 
++	mutex_lock(&rvu->rsrc_lock);
+ 	rvu_write64(rvu, BLKADDR_RVUM, RVU_AF_SMMU_ADDR_REQ, iova);
+ 	pf = rvu_get_pf(pcifunc) & 0x1F;
+ 	val = BIT_ULL(63) | BIT_ULL(14) | BIT_ULL(13) | pf << 8 |
+@@ -76,12 +77,13 @@ static int rvu_get_lmtaddr(struct rvu *rvu, u16 pcifunc,
+ 	err = rvu_poll_reg(rvu, BLKADDR_RVUM, RVU_AF_SMMU_ADDR_RSP_STS, BIT_ULL(0), false);
+ 	if (err) {
+ 		dev_err(rvu->dev, "%s LMTLINE iova transulation failed\n", __func__);
+-		return err;
++		goto exit;
+ 	}
+ 	val = rvu_read64(rvu, BLKADDR_RVUM, RVU_AF_SMMU_ADDR_RSP_STS);
+ 	if (val & ~0x1ULL) {
+ 		dev_err(rvu->dev, "%s LMTLINE iova transulation failed err:%llx\n", __func__, val);
+-		return -EIO;
++		err = -EIO;
++		goto exit;
+ 	}
+ 	/* PA[51:12] = RVU_AF_SMMU_TLN_FLIT0[57:18]
+ 	 * PA[11:0] = IOVA[11:0]
+@@ -89,8 +91,9 @@ static int rvu_get_lmtaddr(struct rvu *rvu, u16 pcifunc,
+ 	pa = rvu_read64(rvu, BLKADDR_RVUM, RVU_AF_SMMU_TLN_FLIT0) >> 18;
+ 	pa &= GENMASK_ULL(39, 0);
+ 	*lmt_addr = (pa << 12) | (iova  & 0xFFF);
+-
+-	return 0;
++exit:
++	mutex_unlock(&rvu->rsrc_lock);
++	return err;
+ }
+ 
+ static int rvu_update_lmtaddr(struct rvu *rvu, u16 pcifunc, u64 lmt_addr)
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c
+index 26cfa501f1a11..9533b1d929604 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c
+@@ -497,8 +497,9 @@ static int rvu_dbg_mcs_rx_secy_stats_display(struct seq_file *filp, void *unused
+ 			   stats.octet_validated_cnt);
+ 		seq_printf(filp, "secy%d: Pkts on disable port: %lld\n", secy_id,
+ 			   stats.pkt_port_disabled_cnt);
+-		seq_printf(filp, "secy%d: Octets validated: %lld\n", secy_id, stats.pkt_badtag_cnt);
+-		seq_printf(filp, "secy%d: Octets validated: %lld\n", secy_id, stats.pkt_nosa_cnt);
++		seq_printf(filp, "secy%d: Pkts with badtag: %lld\n", secy_id, stats.pkt_badtag_cnt);
++		seq_printf(filp, "secy%d: Pkts with no SA(sectag.tci.c=0): %lld\n", secy_id,
++			   stats.pkt_nosa_cnt);
+ 		seq_printf(filp, "secy%d: Pkts with nosaerror: %lld\n", secy_id,
+ 			   stats.pkt_nosaerror_cnt);
+ 		seq_printf(filp, "secy%d: Tagged ctrl pkts: %lld\n", secy_id,
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c
+index 006beb5cf98dd..952319453701b 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c
+@@ -13,11 +13,6 @@
+ #include "rvu_npc_fs.h"
+ #include "rvu_npc_hash.h"
+ 
+-#define NPC_BYTESM		GENMASK_ULL(19, 16)
+-#define NPC_HDR_OFFSET		GENMASK_ULL(15, 8)
+-#define NPC_KEY_OFFSET		GENMASK_ULL(5, 0)
+-#define NPC_LDATA_EN		BIT_ULL(7)
+-
+ static const char * const npc_flow_names[] = {
+ 	[NPC_DMAC]	= "dmac",
+ 	[NPC_SMAC]	= "smac",
+@@ -442,6 +437,7 @@ done:
+ static void npc_scan_ldata(struct rvu *rvu, int blkaddr, u8 lid,
+ 			   u8 lt, u64 cfg, u8 intf)
+ {
++	struct npc_mcam_kex_hash *mkex_hash = rvu->kpu.mkex_hash;
+ 	struct npc_mcam *mcam = &rvu->hw->mcam;
+ 	u8 hdr, key, nr_bytes, bit_offset;
+ 	u8 la_ltype, la_start;
+@@ -490,8 +486,21 @@ do {									       \
+ 	NPC_SCAN_HDR(NPC_SIP_IPV4, NPC_LID_LC, NPC_LT_LC_IP, 12, 4);
+ 	NPC_SCAN_HDR(NPC_DIP_IPV4, NPC_LID_LC, NPC_LT_LC_IP, 16, 4);
+ 	NPC_SCAN_HDR(NPC_IPFRAG_IPV6, NPC_LID_LC, NPC_LT_LC_IP6_EXT, 6, 1);
+-	NPC_SCAN_HDR(NPC_SIP_IPV6, NPC_LID_LC, NPC_LT_LC_IP6, 8, 16);
+-	NPC_SCAN_HDR(NPC_DIP_IPV6, NPC_LID_LC, NPC_LT_LC_IP6, 24, 16);
++	if (rvu->hw->cap.npc_hash_extract) {
++		if (mkex_hash->lid_lt_ld_hash_en[intf][lid][lt][0])
++			NPC_SCAN_HDR(NPC_SIP_IPV6, NPC_LID_LC, NPC_LT_LC_IP6, 8, 4);
++		else
++			NPC_SCAN_HDR(NPC_SIP_IPV6, NPC_LID_LC, NPC_LT_LC_IP6, 8, 16);
++
++		if (mkex_hash->lid_lt_ld_hash_en[intf][lid][lt][1])
++			NPC_SCAN_HDR(NPC_DIP_IPV6, NPC_LID_LC, NPC_LT_LC_IP6, 24, 4);
++		else
++			NPC_SCAN_HDR(NPC_DIP_IPV6, NPC_LID_LC, NPC_LT_LC_IP6, 24, 16);
++	} else {
++		NPC_SCAN_HDR(NPC_SIP_IPV6, NPC_LID_LC, NPC_LT_LC_IP6, 8, 16);
++		NPC_SCAN_HDR(NPC_DIP_IPV6, NPC_LID_LC, NPC_LT_LC_IP6, 24, 16);
++	}
++
+ 	NPC_SCAN_HDR(NPC_SPORT_UDP, NPC_LID_LD, NPC_LT_LD_UDP, 0, 2);
+ 	NPC_SCAN_HDR(NPC_DPORT_UDP, NPC_LID_LD, NPC_LT_LD_UDP, 2, 2);
+ 	NPC_SCAN_HDR(NPC_SPORT_TCP, NPC_LID_LD, NPC_LT_LD_TCP, 0, 2);
+@@ -594,8 +603,7 @@ static int npc_scan_kex(struct rvu *rvu, int blkaddr, u8 intf)
+ 	 */
+ 	masked_cfg = cfg & NPC_EXACT_NIBBLE;
+ 	bitnr = NPC_EXACT_NIBBLE_START;
+-	for_each_set_bit_from(bitnr, (unsigned long *)&masked_cfg,
+-			      NPC_EXACT_NIBBLE_START) {
++	for_each_set_bit_from(bitnr, (unsigned long *)&masked_cfg, NPC_EXACT_NIBBLE_END + 1) {
+ 		npc_scan_exact_result(mcam, bitnr, key_nibble, intf);
+ 		key_nibble++;
+ 	}
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.h
+index bdd65ce56a32d..3f5c9042d10e7 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.h
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.h
+@@ -9,6 +9,10 @@
+ #define __RVU_NPC_FS_H
+ 
+ #define IPV6_WORDS	4
++#define NPC_BYTESM	GENMASK_ULL(19, 16)
++#define NPC_HDR_OFFSET	GENMASK_ULL(15, 8)
++#define NPC_KEY_OFFSET	GENMASK_ULL(5, 0)
++#define NPC_LDATA_EN	BIT_ULL(7)
+ 
+ void npc_update_entry(struct rvu *rvu, enum key_fields type,
+ 		      struct mcam_entry *entry, u64 val_lo,
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.c
+index 20ebb9c95c733..51209119f0f2f 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.c
+@@ -78,42 +78,43 @@ static u32 rvu_npc_toeplitz_hash(const u64 *data, u64 *key, size_t data_bit_len,
+ 	return hash_out;
+ }
+ 
+-u32 npc_field_hash_calc(u64 *ldata, struct npc_mcam_kex_hash *mkex_hash,
+-			u64 *secret_key, u8 intf, u8 hash_idx)
++u32 npc_field_hash_calc(u64 *ldata, struct npc_get_field_hash_info_rsp rsp,
++			u8 intf, u8 hash_idx)
+ {
+ 	u64 hash_key[3];
+ 	u64 data_padded[2];
+ 	u32 field_hash;
+ 
+-	hash_key[0] = secret_key[1] << 31;
+-	hash_key[0] |= secret_key[2];
+-	hash_key[1] = secret_key[1] >> 33;
+-	hash_key[1] |= secret_key[0] << 31;
+-	hash_key[2] = secret_key[0] >> 33;
++	hash_key[0] = rsp.secret_key[1] << 31;
++	hash_key[0] |= rsp.secret_key[2];
++	hash_key[1] = rsp.secret_key[1] >> 33;
++	hash_key[1] |= rsp.secret_key[0] << 31;
++	hash_key[2] = rsp.secret_key[0] >> 33;
+ 
+-	data_padded[0] = mkex_hash->hash_mask[intf][hash_idx][0] & ldata[0];
+-	data_padded[1] = mkex_hash->hash_mask[intf][hash_idx][1] & ldata[1];
++	data_padded[0] = rsp.hash_mask[intf][hash_idx][0] & ldata[0];
++	data_padded[1] = rsp.hash_mask[intf][hash_idx][1] & ldata[1];
+ 	field_hash = rvu_npc_toeplitz_hash(data_padded, hash_key, 128, 159);
+ 
+-	field_hash &= mkex_hash->hash_ctrl[intf][hash_idx] >> 32;
+-	field_hash |= mkex_hash->hash_ctrl[intf][hash_idx];
++	field_hash &= FIELD_GET(GENMASK(63, 32), rsp.hash_ctrl[intf][hash_idx]);
++	field_hash += FIELD_GET(GENMASK(31, 0), rsp.hash_ctrl[intf][hash_idx]);
+ 	return field_hash;
+ }
+ 
+-static u64 npc_update_use_hash(int lt, int ld)
++static u64 npc_update_use_hash(struct rvu *rvu, int blkaddr,
++			       u8 intf, int lid, int lt, int ld)
+ {
+-	u64 cfg = 0;
+-
+-	switch (lt) {
+-	case NPC_LT_LC_IP6:
+-		/* Update use_hash(bit-20) and bytesm1 (bit-16:19)
+-		 * in KEX_LD_CFG
+-		 */
+-		cfg = KEX_LD_CFG_USE_HASH(0x1, 0x03,
+-					  ld ? 0x8 : 0x18,
+-					  0x1, 0x0, 0x10);
+-		break;
+-	}
++	u8 hdr, key;
++	u64 cfg;
++
++	cfg = rvu_read64(rvu, blkaddr, NPC_AF_INTFX_LIDX_LTX_LDX_CFG(intf, lid, lt, ld));
++	hdr = FIELD_GET(NPC_HDR_OFFSET, cfg);
++	key = FIELD_GET(NPC_KEY_OFFSET, cfg);
++
++	/* Update use_hash(bit-20) to 'true' and
++	 * bytesm1(bit-16:19) to '0x3' in KEX_LD_CFG
++	 */
++	cfg = KEX_LD_CFG_USE_HASH(0x1, 0x03,
++				  hdr, 0x1, 0x0, key);
+ 
+ 	return cfg;
+ }
+@@ -132,12 +133,13 @@ static void npc_program_mkex_hash_rx(struct rvu *rvu, int blkaddr,
+ 		for (lt = 0; lt < NPC_MAX_LT; lt++) {
+ 			for (ld = 0; ld < NPC_MAX_LD; ld++) {
+ 				if (mkex_hash->lid_lt_ld_hash_en[intf][lid][lt][ld]) {
+-					u64 cfg = npc_update_use_hash(lt, ld);
++					u64 cfg;
+ 
+-					hash_cnt++;
+ 					if (hash_cnt == NPC_MAX_HASH)
+ 						return;
+ 
++					cfg = npc_update_use_hash(rvu, blkaddr,
++								  intf, lid, lt, ld);
+ 					/* Set updated KEX configuration */
+ 					SET_KEX_LD(intf, lid, lt, ld, cfg);
+ 					/* Set HASH configuration */
+@@ -149,6 +151,8 @@ static void npc_program_mkex_hash_rx(struct rvu *rvu, int blkaddr,
+ 							     mkex_hash->hash_mask[intf][ld][1]);
+ 					SET_KEX_LD_HASH_CTRL(intf, ld,
+ 							     mkex_hash->hash_ctrl[intf][ld]);
++
++					hash_cnt++;
+ 				}
+ 			}
+ 		}
+@@ -169,12 +173,13 @@ static void npc_program_mkex_hash_tx(struct rvu *rvu, int blkaddr,
+ 		for (lt = 0; lt < NPC_MAX_LT; lt++) {
+ 			for (ld = 0; ld < NPC_MAX_LD; ld++)
+ 				if (mkex_hash->lid_lt_ld_hash_en[intf][lid][lt][ld]) {
+-					u64 cfg = npc_update_use_hash(lt, ld);
++					u64 cfg;
+ 
+-					hash_cnt++;
+ 					if (hash_cnt == NPC_MAX_HASH)
+ 						return;
+ 
++					cfg = npc_update_use_hash(rvu, blkaddr,
++								  intf, lid, lt, ld);
+ 					/* Set updated KEX configuration */
+ 					SET_KEX_LD(intf, lid, lt, ld, cfg);
+ 					/* Set HASH configuration */
+@@ -187,8 +192,6 @@ static void npc_program_mkex_hash_tx(struct rvu *rvu, int blkaddr,
+ 					SET_KEX_LD_HASH_CTRL(intf, ld,
+ 							     mkex_hash->hash_ctrl[intf][ld]);
+ 					hash_cnt++;
+-					if (hash_cnt == NPC_MAX_HASH)
+-						return;
+ 				}
+ 		}
+ 	}
+@@ -238,8 +241,8 @@ void npc_update_field_hash(struct rvu *rvu, u8 intf,
+ 			   struct flow_msg *omask)
+ {
+ 	struct npc_mcam_kex_hash *mkex_hash = rvu->kpu.mkex_hash;
+-	struct npc_get_secret_key_req req;
+-	struct npc_get_secret_key_rsp rsp;
++	struct npc_get_field_hash_info_req req;
++	struct npc_get_field_hash_info_rsp rsp;
+ 	u64 ldata[2], cfg;
+ 	u32 field_hash;
+ 	u8 hash_idx;
+@@ -250,7 +253,7 @@ void npc_update_field_hash(struct rvu *rvu, u8 intf,
+ 	}
+ 
+ 	req.intf = intf;
+-	rvu_mbox_handler_npc_get_secret_key(rvu, &req, &rsp);
++	rvu_mbox_handler_npc_get_field_hash_info(rvu, &req, &rsp);
+ 
+ 	for (hash_idx = 0; hash_idx < NPC_MAX_HASH; hash_idx++) {
+ 		cfg = rvu_read64(rvu, blkaddr, NPC_AF_INTFX_HASHX_CFG(intf, hash_idx));
+@@ -266,44 +269,45 @@ void npc_update_field_hash(struct rvu *rvu, u8 intf,
+ 				 * is hashed to 32 bit value.
+ 				 */
+ 				case NPC_LT_LC_IP6:
+-					if (features & BIT_ULL(NPC_SIP_IPV6)) {
++					/* ld[0] == hash_idx[0] == Source IPv6
++					 * ld[1] == hash_idx[1] == Destination IPv6
++					 */
++					if ((features & BIT_ULL(NPC_SIP_IPV6)) && !hash_idx) {
+ 						u32 src_ip[IPV6_WORDS];
+ 
+ 						be32_to_cpu_array(src_ip, pkt->ip6src, IPV6_WORDS);
+-						ldata[0] = (u64)src_ip[0] << 32 | src_ip[1];
+-						ldata[1] = (u64)src_ip[2] << 32 | src_ip[3];
++						ldata[1] = (u64)src_ip[0] << 32 | src_ip[1];
++						ldata[0] = (u64)src_ip[2] << 32 | src_ip[3];
+ 						field_hash = npc_field_hash_calc(ldata,
+-										 mkex_hash,
+-										 rsp.secret_key,
++										 rsp,
+ 										 intf,
+ 										 hash_idx);
+ 						npc_update_entry(rvu, NPC_SIP_IPV6, entry,
+-								 field_hash, 0, 32, 0, intf);
++								 field_hash, 0,
++								 GENMASK(31, 0), 0, intf);
+ 						memcpy(&opkt->ip6src, &pkt->ip6src,
+ 						       sizeof(pkt->ip6src));
+ 						memcpy(&omask->ip6src, &mask->ip6src,
+ 						       sizeof(mask->ip6src));
+-						break;
+-					}
+-
+-					if (features & BIT_ULL(NPC_DIP_IPV6)) {
++					} else if ((features & BIT_ULL(NPC_DIP_IPV6)) && hash_idx) {
+ 						u32 dst_ip[IPV6_WORDS];
+ 
+ 						be32_to_cpu_array(dst_ip, pkt->ip6dst, IPV6_WORDS);
+-						ldata[0] = (u64)dst_ip[0] << 32 | dst_ip[1];
+-						ldata[1] = (u64)dst_ip[2] << 32 | dst_ip[3];
++						ldata[1] = (u64)dst_ip[0] << 32 | dst_ip[1];
++						ldata[0] = (u64)dst_ip[2] << 32 | dst_ip[3];
+ 						field_hash = npc_field_hash_calc(ldata,
+-										 mkex_hash,
+-										 rsp.secret_key,
++										 rsp,
+ 										 intf,
+ 										 hash_idx);
+ 						npc_update_entry(rvu, NPC_DIP_IPV6, entry,
+-								 field_hash, 0, 32, 0, intf);
++								 field_hash, 0,
++								 GENMASK(31, 0), 0, intf);
+ 						memcpy(&opkt->ip6dst, &pkt->ip6dst,
+ 						       sizeof(pkt->ip6dst));
+ 						memcpy(&omask->ip6dst, &mask->ip6dst,
+ 						       sizeof(mask->ip6dst));
+ 					}
++
+ 					break;
+ 				}
+ 			}
+@@ -311,13 +315,13 @@ void npc_update_field_hash(struct rvu *rvu, u8 intf,
+ 	}
+ }
+ 
+-int rvu_mbox_handler_npc_get_secret_key(struct rvu *rvu,
+-					struct npc_get_secret_key_req *req,
+-					struct npc_get_secret_key_rsp *rsp)
++int rvu_mbox_handler_npc_get_field_hash_info(struct rvu *rvu,
++					     struct npc_get_field_hash_info_req *req,
++					     struct npc_get_field_hash_info_rsp *rsp)
+ {
+ 	u64 *secret_key = rsp->secret_key;
+ 	u8 intf = req->intf;
+-	int blkaddr;
++	int i, j, blkaddr;
+ 
+ 	blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NPC, 0);
+ 	if (blkaddr < 0) {
+@@ -329,6 +333,19 @@ int rvu_mbox_handler_npc_get_secret_key(struct rvu *rvu,
+ 	secret_key[1] = rvu_read64(rvu, blkaddr, NPC_AF_INTFX_SECRET_KEY1(intf));
+ 	secret_key[2] = rvu_read64(rvu, blkaddr, NPC_AF_INTFX_SECRET_KEY2(intf));
+ 
++	for (i = 0; i < NPC_MAX_HASH; i++) {
++		for (j = 0; j < NPC_MAX_HASH_MASK; j++) {
++			rsp->hash_mask[NIX_INTF_RX][i][j] =
++				GET_KEX_LD_HASH_MASK(NIX_INTF_RX, i, j);
++			rsp->hash_mask[NIX_INTF_TX][i][j] =
++				GET_KEX_LD_HASH_MASK(NIX_INTF_TX, i, j);
++		}
++	}
++
++	for (i = 0; i < NPC_MAX_INTF; i++)
++		for (j = 0; j < NPC_MAX_HASH; j++)
++			rsp->hash_ctrl[i][j] = GET_KEX_LD_HASH_CTRL(i, j);
++
+ 	return 0;
+ }
+ 
+@@ -1868,9 +1885,9 @@ int rvu_npc_exact_init(struct rvu *rvu)
+ 	rvu->hw->table = table;
+ 
+ 	/* Read table size, ways and depth */
+-	table->mem_table.depth = FIELD_GET(GENMASK_ULL(31, 24), npc_const3);
+ 	table->mem_table.ways = FIELD_GET(GENMASK_ULL(19, 16), npc_const3);
+-	table->cam_table.depth = FIELD_GET(GENMASK_ULL(15, 0), npc_const3);
++	table->mem_table.depth = FIELD_GET(GENMASK_ULL(15, 0), npc_const3);
++	table->cam_table.depth = FIELD_GET(GENMASK_ULL(31, 24), npc_const3);
+ 
+ 	dev_dbg(rvu->dev, "%s: NPC exact match 4way_2k table(ways=%d, depth=%d)\n",
+ 		__func__,  table->mem_table.ways, table->cam_table.depth);
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.h
+index 3efeb09c58dec..a1c3d987b8044 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.h
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.h
+@@ -31,6 +31,12 @@
+ 	rvu_write64(rvu, blkaddr,	\
+ 		    NPC_AF_INTFX_HASHX_MASKX(intf, ld, mask_idx), cfg)
+ 
++#define GET_KEX_LD_HASH_CTRL(intf, ld)	\
++	rvu_read64(rvu, blkaddr, NPC_AF_INTFX_HASHX_RESULT_CTRL(intf, ld))
++
++#define GET_KEX_LD_HASH_MASK(intf, ld, mask_idx)	\
++	rvu_read64(rvu, blkaddr, NPC_AF_INTFX_HASHX_MASKX(intf, ld, mask_idx))
++
+ #define SET_KEX_LD_HASH_CTRL(intf, ld, cfg) \
+ 	rvu_write64(rvu, blkaddr,	\
+ 		    NPC_AF_INTFX_HASHX_RESULT_CTRL(intf, ld), cfg)
+@@ -56,8 +62,8 @@ void npc_update_field_hash(struct rvu *rvu, u8 intf,
+ 			   struct flow_msg *omask);
+ void npc_config_secret_key(struct rvu *rvu, int blkaddr);
+ void npc_program_mkex_hash(struct rvu *rvu, int blkaddr);
+-u32 npc_field_hash_calc(u64 *ldata, struct npc_mcam_kex_hash *mkex_hash,
+-			u64 *secret_key, u8 intf, u8 hash_idx);
++u32 npc_field_hash_calc(u64 *ldata, struct npc_get_field_hash_info_rsp rsp,
++			u8 intf, u8 hash_idx);
+ 
+ static struct npc_mcam_kex_hash npc_mkex_hash_default __maybe_unused = {
+ 	.lid_lt_ld_hash_en = {
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_macsec.c b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_macsec.c
+index 9ec5f38d38a84..a487a98eac88c 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_macsec.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_macsec.c
+@@ -9,6 +9,7 @@
+ #include <net/macsec.h>
+ #include "otx2_common.h"
+ 
++#define MCS_TCAM0_MAC_DA_MASK		GENMASK_ULL(47, 0)
+ #define MCS_TCAM0_MAC_SA_MASK		GENMASK_ULL(63, 48)
+ #define MCS_TCAM1_MAC_SA_MASK		GENMASK_ULL(31, 0)
+ #define MCS_TCAM1_ETYPE_MASK		GENMASK_ULL(47, 32)
+@@ -149,11 +150,20 @@ static void cn10k_mcs_free_rsrc(struct otx2_nic *pfvf, enum mcs_direction dir,
+ 				enum mcs_rsrc_type type, u16 hw_rsrc_id,
+ 				bool all)
+ {
++	struct mcs_clear_stats *clear_req;
+ 	struct mbox *mbox = &pfvf->mbox;
+ 	struct mcs_free_rsrc_req *req;
+ 
+ 	mutex_lock(&mbox->lock);
+ 
++	clear_req = otx2_mbox_alloc_msg_mcs_clear_stats(mbox);
++	if (!clear_req)
++		goto fail;
++
++	clear_req->id = hw_rsrc_id;
++	clear_req->type = type;
++	clear_req->dir = dir;
++
+ 	req = otx2_mbox_alloc_msg_mcs_free_resources(mbox);
+ 	if (!req)
+ 		goto fail;
+@@ -237,8 +247,10 @@ static int cn10k_mcs_write_rx_flowid(struct otx2_nic *pfvf,
+ 				     struct cn10k_mcs_rxsc *rxsc, u8 hw_secy_id)
+ {
+ 	struct macsec_rx_sc *sw_rx_sc = rxsc->sw_rxsc;
++	struct macsec_secy *secy = rxsc->sw_secy;
+ 	struct mcs_flowid_entry_write_req *req;
+ 	struct mbox *mbox = &pfvf->mbox;
++	u64 mac_da;
+ 	int ret;
+ 
+ 	mutex_lock(&mbox->lock);
+@@ -249,11 +261,16 @@ static int cn10k_mcs_write_rx_flowid(struct otx2_nic *pfvf,
+ 		goto fail;
+ 	}
+ 
++	mac_da = ether_addr_to_u64(secy->netdev->dev_addr);
++
++	req->data[0] = FIELD_PREP(MCS_TCAM0_MAC_DA_MASK, mac_da);
++	req->mask[0] = ~0ULL;
++	req->mask[0] = ~MCS_TCAM0_MAC_DA_MASK;
++
+ 	req->data[1] = FIELD_PREP(MCS_TCAM1_ETYPE_MASK, ETH_P_MACSEC);
+ 	req->mask[1] = ~0ULL;
+ 	req->mask[1] &= ~MCS_TCAM1_ETYPE_MASK;
+ 
+-	req->mask[0] = ~0ULL;
+ 	req->mask[2] = ~0ULL;
+ 	req->mask[3] = ~0ULL;
+ 
+@@ -997,7 +1014,7 @@ static void cn10k_mcs_sync_stats(struct otx2_nic *pfvf, struct macsec_secy *secy
+ 
+ 	/* Check if sync is really needed */
+ 	if (secy->validate_frames == txsc->last_validate_frames &&
+-	    secy->protect_frames == txsc->last_protect_frames)
++	    secy->replay_protect == txsc->last_replay_protect)
+ 		return;
+ 
+ 	cn10k_mcs_secy_stats(pfvf, txsc->hw_secy_id_rx, &rx_rsp, MCS_RX, true);
+@@ -1019,19 +1036,19 @@ static void cn10k_mcs_sync_stats(struct otx2_nic *pfvf, struct macsec_secy *secy
+ 		rxsc->stats.InPktsInvalid += sc_rsp.pkt_invalid_cnt;
+ 		rxsc->stats.InPktsNotValid += sc_rsp.pkt_notvalid_cnt;
+ 
+-		if (txsc->last_protect_frames)
++		if (txsc->last_replay_protect)
+ 			rxsc->stats.InPktsLate += sc_rsp.pkt_late_cnt;
+ 		else
+ 			rxsc->stats.InPktsDelayed += sc_rsp.pkt_late_cnt;
+ 
+-		if (txsc->last_validate_frames == MACSEC_VALIDATE_CHECK)
++		if (txsc->last_validate_frames == MACSEC_VALIDATE_DISABLED)
+ 			rxsc->stats.InPktsUnchecked += sc_rsp.pkt_unchecked_cnt;
+ 		else
+ 			rxsc->stats.InPktsOK += sc_rsp.pkt_unchecked_cnt;
+ 	}
+ 
+ 	txsc->last_validate_frames = secy->validate_frames;
+-	txsc->last_protect_frames = secy->protect_frames;
++	txsc->last_replay_protect = secy->replay_protect;
+ }
+ 
+ static int cn10k_mdo_open(struct macsec_context *ctx)
+@@ -1100,7 +1117,7 @@ static int cn10k_mdo_add_secy(struct macsec_context *ctx)
+ 	txsc->sw_secy = secy;
+ 	txsc->encoding_sa = secy->tx_sc.encoding_sa;
+ 	txsc->last_validate_frames = secy->validate_frames;
+-	txsc->last_protect_frames = secy->protect_frames;
++	txsc->last_replay_protect = secy->replay_protect;
+ 
+ 	list_add(&txsc->entry, &cfg->txsc_list);
+ 
+@@ -1117,6 +1134,7 @@ static int cn10k_mdo_upd_secy(struct macsec_context *ctx)
+ 	struct macsec_secy *secy = ctx->secy;
+ 	struct macsec_tx_sa *sw_tx_sa;
+ 	struct cn10k_mcs_txsc *txsc;
++	bool active;
+ 	u8 sa_num;
+ 	int err;
+ 
+@@ -1124,15 +1142,19 @@ static int cn10k_mdo_upd_secy(struct macsec_context *ctx)
+ 	if (!txsc)
+ 		return -ENOENT;
+ 
+-	txsc->encoding_sa = secy->tx_sc.encoding_sa;
+-
+-	sa_num = txsc->encoding_sa;
+-	sw_tx_sa = rcu_dereference_bh(secy->tx_sc.sa[sa_num]);
++	/* Encoding SA got changed */
++	if (txsc->encoding_sa != secy->tx_sc.encoding_sa) {
++		txsc->encoding_sa = secy->tx_sc.encoding_sa;
++		sa_num = txsc->encoding_sa;
++		sw_tx_sa = rcu_dereference_bh(secy->tx_sc.sa[sa_num]);
++		active = sw_tx_sa ? sw_tx_sa->active : false;
++		cn10k_mcs_link_tx_sa2sc(pfvf, secy, txsc, sa_num, active);
++	}
+ 
+ 	if (netif_running(secy->netdev)) {
+ 		cn10k_mcs_sync_stats(pfvf, secy, txsc);
+ 
+-		err = cn10k_mcs_secy_tx_cfg(pfvf, secy, txsc, sw_tx_sa, sa_num);
++		err = cn10k_mcs_secy_tx_cfg(pfvf, secy, txsc, NULL, 0);
+ 		if (err)
+ 			return err;
+ 	}
+@@ -1521,12 +1543,12 @@ static int cn10k_mdo_get_rx_sc_stats(struct macsec_context *ctx)
+ 	rxsc->stats.InPktsInvalid += rsp.pkt_invalid_cnt;
+ 	rxsc->stats.InPktsNotValid += rsp.pkt_notvalid_cnt;
+ 
+-	if (secy->protect_frames)
++	if (secy->replay_protect)
+ 		rxsc->stats.InPktsLate += rsp.pkt_late_cnt;
+ 	else
+ 		rxsc->stats.InPktsDelayed += rsp.pkt_late_cnt;
+ 
+-	if (secy->validate_frames == MACSEC_VALIDATE_CHECK)
++	if (secy->validate_frames == MACSEC_VALIDATE_DISABLED)
+ 		rxsc->stats.InPktsUnchecked += rsp.pkt_unchecked_cnt;
+ 	else
+ 		rxsc->stats.InPktsOK += rsp.pkt_unchecked_cnt;
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
+index 3d22cc6a2804a..0c8fc66ade82d 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
+@@ -335,11 +335,11 @@ struct otx2_flow_config {
+ #define OTX2_PER_VF_VLAN_FLOWS	2 /* Rx + Tx per VF */
+ #define OTX2_VF_VLAN_RX_INDEX	0
+ #define OTX2_VF_VLAN_TX_INDEX	1
+-	u16			max_flows;
+-	u8			dmacflt_max_flows;
+ 	u32			*bmap_to_dmacindex;
+ 	unsigned long		*dmacflt_bmap;
+ 	struct list_head	flow_list;
++	u32			dmacflt_max_flows;
++	u16                     max_flows;
+ };
+ 
+ struct otx2_tc_info {
+@@ -389,7 +389,7 @@ struct cn10k_mcs_txsc {
+ 	struct cn10k_txsc_stats stats;
+ 	struct list_head entry;
+ 	enum macsec_validation_type last_validate_frames;
+-	bool last_protect_frames;
++	bool last_replay_protect;
+ 	u16 hw_secy_id_tx;
+ 	u16 hw_secy_id_rx;
+ 	u16 hw_flow_id;
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+index 179433d0a54a6..18284ad751572 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+@@ -1835,13 +1835,22 @@ int otx2_open(struct net_device *netdev)
+ 		otx2_dmacflt_reinstall_flows(pf);
+ 
+ 	err = otx2_rxtx_enable(pf, true);
+-	if (err)
++	/* If a mbox communication error happens at this point then interface
++	 * will end up in a state such that it is in down state but hardware
++	 * mcam entries are enabled to receive the packets. Hence disable the
++	 * packet I/O.
++	 */
++	if (err == EIO)
++		goto err_disable_rxtx;
++	else if (err)
+ 		goto err_tx_stop_queues;
+ 
+ 	otx2_do_set_rx_mode(pf);
+ 
+ 	return 0;
+ 
++err_disable_rxtx:
++	otx2_rxtx_enable(pf, false);
+ err_tx_stop_queues:
+ 	netif_tx_stop_all_queues(netdev);
+ 	netif_carrier_off(netdev);
+@@ -3073,8 +3082,6 @@ static void otx2_remove(struct pci_dev *pdev)
+ 		otx2_config_pause_frm(pf);
+ 	}
+ 
+-	cn10k_mcs_free(pf);
+-
+ #ifdef CONFIG_DCB
+ 	/* Disable PFC config */
+ 	if (pf->pfc_en) {
+@@ -3088,6 +3095,7 @@ static void otx2_remove(struct pci_dev *pdev)
+ 
+ 	otx2_unregister_dl(pf);
+ 	unregister_netdev(netdev);
++	cn10k_mcs_free(pf);
+ 	otx2_sriov_disable(pf->pdev);
+ 	otx2_sriov_vfcfg_cleanup(pf);
+ 	if (pf->otx2_wq)
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c
+index 044cc211424ed..8392f63e433fc 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c
+@@ -544,7 +544,7 @@ static int otx2_tc_prepare_flow(struct otx2_nic *nic, struct otx2_tc_flow *node,
+ 		if (match.mask->flags & FLOW_DIS_IS_FRAGMENT) {
+ 			if (ntohs(flow_spec->etype) == ETH_P_IP) {
+ 				flow_spec->ip_flag = IPV4_FLAG_MORE;
+-				flow_mask->ip_flag = 0xff;
++				flow_mask->ip_flag = IPV4_FLAG_MORE;
+ 				req->features |= BIT_ULL(NPC_IPFRAG_IPV4);
+ 			} else if (ntohs(flow_spec->etype) == ETH_P_IPV6) {
+ 				flow_spec->next_header = IPPROTO_FRAGMENT;
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
+index ab126f8706c74..53366dbfbf27c 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
+@@ -621,7 +621,7 @@ static int otx2vf_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 
+ 	err = otx2vf_realloc_msix_vectors(vf);
+ 	if (err)
+-		goto err_mbox_destroy;
++		goto err_detach_rsrc;
+ 
+ 	err = otx2_set_real_num_queues(netdev, qcount, qcount);
+ 	if (err)
+diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+index e14050e178624..c9fb1d7084d57 100644
+--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c
++++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+@@ -1921,9 +1921,7 @@ static int mtk_poll_rx(struct napi_struct *napi, int budget,
+ 
+ 	while (done < budget) {
+ 		unsigned int pktlen, *rxdcsum;
+-		bool has_hwaccel_tag = false;
+ 		struct net_device *netdev;
+-		u16 vlan_proto, vlan_tci;
+ 		dma_addr_t dma_addr;
+ 		u32 hash, reason;
+ 		int mac = 0;
+@@ -2058,31 +2056,16 @@ static int mtk_poll_rx(struct napi_struct *napi, int budget,
+ 			skb_checksum_none_assert(skb);
+ 		skb->protocol = eth_type_trans(skb, netdev);
+ 
+-		if (netdev->features & NETIF_F_HW_VLAN_CTAG_RX) {
+-			if (MTK_HAS_CAPS(eth->soc->caps, MTK_NETSYS_V2)) {
+-				if (trxd.rxd3 & RX_DMA_VTAG_V2) {
+-					vlan_proto = RX_DMA_VPID(trxd.rxd4);
+-					vlan_tci = RX_DMA_VID(trxd.rxd4);
+-					has_hwaccel_tag = true;
+-				}
+-			} else if (trxd.rxd2 & RX_DMA_VTAG) {
+-				vlan_proto = RX_DMA_VPID(trxd.rxd3);
+-				vlan_tci = RX_DMA_VID(trxd.rxd3);
+-				has_hwaccel_tag = true;
+-			}
+-		}
+-
+ 		/* When using VLAN untagging in combination with DSA, the
+ 		 * hardware treats the MTK special tag as a VLAN and untags it.
+ 		 */
+-		if (has_hwaccel_tag && netdev_uses_dsa(netdev)) {
+-			unsigned int port = vlan_proto & GENMASK(2, 0);
++		if (!MTK_HAS_CAPS(eth->soc->caps, MTK_NETSYS_V2) &&
++		    (trxd.rxd2 & RX_DMA_VTAG) && netdev_uses_dsa(netdev)) {
++			unsigned int port = RX_DMA_VPID(trxd.rxd3) & GENMASK(2, 0);
+ 
+ 			if (port < ARRAY_SIZE(eth->dsa_meta) &&
+ 			    eth->dsa_meta[port])
+ 				skb_dst_set_noref(skb, &eth->dsa_meta[port]->dst);
+-		} else if (has_hwaccel_tag) {
+-			__vlan_hwaccel_put_tag(skb, htons(vlan_proto), vlan_tci);
+ 		}
+ 
+ 		if (reason == MTK_PPE_CPU_REASON_HIT_UNBIND_RATE_REACHED)
+@@ -2910,29 +2893,11 @@ static netdev_features_t mtk_fix_features(struct net_device *dev,
+ 
+ static int mtk_set_features(struct net_device *dev, netdev_features_t features)
+ {
+-	struct mtk_mac *mac = netdev_priv(dev);
+-	struct mtk_eth *eth = mac->hw;
+ 	netdev_features_t diff = dev->features ^ features;
+-	int i;
+ 
+ 	if ((diff & NETIF_F_LRO) && !(features & NETIF_F_LRO))
+ 		mtk_hwlro_netdev_disable(dev);
+ 
+-	/* Set RX VLAN offloading */
+-	if (!(diff & NETIF_F_HW_VLAN_CTAG_RX))
+-		return 0;
+-
+-	mtk_w32(eth, !!(features & NETIF_F_HW_VLAN_CTAG_RX),
+-		MTK_CDMP_EG_CTRL);
+-
+-	/* sync features with other MAC */
+-	for (i = 0; i < MTK_MAC_COUNT; i++) {
+-		if (!eth->netdev[i] || eth->netdev[i] == dev)
+-			continue;
+-		eth->netdev[i]->features &= ~NETIF_F_HW_VLAN_CTAG_RX;
+-		eth->netdev[i]->features |= features & NETIF_F_HW_VLAN_CTAG_RX;
+-	}
+-
+ 	return 0;
+ }
+ 
+@@ -3250,30 +3215,6 @@ static int mtk_open(struct net_device *dev)
+ 	struct mtk_eth *eth = mac->hw;
+ 	int i, err;
+ 
+-	if (mtk_uses_dsa(dev) && !eth->prog) {
+-		for (i = 0; i < ARRAY_SIZE(eth->dsa_meta); i++) {
+-			struct metadata_dst *md_dst = eth->dsa_meta[i];
+-
+-			if (md_dst)
+-				continue;
+-
+-			md_dst = metadata_dst_alloc(0, METADATA_HW_PORT_MUX,
+-						    GFP_KERNEL);
+-			if (!md_dst)
+-				return -ENOMEM;
+-
+-			md_dst->u.port_info.port_id = i;
+-			eth->dsa_meta[i] = md_dst;
+-		}
+-	} else {
+-		/* Hardware special tag parsing needs to be disabled if at least
+-		 * one MAC does not use DSA.
+-		 */
+-		u32 val = mtk_r32(eth, MTK_CDMP_IG_CTRL);
+-		val &= ~MTK_CDMP_STAG_EN;
+-		mtk_w32(eth, val, MTK_CDMP_IG_CTRL);
+-	}
+-
+ 	err = phylink_of_phy_connect(mac->phylink, mac->of_node, 0);
+ 	if (err) {
+ 		netdev_err(dev, "%s: could not attach PHY: %d\n", __func__,
+@@ -3312,6 +3253,40 @@ static int mtk_open(struct net_device *dev)
+ 	phylink_start(mac->phylink);
+ 	netif_tx_start_all_queues(dev);
+ 
++	if (MTK_HAS_CAPS(eth->soc->caps, MTK_NETSYS_V2))
++		return 0;
++
++	if (mtk_uses_dsa(dev) && !eth->prog) {
++		for (i = 0; i < ARRAY_SIZE(eth->dsa_meta); i++) {
++			struct metadata_dst *md_dst = eth->dsa_meta[i];
++
++			if (md_dst)
++				continue;
++
++			md_dst = metadata_dst_alloc(0, METADATA_HW_PORT_MUX,
++						    GFP_KERNEL);
++			if (!md_dst)
++				return -ENOMEM;
++
++			md_dst->u.port_info.port_id = i;
++			eth->dsa_meta[i] = md_dst;
++		}
++	} else {
++		/* Hardware special tag parsing needs to be disabled if at least
++		 * one MAC does not use DSA.
++		 */
++		u32 val = mtk_r32(eth, MTK_CDMP_IG_CTRL);
++
++		val &= ~MTK_CDMP_STAG_EN;
++		mtk_w32(eth, val, MTK_CDMP_IG_CTRL);
++
++		val = mtk_r32(eth, MTK_CDMQ_IG_CTRL);
++		val &= ~MTK_CDMQ_STAG_EN;
++		mtk_w32(eth, val, MTK_CDMQ_IG_CTRL);
++
++		mtk_w32(eth, 0, MTK_CDMP_EG_CTRL);
++	}
++
+ 	return 0;
+ }
+ 
+@@ -3796,10 +3771,9 @@ static int mtk_hw_init(struct mtk_eth *eth, bool reset)
+ 	if (!MTK_HAS_CAPS(eth->soc->caps, MTK_NETSYS_V2)) {
+ 		val = mtk_r32(eth, MTK_CDMP_IG_CTRL);
+ 		mtk_w32(eth, val | MTK_CDMP_STAG_EN, MTK_CDMP_IG_CTRL);
+-	}
+ 
+-	/* Enable RX VLan Offloading */
+-	mtk_w32(eth, 1, MTK_CDMP_EG_CTRL);
++		mtk_w32(eth, 1, MTK_CDMP_EG_CTRL);
++	}
+ 
+ 	/* set interrupt delays based on current Net DIM sample */
+ 	mtk_dim_rx(&eth->rx_dim.work);
+@@ -4437,7 +4411,7 @@ static int mtk_add_mac(struct mtk_eth *eth, struct device_node *np)
+ 		eth->netdev[id]->hw_features |= NETIF_F_LRO;
+ 
+ 	eth->netdev[id]->vlan_features = eth->soc->hw_features &
+-		~(NETIF_F_HW_VLAN_CTAG_TX | NETIF_F_HW_VLAN_CTAG_RX);
++		~NETIF_F_HW_VLAN_CTAG_TX;
+ 	eth->netdev[id]->features |= eth->soc->hw_features;
+ 	eth->netdev[id]->ethtool_ops = &mtk_ethtool_ops;
+ 
+diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.h b/drivers/net/ethernet/mediatek/mtk_eth_soc.h
+index 084a6badef6d9..ac57dc87c59a3 100644
+--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.h
++++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.h
+@@ -48,7 +48,6 @@
+ #define MTK_HW_FEATURES		(NETIF_F_IP_CSUM | \
+ 				 NETIF_F_RXCSUM | \
+ 				 NETIF_F_HW_VLAN_CTAG_TX | \
+-				 NETIF_F_HW_VLAN_CTAG_RX | \
+ 				 NETIF_F_SG | NETIF_F_TSO | \
+ 				 NETIF_F_TSO6 | \
+ 				 NETIF_F_IPV6_CSUM |\
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_devlink.c b/drivers/net/ethernet/pensando/ionic/ionic_devlink.c
+index e6ff757895abb..4ec66a6be0738 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_devlink.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_devlink.c
+@@ -61,6 +61,8 @@ struct ionic *ionic_devlink_alloc(struct device *dev)
+ 	struct devlink *dl;
+ 
+ 	dl = devlink_alloc(&ionic_dl_ops, sizeof(struct ionic), dev);
++	if (!dl)
++		return NULL;
+ 
+ 	return devlink_priv(dl);
+ }
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_ethtool.c b/drivers/net/ethernet/pensando/ionic/ionic_ethtool.c
+index cf33503468a3d..9b2b96fa36af8 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_ethtool.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_ethtool.c
+@@ -794,7 +794,7 @@ static int ionic_get_rxnfc(struct net_device *netdev,
+ 		info->data = lif->nxqs;
+ 		break;
+ 	default:
+-		netdev_err(netdev, "Command parameter %d is not supported\n",
++		netdev_dbg(netdev, "Command parameter %d is not supported\n",
+ 			   info->cmd);
+ 		err = -EOPNOTSUPP;
+ 	}
+diff --git a/drivers/net/ethernet/sfc/mcdi_port_common.c b/drivers/net/ethernet/sfc/mcdi_port_common.c
+index 899cc16710048..0ab14f3d01d4d 100644
+--- a/drivers/net/ethernet/sfc/mcdi_port_common.c
++++ b/drivers/net/ethernet/sfc/mcdi_port_common.c
+@@ -972,12 +972,15 @@ static u32 efx_mcdi_phy_module_type(struct efx_nic *efx)
+ 
+ 	/* A QSFP+ NIC may actually have an SFP+ module attached.
+ 	 * The ID is page 0, byte 0.
++	 * QSFP28 is of type SFF_8636, however, this is treated
++	 * the same by ethtool, so we can also treat them the same.
+ 	 */
+ 	switch (efx_mcdi_phy_get_module_eeprom_byte(efx, 0, 0)) {
+-	case 0x3:
++	case 0x3: /* SFP */
+ 		return MC_CMD_MEDIA_SFP_PLUS;
+-	case 0xc:
+-	case 0xd:
++	case 0xc: /* QSFP */
++	case 0xd: /* QSFP+ */
++	case 0x11: /* QSFP28 */
+ 		return MC_CMD_MEDIA_QSFP_PLUS;
+ 	default:
+ 		return 0;
+@@ -1075,7 +1078,7 @@ int efx_mcdi_phy_get_module_info(struct efx_nic *efx, struct ethtool_modinfo *mo
+ 
+ 	case MC_CMD_MEDIA_QSFP_PLUS:
+ 		modinfo->type = ETH_MODULE_SFF_8436;
+-		modinfo->eeprom_len = ETH_MODULE_SFF_8436_LEN;
++		modinfo->eeprom_len = ETH_MODULE_SFF_8436_MAX_LEN;
+ 		break;
+ 
+ 	default:
+diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c
+index 0fc4b959edc18..0999a58ca9d26 100644
+--- a/drivers/net/usb/r8152.c
++++ b/drivers/net/usb/r8152.c
+@@ -199,6 +199,7 @@
+ #define OCP_EEE_AR		0xa41a
+ #define OCP_EEE_DATA		0xa41c
+ #define OCP_PHY_STATUS		0xa420
++#define OCP_INTR_EN		0xa424
+ #define OCP_NCTL_CFG		0xa42c
+ #define OCP_POWER_CFG		0xa430
+ #define OCP_EEE_CFG		0xa432
+@@ -620,6 +621,9 @@ enum spd_duplex {
+ #define PHY_STAT_LAN_ON		3
+ #define PHY_STAT_PWRDN		5
+ 
++/* OCP_INTR_EN */
++#define INTR_SPEED_FORCE	BIT(3)
++
+ /* OCP_NCTL_CFG */
+ #define PGA_RETURN_EN		BIT(1)
+ 
+@@ -3023,12 +3027,16 @@ static int rtl_enable(struct r8152 *tp)
+ 	ocp_write_byte(tp, MCU_TYPE_PLA, PLA_CR, ocp_data);
+ 
+ 	switch (tp->version) {
+-	case RTL_VER_08:
+-	case RTL_VER_09:
+-	case RTL_VER_14:
+-		r8153b_rx_agg_chg_indicate(tp);
++	case RTL_VER_01:
++	case RTL_VER_02:
++	case RTL_VER_03:
++	case RTL_VER_04:
++	case RTL_VER_05:
++	case RTL_VER_06:
++	case RTL_VER_07:
+ 		break;
+ 	default:
++		r8153b_rx_agg_chg_indicate(tp);
+ 		break;
+ 	}
+ 
+@@ -3082,7 +3090,6 @@ static void r8153_set_rx_early_timeout(struct r8152 *tp)
+ 			       640 / 8);
+ 		ocp_write_word(tp, MCU_TYPE_USB, USB_RX_EXTRA_AGGR_TMR,
+ 			       ocp_data);
+-		r8153b_rx_agg_chg_indicate(tp);
+ 		break;
+ 
+ 	default:
+@@ -3116,7 +3123,6 @@ static void r8153_set_rx_early_size(struct r8152 *tp)
+ 	case RTL_VER_15:
+ 		ocp_write_word(tp, MCU_TYPE_USB, USB_RX_EARLY_SIZE,
+ 			       ocp_data / 8);
+-		r8153b_rx_agg_chg_indicate(tp);
+ 		break;
+ 	default:
+ 		WARN_ON_ONCE(1);
+@@ -5986,6 +5992,25 @@ static void rtl8153_disable(struct r8152 *tp)
+ 	r8153_aldps_en(tp, true);
+ }
+ 
++static u32 fc_pause_on_auto(struct r8152 *tp)
++{
++	return (ALIGN(mtu_to_size(tp->netdev->mtu), 1024) + 6 * 1024);
++}
++
++static u32 fc_pause_off_auto(struct r8152 *tp)
++{
++	return (ALIGN(mtu_to_size(tp->netdev->mtu), 1024) + 14 * 1024);
++}
++
++static void r8156_fc_parameter(struct r8152 *tp)
++{
++	u32 pause_on = tp->fc_pause_on ? tp->fc_pause_on : fc_pause_on_auto(tp);
++	u32 pause_off = tp->fc_pause_off ? tp->fc_pause_off : fc_pause_off_auto(tp);
++
++	ocp_write_word(tp, MCU_TYPE_PLA, PLA_RX_FIFO_FULL, pause_on / 16);
++	ocp_write_word(tp, MCU_TYPE_PLA, PLA_RX_FIFO_EMPTY, pause_off / 16);
++}
++
+ static int rtl8156_enable(struct r8152 *tp)
+ {
+ 	u32 ocp_data;
+@@ -5994,6 +6019,7 @@ static int rtl8156_enable(struct r8152 *tp)
+ 	if (test_bit(RTL8152_UNPLUG, &tp->flags))
+ 		return -ENODEV;
+ 
++	r8156_fc_parameter(tp);
+ 	set_tx_qlen(tp);
+ 	rtl_set_eee_plus(tp);
+ 	r8153_set_rx_early_timeout(tp);
+@@ -6025,9 +6051,24 @@ static int rtl8156_enable(struct r8152 *tp)
+ 		ocp_write_word(tp, MCU_TYPE_USB, USB_L1_CTRL, ocp_data);
+ 	}
+ 
++	ocp_data = ocp_read_word(tp, MCU_TYPE_USB, USB_FW_TASK);
++	ocp_data &= ~FC_PATCH_TASK;
++	ocp_write_word(tp, MCU_TYPE_USB, USB_FW_TASK, ocp_data);
++	usleep_range(1000, 2000);
++	ocp_data |= FC_PATCH_TASK;
++	ocp_write_word(tp, MCU_TYPE_USB, USB_FW_TASK, ocp_data);
++
+ 	return rtl_enable(tp);
+ }
+ 
++static void rtl8156_disable(struct r8152 *tp)
++{
++	ocp_write_word(tp, MCU_TYPE_PLA, PLA_RX_FIFO_FULL, 0);
++	ocp_write_word(tp, MCU_TYPE_PLA, PLA_RX_FIFO_EMPTY, 0);
++
++	rtl8153_disable(tp);
++}
++
+ static int rtl8156b_enable(struct r8152 *tp)
+ {
+ 	u32 ocp_data;
+@@ -6429,25 +6470,6 @@ static void rtl8153c_up(struct r8152 *tp)
+ 	r8153b_u1u2en(tp, true);
+ }
+ 
+-static inline u32 fc_pause_on_auto(struct r8152 *tp)
+-{
+-	return (ALIGN(mtu_to_size(tp->netdev->mtu), 1024) + 6 * 1024);
+-}
+-
+-static inline u32 fc_pause_off_auto(struct r8152 *tp)
+-{
+-	return (ALIGN(mtu_to_size(tp->netdev->mtu), 1024) + 14 * 1024);
+-}
+-
+-static void r8156_fc_parameter(struct r8152 *tp)
+-{
+-	u32 pause_on = tp->fc_pause_on ? tp->fc_pause_on : fc_pause_on_auto(tp);
+-	u32 pause_off = tp->fc_pause_off ? tp->fc_pause_off : fc_pause_off_auto(tp);
+-
+-	ocp_write_word(tp, MCU_TYPE_PLA, PLA_RX_FIFO_FULL, pause_on / 16);
+-	ocp_write_word(tp, MCU_TYPE_PLA, PLA_RX_FIFO_EMPTY, pause_off / 16);
+-}
+-
+ static void rtl8156_change_mtu(struct r8152 *tp)
+ {
+ 	u32 rx_max_size = mtu_to_size(tp->netdev->mtu);
+@@ -7538,6 +7560,11 @@ static void r8156_hw_phy_cfg(struct r8152 *tp)
+ 				      ((swap_a & 0x1f) << 8) |
+ 				      ((swap_a >> 8) & 0x1f));
+ 		}
++
++		/* Notify the MAC when the speed is changed to force mode. */
++		data = ocp_reg_read(tp, OCP_INTR_EN);
++		data |= INTR_SPEED_FORCE;
++		ocp_reg_write(tp, OCP_INTR_EN, data);
+ 		break;
+ 	default:
+ 		break;
+@@ -7933,6 +7960,11 @@ static void r8156b_hw_phy_cfg(struct r8152 *tp)
+ 		break;
+ 	}
+ 
++	/* Notify the MAC when the speed is changed to force mode. */
++	data = ocp_reg_read(tp, OCP_INTR_EN);
++	data |= INTR_SPEED_FORCE;
++	ocp_reg_write(tp, OCP_INTR_EN, data);
++
+ 	if (rtl_phy_patch_request(tp, true, true))
+ 		return;
+ 
+@@ -9340,7 +9372,7 @@ static int rtl_ops_init(struct r8152 *tp)
+ 	case RTL_VER_10:
+ 		ops->init		= r8156_init;
+ 		ops->enable		= rtl8156_enable;
+-		ops->disable		= rtl8153_disable;
++		ops->disable		= rtl8156_disable;
+ 		ops->up			= rtl8156_up;
+ 		ops->down		= rtl8156_down;
+ 		ops->unload		= rtl8153_unload;
+@@ -9878,6 +9910,7 @@ static struct usb_device_driver rtl8152_cfgselector_driver = {
+ 	.probe =	rtl8152_cfgselector_probe,
+ 	.id_table =	rtl8152_table,
+ 	.generic_subclass = 1,
++	.supports_autosuspend = 1,
+ };
+ 
+ static int __init rtl8152_driver_init(void)
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index ea1bd4bb326d1..744bdc8a1abd2 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -3559,12 +3559,14 @@ static void free_unused_bufs(struct virtnet_info *vi)
+ 		struct virtqueue *vq = vi->sq[i].vq;
+ 		while ((buf = virtqueue_detach_unused_buf(vq)) != NULL)
+ 			virtnet_sq_free_unused_buf(vq, buf);
++		cond_resched();
+ 	}
+ 
+ 	for (i = 0; i < vi->max_queue_pairs; i++) {
+ 		struct virtqueue *vq = vi->rq[i].vq;
+ 		while ((buf = virtqueue_detach_unused_buf(vq)) != NULL)
+ 			virtnet_rq_free_unused_buf(vq, buf);
++		cond_resched();
+ 	}
+ }
+ 
+diff --git a/drivers/platform/x86/hp/hp-wmi.c b/drivers/platform/x86/hp/hp-wmi.c
+index 873f59c3e2804..6364ae2627058 100644
+--- a/drivers/platform/x86/hp/hp-wmi.c
++++ b/drivers/platform/x86/hp/hp-wmi.c
+@@ -211,6 +211,7 @@ struct bios_rfkill2_state {
+ static const struct key_entry hp_wmi_keymap[] = {
+ 	{ KE_KEY, 0x02,    { KEY_BRIGHTNESSUP } },
+ 	{ KE_KEY, 0x03,    { KEY_BRIGHTNESSDOWN } },
++	{ KE_KEY, 0x270,   { KEY_MICMUTE } },
+ 	{ KE_KEY, 0x20e6,  { KEY_PROG1 } },
+ 	{ KE_KEY, 0x20e8,  { KEY_MEDIA } },
+ 	{ KE_KEY, 0x2142,  { KEY_MEDIA } },
+diff --git a/drivers/platform/x86/intel/uncore-frequency/uncore-frequency-common.c b/drivers/platform/x86/intel/uncore-frequency/uncore-frequency-common.c
+index cb24de9e97dc5..fa8f14c925ec3 100644
+--- a/drivers/platform/x86/intel/uncore-frequency/uncore-frequency-common.c
++++ b/drivers/platform/x86/intel/uncore-frequency/uncore-frequency-common.c
+@@ -44,14 +44,18 @@ static ssize_t store_min_max_freq_khz(struct uncore_data *data,
+ 				      int min_max)
+ {
+ 	unsigned int input;
++	int ret;
+ 
+ 	if (kstrtouint(buf, 10, &input))
+ 		return -EINVAL;
+ 
+ 	mutex_lock(&uncore_lock);
+-	uncore_write(data, input, min_max);
++	ret = uncore_write(data, input, min_max);
+ 	mutex_unlock(&uncore_lock);
+ 
++	if (ret)
++		return ret;
++
+ 	return count;
+ }
+ 
+diff --git a/drivers/platform/x86/thinkpad_acpi.c b/drivers/platform/x86/thinkpad_acpi.c
+index 7191ff2625b1e..e40cbe81b12c1 100644
+--- a/drivers/platform/x86/thinkpad_acpi.c
++++ b/drivers/platform/x86/thinkpad_acpi.c
+@@ -10318,6 +10318,7 @@ static atomic_t dytc_ignore_event = ATOMIC_INIT(0);
+ static DEFINE_MUTEX(dytc_mutex);
+ static int dytc_capabilities;
+ static bool dytc_mmc_get_available;
++static int profile_force;
+ 
+ static int convert_dytc_to_profile(int funcmode, int dytcmode,
+ 		enum platform_profile_option *profile)
+@@ -10580,6 +10581,21 @@ static int tpacpi_dytc_profile_init(struct ibm_init_struct *iibm)
+ 	if (err)
+ 		return err;
+ 
++	/* Check if user wants to override the profile selection */
++	if (profile_force) {
++		switch (profile_force) {
++		case -1:
++			dytc_capabilities = 0;
++			break;
++		case 1:
++			dytc_capabilities = BIT(DYTC_FC_MMC);
++			break;
++		case 2:
++			dytc_capabilities = BIT(DYTC_FC_PSC);
++			break;
++		}
++		pr_debug("Profile selection forced: 0x%x\n", dytc_capabilities);
++	}
+ 	if (dytc_capabilities & BIT(DYTC_FC_MMC)) { /* MMC MODE */
+ 		pr_debug("MMC is supported\n");
+ 		/*
+@@ -10593,11 +10609,6 @@ static int tpacpi_dytc_profile_init(struct ibm_init_struct *iibm)
+ 				dytc_mmc_get_available = true;
+ 		}
+ 	} else if (dytc_capabilities & BIT(DYTC_FC_PSC)) { /* PSC MODE */
+-		/* Support for this only works on AMD platforms */
+-		if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD) {
+-			dbg_printk(TPACPI_DBG_INIT, "PSC not support on Intel platforms\n");
+-			return -ENODEV;
+-		}
+ 		pr_debug("PSC is supported\n");
+ 	} else {
+ 		dbg_printk(TPACPI_DBG_INIT, "No DYTC support available\n");
+@@ -11646,6 +11657,9 @@ MODULE_PARM_DESC(uwb_state,
+ 		 "Initial state of the emulated UWB switch");
+ #endif
+ 
++module_param(profile_force, int, 0444);
++MODULE_PARM_DESC(profile_force, "Force profile mode. -1=off, 1=MMC, 2=PSC");
++
+ static void thinkpad_acpi_module_exit(void)
+ {
+ 	struct ibm_struct *ibm, *itmp;
+diff --git a/drivers/platform/x86/touchscreen_dmi.c b/drivers/platform/x86/touchscreen_dmi.c
+index 13802a3c3591d..68e66b60445c3 100644
+--- a/drivers/platform/x86/touchscreen_dmi.c
++++ b/drivers/platform/x86/touchscreen_dmi.c
+@@ -336,6 +336,22 @@ static const struct ts_dmi_data dexp_ursus_7w_data = {
+ 	.properties	= dexp_ursus_7w_props,
+ };
+ 
++static const struct property_entry dexp_ursus_kx210i_props[] = {
++	PROPERTY_ENTRY_U32("touchscreen-min-x", 5),
++	PROPERTY_ENTRY_U32("touchscreen-min-y",  2),
++	PROPERTY_ENTRY_U32("touchscreen-size-x", 1720),
++	PROPERTY_ENTRY_U32("touchscreen-size-y", 1137),
++	PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-dexp-ursus-kx210i.fw"),
++	PROPERTY_ENTRY_U32("silead,max-fingers", 10),
++	PROPERTY_ENTRY_BOOL("silead,home-button"),
++	{ }
++};
++
++static const struct ts_dmi_data dexp_ursus_kx210i_data = {
++	.acpi_name	= "MSSL1680:00",
++	.properties	= dexp_ursus_kx210i_props,
++};
++
+ static const struct property_entry digma_citi_e200_props[] = {
+ 	PROPERTY_ENTRY_U32("touchscreen-size-x", 1980),
+ 	PROPERTY_ENTRY_U32("touchscreen-size-y", 1500),
+@@ -378,6 +394,11 @@ static const struct ts_dmi_data gdix1001_01_upside_down_data = {
+ 	.properties	= gdix1001_upside_down_props,
+ };
+ 
++static const struct ts_dmi_data gdix1002_00_upside_down_data = {
++	.acpi_name	= "GDIX1002:00",
++	.properties	= gdix1001_upside_down_props,
++};
++
+ static const struct property_entry gp_electronic_t701_props[] = {
+ 	PROPERTY_ENTRY_U32("touchscreen-size-x", 960),
+ 	PROPERTY_ENTRY_U32("touchscreen-size-y", 640),
+@@ -1185,6 +1206,14 @@ const struct dmi_system_id touchscreen_dmi_table[] = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "7W"),
+ 		},
+ 	},
++	{
++		/* DEXP Ursus KX210i */
++		.driver_data = (void *)&dexp_ursus_kx210i_data,
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "INSYDE Corp."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "S107I"),
++		},
++	},
+ 	{
+ 		/* Digma Citi E200 */
+ 		.driver_data = (void *)&digma_citi_e200_data,
+@@ -1295,6 +1324,18 @@ const struct dmi_system_id touchscreen_dmi_table[] = {
+ 			DMI_MATCH(DMI_BIOS_VERSION, "jumperx.T87.KFBNEEA"),
+ 		},
+ 	},
++	{
++		/* Juno Tablet */
++		.driver_data = (void *)&gdix1002_00_upside_down_data,
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Default string"),
++			/* Both product- and board-name being "Default string" is somewhat rare */
++			DMI_MATCH(DMI_PRODUCT_NAME, "Default string"),
++			DMI_MATCH(DMI_BOARD_NAME, "Default string"),
++			/* Above matches are too generic, add partial bios-version match */
++			DMI_MATCH(DMI_BIOS_VERSION, "JP2V1."),
++		},
++	},
+ 	{
+ 		/* Mediacom WinPad 7.0 W700 (same hw as Wintron surftab 7") */
+ 		.driver_data = (void *)&trekstor_surftab_wintron70_data,
+diff --git a/drivers/remoteproc/imx_dsp_rproc.c b/drivers/remoteproc/imx_dsp_rproc.c
+index 95da1cbefacf0..506ec9565716b 100644
+--- a/drivers/remoteproc/imx_dsp_rproc.c
++++ b/drivers/remoteproc/imx_dsp_rproc.c
+@@ -627,15 +627,19 @@ static int imx_dsp_rproc_add_carveout(struct imx_dsp_rproc *priv)
+ 
+ 		rmem = of_reserved_mem_lookup(it.node);
+ 		if (!rmem) {
++			of_node_put(it.node);
+ 			dev_err(dev, "unable to acquire memory-region\n");
+ 			return -EINVAL;
+ 		}
+ 
+-		if (imx_dsp_rproc_sys_to_da(priv, rmem->base, rmem->size, &da))
++		if (imx_dsp_rproc_sys_to_da(priv, rmem->base, rmem->size, &da)) {
++			of_node_put(it.node);
+ 			return -EINVAL;
++		}
+ 
+ 		cpu_addr = devm_ioremap_wc(dev, rmem->base, rmem->size);
+ 		if (!cpu_addr) {
++			of_node_put(it.node);
+ 			dev_err(dev, "failed to map memory %p\n", &rmem->base);
+ 			return -ENOMEM;
+ 		}
+@@ -644,10 +648,12 @@ static int imx_dsp_rproc_add_carveout(struct imx_dsp_rproc *priv)
+ 		mem = rproc_mem_entry_init(dev, (void __force *)cpu_addr, (dma_addr_t)rmem->base,
+ 					   rmem->size, da, NULL, NULL, it.node->name);
+ 
+-		if (mem)
++		if (mem) {
+ 			rproc_coredump_add_segment(rproc, da, rmem->size);
+-		else
++		} else {
++			of_node_put(it.node);
+ 			return -ENOMEM;
++		}
+ 
+ 		rproc_add_carveout(rproc, mem);
+ 	}
+diff --git a/drivers/remoteproc/imx_rproc.c b/drivers/remoteproc/imx_rproc.c
+index 9fc978e0393ce..0ab840dc7e97f 100644
+--- a/drivers/remoteproc/imx_rproc.c
++++ b/drivers/remoteproc/imx_rproc.c
+@@ -541,6 +541,7 @@ static int imx_rproc_prepare(struct rproc *rproc)
+ 
+ 		rmem = of_reserved_mem_lookup(it.node);
+ 		if (!rmem) {
++			of_node_put(it.node);
+ 			dev_err(priv->dev, "unable to acquire memory-region\n");
+ 			return -EINVAL;
+ 		}
+@@ -553,10 +554,12 @@ static int imx_rproc_prepare(struct rproc *rproc)
+ 					   imx_rproc_mem_alloc, imx_rproc_mem_release,
+ 					   it.node->name);
+ 
+-		if (mem)
++		if (mem) {
+ 			rproc_coredump_add_segment(rproc, da, rmem->size);
+-		else
++		} else {
++			of_node_put(it.node);
+ 			return -ENOMEM;
++		}
+ 
+ 		rproc_add_carveout(rproc, mem);
+ 	}
+diff --git a/drivers/remoteproc/rcar_rproc.c b/drivers/remoteproc/rcar_rproc.c
+index aa86154109c77..1ff2a73ade907 100644
+--- a/drivers/remoteproc/rcar_rproc.c
++++ b/drivers/remoteproc/rcar_rproc.c
+@@ -62,13 +62,16 @@ static int rcar_rproc_prepare(struct rproc *rproc)
+ 
+ 		rmem = of_reserved_mem_lookup(it.node);
+ 		if (!rmem) {
++			of_node_put(it.node);
+ 			dev_err(&rproc->dev,
+ 				"unable to acquire memory-region\n");
+ 			return -EINVAL;
+ 		}
+ 
+-		if (rmem->base > U32_MAX)
++		if (rmem->base > U32_MAX) {
++			of_node_put(it.node);
+ 			return -EINVAL;
++		}
+ 
+ 		/* No need to translate pa to da, R-Car use same map */
+ 		da = rmem->base;
+@@ -79,8 +82,10 @@ static int rcar_rproc_prepare(struct rproc *rproc)
+ 					   rcar_rproc_mem_release,
+ 					   it.node->name);
+ 
+-		if (!mem)
++		if (!mem) {
++			of_node_put(it.node);
+ 			return -ENOMEM;
++		}
+ 
+ 		rproc_add_carveout(rproc, mem);
+ 	}
+diff --git a/drivers/remoteproc/st_remoteproc.c b/drivers/remoteproc/st_remoteproc.c
+index a3268d95a50e6..e6bd3c7a950a2 100644
+--- a/drivers/remoteproc/st_remoteproc.c
++++ b/drivers/remoteproc/st_remoteproc.c
+@@ -129,6 +129,7 @@ static int st_rproc_parse_fw(struct rproc *rproc, const struct firmware *fw)
+ 	while (of_phandle_iterator_next(&it) == 0) {
+ 		rmem = of_reserved_mem_lookup(it.node);
+ 		if (!rmem) {
++			of_node_put(it.node);
+ 			dev_err(dev, "unable to acquire memory-region\n");
+ 			return -EINVAL;
+ 		}
+@@ -150,8 +151,10 @@ static int st_rproc_parse_fw(struct rproc *rproc, const struct firmware *fw)
+ 							   it.node->name);
+ 		}
+ 
+-		if (!mem)
++		if (!mem) {
++			of_node_put(it.node);
+ 			return -ENOMEM;
++		}
+ 
+ 		rproc_add_carveout(rproc, mem);
+ 		index++;
+diff --git a/drivers/remoteproc/stm32_rproc.c b/drivers/remoteproc/stm32_rproc.c
+index 7d782ed9e5896..23c1690b8d73f 100644
+--- a/drivers/remoteproc/stm32_rproc.c
++++ b/drivers/remoteproc/stm32_rproc.c
+@@ -223,11 +223,13 @@ static int stm32_rproc_prepare(struct rproc *rproc)
+ 	while (of_phandle_iterator_next(&it) == 0) {
+ 		rmem = of_reserved_mem_lookup(it.node);
+ 		if (!rmem) {
++			of_node_put(it.node);
+ 			dev_err(dev, "unable to acquire memory-region\n");
+ 			return -EINVAL;
+ 		}
+ 
+ 		if (stm32_rproc_pa_to_da(rproc, rmem->base, &da) < 0) {
++			of_node_put(it.node);
+ 			dev_err(dev, "memory region not valid %pa\n",
+ 				&rmem->base);
+ 			return -EINVAL;
+@@ -254,8 +256,10 @@ static int stm32_rproc_prepare(struct rproc *rproc)
+ 							   it.node->name);
+ 		}
+ 
+-		if (!mem)
++		if (!mem) {
++			of_node_put(it.node);
+ 			return -ENOMEM;
++		}
+ 
+ 		rproc_add_carveout(rproc, mem);
+ 		index++;
+diff --git a/drivers/scsi/qedi/qedi_main.c b/drivers/scsi/qedi/qedi_main.c
+index f2ee49756df8d..45d3595541820 100644
+--- a/drivers/scsi/qedi/qedi_main.c
++++ b/drivers/scsi/qedi/qedi_main.c
+@@ -2450,6 +2450,9 @@ static void __qedi_remove(struct pci_dev *pdev, int mode)
+ 		qedi_ops->ll2->stop(qedi->cdev);
+ 	}
+ 
++	cancel_delayed_work_sync(&qedi->recovery_work);
++	cancel_delayed_work_sync(&qedi->board_disable_work);
++
+ 	qedi_free_iscsi_pf_param(qedi);
+ 
+ 	rval = qedi_ops->common->update_drv_state(qedi->cdev, false);
+diff --git a/drivers/soc/qcom/llcc-qcom.c b/drivers/soc/qcom/llcc-qcom.c
+index 26efe12012a0d..d4d3eced52f35 100644
+--- a/drivers/soc/qcom/llcc-qcom.c
++++ b/drivers/soc/qcom/llcc-qcom.c
+@@ -122,10 +122,11 @@ struct llcc_slice_config {
+ 
+ struct qcom_llcc_config {
+ 	const struct llcc_slice_config *sct_data;
+-	int size;
+-	bool need_llcc_cfg;
+ 	const u32 *reg_offset;
+ 	const struct llcc_edac_reg_offset *edac_reg_offset;
++	int size;
++	bool need_llcc_cfg;
++	bool no_edac;
+ };
+ 
+ enum llcc_reg_offset {
+@@ -454,6 +455,7 @@ static const struct qcom_llcc_config sdm845_cfg = {
+ 	.need_llcc_cfg	= false,
+ 	.reg_offset	= llcc_v1_reg_offset,
+ 	.edac_reg_offset = &llcc_v1_edac_reg_offset,
++	.no_edac	= true,
+ };
+ 
+ static const struct qcom_llcc_config sm6350_cfg = {
+@@ -1001,7 +1003,14 @@ static int qcom_llcc_probe(struct platform_device *pdev)
+ 		goto err;
+ 
+ 	drv_data->ecc_irq = platform_get_irq_optional(pdev, 0);
+-	if (drv_data->ecc_irq >= 0) {
++
++	/*
++	 * On some platforms, the access to EDAC registers will be locked by
++	 * the bootloader. So probing the EDAC driver will result in a crash.
++	 * Hence, disable the creation of EDAC platform device for the
++	 * problematic platforms.
++	 */
++	if (!cfg->no_edac) {
+ 		llcc_edac = platform_device_register_data(&pdev->dev,
+ 						"qcom_llcc_edac", -1, drv_data,
+ 						sizeof(*drv_data));
+diff --git a/drivers/spi/spi-fsl-cpm.c b/drivers/spi/spi-fsl-cpm.c
+index 17a44d4f50218..38452089e8f35 100644
+--- a/drivers/spi/spi-fsl-cpm.c
++++ b/drivers/spi/spi-fsl-cpm.c
+@@ -21,6 +21,7 @@
+ #include <linux/spi/spi.h>
+ #include <linux/types.h>
+ #include <linux/platform_device.h>
++#include <linux/byteorder/generic.h>
+ 
+ #include "spi-fsl-cpm.h"
+ #include "spi-fsl-lib.h"
+@@ -120,6 +121,21 @@ int fsl_spi_cpm_bufs(struct mpc8xxx_spi *mspi,
+ 		mspi->rx_dma = mspi->dma_dummy_rx;
+ 		mspi->map_rx_dma = 0;
+ 	}
++	if (t->bits_per_word == 16 && t->tx_buf) {
++		const u16 *src = t->tx_buf;
++		u16 *dst;
++		int i;
++
++		dst = kmalloc(t->len, GFP_KERNEL);
++		if (!dst)
++			return -ENOMEM;
++
++		for (i = 0; i < t->len >> 1; i++)
++			dst[i] = cpu_to_le16p(src + i);
++
++		mspi->tx = dst;
++		mspi->map_tx_dma = 1;
++	}
+ 
+ 	if (mspi->map_tx_dma) {
+ 		void *nonconst_tx = (void *)mspi->tx; /* shut up gcc */
+@@ -173,6 +189,13 @@ void fsl_spi_cpm_bufs_complete(struct mpc8xxx_spi *mspi)
+ 	if (mspi->map_rx_dma)
+ 		dma_unmap_single(dev, mspi->rx_dma, t->len, DMA_FROM_DEVICE);
+ 	mspi->xfer_in_progress = NULL;
++
++	if (t->bits_per_word == 16 && t->rx_buf) {
++		int i;
++
++		for (i = 0; i < t->len; i += 2)
++			le16_to_cpus(t->rx_buf + i);
++	}
+ }
+ EXPORT_SYMBOL_GPL(fsl_spi_cpm_bufs_complete);
+ 
+diff --git a/drivers/spi/spi-fsl-spi.c b/drivers/spi/spi-fsl-spi.c
+index 5602f052b2b50..b14f430a699d0 100644
+--- a/drivers/spi/spi-fsl-spi.c
++++ b/drivers/spi/spi-fsl-spi.c
+@@ -177,26 +177,6 @@ static int mspi_apply_cpu_mode_quirks(struct spi_mpc8xxx_cs *cs,
+ 	return bits_per_word;
+ }
+ 
+-static int mspi_apply_qe_mode_quirks(struct spi_mpc8xxx_cs *cs,
+-				struct spi_device *spi,
+-				int bits_per_word)
+-{
+-	/* CPM/QE uses Little Endian for words > 8
+-	 * so transform 16 and 32 bits words into 8 bits
+-	 * Unfortnatly that doesn't work for LSB so
+-	 * reject these for now */
+-	/* Note: 32 bits word, LSB works iff
+-	 * tfcr/rfcr is set to CPMFCR_GBL */
+-	if (spi->mode & SPI_LSB_FIRST &&
+-	    bits_per_word > 8)
+-		return -EINVAL;
+-	if (bits_per_word <= 8)
+-		return bits_per_word;
+-	if (bits_per_word == 16 || bits_per_word == 32)
+-		return 8; /* pretend its 8 bits */
+-	return -EINVAL;
+-}
+-
+ static int fsl_spi_setup_transfer(struct spi_device *spi,
+ 					struct spi_transfer *t)
+ {
+@@ -224,9 +204,6 @@ static int fsl_spi_setup_transfer(struct spi_device *spi,
+ 		bits_per_word = mspi_apply_cpu_mode_quirks(cs, spi,
+ 							   mpc8xxx_spi,
+ 							   bits_per_word);
+-	else
+-		bits_per_word = mspi_apply_qe_mode_quirks(cs, spi,
+-							  bits_per_word);
+ 
+ 	if (bits_per_word < 0)
+ 		return bits_per_word;
+@@ -361,6 +338,22 @@ static int fsl_spi_prepare_message(struct spi_controller *ctlr,
+ 				t->bits_per_word = 32;
+ 			else if ((t->len & 1) == 0)
+ 				t->bits_per_word = 16;
++		} else {
++			/*
++			 * CPM/QE uses Little Endian for words > 8
++			 * so transform 16 and 32 bits words into 8 bits
++			 * Unfortnatly that doesn't work for LSB so
++			 * reject these for now
++			 * Note: 32 bits word, LSB works iff
++			 * tfcr/rfcr is set to CPMFCR_GBL
++			 */
++			if (m->spi->mode & SPI_LSB_FIRST && t->bits_per_word > 8)
++				return -EINVAL;
++			if (t->bits_per_word == 16 || t->bits_per_word == 32)
++				t->bits_per_word = 8; /* pretend its 8 bits */
++			if (t->bits_per_word == 8 && t->len >= 256 &&
++			    (mpc8xxx_spi->flags & SPI_CPM1))
++				t->bits_per_word = 16;
+ 		}
+ 	}
+ 	return fsl_spi_setup_transfer(m->spi, first);
+@@ -594,8 +587,14 @@ static struct spi_master *fsl_spi_probe(struct device *dev,
+ 	if (mpc8xxx_spi->type == TYPE_GRLIB)
+ 		fsl_spi_grlib_probe(dev);
+ 
+-	master->bits_per_word_mask =
+-		(SPI_BPW_RANGE_MASK(4, 16) | SPI_BPW_MASK(32)) &
++	if (mpc8xxx_spi->flags & SPI_CPM_MODE)
++		master->bits_per_word_mask =
++			(SPI_BPW_RANGE_MASK(4, 8) | SPI_BPW_MASK(16) | SPI_BPW_MASK(32));
++	else
++		master->bits_per_word_mask =
++			(SPI_BPW_RANGE_MASK(4, 16) | SPI_BPW_MASK(32));
++
++	master->bits_per_word_mask &=
+ 		SPI_BPW_RANGE_MASK(1, mpc8xxx_spi->max_bits_per_word);
+ 
+ 	if (mpc8xxx_spi->flags & SPI_QE_CPU_MODE)
+diff --git a/drivers/ufs/core/ufs-mcq.c b/drivers/ufs/core/ufs-mcq.c
+index 31df052fbc417..202ff71e1b582 100644
+--- a/drivers/ufs/core/ufs-mcq.c
++++ b/drivers/ufs/core/ufs-mcq.c
+@@ -299,11 +299,11 @@ EXPORT_SYMBOL_GPL(ufshcd_mcq_poll_cqe_nolock);
+ unsigned long ufshcd_mcq_poll_cqe_lock(struct ufs_hba *hba,
+ 				       struct ufs_hw_queue *hwq)
+ {
+-	unsigned long completed_reqs;
++	unsigned long completed_reqs, flags;
+ 
+-	spin_lock(&hwq->cq_lock);
++	spin_lock_irqsave(&hwq->cq_lock, flags);
+ 	completed_reqs = ufshcd_mcq_poll_cqe_nolock(hba, hwq);
+-	spin_unlock(&hwq->cq_lock);
++	spin_unlock_irqrestore(&hwq->cq_lock, flags);
+ 
+ 	return completed_reqs;
+ }
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 3faac3244c7db..e63700937ba8c 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -2478,7 +2478,7 @@ static void __dwc3_gadget_set_speed(struct dwc3 *dwc)
+ 	dwc3_writel(dwc->regs, DWC3_DCFG, reg);
+ }
+ 
+-static int dwc3_gadget_run_stop(struct dwc3 *dwc, int is_on, int suspend)
++static int dwc3_gadget_run_stop(struct dwc3 *dwc, int is_on)
+ {
+ 	u32			reg;
+ 	u32			timeout = 2000;
+@@ -2497,17 +2497,11 @@ static int dwc3_gadget_run_stop(struct dwc3 *dwc, int is_on, int suspend)
+ 			reg &= ~DWC3_DCTL_KEEP_CONNECT;
+ 		reg |= DWC3_DCTL_RUN_STOP;
+ 
+-		if (dwc->has_hibernation)
+-			reg |= DWC3_DCTL_KEEP_CONNECT;
+-
+ 		__dwc3_gadget_set_speed(dwc);
+ 		dwc->pullups_connected = true;
+ 	} else {
+ 		reg &= ~DWC3_DCTL_RUN_STOP;
+ 
+-		if (dwc->has_hibernation && !suspend)
+-			reg &= ~DWC3_DCTL_KEEP_CONNECT;
+-
+ 		dwc->pullups_connected = false;
+ 	}
+ 
+@@ -2552,7 +2546,6 @@ static int dwc3_gadget_soft_disconnect(struct dwc3 *dwc)
+ 	 * bit.
+ 	 */
+ 	dwc3_stop_active_transfers(dwc);
+-	__dwc3_gadget_stop(dwc);
+ 	spin_unlock_irqrestore(&dwc->lock, flags);
+ 
+ 	/*
+@@ -2589,7 +2582,19 @@ static int dwc3_gadget_soft_disconnect(struct dwc3 *dwc)
+ 	 * remaining event generated by the controller while polling for
+ 	 * DSTS.DEVCTLHLT.
+ 	 */
+-	return dwc3_gadget_run_stop(dwc, false, false);
++	ret = dwc3_gadget_run_stop(dwc, false);
++
++	/*
++	 * Stop the gadget after controller is halted, so that if needed, the
++	 * events to update EP0 state can still occur while the run/stop
++	 * routine polls for the halted state.  DEVTEN is cleared as part of
++	 * gadget stop.
++	 */
++	spin_lock_irqsave(&dwc->lock, flags);
++	__dwc3_gadget_stop(dwc);
++	spin_unlock_irqrestore(&dwc->lock, flags);
++
++	return ret;
+ }
+ 
+ static int dwc3_gadget_pullup(struct usb_gadget *g, int is_on)
+@@ -2643,7 +2648,7 @@ static int dwc3_gadget_pullup(struct usb_gadget *g, int is_on)
+ 
+ 		dwc3_event_buffers_setup(dwc);
+ 		__dwc3_gadget_start(dwc);
+-		ret = dwc3_gadget_run_stop(dwc, true, false);
++		ret = dwc3_gadget_run_stop(dwc, true);
+ 	}
+ 
+ 	pm_runtime_put(dwc->dev);
+@@ -4210,30 +4215,6 @@ static void dwc3_gadget_suspend_interrupt(struct dwc3 *dwc,
+ 	dwc->link_state = next;
+ }
+ 
+-static void dwc3_gadget_hibernation_interrupt(struct dwc3 *dwc,
+-		unsigned int evtinfo)
+-{
+-	unsigned int is_ss = evtinfo & BIT(4);
+-
+-	/*
+-	 * WORKAROUND: DWC3 revision 2.20a with hibernation support
+-	 * have a known issue which can cause USB CV TD.9.23 to fail
+-	 * randomly.
+-	 *
+-	 * Because of this issue, core could generate bogus hibernation
+-	 * events which SW needs to ignore.
+-	 *
+-	 * Refers to:
+-	 *
+-	 * STAR#9000546576: Device Mode Hibernation: Issue in USB 2.0
+-	 * Device Fallback from SuperSpeed
+-	 */
+-	if (is_ss ^ (dwc->speed == USB_SPEED_SUPER))
+-		return;
+-
+-	/* enter hibernation here */
+-}
+-
+ static void dwc3_gadget_interrupt(struct dwc3 *dwc,
+ 		const struct dwc3_event_devt *event)
+ {
+@@ -4251,11 +4232,7 @@ static void dwc3_gadget_interrupt(struct dwc3 *dwc,
+ 		dwc3_gadget_wakeup_interrupt(dwc);
+ 		break;
+ 	case DWC3_DEVICE_EVENT_HIBER_REQ:
+-		if (dev_WARN_ONCE(dwc->dev, !dwc->has_hibernation,
+-					"unexpected hibernation event\n"))
+-			break;
+-
+-		dwc3_gadget_hibernation_interrupt(dwc, event->event_info);
++		dev_WARN_ONCE(dwc->dev, true, "unexpected hibernation event\n");
+ 		break;
+ 	case DWC3_DEVICE_EVENT_LINK_STATUS_CHANGE:
+ 		dwc3_gadget_linksts_change_interrupt(dwc, event->event_info);
+@@ -4592,7 +4569,7 @@ int dwc3_gadget_suspend(struct dwc3 *dwc)
+ 	if (!dwc->gadget_driver)
+ 		return 0;
+ 
+-	dwc3_gadget_run_stop(dwc, false, false);
++	dwc3_gadget_run_stop(dwc, false);
+ 
+ 	spin_lock_irqsave(&dwc->lock, flags);
+ 	dwc3_disconnect_gadget(dwc);
+@@ -4613,7 +4590,7 @@ int dwc3_gadget_resume(struct dwc3 *dwc)
+ 	if (ret < 0)
+ 		goto err0;
+ 
+-	ret = dwc3_gadget_run_stop(dwc, true, false);
++	ret = dwc3_gadget_run_stop(dwc, true);
+ 	if (ret < 0)
+ 		goto err1;
+ 
+diff --git a/drivers/watchdog/dw_wdt.c b/drivers/watchdog/dw_wdt.c
+index 462f15bd5ffa6..8e2ac522c945d 100644
+--- a/drivers/watchdog/dw_wdt.c
++++ b/drivers/watchdog/dw_wdt.c
+@@ -635,7 +635,7 @@ static int dw_wdt_drv_probe(struct platform_device *pdev)
+ 
+ 	ret = dw_wdt_init_timeouts(dw_wdt, dev);
+ 	if (ret)
+-		goto out_disable_clk;
++		goto out_assert_rst;
+ 
+ 	wdd = &dw_wdt->wdd;
+ 	wdd->ops = &dw_wdt_ops;
+@@ -667,12 +667,15 @@ static int dw_wdt_drv_probe(struct platform_device *pdev)
+ 
+ 	ret = watchdog_register_device(wdd);
+ 	if (ret)
+-		goto out_disable_pclk;
++		goto out_assert_rst;
+ 
+ 	dw_wdt_dbgfs_init(dw_wdt);
+ 
+ 	return 0;
+ 
++out_assert_rst:
++	reset_control_assert(dw_wdt->rst);
++
+ out_disable_pclk:
+ 	clk_disable_unprepare(dw_wdt->pclk);
+ 
+diff --git a/fs/afs/afs.h b/fs/afs/afs.h
+index 432cb4b239614..81815724db6c9 100644
+--- a/fs/afs/afs.h
++++ b/fs/afs/afs.h
+@@ -19,8 +19,8 @@
+ #define AFSPATHMAX		1024	/* Maximum length of a pathname plus NUL */
+ #define AFSOPAQUEMAX		1024	/* Maximum length of an opaque field */
+ 
+-#define AFS_VL_MAX_LIFESPAN	(120 * HZ)
+-#define AFS_PROBE_MAX_LIFESPAN	(30 * HZ)
++#define AFS_VL_MAX_LIFESPAN	120
++#define AFS_PROBE_MAX_LIFESPAN	30
+ 
+ typedef u64			afs_volid_t;
+ typedef u64			afs_vnodeid_t;
+diff --git a/fs/afs/internal.h b/fs/afs/internal.h
+index ad8523d0d0386..68ae91d21b578 100644
+--- a/fs/afs/internal.h
++++ b/fs/afs/internal.h
+@@ -128,7 +128,7 @@ struct afs_call {
+ 	spinlock_t		state_lock;
+ 	int			error;		/* error code */
+ 	u32			abort_code;	/* Remote abort ID or 0 */
+-	unsigned int		max_lifespan;	/* Maximum lifespan to set if not 0 */
++	unsigned int		max_lifespan;	/* Maximum lifespan in secs to set if not 0 */
+ 	unsigned		request_size;	/* size of request data */
+ 	unsigned		reply_max;	/* maximum size of reply */
+ 	unsigned		count2;		/* count used in unmarshalling */
+diff --git a/fs/afs/rxrpc.c b/fs/afs/rxrpc.c
+index 7817e2b860e5e..6862e3dde364b 100644
+--- a/fs/afs/rxrpc.c
++++ b/fs/afs/rxrpc.c
+@@ -334,7 +334,9 @@ void afs_make_call(struct afs_addr_cursor *ac, struct afs_call *call, gfp_t gfp)
+ 	/* create a call */
+ 	rxcall = rxrpc_kernel_begin_call(call->net->socket, srx, call->key,
+ 					 (unsigned long)call,
+-					 tx_total_len, gfp,
++					 tx_total_len,
++					 call->max_lifespan,
++					 gfp,
+ 					 (call->async ?
+ 					  afs_wake_up_async_call :
+ 					  afs_wake_up_call_waiter),
+@@ -349,10 +351,6 @@ void afs_make_call(struct afs_addr_cursor *ac, struct afs_call *call, gfp_t gfp)
+ 	}
+ 
+ 	call->rxcall = rxcall;
+-
+-	if (call->max_lifespan)
+-		rxrpc_kernel_set_max_life(call->net->socket, rxcall,
+-					  call->max_lifespan);
+ 	call->issue_time = ktime_get_real();
+ 
+ 	/* send the request */
+diff --git a/fs/btrfs/backref.c b/fs/btrfs/backref.c
+index e54f0884802a0..79336fa853db3 100644
+--- a/fs/btrfs/backref.c
++++ b/fs/btrfs/backref.c
+@@ -45,7 +45,8 @@ static int check_extent_in_eb(struct btrfs_backref_walk_ctx *ctx,
+ 	int root_count;
+ 	bool cached;
+ 
+-	if (!btrfs_file_extent_compression(eb, fi) &&
++	if (!ctx->ignore_extent_item_pos &&
++	    !btrfs_file_extent_compression(eb, fi) &&
+ 	    !btrfs_file_extent_encryption(eb, fi) &&
+ 	    !btrfs_file_extent_other_encoding(eb, fi)) {
+ 		u64 data_offset;
+@@ -552,7 +553,7 @@ static int add_all_parents(struct btrfs_backref_walk_ctx *ctx,
+ 				count++;
+ 			else
+ 				goto next;
+-			if (!ctx->ignore_extent_item_pos) {
++			if (!ctx->skip_inode_ref_list) {
+ 				ret = check_extent_in_eb(ctx, &key, eb, fi, &eie);
+ 				if (ret == BTRFS_ITERATE_EXTENT_INODES_STOP ||
+ 				    ret < 0)
+@@ -564,7 +565,7 @@ static int add_all_parents(struct btrfs_backref_walk_ctx *ctx,
+ 						  eie, (void **)&old, GFP_NOFS);
+ 			if (ret < 0)
+ 				break;
+-			if (!ret && !ctx->ignore_extent_item_pos) {
++			if (!ret && !ctx->skip_inode_ref_list) {
+ 				while (old->next)
+ 					old = old->next;
+ 				old->next = eie;
+@@ -1606,7 +1607,7 @@ again:
+ 				goto out;
+ 		}
+ 		if (ref->count && ref->parent) {
+-			if (!ctx->ignore_extent_item_pos && !ref->inode_list &&
++			if (!ctx->skip_inode_ref_list && !ref->inode_list &&
+ 			    ref->level == 0) {
+ 				struct btrfs_tree_parent_check check = { 0 };
+ 				struct extent_buffer *eb;
+@@ -1647,7 +1648,7 @@ again:
+ 						  (void **)&eie, GFP_NOFS);
+ 			if (ret < 0)
+ 				goto out;
+-			if (!ret && !ctx->ignore_extent_item_pos) {
++			if (!ret && !ctx->skip_inode_ref_list) {
+ 				/*
+ 				 * We've recorded that parent, so we must extend
+ 				 * its inode list here.
+@@ -1743,7 +1744,7 @@ int btrfs_find_all_leafs(struct btrfs_backref_walk_ctx *ctx)
+ static int btrfs_find_all_roots_safe(struct btrfs_backref_walk_ctx *ctx)
+ {
+ 	const u64 orig_bytenr = ctx->bytenr;
+-	const bool orig_ignore_extent_item_pos = ctx->ignore_extent_item_pos;
++	const bool orig_skip_inode_ref_list = ctx->skip_inode_ref_list;
+ 	bool roots_ulist_allocated = false;
+ 	struct ulist_iterator uiter;
+ 	int ret = 0;
+@@ -1764,7 +1765,7 @@ static int btrfs_find_all_roots_safe(struct btrfs_backref_walk_ctx *ctx)
+ 		roots_ulist_allocated = true;
+ 	}
+ 
+-	ctx->ignore_extent_item_pos = true;
++	ctx->skip_inode_ref_list = true;
+ 
+ 	ULIST_ITER_INIT(&uiter);
+ 	while (1) {
+@@ -1789,7 +1790,7 @@ static int btrfs_find_all_roots_safe(struct btrfs_backref_walk_ctx *ctx)
+ 	ulist_free(ctx->refs);
+ 	ctx->refs = NULL;
+ 	ctx->bytenr = orig_bytenr;
+-	ctx->ignore_extent_item_pos = orig_ignore_extent_item_pos;
++	ctx->skip_inode_ref_list = orig_skip_inode_ref_list;
+ 
+ 	return ret;
+ }
+@@ -1912,7 +1913,7 @@ int btrfs_is_data_extent_shared(struct btrfs_inode *inode, u64 bytenr,
+ 		goto out_trans;
+ 	}
+ 
+-	walk_ctx.ignore_extent_item_pos = true;
++	walk_ctx.skip_inode_ref_list = true;
+ 	walk_ctx.trans = trans;
+ 	walk_ctx.fs_info = fs_info;
+ 	walk_ctx.refs = &ctx->refs;
+diff --git a/fs/btrfs/backref.h b/fs/btrfs/backref.h
+index ef6bbea3f4562..1616e3e3f1e41 100644
+--- a/fs/btrfs/backref.h
++++ b/fs/btrfs/backref.h
+@@ -60,6 +60,12 @@ struct btrfs_backref_walk_ctx {
+ 	 * @extent_item_pos is ignored.
+ 	 */
+ 	bool ignore_extent_item_pos;
++	/*
++	 * If true and bytenr corresponds to a data extent, then the inode list
++	 * (each member describing inode number, file offset and root) is not
++	 * added to each reference added to the @refs ulist.
++	 */
++	bool skip_inode_ref_list;
+ 	/* A valid transaction handle or NULL. */
+ 	struct btrfs_trans_handle *trans;
+ 	/*
+diff --git a/fs/btrfs/block-rsv.c b/fs/btrfs/block-rsv.c
+index 5367a14d44d2a..4bb4a48758723 100644
+--- a/fs/btrfs/block-rsv.c
++++ b/fs/btrfs/block-rsv.c
+@@ -124,7 +124,8 @@ static u64 block_rsv_release_bytes(struct btrfs_fs_info *fs_info,
+ 	} else {
+ 		num_bytes = 0;
+ 	}
+-	if (block_rsv->qgroup_rsv_reserved >= block_rsv->qgroup_rsv_size) {
++	if (qgroup_to_release_ret &&
++	    block_rsv->qgroup_rsv_reserved >= block_rsv->qgroup_rsv_size) {
+ 		qgroup_to_release = block_rsv->qgroup_rsv_reserved -
+ 				    block_rsv->qgroup_rsv_size;
+ 		block_rsv->qgroup_rsv_reserved = block_rsv->qgroup_rsv_size;
+diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
+index a5b6bb54545f6..26bb10b6ca85d 100644
+--- a/fs/btrfs/ctree.c
++++ b/fs/btrfs/ctree.c
+@@ -4489,10 +4489,12 @@ int btrfs_del_items(struct btrfs_trans_handle *trans, struct btrfs_root *root,
+ int btrfs_prev_leaf(struct btrfs_root *root, struct btrfs_path *path)
+ {
+ 	struct btrfs_key key;
++	struct btrfs_key orig_key;
+ 	struct btrfs_disk_key found_key;
+ 	int ret;
+ 
+ 	btrfs_item_key_to_cpu(path->nodes[0], &key, 0);
++	orig_key = key;
+ 
+ 	if (key.offset > 0) {
+ 		key.offset--;
+@@ -4509,8 +4511,36 @@ int btrfs_prev_leaf(struct btrfs_root *root, struct btrfs_path *path)
+ 
+ 	btrfs_release_path(path);
+ 	ret = btrfs_search_slot(NULL, root, &key, path, 0, 0);
+-	if (ret < 0)
++	if (ret <= 0)
+ 		return ret;
++
++	/*
++	 * Previous key not found. Even if we were at slot 0 of the leaf we had
++	 * before releasing the path and calling btrfs_search_slot(), we now may
++	 * be in a slot pointing to the same original key - this can happen if
++	 * after we released the path, one of more items were moved from a
++	 * sibling leaf into the front of the leaf we had due to an insertion
++	 * (see push_leaf_right()).
++	 * If we hit this case and our slot is > 0 and just decrement the slot
++	 * so that the caller does not process the same key again, which may or
++	 * may not break the caller, depending on its logic.
++	 */
++	if (path->slots[0] < btrfs_header_nritems(path->nodes[0])) {
++		btrfs_item_key(path->nodes[0], &found_key, path->slots[0]);
++		ret = comp_keys(&found_key, &orig_key);
++		if (ret == 0) {
++			if (path->slots[0] > 0) {
++				path->slots[0]--;
++				return 0;
++			}
++			/*
++			 * At slot 0, same key as before, it means orig_key is
++			 * the lowest, leftmost, key in the tree. We're done.
++			 */
++			return 1;
++		}
++	}
++
+ 	btrfs_item_key(path->nodes[0], &found_key, 0);
+ 	ret = comp_keys(&found_key, &key);
+ 	/*
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index 9e1596bb208db..9892dae178b75 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -3123,23 +3123,34 @@ int btrfs_start_pre_rw_mount(struct btrfs_fs_info *fs_info)
+ {
+ 	int ret;
+ 	const bool cache_opt = btrfs_test_opt(fs_info, SPACE_CACHE);
+-	bool clear_free_space_tree = false;
++	bool rebuild_free_space_tree = false;
+ 
+ 	if (btrfs_test_opt(fs_info, CLEAR_CACHE) &&
+ 	    btrfs_fs_compat_ro(fs_info, FREE_SPACE_TREE)) {
+-		clear_free_space_tree = true;
++		rebuild_free_space_tree = true;
+ 	} else if (btrfs_fs_compat_ro(fs_info, FREE_SPACE_TREE) &&
+ 		   !btrfs_fs_compat_ro(fs_info, FREE_SPACE_TREE_VALID)) {
+ 		btrfs_warn(fs_info, "free space tree is invalid");
+-		clear_free_space_tree = true;
++		rebuild_free_space_tree = true;
+ 	}
+ 
+-	if (clear_free_space_tree) {
+-		btrfs_info(fs_info, "clearing free space tree");
+-		ret = btrfs_clear_free_space_tree(fs_info);
++	if (rebuild_free_space_tree) {
++		btrfs_info(fs_info, "rebuilding free space tree");
++		ret = btrfs_rebuild_free_space_tree(fs_info);
+ 		if (ret) {
+ 			btrfs_warn(fs_info,
+-				   "failed to clear free space tree: %d", ret);
++				   "failed to rebuild free space tree: %d", ret);
++			goto out;
++		}
++	}
++
++	if (btrfs_fs_compat_ro(fs_info, FREE_SPACE_TREE) &&
++	    !btrfs_test_opt(fs_info, FREE_SPACE_TREE)) {
++		btrfs_info(fs_info, "disabling free space tree");
++		ret = btrfs_delete_free_space_tree(fs_info);
++		if (ret) {
++			btrfs_warn(fs_info,
++				   "failed to disable free space tree: %d", ret);
+ 			goto out;
+ 		}
+ 	}
+diff --git a/fs/btrfs/file-item.c b/fs/btrfs/file-item.c
+index 41c77a1008535..a4584c629ba35 100644
+--- a/fs/btrfs/file-item.c
++++ b/fs/btrfs/file-item.c
+@@ -52,13 +52,13 @@ void btrfs_inode_safe_disk_i_size_write(struct btrfs_inode *inode, u64 new_i_siz
+ 	u64 start, end, i_size;
+ 	int ret;
+ 
++	spin_lock(&inode->lock);
+ 	i_size = new_i_size ?: i_size_read(&inode->vfs_inode);
+ 	if (btrfs_fs_incompat(fs_info, NO_HOLES)) {
+ 		inode->disk_i_size = i_size;
+-		return;
++		goto out_unlock;
+ 	}
+ 
+-	spin_lock(&inode->lock);
+ 	ret = find_contiguous_extent_bit(&inode->file_extent_tree, 0, &start,
+ 					 &end, EXTENT_DIRTY);
+ 	if (!ret && start == 0)
+@@ -66,6 +66,7 @@ void btrfs_inode_safe_disk_i_size_write(struct btrfs_inode *inode, u64 new_i_siz
+ 	else
+ 		i_size = 0;
+ 	inode->disk_i_size = i_size;
++out_unlock:
+ 	spin_unlock(&inode->lock);
+ }
+ 
+diff --git a/fs/btrfs/free-space-cache.c b/fs/btrfs/free-space-cache.c
+index d84cef89cdff5..cf98a3c054802 100644
+--- a/fs/btrfs/free-space-cache.c
++++ b/fs/btrfs/free-space-cache.c
+@@ -870,15 +870,16 @@ static int __load_free_space_cache(struct btrfs_root *root, struct inode *inode,
+ 			}
+ 			spin_lock(&ctl->tree_lock);
+ 			ret = link_free_space(ctl, e);
+-			ctl->total_bitmaps++;
+-			recalculate_thresholds(ctl);
+-			spin_unlock(&ctl->tree_lock);
+ 			if (ret) {
++				spin_unlock(&ctl->tree_lock);
+ 				btrfs_err(fs_info,
+ 					"Duplicate entries in free space cache, dumping");
+ 				kmem_cache_free(btrfs_free_space_cachep, e);
+ 				goto free_cache;
+ 			}
++			ctl->total_bitmaps++;
++			recalculate_thresholds(ctl);
++			spin_unlock(&ctl->tree_lock);
+ 			list_add_tail(&e->list, &bitmaps);
+ 		}
+ 
+diff --git a/fs/btrfs/free-space-tree.c b/fs/btrfs/free-space-tree.c
+index 4d155a48ec59d..b21da1446f2aa 100644
+--- a/fs/btrfs/free-space-tree.c
++++ b/fs/btrfs/free-space-tree.c
+@@ -1252,7 +1252,7 @@ out:
+ 	return ret;
+ }
+ 
+-int btrfs_clear_free_space_tree(struct btrfs_fs_info *fs_info)
++int btrfs_delete_free_space_tree(struct btrfs_fs_info *fs_info)
+ {
+ 	struct btrfs_trans_handle *trans;
+ 	struct btrfs_root *tree_root = fs_info->tree_root;
+@@ -1298,6 +1298,54 @@ abort:
+ 	return ret;
+ }
+ 
++int btrfs_rebuild_free_space_tree(struct btrfs_fs_info *fs_info)
++{
++	struct btrfs_trans_handle *trans;
++	struct btrfs_key key = {
++		.objectid = BTRFS_FREE_SPACE_TREE_OBJECTID,
++		.type = BTRFS_ROOT_ITEM_KEY,
++		.offset = 0,
++	};
++	struct btrfs_root *free_space_root = btrfs_global_root(fs_info, &key);
++	struct rb_node *node;
++	int ret;
++
++	trans = btrfs_start_transaction(free_space_root, 1);
++	if (IS_ERR(trans))
++		return PTR_ERR(trans);
++
++	set_bit(BTRFS_FS_CREATING_FREE_SPACE_TREE, &fs_info->flags);
++	set_bit(BTRFS_FS_FREE_SPACE_TREE_UNTRUSTED, &fs_info->flags);
++
++	ret = clear_free_space_tree(trans, free_space_root);
++	if (ret)
++		goto abort;
++
++	node = rb_first_cached(&fs_info->block_group_cache_tree);
++	while (node) {
++		struct btrfs_block_group *block_group;
++
++		block_group = rb_entry(node, struct btrfs_block_group,
++				       cache_node);
++		ret = populate_free_space_tree(trans, block_group);
++		if (ret)
++			goto abort;
++		node = rb_next(node);
++	}
++
++	btrfs_set_fs_compat_ro(fs_info, FREE_SPACE_TREE);
++	btrfs_set_fs_compat_ro(fs_info, FREE_SPACE_TREE_VALID);
++	clear_bit(BTRFS_FS_CREATING_FREE_SPACE_TREE, &fs_info->flags);
++
++	ret = btrfs_commit_transaction(trans);
++	clear_bit(BTRFS_FS_FREE_SPACE_TREE_UNTRUSTED, &fs_info->flags);
++	return ret;
++abort:
++	btrfs_abort_transaction(trans, ret);
++	btrfs_end_transaction(trans);
++	return ret;
++}
++
+ static int __add_block_group_free_space(struct btrfs_trans_handle *trans,
+ 					struct btrfs_block_group *block_group,
+ 					struct btrfs_path *path)
+diff --git a/fs/btrfs/free-space-tree.h b/fs/btrfs/free-space-tree.h
+index dc2463e4cfe3c..6d5551d0ced81 100644
+--- a/fs/btrfs/free-space-tree.h
++++ b/fs/btrfs/free-space-tree.h
+@@ -18,7 +18,8 @@ struct btrfs_caching_control;
+ 
+ void set_free_space_tree_thresholds(struct btrfs_block_group *block_group);
+ int btrfs_create_free_space_tree(struct btrfs_fs_info *fs_info);
+-int btrfs_clear_free_space_tree(struct btrfs_fs_info *fs_info);
++int btrfs_delete_free_space_tree(struct btrfs_fs_info *fs_info);
++int btrfs_rebuild_free_space_tree(struct btrfs_fs_info *fs_info);
+ int load_free_space_tree(struct btrfs_caching_control *caching_ctl);
+ int add_block_group_free_space(struct btrfs_trans_handle *trans,
+ 			       struct btrfs_block_group *block_group);
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 957e4d76a7b65..b31bb33524774 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -3167,6 +3167,9 @@ int btrfs_finish_ordered_io(struct btrfs_ordered_extent *ordered_extent)
+ 		btrfs_rewrite_logical_zoned(ordered_extent);
+ 		btrfs_zone_finish_endio(fs_info, ordered_extent->disk_bytenr,
+ 					ordered_extent->disk_num_bytes);
++	} else if (btrfs_is_data_reloc_root(inode->root)) {
++		btrfs_zone_finish_endio(fs_info, ordered_extent->disk_bytenr,
++					ordered_extent->disk_num_bytes);
+ 	}
+ 
+ 	if (test_bit(BTRFS_ORDERED_TRUNCATED, &ordered_extent->flags)) {
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index 25833b4eeaf57..2fa36f694daa5 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -454,7 +454,9 @@ void btrfs_exclop_balance(struct btrfs_fs_info *fs_info,
+ 	case BTRFS_EXCLOP_BALANCE_PAUSED:
+ 		spin_lock(&fs_info->super_lock);
+ 		ASSERT(fs_info->exclusive_operation == BTRFS_EXCLOP_BALANCE ||
+-		       fs_info->exclusive_operation == BTRFS_EXCLOP_DEV_ADD);
++		       fs_info->exclusive_operation == BTRFS_EXCLOP_DEV_ADD ||
++		       fs_info->exclusive_operation == BTRFS_EXCLOP_NONE ||
++		       fs_info->exclusive_operation == BTRFS_EXCLOP_BALANCE_PAUSED);
+ 		fs_info->exclusive_operation = BTRFS_EXCLOP_BALANCE_PAUSED;
+ 		spin_unlock(&fs_info->super_lock);
+ 		break;
+diff --git a/fs/btrfs/print-tree.c b/fs/btrfs/print-tree.c
+index b93c962133048..497b9dbd8a133 100644
+--- a/fs/btrfs/print-tree.c
++++ b/fs/btrfs/print-tree.c
+@@ -151,10 +151,10 @@ static void print_extent_item(struct extent_buffer *eb, int slot, int type)
+ 			pr_cont("shared data backref parent %llu count %u\n",
+ 			       offset, btrfs_shared_data_ref_count(eb, sref));
+ 			/*
+-			 * offset is supposed to be a tree block which
+-			 * must be aligned to nodesize.
++			 * Offset is supposed to be a tree block which must be
++			 * aligned to sectorsize.
+ 			 */
+-			if (!IS_ALIGNED(offset, eb->fs_info->nodesize))
++			if (!IS_ALIGNED(offset, eb->fs_info->sectorsize))
+ 				pr_info(
+ 			"\t\t\t(parent %llu not aligned to sectorsize %u)\n",
+ 				     offset, eb->fs_info->sectorsize);
+diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
+index ef13a9d4e370f..c10670915f962 100644
+--- a/fs/btrfs/relocation.c
++++ b/fs/btrfs/relocation.c
+@@ -3422,7 +3422,7 @@ int add_data_references(struct reloc_control *rc,
+ 	btrfs_release_path(path);
+ 
+ 	ctx.bytenr = extent_key->objectid;
+-	ctx.ignore_extent_item_pos = true;
++	ctx.skip_inode_ref_list = true;
+ 	ctx.fs_info = rc->extent_root->fs_info;
+ 
+ 	ret = btrfs_find_all_leafs(&ctx);
+diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
+index 366fb4cde1458..e19792d919c66 100644
+--- a/fs/btrfs/super.c
++++ b/fs/btrfs/super.c
+@@ -826,7 +826,11 @@ out:
+ 	    !btrfs_test_opt(info, CLEAR_CACHE)) {
+ 		btrfs_err(info, "cannot disable free space tree");
+ 		ret = -EINVAL;
+-
++	}
++	if (btrfs_fs_compat_ro(info, BLOCK_GROUP_TREE) &&
++	     !btrfs_test_opt(info, FREE_SPACE_TREE)) {
++		btrfs_err(info, "cannot disable free space tree with block-group-tree feature");
++		ret = -EINVAL;
+ 	}
+ 	if (!ret)
+ 		ret = btrfs_check_mountopts_zoned(info);
+diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
+index 45d04092f2f8c..8db47f93007fd 100644
+--- a/fs/btrfs/zoned.c
++++ b/fs/btrfs/zoned.c
+@@ -122,10 +122,9 @@ static int sb_write_pointer(struct block_device *bdev, struct blk_zone *zones,
+ 		int i;
+ 
+ 		for (i = 0; i < BTRFS_NR_SB_LOG_ZONES; i++) {
+-			u64 bytenr;
+-
+-			bytenr = ((zones[i].start + zones[i].len)
+-				   << SECTOR_SHIFT) - BTRFS_SUPER_INFO_SIZE;
++			u64 zone_end = (zones[i].start + zones[i].capacity) << SECTOR_SHIFT;
++			u64 bytenr = ALIGN_DOWN(zone_end, BTRFS_SUPER_INFO_SIZE) -
++						BTRFS_SUPER_INFO_SIZE;
+ 
+ 			page[i] = read_cache_page_gfp(mapping,
+ 					bytenr >> PAGE_SHIFT, GFP_NOFS);
+@@ -1168,12 +1167,12 @@ int btrfs_ensure_empty_zones(struct btrfs_device *device, u64 start, u64 size)
+ 		return -ERANGE;
+ 
+ 	/* All the zones are conventional */
+-	if (find_next_bit(zinfo->seq_zones, begin, end) == end)
++	if (find_next_bit(zinfo->seq_zones, end, begin) == end)
+ 		return 0;
+ 
+ 	/* All the zones are sequential and empty */
+-	if (find_next_zero_bit(zinfo->seq_zones, begin, end) == end &&
+-	    find_next_zero_bit(zinfo->empty_zones, begin, end) == end)
++	if (find_next_zero_bit(zinfo->seq_zones, end, begin) == end &&
++	    find_next_zero_bit(zinfo->empty_zones, end, begin) == end)
+ 		return 0;
+ 
+ 	for (pos = start; pos < start + size; pos += zinfo->zone_size) {
+@@ -1610,11 +1609,11 @@ void btrfs_redirty_list_add(struct btrfs_transaction *trans,
+ 	    !list_empty(&eb->release_list))
+ 		return;
+ 
++	memzero_extent_buffer(eb, 0, eb->len);
++	set_bit(EXTENT_BUFFER_NO_CHECK, &eb->bflags);
+ 	set_extent_buffer_dirty(eb);
+ 	set_extent_bits_nowait(&trans->dirty_pages, eb->start,
+ 			       eb->start + eb->len - 1, EXTENT_DIRTY);
+-	memzero_extent_buffer(eb, 0, eb->len);
+-	set_bit(EXTENT_BUFFER_NO_CHECK, &eb->bflags);
+ 
+ 	spin_lock(&trans->releasing_ebs_lock);
+ 	list_add_tail(&eb->release_list, &trans->releasing_ebs);
+diff --git a/fs/cifs/cifsfs.c b/fs/cifs/cifsfs.c
+index ac9034fce409d..b30f8f768ac45 100644
+--- a/fs/cifs/cifsfs.c
++++ b/fs/cifs/cifsfs.c
+@@ -744,6 +744,7 @@ static void cifs_umount_begin(struct super_block *sb)
+ 	spin_unlock(&tcon->tc_lock);
+ 	spin_unlock(&cifs_tcp_ses_lock);
+ 
++	cifs_close_all_deferred_files(tcon);
+ 	/* cancel_brl_requests(tcon); */ /* BB mark all brl mids as exiting */
+ 	/* cancel_notify_requests(tcon); */
+ 	if (tcon->ses && tcon->ses->server) {
+@@ -759,6 +760,20 @@ static void cifs_umount_begin(struct super_block *sb)
+ 	return;
+ }
+ 
++static int cifs_freeze(struct super_block *sb)
++{
++	struct cifs_sb_info *cifs_sb = CIFS_SB(sb);
++	struct cifs_tcon *tcon;
++
++	if (cifs_sb == NULL)
++		return 0;
++
++	tcon = cifs_sb_master_tcon(cifs_sb);
++
++	cifs_close_all_deferred_files(tcon);
++	return 0;
++}
++
+ #ifdef CONFIG_CIFS_STATS2
+ static int cifs_show_stats(struct seq_file *s, struct dentry *root)
+ {
+@@ -797,6 +812,7 @@ static const struct super_operations cifs_super_ops = {
+ 	as opens */
+ 	.show_options = cifs_show_options,
+ 	.umount_begin   = cifs_umount_begin,
++	.freeze_fs      = cifs_freeze,
+ #ifdef CONFIG_CIFS_STATS2
+ 	.show_stats = cifs_show_stats,
+ #endif
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index 2c573062ec874..59a10330e299b 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -2705,6 +2705,13 @@ cifs_match_super(struct super_block *sb, void *data)
+ 
+ 	spin_lock(&cifs_tcp_ses_lock);
+ 	cifs_sb = CIFS_SB(sb);
++
++	/* We do not want to use a superblock that has been shutdown */
++	if (CIFS_MOUNT_SHUTDOWN & cifs_sb->mnt_cifs_flags) {
++		spin_unlock(&cifs_tcp_ses_lock);
++		return 0;
++	}
++
+ 	tlink = cifs_get_tlink(cifs_sb_master_tlink(cifs_sb));
+ 	if (tlink == NULL) {
+ 		/* can not match superblock if tlink were ever null */
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index a81758225fcdc..a295e4c2d54e3 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -1682,7 +1682,7 @@ smb2_copychunk_range(const unsigned int xid,
+ 		pcchunk->SourceOffset = cpu_to_le64(src_off);
+ 		pcchunk->TargetOffset = cpu_to_le64(dest_off);
+ 		pcchunk->Length =
+-			cpu_to_le32(min_t(u32, len, tcon->max_bytes_chunk));
++			cpu_to_le32(min_t(u64, len, tcon->max_bytes_chunk));
+ 
+ 		/* Request server copy to target from src identified by key */
+ 		kfree(retbuf);
+diff --git a/fs/ext4/balloc.c b/fs/ext4/balloc.c
+index 8ff4b9192a9f5..f2c415f31b755 100644
+--- a/fs/ext4/balloc.c
++++ b/fs/ext4/balloc.c
+@@ -303,6 +303,22 @@ struct ext4_group_desc * ext4_get_group_desc(struct super_block *sb,
+ 	return desc;
+ }
+ 
++static ext4_fsblk_t ext4_valid_block_bitmap_padding(struct super_block *sb,
++						    ext4_group_t block_group,
++						    struct buffer_head *bh)
++{
++	ext4_grpblk_t next_zero_bit;
++	unsigned long bitmap_size = sb->s_blocksize * 8;
++	unsigned int offset = num_clusters_in_group(sb, block_group);
++
++	if (bitmap_size <= offset)
++		return 0;
++
++	next_zero_bit = ext4_find_next_zero_bit(bh->b_data, bitmap_size, offset);
++
++	return (next_zero_bit < bitmap_size ? next_zero_bit : 0);
++}
++
+ /*
+  * Return the block number which was discovered to be invalid, or 0 if
+  * the block bitmap is valid.
+@@ -401,6 +417,15 @@ static int ext4_validate_block_bitmap(struct super_block *sb,
+ 					EXT4_GROUP_INFO_BBITMAP_CORRUPT);
+ 		return -EFSCORRUPTED;
+ 	}
++	blk = ext4_valid_block_bitmap_padding(sb, block_group, bh);
++	if (unlikely(blk != 0)) {
++		ext4_unlock_group(sb, block_group);
++		ext4_error(sb, "bg %u: block %llu: padding at end of block bitmap is not set",
++			   block_group, blk);
++		ext4_mark_group_bitmap_corrupted(sb, block_group,
++						 EXT4_GROUP_INFO_BBITMAP_CORRUPT);
++		return -EFSCORRUPTED;
++	}
+ 	set_buffer_verified(bh);
+ verified:
+ 	ext4_unlock_group(sb, block_group);
+diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
+index 08b29c289da4d..df0255b7d1faa 100644
+--- a/fs/ext4/ext4.h
++++ b/fs/ext4/ext4.h
+@@ -1774,6 +1774,30 @@ static inline struct ext4_inode_info *EXT4_I(struct inode *inode)
+ 	return container_of(inode, struct ext4_inode_info, vfs_inode);
+ }
+ 
++static inline int ext4_writepages_down_read(struct super_block *sb)
++{
++	percpu_down_read(&EXT4_SB(sb)->s_writepages_rwsem);
++	return memalloc_nofs_save();
++}
++
++static inline void ext4_writepages_up_read(struct super_block *sb, int ctx)
++{
++	memalloc_nofs_restore(ctx);
++	percpu_up_read(&EXT4_SB(sb)->s_writepages_rwsem);
++}
++
++static inline int ext4_writepages_down_write(struct super_block *sb)
++{
++	percpu_down_write(&EXT4_SB(sb)->s_writepages_rwsem);
++	return memalloc_nofs_save();
++}
++
++static inline void ext4_writepages_up_write(struct super_block *sb, int ctx)
++{
++	memalloc_nofs_restore(ctx);
++	percpu_up_write(&EXT4_SB(sb)->s_writepages_rwsem);
++}
++
+ static inline int ext4_valid_inum(struct super_block *sb, unsigned long ino)
+ {
+ 	return ino == EXT4_ROOT_INO ||
+diff --git a/fs/ext4/extents_status.c b/fs/ext4/extents_status.c
+index 7bc221038c6c1..595abb9e7d74b 100644
+--- a/fs/ext4/extents_status.c
++++ b/fs/ext4/extents_status.c
+@@ -267,14 +267,12 @@ static void __es_find_extent_range(struct inode *inode,
+ 
+ 	/* see if the extent has been cached */
+ 	es->es_lblk = es->es_len = es->es_pblk = 0;
+-	if (tree->cache_es) {
+-		es1 = tree->cache_es;
+-		if (in_range(lblk, es1->es_lblk, es1->es_len)) {
+-			es_debug("%u cached by [%u/%u) %llu %x\n",
+-				 lblk, es1->es_lblk, es1->es_len,
+-				 ext4_es_pblock(es1), ext4_es_status(es1));
+-			goto out;
+-		}
++	es1 = READ_ONCE(tree->cache_es);
++	if (es1 && in_range(lblk, es1->es_lblk, es1->es_len)) {
++		es_debug("%u cached by [%u/%u) %llu %x\n",
++			 lblk, es1->es_lblk, es1->es_len,
++			 ext4_es_pblock(es1), ext4_es_status(es1));
++		goto out;
+ 	}
+ 
+ 	es1 = __es_tree_search(&tree->root, lblk);
+@@ -293,7 +291,7 @@ out:
+ 	}
+ 
+ 	if (es1 && matching_fn(es1)) {
+-		tree->cache_es = es1;
++		WRITE_ONCE(tree->cache_es, es1);
+ 		es->es_lblk = es1->es_lblk;
+ 		es->es_len = es1->es_len;
+ 		es->es_pblk = es1->es_pblk;
+@@ -931,14 +929,12 @@ int ext4_es_lookup_extent(struct inode *inode, ext4_lblk_t lblk,
+ 
+ 	/* find extent in cache firstly */
+ 	es->es_lblk = es->es_len = es->es_pblk = 0;
+-	if (tree->cache_es) {
+-		es1 = tree->cache_es;
+-		if (in_range(lblk, es1->es_lblk, es1->es_len)) {
+-			es_debug("%u cached by [%u/%u)\n",
+-				 lblk, es1->es_lblk, es1->es_len);
+-			found = 1;
+-			goto out;
+-		}
++	es1 = READ_ONCE(tree->cache_es);
++	if (es1 && in_range(lblk, es1->es_lblk, es1->es_len)) {
++		es_debug("%u cached by [%u/%u)\n",
++			 lblk, es1->es_lblk, es1->es_len);
++		found = 1;
++		goto out;
+ 	}
+ 
+ 	node = tree->root.rb_node;
+diff --git a/fs/ext4/hash.c b/fs/ext4/hash.c
+index 147b5241dd94f..46c3423ddfa17 100644
+--- a/fs/ext4/hash.c
++++ b/fs/ext4/hash.c
+@@ -277,7 +277,11 @@ static int __ext4fs_dirhash(const struct inode *dir, const char *name, int len,
+ 	}
+ 	default:
+ 		hinfo->hash = 0;
+-		return -1;
++		hinfo->minor_hash = 0;
++		ext4_warning(dir->i_sb,
++			     "invalid/unsupported hash tree version %u",
++			     hinfo->hash_version);
++		return -EINVAL;
+ 	}
+ 	hash = hash & ~1;
+ 	if (hash == (EXT4_HTREE_EOF_32BIT << 1))
+diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c
+index 1602d74b5eeb3..cb36037f20fc8 100644
+--- a/fs/ext4/inline.c
++++ b/fs/ext4/inline.c
+@@ -34,6 +34,7 @@ static int get_max_inline_xattr_value_size(struct inode *inode,
+ 	struct ext4_xattr_ibody_header *header;
+ 	struct ext4_xattr_entry *entry;
+ 	struct ext4_inode *raw_inode;
++	void *end;
+ 	int free, min_offs;
+ 
+ 	if (!EXT4_INODE_HAS_XATTR_SPACE(inode))
+@@ -57,14 +58,23 @@ static int get_max_inline_xattr_value_size(struct inode *inode,
+ 	raw_inode = ext4_raw_inode(iloc);
+ 	header = IHDR(inode, raw_inode);
+ 	entry = IFIRST(header);
++	end = (void *)raw_inode + EXT4_SB(inode->i_sb)->s_inode_size;
+ 
+ 	/* Compute min_offs. */
+-	for (; !IS_LAST_ENTRY(entry); entry = EXT4_XATTR_NEXT(entry)) {
++	while (!IS_LAST_ENTRY(entry)) {
++		void *next = EXT4_XATTR_NEXT(entry);
++
++		if (next >= end) {
++			EXT4_ERROR_INODE(inode,
++					 "corrupt xattr in inline inode");
++			return 0;
++		}
+ 		if (!entry->e_value_inum && entry->e_value_size) {
+ 			size_t offs = le16_to_cpu(entry->e_value_offs);
+ 			if (offs < min_offs)
+ 				min_offs = offs;
+ 		}
++		entry = next;
+ 	}
+ 	free = min_offs -
+ 		((void *)entry - (void *)IFIRST(header)) - sizeof(__u32);
+@@ -350,7 +360,7 @@ static int ext4_update_inline_data(handle_t *handle, struct inode *inode,
+ 
+ 	error = ext4_xattr_ibody_get(inode, i.name_index, i.name,
+ 				     value, len);
+-	if (error == -ENODATA)
++	if (error < 0)
+ 		goto out;
+ 
+ 	BUFFER_TRACE(is.iloc.bh, "get_write_access");
+@@ -1177,6 +1187,7 @@ static int ext4_finish_convert_inline_dir(handle_t *handle,
+ 		ext4_initialize_dirent_tail(dir_block,
+ 					    inode->i_sb->s_blocksize);
+ 	set_buffer_uptodate(dir_block);
++	unlock_buffer(dir_block);
+ 	err = ext4_handle_dirty_dirblock(handle, inode, dir_block);
+ 	if (err)
+ 		return err;
+@@ -1251,6 +1262,7 @@ static int ext4_convert_inline_data_nolock(handle_t *handle,
+ 	if (!S_ISDIR(inode->i_mode)) {
+ 		memcpy(data_bh->b_data, buf, inline_size);
+ 		set_buffer_uptodate(data_bh);
++		unlock_buffer(data_bh);
+ 		error = ext4_handle_dirty_metadata(handle,
+ 						   inode, data_bh);
+ 	} else {
+@@ -1258,7 +1270,6 @@ static int ext4_convert_inline_data_nolock(handle_t *handle,
+ 						       buf, inline_size);
+ 	}
+ 
+-	unlock_buffer(data_bh);
+ out_restore:
+ 	if (error)
+ 		ext4_restore_inline_data(handle, inode, iloc, buf, inline_size);
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 41ba1c4328449..145ea24d589b8 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -2956,13 +2956,14 @@ static int ext4_writepages(struct address_space *mapping,
+ 		.can_map = 1,
+ 	};
+ 	int ret;
++	int alloc_ctx;
+ 
+ 	if (unlikely(ext4_forced_shutdown(EXT4_SB(sb))))
+ 		return -EIO;
+ 
+-	percpu_down_read(&EXT4_SB(sb)->s_writepages_rwsem);
++	alloc_ctx = ext4_writepages_down_read(sb);
+ 	ret = ext4_do_writepages(&mpd);
+-	percpu_up_read(&EXT4_SB(sb)->s_writepages_rwsem);
++	ext4_writepages_up_read(sb, alloc_ctx);
+ 
+ 	return ret;
+ }
+@@ -2990,17 +2991,18 @@ static int ext4_dax_writepages(struct address_space *mapping,
+ 	long nr_to_write = wbc->nr_to_write;
+ 	struct inode *inode = mapping->host;
+ 	struct ext4_sb_info *sbi = EXT4_SB(mapping->host->i_sb);
++	int alloc_ctx;
+ 
+ 	if (unlikely(ext4_forced_shutdown(EXT4_SB(inode->i_sb))))
+ 		return -EIO;
+ 
+-	percpu_down_read(&sbi->s_writepages_rwsem);
++	alloc_ctx = ext4_writepages_down_read(inode->i_sb);
+ 	trace_ext4_writepages(inode, wbc);
+ 
+ 	ret = dax_writeback_mapping_range(mapping, sbi->s_daxdev, wbc);
+ 	trace_ext4_writepages_result(inode, wbc, ret,
+ 				     nr_to_write - wbc->nr_to_write);
+-	percpu_up_read(&sbi->s_writepages_rwsem);
++	ext4_writepages_up_read(inode->i_sb, alloc_ctx);
+ 	return ret;
+ }
+ 
+@@ -3574,7 +3576,7 @@ static int ext4_iomap_overwrite_begin(struct inode *inode, loff_t offset,
+ 	 */
+ 	flags &= ~IOMAP_WRITE;
+ 	ret = ext4_iomap_begin(inode, offset, length, flags, iomap, srcmap);
+-	WARN_ON_ONCE(iomap->type != IOMAP_MAPPED);
++	WARN_ON_ONCE(!ret && iomap->type != IOMAP_MAPPED);
+ 	return ret;
+ }
+ 
+@@ -6122,7 +6124,7 @@ int ext4_change_inode_journal_flag(struct inode *inode, int val)
+ 	journal_t *journal;
+ 	handle_t *handle;
+ 	int err;
+-	struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
++	int alloc_ctx;
+ 
+ 	/*
+ 	 * We have to be very careful here: changing a data block's
+@@ -6160,7 +6162,7 @@ int ext4_change_inode_journal_flag(struct inode *inode, int val)
+ 		}
+ 	}
+ 
+-	percpu_down_write(&sbi->s_writepages_rwsem);
++	alloc_ctx = ext4_writepages_down_write(inode->i_sb);
+ 	jbd2_journal_lock_updates(journal);
+ 
+ 	/*
+@@ -6177,7 +6179,7 @@ int ext4_change_inode_journal_flag(struct inode *inode, int val)
+ 		err = jbd2_journal_flush(journal, 0);
+ 		if (err < 0) {
+ 			jbd2_journal_unlock_updates(journal);
+-			percpu_up_write(&sbi->s_writepages_rwsem);
++			ext4_writepages_up_write(inode->i_sb, alloc_ctx);
+ 			return err;
+ 		}
+ 		ext4_clear_inode_flag(inode, EXT4_INODE_JOURNAL_DATA);
+@@ -6185,7 +6187,7 @@ int ext4_change_inode_journal_flag(struct inode *inode, int val)
+ 	ext4_set_aops(inode);
+ 
+ 	jbd2_journal_unlock_updates(journal);
+-	percpu_up_write(&sbi->s_writepages_rwsem);
++	ext4_writepages_up_write(inode->i_sb, alloc_ctx);
+ 
+ 	if (val)
+ 		filemap_invalidate_unlock(inode->i_mapping);
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index 5b2ae37a8b80b..5639a4cf7ff98 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -4820,7 +4820,11 @@ ext4_mb_release_group_pa(struct ext4_buddy *e4b,
+ 	trace_ext4_mb_release_group_pa(sb, pa);
+ 	BUG_ON(pa->pa_deleted == 0);
+ 	ext4_get_group_no_and_offset(sb, pa->pa_pstart, &group, &bit);
+-	BUG_ON(group != e4b->bd_group && pa->pa_len != 0);
++	if (unlikely(group != e4b->bd_group && pa->pa_len != 0)) {
++		ext4_warning(sb, "bad group: expected %u, group %u, pa_start %llu",
++			     e4b->bd_group, group, pa->pa_pstart);
++		return 0;
++	}
+ 	mb_free_blocks(pa->pa_inode, e4b, bit, pa->pa_len);
+ 	atomic_add(pa->pa_len, &EXT4_SB(sb)->s_mb_discarded);
+ 	trace_ext4_mballoc_discard(sb, NULL, group, bit, pa->pa_len);
+diff --git a/fs/ext4/migrate.c b/fs/ext4/migrate.c
+index a19a9661646eb..d98ac2af8199f 100644
+--- a/fs/ext4/migrate.c
++++ b/fs/ext4/migrate.c
+@@ -408,7 +408,6 @@ static int free_ext_block(handle_t *handle, struct inode *inode)
+ 
+ int ext4_ext_migrate(struct inode *inode)
+ {
+-	struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
+ 	handle_t *handle;
+ 	int retval = 0, i;
+ 	__le32 *i_data;
+@@ -418,6 +417,7 @@ int ext4_ext_migrate(struct inode *inode)
+ 	unsigned long max_entries;
+ 	__u32 goal, tmp_csum_seed;
+ 	uid_t owner[2];
++	int alloc_ctx;
+ 
+ 	/*
+ 	 * If the filesystem does not support extents, or the inode
+@@ -434,7 +434,7 @@ int ext4_ext_migrate(struct inode *inode)
+ 		 */
+ 		return retval;
+ 
+-	percpu_down_write(&sbi->s_writepages_rwsem);
++	alloc_ctx = ext4_writepages_down_write(inode->i_sb);
+ 
+ 	/*
+ 	 * Worst case we can touch the allocation bitmaps and a block
+@@ -586,7 +586,7 @@ out_tmp_inode:
+ 	unlock_new_inode(tmp_inode);
+ 	iput(tmp_inode);
+ out_unlock:
+-	percpu_up_write(&sbi->s_writepages_rwsem);
++	ext4_writepages_up_write(inode->i_sb, alloc_ctx);
+ 	return retval;
+ }
+ 
+@@ -605,6 +605,7 @@ int ext4_ind_migrate(struct inode *inode)
+ 	ext4_fsblk_t			blk;
+ 	handle_t			*handle;
+ 	int				ret, ret2 = 0;
++	int				alloc_ctx;
+ 
+ 	if (!ext4_has_feature_extents(inode->i_sb) ||
+ 	    (!ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS)))
+@@ -621,7 +622,7 @@ int ext4_ind_migrate(struct inode *inode)
+ 	if (test_opt(inode->i_sb, DELALLOC))
+ 		ext4_alloc_da_blocks(inode);
+ 
+-	percpu_down_write(&sbi->s_writepages_rwsem);
++	alloc_ctx = ext4_writepages_down_write(inode->i_sb);
+ 
+ 	handle = ext4_journal_start(inode, EXT4_HT_MIGRATE, 1);
+ 	if (IS_ERR(handle)) {
+@@ -665,6 +666,6 @@ errout:
+ 	ext4_journal_stop(handle);
+ 	up_write(&EXT4_I(inode)->i_data_sem);
+ out_unlock:
+-	percpu_up_write(&sbi->s_writepages_rwsem);
++	ext4_writepages_up_write(inode->i_sb, alloc_ctx);
+ 	return ret;
+ }
+diff --git a/fs/ext4/mmp.c b/fs/ext4/mmp.c
+index 4681fff6665fe..46735ce315b5a 100644
+--- a/fs/ext4/mmp.c
++++ b/fs/ext4/mmp.c
+@@ -39,28 +39,36 @@ static void ext4_mmp_csum_set(struct super_block *sb, struct mmp_struct *mmp)
+  * Write the MMP block using REQ_SYNC to try to get the block on-disk
+  * faster.
+  */
+-static int write_mmp_block(struct super_block *sb, struct buffer_head *bh)
++static int write_mmp_block_thawed(struct super_block *sb,
++				  struct buffer_head *bh)
+ {
+ 	struct mmp_struct *mmp = (struct mmp_struct *)(bh->b_data);
+ 
+-	/*
+-	 * We protect against freezing so that we don't create dirty buffers
+-	 * on frozen filesystem.
+-	 */
+-	sb_start_write(sb);
+ 	ext4_mmp_csum_set(sb, mmp);
+ 	lock_buffer(bh);
+ 	bh->b_end_io = end_buffer_write_sync;
+ 	get_bh(bh);
+ 	submit_bh(REQ_OP_WRITE | REQ_SYNC | REQ_META | REQ_PRIO, bh);
+ 	wait_on_buffer(bh);
+-	sb_end_write(sb);
+ 	if (unlikely(!buffer_uptodate(bh)))
+ 		return -EIO;
+-
+ 	return 0;
+ }
+ 
++static int write_mmp_block(struct super_block *sb, struct buffer_head *bh)
++{
++	int err;
++
++	/*
++	 * We protect against freezing so that we don't create dirty buffers
++	 * on frozen filesystem.
++	 */
++	sb_start_write(sb);
++	err = write_mmp_block_thawed(sb, bh);
++	sb_end_write(sb);
++	return err;
++}
++
+ /*
+  * Read the MMP block. It _must_ be read from disk and hence we clear the
+  * uptodate flag on the buffer.
+@@ -340,7 +348,11 @@ skip:
+ 	seq = mmp_new_seq();
+ 	mmp->mmp_seq = cpu_to_le32(seq);
+ 
+-	retval = write_mmp_block(sb, bh);
++	/*
++	 * On mount / remount we are protected against fs freezing (by s_umount
++	 * semaphore) and grabbing freeze protection upsets lockdep
++	 */
++	retval = write_mmp_block_thawed(sb, bh);
+ 	if (retval)
+ 		goto failed;
+ 
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index a5010b5b8a8c1..45b579805c954 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -674,7 +674,7 @@ static struct stats dx_show_leaf(struct inode *dir,
+ 				len = de->name_len;
+ 				if (!IS_ENCRYPTED(dir)) {
+ 					/* Directory is not encrypted */
+-					ext4fs_dirhash(dir, de->name,
++					(void) ext4fs_dirhash(dir, de->name,
+ 						de->name_len, &h);
+ 					printk("%*.s:(U)%x.%u ", len,
+ 					       name, h.hash,
+@@ -709,8 +709,9 @@ static struct stats dx_show_leaf(struct inode *dir,
+ 					if (IS_CASEFOLDED(dir))
+ 						h.hash = EXT4_DIRENT_HASH(de);
+ 					else
+-						ext4fs_dirhash(dir, de->name,
+-						       de->name_len, &h);
++						(void) ext4fs_dirhash(dir,
++							de->name,
++							de->name_len, &h);
+ 					printk("%*.s:(E)%x.%u ", len, name,
+ 					       h.hash, (unsigned) ((char *) de
+ 								   - base));
+@@ -720,7 +721,8 @@ static struct stats dx_show_leaf(struct inode *dir,
+ #else
+ 				int len = de->name_len;
+ 				char *name = de->name;
+-				ext4fs_dirhash(dir, de->name, de->name_len, &h);
++				(void) ext4fs_dirhash(dir, de->name,
++						      de->name_len, &h);
+ 				printk("%*.s:%x.%u ", len, name, h.hash,
+ 				       (unsigned) ((char *) de - base));
+ #endif
+@@ -849,8 +851,14 @@ dx_probe(struct ext4_filename *fname, struct inode *dir,
+ 	hinfo->seed = EXT4_SB(dir->i_sb)->s_hash_seed;
+ 	/* hash is already computed for encrypted casefolded directory */
+ 	if (fname && fname_name(fname) &&
+-				!(IS_ENCRYPTED(dir) && IS_CASEFOLDED(dir)))
+-		ext4fs_dirhash(dir, fname_name(fname), fname_len(fname), hinfo);
++	    !(IS_ENCRYPTED(dir) && IS_CASEFOLDED(dir))) {
++		int ret = ext4fs_dirhash(dir, fname_name(fname),
++					 fname_len(fname), hinfo);
++		if (ret < 0) {
++			ret_err = ERR_PTR(ret);
++			goto fail;
++		}
++	}
+ 	hash = hinfo->hash;
+ 
+ 	if (root->info.unused_flags & 1) {
+@@ -1111,7 +1119,12 @@ static int htree_dirblock_to_tree(struct file *dir_file,
+ 				hinfo->minor_hash = 0;
+ 			}
+ 		} else {
+-			ext4fs_dirhash(dir, de->name, de->name_len, hinfo);
++			err = ext4fs_dirhash(dir, de->name,
++					     de->name_len, hinfo);
++			if (err < 0) {
++				count = err;
++				goto errout;
++			}
+ 		}
+ 		if ((hinfo->hash < start_hash) ||
+ 		    ((hinfo->hash == start_hash) &&
+@@ -1313,8 +1326,12 @@ static int dx_make_map(struct inode *dir, struct buffer_head *bh,
+ 		if (de->name_len && de->inode) {
+ 			if (ext4_hash_in_dirent(dir))
+ 				h.hash = EXT4_DIRENT_HASH(de);
+-			else
+-				ext4fs_dirhash(dir, de->name, de->name_len, &h);
++			else {
++				int err = ext4fs_dirhash(dir, de->name,
++						     de->name_len, &h);
++				if (err < 0)
++					return err;
++			}
+ 			map_tail--;
+ 			map_tail->hash = h.hash;
+ 			map_tail->offs = ((char *) de - base)>>2;
+@@ -1452,10 +1469,9 @@ int ext4_fname_setup_ci_filename(struct inode *dir, const struct qstr *iname,
+ 	hinfo->hash_version = DX_HASH_SIPHASH;
+ 	hinfo->seed = NULL;
+ 	if (cf_name->name)
+-		ext4fs_dirhash(dir, cf_name->name, cf_name->len, hinfo);
++		return ext4fs_dirhash(dir, cf_name->name, cf_name->len, hinfo);
+ 	else
+-		ext4fs_dirhash(dir, iname->name, iname->len, hinfo);
+-	return 0;
++		return ext4fs_dirhash(dir, iname->name, iname->len, hinfo);
+ }
+ #endif
+ 
+@@ -2298,10 +2314,15 @@ static int make_indexed_dir(handle_t *handle, struct ext4_filename *fname,
+ 	fname->hinfo.seed = EXT4_SB(dir->i_sb)->s_hash_seed;
+ 
+ 	/* casefolded encrypted hashes are computed on fname setup */
+-	if (!ext4_hash_in_dirent(dir))
+-		ext4fs_dirhash(dir, fname_name(fname),
+-				fname_len(fname), &fname->hinfo);
+-
++	if (!ext4_hash_in_dirent(dir)) {
++		int err = ext4fs_dirhash(dir, fname_name(fname),
++					 fname_len(fname), &fname->hinfo);
++		if (err < 0) {
++			brelse(bh2);
++			brelse(bh);
++			return err;
++		}
++	}
+ 	memset(frames, 0, sizeof(frames));
+ 	frame = frames;
+ 	frame->entries = entries;
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index f43e526112ae8..d6ac61f43ac35 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -3184,11 +3184,9 @@ static __le16 ext4_group_desc_csum(struct super_block *sb, __u32 block_group,
+ 	crc = crc16(crc, (__u8 *)gdp, offset);
+ 	offset += sizeof(gdp->bg_checksum); /* skip checksum */
+ 	/* for checksum of struct ext4_group_desc do the rest...*/
+-	if (ext4_has_feature_64bit(sb) &&
+-	    offset < le16_to_cpu(sbi->s_es->s_desc_size))
++	if (ext4_has_feature_64bit(sb) && offset < sbi->s_desc_size)
+ 		crc = crc16(crc, (__u8 *)gdp + offset,
+-			    le16_to_cpu(sbi->s_es->s_desc_size) -
+-				offset);
++			    sbi->s_desc_size - offset);
+ 
+ out:
+ 	return cpu_to_le16(crc);
+@@ -6581,9 +6579,6 @@ static int __ext4_remount(struct fs_context *fc, struct super_block *sb)
+ 	}
+ 
+ #ifdef CONFIG_QUOTA
+-	/* Release old quota file names */
+-	for (i = 0; i < EXT4_MAXQUOTAS; i++)
+-		kfree(old_opts.s_qf_names[i]);
+ 	if (enable_quota) {
+ 		if (sb_any_quota_suspended(sb))
+ 			dquot_resume(sb, -1);
+@@ -6593,6 +6588,9 @@ static int __ext4_remount(struct fs_context *fc, struct super_block *sb)
+ 				goto restore_opts;
+ 		}
+ 	}
++	/* Release old quota file names */
++	for (i = 0; i < EXT4_MAXQUOTAS; i++)
++		kfree(old_opts.s_qf_names[i]);
+ #endif
+ 	if (!test_opt(sb, BLOCK_VALIDITY) && sbi->s_system_blks)
+ 		ext4_release_system_zone(sb);
+@@ -6603,6 +6601,13 @@ static int __ext4_remount(struct fs_context *fc, struct super_block *sb)
+ 	return 0;
+ 
+ restore_opts:
++	/*
++	 * If there was a failing r/w to ro transition, we may need to
++	 * re-enable quota
++	 */
++	if ((sb->s_flags & SB_RDONLY) && !(old_sb_flags & SB_RDONLY) &&
++	    sb_any_quota_suspended(sb))
++		dquot_resume(sb, -1);
+ 	sb->s_flags = old_sb_flags;
+ 	sbi->s_mount_opt = old_opts.s_mount_opt;
+ 	sbi->s_mount_opt2 = old_opts.s_mount_opt2;
+diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c
+index 767454d74cd69..e33a323faf3c3 100644
+--- a/fs/ext4/xattr.c
++++ b/fs/ext4/xattr.c
+@@ -2615,6 +2615,7 @@ static int ext4_xattr_move_to_block(handle_t *handle, struct inode *inode,
+ 		.in_inode = !!entry->e_value_inum,
+ 	};
+ 	struct ext4_xattr_ibody_header *header = IHDR(inode, raw_inode);
++	int needs_kvfree = 0;
+ 	int error;
+ 
+ 	is = kzalloc(sizeof(struct ext4_xattr_ibody_find), GFP_NOFS);
+@@ -2637,7 +2638,7 @@ static int ext4_xattr_move_to_block(handle_t *handle, struct inode *inode,
+ 			error = -ENOMEM;
+ 			goto out;
+ 		}
+-
++		needs_kvfree = 1;
+ 		error = ext4_xattr_inode_get(inode, entry, buffer, value_size);
+ 		if (error)
+ 			goto out;
+@@ -2676,7 +2677,7 @@ static int ext4_xattr_move_to_block(handle_t *handle, struct inode *inode,
+ 
+ out:
+ 	kfree(b_entry_name);
+-	if (entry->e_value_inum && buffer)
++	if (needs_kvfree && buffer)
+ 		kvfree(buffer);
+ 	if (is)
+ 		brelse(is->iloc.bh);
+diff --git a/fs/f2fs/extent_cache.c b/fs/f2fs/extent_cache.c
+index 28b12553f2b34..9a8153895d203 100644
+--- a/fs/f2fs/extent_cache.c
++++ b/fs/f2fs/extent_cache.c
+@@ -161,118 +161,52 @@ static bool __is_front_mergeable(struct extent_info *cur,
+ 	return __is_extent_mergeable(cur, front, type);
+ }
+ 
+-static struct rb_entry *__lookup_rb_tree_fast(struct rb_entry *cached_re,
+-							unsigned int ofs)
+-{
+-	if (cached_re) {
+-		if (cached_re->ofs <= ofs &&
+-				cached_re->ofs + cached_re->len > ofs) {
+-			return cached_re;
+-		}
+-	}
+-	return NULL;
+-}
+-
+-static struct rb_entry *__lookup_rb_tree_slow(struct rb_root_cached *root,
+-							unsigned int ofs)
++static struct extent_node *__lookup_extent_node(struct rb_root_cached *root,
++			struct extent_node *cached_en, unsigned int fofs)
+ {
+ 	struct rb_node *node = root->rb_root.rb_node;
+-	struct rb_entry *re;
++	struct extent_node *en;
+ 
++	/* check a cached entry */
++	if (cached_en && cached_en->ei.fofs <= fofs &&
++			cached_en->ei.fofs + cached_en->ei.len > fofs)
++		return cached_en;
++
++	/* check rb_tree */
+ 	while (node) {
+-		re = rb_entry(node, struct rb_entry, rb_node);
++		en = rb_entry(node, struct extent_node, rb_node);
+ 
+-		if (ofs < re->ofs)
++		if (fofs < en->ei.fofs)
+ 			node = node->rb_left;
+-		else if (ofs >= re->ofs + re->len)
++		else if (fofs >= en->ei.fofs + en->ei.len)
+ 			node = node->rb_right;
+ 		else
+-			return re;
++			return en;
+ 	}
+ 	return NULL;
+ }
+ 
+-struct rb_entry *f2fs_lookup_rb_tree(struct rb_root_cached *root,
+-				struct rb_entry *cached_re, unsigned int ofs)
+-{
+-	struct rb_entry *re;
+-
+-	re = __lookup_rb_tree_fast(cached_re, ofs);
+-	if (!re)
+-		return __lookup_rb_tree_slow(root, ofs);
+-
+-	return re;
+-}
+-
+-struct rb_node **f2fs_lookup_rb_tree_ext(struct f2fs_sb_info *sbi,
+-					struct rb_root_cached *root,
+-					struct rb_node **parent,
+-					unsigned long long key, bool *leftmost)
+-{
+-	struct rb_node **p = &root->rb_root.rb_node;
+-	struct rb_entry *re;
+-
+-	while (*p) {
+-		*parent = *p;
+-		re = rb_entry(*parent, struct rb_entry, rb_node);
+-
+-		if (key < re->key) {
+-			p = &(*p)->rb_left;
+-		} else {
+-			p = &(*p)->rb_right;
+-			*leftmost = false;
+-		}
+-	}
+-
+-	return p;
+-}
+-
+-struct rb_node **f2fs_lookup_rb_tree_for_insert(struct f2fs_sb_info *sbi,
+-				struct rb_root_cached *root,
+-				struct rb_node **parent,
+-				unsigned int ofs, bool *leftmost)
+-{
+-	struct rb_node **p = &root->rb_root.rb_node;
+-	struct rb_entry *re;
+-
+-	while (*p) {
+-		*parent = *p;
+-		re = rb_entry(*parent, struct rb_entry, rb_node);
+-
+-		if (ofs < re->ofs) {
+-			p = &(*p)->rb_left;
+-		} else if (ofs >= re->ofs + re->len) {
+-			p = &(*p)->rb_right;
+-			*leftmost = false;
+-		} else {
+-			f2fs_bug_on(sbi, 1);
+-		}
+-	}
+-
+-	return p;
+-}
+-
+ /*
+- * lookup rb entry in position of @ofs in rb-tree,
++ * lookup rb entry in position of @fofs in rb-tree,
+  * if hit, return the entry, otherwise, return NULL
+- * @prev_ex: extent before ofs
+- * @next_ex: extent after ofs
+- * @insert_p: insert point for new extent at ofs
++ * @prev_ex: extent before fofs
++ * @next_ex: extent after fofs
++ * @insert_p: insert point for new extent at fofs
+  * in order to simplify the insertion after.
+  * tree must stay unchanged between lookup and insertion.
+  */
+-struct rb_entry *f2fs_lookup_rb_tree_ret(struct rb_root_cached *root,
+-				struct rb_entry *cached_re,
+-				unsigned int ofs,
+-				struct rb_entry **prev_entry,
+-				struct rb_entry **next_entry,
++static struct extent_node *__lookup_extent_node_ret(struct rb_root_cached *root,
++				struct extent_node *cached_en,
++				unsigned int fofs,
++				struct extent_node **prev_entry,
++				struct extent_node **next_entry,
+ 				struct rb_node ***insert_p,
+ 				struct rb_node **insert_parent,
+-				bool force, bool *leftmost)
++				bool *leftmost)
+ {
+ 	struct rb_node **pnode = &root->rb_root.rb_node;
+ 	struct rb_node *parent = NULL, *tmp_node;
+-	struct rb_entry *re = cached_re;
++	struct extent_node *en = cached_en;
+ 
+ 	*insert_p = NULL;
+ 	*insert_parent = NULL;
+@@ -282,24 +216,20 @@ struct rb_entry *f2fs_lookup_rb_tree_ret(struct rb_root_cached *root,
+ 	if (RB_EMPTY_ROOT(&root->rb_root))
+ 		return NULL;
+ 
+-	if (re) {
+-		if (re->ofs <= ofs && re->ofs + re->len > ofs)
+-			goto lookup_neighbors;
+-	}
++	if (en && en->ei.fofs <= fofs && en->ei.fofs + en->ei.len > fofs)
++		goto lookup_neighbors;
+ 
+-	if (leftmost)
+-		*leftmost = true;
++	*leftmost = true;
+ 
+ 	while (*pnode) {
+ 		parent = *pnode;
+-		re = rb_entry(*pnode, struct rb_entry, rb_node);
++		en = rb_entry(*pnode, struct extent_node, rb_node);
+ 
+-		if (ofs < re->ofs) {
++		if (fofs < en->ei.fofs) {
+ 			pnode = &(*pnode)->rb_left;
+-		} else if (ofs >= re->ofs + re->len) {
++		} else if (fofs >= en->ei.fofs + en->ei.len) {
+ 			pnode = &(*pnode)->rb_right;
+-			if (leftmost)
+-				*leftmost = false;
++			*leftmost = false;
+ 		} else {
+ 			goto lookup_neighbors;
+ 		}
+@@ -308,71 +238,32 @@ struct rb_entry *f2fs_lookup_rb_tree_ret(struct rb_root_cached *root,
+ 	*insert_p = pnode;
+ 	*insert_parent = parent;
+ 
+-	re = rb_entry(parent, struct rb_entry, rb_node);
++	en = rb_entry(parent, struct extent_node, rb_node);
+ 	tmp_node = parent;
+-	if (parent && ofs > re->ofs)
++	if (parent && fofs > en->ei.fofs)
+ 		tmp_node = rb_next(parent);
+-	*next_entry = rb_entry_safe(tmp_node, struct rb_entry, rb_node);
++	*next_entry = rb_entry_safe(tmp_node, struct extent_node, rb_node);
+ 
+ 	tmp_node = parent;
+-	if (parent && ofs < re->ofs)
++	if (parent && fofs < en->ei.fofs)
+ 		tmp_node = rb_prev(parent);
+-	*prev_entry = rb_entry_safe(tmp_node, struct rb_entry, rb_node);
++	*prev_entry = rb_entry_safe(tmp_node, struct extent_node, rb_node);
+ 	return NULL;
+ 
+ lookup_neighbors:
+-	if (ofs == re->ofs || force) {
++	if (fofs == en->ei.fofs) {
+ 		/* lookup prev node for merging backward later */
+-		tmp_node = rb_prev(&re->rb_node);
+-		*prev_entry = rb_entry_safe(tmp_node, struct rb_entry, rb_node);
++		tmp_node = rb_prev(&en->rb_node);
++		*prev_entry = rb_entry_safe(tmp_node,
++					struct extent_node, rb_node);
+ 	}
+-	if (ofs == re->ofs + re->len - 1 || force) {
++	if (fofs == en->ei.fofs + en->ei.len - 1) {
+ 		/* lookup next node for merging frontward later */
+-		tmp_node = rb_next(&re->rb_node);
+-		*next_entry = rb_entry_safe(tmp_node, struct rb_entry, rb_node);
++		tmp_node = rb_next(&en->rb_node);
++		*next_entry = rb_entry_safe(tmp_node,
++					struct extent_node, rb_node);
+ 	}
+-	return re;
+-}
+-
+-bool f2fs_check_rb_tree_consistence(struct f2fs_sb_info *sbi,
+-				struct rb_root_cached *root, bool check_key)
+-{
+-#ifdef CONFIG_F2FS_CHECK_FS
+-	struct rb_node *cur = rb_first_cached(root), *next;
+-	struct rb_entry *cur_re, *next_re;
+-
+-	if (!cur)
+-		return true;
+-
+-	while (cur) {
+-		next = rb_next(cur);
+-		if (!next)
+-			return true;
+-
+-		cur_re = rb_entry(cur, struct rb_entry, rb_node);
+-		next_re = rb_entry(next, struct rb_entry, rb_node);
+-
+-		if (check_key) {
+-			if (cur_re->key > next_re->key) {
+-				f2fs_info(sbi, "inconsistent rbtree, "
+-					"cur(%llu) next(%llu)",
+-					cur_re->key, next_re->key);
+-				return false;
+-			}
+-			goto next;
+-		}
+-
+-		if (cur_re->ofs + cur_re->len > next_re->ofs) {
+-			f2fs_info(sbi, "inconsistent rbtree, cur(%u, %u) next(%u, %u)",
+-				  cur_re->ofs, cur_re->len,
+-				  next_re->ofs, next_re->len);
+-			return false;
+-		}
+-next:
+-		cur = next;
+-	}
+-#endif
+-	return true;
++	return en;
+ }
+ 
+ static struct kmem_cache *extent_tree_slab;
+@@ -587,8 +478,7 @@ static bool __lookup_extent_tree(struct inode *inode, pgoff_t pgofs,
+ 		goto out;
+ 	}
+ 
+-	en = (struct extent_node *)f2fs_lookup_rb_tree(&et->root,
+-				(struct rb_entry *)et->cached_en, pgofs);
++	en = __lookup_extent_node(&et->root, et->cached_en, pgofs);
+ 	if (!en)
+ 		goto out;
+ 
+@@ -662,7 +552,7 @@ static struct extent_node *__insert_extent_tree(struct f2fs_sb_info *sbi,
+ 				bool leftmost)
+ {
+ 	struct extent_tree_info *eti = &sbi->extent_tree[et->type];
+-	struct rb_node **p;
++	struct rb_node **p = &et->root.rb_root.rb_node;
+ 	struct rb_node *parent = NULL;
+ 	struct extent_node *en = NULL;
+ 
+@@ -674,8 +564,21 @@ static struct extent_node *__insert_extent_tree(struct f2fs_sb_info *sbi,
+ 
+ 	leftmost = true;
+ 
+-	p = f2fs_lookup_rb_tree_for_insert(sbi, &et->root, &parent,
+-						ei->fofs, &leftmost);
++	/* look up extent_node in the rb tree */
++	while (*p) {
++		parent = *p;
++		en = rb_entry(parent, struct extent_node, rb_node);
++
++		if (ei->fofs < en->ei.fofs) {
++			p = &(*p)->rb_left;
++		} else if (ei->fofs >= en->ei.fofs + en->ei.len) {
++			p = &(*p)->rb_right;
++			leftmost = false;
++		} else {
++			f2fs_bug_on(sbi, 1);
++		}
++	}
++
+ do_insert:
+ 	en = __attach_extent_node(sbi, et, ei, parent, p, leftmost);
+ 	if (!en)
+@@ -734,11 +637,10 @@ static void __update_extent_tree_range(struct inode *inode,
+ 	}
+ 
+ 	/* 1. lookup first extent node in range [fofs, fofs + len - 1] */
+-	en = (struct extent_node *)f2fs_lookup_rb_tree_ret(&et->root,
+-					(struct rb_entry *)et->cached_en, fofs,
+-					(struct rb_entry **)&prev_en,
+-					(struct rb_entry **)&next_en,
+-					&insert_p, &insert_parent, false,
++	en = __lookup_extent_node_ret(&et->root,
++					et->cached_en, fofs,
++					&prev_en, &next_en,
++					&insert_p, &insert_parent,
+ 					&leftmost);
+ 	if (!en)
+ 		en = next_en;
+@@ -876,12 +778,11 @@ void f2fs_update_read_extent_tree_range_compressed(struct inode *inode,
+ 
+ 	write_lock(&et->lock);
+ 
+-	en = (struct extent_node *)f2fs_lookup_rb_tree_ret(&et->root,
+-				(struct rb_entry *)et->cached_en, fofs,
+-				(struct rb_entry **)&prev_en,
+-				(struct rb_entry **)&next_en,
+-				&insert_p, &insert_parent, false,
+-				&leftmost);
++	en = __lookup_extent_node_ret(&et->root,
++					et->cached_en, fofs,
++					&prev_en, &next_en,
++					&insert_p, &insert_parent,
++					&leftmost);
+ 	if (en)
+ 		goto unlock_out;
+ 
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index 620343c65ab67..d6f9d6e0f13b9 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -353,15 +353,7 @@ struct discard_info {
+ 
+ struct discard_cmd {
+ 	struct rb_node rb_node;		/* rb node located in rb-tree */
+-	union {
+-		struct {
+-			block_t lstart;	/* logical start address */
+-			block_t len;	/* length */
+-			block_t start;	/* actual start address in dev */
+-		};
+-		struct discard_info di;	/* discard info */
+-
+-	};
++	struct discard_info di;		/* discard info */
+ 	struct list_head list;		/* command list */
+ 	struct completion wait;		/* compleation */
+ 	struct block_device *bdev;	/* bdev */
+@@ -628,17 +620,6 @@ enum extent_type {
+ 	NR_EXTENT_CACHES,
+ };
+ 
+-struct rb_entry {
+-	struct rb_node rb_node;		/* rb node located in rb-tree */
+-	union {
+-		struct {
+-			unsigned int ofs;	/* start offset of the entry */
+-			unsigned int len;	/* length of the entry */
+-		};
+-		unsigned long long key;		/* 64-bits key */
+-	} __packed;
+-};
+-
+ struct extent_info {
+ 	unsigned int fofs;		/* start offset in a file */
+ 	unsigned int len;		/* length of the extent */
+@@ -4139,23 +4120,6 @@ void f2fs_leave_shrinker(struct f2fs_sb_info *sbi);
+  * extent_cache.c
+  */
+ bool sanity_check_extent_cache(struct inode *inode);
+-struct rb_entry *f2fs_lookup_rb_tree(struct rb_root_cached *root,
+-				struct rb_entry *cached_re, unsigned int ofs);
+-struct rb_node **f2fs_lookup_rb_tree_ext(struct f2fs_sb_info *sbi,
+-				struct rb_root_cached *root,
+-				struct rb_node **parent,
+-				unsigned long long key, bool *left_most);
+-struct rb_node **f2fs_lookup_rb_tree_for_insert(struct f2fs_sb_info *sbi,
+-				struct rb_root_cached *root,
+-				struct rb_node **parent,
+-				unsigned int ofs, bool *leftmost);
+-struct rb_entry *f2fs_lookup_rb_tree_ret(struct rb_root_cached *root,
+-		struct rb_entry *cached_re, unsigned int ofs,
+-		struct rb_entry **prev_entry, struct rb_entry **next_entry,
+-		struct rb_node ***insert_p, struct rb_node **insert_parent,
+-		bool force, bool *leftmost);
+-bool f2fs_check_rb_tree_consistence(struct f2fs_sb_info *sbi,
+-				struct rb_root_cached *root, bool check_key);
+ void f2fs_init_extent_tree(struct inode *inode);
+ void f2fs_drop_extent_tree(struct inode *inode);
+ void f2fs_destroy_extent_node(struct inode *inode);
+diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
+index 292a17d62f569..2996d38aa89c3 100644
+--- a/fs/f2fs/gc.c
++++ b/fs/f2fs/gc.c
+@@ -390,40 +390,95 @@ static unsigned int count_bits(const unsigned long *addr,
+ 	return sum;
+ }
+ 
+-static struct victim_entry *attach_victim_entry(struct f2fs_sb_info *sbi,
+-				unsigned long long mtime, unsigned int segno,
+-				struct rb_node *parent, struct rb_node **p,
+-				bool left_most)
++static bool f2fs_check_victim_tree(struct f2fs_sb_info *sbi,
++				struct rb_root_cached *root)
++{
++#ifdef CONFIG_F2FS_CHECK_FS
++	struct rb_node *cur = rb_first_cached(root), *next;
++	struct victim_entry *cur_ve, *next_ve;
++
++	while (cur) {
++		next = rb_next(cur);
++		if (!next)
++			return true;
++
++		cur_ve = rb_entry(cur, struct victim_entry, rb_node);
++		next_ve = rb_entry(next, struct victim_entry, rb_node);
++
++		if (cur_ve->mtime > next_ve->mtime) {
++			f2fs_info(sbi, "broken victim_rbtree, "
++				"cur_mtime(%llu) next_mtime(%llu)",
++				cur_ve->mtime, next_ve->mtime);
++			return false;
++		}
++		cur = next;
++	}
++#endif
++	return true;
++}
++
++static struct victim_entry *__lookup_victim_entry(struct f2fs_sb_info *sbi,
++					unsigned long long mtime)
++{
++	struct atgc_management *am = &sbi->am;
++	struct rb_node *node = am->root.rb_root.rb_node;
++	struct victim_entry *ve = NULL;
++
++	while (node) {
++		ve = rb_entry(node, struct victim_entry, rb_node);
++
++		if (mtime < ve->mtime)
++			node = node->rb_left;
++		else
++			node = node->rb_right;
++	}
++	return ve;
++}
++
++static struct victim_entry *__create_victim_entry(struct f2fs_sb_info *sbi,
++		unsigned long long mtime, unsigned int segno)
+ {
+ 	struct atgc_management *am = &sbi->am;
+ 	struct victim_entry *ve;
+ 
+-	ve =  f2fs_kmem_cache_alloc(victim_entry_slab,
+-				GFP_NOFS, true, NULL);
++	ve =  f2fs_kmem_cache_alloc(victim_entry_slab, GFP_NOFS, true, NULL);
+ 
+ 	ve->mtime = mtime;
+ 	ve->segno = segno;
+ 
+-	rb_link_node(&ve->rb_node, parent, p);
+-	rb_insert_color_cached(&ve->rb_node, &am->root, left_most);
+-
+ 	list_add_tail(&ve->list, &am->victim_list);
+-
+ 	am->victim_count++;
+ 
+ 	return ve;
+ }
+ 
+-static void insert_victim_entry(struct f2fs_sb_info *sbi,
++static void __insert_victim_entry(struct f2fs_sb_info *sbi,
+ 				unsigned long long mtime, unsigned int segno)
+ {
+ 	struct atgc_management *am = &sbi->am;
+-	struct rb_node **p;
++	struct rb_root_cached *root = &am->root;
++	struct rb_node **p = &root->rb_root.rb_node;
+ 	struct rb_node *parent = NULL;
++	struct victim_entry *ve;
+ 	bool left_most = true;
+ 
+-	p = f2fs_lookup_rb_tree_ext(sbi, &am->root, &parent, mtime, &left_most);
+-	attach_victim_entry(sbi, mtime, segno, parent, p, left_most);
++	/* look up rb tree to find parent node */
++	while (*p) {
++		parent = *p;
++		ve = rb_entry(parent, struct victim_entry, rb_node);
++
++		if (mtime < ve->mtime) {
++			p = &(*p)->rb_left;
++		} else {
++			p = &(*p)->rb_right;
++			left_most = false;
++		}
++	}
++
++	ve = __create_victim_entry(sbi, mtime, segno);
++
++	rb_link_node(&ve->rb_node, parent, p);
++	rb_insert_color_cached(&ve->rb_node, root, left_most);
+ }
+ 
+ static void add_victim_entry(struct f2fs_sb_info *sbi,
+@@ -459,19 +514,7 @@ static void add_victim_entry(struct f2fs_sb_info *sbi,
+ 	if (sit_i->dirty_max_mtime - mtime < p->age_threshold)
+ 		return;
+ 
+-	insert_victim_entry(sbi, mtime, segno);
+-}
+-
+-static struct rb_node *lookup_central_victim(struct f2fs_sb_info *sbi,
+-						struct victim_sel_policy *p)
+-{
+-	struct atgc_management *am = &sbi->am;
+-	struct rb_node *parent = NULL;
+-	bool left_most;
+-
+-	f2fs_lookup_rb_tree_ext(sbi, &am->root, &parent, p->age, &left_most);
+-
+-	return parent;
++	__insert_victim_entry(sbi, mtime, segno);
+ }
+ 
+ static void atgc_lookup_victim(struct f2fs_sb_info *sbi,
+@@ -481,7 +524,6 @@ static void atgc_lookup_victim(struct f2fs_sb_info *sbi,
+ 	struct atgc_management *am = &sbi->am;
+ 	struct rb_root_cached *root = &am->root;
+ 	struct rb_node *node;
+-	struct rb_entry *re;
+ 	struct victim_entry *ve;
+ 	unsigned long long total_time;
+ 	unsigned long long age, u, accu;
+@@ -508,12 +550,10 @@ static void atgc_lookup_victim(struct f2fs_sb_info *sbi,
+ 
+ 	node = rb_first_cached(root);
+ next:
+-	re = rb_entry_safe(node, struct rb_entry, rb_node);
+-	if (!re)
++	ve = rb_entry_safe(node, struct victim_entry, rb_node);
++	if (!ve)
+ 		return;
+ 
+-	ve = (struct victim_entry *)re;
+-
+ 	if (ve->mtime >= max_mtime || ve->mtime < min_mtime)
+ 		goto skip;
+ 
+@@ -555,8 +595,6 @@ static void atssr_lookup_victim(struct f2fs_sb_info *sbi,
+ {
+ 	struct sit_info *sit_i = SIT_I(sbi);
+ 	struct atgc_management *am = &sbi->am;
+-	struct rb_node *node;
+-	struct rb_entry *re;
+ 	struct victim_entry *ve;
+ 	unsigned long long age;
+ 	unsigned long long max_mtime = sit_i->dirty_max_mtime;
+@@ -566,25 +604,22 @@ static void atssr_lookup_victim(struct f2fs_sb_info *sbi,
+ 	unsigned int dirty_threshold = max(am->max_candidate_count,
+ 					am->candidate_ratio *
+ 					am->victim_count / 100);
+-	unsigned int cost;
+-	unsigned int iter = 0;
++	unsigned int cost, iter;
+ 	int stage = 0;
+ 
+ 	if (max_mtime < min_mtime)
+ 		return;
+ 	max_mtime += 1;
+ next_stage:
+-	node = lookup_central_victim(sbi, p);
++	iter = 0;
++	ve = __lookup_victim_entry(sbi, p->age);
+ next_node:
+-	re = rb_entry_safe(node, struct rb_entry, rb_node);
+-	if (!re) {
+-		if (stage == 0)
+-			goto skip_stage;
++	if (!ve) {
++		if (stage++ == 0)
++			goto next_stage;
+ 		return;
+ 	}
+ 
+-	ve = (struct victim_entry *)re;
+-
+ 	if (ve->mtime >= max_mtime || ve->mtime < min_mtime)
+ 		goto skip_node;
+ 
+@@ -610,24 +645,20 @@ next_node:
+ 	}
+ skip_node:
+ 	if (iter < dirty_threshold) {
+-		if (stage == 0)
+-			node = rb_prev(node);
+-		else if (stage == 1)
+-			node = rb_next(node);
++		ve = rb_entry(stage == 0 ? rb_prev(&ve->rb_node) :
++					rb_next(&ve->rb_node),
++					struct victim_entry, rb_node);
+ 		goto next_node;
+ 	}
+-skip_stage:
+-	if (stage < 1) {
+-		stage++;
+-		iter = 0;
++
++	if (stage++ == 0)
+ 		goto next_stage;
+-	}
+ }
++
+ static void lookup_victim_by_age(struct f2fs_sb_info *sbi,
+ 						struct victim_sel_policy *p)
+ {
+-	f2fs_bug_on(sbi, !f2fs_check_rb_tree_consistence(sbi,
+-						&sbi->am.root, true));
++	f2fs_bug_on(sbi, !f2fs_check_victim_tree(sbi, &sbi->am.root));
+ 
+ 	if (p->gc_mode == GC_AT)
+ 		atgc_lookup_victim(sbi, p);
+diff --git a/fs/f2fs/gc.h b/fs/f2fs/gc.h
+index 15bd1d680f677..5ad6ac63e13f3 100644
+--- a/fs/f2fs/gc.h
++++ b/fs/f2fs/gc.h
+@@ -55,20 +55,10 @@ struct gc_inode_list {
+ 	struct radix_tree_root iroot;
+ };
+ 
+-struct victim_info {
+-	unsigned long long mtime;	/* mtime of section */
+-	unsigned int segno;		/* section No. */
+-};
+-
+ struct victim_entry {
+ 	struct rb_node rb_node;		/* rb node located in rb-tree */
+-	union {
+-		struct {
+-			unsigned long long mtime;	/* mtime of section */
+-			unsigned int segno;		/* segment No. */
+-		};
+-		struct victim_info vi;	/* victim info */
+-	};
++	unsigned long long mtime;	/* mtime of section */
++	unsigned int segno;		/* segment No. */
+ 	struct list_head list;
+ };
+ 
+diff --git a/fs/f2fs/namei.c b/fs/f2fs/namei.c
+index 11fc4c8036a9d..f97073c75d677 100644
+--- a/fs/f2fs/namei.c
++++ b/fs/f2fs/namei.c
+@@ -995,12 +995,20 @@ static int f2fs_rename(struct mnt_idmap *idmap, struct inode *old_dir,
+ 			goto out;
+ 	}
+ 
++	/*
++	 * Copied from ext4_rename: we need to protect against old.inode
++	 * directory getting converted from inline directory format into
++	 * a normal one.
++	 */
++	if (S_ISDIR(old_inode->i_mode))
++		inode_lock_nested(old_inode, I_MUTEX_NONDIR2);
++
+ 	err = -ENOENT;
+ 	old_entry = f2fs_find_entry(old_dir, &old_dentry->d_name, &old_page);
+ 	if (!old_entry) {
+ 		if (IS_ERR(old_page))
+ 			err = PTR_ERR(old_page);
+-		goto out;
++		goto out_unlock_old;
+ 	}
+ 
+ 	if (S_ISDIR(old_inode->i_mode)) {
+@@ -1108,6 +1116,9 @@ static int f2fs_rename(struct mnt_idmap *idmap, struct inode *old_dir,
+ 
+ 	f2fs_unlock_op(sbi);
+ 
++	if (S_ISDIR(old_inode->i_mode))
++		inode_unlock(old_inode);
++
+ 	if (IS_DIRSYNC(old_dir) || IS_DIRSYNC(new_dir))
+ 		f2fs_sync_fs(sbi->sb, 1);
+ 
+@@ -1122,6 +1133,9 @@ out_dir:
+ 		f2fs_put_page(old_dir_page, 0);
+ out_old:
+ 	f2fs_put_page(old_page, 0);
++out_unlock_old:
++	if (S_ISDIR(old_inode->i_mode))
++		inode_unlock(old_inode);
+ out:
+ 	iput(whiteout);
+ 	return err;
+diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
+index b2a080c660c86..a705c1d427162 100644
+--- a/fs/f2fs/segment.c
++++ b/fs/f2fs/segment.c
+@@ -263,7 +263,7 @@ retry:
+ 	f2fs_put_dnode(&dn);
+ 
+ 	trace_f2fs_replace_atomic_write_block(inode, F2FS_I(inode)->cow_inode,
+-					index, *old_addr, new_addr, recover);
++			index, old_addr ? *old_addr : 0, new_addr, recover);
+ 	return 0;
+ }
+ 
+@@ -939,9 +939,9 @@ static struct discard_cmd *__create_discard_cmd(struct f2fs_sb_info *sbi,
+ 	dc = f2fs_kmem_cache_alloc(discard_cmd_slab, GFP_NOFS, true, NULL);
+ 	INIT_LIST_HEAD(&dc->list);
+ 	dc->bdev = bdev;
+-	dc->lstart = lstart;
+-	dc->start = start;
+-	dc->len = len;
++	dc->di.lstart = lstart;
++	dc->di.start = start;
++	dc->di.len = len;
+ 	dc->ref = 0;
+ 	dc->state = D_PREP;
+ 	dc->queued = 0;
+@@ -956,20 +956,108 @@ static struct discard_cmd *__create_discard_cmd(struct f2fs_sb_info *sbi,
+ 	return dc;
+ }
+ 
+-static struct discard_cmd *__attach_discard_cmd(struct f2fs_sb_info *sbi,
+-				struct block_device *bdev, block_t lstart,
+-				block_t start, block_t len,
+-				struct rb_node *parent, struct rb_node **p,
+-				bool leftmost)
++static bool f2fs_check_discard_tree(struct f2fs_sb_info *sbi)
++{
++#ifdef CONFIG_F2FS_CHECK_FS
++	struct discard_cmd_control *dcc = SM_I(sbi)->dcc_info;
++	struct rb_node *cur = rb_first_cached(&dcc->root), *next;
++	struct discard_cmd *cur_dc, *next_dc;
++
++	while (cur) {
++		next = rb_next(cur);
++		if (!next)
++			return true;
++
++		cur_dc = rb_entry(cur, struct discard_cmd, rb_node);
++		next_dc = rb_entry(next, struct discard_cmd, rb_node);
++
++		if (cur_dc->di.lstart + cur_dc->di.len > next_dc->di.lstart) {
++			f2fs_info(sbi, "broken discard_rbtree, "
++				"cur(%u, %u) next(%u, %u)",
++				cur_dc->di.lstart, cur_dc->di.len,
++				next_dc->di.lstart, next_dc->di.len);
++			return false;
++		}
++		cur = next;
++	}
++#endif
++	return true;
++}
++
++static struct discard_cmd *__lookup_discard_cmd(struct f2fs_sb_info *sbi,
++						block_t blkaddr)
+ {
+ 	struct discard_cmd_control *dcc = SM_I(sbi)->dcc_info;
++	struct rb_node *node = dcc->root.rb_root.rb_node;
+ 	struct discard_cmd *dc;
+ 
+-	dc = __create_discard_cmd(sbi, bdev, lstart, start, len);
++	while (node) {
++		dc = rb_entry(node, struct discard_cmd, rb_node);
+ 
+-	rb_link_node(&dc->rb_node, parent, p);
+-	rb_insert_color_cached(&dc->rb_node, &dcc->root, leftmost);
++		if (blkaddr < dc->di.lstart)
++			node = node->rb_left;
++		else if (blkaddr >= dc->di.lstart + dc->di.len)
++			node = node->rb_right;
++		else
++			return dc;
++	}
++	return NULL;
++}
++
++static struct discard_cmd *__lookup_discard_cmd_ret(struct rb_root_cached *root,
++				block_t blkaddr,
++				struct discard_cmd **prev_entry,
++				struct discard_cmd **next_entry,
++				struct rb_node ***insert_p,
++				struct rb_node **insert_parent)
++{
++	struct rb_node **pnode = &root->rb_root.rb_node;
++	struct rb_node *parent = NULL, *tmp_node;
++	struct discard_cmd *dc;
++
++	*insert_p = NULL;
++	*insert_parent = NULL;
++	*prev_entry = NULL;
++	*next_entry = NULL;
++
++	if (RB_EMPTY_ROOT(&root->rb_root))
++		return NULL;
++
++	while (*pnode) {
++		parent = *pnode;
++		dc = rb_entry(*pnode, struct discard_cmd, rb_node);
++
++		if (blkaddr < dc->di.lstart)
++			pnode = &(*pnode)->rb_left;
++		else if (blkaddr >= dc->di.lstart + dc->di.len)
++			pnode = &(*pnode)->rb_right;
++		else
++			goto lookup_neighbors;
++	}
++
++	*insert_p = pnode;
++	*insert_parent = parent;
++
++	dc = rb_entry(parent, struct discard_cmd, rb_node);
++	tmp_node = parent;
++	if (parent && blkaddr > dc->di.lstart)
++		tmp_node = rb_next(parent);
++	*next_entry = rb_entry_safe(tmp_node, struct discard_cmd, rb_node);
++
++	tmp_node = parent;
++	if (parent && blkaddr < dc->di.lstart)
++		tmp_node = rb_prev(parent);
++	*prev_entry = rb_entry_safe(tmp_node, struct discard_cmd, rb_node);
++	return NULL;
++
++lookup_neighbors:
++	/* lookup prev node for merging backward later */
++	tmp_node = rb_prev(&dc->rb_node);
++	*prev_entry = rb_entry_safe(tmp_node, struct discard_cmd, rb_node);
+ 
++	/* lookup next node for merging frontward later */
++	tmp_node = rb_next(&dc->rb_node);
++	*next_entry = rb_entry_safe(tmp_node, struct discard_cmd, rb_node);
+ 	return dc;
+ }
+ 
+@@ -981,7 +1069,7 @@ static void __detach_discard_cmd(struct discard_cmd_control *dcc,
+ 
+ 	list_del(&dc->list);
+ 	rb_erase_cached(&dc->rb_node, &dcc->root);
+-	dcc->undiscard_blks -= dc->len;
++	dcc->undiscard_blks -= dc->di.len;
+ 
+ 	kmem_cache_free(discard_cmd_slab, dc);
+ 
+@@ -994,7 +1082,7 @@ static void __remove_discard_cmd(struct f2fs_sb_info *sbi,
+ 	struct discard_cmd_control *dcc = SM_I(sbi)->dcc_info;
+ 	unsigned long flags;
+ 
+-	trace_f2fs_remove_discard(dc->bdev, dc->start, dc->len);
++	trace_f2fs_remove_discard(dc->bdev, dc->di.start, dc->di.len);
+ 
+ 	spin_lock_irqsave(&dc->lock, flags);
+ 	if (dc->bio_ref) {
+@@ -1012,7 +1100,7 @@ static void __remove_discard_cmd(struct f2fs_sb_info *sbi,
+ 		printk_ratelimited(
+ 			"%sF2FS-fs (%s): Issue discard(%u, %u, %u) failed, ret: %d",
+ 			KERN_INFO, sbi->sb->s_id,
+-			dc->lstart, dc->start, dc->len, dc->error);
++			dc->di.lstart, dc->di.start, dc->di.len, dc->error);
+ 	__detach_discard_cmd(dcc, dc);
+ }
+ 
+@@ -1128,14 +1216,14 @@ static int __submit_discard_cmd(struct f2fs_sb_info *sbi,
+ 	if (is_sbi_flag_set(sbi, SBI_NEED_FSCK))
+ 		return 0;
+ 
+-	trace_f2fs_issue_discard(bdev, dc->start, dc->len);
++	trace_f2fs_issue_discard(bdev, dc->di.start, dc->di.len);
+ 
+-	lstart = dc->lstart;
+-	start = dc->start;
+-	len = dc->len;
++	lstart = dc->di.lstart;
++	start = dc->di.start;
++	len = dc->di.len;
+ 	total_len = len;
+ 
+-	dc->len = 0;
++	dc->di.len = 0;
+ 
+ 	while (total_len && *issued < dpolicy->max_requests && !err) {
+ 		struct bio *bio = NULL;
+@@ -1151,7 +1239,7 @@ static int __submit_discard_cmd(struct f2fs_sb_info *sbi,
+ 		if (*issued == dpolicy->max_requests)
+ 			last = true;
+ 
+-		dc->len += len;
++		dc->di.len += len;
+ 
+ 		if (time_to_inject(sbi, FAULT_DISCARD)) {
+ 			err = -EIO;
+@@ -1213,34 +1301,41 @@ static int __submit_discard_cmd(struct f2fs_sb_info *sbi,
+ 	return err;
+ }
+ 
+-static void __insert_discard_tree(struct f2fs_sb_info *sbi,
++static void __insert_discard_cmd(struct f2fs_sb_info *sbi,
+ 				struct block_device *bdev, block_t lstart,
+-				block_t start, block_t len,
+-				struct rb_node **insert_p,
+-				struct rb_node *insert_parent)
++				block_t start, block_t len)
+ {
+ 	struct discard_cmd_control *dcc = SM_I(sbi)->dcc_info;
+-	struct rb_node **p;
++	struct rb_node **p = &dcc->root.rb_root.rb_node;
+ 	struct rb_node *parent = NULL;
++	struct discard_cmd *dc;
+ 	bool leftmost = true;
+ 
+-	if (insert_p && insert_parent) {
+-		parent = insert_parent;
+-		p = insert_p;
+-		goto do_insert;
++	/* look up rb tree to find parent node */
++	while (*p) {
++		parent = *p;
++		dc = rb_entry(parent, struct discard_cmd, rb_node);
++
++		if (lstart < dc->di.lstart) {
++			p = &(*p)->rb_left;
++		} else if (lstart >= dc->di.lstart + dc->di.len) {
++			p = &(*p)->rb_right;
++			leftmost = false;
++		} else {
++			f2fs_bug_on(sbi, 1);
++		}
+ 	}
+ 
+-	p = f2fs_lookup_rb_tree_for_insert(sbi, &dcc->root, &parent,
+-							lstart, &leftmost);
+-do_insert:
+-	__attach_discard_cmd(sbi, bdev, lstart, start, len, parent,
+-								p, leftmost);
++	dc = __create_discard_cmd(sbi, bdev, lstart, start, len);
++
++	rb_link_node(&dc->rb_node, parent, p);
++	rb_insert_color_cached(&dc->rb_node, &dcc->root, leftmost);
+ }
+ 
+ static void __relocate_discard_cmd(struct discard_cmd_control *dcc,
+ 						struct discard_cmd *dc)
+ {
+-	list_move_tail(&dc->list, &dcc->pend_list[plist_idx(dc->len)]);
++	list_move_tail(&dc->list, &dcc->pend_list[plist_idx(dc->di.len)]);
+ }
+ 
+ static void __punch_discard_cmd(struct f2fs_sb_info *sbi,
+@@ -1250,7 +1345,7 @@ static void __punch_discard_cmd(struct f2fs_sb_info *sbi,
+ 	struct discard_info di = dc->di;
+ 	bool modified = false;
+ 
+-	if (dc->state == D_DONE || dc->len == 1) {
++	if (dc->state == D_DONE || dc->di.len == 1) {
+ 		__remove_discard_cmd(sbi, dc);
+ 		return;
+ 	}
+@@ -1258,23 +1353,22 @@ static void __punch_discard_cmd(struct f2fs_sb_info *sbi,
+ 	dcc->undiscard_blks -= di.len;
+ 
+ 	if (blkaddr > di.lstart) {
+-		dc->len = blkaddr - dc->lstart;
+-		dcc->undiscard_blks += dc->len;
++		dc->di.len = blkaddr - dc->di.lstart;
++		dcc->undiscard_blks += dc->di.len;
+ 		__relocate_discard_cmd(dcc, dc);
+ 		modified = true;
+ 	}
+ 
+ 	if (blkaddr < di.lstart + di.len - 1) {
+ 		if (modified) {
+-			__insert_discard_tree(sbi, dc->bdev, blkaddr + 1,
++			__insert_discard_cmd(sbi, dc->bdev, blkaddr + 1,
+ 					di.start + blkaddr + 1 - di.lstart,
+-					di.lstart + di.len - 1 - blkaddr,
+-					NULL, NULL);
++					di.lstart + di.len - 1 - blkaddr);
+ 		} else {
+-			dc->lstart++;
+-			dc->len--;
+-			dc->start++;
+-			dcc->undiscard_blks += dc->len;
++			dc->di.lstart++;
++			dc->di.len--;
++			dc->di.start++;
++			dcc->undiscard_blks += dc->di.len;
+ 			__relocate_discard_cmd(dcc, dc);
+ 		}
+ 	}
+@@ -1293,17 +1387,14 @@ static void __update_discard_tree_range(struct f2fs_sb_info *sbi,
+ 			SECTOR_TO_BLOCK(bdev_max_discard_sectors(bdev));
+ 	block_t end = lstart + len;
+ 
+-	dc = (struct discard_cmd *)f2fs_lookup_rb_tree_ret(&dcc->root,
+-					NULL, lstart,
+-					(struct rb_entry **)&prev_dc,
+-					(struct rb_entry **)&next_dc,
+-					&insert_p, &insert_parent, true, NULL);
++	dc = __lookup_discard_cmd_ret(&dcc->root, lstart,
++				&prev_dc, &next_dc, &insert_p, &insert_parent);
+ 	if (dc)
+ 		prev_dc = dc;
+ 
+ 	if (!prev_dc) {
+ 		di.lstart = lstart;
+-		di.len = next_dc ? next_dc->lstart - lstart : len;
++		di.len = next_dc ? next_dc->di.lstart - lstart : len;
+ 		di.len = min(di.len, len);
+ 		di.start = start;
+ 	}
+@@ -1314,16 +1405,16 @@ static void __update_discard_tree_range(struct f2fs_sb_info *sbi,
+ 		struct discard_cmd *tdc = NULL;
+ 
+ 		if (prev_dc) {
+-			di.lstart = prev_dc->lstart + prev_dc->len;
++			di.lstart = prev_dc->di.lstart + prev_dc->di.len;
+ 			if (di.lstart < lstart)
+ 				di.lstart = lstart;
+ 			if (di.lstart >= end)
+ 				break;
+ 
+-			if (!next_dc || next_dc->lstart > end)
++			if (!next_dc || next_dc->di.lstart > end)
+ 				di.len = end - di.lstart;
+ 			else
+-				di.len = next_dc->lstart - di.lstart;
++				di.len = next_dc->di.lstart - di.lstart;
+ 			di.start = start + di.lstart - lstart;
+ 		}
+ 
+@@ -1356,10 +1447,9 @@ static void __update_discard_tree_range(struct f2fs_sb_info *sbi,
+ 			merged = true;
+ 		}
+ 
+-		if (!merged) {
+-			__insert_discard_tree(sbi, bdev, di.lstart, di.start,
+-							di.len, NULL, NULL);
+-		}
++		if (!merged)
++			__insert_discard_cmd(sbi, bdev,
++						di.lstart, di.start, di.len);
+  next:
+ 		prev_dc = next_dc;
+ 		if (!prev_dc)
+@@ -1398,15 +1488,11 @@ static void __issue_discard_cmd_orderly(struct f2fs_sb_info *sbi,
+ 	struct rb_node **insert_p = NULL, *insert_parent = NULL;
+ 	struct discard_cmd *dc;
+ 	struct blk_plug plug;
+-	unsigned int pos = dcc->next_pos;
+ 	bool io_interrupted = false;
+ 
+ 	mutex_lock(&dcc->cmd_lock);
+-	dc = (struct discard_cmd *)f2fs_lookup_rb_tree_ret(&dcc->root,
+-					NULL, pos,
+-					(struct rb_entry **)&prev_dc,
+-					(struct rb_entry **)&next_dc,
+-					&insert_p, &insert_parent, true, NULL);
++	dc = __lookup_discard_cmd_ret(&dcc->root, dcc->next_pos,
++				&prev_dc, &next_dc, &insert_p, &insert_parent);
+ 	if (!dc)
+ 		dc = next_dc;
+ 
+@@ -1424,7 +1510,7 @@ static void __issue_discard_cmd_orderly(struct f2fs_sb_info *sbi,
+ 			break;
+ 		}
+ 
+-		dcc->next_pos = dc->lstart + dc->len;
++		dcc->next_pos = dc->di.lstart + dc->di.len;
+ 		err = __submit_discard_cmd(sbi, dpolicy, dc, issued);
+ 
+ 		if (*issued >= dpolicy->max_requests)
+@@ -1483,8 +1569,7 @@ retry:
+ 		if (list_empty(pend_list))
+ 			goto next;
+ 		if (unlikely(dcc->rbtree_check))
+-			f2fs_bug_on(sbi, !f2fs_check_rb_tree_consistence(sbi,
+-							&dcc->root, false));
++			f2fs_bug_on(sbi, !f2fs_check_discard_tree(sbi));
+ 		blk_start_plug(&plug);
+ 		list_for_each_entry_safe(dc, tmp, pend_list, list) {
+ 			f2fs_bug_on(sbi, dc->state != D_PREP);
+@@ -1562,7 +1647,7 @@ static unsigned int __wait_one_discard_bio(struct f2fs_sb_info *sbi,
+ 	dc->ref--;
+ 	if (!dc->ref) {
+ 		if (!dc->error)
+-			len = dc->len;
++			len = dc->di.len;
+ 		__remove_discard_cmd(sbi, dc);
+ 	}
+ 	mutex_unlock(&dcc->cmd_lock);
+@@ -1585,14 +1670,15 @@ next:
+ 
+ 	mutex_lock(&dcc->cmd_lock);
+ 	list_for_each_entry_safe(iter, tmp, wait_list, list) {
+-		if (iter->lstart + iter->len <= start || end <= iter->lstart)
++		if (iter->di.lstart + iter->di.len <= start ||
++					end <= iter->di.lstart)
+ 			continue;
+-		if (iter->len < dpolicy->granularity)
++		if (iter->di.len < dpolicy->granularity)
+ 			continue;
+ 		if (iter->state == D_DONE && !iter->ref) {
+ 			wait_for_completion_io(&iter->wait);
+ 			if (!iter->error)
+-				trimmed += iter->len;
++				trimmed += iter->di.len;
+ 			__remove_discard_cmd(sbi, iter);
+ 		} else {
+ 			iter->ref++;
+@@ -1636,8 +1722,7 @@ static void f2fs_wait_discard_bio(struct f2fs_sb_info *sbi, block_t blkaddr)
+ 	bool need_wait = false;
+ 
+ 	mutex_lock(&dcc->cmd_lock);
+-	dc = (struct discard_cmd *)f2fs_lookup_rb_tree(&dcc->root,
+-							NULL, blkaddr);
++	dc = __lookup_discard_cmd(sbi, blkaddr);
+ 	if (dc) {
+ 		if (dc->state == D_PREP) {
+ 			__punch_discard_cmd(sbi, dc, blkaddr);
+@@ -2970,24 +3055,20 @@ next:
+ 
+ 	mutex_lock(&dcc->cmd_lock);
+ 	if (unlikely(dcc->rbtree_check))
+-		f2fs_bug_on(sbi, !f2fs_check_rb_tree_consistence(sbi,
+-							&dcc->root, false));
+-
+-	dc = (struct discard_cmd *)f2fs_lookup_rb_tree_ret(&dcc->root,
+-					NULL, start,
+-					(struct rb_entry **)&prev_dc,
+-					(struct rb_entry **)&next_dc,
+-					&insert_p, &insert_parent, true, NULL);
++		f2fs_bug_on(sbi, !f2fs_check_discard_tree(sbi));
++
++	dc = __lookup_discard_cmd_ret(&dcc->root, start,
++				&prev_dc, &next_dc, &insert_p, &insert_parent);
+ 	if (!dc)
+ 		dc = next_dc;
+ 
+ 	blk_start_plug(&plug);
+ 
+-	while (dc && dc->lstart <= end) {
++	while (dc && dc->di.lstart <= end) {
+ 		struct rb_node *node;
+ 		int err = 0;
+ 
+-		if (dc->len < dpolicy->granularity)
++		if (dc->di.len < dpolicy->granularity)
+ 			goto skip;
+ 
+ 		if (dc->state != D_PREP) {
+@@ -2998,7 +3079,7 @@ next:
+ 		err = __submit_discard_cmd(sbi, dpolicy, dc, &issued);
+ 
+ 		if (issued >= dpolicy->max_requests) {
+-			start = dc->lstart + dc->len;
++			start = dc->di.lstart + dc->di.len;
+ 
+ 			if (err)
+ 				__remove_discard_cmd(sbi, dc);
+diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
+index 1db3e3c24b43a..ae4e51e91ee33 100644
+--- a/fs/fs-writeback.c
++++ b/fs/fs-writeback.c
+@@ -829,7 +829,7 @@ void wbc_detach_inode(struct writeback_control *wbc)
+ 		 * is okay.  The main goal is avoiding keeping an inode on
+ 		 * the wrong wb for an extended period of time.
+ 		 */
+-		if (hweight32(history) > WB_FRN_HIST_THR_SLOTS)
++		if (hweight16(history) > WB_FRN_HIST_THR_SLOTS)
+ 			inode_switch_wbs(inode, max_id);
+ 	}
+ 
+diff --git a/fs/notify/inotify/inotify_fsnotify.c b/fs/notify/inotify/inotify_fsnotify.c
+index 49cfe2ae6d23d..993375f0db673 100644
+--- a/fs/notify/inotify/inotify_fsnotify.c
++++ b/fs/notify/inotify/inotify_fsnotify.c
+@@ -65,7 +65,7 @@ int inotify_handle_inode_event(struct fsnotify_mark *inode_mark, u32 mask,
+ 	struct fsnotify_event *fsn_event;
+ 	struct fsnotify_group *group = inode_mark->group;
+ 	int ret;
+-	int len = 0;
++	int len = 0, wd;
+ 	int alloc_len = sizeof(struct inotify_event_info);
+ 	struct mem_cgroup *old_memcg;
+ 
+@@ -80,6 +80,13 @@ int inotify_handle_inode_event(struct fsnotify_mark *inode_mark, u32 mask,
+ 	i_mark = container_of(inode_mark, struct inotify_inode_mark,
+ 			      fsn_mark);
+ 
++	/*
++	 * We can be racing with mark being detached. Don't report event with
++	 * invalid wd.
++	 */
++	wd = READ_ONCE(i_mark->wd);
++	if (wd == -1)
++		return 0;
+ 	/*
+ 	 * Whoever is interested in the event, pays for the allocation. Do not
+ 	 * trigger OOM killer in the target monitoring memcg as it may have
+@@ -110,7 +117,7 @@ int inotify_handle_inode_event(struct fsnotify_mark *inode_mark, u32 mask,
+ 	fsn_event = &event->fse;
+ 	fsnotify_init_event(fsn_event);
+ 	event->mask = mask;
+-	event->wd = i_mark->wd;
++	event->wd = wd;
+ 	event->sync_cookie = cookie;
+ 	event->name_len = len;
+ 	if (len)
+diff --git a/fs/ntfs3/bitmap.c b/fs/ntfs3/bitmap.c
+index 723fb64e65316..393c726ef17a9 100644
+--- a/fs/ntfs3/bitmap.c
++++ b/fs/ntfs3/bitmap.c
+@@ -658,7 +658,8 @@ int wnd_init(struct wnd_bitmap *wnd, struct super_block *sb, size_t nbits)
+ 	if (!wnd->bits_last)
+ 		wnd->bits_last = wbits;
+ 
+-	wnd->free_bits = kcalloc(wnd->nwnd, sizeof(u16), GFP_NOFS | __GFP_NOWARN);
++	wnd->free_bits =
++		kcalloc(wnd->nwnd, sizeof(u16), GFP_NOFS | __GFP_NOWARN);
+ 	if (!wnd->free_bits)
+ 		return -ENOMEM;
+ 
+diff --git a/fs/ntfs3/frecord.c b/fs/ntfs3/frecord.c
+index f1df52dfab74b..7d0473da12c33 100644
+--- a/fs/ntfs3/frecord.c
++++ b/fs/ntfs3/frecord.c
+@@ -1645,7 +1645,7 @@ struct ATTR_FILE_NAME *ni_fname_name(struct ntfs_inode *ni,
+ {
+ 	struct ATTRIB *attr = NULL;
+ 	struct ATTR_FILE_NAME *fname;
+-       struct le_str *fns;
++	struct le_str *fns;
+ 
+ 	if (le)
+ 		*le = NULL;
+diff --git a/fs/ntfs3/fsntfs.c b/fs/ntfs3/fsntfs.c
+index 567563771bf89..24c9aeb5a49e0 100644
+--- a/fs/ntfs3/fsntfs.c
++++ b/fs/ntfs3/fsntfs.c
+@@ -2594,8 +2594,10 @@ static inline bool is_reserved_name(struct ntfs_sb_info *sbi,
+ 	if (len == 4 || (len > 4 && le16_to_cpu(name[4]) == '.')) {
+ 		port_digit = le16_to_cpu(name[3]);
+ 		if (port_digit >= '1' && port_digit <= '9')
+-			if (!ntfs_cmp_names(name, 3, COM_NAME, 3, upcase, false) ||
+-			    !ntfs_cmp_names(name, 3, LPT_NAME, 3, upcase, false))
++			if (!ntfs_cmp_names(name, 3, COM_NAME, 3, upcase,
++					    false) ||
++			    !ntfs_cmp_names(name, 3, LPT_NAME, 3, upcase,
++					    false))
+ 				return true;
+ 	}
+ 
+diff --git a/fs/ntfs3/namei.c b/fs/ntfs3/namei.c
+index 407fe92394e22..92bbc8ee83cac 100644
+--- a/fs/ntfs3/namei.c
++++ b/fs/ntfs3/namei.c
+@@ -88,6 +88,16 @@ static struct dentry *ntfs_lookup(struct inode *dir, struct dentry *dentry,
+ 		__putname(uni);
+ 	}
+ 
++	/*
++	 * Check for a null pointer
++	 * If the MFT record of ntfs inode is not a base record, inode->i_op can be NULL.
++	 * This causes null pointer dereference in d_splice_alias().
++	 */
++	if (!IS_ERR_OR_NULL(inode) && !inode->i_op) {
++		iput(inode);
++		inode = ERR_PTR(-EINVAL);
++	}
++
+ 	return d_splice_alias(inode, dentry);
+ }
+ 
+diff --git a/fs/ntfs3/ntfs.h b/fs/ntfs3/ntfs.h
+index 86ea1826d0998..90151e56c1222 100644
+--- a/fs/ntfs3/ntfs.h
++++ b/fs/ntfs3/ntfs.h
+@@ -435,9 +435,6 @@ static inline u64 attr_svcn(const struct ATTRIB *attr)
+ 	return attr->non_res ? le64_to_cpu(attr->nres.svcn) : 0;
+ }
+ 
+-/* The size of resident attribute by its resident size. */
+-#define BYTES_PER_RESIDENT(b) (0x18 + (b))
+-
+ static_assert(sizeof(struct ATTRIB) == 0x48);
+ static_assert(sizeof(((struct ATTRIB *)NULL)->res) == 0x08);
+ static_assert(sizeof(((struct ATTRIB *)NULL)->nres) == 0x38);
+diff --git a/fs/proc/proc_sysctl.c b/fs/proc/proc_sysctl.c
+index 5851eb5bc7267..faf32caef89b4 100644
+--- a/fs/proc/proc_sysctl.c
++++ b/fs/proc/proc_sysctl.c
+@@ -1287,7 +1287,10 @@ out:
+  * __register_sysctl_table - register a leaf sysctl table
+  * @set: Sysctl tree to register on
+  * @path: The path to the directory the sysctl table is in.
+- * @table: the top-level table structure
++ * @table: the top-level table structure without any child. This table
++ * 	 should not be free'd after registration. So it should not be
++ * 	 used on stack. It can either be a global or dynamically allocated
++ * 	 by the caller and free'd later after sysctl unregistration.
+  *
+  * Register a sysctl table hierarchy. @table should be a filled in ctl_table
+  * array. A completely 0 filled entry terminates the table.
+@@ -1308,9 +1311,12 @@ out:
+  * proc_handler - the text handler routine (described below)
+  *
+  * extra1, extra2 - extra pointers usable by the proc handler routines
++ * XXX: we should eventually modify these to use long min / max [0]
++ * [0] https://lkml.kernel.org/87zgpte9o4.fsf@email.froward.int.ebiederm.org
+  *
+  * Leaf nodes in the sysctl tree will be represented by a single file
+- * under /proc; non-leaf nodes will be represented by directories.
++ * under /proc; non-leaf nodes (where child is not NULL) are not allowed,
++ * sysctl_check_table() verifies this.
+  *
+  * There must be a proc_handler routine for any terminal nodes.
+  * Several default handlers are available to cover common cases -
+@@ -1352,7 +1358,7 @@ struct ctl_table_header *__register_sysctl_table(
+ 
+ 	spin_lock(&sysctl_lock);
+ 	dir = &set->dir;
+-	/* Reference moved down the diretory tree get_subdir */
++	/* Reference moved down the directory tree get_subdir */
+ 	dir->header.nreg++;
+ 	spin_unlock(&sysctl_lock);
+ 
+@@ -1369,6 +1375,11 @@ struct ctl_table_header *__register_sysctl_table(
+ 		if (namelen == 0)
+ 			continue;
+ 
++		/*
++		 * namelen ensures if name is "foo/bar/yay" only foo is
++		 * registered first. We traverse as if using mkdir -p and
++		 * return a ctl_dir for the last directory entry.
++		 */
+ 		dir = get_subdir(dir, name, namelen);
+ 		if (IS_ERR(dir))
+ 			goto fail;
+@@ -1394,8 +1405,15 @@ fail:
+ 
+ /**
+  * register_sysctl - register a sysctl table
+- * @path: The path to the directory the sysctl table is in.
+- * @table: the table structure
++ * @path: The path to the directory the sysctl table is in. If the path
++ * 	doesn't exist we will create it for you.
++ * @table: the table structure. The calller must ensure the life of the @table
++ * 	will be kept during the lifetime use of the syctl. It must not be freed
++ * 	until unregister_sysctl_table() is called with the given returned table
++ * 	with this registration. If your code is non modular then you don't need
++ * 	to call unregister_sysctl_table() and can instead use something like
++ * 	register_sysctl_init() which does not care for the result of the syctl
++ * 	registration.
+  *
+  * Register a sysctl table. @table should be a filled in ctl_table
+  * array. A completely 0 filled entry terminates the table.
+@@ -1411,8 +1429,11 @@ EXPORT_SYMBOL(register_sysctl);
+ 
+ /**
+  * __register_sysctl_init() - register sysctl table to path
+- * @path: path name for sysctl base
+- * @table: This is the sysctl table that needs to be registered to the path
++ * @path: path name for sysctl base. If that path doesn't exist we will create
++ * 	it for you.
++ * @table: This is the sysctl table that needs to be registered to the path.
++ * 	The caller must ensure the life of the @table will be kept during the
++ * 	lifetime use of the sysctl.
+  * @table_name: The name of sysctl table, only used for log printing when
+  *              registration fails
+  *
+@@ -1424,10 +1445,7 @@ EXPORT_SYMBOL(register_sysctl);
+  * register_sysctl() failing on init are extremely low, and so for both reasons
+  * this function does not return any error as it is used by initialization code.
+  *
+- * Context: Can only be called after your respective sysctl base path has been
+- * registered. So for instance, most base directories are registered early on
+- * init before init levels are processed through proc_sys_init() and
+- * sysctl_init_bases().
++ * Context: if your base directory does not exist it will be created for you.
+  */
+ void __init __register_sysctl_init(const char *path, struct ctl_table *table,
+ 				 const char *table_name)
+@@ -1557,6 +1575,7 @@ out:
+  *
+  * Register a sysctl table hierarchy. @table should be a filled in ctl_table
+  * array. A completely 0 filled entry terminates the table.
++ * We are slowly deprecating this call so avoid its use.
+  *
+  * See __register_sysctl_table for more details.
+  */
+@@ -1628,6 +1647,7 @@ err_register_leaves:
+  *
+  * Register a sysctl table hierarchy. @table should be a filled in ctl_table
+  * array. A completely 0 filled entry terminates the table.
++ * We are slowly deprecating this caller so avoid future uses of it.
+  *
+  * See __register_sysctl_paths for more details.
+  */
+diff --git a/include/drm/display/drm_dp.h b/include/drm/display/drm_dp.h
+index 632376c291db6..4545ed6109584 100644
+--- a/include/drm/display/drm_dp.h
++++ b/include/drm/display/drm_dp.h
+@@ -286,7 +286,6 @@
+ 
+ #define DP_DSC_MAX_BITS_PER_PIXEL_HI        0x068   /* eDP 1.4 */
+ # define DP_DSC_MAX_BITS_PER_PIXEL_HI_MASK  (0x3 << 0)
+-# define DP_DSC_MAX_BITS_PER_PIXEL_HI_SHIFT 8
+ # define DP_DSC_MAX_BPP_DELTA_VERSION_MASK  0x06
+ # define DP_DSC_MAX_BPP_DELTA_AVAILABILITY  0x08
+ 
+diff --git a/include/drm/display/drm_dp_helper.h b/include/drm/display/drm_dp_helper.h
+index ab55453f2d2cd..ade9df59e156a 100644
+--- a/include/drm/display/drm_dp_helper.h
++++ b/include/drm/display/drm_dp_helper.h
+@@ -181,9 +181,8 @@ static inline u16
+ drm_edp_dsc_sink_output_bpp(const u8 dsc_dpcd[DP_DSC_RECEIVER_CAP_SIZE])
+ {
+ 	return dsc_dpcd[DP_DSC_MAX_BITS_PER_PIXEL_LOW - DP_DSC_SUPPORT] |
+-		(dsc_dpcd[DP_DSC_MAX_BITS_PER_PIXEL_HI - DP_DSC_SUPPORT] &
+-		 DP_DSC_MAX_BITS_PER_PIXEL_HI_MASK <<
+-		 DP_DSC_MAX_BITS_PER_PIXEL_HI_SHIFT);
++		((dsc_dpcd[DP_DSC_MAX_BITS_PER_PIXEL_HI - DP_DSC_SUPPORT] &
++		  DP_DSC_MAX_BITS_PER_PIXEL_HI_MASK) << 8);
+ }
+ 
+ static inline u32
+diff --git a/include/linux/i2c.h b/include/linux/i2c.h
+index 5ba89663ea865..13a1ce38cb0c5 100644
+--- a/include/linux/i2c.h
++++ b/include/linux/i2c.h
+@@ -385,7 +385,6 @@ static inline void i2c_set_clientdata(struct i2c_client *client, void *data)
+ 
+ /* I2C slave support */
+ 
+-#if IS_ENABLED(CONFIG_I2C_SLAVE)
+ enum i2c_slave_event {
+ 	I2C_SLAVE_READ_REQUESTED,
+ 	I2C_SLAVE_WRITE_REQUESTED,
+@@ -396,9 +395,10 @@ enum i2c_slave_event {
+ 
+ int i2c_slave_register(struct i2c_client *client, i2c_slave_cb_t slave_cb);
+ int i2c_slave_unregister(struct i2c_client *client);
+-bool i2c_detect_slave_mode(struct device *dev);
+ int i2c_slave_event(struct i2c_client *client,
+ 		    enum i2c_slave_event event, u8 *val);
++#if IS_ENABLED(CONFIG_I2C_SLAVE)
++bool i2c_detect_slave_mode(struct device *dev);
+ #else
+ static inline bool i2c_detect_slave_mode(struct device *dev) { return false; }
+ #endif
+diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h
+index 45c3d62e616d8..95f33dadb2be2 100644
+--- a/include/linux/pci_ids.h
++++ b/include/linux/pci_ids.h
+@@ -567,6 +567,7 @@
+ #define PCI_DEVICE_ID_AMD_19H_M50H_DF_F3 0x166d
+ #define PCI_DEVICE_ID_AMD_19H_M60H_DF_F3 0x14e3
+ #define PCI_DEVICE_ID_AMD_19H_M70H_DF_F3 0x14f3
++#define PCI_DEVICE_ID_AMD_19H_M78H_DF_F3 0x12fb
+ #define PCI_DEVICE_ID_AMD_CNB17H_F3	0x1703
+ #define PCI_DEVICE_ID_AMD_LANCE		0x2000
+ #define PCI_DEVICE_ID_AMD_LANCE_HOME	0x2001
+diff --git a/include/net/af_rxrpc.h b/include/net/af_rxrpc.h
+index ba717eac0229a..73644bd42a3f9 100644
+--- a/include/net/af_rxrpc.h
++++ b/include/net/af_rxrpc.h
+@@ -40,16 +40,17 @@ typedef void (*rxrpc_user_attach_call_t)(struct rxrpc_call *, unsigned long);
+ void rxrpc_kernel_new_call_notification(struct socket *,
+ 					rxrpc_notify_new_call_t,
+ 					rxrpc_discard_new_call_t);
+-struct rxrpc_call *rxrpc_kernel_begin_call(struct socket *,
+-					   struct sockaddr_rxrpc *,
+-					   struct key *,
+-					   unsigned long,
+-					   s64,
+-					   gfp_t,
+-					   rxrpc_notify_rx_t,
+-					   bool,
+-					   enum rxrpc_interruptibility,
+-					   unsigned int);
++struct rxrpc_call *rxrpc_kernel_begin_call(struct socket *sock,
++					   struct sockaddr_rxrpc *srx,
++					   struct key *key,
++					   unsigned long user_call_ID,
++					   s64 tx_total_len,
++					   u32 hard_timeout,
++					   gfp_t gfp,
++					   rxrpc_notify_rx_t notify_rx,
++					   bool upgrade,
++					   enum rxrpc_interruptibility interruptibility,
++					   unsigned int debug_id);
+ int rxrpc_kernel_send_data(struct socket *, struct rxrpc_call *,
+ 			   struct msghdr *, size_t,
+ 			   rxrpc_notify_end_tx_t);
+diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
+index 9dace9bcba8e5..3eb7d20ddfc97 100644
+--- a/include/net/netfilter/nf_tables.h
++++ b/include/net/netfilter/nf_tables.h
+@@ -1602,6 +1602,8 @@ struct nft_trans_chain {
+ 	struct nft_stats __percpu	*stats;
+ 	u8				policy;
+ 	u32				chain_id;
++	struct nft_base_chain		*basechain;
++	struct list_head		hook_list;
+ };
+ 
+ #define nft_trans_chain_update(trans)	\
+@@ -1614,6 +1616,10 @@ struct nft_trans_chain {
+ 	(((struct nft_trans_chain *)trans->data)->policy)
+ #define nft_trans_chain_id(trans)	\
+ 	(((struct nft_trans_chain *)trans->data)->chain_id)
++#define nft_trans_basechain(trans)	\
++	(((struct nft_trans_chain *)trans->data)->basechain)
++#define nft_trans_chain_hooks(trans)	\
++	(((struct nft_trans_chain *)trans->data)->hook_list)
+ 
+ struct nft_trans_table {
+ 	bool				update;
+diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
+index acb5a50309a18..9eabd585ce7af 100644
+--- a/kernel/locking/rwsem.c
++++ b/kernel/locking/rwsem.c
+@@ -1240,7 +1240,7 @@ static struct rw_semaphore *rwsem_downgrade_wake(struct rw_semaphore *sem)
+ /*
+  * lock for reading
+  */
+-static inline int __down_read_common(struct rw_semaphore *sem, int state)
++static __always_inline int __down_read_common(struct rw_semaphore *sem, int state)
+ {
+ 	int ret = 0;
+ 	long count;
+@@ -1258,17 +1258,17 @@ out:
+ 	return ret;
+ }
+ 
+-static inline void __down_read(struct rw_semaphore *sem)
++static __always_inline void __down_read(struct rw_semaphore *sem)
+ {
+ 	__down_read_common(sem, TASK_UNINTERRUPTIBLE);
+ }
+ 
+-static inline int __down_read_interruptible(struct rw_semaphore *sem)
++static __always_inline int __down_read_interruptible(struct rw_semaphore *sem)
+ {
+ 	return __down_read_common(sem, TASK_INTERRUPTIBLE);
+ }
+ 
+-static inline int __down_read_killable(struct rw_semaphore *sem)
++static __always_inline int __down_read_killable(struct rw_semaphore *sem)
+ {
+ 	return __down_read_common(sem, TASK_KILLABLE);
+ }
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index 2f9bb98170ab0..14bb41aafee30 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -1705,7 +1705,7 @@ int skb_copy_ubufs(struct sk_buff *skb, gfp_t gfp_mask)
+ {
+ 	int num_frags = skb_shinfo(skb)->nr_frags;
+ 	struct page *page, *head = NULL;
+-	int i, new_frags;
++	int i, order, psize, new_frags;
+ 	u32 d_off;
+ 
+ 	if (skb_shared(skb) || skb_unclone(skb, gfp_mask))
+@@ -1714,9 +1714,17 @@ int skb_copy_ubufs(struct sk_buff *skb, gfp_t gfp_mask)
+ 	if (!num_frags)
+ 		goto release;
+ 
+-	new_frags = (__skb_pagelen(skb) + PAGE_SIZE - 1) >> PAGE_SHIFT;
++	/* We might have to allocate high order pages, so compute what minimum
++	 * page order is needed.
++	 */
++	order = 0;
++	while ((PAGE_SIZE << order) * MAX_SKB_FRAGS < __skb_pagelen(skb))
++		order++;
++	psize = (PAGE_SIZE << order);
++
++	new_frags = (__skb_pagelen(skb) + psize - 1) >> (PAGE_SHIFT + order);
+ 	for (i = 0; i < new_frags; i++) {
+-		page = alloc_page(gfp_mask);
++		page = alloc_pages(gfp_mask | __GFP_COMP, order);
+ 		if (!page) {
+ 			while (head) {
+ 				struct page *next = (struct page *)page_private(head);
+@@ -1743,11 +1751,11 @@ int skb_copy_ubufs(struct sk_buff *skb, gfp_t gfp_mask)
+ 			vaddr = kmap_atomic(p);
+ 
+ 			while (done < p_len) {
+-				if (d_off == PAGE_SIZE) {
++				if (d_off == psize) {
+ 					d_off = 0;
+ 					page = (struct page *)page_private(page);
+ 				}
+-				copy = min_t(u32, PAGE_SIZE - d_off, p_len - done);
++				copy = min_t(u32, psize - d_off, p_len - done);
+ 				memcpy(page_address(page) + d_off,
+ 				       vaddr + p_off + done, copy);
+ 				done += copy;
+@@ -1763,7 +1771,7 @@ int skb_copy_ubufs(struct sk_buff *skb, gfp_t gfp_mask)
+ 
+ 	/* skb frags point to kernel buffers */
+ 	for (i = 0; i < new_frags - 1; i++) {
+-		__skb_fill_page_desc(skb, i, head, 0, PAGE_SIZE);
++		__skb_fill_page_desc(skb, i, head, 0, psize);
+ 		head = (struct page *)page_private(head);
+ 	}
+ 	__skb_fill_page_desc(skb, new_frags - 1, head, 0, d_off);
+diff --git a/net/ethtool/ioctl.c b/net/ethtool/ioctl.c
+index 646b3e490c71a..f0c646a17700f 100644
+--- a/net/ethtool/ioctl.c
++++ b/net/ethtool/ioctl.c
+@@ -573,8 +573,8 @@ static int ethtool_get_link_ksettings(struct net_device *dev,
+ static int ethtool_set_link_ksettings(struct net_device *dev,
+ 				      void __user *useraddr)
+ {
++	struct ethtool_link_ksettings link_ksettings = {};
+ 	int err;
+-	struct ethtool_link_ksettings link_ksettings;
+ 
+ 	ASSERT_RTNL();
+ 
+diff --git a/net/ipv6/sit.c b/net/ipv6/sit.c
+index 70d81bba50939..3ffb6a5b1f82a 100644
+--- a/net/ipv6/sit.c
++++ b/net/ipv6/sit.c
+@@ -1095,12 +1095,13 @@ tx_err:
+ 
+ static void ipip6_tunnel_bind_dev(struct net_device *dev)
+ {
++	struct ip_tunnel *tunnel = netdev_priv(dev);
++	int t_hlen = tunnel->hlen + sizeof(struct iphdr);
+ 	struct net_device *tdev = NULL;
+-	struct ip_tunnel *tunnel;
++	int hlen = LL_MAX_HEADER;
+ 	const struct iphdr *iph;
+ 	struct flowi4 fl4;
+ 
+-	tunnel = netdev_priv(dev);
+ 	iph = &tunnel->parms.iph;
+ 
+ 	if (iph->daddr) {
+@@ -1123,14 +1124,15 @@ static void ipip6_tunnel_bind_dev(struct net_device *dev)
+ 		tdev = __dev_get_by_index(tunnel->net, tunnel->parms.link);
+ 
+ 	if (tdev && !netif_is_l3_master(tdev)) {
+-		int t_hlen = tunnel->hlen + sizeof(struct iphdr);
+ 		int mtu;
+ 
+ 		mtu = tdev->mtu - t_hlen;
+ 		if (mtu < IPV6_MIN_MTU)
+ 			mtu = IPV6_MIN_MTU;
+ 		WRITE_ONCE(dev->mtu, mtu);
++		hlen = tdev->hard_header_len + tdev->needed_headroom;
+ 	}
++	dev->needed_headroom = t_hlen + hlen;
+ }
+ 
+ static void ipip6_tunnel_update(struct ip_tunnel *t, struct ip_tunnel_parm *p,
+diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
+index 1e747241c7710..4d52e25deb9e3 100644
+--- a/net/ipv6/tcp_ipv6.c
++++ b/net/ipv6/tcp_ipv6.c
+@@ -1064,7 +1064,7 @@ static void tcp_v6_send_reset(const struct sock *sk, struct sk_buff *skb)
+ 			if (np->repflow)
+ 				label = ip6_flowlabel(ipv6h);
+ 			priority = sk->sk_priority;
+-			txhash = sk->sk_hash;
++			txhash = sk->sk_txhash;
+ 		}
+ 		if (sk->sk_state == TCP_TIME_WAIT) {
+ 			label = cpu_to_be32(inet_twsk(sk)->tw_flowlabel);
+diff --git a/net/ncsi/ncsi-aen.c b/net/ncsi/ncsi-aen.c
+index b635c194f0a85..62fb1031763d1 100644
+--- a/net/ncsi/ncsi-aen.c
++++ b/net/ncsi/ncsi-aen.c
+@@ -165,6 +165,7 @@ static int ncsi_aen_handler_cr(struct ncsi_dev_priv *ndp,
+ 	nc->state = NCSI_CHANNEL_INACTIVE;
+ 	list_add_tail_rcu(&nc->link, &ndp->channel_queue);
+ 	spin_unlock_irqrestore(&ndp->lock, flags);
++	nc->modes[NCSI_MODE_TX_ENABLE].enable = 0;
+ 
+ 	return ncsi_process_next_channel(ndp);
+ }
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 46f60648a57d1..45f701fd86f06 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -1575,7 +1575,8 @@ nla_put_failure:
+ }
+ 
+ static int nft_dump_basechain_hook(struct sk_buff *skb, int family,
+-				   const struct nft_base_chain *basechain)
++				   const struct nft_base_chain *basechain,
++				   const struct list_head *hook_list)
+ {
+ 	const struct nf_hook_ops *ops = &basechain->ops;
+ 	struct nft_hook *hook, *first = NULL;
+@@ -1592,7 +1593,11 @@ static int nft_dump_basechain_hook(struct sk_buff *skb, int family,
+ 
+ 	if (nft_base_chain_netdev(family, ops->hooknum)) {
+ 		nest_devs = nla_nest_start_noflag(skb, NFTA_HOOK_DEVS);
+-		list_for_each_entry(hook, &basechain->hook_list, list) {
++
++		if (!hook_list)
++			hook_list = &basechain->hook_list;
++
++		list_for_each_entry(hook, hook_list, list) {
+ 			if (!first)
+ 				first = hook;
+ 
+@@ -1617,7 +1622,8 @@ nla_put_failure:
+ static int nf_tables_fill_chain_info(struct sk_buff *skb, struct net *net,
+ 				     u32 portid, u32 seq, int event, u32 flags,
+ 				     int family, const struct nft_table *table,
+-				     const struct nft_chain *chain)
++				     const struct nft_chain *chain,
++				     const struct list_head *hook_list)
+ {
+ 	struct nlmsghdr *nlh;
+ 
+@@ -1639,7 +1645,7 @@ static int nf_tables_fill_chain_info(struct sk_buff *skb, struct net *net,
+ 		const struct nft_base_chain *basechain = nft_base_chain(chain);
+ 		struct nft_stats __percpu *stats;
+ 
+-		if (nft_dump_basechain_hook(skb, family, basechain))
++		if (nft_dump_basechain_hook(skb, family, basechain, hook_list))
+ 			goto nla_put_failure;
+ 
+ 		if (nla_put_be32(skb, NFTA_CHAIN_POLICY,
+@@ -1674,7 +1680,8 @@ nla_put_failure:
+ 	return -1;
+ }
+ 
+-static void nf_tables_chain_notify(const struct nft_ctx *ctx, int event)
++static void nf_tables_chain_notify(const struct nft_ctx *ctx, int event,
++				   const struct list_head *hook_list)
+ {
+ 	struct nftables_pernet *nft_net;
+ 	struct sk_buff *skb;
+@@ -1694,7 +1701,7 @@ static void nf_tables_chain_notify(const struct nft_ctx *ctx, int event)
+ 
+ 	err = nf_tables_fill_chain_info(skb, ctx->net, ctx->portid, ctx->seq,
+ 					event, flags, ctx->family, ctx->table,
+-					ctx->chain);
++					ctx->chain, hook_list);
+ 	if (err < 0) {
+ 		kfree_skb(skb);
+ 		goto err;
+@@ -1740,7 +1747,7 @@ static int nf_tables_dump_chains(struct sk_buff *skb,
+ 						      NFT_MSG_NEWCHAIN,
+ 						      NLM_F_MULTI,
+ 						      table->family, table,
+-						      chain) < 0)
++						      chain, NULL) < 0)
+ 				goto done;
+ 
+ 			nl_dump_check_consistent(cb, nlmsg_hdr(skb));
+@@ -1794,7 +1801,7 @@ static int nf_tables_getchain(struct sk_buff *skb, const struct nfnl_info *info,
+ 
+ 	err = nf_tables_fill_chain_info(skb2, net, NETLINK_CB(skb).portid,
+ 					info->nlh->nlmsg_seq, NFT_MSG_NEWCHAIN,
+-					0, family, table, chain);
++					0, family, table, chain, NULL);
+ 	if (err < 0)
+ 		goto err_fill_chain_info;
+ 
+@@ -1955,7 +1962,8 @@ static struct nft_hook *nft_hook_list_find(struct list_head *hook_list,
+ 
+ static int nf_tables_parse_netdev_hooks(struct net *net,
+ 					const struct nlattr *attr,
+-					struct list_head *hook_list)
++					struct list_head *hook_list,
++					struct netlink_ext_ack *extack)
+ {
+ 	struct nft_hook *hook, *next;
+ 	const struct nlattr *tmp;
+@@ -1969,10 +1977,12 @@ static int nf_tables_parse_netdev_hooks(struct net *net,
+ 
+ 		hook = nft_netdev_hook_alloc(net, tmp);
+ 		if (IS_ERR(hook)) {
++			NL_SET_BAD_ATTR(extack, tmp);
+ 			err = PTR_ERR(hook);
+ 			goto err_hook;
+ 		}
+ 		if (nft_hook_list_find(hook_list, hook)) {
++			NL_SET_BAD_ATTR(extack, tmp);
+ 			kfree(hook);
+ 			err = -EEXIST;
+ 			goto err_hook;
+@@ -2005,20 +2015,23 @@ struct nft_chain_hook {
+ 
+ static int nft_chain_parse_netdev(struct net *net,
+ 				  struct nlattr *tb[],
+-				  struct list_head *hook_list)
++				  struct list_head *hook_list,
++				  struct netlink_ext_ack *extack)
+ {
+ 	struct nft_hook *hook;
+ 	int err;
+ 
+ 	if (tb[NFTA_HOOK_DEV]) {
+ 		hook = nft_netdev_hook_alloc(net, tb[NFTA_HOOK_DEV]);
+-		if (IS_ERR(hook))
++		if (IS_ERR(hook)) {
++			NL_SET_BAD_ATTR(extack, tb[NFTA_HOOK_DEV]);
+ 			return PTR_ERR(hook);
++		}
+ 
+ 		list_add_tail(&hook->list, hook_list);
+ 	} else if (tb[NFTA_HOOK_DEVS]) {
+ 		err = nf_tables_parse_netdev_hooks(net, tb[NFTA_HOOK_DEVS],
+-						   hook_list);
++						   hook_list, extack);
+ 		if (err < 0)
+ 			return err;
+ 
+@@ -2032,9 +2045,10 @@ static int nft_chain_parse_netdev(struct net *net,
+ }
+ 
+ static int nft_chain_parse_hook(struct net *net,
++				struct nft_base_chain *basechain,
+ 				const struct nlattr * const nla[],
+ 				struct nft_chain_hook *hook, u8 family,
+-				struct netlink_ext_ack *extack, bool autoload)
++				struct netlink_ext_ack *extack)
+ {
+ 	struct nftables_pernet *nft_net = nft_pernet(net);
+ 	struct nlattr *ha[NFTA_HOOK_MAX + 1];
+@@ -2050,31 +2064,48 @@ static int nft_chain_parse_hook(struct net *net,
+ 	if (err < 0)
+ 		return err;
+ 
+-	if (ha[NFTA_HOOK_HOOKNUM] == NULL ||
+-	    ha[NFTA_HOOK_PRIORITY] == NULL)
+-		return -EINVAL;
++	if (!basechain) {
++		if (!ha[NFTA_HOOK_HOOKNUM] ||
++		    !ha[NFTA_HOOK_PRIORITY]) {
++			NL_SET_BAD_ATTR(extack, nla[NFTA_CHAIN_NAME]);
++			return -ENOENT;
++		}
+ 
+-	hook->num = ntohl(nla_get_be32(ha[NFTA_HOOK_HOOKNUM]));
+-	hook->priority = ntohl(nla_get_be32(ha[NFTA_HOOK_PRIORITY]));
++		hook->num = ntohl(nla_get_be32(ha[NFTA_HOOK_HOOKNUM]));
++		hook->priority = ntohl(nla_get_be32(ha[NFTA_HOOK_PRIORITY]));
+ 
+-	type = __nft_chain_type_get(family, NFT_CHAIN_T_DEFAULT);
+-	if (!type)
+-		return -EOPNOTSUPP;
++		type = __nft_chain_type_get(family, NFT_CHAIN_T_DEFAULT);
++		if (!type)
++			return -EOPNOTSUPP;
+ 
+-	if (nla[NFTA_CHAIN_TYPE]) {
+-		type = nf_tables_chain_type_lookup(net, nla[NFTA_CHAIN_TYPE],
+-						   family, autoload);
+-		if (IS_ERR(type)) {
+-			NL_SET_BAD_ATTR(extack, nla[NFTA_CHAIN_TYPE]);
+-			return PTR_ERR(type);
++		if (nla[NFTA_CHAIN_TYPE]) {
++			type = nf_tables_chain_type_lookup(net, nla[NFTA_CHAIN_TYPE],
++							   family, true);
++			if (IS_ERR(type)) {
++				NL_SET_BAD_ATTR(extack, nla[NFTA_CHAIN_TYPE]);
++				return PTR_ERR(type);
++			}
+ 		}
+-	}
+-	if (hook->num >= NFT_MAX_HOOKS || !(type->hook_mask & (1 << hook->num)))
+-		return -EOPNOTSUPP;
++		if (hook->num >= NFT_MAX_HOOKS || !(type->hook_mask & (1 << hook->num)))
++			return -EOPNOTSUPP;
+ 
+-	if (type->type == NFT_CHAIN_T_NAT &&
+-	    hook->priority <= NF_IP_PRI_CONNTRACK)
+-		return -EOPNOTSUPP;
++		if (type->type == NFT_CHAIN_T_NAT &&
++		    hook->priority <= NF_IP_PRI_CONNTRACK)
++			return -EOPNOTSUPP;
++	} else {
++		if (ha[NFTA_HOOK_HOOKNUM]) {
++			hook->num = ntohl(nla_get_be32(ha[NFTA_HOOK_HOOKNUM]));
++			if (hook->num != basechain->ops.hooknum)
++				return -EOPNOTSUPP;
++		}
++		if (ha[NFTA_HOOK_PRIORITY]) {
++			hook->priority = ntohl(nla_get_be32(ha[NFTA_HOOK_PRIORITY]));
++			if (hook->priority != basechain->ops.priority)
++				return -EOPNOTSUPP;
++		}
++
++		type = basechain->type;
++	}
+ 
+ 	if (!try_module_get(type->owner)) {
+ 		if (nla[NFTA_CHAIN_TYPE])
+@@ -2086,7 +2117,7 @@ static int nft_chain_parse_hook(struct net *net,
+ 
+ 	INIT_LIST_HEAD(&hook->list);
+ 	if (nft_base_chain_netdev(family, hook->num)) {
+-		err = nft_chain_parse_netdev(net, ha, &hook->list);
++		err = nft_chain_parse_netdev(net, ha, &hook->list, extack);
+ 		if (err < 0) {
+ 			module_put(type->owner);
+ 			return err;
+@@ -2172,12 +2203,8 @@ static int nft_basechain_init(struct nft_base_chain *basechain, u8 family,
+ 		list_splice_init(&hook->list, &basechain->hook_list);
+ 		list_for_each_entry(h, &basechain->hook_list, list)
+ 			nft_basechain_hook_init(&h->ops, family, hook, chain);
+-
+-		basechain->ops.hooknum	= hook->num;
+-		basechain->ops.priority	= hook->priority;
+-	} else {
+-		nft_basechain_hook_init(&basechain->ops, family, hook, chain);
+ 	}
++	nft_basechain_hook_init(&basechain->ops, family, hook, chain);
+ 
+ 	chain->flags |= NFT_CHAIN_BASE | flags;
+ 	basechain->policy = NF_ACCEPT;
+@@ -2228,13 +2255,13 @@ static int nf_tables_addchain(struct nft_ctx *ctx, u8 family, u8 genmask,
+ 
+ 	if (nla[NFTA_CHAIN_HOOK]) {
+ 		struct nft_stats __percpu *stats = NULL;
+-		struct nft_chain_hook hook;
++		struct nft_chain_hook hook = {};
+ 
+ 		if (flags & NFT_CHAIN_BINDING)
+ 			return -EOPNOTSUPP;
+ 
+-		err = nft_chain_parse_hook(net, nla, &hook, family, extack,
+-					   true);
++		err = nft_chain_parse_hook(net, NULL, nla, &hook, family,
++					   extack);
+ 		if (err < 0)
+ 			return err;
+ 
+@@ -2349,65 +2376,57 @@ err_destroy_chain:
+ 	return err;
+ }
+ 
+-static bool nft_hook_list_equal(struct list_head *hook_list1,
+-				struct list_head *hook_list2)
+-{
+-	struct nft_hook *hook;
+-	int n = 0, m = 0;
+-
+-	n = 0;
+-	list_for_each_entry(hook, hook_list2, list) {
+-		if (!nft_hook_list_find(hook_list1, hook))
+-			return false;
+-
+-		n++;
+-	}
+-	list_for_each_entry(hook, hook_list1, list)
+-		m++;
+-
+-	return n == m;
+-}
+-
+ static int nf_tables_updchain(struct nft_ctx *ctx, u8 genmask, u8 policy,
+ 			      u32 flags, const struct nlattr *attr,
+ 			      struct netlink_ext_ack *extack)
+ {
+ 	const struct nlattr * const *nla = ctx->nla;
++	struct nft_base_chain *basechain = NULL;
+ 	struct nft_table *table = ctx->table;
+ 	struct nft_chain *chain = ctx->chain;
+-	struct nft_base_chain *basechain;
++	struct nft_chain_hook hook = {};
+ 	struct nft_stats *stats = NULL;
+-	struct nft_chain_hook hook;
++	struct nft_hook *h, *next;
+ 	struct nf_hook_ops *ops;
+ 	struct nft_trans *trans;
++	bool unregister = false;
+ 	int err;
+ 
+ 	if (chain->flags ^ flags)
+ 		return -EOPNOTSUPP;
+ 
++	INIT_LIST_HEAD(&hook.list);
++
+ 	if (nla[NFTA_CHAIN_HOOK]) {
+ 		if (!nft_is_base_chain(chain)) {
+ 			NL_SET_BAD_ATTR(extack, attr);
+ 			return -EEXIST;
+ 		}
+-		err = nft_chain_parse_hook(ctx->net, nla, &hook, ctx->family,
+-					   extack, false);
++
++		basechain = nft_base_chain(chain);
++		err = nft_chain_parse_hook(ctx->net, basechain, nla, &hook,
++					   ctx->family, extack);
+ 		if (err < 0)
+ 			return err;
+ 
+-		basechain = nft_base_chain(chain);
+ 		if (basechain->type != hook.type) {
+ 			nft_chain_release_hook(&hook);
+ 			NL_SET_BAD_ATTR(extack, attr);
+ 			return -EEXIST;
+ 		}
+ 
+-		if (nft_base_chain_netdev(ctx->family, hook.num)) {
+-			if (!nft_hook_list_equal(&basechain->hook_list,
+-						 &hook.list)) {
+-				nft_chain_release_hook(&hook);
+-				NL_SET_BAD_ATTR(extack, attr);
+-				return -EEXIST;
++		if (nft_base_chain_netdev(ctx->family, basechain->ops.hooknum)) {
++			list_for_each_entry_safe(h, next, &hook.list, list) {
++				h->ops.pf	= basechain->ops.pf;
++				h->ops.hooknum	= basechain->ops.hooknum;
++				h->ops.priority	= basechain->ops.priority;
++				h->ops.priv	= basechain->ops.priv;
++				h->ops.hook	= basechain->ops.hook;
++
++				if (nft_hook_list_find(&basechain->hook_list, h)) {
++					list_del(&h->list);
++					kfree(h);
++				}
+ 			}
+ 		} else {
+ 			ops = &basechain->ops;
+@@ -2418,7 +2437,6 @@ static int nf_tables_updchain(struct nft_ctx *ctx, u8 genmask, u8 policy,
+ 				return -EEXIST;
+ 			}
+ 		}
+-		nft_chain_release_hook(&hook);
+ 	}
+ 
+ 	if (nla[NFTA_CHAIN_HANDLE] &&
+@@ -2429,24 +2447,43 @@ static int nf_tables_updchain(struct nft_ctx *ctx, u8 genmask, u8 policy,
+ 					  nla[NFTA_CHAIN_NAME], genmask);
+ 		if (!IS_ERR(chain2)) {
+ 			NL_SET_BAD_ATTR(extack, nla[NFTA_CHAIN_NAME]);
+-			return -EEXIST;
++			err = -EEXIST;
++			goto err_hooks;
+ 		}
+ 	}
+ 
+ 	if (nla[NFTA_CHAIN_COUNTERS]) {
+-		if (!nft_is_base_chain(chain))
+-			return -EOPNOTSUPP;
++		if (!nft_is_base_chain(chain)) {
++			err = -EOPNOTSUPP;
++			goto err_hooks;
++		}
+ 
+ 		stats = nft_stats_alloc(nla[NFTA_CHAIN_COUNTERS]);
+-		if (IS_ERR(stats))
+-			return PTR_ERR(stats);
++		if (IS_ERR(stats)) {
++			err = PTR_ERR(stats);
++			goto err_hooks;
++		}
++	}
++
++	if (!(table->flags & NFT_TABLE_F_DORMANT) &&
++	    nft_is_base_chain(chain) &&
++	    !list_empty(&hook.list)) {
++		basechain = nft_base_chain(chain);
++		ops = &basechain->ops;
++
++		if (nft_base_chain_netdev(table->family, basechain->ops.hooknum)) {
++			err = nft_netdev_register_hooks(ctx->net, &hook.list);
++			if (err < 0)
++				goto err_hooks;
++		}
+ 	}
+ 
++	unregister = true;
+ 	err = -ENOMEM;
+ 	trans = nft_trans_alloc(ctx, NFT_MSG_NEWCHAIN,
+ 				sizeof(struct nft_trans_chain));
+ 	if (trans == NULL)
+-		goto err;
++		goto err_trans;
+ 
+ 	nft_trans_chain_stats(trans) = stats;
+ 	nft_trans_chain_update(trans) = true;
+@@ -2465,7 +2502,7 @@ static int nf_tables_updchain(struct nft_ctx *ctx, u8 genmask, u8 policy,
+ 		err = -ENOMEM;
+ 		name = nla_strdup(nla[NFTA_CHAIN_NAME], GFP_KERNEL_ACCOUNT);
+ 		if (!name)
+-			goto err;
++			goto err_trans;
+ 
+ 		err = -EEXIST;
+ 		list_for_each_entry(tmp, &nft_net->commit_list, list) {
+@@ -2476,18 +2513,35 @@ static int nf_tables_updchain(struct nft_ctx *ctx, u8 genmask, u8 policy,
+ 			    strcmp(name, nft_trans_chain_name(tmp)) == 0) {
+ 				NL_SET_BAD_ATTR(extack, nla[NFTA_CHAIN_NAME]);
+ 				kfree(name);
+-				goto err;
++				goto err_trans;
+ 			}
+ 		}
+ 
+ 		nft_trans_chain_name(trans) = name;
+ 	}
++
++	nft_trans_basechain(trans) = basechain;
++	INIT_LIST_HEAD(&nft_trans_chain_hooks(trans));
++	list_splice(&hook.list, &nft_trans_chain_hooks(trans));
++
+ 	nft_trans_commit_list_add_tail(ctx->net, trans);
+ 
+ 	return 0;
+-err:
++
++err_trans:
+ 	free_percpu(stats);
+ 	kfree(trans);
++err_hooks:
++	if (nla[NFTA_CHAIN_HOOK]) {
++		list_for_each_entry_safe(h, next, &hook.list, list) {
++			if (unregister)
++				nf_unregister_net_hook(ctx->net, &h->ops);
++			list_del(&h->list);
++			kfree_rcu(h, rcu);
++		}
++		module_put(hook.type->owner);
++	}
++
+ 	return err;
+ }
+ 
+@@ -7578,9 +7632,10 @@ static const struct nla_policy nft_flowtable_hook_policy[NFTA_FLOWTABLE_HOOK_MAX
+ };
+ 
+ static int nft_flowtable_parse_hook(const struct nft_ctx *ctx,
+-				    const struct nlattr *attr,
++				    const struct nlattr * const nla[],
+ 				    struct nft_flowtable_hook *flowtable_hook,
+-				    struct nft_flowtable *flowtable, bool add)
++				    struct nft_flowtable *flowtable,
++				    struct netlink_ext_ack *extack, bool add)
+ {
+ 	struct nlattr *tb[NFTA_FLOWTABLE_HOOK_MAX + 1];
+ 	struct nft_hook *hook;
+@@ -7589,15 +7644,18 @@ static int nft_flowtable_parse_hook(const struct nft_ctx *ctx,
+ 
+ 	INIT_LIST_HEAD(&flowtable_hook->list);
+ 
+-	err = nla_parse_nested_deprecated(tb, NFTA_FLOWTABLE_HOOK_MAX, attr,
++	err = nla_parse_nested_deprecated(tb, NFTA_FLOWTABLE_HOOK_MAX,
++					  nla[NFTA_FLOWTABLE_HOOK],
+ 					  nft_flowtable_hook_policy, NULL);
+ 	if (err < 0)
+ 		return err;
+ 
+ 	if (add) {
+ 		if (!tb[NFTA_FLOWTABLE_HOOK_NUM] ||
+-		    !tb[NFTA_FLOWTABLE_HOOK_PRIORITY])
+-			return -EINVAL;
++		    !tb[NFTA_FLOWTABLE_HOOK_PRIORITY]) {
++			NL_SET_BAD_ATTR(extack, nla[NFTA_FLOWTABLE_NAME]);
++			return -ENOENT;
++		}
+ 
+ 		hooknum = ntohl(nla_get_be32(tb[NFTA_FLOWTABLE_HOOK_NUM]));
+ 		if (hooknum != NF_NETDEV_INGRESS)
+@@ -7627,7 +7685,8 @@ static int nft_flowtable_parse_hook(const struct nft_ctx *ctx,
+ 	if (tb[NFTA_FLOWTABLE_HOOK_DEVS]) {
+ 		err = nf_tables_parse_netdev_hooks(ctx->net,
+ 						   tb[NFTA_FLOWTABLE_HOOK_DEVS],
+-						   &flowtable_hook->list);
++						   &flowtable_hook->list,
++						   extack);
+ 		if (err < 0)
+ 			return err;
+ 	}
+@@ -7759,7 +7818,7 @@ err_unregister_net_hooks:
+ 	return err;
+ }
+ 
+-static void nft_flowtable_hooks_destroy(struct list_head *hook_list)
++static void nft_hooks_destroy(struct list_head *hook_list)
+ {
+ 	struct nft_hook *hook, *next;
+ 
+@@ -7770,7 +7829,8 @@ static void nft_flowtable_hooks_destroy(struct list_head *hook_list)
+ }
+ 
+ static int nft_flowtable_update(struct nft_ctx *ctx, const struct nlmsghdr *nlh,
+-				struct nft_flowtable *flowtable)
++				struct nft_flowtable *flowtable,
++				struct netlink_ext_ack *extack)
+ {
+ 	const struct nlattr * const *nla = ctx->nla;
+ 	struct nft_flowtable_hook flowtable_hook;
+@@ -7780,8 +7840,8 @@ static int nft_flowtable_update(struct nft_ctx *ctx, const struct nlmsghdr *nlh,
+ 	u32 flags;
+ 	int err;
+ 
+-	err = nft_flowtable_parse_hook(ctx, nla[NFTA_FLOWTABLE_HOOK],
+-				       &flowtable_hook, flowtable, false);
++	err = nft_flowtable_parse_hook(ctx, nla, &flowtable_hook, flowtable,
++				       extack, false);
+ 	if (err < 0)
+ 		return err;
+ 
+@@ -7886,7 +7946,7 @@ static int nf_tables_newflowtable(struct sk_buff *skb,
+ 
+ 		nft_ctx_init(&ctx, net, skb, info->nlh, family, table, NULL, nla);
+ 
+-		return nft_flowtable_update(&ctx, info->nlh, flowtable);
++		return nft_flowtable_update(&ctx, info->nlh, flowtable, extack);
+ 	}
+ 
+ 	nft_ctx_init(&ctx, net, skb, info->nlh, family, table, NULL, nla);
+@@ -7926,8 +7986,8 @@ static int nf_tables_newflowtable(struct sk_buff *skb,
+ 	if (err < 0)
+ 		goto err3;
+ 
+-	err = nft_flowtable_parse_hook(&ctx, nla[NFTA_FLOWTABLE_HOOK],
+-				       &flowtable_hook, flowtable, true);
++	err = nft_flowtable_parse_hook(&ctx, nla, &flowtable_hook, flowtable,
++				       extack, true);
+ 	if (err < 0)
+ 		goto err4;
+ 
+@@ -7939,7 +7999,7 @@ static int nf_tables_newflowtable(struct sk_buff *skb,
+ 					       &flowtable->hook_list,
+ 					       flowtable);
+ 	if (err < 0) {
+-		nft_flowtable_hooks_destroy(&flowtable->hook_list);
++		nft_hooks_destroy(&flowtable->hook_list);
+ 		goto err4;
+ 	}
+ 
+@@ -7979,7 +8039,8 @@ static void nft_flowtable_hook_release(struct nft_flowtable_hook *flowtable_hook
+ }
+ 
+ static int nft_delflowtable_hook(struct nft_ctx *ctx,
+-				 struct nft_flowtable *flowtable)
++				 struct nft_flowtable *flowtable,
++				 struct netlink_ext_ack *extack)
+ {
+ 	const struct nlattr * const *nla = ctx->nla;
+ 	struct nft_flowtable_hook flowtable_hook;
+@@ -7988,8 +8049,8 @@ static int nft_delflowtable_hook(struct nft_ctx *ctx,
+ 	struct nft_trans *trans;
+ 	int err;
+ 
+-	err = nft_flowtable_parse_hook(ctx, nla[NFTA_FLOWTABLE_HOOK],
+-				       &flowtable_hook, flowtable, false);
++	err = nft_flowtable_parse_hook(ctx, nla, &flowtable_hook, flowtable,
++				       extack, false);
+ 	if (err < 0)
+ 		return err;
+ 
+@@ -8071,7 +8132,7 @@ static int nf_tables_delflowtable(struct sk_buff *skb,
+ 	nft_ctx_init(&ctx, net, skb, info->nlh, family, table, NULL, nla);
+ 
+ 	if (nla[NFTA_FLOWTABLE_HOOK])
+-		return nft_delflowtable_hook(&ctx, flowtable);
++		return nft_delflowtable_hook(&ctx, flowtable, extack);
+ 
+ 	if (flowtable->use > 0) {
+ 		NL_SET_BAD_ATTR(extack, attr);
+@@ -8766,7 +8827,7 @@ static void nft_commit_release(struct nft_trans *trans)
+ 	case NFT_MSG_DELFLOWTABLE:
+ 	case NFT_MSG_DESTROYFLOWTABLE:
+ 		if (nft_trans_flowtable_update(trans))
+-			nft_flowtable_hooks_destroy(&nft_trans_flowtable_hooks(trans));
++			nft_hooks_destroy(&nft_trans_flowtable_hooks(trans));
+ 		else
+ 			nf_tables_flowtable_destroy(nft_trans_flowtable(trans));
+ 		break;
+@@ -9223,19 +9284,22 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
+ 		case NFT_MSG_NEWCHAIN:
+ 			if (nft_trans_chain_update(trans)) {
+ 				nft_chain_commit_update(trans);
+-				nf_tables_chain_notify(&trans->ctx, NFT_MSG_NEWCHAIN);
++				nf_tables_chain_notify(&trans->ctx, NFT_MSG_NEWCHAIN,
++						       &nft_trans_chain_hooks(trans));
++				list_splice(&nft_trans_chain_hooks(trans),
++					    &nft_trans_basechain(trans)->hook_list);
+ 				/* trans destroyed after rcu grace period */
+ 			} else {
+ 				nft_chain_commit_drop_policy(trans);
+ 				nft_clear(net, trans->ctx.chain);
+-				nf_tables_chain_notify(&trans->ctx, NFT_MSG_NEWCHAIN);
++				nf_tables_chain_notify(&trans->ctx, NFT_MSG_NEWCHAIN, NULL);
+ 				nft_trans_destroy(trans);
+ 			}
+ 			break;
+ 		case NFT_MSG_DELCHAIN:
+ 		case NFT_MSG_DESTROYCHAIN:
+ 			nft_chain_del(trans->ctx.chain);
+-			nf_tables_chain_notify(&trans->ctx, trans->msg_type);
++			nf_tables_chain_notify(&trans->ctx, trans->msg_type, NULL);
+ 			nf_tables_unregister_hook(trans->ctx.net,
+ 						  trans->ctx.table,
+ 						  trans->ctx.chain);
+@@ -9402,7 +9466,10 @@ static void nf_tables_abort_release(struct nft_trans *trans)
+ 		nf_tables_table_destroy(&trans->ctx);
+ 		break;
+ 	case NFT_MSG_NEWCHAIN:
+-		nf_tables_chain_destroy(&trans->ctx);
++		if (nft_trans_chain_update(trans))
++			nft_hooks_destroy(&nft_trans_chain_hooks(trans));
++		else
++			nf_tables_chain_destroy(&trans->ctx);
+ 		break;
+ 	case NFT_MSG_NEWRULE:
+ 		nf_tables_rule_destroy(&trans->ctx, nft_trans_rule(trans));
+@@ -9419,7 +9486,7 @@ static void nf_tables_abort_release(struct nft_trans *trans)
+ 		break;
+ 	case NFT_MSG_NEWFLOWTABLE:
+ 		if (nft_trans_flowtable_update(trans))
+-			nft_flowtable_hooks_destroy(&nft_trans_flowtable_hooks(trans));
++			nft_hooks_destroy(&nft_trans_flowtable_hooks(trans));
+ 		else
+ 			nf_tables_flowtable_destroy(nft_trans_flowtable(trans));
+ 		break;
+@@ -9465,6 +9532,9 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
+ 			break;
+ 		case NFT_MSG_NEWCHAIN:
+ 			if (nft_trans_chain_update(trans)) {
++				nft_netdev_unregister_hooks(net,
++							    &nft_trans_chain_hooks(trans),
++							    true);
+ 				free_percpu(nft_trans_chain_stats(trans));
+ 				kfree(nft_trans_chain_name(trans));
+ 				nft_trans_destroy(trans);
+diff --git a/net/netfilter/nft_ct_fast.c b/net/netfilter/nft_ct_fast.c
+index 89983b0613fa3..e684c8a918487 100644
+--- a/net/netfilter/nft_ct_fast.c
++++ b/net/netfilter/nft_ct_fast.c
+@@ -15,10 +15,6 @@ void nft_ct_get_fast_eval(const struct nft_expr *expr,
+ 	unsigned int state;
+ 
+ 	ct = nf_ct_get(pkt->skb, &ctinfo);
+-	if (!ct) {
+-		regs->verdict.code = NFT_BREAK;
+-		return;
+-	}
+ 
+ 	switch (priv->key) {
+ 	case NFT_CT_STATE:
+@@ -30,6 +26,16 @@ void nft_ct_get_fast_eval(const struct nft_expr *expr,
+ 			state = NF_CT_STATE_INVALID_BIT;
+ 		*dest = state;
+ 		return;
++	default:
++		break;
++	}
++
++	if (!ct) {
++		regs->verdict.code = NFT_BREAK;
++		return;
++	}
++
++	switch (priv->key) {
+ 	case NFT_CT_DIRECTION:
+ 		nft_reg_store8(dest, CTINFO2DIR(ctinfo));
+ 		return;
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index ecd9fc27e360c..b8c62d88567ba 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -2034,7 +2034,7 @@ retry:
+ 		goto retry;
+ 	}
+ 
+-	if (!dev_validate_header(dev, skb->data, len)) {
++	if (!dev_validate_header(dev, skb->data, len) || !skb->len) {
+ 		err = -EINVAL;
+ 		goto out_unlock;
+ 	}
+diff --git a/net/rxrpc/af_rxrpc.c b/net/rxrpc/af_rxrpc.c
+index 102f5cbff91a3..a6f0d29f35ef9 100644
+--- a/net/rxrpc/af_rxrpc.c
++++ b/net/rxrpc/af_rxrpc.c
+@@ -265,6 +265,7 @@ static int rxrpc_listen(struct socket *sock, int backlog)
+  * @key: The security context to use (defaults to socket setting)
+  * @user_call_ID: The ID to use
+  * @tx_total_len: Total length of data to transmit during the call (or -1)
++ * @hard_timeout: The maximum lifespan of the call in sec
+  * @gfp: The allocation constraints
+  * @notify_rx: Where to send notifications instead of socket queue
+  * @upgrade: Request service upgrade for call
+@@ -283,6 +284,7 @@ struct rxrpc_call *rxrpc_kernel_begin_call(struct socket *sock,
+ 					   struct key *key,
+ 					   unsigned long user_call_ID,
+ 					   s64 tx_total_len,
++					   u32 hard_timeout,
+ 					   gfp_t gfp,
+ 					   rxrpc_notify_rx_t notify_rx,
+ 					   bool upgrade,
+@@ -313,6 +315,7 @@ struct rxrpc_call *rxrpc_kernel_begin_call(struct socket *sock,
+ 	p.tx_total_len		= tx_total_len;
+ 	p.interruptibility	= interruptibility;
+ 	p.kernel		= true;
++	p.timeouts.hard		= hard_timeout;
+ 
+ 	memset(&cp, 0, sizeof(cp));
+ 	cp.local		= rx->local;
+diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h
+index 67b0a894162d7..5d44dc08f66d0 100644
+--- a/net/rxrpc/ar-internal.h
++++ b/net/rxrpc/ar-internal.h
+@@ -616,6 +616,7 @@ struct rxrpc_call {
+ 	unsigned long		expect_term_by;	/* When we expect call termination by */
+ 	u32			next_rx_timo;	/* Timeout for next Rx packet (jif) */
+ 	u32			next_req_timo;	/* Timeout for next Rx request packet (jif) */
++	u32			hard_timo;	/* Maximum lifetime or 0 (jif) */
+ 	struct timer_list	timer;		/* Combined event timer */
+ 	struct work_struct	destroyer;	/* In-process-context destroyer */
+ 	rxrpc_notify_rx_t	notify_rx;	/* kernel service Rx notification function */
+diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c
+index e9f1f49d18c2a..fecbc73054bc2 100644
+--- a/net/rxrpc/call_object.c
++++ b/net/rxrpc/call_object.c
+@@ -226,6 +226,13 @@ static struct rxrpc_call *rxrpc_alloc_client_call(struct rxrpc_sock *rx,
+ 	if (cp->exclusive)
+ 		__set_bit(RXRPC_CALL_EXCLUSIVE, &call->flags);
+ 
++	if (p->timeouts.normal)
++		call->next_rx_timo = min(msecs_to_jiffies(p->timeouts.normal), 1UL);
++	if (p->timeouts.idle)
++		call->next_req_timo = min(msecs_to_jiffies(p->timeouts.idle), 1UL);
++	if (p->timeouts.hard)
++		call->hard_timo = p->timeouts.hard * HZ;
++
+ 	ret = rxrpc_init_client_call_security(call);
+ 	if (ret < 0) {
+ 		rxrpc_prefail_call(call, RXRPC_CALL_LOCAL_ERROR, ret);
+@@ -257,7 +264,7 @@ void rxrpc_start_call_timer(struct rxrpc_call *call)
+ 	call->keepalive_at = j;
+ 	call->expect_rx_by = j;
+ 	call->expect_req_by = j;
+-	call->expect_term_by = j;
++	call->expect_term_by = j + call->hard_timo;
+ 	call->timer.expires = now;
+ }
+ 
+diff --git a/net/rxrpc/sendmsg.c b/net/rxrpc/sendmsg.c
+index da49fcf1c4567..8e0b94714e849 100644
+--- a/net/rxrpc/sendmsg.c
++++ b/net/rxrpc/sendmsg.c
+@@ -50,15 +50,11 @@ static int rxrpc_wait_to_be_connected(struct rxrpc_call *call, long *timeo)
+ 	_enter("%d", call->debug_id);
+ 
+ 	if (rxrpc_call_state(call) != RXRPC_CALL_CLIENT_AWAIT_CONN)
+-		return call->error;
++		goto no_wait;
+ 
+ 	add_wait_queue_exclusive(&call->waitq, &myself);
+ 
+ 	for (;;) {
+-		ret = call->error;
+-		if (ret < 0)
+-			break;
+-
+ 		switch (call->interruptibility) {
+ 		case RXRPC_INTERRUPTIBLE:
+ 		case RXRPC_PREINTERRUPTIBLE:
+@@ -69,10 +65,9 @@ static int rxrpc_wait_to_be_connected(struct rxrpc_call *call, long *timeo)
+ 			set_current_state(TASK_UNINTERRUPTIBLE);
+ 			break;
+ 		}
+-		if (rxrpc_call_state(call) != RXRPC_CALL_CLIENT_AWAIT_CONN) {
+-			ret = call->error;
++
++		if (rxrpc_call_state(call) != RXRPC_CALL_CLIENT_AWAIT_CONN)
+ 			break;
+-		}
+ 		if ((call->interruptibility == RXRPC_INTERRUPTIBLE ||
+ 		     call->interruptibility == RXRPC_PREINTERRUPTIBLE) &&
+ 		    signal_pending(current)) {
+@@ -85,6 +80,7 @@ static int rxrpc_wait_to_be_connected(struct rxrpc_call *call, long *timeo)
+ 	remove_wait_queue(&call->waitq, &myself);
+ 	__set_current_state(TASK_RUNNING);
+ 
++no_wait:
+ 	if (ret == 0 && rxrpc_call_is_complete(call))
+ 		ret = call->error;
+ 
+@@ -655,15 +651,19 @@ int rxrpc_do_sendmsg(struct rxrpc_sock *rx, struct msghdr *msg, size_t len)
+ 		if (IS_ERR(call))
+ 			return PTR_ERR(call);
+ 		/* ... and we have the call lock. */
++		p.call.nr_timeouts = 0;
+ 		ret = 0;
+ 		if (rxrpc_call_is_complete(call))
+ 			goto out_put_unlock;
+ 	} else {
+ 		switch (rxrpc_call_state(call)) {
+-		case RXRPC_CALL_UNINITIALISED:
+ 		case RXRPC_CALL_CLIENT_AWAIT_CONN:
+-		case RXRPC_CALL_SERVER_PREALLOC:
+ 		case RXRPC_CALL_SERVER_SECURING:
++			if (p.command == RXRPC_CMD_SEND_ABORT)
++				break;
++			fallthrough;
++		case RXRPC_CALL_UNINITIALISED:
++		case RXRPC_CALL_SERVER_PREALLOC:
+ 			rxrpc_put_call(call, rxrpc_call_put_sendmsg);
+ 			ret = -EBUSY;
+ 			goto error_release_sock;
+@@ -703,7 +703,7 @@ int rxrpc_do_sendmsg(struct rxrpc_sock *rx, struct msghdr *msg, size_t len)
+ 		fallthrough;
+ 	case 1:
+ 		if (p.call.timeouts.hard > 0) {
+-			j = msecs_to_jiffies(p.call.timeouts.hard);
++			j = p.call.timeouts.hard * HZ;
+ 			now = jiffies;
+ 			j += now;
+ 			WRITE_ONCE(call->expect_term_by, j);
+diff --git a/net/sched/act_mirred.c b/net/sched/act_mirred.c
+index 8037ec9b1d311..a61482c5edbe7 100644
+--- a/net/sched/act_mirred.c
++++ b/net/sched/act_mirred.c
+@@ -264,7 +264,7 @@ TC_INDIRECT_SCOPE int tcf_mirred_act(struct sk_buff *skb,
+ 		goto out;
+ 	}
+ 
+-	if (unlikely(!(dev->flags & IFF_UP))) {
++	if (unlikely(!(dev->flags & IFF_UP)) || !netif_carrier_ok(dev)) {
+ 		net_notice_ratelimited("tc mirred to Houston: device %s is down\n",
+ 				       dev->name);
+ 		goto out;
+diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
+index 3c3629c9e7b65..2621550bfddc1 100644
+--- a/net/sched/cls_api.c
++++ b/net/sched/cls_api.c
+@@ -1589,6 +1589,7 @@ static int tcf_block_bind(struct tcf_block *block,
+ 
+ err_unroll:
+ 	list_for_each_entry_safe(block_cb, next, &bo->cb_list, list) {
++		list_del(&block_cb->driver_list);
+ 		if (i-- > 0) {
+ 			list_del(&block_cb->list);
+ 			tcf_block_playback_offloads(block, block_cb->cb,
+diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
+index 475fe222a8556..a1c4ee2e0be22 100644
+--- a/net/sched/cls_flower.c
++++ b/net/sched/cls_flower.c
+@@ -2210,10 +2210,10 @@ static int fl_change(struct net *net, struct sk_buff *in_skb,
+ 		spin_lock(&tp->lock);
+ 		if (!handle) {
+ 			handle = 1;
+-			err = idr_alloc_u32(&head->handle_idr, fnew, &handle,
++			err = idr_alloc_u32(&head->handle_idr, NULL, &handle,
+ 					    INT_MAX, GFP_ATOMIC);
+ 		} else {
+-			err = idr_alloc_u32(&head->handle_idr, fnew, &handle,
++			err = idr_alloc_u32(&head->handle_idr, NULL, &handle,
+ 					    handle, GFP_ATOMIC);
+ 
+ 			/* Filter with specified handle was concurrently
+@@ -2339,7 +2339,8 @@ errout_hw:
+ errout_mask:
+ 	fl_mask_put(head, fnew->mask);
+ errout_idr:
+-	idr_remove(&head->handle_idr, fnew->handle);
++	if (!fold)
++		idr_remove(&head->handle_idr, fnew->handle);
+ 	__fl_put(fnew);
+ errout_tb:
+ 	kfree(tb);
+@@ -2378,7 +2379,7 @@ static void fl_walk(struct tcf_proto *tp, struct tcf_walker *arg,
+ 	rcu_read_lock();
+ 	idr_for_each_entry_continue_ul(&head->handle_idr, f, tmp, id) {
+ 		/* don't return filters that are being deleted */
+-		if (!refcount_inc_not_zero(&f->refcnt))
++		if (!f || !refcount_inc_not_zero(&f->refcnt))
+ 			continue;
+ 		rcu_read_unlock();
+ 
+diff --git a/sound/soc/intel/common/soc-acpi-intel-byt-match.c b/sound/soc/intel/common/soc-acpi-intel-byt-match.c
+index db5a92b9875a8..87c44f284971a 100644
+--- a/sound/soc/intel/common/soc-acpi-intel-byt-match.c
++++ b/sound/soc/intel/common/soc-acpi-intel-byt-match.c
+@@ -124,7 +124,7 @@ static const struct snd_soc_acpi_codecs rt5640_comp_ids = {
+ };
+ 
+ static const struct snd_soc_acpi_codecs wm5102_comp_ids = {
+-	.num_codecs = 2,
++	.num_codecs = 3,
+ 	.codecs = { "10WM5102", "WM510204", "WM510205"},
+ };
+ 
+diff --git a/sound/usb/caiaq/input.c b/sound/usb/caiaq/input.c
+index 1e2cf2f08eecd..84f26dce7f5d0 100644
+--- a/sound/usb/caiaq/input.c
++++ b/sound/usb/caiaq/input.c
+@@ -804,6 +804,7 @@ int snd_usb_caiaq_input_init(struct snd_usb_caiaqdev *cdev)
+ 
+ 	default:
+ 		/* no input methods supported on this device */
++		ret = -EINVAL;
+ 		goto exit_free_idev;
+ 	}
+ 
+diff --git a/tools/perf/Build b/tools/perf/Build
+index 6dd67e5022955..aa76236228349 100644
+--- a/tools/perf/Build
++++ b/tools/perf/Build
+@@ -56,6 +56,6 @@ CFLAGS_builtin-report.o	   += -DDOCDIR="BUILD_STR($(srcdir_SQ)/Documentation)"
+ perf-y += util/
+ perf-y += arch/
+ perf-y += ui/
+-perf-$(CONFIG_LIBTRACEEVENT) += scripts/
++perf-y += scripts/
+ 
+ gtk-y += ui/gtk/
+diff --git a/tools/perf/Makefile.perf b/tools/perf/Makefile.perf
+index bac9272682b75..2fcee585b225d 100644
+--- a/tools/perf/Makefile.perf
++++ b/tools/perf/Makefile.perf
+@@ -647,13 +647,16 @@ all: shell_compatibility_test $(ALL_PROGRAMS) $(LANG_BINDINGS) $(OTHER_PROGRAMS)
+ # Create python binding output directory if not already present
+ _dummy := $(shell [ -d '$(OUTPUT)python' ] || mkdir -p '$(OUTPUT)python')
+ 
+-$(OUTPUT)python/perf$(PYTHON_EXTENSION_SUFFIX): $(PYTHON_EXT_SRCS) $(PYTHON_EXT_DEPS) $(LIBPERF)
++$(OUTPUT)python/perf$(PYTHON_EXTENSION_SUFFIX): $(PYTHON_EXT_SRCS) $(PYTHON_EXT_DEPS) $(LIBPERF) $(LIBSUBCMD)
+ 	$(QUIET_GEN)LDSHARED="$(CC) -pthread -shared" \
+         CFLAGS='$(CFLAGS)' LDFLAGS='$(LDFLAGS)' \
+ 	  $(PYTHON_WORD) util/setup.py \
+ 	  --quiet build_ext; \
+ 	cp $(PYTHON_EXTBUILD_LIB)perf*.so $(OUTPUT)python/
+ 
++python_perf_target:
++	@echo "Target is: $(OUTPUT)python/perf$(PYTHON_EXTENSION_SUFFIX)"
++
+ please_set_SHELL_PATH_to_a_more_modern_shell:
+ 	$(Q)$$(:)
+ 
+@@ -1152,7 +1155,7 @@ FORCE:
+ .PHONY: all install clean config-clean strip install-gtk
+ .PHONY: shell_compatibility_test please_set_SHELL_PATH_to_a_more_modern_shell
+ .PHONY: .FORCE-PERF-VERSION-FILE TAGS tags cscope FORCE prepare
+-.PHONY: archheaders
++.PHONY: archheaders python_perf_target
+ 
+ endif # force_fixdep
+ 
+diff --git a/tools/perf/builtin-ftrace.c b/tools/perf/builtin-ftrace.c
+index d7fe00f66b831..fb1b66ef2e167 100644
+--- a/tools/perf/builtin-ftrace.c
++++ b/tools/perf/builtin-ftrace.c
+@@ -1228,10 +1228,12 @@ int cmd_ftrace(int argc, const char **argv)
+ 		goto out_delete_filters;
+ 	}
+ 
++	/* Make system wide (-a) the default target. */
++	if (!argc && target__none(&ftrace.target))
++		ftrace.target.system_wide = true;
++
+ 	switch (subcmd) {
+ 	case PERF_FTRACE_TRACE:
+-		if (!argc && target__none(&ftrace.target))
+-			ftrace.target.system_wide = true;
+ 		cmd_func = __cmd_ftrace;
+ 		break;
+ 	case PERF_FTRACE_LATENCY:
+diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
+index 8374117e66f6e..be7c0c29d15b0 100644
+--- a/tools/perf/builtin-record.c
++++ b/tools/perf/builtin-record.c
+@@ -1866,7 +1866,7 @@ static void __record__read_lost_samples(struct record *rec, struct evsel *evsel,
+ 	int id_hdr_size;
+ 
+ 	if (perf_evsel__read(&evsel->core, cpu_idx, thread_idx, &count) < 0) {
+-		pr_err("read LOST count failed\n");
++		pr_debug("read LOST count failed\n");
+ 		return;
+ 	}
+ 
+diff --git a/tools/perf/builtin-script.c b/tools/perf/builtin-script.c
+index a792214d1af85..6f085602b7bd3 100644
+--- a/tools/perf/builtin-script.c
++++ b/tools/perf/builtin-script.c
+@@ -2318,8 +2318,8 @@ static void setup_scripting(void)
+ {
+ #ifdef HAVE_LIBTRACEEVENT
+ 	setup_perl_scripting();
+-	setup_python_scripting();
+ #endif
++	setup_python_scripting();
+ }
+ 
+ static int flush_scripting(void)
+diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
+index fa7c40956d0fa..eeba93ae3b584 100644
+--- a/tools/perf/builtin-stat.c
++++ b/tools/perf/builtin-stat.c
+@@ -773,7 +773,7 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
+ 		counter->reset_group = false;
+ 		if (bpf_counter__load(counter, &target))
+ 			return -1;
+-		if (!evsel__is_bpf(counter))
++		if (!(evsel__is_bperf(counter)))
+ 			all_counters_use_bpf = false;
+ 	}
+ 
+@@ -789,7 +789,7 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
+ 
+ 		if (counter->reset_group || counter->errored)
+ 			continue;
+-		if (evsel__is_bpf(counter))
++		if (evsel__is_bperf(counter))
+ 			continue;
+ try_again:
+ 		if (create_perf_stat_counter(counter, &stat_config, &target,
+diff --git a/tools/perf/pmu-events/arch/powerpc/power9/other.json b/tools/perf/pmu-events/arch/powerpc/power9/other.json
+index 3f69422c21f99..f10bd554521a0 100644
+--- a/tools/perf/pmu-events/arch/powerpc/power9/other.json
++++ b/tools/perf/pmu-events/arch/powerpc/power9/other.json
+@@ -1417,7 +1417,7 @@
+   {
+     "EventCode": "0x45054",
+     "EventName": "PM_FMA_CMPL",
+-    "BriefDescription": "two flops operation completed (fmadd, fnmadd, fmsub, fnmsub) Scalar instructions only. "
++    "BriefDescription": "two flops operation completed (fmadd, fnmadd, fmsub, fnmsub) Scalar instructions only."
+   },
+   {
+     "EventCode": "0x201E8",
+@@ -2017,7 +2017,7 @@
+   {
+     "EventCode": "0xC0BC",
+     "EventName": "PM_LSU_FLUSH_OTHER",
+-    "BriefDescription": "Other LSU flushes including: Sync (sync ack from L2 caused search of LRQ for oldest snooped load, This will either signal a Precise Flush of the oldest snooped loa or a Flush Next PPC); Data Valid Flush Next (several cases of this, one example is store and reload are lined up such that a store-hit-reload scenario exists and the CDF has already launched and has gotten bad/stale data); Bad Data Valid Flush Next (might be a few cases of this, one example is a larxa (D$ hit) return data and dval but can't allocate to LMQ (LMQ full or other reason). Already gave dval but can't watch it for snoop_hit_larx. Need to take the “bad dval” back and flush all younger ops)"
++    "BriefDescription": "Other LSU flushes including: Sync (sync ack from L2 caused search of LRQ for oldest snooped load, This will either signal a Precise Flush of the oldest snooped loa or a Flush Next PPC); Data Valid Flush Next (several cases of this, one example is store and reload are lined up such that a store-hit-reload scenario exists and the CDF has already launched and has gotten bad/stale data); Bad Data Valid Flush Next (might be a few cases of this, one example is a larxa (D$ hit) return data and dval but can't allocate to LMQ (LMQ full or other reason). Already gave dval but can't watch it for snoop_hit_larx. Need to take the 'bad dval' back and flush all younger ops)"
+   },
+   {
+     "EventCode": "0x5094",
+diff --git a/tools/perf/pmu-events/arch/powerpc/power9/pipeline.json b/tools/perf/pmu-events/arch/powerpc/power9/pipeline.json
+index d0265f255de2b..723bffa41c448 100644
+--- a/tools/perf/pmu-events/arch/powerpc/power9/pipeline.json
++++ b/tools/perf/pmu-events/arch/powerpc/power9/pipeline.json
+@@ -442,7 +442,7 @@
+   {
+     "EventCode": "0x4D052",
+     "EventName": "PM_2FLOP_CMPL",
+-    "BriefDescription": "DP vector version of fmul, fsub, fcmp, fsel, fabs, fnabs, fres ,fsqrte, fneg "
++    "BriefDescription": "DP vector version of fmul, fsub, fcmp, fsel, fabs, fnabs, fres ,fsqrte, fneg"
+   },
+   {
+     "EventCode": "0x1F142",
+diff --git a/tools/perf/pmu-events/arch/s390/cf_z16/extended.json b/tools/perf/pmu-events/arch/s390/cf_z16/extended.json
+index c306190fc06f2..c2b10ec1c6e01 100644
+--- a/tools/perf/pmu-events/arch/s390/cf_z16/extended.json
++++ b/tools/perf/pmu-events/arch/s390/cf_z16/extended.json
+@@ -95,28 +95,28 @@
+ 		"EventCode": "145",
+ 		"EventName": "DCW_REQ",
+ 		"BriefDescription": "Directory Write Level 1 Data Cache from Cache",
+-		"PublicDescription": "A directory write to the Level-1 Data cache directory where the returned cache line was sourced from the requestor’s Level-2 cache."
++		"PublicDescription": "A directory write to the Level-1 Data cache directory where the returned cache line was sourced from the requestors Level-2 cache."
+ 	},
+ 	{
+ 		"Unit": "CPU-M-CF",
+ 		"EventCode": "146",
+ 		"EventName": "DCW_REQ_IV",
+ 		"BriefDescription": "Directory Write Level 1 Data Cache from Cache with Intervention",
+-		"PublicDescription": "A directory write to the Level-1 Data cache directory where the returned cache line was sourced from the requestor’s Level-2 cache with intervention."
++		"PublicDescription": "A directory write to the Level-1 Data cache directory where the returned cache line was sourced from the requestors Level-2 cache with intervention."
+ 	},
+ 	{
+ 		"Unit": "CPU-M-CF",
+ 		"EventCode": "147",
+ 		"EventName": "DCW_REQ_CHIP_HIT",
+ 		"BriefDescription": "Directory Write Level 1 Data Cache from Cache with Chip HP Hit",
+-		"PublicDescription": "A directory write to the Level-1 Data cache directory where the returned cache line was sourced from the requestor’s Level-2 cache after using chip level horizontal persistence, Chip-HP hit."
++		"PublicDescription": "A directory write to the Level-1 Data cache directory where the returned cache line was sourced from the requestors Level-2 cache after using chip level horizontal persistence, Chip-HP hit."
+ 	},
+ 	{
+ 		"Unit": "CPU-M-CF",
+ 		"EventCode": "148",
+ 		"EventName": "DCW_REQ_DRAWER_HIT",
+ 		"BriefDescription": "Directory Write Level 1 Data Cache from Cache with Drawer HP Hit",
+-		"PublicDescription": "A directory write to the Level-1 Data cache directory where the returned cache line was sourced from the requestor’s Level-2 cache after using drawer level horizontal persistence, Drawer-HP hit."
++		"PublicDescription": "A directory write to the Level-1 Data cache directory where the returned cache line was sourced from the requestors Level-2 cache after using drawer level horizontal persistence, Drawer-HP hit."
+ 	},
+ 	{
+ 		"Unit": "CPU-M-CF",
+@@ -284,7 +284,7 @@
+ 		"EventCode": "172",
+ 		"EventName": "ICW_REQ_DRAWER_HIT",
+ 		"BriefDescription": "Directory Write Level 1 Instruction Cache from Cache with Drawer HP Hit",
+-		"PublicDescription": "A directory write to the Level-1 Instruction cache directory where the returned cache line was sourced from the requestor’s Level-2 cache using drawer level horizontal persistence, Drawer-HP hit."
++		"PublicDescription": "A directory write to the Level-1 Instruction cache directory where the returned cache line was sourced from the requestors Level-2 cache using drawer level horizontal persistence, Drawer-HP hit."
+ 	},
+ 	{
+ 		"Unit": "CPU-M-CF",
+diff --git a/tools/perf/pmu-events/empty-pmu-events.c b/tools/perf/pmu-events/empty-pmu-events.c
+index a938b74cf487c..e74defb5284ff 100644
+--- a/tools/perf/pmu-events/empty-pmu-events.c
++++ b/tools/perf/pmu-events/empty-pmu-events.c
+@@ -227,7 +227,7 @@ static const struct pmu_events_map pmu_events_map[] = {
+ 	},
+ };
+ 
+-static const struct pmu_event pme_test_soc_sys[] = {
++static const struct pmu_event pmu_events__test_soc_sys[] = {
+ 	{
+ 		.name = "sys_ddr_pmu.write_cycles",
+ 		.event = "event=0x2b",
+@@ -258,8 +258,8 @@ struct pmu_sys_events {
+ 
+ static const struct pmu_sys_events pmu_sys_event_tables[] = {
+ 	{
+-		.table = { pme_test_soc_sys },
+-		.name = "pme_test_soc_sys",
++		.table = { pmu_events__test_soc_sys },
++		.name = "pmu_events__test_soc_sys",
+ 	},
+ 	{
+ 		.table = { 0 }
+diff --git a/tools/perf/scripts/Build b/tools/perf/scripts/Build
+index 68d4b54574adb..7d8e2e57faac5 100644
+--- a/tools/perf/scripts/Build
++++ b/tools/perf/scripts/Build
+@@ -1,2 +1,4 @@
+-perf-$(CONFIG_LIBPERL)   += perl/Perf-Trace-Util/
++ifeq ($(CONFIG_LIBTRACEEVENT),y)
++  perf-$(CONFIG_LIBPERL)   += perl/Perf-Trace-Util/
++endif
+ perf-$(CONFIG_LIBPYTHON) += python/Perf-Trace-Util/
+diff --git a/tools/perf/scripts/python/Perf-Trace-Util/Build b/tools/perf/scripts/python/Perf-Trace-Util/Build
+index d5fed4e426179..7d0e33ce6aba4 100644
+--- a/tools/perf/scripts/python/Perf-Trace-Util/Build
++++ b/tools/perf/scripts/python/Perf-Trace-Util/Build
+@@ -1,3 +1,3 @@
+-perf-$(CONFIG_LIBTRACEEVENT) += Context.o
++perf-y += Context.o
+ 
+ CFLAGS_Context.o += $(PYTHON_EMBED_CCOPTS) -Wno-redundant-decls -Wno-strict-prototypes -Wno-unused-parameter -Wno-nested-externs
+diff --git a/tools/perf/scripts/python/Perf-Trace-Util/Context.c b/tools/perf/scripts/python/Perf-Trace-Util/Context.c
+index 895f5fc239653..b0d449f41650f 100644
+--- a/tools/perf/scripts/python/Perf-Trace-Util/Context.c
++++ b/tools/perf/scripts/python/Perf-Trace-Util/Context.c
+@@ -59,6 +59,7 @@ static struct scripting_context *get_scripting_context(PyObject *args)
+ 	return get_args(args, "context", NULL);
+ }
+ 
++#ifdef HAVE_LIBTRACEEVENT
+ static PyObject *perf_trace_context_common_pc(PyObject *obj, PyObject *args)
+ {
+ 	struct scripting_context *c = get_scripting_context(args);
+@@ -90,6 +91,7 @@ static PyObject *perf_trace_context_common_lock_depth(PyObject *obj,
+ 
+ 	return Py_BuildValue("i", common_lock_depth(c));
+ }
++#endif
+ 
+ static PyObject *perf_sample_insn(PyObject *obj, PyObject *args)
+ {
+@@ -178,12 +180,14 @@ static PyObject *perf_sample_srccode(PyObject *obj, PyObject *args)
+ }
+ 
+ static PyMethodDef ContextMethods[] = {
++#ifdef HAVE_LIBTRACEEVENT
+ 	{ "common_pc", perf_trace_context_common_pc, METH_VARARGS,
+ 	  "Get the common preempt count event field value."},
+ 	{ "common_flags", perf_trace_context_common_flags, METH_VARARGS,
+ 	  "Get the common flags event field value."},
+ 	{ "common_lock_depth", perf_trace_context_common_lock_depth,
+ 	  METH_VARARGS,	"Get the common lock depth event field value."},
++#endif
+ 	{ "perf_sample_insn", perf_sample_insn,
+ 	  METH_VARARGS,	"Get the machine code instruction."},
+ 	{ "perf_set_itrace_options", perf_set_itrace_options,
+diff --git a/tools/perf/scripts/python/intel-pt-events.py b/tools/perf/scripts/python/intel-pt-events.py
+index 08862a2582f44..1c76368f13c1a 100644
+--- a/tools/perf/scripts/python/intel-pt-events.py
++++ b/tools/perf/scripts/python/intel-pt-events.py
+@@ -11,7 +11,7 @@
+ # FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ # more details.
+ 
+-from __future__ import print_function
++from __future__ import division, print_function
+ 
+ import io
+ import os
+diff --git a/tools/perf/tests/attr/base-record b/tools/perf/tests/attr/base-record
+index 3ef07a12aa142..27c21271a16c9 100644
+--- a/tools/perf/tests/attr/base-record
++++ b/tools/perf/tests/attr/base-record
+@@ -5,7 +5,7 @@ group_fd=-1
+ flags=0|8
+ cpu=*
+ type=0|1
+-size=128
++size=136
+ config=0
+ sample_period=*
+ sample_type=263
+diff --git a/tools/perf/tests/attr/base-stat b/tools/perf/tests/attr/base-stat
+index 4081644565306..a21fb65bc012e 100644
+--- a/tools/perf/tests/attr/base-stat
++++ b/tools/perf/tests/attr/base-stat
+@@ -5,7 +5,7 @@ group_fd=-1
+ flags=0|8
+ cpu=*
+ type=0
+-size=128
++size=136
+ config=0
+ sample_period=0
+ sample_type=65536
+diff --git a/tools/perf/tests/attr/system-wide-dummy b/tools/perf/tests/attr/system-wide-dummy
+index 8fec06eda5f90..2f3e3eb728eb4 100644
+--- a/tools/perf/tests/attr/system-wide-dummy
++++ b/tools/perf/tests/attr/system-wide-dummy
+@@ -7,7 +7,7 @@ cpu=*
+ pid=-1
+ flags=8
+ type=1
+-size=128
++size=136
+ config=9
+ sample_period=4000
+ sample_type=455
+diff --git a/tools/perf/tests/make b/tools/perf/tests/make
+index 009d6efb673ce..deb37fb982e97 100644
+--- a/tools/perf/tests/make
++++ b/tools/perf/tests/make
+@@ -62,10 +62,11 @@ lib = lib
+ endif
+ 
+ has = $(shell which $1 2>/dev/null)
++python_perf_so := $(shell $(MAKE) python_perf_target|grep "Target is:"|awk '{print $$3}')
+ 
+ # standard single make variable specified
+ make_clean_all      := clean all
+-make_python_perf_so := python/perf.so
++make_python_perf_so := $(python_perf_so)
+ make_debug          := DEBUG=1
+ make_no_libperl     := NO_LIBPERL=1
+ make_no_libpython   := NO_LIBPYTHON=1
+@@ -204,7 +205,7 @@ test_make_doc    := $(test_ok)
+ test_make_help_O := $(test_ok)
+ test_make_doc_O  := $(test_ok)
+ 
+-test_make_python_perf_so := test -f $(PERF_O)/python/perf.so
++test_make_python_perf_so := test -f $(PERF_O)/$(python_perf_so)
+ 
+ test_make_perf_o           := test -f $(PERF_O)/perf.o
+ test_make_util_map_o       := test -f $(PERF_O)/util/map.o
+diff --git a/tools/perf/tests/shell/record_offcpu.sh b/tools/perf/tests/shell/record_offcpu.sh
+index e01973d4e0fba..f062ae9a95e1a 100755
+--- a/tools/perf/tests/shell/record_offcpu.sh
++++ b/tools/perf/tests/shell/record_offcpu.sh
+@@ -65,7 +65,7 @@ test_offcpu_child() {
+ 
+   # perf bench sched messaging creates 400 processes
+   if ! perf record --off-cpu -e dummy -o ${perfdata} -- \
+-    perf bench sched messaging -g 10 > /dev/null 2&>1
++    perf bench sched messaging -g 10 > /dev/null 2>&1
+   then
+     echo "Child task off-cpu test [Failed record]"
+     err=1
+diff --git a/tools/perf/util/Build b/tools/perf/util/Build
+index 918b501f9bd8b..4868e3bf7df96 100644
+--- a/tools/perf/util/Build
++++ b/tools/perf/util/Build
+@@ -78,7 +78,7 @@ perf-y += pmu-bison.o
+ perf-y += pmu-hybrid.o
+ perf-y += svghelper.o
+ perf-$(CONFIG_LIBTRACEEVENT) += trace-event-info.o
+-perf-$(CONFIG_LIBTRACEEVENT) += trace-event-scripting.o
++perf-y += trace-event-scripting.o
+ perf-$(CONFIG_LIBTRACEEVENT) += trace-event.o
+ perf-$(CONFIG_LIBTRACEEVENT) += trace-event-parse.o
+ perf-$(CONFIG_LIBTRACEEVENT) += trace-event-read.o
+diff --git a/tools/perf/util/bpf_skel/lock_contention.bpf.c b/tools/perf/util/bpf_skel/lock_contention.bpf.c
+index e6007eaeda1a6..141b36d13b19a 100644
+--- a/tools/perf/util/bpf_skel/lock_contention.bpf.c
++++ b/tools/perf/util/bpf_skel/lock_contention.bpf.c
+@@ -182,7 +182,13 @@ static inline struct task_struct *get_lock_owner(__u64 lock, __u32 flags)
+ 		struct mutex *mutex = (void *)lock;
+ 		owner = BPF_CORE_READ(mutex, owner.counter);
+ 	} else if (flags == LCB_F_READ || flags == LCB_F_WRITE) {
+-#if __has_builtin(bpf_core_type_matches)
++	/*
++	 * Support for the BPF_TYPE_MATCHES argument to the
++	 * __builtin_preserve_type_info builtin was added at some point during
++	 * development of clang 15 and it's what is needed for
++	 * bpf_core_type_matches.
++	 */
++#if __has_builtin(__builtin_preserve_type_info) && __clang_major__ >= 15
+ 		if (bpf_core_type_matches(struct rw_semaphore___old)) {
+ 			struct rw_semaphore___old *rwsem = (void *)lock;
+ 			owner = (unsigned long)BPF_CORE_READ(rwsem, owner);
+diff --git a/tools/perf/util/cs-etm.c b/tools/perf/util/cs-etm.c
+index f65bac5ddbdb6..e43bc9eea3087 100644
+--- a/tools/perf/util/cs-etm.c
++++ b/tools/perf/util/cs-etm.c
+@@ -2517,26 +2517,29 @@ static int cs_etm__process_auxtrace_event(struct perf_session *session,
+ 	return 0;
+ }
+ 
+-static bool cs_etm__is_timeless_decoding(struct cs_etm_auxtrace *etm)
++static int cs_etm__setup_timeless_decoding(struct cs_etm_auxtrace *etm)
+ {
+ 	struct evsel *evsel;
+ 	struct evlist *evlist = etm->session->evlist;
+-	bool timeless_decoding = true;
+ 
+ 	/* Override timeless mode with user input from --itrace=Z */
+-	if (etm->synth_opts.timeless_decoding)
+-		return true;
++	if (etm->synth_opts.timeless_decoding) {
++		etm->timeless_decoding = true;
++		return 0;
++	}
+ 
+ 	/*
+-	 * Circle through the list of event and complain if we find one
+-	 * with the time bit set.
++	 * Find the cs_etm evsel and look at what its timestamp setting was
+ 	 */
+-	evlist__for_each_entry(evlist, evsel) {
+-		if ((evsel->core.attr.sample_type & PERF_SAMPLE_TIME))
+-			timeless_decoding = false;
+-	}
++	evlist__for_each_entry(evlist, evsel)
++		if (cs_etm__evsel_is_auxtrace(etm->session, evsel)) {
++			etm->timeless_decoding =
++				!(evsel->core.attr.config & BIT(ETM_OPT_TS));
++			return 0;
++		}
+ 
+-	return timeless_decoding;
++	pr_err("CS ETM: Couldn't find ETM evsel\n");
++	return -EINVAL;
+ }
+ 
+ /*
+@@ -2943,7 +2946,6 @@ int cs_etm__process_auxtrace_info_full(union perf_event *event,
+ 	etm->snapshot_mode = (ptr[CS_ETM_SNAPSHOT] != 0);
+ 	etm->metadata = metadata;
+ 	etm->auxtrace_type = auxtrace_info->type;
+-	etm->timeless_decoding = cs_etm__is_timeless_decoding(etm);
+ 
+ 	/* Use virtual timestamps if all ETMs report ts_source = 1 */
+ 	etm->has_virtual_ts = cs_etm__has_virtual_ts(metadata, num_cpu);
+@@ -2960,6 +2962,10 @@ int cs_etm__process_auxtrace_info_full(union perf_event *event,
+ 	etm->auxtrace.evsel_is_auxtrace = cs_etm__evsel_is_auxtrace;
+ 	session->auxtrace = &etm->auxtrace;
+ 
++	err = cs_etm__setup_timeless_decoding(etm);
++	if (err)
++		return err;
++
+ 	etm->unknown_thread = thread__new(999999999, 999999999);
+ 	if (!etm->unknown_thread) {
+ 		err = -ENOMEM;
+diff --git a/tools/perf/util/evsel.h b/tools/perf/util/evsel.h
+index 24cb807ef6ce0..1a7358b46ad4e 100644
+--- a/tools/perf/util/evsel.h
++++ b/tools/perf/util/evsel.h
+@@ -267,6 +267,11 @@ static inline bool evsel__is_bpf(struct evsel *evsel)
+ 	return evsel->bpf_counter_ops != NULL;
+ }
+ 
++static inline bool evsel__is_bperf(struct evsel *evsel)
++{
++	return evsel->bpf_counter_ops != NULL && list_empty(&evsel->bpf_counter_list);
++}
++
+ #define EVSEL__MAX_ALIASES 8
+ 
+ extern const char *const evsel__hw_cache[PERF_COUNT_HW_CACHE_MAX][EVSEL__MAX_ALIASES];
+diff --git a/tools/perf/util/pmu.c b/tools/perf/util/pmu.c
+index c256b29defad3..d64d7b43f806d 100644
+--- a/tools/perf/util/pmu.c
++++ b/tools/perf/util/pmu.c
+@@ -1745,7 +1745,7 @@ static int perf_pmu__new_caps(struct list_head *list, char *name, char *value)
+ 	return 0;
+ 
+ free_name:
+-	zfree(caps->name);
++	zfree(&caps->name);
+ free_caps:
+ 	free(caps);
+ 
+diff --git a/tools/perf/util/scripting-engines/Build b/tools/perf/util/scripting-engines/Build
+index 2c96aa3cc1ec8..c220fec970324 100644
+--- a/tools/perf/util/scripting-engines/Build
++++ b/tools/perf/util/scripting-engines/Build
+@@ -1,7 +1,7 @@
+ ifeq ($(CONFIG_LIBTRACEEVENT),y)
+   perf-$(CONFIG_LIBPERL)   += trace-event-perl.o
+-  perf-$(CONFIG_LIBPYTHON) += trace-event-python.o
+ endif
++perf-$(CONFIG_LIBPYTHON) += trace-event-python.o
+ 
+ CFLAGS_trace-event-perl.o += $(PERL_EMBED_CCOPTS) -Wno-redundant-decls -Wno-strict-prototypes -Wno-unused-parameter -Wno-shadow -Wno-nested-externs -Wno-undef -Wno-switch-default -Wno-bad-function-cast -Wno-declaration-after-statement -Wno-switch-enum
+ 
+diff --git a/tools/perf/util/scripting-engines/trace-event-python.c b/tools/perf/util/scripting-engines/trace-event-python.c
+index 2c2697c5d0254..0f4ef61f2ffae 100644
+--- a/tools/perf/util/scripting-engines/trace-event-python.c
++++ b/tools/perf/util/scripting-engines/trace-event-python.c
+@@ -30,7 +30,9 @@
+ #include <linux/bitmap.h>
+ #include <linux/compiler.h>
+ #include <linux/time64.h>
++#ifdef HAVE_LIBTRACEEVENT
+ #include <traceevent/event-parse.h>
++#endif
+ 
+ #include "../build-id.h"
+ #include "../counts.h"
+@@ -87,18 +89,21 @@ PyMODINIT_FUNC initperf_trace_context(void);
+ PyMODINIT_FUNC PyInit_perf_trace_context(void);
+ #endif
+ 
++#ifdef HAVE_LIBTRACEEVENT
+ #define TRACE_EVENT_TYPE_MAX				\
+ 	((1 << (sizeof(unsigned short) * 8)) - 1)
+ 
+ static DECLARE_BITMAP(events_defined, TRACE_EVENT_TYPE_MAX);
+ 
+-#define MAX_FIELDS	64
+ #define N_COMMON_FIELDS	7
+ 
+-extern struct scripting_context *scripting_context;
+-
+ static char *cur_field_name;
+ static int zero_flag_atom;
++#endif
++
++#define MAX_FIELDS	64
++
++extern struct scripting_context *scripting_context;
+ 
+ static PyObject *main_module, *main_dict;
+ 
+@@ -153,6 +158,26 @@ static PyObject *get_handler(const char *handler_name)
+ 	return handler;
+ }
+ 
++static void call_object(PyObject *handler, PyObject *args, const char *die_msg)
++{
++	PyObject *retval;
++
++	retval = PyObject_CallObject(handler, args);
++	if (retval == NULL)
++		handler_call_die(die_msg);
++	Py_DECREF(retval);
++}
++
++static void try_call_object(const char *handler_name, PyObject *args)
++{
++	PyObject *handler;
++
++	handler = get_handler(handler_name);
++	if (handler)
++		call_object(handler, args, handler_name);
++}
++
++#ifdef HAVE_LIBTRACEEVENT
+ static int get_argument_count(PyObject *handler)
+ {
+ 	int arg_count = 0;
+@@ -181,25 +206,6 @@ static int get_argument_count(PyObject *handler)
+ 	return arg_count;
+ }
+ 
+-static void call_object(PyObject *handler, PyObject *args, const char *die_msg)
+-{
+-	PyObject *retval;
+-
+-	retval = PyObject_CallObject(handler, args);
+-	if (retval == NULL)
+-		handler_call_die(die_msg);
+-	Py_DECREF(retval);
+-}
+-
+-static void try_call_object(const char *handler_name, PyObject *args)
+-{
+-	PyObject *handler;
+-
+-	handler = get_handler(handler_name);
+-	if (handler)
+-		call_object(handler, args, handler_name);
+-}
+-
+ static void define_value(enum tep_print_arg_type field_type,
+ 			 const char *ev_name,
+ 			 const char *field_name,
+@@ -379,6 +385,7 @@ static PyObject *get_field_numeric_entry(struct tep_event *event,
+ 		obj = list;
+ 	return obj;
+ }
++#endif
+ 
+ static const char *get_dsoname(struct map *map)
+ {
+@@ -906,6 +913,7 @@ static PyObject *get_perf_sample_dict(struct perf_sample *sample,
+ 	return dict;
+ }
+ 
++#ifdef HAVE_LIBTRACEEVENT
+ static void python_process_tracepoint(struct perf_sample *sample,
+ 				      struct evsel *evsel,
+ 				      struct addr_location *al,
+@@ -1035,6 +1043,16 @@ static void python_process_tracepoint(struct perf_sample *sample,
+ 
+ 	Py_DECREF(t);
+ }
++#else
++static void python_process_tracepoint(struct perf_sample *sample __maybe_unused,
++				      struct evsel *evsel __maybe_unused,
++				      struct addr_location *al __maybe_unused,
++				      struct addr_location *addr_al __maybe_unused)
++{
++	fprintf(stderr, "Tracepoint events are not supported because "
++			"perf is not linked with libtraceevent.\n");
++}
++#endif
+ 
+ static PyObject *tuple_new(unsigned int sz)
+ {
+@@ -1965,6 +1983,7 @@ static int python_stop_script(void)
+ 	return 0;
+ }
+ 
++#ifdef HAVE_LIBTRACEEVENT
+ static int python_generate_script(struct tep_handle *pevent, const char *outfile)
+ {
+ 	int i, not_first, count, nr_events;
+@@ -2155,6 +2174,18 @@ static int python_generate_script(struct tep_handle *pevent, const char *outfile
+ 
+ 	return 0;
+ }
++#else
++static int python_generate_script(struct tep_handle *pevent __maybe_unused,
++				  const char *outfile __maybe_unused)
++{
++	fprintf(stderr, "Generating Python perf-script is not supported."
++		"  Install libtraceevent and rebuild perf to enable it.\n"
++		"For example:\n  # apt install libtraceevent-dev (ubuntu)"
++		"\n  # yum install libtraceevent-devel (Fedora)"
++		"\n  etc.\n");
++	return -1;
++}
++#endif
+ 
+ struct scripting_ops python_scripting_ops = {
+ 	.name			= "Python",
+diff --git a/tools/perf/util/sort.c b/tools/perf/util/sort.c
+index 093a0c8b2e3d3..2770105823bf0 100644
+--- a/tools/perf/util/sort.c
++++ b/tools/perf/util/sort.c
+@@ -611,12 +611,7 @@ static char *hist_entry__get_srcfile(struct hist_entry *e)
+ static int64_t
+ sort__srcfile_cmp(struct hist_entry *left, struct hist_entry *right)
+ {
+-	if (!left->srcfile)
+-		left->srcfile = hist_entry__get_srcfile(left);
+-	if (!right->srcfile)
+-		right->srcfile = hist_entry__get_srcfile(right);
+-
+-	return strcmp(right->srcfile, left->srcfile);
++	return sort__srcline_cmp(left, right);
+ }
+ 
+ static int64_t
+@@ -979,8 +974,7 @@ static int hist_entry__dso_to_filter(struct hist_entry *he, int type,
+ static int64_t
+ sort__sym_from_cmp(struct hist_entry *left, struct hist_entry *right)
+ {
+-	struct addr_map_symbol *from_l = &left->branch_info->from;
+-	struct addr_map_symbol *from_r = &right->branch_info->from;
++	struct addr_map_symbol *from_l, *from_r;
+ 
+ 	if (!left->branch_info || !right->branch_info)
+ 		return cmp_null(left->branch_info, right->branch_info);
+diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
+index 41882ae8452e5..cfb4109a37098 100644
+--- a/tools/perf/util/symbol-elf.c
++++ b/tools/perf/util/symbol-elf.c
+@@ -565,9 +565,12 @@ static u32 get_x86_64_plt_disp(const u8 *p)
+ 		n += 1;
+ 	/* jmp with 4-byte displacement */
+ 	if (p[n] == 0xff && p[n + 1] == 0x25) {
++		u32 disp;
++
+ 		n += 2;
+ 		/* Also add offset from start of entry to end of instruction */
+-		return n + 4 + le32toh(*(const u32 *)(p + n));
++		memcpy(&disp, p + n, sizeof(disp));
++		return n + 4 + le32toh(disp);
+ 	}
+ 	return 0;
+ }
+@@ -580,6 +583,7 @@ static bool get_plt_got_name(GElf_Shdr *shdr, size_t i,
+ 	const char *sym_name;
+ 	char *demangled;
+ 	GElf_Sym sym;
++	bool result;
+ 	u32 disp;
+ 
+ 	if (!di->sorted)
+@@ -606,9 +610,11 @@ static bool get_plt_got_name(GElf_Shdr *shdr, size_t i,
+ 
+ 	snprintf(buf, buf_sz, "%s@plt", sym_name);
+ 
++	result = *sym_name;
++
+ 	free(demangled);
+ 
+-	return *sym_name;
++	return result;
+ }
+ 
+ static int dso__synthesize_plt_got_symbols(struct dso *dso, Elf *elf,
+@@ -903,7 +909,7 @@ static int elf_read_build_id(Elf *elf, void *bf, size_t size)
+ 				size_t sz = min(size, descsz);
+ 				memcpy(bf, ptr, sz);
+ 				memset(bf + sz, 0, size - sz);
+-				err = descsz;
++				err = sz;
+ 				break;
+ 			}
+ 		}
+diff --git a/tools/perf/util/trace-event-scripting.c b/tools/perf/util/trace-event-scripting.c
+index 56175c53f9af7..bd0000300c774 100644
+--- a/tools/perf/util/trace-event-scripting.c
++++ b/tools/perf/util/trace-event-scripting.c
+@@ -9,7 +9,9 @@
+ #include <stdlib.h>
+ #include <string.h>
+ #include <errno.h>
++#ifdef HAVE_LIBTRACEEVENT
+ #include <traceevent/event-parse.h>
++#endif
+ 
+ #include "debug.h"
+ #include "trace-event.h"
+@@ -27,10 +29,11 @@ void scripting_context__update(struct scripting_context *c,
+ 			       struct addr_location *addr_al)
+ {
+ 	c->event_data = sample->raw_data;
++	c->pevent = NULL;
++#ifdef HAVE_LIBTRACEEVENT
+ 	if (evsel->tp_format)
+ 		c->pevent = evsel->tp_format->tep;
+-	else
+-		c->pevent = NULL;
++#endif
+ 	c->event = event;
+ 	c->sample = sample;
+ 	c->evsel = evsel;
+@@ -122,6 +125,7 @@ void setup_python_scripting(void)
+ }
+ #endif
+ 
++#ifdef HAVE_LIBTRACEEVENT
+ static void print_perl_unsupported_msg(void)
+ {
+ 	fprintf(stderr, "Perl scripting not supported."
+@@ -186,3 +190,4 @@ void setup_perl_scripting(void)
+ 	register_perl_scripting(&perl_scripting_ops);
+ }
+ #endif
++#endif
+diff --git a/tools/perf/util/tracepoint.c b/tools/perf/util/tracepoint.c
+index 89ef56c433110..92dd8b455b902 100644
+--- a/tools/perf/util/tracepoint.c
++++ b/tools/perf/util/tracepoint.c
+@@ -50,6 +50,7 @@ int is_valid_tracepoint(const char *event_string)
+ 				 sys_dirent->d_name, evt_dirent->d_name);
+ 			if (!strcmp(evt_path, event_string)) {
+ 				closedir(evt_dir);
++				put_events_file(dir_path);
+ 				closedir(sys_dir);
+ 				return 1;
+ 			}
+diff --git a/tools/testing/selftests/net/srv6_end_dt46_l3vpn_test.sh b/tools/testing/selftests/net/srv6_end_dt46_l3vpn_test.sh
+index aebaab8ce44cb..441eededa0312 100755
+--- a/tools/testing/selftests/net/srv6_end_dt46_l3vpn_test.sh
++++ b/tools/testing/selftests/net/srv6_end_dt46_l3vpn_test.sh
+@@ -292,6 +292,11 @@ setup_hs()
+ 	ip netns exec ${hsname} sysctl -wq net.ipv6.conf.all.accept_dad=0
+ 	ip netns exec ${hsname} sysctl -wq net.ipv6.conf.default.accept_dad=0
+ 
++	# disable the rp_filter otherwise the kernel gets confused about how
++	# to route decap ipv4 packets.
++	ip netns exec ${rtname} sysctl -wq net.ipv4.conf.all.rp_filter=0
++	ip netns exec ${rtname} sysctl -wq net.ipv4.conf.default.rp_filter=0
++
+ 	ip -netns ${hsname} link add veth0 type veth peer name ${rtveth}
+ 	ip -netns ${hsname} link set ${rtveth} netns ${rtname}
+ 	ip -netns ${hsname} addr add ${IPv6_HS_NETWORK}::${hs}/64 dev veth0 nodad
+@@ -316,11 +321,6 @@ setup_hs()
+ 	ip netns exec ${rtname} sysctl -wq net.ipv6.conf.${rtveth}.proxy_ndp=1
+ 	ip netns exec ${rtname} sysctl -wq net.ipv4.conf.${rtveth}.proxy_arp=1
+ 
+-	# disable the rp_filter otherwise the kernel gets confused about how
+-	# to route decap ipv4 packets.
+-	ip netns exec ${rtname} sysctl -wq net.ipv4.conf.all.rp_filter=0
+-	ip netns exec ${rtname} sysctl -wq net.ipv4.conf.${rtveth}.rp_filter=0
+-
+ 	ip netns exec ${rtname} sh -c "echo 1 > /proc/sys/net/vrf/strict_mode"
+ }
+ 
+diff --git a/tools/testing/selftests/netfilter/Makefile b/tools/testing/selftests/netfilter/Makefile
+index 4504ee07be08d..3686bfa6c58d7 100644
+--- a/tools/testing/selftests/netfilter/Makefile
++++ b/tools/testing/selftests/netfilter/Makefile
+@@ -8,8 +8,11 @@ TEST_PROGS := nft_trans_stress.sh nft_fib.sh nft_nat.sh bridge_brouter.sh \
+ 	ipip-conntrack-mtu.sh conntrack_tcp_unreplied.sh \
+ 	conntrack_vrf.sh nft_synproxy.sh rpath.sh
+ 
+-CFLAGS += $(shell pkg-config --cflags libmnl 2>/dev/null || echo "-I/usr/include/libmnl")
+-LDLIBS = -lmnl
++HOSTPKG_CONFIG := pkg-config
++
++CFLAGS += $(shell $(HOSTPKG_CONFIG) --cflags libmnl 2>/dev/null)
++LDLIBS += $(shell $(HOSTPKG_CONFIG) --libs libmnl 2>/dev/null || echo -lmnl)
++
+ TEST_GEN_FILES =  nf-queue connect_close
+ 
+ include ../lib.mk


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [gentoo-commits] proj/linux-patches:6.3 commit in: /
@ 2023-05-24 17:04 Mike Pagano
  0 siblings, 0 replies; 23+ messages in thread
From: Mike Pagano @ 2023-05-24 17:04 UTC (permalink / raw
  To: gentoo-commits

commit:     dc880142ada4026367f4e9068cae2284e8030680
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed May 24 17:04:33 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed May 24 17:04:33 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=dc880142

Linux patch 6.3.4

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README            |     4 +
 1003_linux-6.3.4.patch | 18035 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 18039 insertions(+)

diff --git a/0000_README b/0000_README
index 20b6f8fe..dfa966c1 100644
--- a/0000_README
+++ b/0000_README
@@ -55,6 +55,10 @@ Patch:  1002_linux-6.3.3.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.3.3
 
+Patch:  1003_linux-6.3.4.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.3.4
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1003_linux-6.3.4.patch b/1003_linux-6.3.4.patch
new file mode 100644
index 00000000..1ffdc205
--- /dev/null
+++ b/1003_linux-6.3.4.patch
@@ -0,0 +1,18035 @@
+diff --git a/Documentation/arm64/silicon-errata.rst b/Documentation/arm64/silicon-errata.rst
+index ec5f889d76819..e31f6c0687041 100644
+--- a/Documentation/arm64/silicon-errata.rst
++++ b/Documentation/arm64/silicon-errata.rst
+@@ -172,6 +172,8 @@ stable kernels.
+ +----------------+-----------------+-----------------+-----------------------------+
+ | NVIDIA         | Carmel Core     | N/A             | NVIDIA_CARMEL_CNP_ERRATUM   |
+ +----------------+-----------------+-----------------+-----------------------------+
++| NVIDIA         | T241 GICv3/4.x  | T241-FABRIC-4   | N/A                         |
+++----------------+-----------------+-----------------+-----------------------------+
+ +----------------+-----------------+-----------------+-----------------------------+
+ | Freescale/NXP  | LS2080A/LS1043A | A-008585        | FSL_ERRATUM_A008585         |
+ +----------------+-----------------+-----------------+-----------------------------+
+diff --git a/Documentation/devicetree/bindings/ata/ceva,ahci-1v84.yaml b/Documentation/devicetree/bindings/ata/ceva,ahci-1v84.yaml
+index 9b31f864e071e..71364c6081ff5 100644
+--- a/Documentation/devicetree/bindings/ata/ceva,ahci-1v84.yaml
++++ b/Documentation/devicetree/bindings/ata/ceva,ahci-1v84.yaml
+@@ -32,7 +32,7 @@ properties:
+     maxItems: 1
+ 
+   iommus:
+-    maxItems: 1
++    maxItems: 4
+ 
+   power-domains:
+     maxItems: 1
+diff --git a/Documentation/devicetree/bindings/display/msm/dsi-controller-main.yaml b/Documentation/devicetree/bindings/display/msm/dsi-controller-main.yaml
+index e75a3efe4dace..c888dd37e11fc 100644
+--- a/Documentation/devicetree/bindings/display/msm/dsi-controller-main.yaml
++++ b/Documentation/devicetree/bindings/display/msm/dsi-controller-main.yaml
+@@ -82,6 +82,18 @@ properties:
+       Indicates if the DSI controller is driving a panel which needs
+       2 DSI links.
+ 
++  qcom,master-dsi:
++    type: boolean
++    description: |
++      Indicates if the DSI controller is the master DSI controller when
++      qcom,dual-dsi-mode enabled.
++
++  qcom,sync-dual-dsi:
++    type: boolean
++    description: |
++      Indicates if the DSI controller needs to sync the other DSI controller
++      with MIPI DCS commands when qcom,dual-dsi-mode enabled.
++
+   assigned-clocks:
+     minItems: 2
+     maxItems: 4
+diff --git a/Makefile b/Makefile
+index a3108cf700a07..3c5b606690182 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 3
+-SUBLEVEL = 3
++SUBLEVEL = 4
+ EXTRAVERSION =
+ NAME = Hurr durr I'ma ninja sloth
+ 
+diff --git a/arch/arm/include/asm/assembler.h b/arch/arm/include/asm/assembler.h
+index 06b48ce23e1ca..505a306e0271a 100644
+--- a/arch/arm/include/asm/assembler.h
++++ b/arch/arm/include/asm/assembler.h
+@@ -244,19 +244,6 @@ THUMB(	fpreg	.req	r7	)
+ 	.endm
+ #endif
+ 
+-	.macro	local_bh_disable, ti, tmp
+-	ldr	\tmp, [\ti, #TI_PREEMPT]
+-	add	\tmp, \tmp, #SOFTIRQ_DISABLE_OFFSET
+-	str	\tmp, [\ti, #TI_PREEMPT]
+-	.endm
+-
+-	.macro	local_bh_enable_ti, ti, tmp
+-	get_thread_info \ti
+-	ldr	\tmp, [\ti, #TI_PREEMPT]
+-	sub	\tmp, \tmp, #SOFTIRQ_DISABLE_OFFSET
+-	str	\tmp, [\ti, #TI_PREEMPT]
+-	.endm
+-
+ #define USERL(l, x...)				\
+ 9999:	x;					\
+ 	.pushsection __ex_table,"a";		\
+diff --git a/arch/arm/mach-sa1100/jornada720_ssp.c b/arch/arm/mach-sa1100/jornada720_ssp.c
+index 1dbe98948ce30..9627c4cf3e41d 100644
+--- a/arch/arm/mach-sa1100/jornada720_ssp.c
++++ b/arch/arm/mach-sa1100/jornada720_ssp.c
+@@ -1,5 +1,5 @@
+ // SPDX-License-Identifier: GPL-2.0-only
+-/**
++/*
+  *  arch/arm/mac-sa1100/jornada720_ssp.c
+  *
+  *  Copyright (C) 2006/2007 Kristoffer Ericson <Kristoffer.Ericson@gmail.com>
+@@ -26,6 +26,7 @@ static unsigned long jornada_ssp_flags;
+ 
+ /**
+  * jornada_ssp_reverse - reverses input byte
++ * @byte: input byte to reverse
+  *
+  * we need to reverse all data we receive from the mcu due to its physical location
+  * returns : 01110111 -> 11101110
+@@ -46,6 +47,7 @@ EXPORT_SYMBOL(jornada_ssp_reverse);
+ 
+ /**
+  * jornada_ssp_byte - waits for ready ssp bus and sends byte
++ * @byte: input byte to transmit
+  *
+  * waits for fifo buffer to clear and then transmits, if it doesn't then we will
+  * timeout after <timeout> rounds. Needs mcu running before its called.
+@@ -77,6 +79,7 @@ EXPORT_SYMBOL(jornada_ssp_byte);
+ 
+ /**
+  * jornada_ssp_inout - decide if input is command or trading byte
++ * @byte: input byte to send (may be %TXDUMMY)
+  *
+  * returns : (jornada_ssp_byte(byte)) on success
+  *         : %-ETIMEDOUT on timeout failure
+diff --git a/arch/arm/vfp/entry.S b/arch/arm/vfp/entry.S
+index 6dabb47617781..62206ef250371 100644
+--- a/arch/arm/vfp/entry.S
++++ b/arch/arm/vfp/entry.S
+@@ -23,15 +23,9 @@
+ @
+ ENTRY(do_vfp)
+ 	mov	r1, r10
+-	mov	r3, r9
+- 	ldr	r4, .LCvfp
+-	ldr	pc, [r4]		@ call VFP entry point
++	str	lr, [sp, #-8]!
++	add	r3, sp, #4
++	str	r9, [r3]
++	bl	vfp_entry
++	ldr	pc, [sp], #8
+ ENDPROC(do_vfp)
+-
+-ENTRY(vfp_null_entry)
+-	ret	lr
+-ENDPROC(vfp_null_entry)
+-
+-	.align	2
+-.LCvfp:
+-	.word	vfp_vector
+diff --git a/arch/arm/vfp/vfphw.S b/arch/arm/vfp/vfphw.S
+index 60acd42e05786..a4610d0f32152 100644
+--- a/arch/arm/vfp/vfphw.S
++++ b/arch/arm/vfp/vfphw.S
+@@ -75,8 +75,6 @@
+ @  lr  = unrecognised instruction return address
+ @  IRQs enabled.
+ ENTRY(vfp_support_entry)
+-	local_bh_disable r1, r4
+-
+ 	ldr	r11, [r1, #TI_CPU]	@ CPU number
+ 	add	r10, r1, #TI_VFPSTATE	@ r10 = workspace
+ 
+@@ -174,14 +172,18 @@ vfp_hw_state_valid:
+ 					@ out before setting an FPEXC that
+ 					@ stops us reading stuff
+ 	VFPFMXR	FPEXC, r1		@ Restore FPEXC last
++	mov	sp, r3			@ we think we have handled things
++	pop	{lr}
+ 	sub	r2, r2, #4		@ Retry current instruction - if Thumb
+ 	str	r2, [sp, #S_PC]		@ mode it's two 16-bit instructions,
+ 					@ else it's one 32-bit instruction, so
+ 					@ always subtract 4 from the following
+ 					@ instruction address.
+-	local_bh_enable_ti r10, r4
+-	ret	r3			@ we think we have handled things
+ 
++local_bh_enable_and_ret:
++	adr	r0, .
++	mov	r1, #SOFTIRQ_DISABLE_OFFSET
++	b	__local_bh_enable_ip	@ tail call
+ 
+ look_for_VFP_exceptions:
+ 	@ Check for synchronous or asynchronous exception
+@@ -204,13 +206,13 @@ skip:
+ 	@ not recognised by VFP
+ 
+ 	DBGSTR	"not VFP"
+-	local_bh_enable_ti r10, r4
+-	ret	lr
++	b	local_bh_enable_and_ret
+ 
+ process_exception:
+ 	DBGSTR	"bounce"
++	mov	sp, r3			@ setup for a return to the user code.
++	pop	{lr}
+ 	mov	r2, sp			@ nothing stacked - regdump is at TOS
+-	mov	lr, r3			@ setup for a return to the user code.
+ 
+ 	@ Now call the C code to package up the bounce to the support code
+ 	@   r0 holds the trigger instruction
+diff --git a/arch/arm/vfp/vfpmodule.c b/arch/arm/vfp/vfpmodule.c
+index 01bc48d738478..349dcb944a937 100644
+--- a/arch/arm/vfp/vfpmodule.c
++++ b/arch/arm/vfp/vfpmodule.c
+@@ -32,10 +32,9 @@
+ /*
+  * Our undef handlers (in entry.S)
+  */
+-asmlinkage void vfp_support_entry(void);
+-asmlinkage void vfp_null_entry(void);
++asmlinkage void vfp_support_entry(u32, void *, u32, u32);
+ 
+-asmlinkage void (*vfp_vector)(void) = vfp_null_entry;
++static bool have_vfp __ro_after_init;
+ 
+ /*
+  * Dual-use variable.
+@@ -645,6 +644,25 @@ static int vfp_starting_cpu(unsigned int unused)
+ 	return 0;
+ }
+ 
++/*
++ * Entered with:
++ *
++ *  r0  = instruction opcode (32-bit ARM or two 16-bit Thumb)
++ *  r1  = thread_info pointer
++ *  r2  = PC value to resume execution after successful emulation
++ *  r3  = normal "successful" return address
++ *  lr  = unrecognised instruction return address
++ */
++asmlinkage void vfp_entry(u32 trigger, struct thread_info *ti, u32 resume_pc,
++			  u32 resume_return_address)
++{
++	if (unlikely(!have_vfp))
++		return;
++
++	local_bh_disable();
++	vfp_support_entry(trigger, ti, resume_pc, resume_return_address);
++}
++
+ #ifdef CONFIG_KERNEL_MODE_NEON
+ 
+ static int vfp_kmode_exception(struct pt_regs *regs, unsigned int instr)
+@@ -798,7 +816,6 @@ static int __init vfp_init(void)
+ 	vfpsid = fmrx(FPSID);
+ 	barrier();
+ 	unregister_undef_hook(&vfp_detect_hook);
+-	vfp_vector = vfp_null_entry;
+ 
+ 	pr_info("VFP support v0.3: ");
+ 	if (VFP_arch) {
+@@ -883,7 +900,7 @@ static int __init vfp_init(void)
+ 				  "arm/vfp:starting", vfp_starting_cpu,
+ 				  vfp_dying_cpu);
+ 
+-	vfp_vector = vfp_support_entry;
++	have_vfp = true;
+ 
+ 	thread_register_notifier(&vfp_notifier_block);
+ 	vfp_pm_init();
+diff --git a/arch/arm64/boot/dts/freescale/imx8mq-librem5.dtsi b/arch/arm64/boot/dts/freescale/imx8mq-librem5.dtsi
+index 6895bcc121651..de0dde01fd5c4 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mq-librem5.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mq-librem5.dtsi
+@@ -1299,7 +1299,6 @@
+ 	#address-cells = <1>;
+ 	#size-cells = <0>;
+ 	dr_mode = "otg";
+-	snps,dis_u3_susphy_quirk;
+ 	usb-role-switch;
+ 	status = "okay";
+ 
+diff --git a/arch/arm64/boot/dts/qcom/msm8996.dtsi b/arch/arm64/boot/dts/qcom/msm8996.dtsi
+index 66af9526c98ba..73da1a4d52462 100644
+--- a/arch/arm64/boot/dts/qcom/msm8996.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8996.dtsi
+@@ -3006,8 +3006,11 @@
+ 				interrupts = <0 131 IRQ_TYPE_LEVEL_HIGH>;
+ 				phys = <&hsusb_phy1>, <&ssusb_phy_0>;
+ 				phy-names = "usb2-phy", "usb3-phy";
++				snps,hird-threshold = /bits/ 8 <0>;
+ 				snps,dis_u2_susphy_quirk;
+ 				snps,dis_enblslpm_quirk;
++				snps,is-utmi-l1-suspend;
++				tx-fifo-resize;
+ 			};
+ 		};
+ 
+diff --git a/arch/arm64/boot/dts/qcom/sdm845-xiaomi-polaris.dts b/arch/arm64/boot/dts/qcom/sdm845-xiaomi-polaris.dts
+index 1b7fdbae6a2b5..56f2d855df78d 100644
+--- a/arch/arm64/boot/dts/qcom/sdm845-xiaomi-polaris.dts
++++ b/arch/arm64/boot/dts/qcom/sdm845-xiaomi-polaris.dts
+@@ -712,7 +712,5 @@
+ 	vdd-1.3-rfa-supply = <&vreg_l17a_1p3>;
+ 	vdd-3.3-ch0-supply = <&vreg_l25a_3p3>;
+ 	vdd-3.3-ch1-supply = <&vreg_l23a_3p3>;
+-
+-	qcom,snoc-host-cap-skip-quirk;
+ 	status = "okay";
+ };
+diff --git a/arch/arm64/boot/dts/qcom/sm6115p-lenovo-j606f.dts b/arch/arm64/boot/dts/qcom/sm6115p-lenovo-j606f.dts
+index 4ce2d905d70e1..42d89bab5d04b 100644
+--- a/arch/arm64/boot/dts/qcom/sm6115p-lenovo-j606f.dts
++++ b/arch/arm64/boot/dts/qcom/sm6115p-lenovo-j606f.dts
+@@ -52,6 +52,17 @@
+ 			gpio-key,wakeup;
+ 		};
+ 	};
++
++	reserved-memory {
++		ramoops@ffc00000 {
++			compatible = "ramoops";
++			reg = <0x0 0xffc00000 0x0 0x100000>;
++			record-size = <0x1000>;
++			console-size = <0x40000>;
++			ftrace-size = <0x20000>;
++			ecc-size = <16>;
++		};
++	};
+ };
+ 
+ &dispcc {
+diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
+index 4cd6762bda805..dc3c072e862f1 100644
+--- a/arch/arm64/include/asm/kvm_pgtable.h
++++ b/arch/arm64/include/asm/kvm_pgtable.h
+@@ -209,6 +209,7 @@ struct kvm_pgtable_visit_ctx {
+ 	kvm_pte_t				old;
+ 	void					*arg;
+ 	struct kvm_pgtable_mm_ops		*mm_ops;
++	u64					start;
+ 	u64					addr;
+ 	u64					end;
+ 	u32					level;
+diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
+index f5bcb0dc62673..7e89968bd2824 100644
+--- a/arch/arm64/kernel/mte.c
++++ b/arch/arm64/kernel/mte.c
+@@ -66,13 +66,10 @@ void mte_sync_tags(pte_t old_pte, pte_t pte)
+ 		return;
+ 
+ 	/* if PG_mte_tagged is set, tags have already been initialised */
+-	for (i = 0; i < nr_pages; i++, page++) {
+-		if (!page_mte_tagged(page)) {
++	for (i = 0; i < nr_pages; i++, page++)
++		if (!page_mte_tagged(page))
+ 			mte_sync_page_tags(page, old_pte, check_swap,
+ 					   pte_is_tagged);
+-			set_page_mte_tagged(page);
+-		}
+-	}
+ 
+ 	/* ensure the tags are visible before the PTE is set */
+ 	smp_wmb();
+diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
+index 3d61bd3e591d2..140f82300db5a 100644
+--- a/arch/arm64/kvm/hyp/pgtable.c
++++ b/arch/arm64/kvm/hyp/pgtable.c
+@@ -58,6 +58,7 @@
+ struct kvm_pgtable_walk_data {
+ 	struct kvm_pgtable_walker	*walker;
+ 
++	u64				start;
+ 	u64				addr;
+ 	u64				end;
+ };
+@@ -201,6 +202,7 @@ static inline int __kvm_pgtable_visit(struct kvm_pgtable_walk_data *data,
+ 		.old	= READ_ONCE(*ptep),
+ 		.arg	= data->walker->arg,
+ 		.mm_ops	= mm_ops,
++		.start	= data->start,
+ 		.addr	= data->addr,
+ 		.end	= data->end,
+ 		.level	= level,
+@@ -293,6 +295,7 @@ int kvm_pgtable_walk(struct kvm_pgtable *pgt, u64 addr, u64 size,
+ 		     struct kvm_pgtable_walker *walker)
+ {
+ 	struct kvm_pgtable_walk_data walk_data = {
++		.start	= ALIGN_DOWN(addr, PAGE_SIZE),
+ 		.addr	= ALIGN_DOWN(addr, PAGE_SIZE),
+ 		.end	= PAGE_ALIGN(walk_data.addr + size),
+ 		.walker	= walker,
+@@ -794,20 +797,43 @@ static bool stage2_pte_executable(kvm_pte_t pte)
+ 	return !(pte & KVM_PTE_LEAF_ATTR_HI_S2_XN);
+ }
+ 
++static u64 stage2_map_walker_phys_addr(const struct kvm_pgtable_visit_ctx *ctx,
++				       const struct stage2_map_data *data)
++{
++	u64 phys = data->phys;
++
++	/*
++	 * Stage-2 walks to update ownership data are communicated to the map
++	 * walker using an invalid PA. Avoid offsetting an already invalid PA,
++	 * which could overflow and make the address valid again.
++	 */
++	if (!kvm_phys_is_valid(phys))
++		return phys;
++
++	/*
++	 * Otherwise, work out the correct PA based on how far the walk has
++	 * gotten.
++	 */
++	return phys + (ctx->addr - ctx->start);
++}
++
+ static bool stage2_leaf_mapping_allowed(const struct kvm_pgtable_visit_ctx *ctx,
+ 					struct stage2_map_data *data)
+ {
++	u64 phys = stage2_map_walker_phys_addr(ctx, data);
++
+ 	if (data->force_pte && (ctx->level < (KVM_PGTABLE_MAX_LEVELS - 1)))
+ 		return false;
+ 
+-	return kvm_block_mapping_supported(ctx, data->phys);
++	return kvm_block_mapping_supported(ctx, phys);
+ }
+ 
+ static int stage2_map_walker_try_leaf(const struct kvm_pgtable_visit_ctx *ctx,
+ 				      struct stage2_map_data *data)
+ {
+ 	kvm_pte_t new;
+-	u64 granule = kvm_granule_size(ctx->level), phys = data->phys;
++	u64 phys = stage2_map_walker_phys_addr(ctx, data);
++	u64 granule = kvm_granule_size(ctx->level);
+ 	struct kvm_pgtable *pgt = data->mmu->pgt;
+ 	struct kvm_pgtable_mm_ops *mm_ops = ctx->mm_ops;
+ 
+@@ -841,8 +867,6 @@ static int stage2_map_walker_try_leaf(const struct kvm_pgtable_visit_ctx *ctx,
+ 
+ 	stage2_make_pte(ctx, new);
+ 
+-	if (kvm_phys_is_valid(phys))
+-		data->phys += granule;
+ 	return 0;
+ }
+ 
+diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
+index 4aadcfb017545..a7bb20055ce09 100644
+--- a/arch/arm64/mm/copypage.c
++++ b/arch/arm64/mm/copypage.c
+@@ -21,9 +21,10 @@ void copy_highpage(struct page *to, struct page *from)
+ 
+ 	copy_page(kto, kfrom);
+ 
++	if (kasan_hw_tags_enabled())
++		page_kasan_tag_reset(to);
++
+ 	if (system_supports_mte() && page_mte_tagged(from)) {
+-		if (kasan_hw_tags_enabled())
+-			page_kasan_tag_reset(to);
+ 		/* It's a new page, shouldn't have been tagged yet */
+ 		WARN_ON_ONCE(!try_page_mte_tagging(to));
+ 		mte_copy_page_tags(kto, kfrom);
+diff --git a/arch/parisc/include/asm/pdc.h b/arch/parisc/include/asm/pdc.h
+index 40793bef8429f..2b4fad8328e85 100644
+--- a/arch/parisc/include/asm/pdc.h
++++ b/arch/parisc/include/asm/pdc.h
+@@ -80,6 +80,7 @@ int pdc_do_firm_test_reset(unsigned long ftc_bitmap);
+ int pdc_do_reset(void);
+ int pdc_soft_power_info(unsigned long *power_reg);
+ int pdc_soft_power_button(int sw_control);
++int pdc_soft_power_button_panic(int sw_control);
+ void pdc_io_reset(void);
+ void pdc_io_reset_devices(void);
+ int pdc_iodc_getc(void);
+diff --git a/arch/parisc/kernel/firmware.c b/arch/parisc/kernel/firmware.c
+index 6817892a2c585..cc124d9f1f7f7 100644
+--- a/arch/parisc/kernel/firmware.c
++++ b/arch/parisc/kernel/firmware.c
+@@ -1232,15 +1232,18 @@ int __init pdc_soft_power_info(unsigned long *power_reg)
+ }
+ 
+ /*
+- * pdc_soft_power_button - Control the soft power button behaviour
+- * @sw_control: 0 for hardware control, 1 for software control 
++ * pdc_soft_power_button{_panic} - Control the soft power button behaviour
++ * @sw_control: 0 for hardware control, 1 for software control
+  *
+  *
+  * This PDC function places the soft power button under software or
+  * hardware control.
+- * Under software control the OS may control to when to allow to shut 
+- * down the system. Under hardware control pressing the power button 
++ * Under software control the OS may control to when to allow to shut
++ * down the system. Under hardware control pressing the power button
+  * powers off the system immediately.
++ *
++ * The _panic version relies on spin_trylock to prevent deadlock
++ * on panic path.
+  */
+ int pdc_soft_power_button(int sw_control)
+ {
+@@ -1254,6 +1257,22 @@ int pdc_soft_power_button(int sw_control)
+ 	return retval;
+ }
+ 
++int pdc_soft_power_button_panic(int sw_control)
++{
++	int retval;
++	unsigned long flags;
++
++	if (!spin_trylock_irqsave(&pdc_lock, flags)) {
++		pr_emerg("Couldn't enable soft power button\n");
++		return -EBUSY; /* ignored by the panic notifier */
++	}
++
++	retval = mem_pdc_call(PDC_SOFT_POWER, PDC_SOFT_POWER_ENABLE, __pa(pdc_result), sw_control);
++	spin_unlock_irqrestore(&pdc_lock, flags);
++
++	return retval;
++}
++
+ /*
+  * pdc_io_reset - Hack to avoid overlapping range registers of Bridges devices.
+  * Primarily a problem on T600 (which parisc-linux doesn't support) but
+diff --git a/arch/powerpc/kernel/dma-iommu.c b/arch/powerpc/kernel/dma-iommu.c
+index 038ce8d9061d1..8920862ffd791 100644
+--- a/arch/powerpc/kernel/dma-iommu.c
++++ b/arch/powerpc/kernel/dma-iommu.c
+@@ -144,7 +144,7 @@ static bool dma_iommu_bypass_supported(struct device *dev, u64 mask)
+ /* We support DMA to/from any memory page via the iommu */
+ int dma_iommu_dma_supported(struct device *dev, u64 mask)
+ {
+-	struct iommu_table *tbl = get_iommu_table_base(dev);
++	struct iommu_table *tbl;
+ 
+ 	if (dev_is_pci(dev) && dma_iommu_bypass_supported(dev, mask)) {
+ 		/*
+@@ -162,6 +162,8 @@ int dma_iommu_dma_supported(struct device *dev, u64 mask)
+ 		return 1;
+ 	}
+ 
++	tbl = get_iommu_table_base(dev);
++
+ 	if (!tbl) {
+ 		dev_err(dev, "Warning: IOMMU dma not supported: mask 0x%08llx, table unavailable\n", mask);
+ 		return 0;
+diff --git a/arch/powerpc/kernel/iommu.c b/arch/powerpc/kernel/iommu.c
+index ee95937bdaf14..b8b7a189cd3ce 100644
+--- a/arch/powerpc/kernel/iommu.c
++++ b/arch/powerpc/kernel/iommu.c
+@@ -517,7 +517,7 @@ int ppc_iommu_map_sg(struct device *dev, struct iommu_table *tbl,
+ 		/* Convert entry to a dma_addr_t */
+ 		entry += tbl->it_offset;
+ 		dma_addr = entry << tbl->it_page_shift;
+-		dma_addr |= (s->offset & ~IOMMU_PAGE_MASK(tbl));
++		dma_addr |= (vaddr & ~IOMMU_PAGE_MASK(tbl));
+ 
+ 		DBG("  - %lu pages, entry: %lx, dma_addr: %lx\n",
+ 			    npages, entry, dma_addr);
+@@ -904,6 +904,7 @@ void *iommu_alloc_coherent(struct device *dev, struct iommu_table *tbl,
+ 	unsigned int order;
+ 	unsigned int nio_pages, io_order;
+ 	struct page *page;
++	int tcesize = (1 << tbl->it_page_shift);
+ 
+ 	size = PAGE_ALIGN(size);
+ 	order = get_order(size);
+@@ -930,7 +931,8 @@ void *iommu_alloc_coherent(struct device *dev, struct iommu_table *tbl,
+ 	memset(ret, 0, size);
+ 
+ 	/* Set up tces to cover the allocated range */
+-	nio_pages = size >> tbl->it_page_shift;
++	nio_pages = IOMMU_PAGE_ALIGN(size, tbl) >> tbl->it_page_shift;
++
+ 	io_order = get_iommu_order(size, tbl);
+ 	mapping = iommu_alloc(dev, tbl, ret, nio_pages, DMA_BIDIRECTIONAL,
+ 			      mask >> tbl->it_page_shift, io_order, 0);
+@@ -938,7 +940,8 @@ void *iommu_alloc_coherent(struct device *dev, struct iommu_table *tbl,
+ 		free_pages((unsigned long)ret, order);
+ 		return NULL;
+ 	}
+-	*dma_handle = mapping;
++
++	*dma_handle = mapping | ((u64)ret & (tcesize - 1));
+ 	return ret;
+ }
+ 
+@@ -949,7 +952,7 @@ void iommu_free_coherent(struct iommu_table *tbl, size_t size,
+ 		unsigned int nio_pages;
+ 
+ 		size = PAGE_ALIGN(size);
+-		nio_pages = size >> tbl->it_page_shift;
++		nio_pages = IOMMU_PAGE_ALIGN(size, tbl) >> tbl->it_page_shift;
+ 		iommu_free(tbl, dma_handle, nio_pages);
+ 		size = PAGE_ALIGN(size);
+ 		free_pages((unsigned long)vaddr, get_order(size));
+diff --git a/arch/powerpc/kernel/legacy_serial.c b/arch/powerpc/kernel/legacy_serial.c
+index f048c424c525b..1a3b7f3513b40 100644
+--- a/arch/powerpc/kernel/legacy_serial.c
++++ b/arch/powerpc/kernel/legacy_serial.c
+@@ -171,11 +171,11 @@ static int __init add_legacy_soc_port(struct device_node *np,
+ 	/* We only support ports that have a clock frequency properly
+ 	 * encoded in the device-tree.
+ 	 */
+-	if (of_get_property(np, "clock-frequency", NULL) == NULL)
++	if (!of_property_present(np, "clock-frequency"))
+ 		return -1;
+ 
+ 	/* if reg-offset don't try to use it */
+-	if ((of_get_property(np, "reg-offset", NULL) != NULL))
++	if (of_property_present(np, "reg-offset"))
+ 		return -1;
+ 
+ 	/* if rtas uses this device, don't try to use it as well */
+@@ -237,7 +237,7 @@ static int __init add_legacy_isa_port(struct device_node *np,
+ 	 * Note: Don't even try on P8 lpc, we know it's not directly mapped
+ 	 */
+ 	if (!of_device_is_compatible(isa_brg, "ibm,power8-lpc") ||
+-	    of_get_property(isa_brg, "ranges", NULL)) {
++	    of_property_present(isa_brg, "ranges")) {
+ 		taddr = of_translate_address(np, reg);
+ 		if (taddr == OF_BAD_ADDR)
+ 			taddr = 0;
+@@ -268,7 +268,7 @@ static int __init add_legacy_pci_port(struct device_node *np,
+ 	 * compatible UARTs on PCI need all sort of quirks (port offsets
+ 	 * etc...) that this code doesn't know about
+ 	 */
+-	if (of_get_property(np, "clock-frequency", NULL) == NULL)
++	if (!of_property_present(np, "clock-frequency"))
+ 		return -1;
+ 
+ 	/* Get the PCI address. Assume BAR 0 */
+diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
+index 26245aaf12b8b..2297aa764ecdb 100644
+--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
++++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
+@@ -1040,8 +1040,8 @@ void radix__ptep_set_access_flags(struct vm_area_struct *vma, pte_t *ptep,
+ 				  pte_t entry, unsigned long address, int psize)
+ {
+ 	struct mm_struct *mm = vma->vm_mm;
+-	unsigned long set = pte_val(entry) & (_PAGE_DIRTY | _PAGE_ACCESSED |
+-					      _PAGE_RW | _PAGE_EXEC);
++	unsigned long set = pte_val(entry) & (_PAGE_DIRTY | _PAGE_SOFT_DIRTY |
++					      _PAGE_ACCESSED | _PAGE_RW | _PAGE_EXEC);
+ 
+ 	unsigned long change = pte_val(entry) ^ pte_val(*ptep);
+ 	/*
+diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
+index e93aefcfb83f2..37043dfc1addd 100644
+--- a/arch/powerpc/net/bpf_jit_comp.c
++++ b/arch/powerpc/net/bpf_jit_comp.c
+@@ -101,6 +101,8 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
+ 		bpf_hdr = jit_data->header;
+ 		proglen = jit_data->proglen;
+ 		extra_pass = true;
++		/* During extra pass, ensure index is reset before repopulating extable entries */
++		cgctx.exentry_idx = 0;
+ 		goto skip_init_ctx;
+ 	}
+ 
+diff --git a/arch/powerpc/platforms/44x/iss4xx.c b/arch/powerpc/platforms/44x/iss4xx.c
+index c5f82591408c1..812765cf06324 100644
+--- a/arch/powerpc/platforms/44x/iss4xx.c
++++ b/arch/powerpc/platforms/44x/iss4xx.c
+@@ -52,7 +52,7 @@ static void __init iss4xx_init_irq(void)
+ 
+ 	/* Find top level interrupt controller */
+ 	for_each_node_with_property(np, "interrupt-controller") {
+-		if (of_get_property(np, "interrupts", NULL) == NULL)
++		if (!of_property_present(np, "interrupts"))
+ 			break;
+ 	}
+ 	if (np == NULL)
+diff --git a/arch/powerpc/platforms/44x/ppc476.c b/arch/powerpc/platforms/44x/ppc476.c
+index 7c91ac5a5241b..70556fd10f6b4 100644
+--- a/arch/powerpc/platforms/44x/ppc476.c
++++ b/arch/powerpc/platforms/44x/ppc476.c
+@@ -122,7 +122,7 @@ static void __init ppc47x_init_irq(void)
+ 
+ 	/* Find top level interrupt controller */
+ 	for_each_node_with_property(np, "interrupt-controller") {
+-		if (of_get_property(np, "interrupts", NULL) == NULL)
++		if (!of_property_present(np, "interrupts"))
+ 			break;
+ 	}
+ 	if (np == NULL)
+diff --git a/arch/powerpc/platforms/cell/spu_manage.c b/arch/powerpc/platforms/cell/spu_manage.c
+index f1ac4c7420690..74567b32c48c2 100644
+--- a/arch/powerpc/platforms/cell/spu_manage.c
++++ b/arch/powerpc/platforms/cell/spu_manage.c
+@@ -402,7 +402,7 @@ static int __init of_has_vicinity(void)
+ 	struct device_node *dn;
+ 
+ 	for_each_node_by_type(dn, "spe") {
+-		if (of_find_property(dn, "vicinity", NULL))  {
++		if (of_property_present(dn, "vicinity"))  {
+ 			of_node_put(dn);
+ 			return 1;
+ 		}
+diff --git a/arch/powerpc/platforms/powermac/pic.c b/arch/powerpc/platforms/powermac/pic.c
+index 8c8d8e0a7d137..7425f94e271e5 100644
+--- a/arch/powerpc/platforms/powermac/pic.c
++++ b/arch/powerpc/platforms/powermac/pic.c
+@@ -475,8 +475,7 @@ static int __init pmac_pic_probe_mpic(void)
+ 
+ 	/* We can have up to 2 MPICs cascaded */
+ 	for_each_node_by_type(np, "open-pic") {
+-		if (master == NULL &&
+-		    of_get_property(np, "interrupts", NULL) == NULL)
++		if (master == NULL && !of_property_present(np, "interrupts"))
+ 			master = of_node_get(np);
+ 		else if (slave == NULL)
+ 			slave = of_node_get(np);
+diff --git a/arch/powerpc/platforms/powernv/opal-lpc.c b/arch/powerpc/platforms/powernv/opal-lpc.c
+index d129d6d45a500..a16f07cdab267 100644
+--- a/arch/powerpc/platforms/powernv/opal-lpc.c
++++ b/arch/powerpc/platforms/powernv/opal-lpc.c
+@@ -403,7 +403,7 @@ void __init opal_lpc_init(void)
+ 		return;
+ 
+ 	/* Does it support direct mapping ? */
+-	if (of_get_property(np, "ranges", NULL)) {
++	if (of_property_present(np, "ranges")) {
+ 		pr_info("OPAL: Found memory mapped LPC bus on chip %d\n",
+ 			opal_lpc_chip_id);
+ 		isa_bridge_init_non_pci(np);
+diff --git a/arch/powerpc/platforms/pseries/hotplug-cpu.c b/arch/powerpc/platforms/pseries/hotplug-cpu.c
+index 982e5e4b5e065..1a3cb313976a4 100644
+--- a/arch/powerpc/platforms/pseries/hotplug-cpu.c
++++ b/arch/powerpc/platforms/pseries/hotplug-cpu.c
+@@ -493,7 +493,7 @@ static bool valid_cpu_drc_index(struct device_node *parent, u32 drc_index)
+ 	bool found = false;
+ 	int rc, index;
+ 
+-	if (of_find_property(parent, "ibm,drc-info", NULL))
++	if (of_property_present(parent, "ibm,drc-info"))
+ 		return drc_info_valid_index(parent, drc_index);
+ 
+ 	/* Note that the format of the ibm,drc-indexes array is
+diff --git a/arch/powerpc/platforms/pseries/iommu.c b/arch/powerpc/platforms/pseries/iommu.c
+index c74b71d4733d4..a8acd3b402d7c 100644
+--- a/arch/powerpc/platforms/pseries/iommu.c
++++ b/arch/powerpc/platforms/pseries/iommu.c
+@@ -85,19 +85,24 @@ static struct iommu_table_group *iommu_pseries_alloc_group(int node)
+ static void iommu_pseries_free_group(struct iommu_table_group *table_group,
+ 		const char *node_name)
+ {
+-	struct iommu_table *tbl;
+-
+ 	if (!table_group)
+ 		return;
+ 
+-	tbl = table_group->tables[0];
+ #ifdef CONFIG_IOMMU_API
+ 	if (table_group->group) {
+ 		iommu_group_put(table_group->group);
+ 		BUG_ON(table_group->group);
+ 	}
+ #endif
+-	iommu_tce_table_put(tbl);
++
++	/* Default DMA window table is at index 0, while DDW at 1. SR-IOV
++	 * adapters only have table on index 1.
++	 */
++	if (table_group->tables[0])
++		iommu_tce_table_put(table_group->tables[0]);
++
++	if (table_group->tables[1])
++		iommu_tce_table_put(table_group->tables[1]);
+ 
+ 	kfree(table_group);
+ }
+diff --git a/arch/powerpc/platforms/pseries/vio.c b/arch/powerpc/platforms/pseries/vio.c
+index 770df9351aaa9..d54306a936d55 100644
+--- a/arch/powerpc/platforms/pseries/vio.c
++++ b/arch/powerpc/platforms/pseries/vio.c
+@@ -1440,7 +1440,7 @@ struct vio_dev *vio_register_device_node(struct device_node *of_node)
+ 	viodev->dev.bus = &vio_bus_type;
+ 	viodev->dev.release = vio_dev_release;
+ 
+-	if (of_get_property(viodev->dev.of_node, "ibm,my-dma-window", NULL)) {
++	if (of_property_present(viodev->dev.of_node, "ibm,my-dma-window")) {
+ 		if (firmware_has_feature(FW_FEATURE_CMO))
+ 			vio_cmo_set_dma_ops(viodev);
+ 		else
+diff --git a/arch/powerpc/sysdev/mpic_msgr.c b/arch/powerpc/sysdev/mpic_msgr.c
+index d75064fb7d12f..1a3ac0b5dd89c 100644
+--- a/arch/powerpc/sysdev/mpic_msgr.c
++++ b/arch/powerpc/sysdev/mpic_msgr.c
+@@ -116,7 +116,7 @@ static unsigned int mpic_msgr_number_of_blocks(void)
+ 
+ 		for (;;) {
+ 			snprintf(buf, sizeof(buf), "mpic-msgr-block%d", count);
+-			if (!of_find_property(aliases, buf, NULL))
++			if (!of_property_present(aliases, buf))
+ 				break;
+ 
+ 			count += 1;
+diff --git a/arch/riscv/kernel/image-vars.h b/arch/riscv/kernel/image-vars.h
+index 7e2962ef73f92..15616155008cc 100644
+--- a/arch/riscv/kernel/image-vars.h
++++ b/arch/riscv/kernel/image-vars.h
+@@ -23,8 +23,6 @@
+  * linked at. The routines below are all implemented in assembler in a
+  * position independent manner
+  */
+-__efistub_strcmp		= strcmp;
+-
+ __efistub__start		= _start;
+ __efistub__start_kernel		= _start_kernel;
+ __efistub__end			= _end;
+diff --git a/arch/riscv/kernel/probes/Makefile b/arch/riscv/kernel/probes/Makefile
+index c40139e9ca478..8265ff497977c 100644
+--- a/arch/riscv/kernel/probes/Makefile
++++ b/arch/riscv/kernel/probes/Makefile
+@@ -4,3 +4,5 @@ obj-$(CONFIG_RETHOOK)		+= rethook.o rethook_trampoline.o
+ obj-$(CONFIG_KPROBES_ON_FTRACE)	+= ftrace.o
+ obj-$(CONFIG_UPROBES)		+= uprobes.o decode-insn.o simulate-insn.o
+ CFLAGS_REMOVE_simulate-insn.o = $(CC_FLAGS_FTRACE)
++CFLAGS_REMOVE_rethook.o = $(CC_FLAGS_FTRACE)
++CFLAGS_REMOVE_rethook_trampoline.o = $(CC_FLAGS_FTRACE)
+diff --git a/arch/s390/crypto/chacha-glue.c b/arch/s390/crypto/chacha-glue.c
+index 7752bd314558e..5fae187f947a0 100644
+--- a/arch/s390/crypto/chacha-glue.c
++++ b/arch/s390/crypto/chacha-glue.c
+@@ -82,7 +82,7 @@ void chacha_crypt_arch(u32 *state, u8 *dst, const u8 *src,
+ 	 * it cannot handle a block of data or less, but otherwise
+ 	 * it can handle data of arbitrary size
+ 	 */
+-	if (bytes <= CHACHA_BLOCK_SIZE || nrounds != 20)
++	if (bytes <= CHACHA_BLOCK_SIZE || nrounds != 20 || !MACHINE_HAS_VX)
+ 		chacha_crypt_generic(state, dst, src, bytes, nrounds);
+ 	else
+ 		chacha20_crypt_s390(state, dst, src, bytes,
+diff --git a/arch/s390/kernel/Makefile b/arch/s390/kernel/Makefile
+index 8983837b35659..6b2a051e1f8a2 100644
+--- a/arch/s390/kernel/Makefile
++++ b/arch/s390/kernel/Makefile
+@@ -10,6 +10,7 @@ CFLAGS_REMOVE_ftrace.o		= $(CC_FLAGS_FTRACE)
+ 
+ # Do not trace early setup code
+ CFLAGS_REMOVE_early.o		= $(CC_FLAGS_FTRACE)
++CFLAGS_REMOVE_rethook.o		= $(CC_FLAGS_FTRACE)
+ 
+ endif
+ 
+diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
+index dd61752f4c966..4070a01c11b72 100644
+--- a/arch/x86/kernel/Makefile
++++ b/arch/x86/kernel/Makefile
+@@ -17,6 +17,7 @@ CFLAGS_REMOVE_ftrace.o = -pg
+ CFLAGS_REMOVE_early_printk.o = -pg
+ CFLAGS_REMOVE_head64.o = -pg
+ CFLAGS_REMOVE_sev.o = -pg
++CFLAGS_REMOVE_rethook.o = -pg
+ endif
+ 
+ KASAN_SANITIZE_head$(BITS).o				:= n
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+index d9ed3108c17af..bac977da4eb5b 100644
+--- a/block/bfq-iosched.c
++++ b/block/bfq-iosched.c
+@@ -649,6 +649,8 @@ retry:
+ 					sched_data->service_tree[i].wsum;
+ 			}
+ 		}
++		if (!wsum)
++			continue;
+ 		limit = DIV_ROUND_CLOSEST(limit * entity->weight, wsum);
+ 		if (entity->allocated >= limit) {
+ 			bfq_log_bfqq(bfqq->bfqd, bfqq,
+diff --git a/crypto/jitterentropy-kcapi.c b/crypto/jitterentropy-kcapi.c
+index 2d115bec15aeb..b9edfaa51b273 100644
+--- a/crypto/jitterentropy-kcapi.c
++++ b/crypto/jitterentropy-kcapi.c
+@@ -37,6 +37,7 @@
+  * DAMAGE.
+  */
+ 
++#include <linux/fips.h>
+ #include <linux/kernel.h>
+ #include <linux/module.h>
+ #include <linux/slab.h>
+@@ -59,11 +60,6 @@ void jent_zfree(void *ptr)
+ 	kfree_sensitive(ptr);
+ }
+ 
+-void jent_panic(char *s)
+-{
+-	panic("%s", s);
+-}
+-
+ void jent_memcpy(void *dest, const void *src, unsigned int n)
+ {
+ 	memcpy(dest, src, n);
+@@ -102,7 +98,6 @@ void jent_get_nstime(__u64 *out)
+ struct jitterentropy {
+ 	spinlock_t jent_lock;
+ 	struct rand_data *entropy_collector;
+-	unsigned int reset_cnt;
+ };
+ 
+ static int jent_kcapi_init(struct crypto_tfm *tfm)
+@@ -138,32 +133,30 @@ static int jent_kcapi_random(struct crypto_rng *tfm,
+ 
+ 	spin_lock(&rng->jent_lock);
+ 
+-	/* Return a permanent error in case we had too many resets in a row. */
+-	if (rng->reset_cnt > (1<<10)) {
+-		ret = -EFAULT;
+-		goto out;
+-	}
+-
+ 	ret = jent_read_entropy(rng->entropy_collector, rdata, dlen);
+ 
+-	/* Reset RNG in case of health failures */
+-	if (ret < -1) {
+-		pr_warn_ratelimited("Reset Jitter RNG due to health test failure: %s failure\n",
+-				    (ret == -2) ? "Repetition Count Test" :
+-						  "Adaptive Proportion Test");
+-
+-		rng->reset_cnt++;
+-
++	if (ret == -3) {
++		/* Handle permanent health test error */
++		/*
++		 * If the kernel was booted with fips=1, it implies that
++		 * the entire kernel acts as a FIPS 140 module. In this case
++		 * an SP800-90B permanent health test error is treated as
++		 * a FIPS module error.
++		 */
++		if (fips_enabled)
++			panic("Jitter RNG permanent health test failure\n");
++
++		pr_err("Jitter RNG permanent health test failure\n");
++		ret = -EFAULT;
++	} else if (ret == -2) {
++		/* Handle intermittent health test error */
++		pr_warn_ratelimited("Reset Jitter RNG due to intermittent health test failure\n");
+ 		ret = -EAGAIN;
+-	} else {
+-		rng->reset_cnt = 0;
+-
+-		/* Convert the Jitter RNG error into a usable error code */
+-		if (ret == -1)
+-			ret = -EINVAL;
++	} else if (ret == -1) {
++		/* Handle other errors */
++		ret = -EINVAL;
+ 	}
+ 
+-out:
+ 	spin_unlock(&rng->jent_lock);
+ 
+ 	return ret;
+@@ -197,6 +190,10 @@ static int __init jent_mod_init(void)
+ 
+ 	ret = jent_entropy_init();
+ 	if (ret) {
++		/* Handle permanent health test error */
++		if (fips_enabled)
++			panic("jitterentropy: Initialization failed with host not compliant with requirements: %d\n", ret);
++
+ 		pr_info("jitterentropy: Initialization failed with host not compliant with requirements: %d\n", ret);
+ 		return -EFAULT;
+ 	}
+diff --git a/crypto/jitterentropy.c b/crypto/jitterentropy.c
+index 93bff32138238..22f48bf4c6f57 100644
+--- a/crypto/jitterentropy.c
++++ b/crypto/jitterentropy.c
+@@ -85,10 +85,14 @@ struct rand_data {
+ 				      * bit generation */
+ 
+ 	/* Repetition Count Test */
+-	int rct_count;			/* Number of stuck values */
++	unsigned int rct_count;			/* Number of stuck values */
+ 
+-	/* Adaptive Proportion Test for a significance level of 2^-30 */
++	/* Intermittent health test failure threshold of 2^-30 */
++#define JENT_RCT_CUTOFF		30	/* Taken from SP800-90B sec 4.4.1 */
+ #define JENT_APT_CUTOFF		325	/* Taken from SP800-90B sec 4.4.2 */
++	/* Permanent health test failure threshold of 2^-60 */
++#define JENT_RCT_CUTOFF_PERMANENT	60
++#define JENT_APT_CUTOFF_PERMANENT	355
+ #define JENT_APT_WINDOW_SIZE	512	/* Data window size */
+ 	/* LSB of time stamp to process */
+ #define JENT_APT_LSB		16
+@@ -97,8 +101,6 @@ struct rand_data {
+ 	unsigned int apt_count;		/* APT counter */
+ 	unsigned int apt_base;		/* APT base reference */
+ 	unsigned int apt_base_set:1;	/* APT base reference set? */
+-
+-	unsigned int health_failure:1;	/* Permanent health failure */
+ };
+ 
+ /* Flags that can be used to initialize the RNG */
+@@ -169,19 +171,26 @@ static void jent_apt_insert(struct rand_data *ec, unsigned int delta_masked)
+ 		return;
+ 	}
+ 
+-	if (delta_masked == ec->apt_base) {
++	if (delta_masked == ec->apt_base)
+ 		ec->apt_count++;
+ 
+-		if (ec->apt_count >= JENT_APT_CUTOFF)
+-			ec->health_failure = 1;
+-	}
+-
+ 	ec->apt_observations++;
+ 
+ 	if (ec->apt_observations >= JENT_APT_WINDOW_SIZE)
+ 		jent_apt_reset(ec, delta_masked);
+ }
+ 
++/* APT health test failure detection */
++static int jent_apt_permanent_failure(struct rand_data *ec)
++{
++	return (ec->apt_count >= JENT_APT_CUTOFF_PERMANENT) ? 1 : 0;
++}
++
++static int jent_apt_failure(struct rand_data *ec)
++{
++	return (ec->apt_count >= JENT_APT_CUTOFF) ? 1 : 0;
++}
++
+ /***************************************************************************
+  * Stuck Test and its use as Repetition Count Test
+  *
+@@ -206,55 +215,14 @@ static void jent_apt_insert(struct rand_data *ec, unsigned int delta_masked)
+  */
+ static void jent_rct_insert(struct rand_data *ec, int stuck)
+ {
+-	/*
+-	 * If we have a count less than zero, a previous RCT round identified
+-	 * a failure. We will not overwrite it.
+-	 */
+-	if (ec->rct_count < 0)
+-		return;
+-
+ 	if (stuck) {
+ 		ec->rct_count++;
+-
+-		/*
+-		 * The cutoff value is based on the following consideration:
+-		 * alpha = 2^-30 as recommended in FIPS 140-2 IG 9.8.
+-		 * In addition, we require an entropy value H of 1/OSR as this
+-		 * is the minimum entropy required to provide full entropy.
+-		 * Note, we collect 64 * OSR deltas for inserting them into
+-		 * the entropy pool which should then have (close to) 64 bits
+-		 * of entropy.
+-		 *
+-		 * Note, ec->rct_count (which equals to value B in the pseudo
+-		 * code of SP800-90B section 4.4.1) starts with zero. Hence
+-		 * we need to subtract one from the cutoff value as calculated
+-		 * following SP800-90B.
+-		 */
+-		if ((unsigned int)ec->rct_count >= (31 * ec->osr)) {
+-			ec->rct_count = -1;
+-			ec->health_failure = 1;
+-		}
+ 	} else {
++		/* Reset RCT */
+ 		ec->rct_count = 0;
+ 	}
+ }
+ 
+-/*
+- * Is there an RCT health test failure?
+- *
+- * @ec [in] Reference to entropy collector
+- *
+- * @return
+- * 	0 No health test failure
+- * 	1 Permanent health test failure
+- */
+-static int jent_rct_failure(struct rand_data *ec)
+-{
+-	if (ec->rct_count < 0)
+-		return 1;
+-	return 0;
+-}
+-
+ static inline __u64 jent_delta(__u64 prev, __u64 next)
+ {
+ #define JENT_UINT64_MAX		(__u64)(~((__u64) 0))
+@@ -303,18 +271,26 @@ static int jent_stuck(struct rand_data *ec, __u64 current_delta)
+ 	return 0;
+ }
+ 
+-/*
+- * Report any health test failures
+- *
+- * @ec [in] Reference to entropy collector
+- *
+- * @return
+- * 	0 No health test failure
+- * 	1 Permanent health test failure
+- */
++/* RCT health test failure detection */
++static int jent_rct_permanent_failure(struct rand_data *ec)
++{
++	return (ec->rct_count >= JENT_RCT_CUTOFF_PERMANENT) ? 1 : 0;
++}
++
++static int jent_rct_failure(struct rand_data *ec)
++{
++	return (ec->rct_count >= JENT_RCT_CUTOFF) ? 1 : 0;
++}
++
++/* Report of health test failures */
+ static int jent_health_failure(struct rand_data *ec)
+ {
+-	return ec->health_failure;
++	return jent_rct_failure(ec) | jent_apt_failure(ec);
++}
++
++static int jent_permanent_health_failure(struct rand_data *ec)
++{
++	return jent_rct_permanent_failure(ec) | jent_apt_permanent_failure(ec);
+ }
+ 
+ /***************************************************************************
+@@ -600,8 +576,8 @@ static void jent_gen_entropy(struct rand_data *ec)
+  *
+  * The following error codes can occur:
+  *	-1	entropy_collector is NULL
+- *	-2	RCT failed
+- *	-3	APT test failed
++ *	-2	Intermittent health failure
++ *	-3	Permanent health failure
+  */
+ int jent_read_entropy(struct rand_data *ec, unsigned char *data,
+ 		      unsigned int len)
+@@ -616,39 +592,23 @@ int jent_read_entropy(struct rand_data *ec, unsigned char *data,
+ 
+ 		jent_gen_entropy(ec);
+ 
+-		if (jent_health_failure(ec)) {
+-			int ret;
+-
+-			if (jent_rct_failure(ec))
+-				ret = -2;
+-			else
+-				ret = -3;
+-
++		if (jent_permanent_health_failure(ec)) {
+ 			/*
+-			 * Re-initialize the noise source
+-			 *
+-			 * If the health test fails, the Jitter RNG remains
+-			 * in failure state and will return a health failure
+-			 * during next invocation.
++			 * At this point, the Jitter RNG instance is considered
++			 * as a failed instance. There is no rerun of the
++			 * startup test any more, because the caller
++			 * is assumed to not further use this instance.
+ 			 */
+-			if (jent_entropy_init())
+-				return ret;
+-
+-			/* Set APT to initial state */
+-			jent_apt_reset(ec, 0);
+-			ec->apt_base_set = 0;
+-
+-			/* Set RCT to initial state */
+-			ec->rct_count = 0;
+-
+-			/* Re-enable Jitter RNG */
+-			ec->health_failure = 0;
+-
++			return -3;
++		} else if (jent_health_failure(ec)) {
+ 			/*
+-			 * Return the health test failure status to the
+-			 * caller as the generated value is not appropriate.
++			 * Perform startup health tests and return permanent
++			 * error if it fails.
+ 			 */
+-			return ret;
++			if (jent_entropy_init())
++				return -3;
++
++			return -2;
+ 		}
+ 
+ 		if ((DATA_SIZE_BITS / 8) < len)
+diff --git a/crypto/jitterentropy.h b/crypto/jitterentropy.h
+index b7397b617ef05..5cc583f6bc6b8 100644
+--- a/crypto/jitterentropy.h
++++ b/crypto/jitterentropy.h
+@@ -2,7 +2,6 @@
+ 
+ extern void *jent_zalloc(unsigned int len);
+ extern void jent_zfree(void *ptr);
+-extern void jent_panic(char *s);
+ extern void jent_memcpy(void *dest, const void *src, unsigned int n);
+ extern void jent_get_nstime(__u64 *out);
+ 
+diff --git a/drivers/accel/habanalabs/common/device.c b/drivers/accel/habanalabs/common/device.c
+index 9933e5858a363..c91436609f080 100644
+--- a/drivers/accel/habanalabs/common/device.c
++++ b/drivers/accel/habanalabs/common/device.c
+@@ -423,6 +423,9 @@ static void hpriv_release(struct kref *ref)
+ 	mutex_destroy(&hpriv->ctx_lock);
+ 	mutex_destroy(&hpriv->restore_phase_mutex);
+ 
++	/* There should be no memory buffers at this point and handles IDR can be destroyed */
++	hl_mem_mgr_idr_destroy(&hpriv->mem_mgr);
++
+ 	/* Device should be reset if reset-upon-device-release is enabled, or if there is a pending
+ 	 * reset that waits for device release.
+ 	 */
+@@ -514,6 +517,10 @@ static int hl_device_release(struct inode *inode, struct file *filp)
+ 	}
+ 
+ 	hl_ctx_mgr_fini(hdev, &hpriv->ctx_mgr);
++
++	/* Memory buffers might be still in use at this point and thus the handles IDR destruction
++	 * is postponed to hpriv_release().
++	 */
+ 	hl_mem_mgr_fini(&hpriv->mem_mgr);
+ 
+ 	hdev->compute_ctx_in_release = 1;
+@@ -887,6 +894,7 @@ static int device_early_init(struct hl_device *hdev)
+ 
+ free_cb_mgr:
+ 	hl_mem_mgr_fini(&hdev->kernel_mem_mgr);
++	hl_mem_mgr_idr_destroy(&hdev->kernel_mem_mgr);
+ free_chip_info:
+ 	kfree(hdev->hl_chip_info);
+ free_prefetch_wq:
+@@ -930,6 +938,7 @@ static void device_early_fini(struct hl_device *hdev)
+ 	mutex_destroy(&hdev->clk_throttling.lock);
+ 
+ 	hl_mem_mgr_fini(&hdev->kernel_mem_mgr);
++	hl_mem_mgr_idr_destroy(&hdev->kernel_mem_mgr);
+ 
+ 	kfree(hdev->hl_chip_info);
+ 
+diff --git a/drivers/accel/habanalabs/common/habanalabs.h b/drivers/accel/habanalabs/common/habanalabs.h
+index fa05e76d3d21a..829b30ab1961a 100644
+--- a/drivers/accel/habanalabs/common/habanalabs.h
++++ b/drivers/accel/habanalabs/common/habanalabs.h
+@@ -3861,6 +3861,7 @@ const char *hl_sync_engine_to_string(enum hl_sync_engine_type engine_type);
+ 
+ void hl_mem_mgr_init(struct device *dev, struct hl_mem_mgr *mmg);
+ void hl_mem_mgr_fini(struct hl_mem_mgr *mmg);
++void hl_mem_mgr_idr_destroy(struct hl_mem_mgr *mmg);
+ int hl_mem_mgr_mmap(struct hl_mem_mgr *mmg, struct vm_area_struct *vma,
+ 		    void *args);
+ struct hl_mmap_mem_buf *hl_mmap_mem_buf_get(struct hl_mem_mgr *mmg,
+diff --git a/drivers/accel/habanalabs/common/habanalabs_drv.c b/drivers/accel/habanalabs/common/habanalabs_drv.c
+index 03dae57dc8386..e3781cfe8a7fe 100644
+--- a/drivers/accel/habanalabs/common/habanalabs_drv.c
++++ b/drivers/accel/habanalabs/common/habanalabs_drv.c
+@@ -237,6 +237,7 @@ int hl_device_open(struct inode *inode, struct file *filp)
+ out_err:
+ 	mutex_unlock(&hdev->fpriv_list_lock);
+ 	hl_mem_mgr_fini(&hpriv->mem_mgr);
++	hl_mem_mgr_idr_destroy(&hpriv->mem_mgr);
+ 	hl_ctx_mgr_fini(hpriv->hdev, &hpriv->ctx_mgr);
+ 	filp->private_data = NULL;
+ 	mutex_destroy(&hpriv->ctx_lock);
+diff --git a/drivers/accel/habanalabs/common/memory_mgr.c b/drivers/accel/habanalabs/common/memory_mgr.c
+index 0f2759e265477..f8e8261cc83d8 100644
+--- a/drivers/accel/habanalabs/common/memory_mgr.c
++++ b/drivers/accel/habanalabs/common/memory_mgr.c
+@@ -341,8 +341,19 @@ void hl_mem_mgr_fini(struct hl_mem_mgr *mmg)
+ 				"%s: Buff handle %u for CTX is still alive\n",
+ 				topic, id);
+ 	}
++}
+ 
+-	/* TODO: can it happen that some buffer is still in use at this point? */
++/**
++ * hl_mem_mgr_idr_destroy() - destroy memory manager IDR.
++ * @mmg: parent unified memory manager
++ *
++ * Destroy the memory manager IDR.
++ * Shall be called when IDR is empty and no memory buffers are in use.
++ */
++void hl_mem_mgr_idr_destroy(struct hl_mem_mgr *mmg)
++{
++	if (!idr_is_empty(&mmg->handles))
++		dev_crit(mmg->dev, "memory manager IDR is destroyed while it is not empty!\n");
+ 
+ 	idr_destroy(&mmg->handles);
+ }
+diff --git a/drivers/accel/ivpu/ivpu_drv.c b/drivers/accel/ivpu/ivpu_drv.c
+index 6a320a73e3ccf..8396db2b52030 100644
+--- a/drivers/accel/ivpu/ivpu_drv.c
++++ b/drivers/accel/ivpu/ivpu_drv.c
+@@ -437,6 +437,10 @@ static int ivpu_pci_init(struct ivpu_device *vdev)
+ 	/* Clear any pending errors */
+ 	pcie_capability_clear_word(pdev, PCI_EXP_DEVSTA, 0x3f);
+ 
++	/* VPU MTL does not require PCI spec 10m D3hot delay */
++	if (ivpu_is_mtl(vdev))
++		pdev->d3hot_delay = 0;
++
+ 	ret = pcim_enable_device(pdev);
+ 	if (ret) {
+ 		ivpu_err(vdev, "Failed to enable PCI device: %d\n", ret);
+diff --git a/drivers/acpi/acpi_apd.c b/drivers/acpi/acpi_apd.c
+index 3bbe2276cac76..80f945cbec8a7 100644
+--- a/drivers/acpi/acpi_apd.c
++++ b/drivers/acpi/acpi_apd.c
+@@ -83,6 +83,8 @@ static int fch_misc_setup(struct apd_private_data *pdata)
+ 	if (!acpi_dev_get_property(adev, "clk-name", ACPI_TYPE_STRING, &obj)) {
+ 		clk_data->name = devm_kzalloc(&adev->dev, obj->string.length,
+ 					      GFP_KERNEL);
++		if (!clk_data->name)
++			return -ENOMEM;
+ 
+ 		strcpy(clk_data->name, obj->string.pointer);
+ 	} else {
+diff --git a/drivers/acpi/acpica/dbnames.c b/drivers/acpi/acpica/dbnames.c
+index 3615e1a6efd8a..b91155ea9c343 100644
+--- a/drivers/acpi/acpica/dbnames.c
++++ b/drivers/acpi/acpica/dbnames.c
+@@ -652,6 +652,9 @@ acpi_status acpi_db_display_objects(char *obj_type_arg, char *display_count_arg)
+ 		object_info =
+ 		    ACPI_ALLOCATE_ZEROED(sizeof(struct acpi_object_info));
+ 
++		if (!object_info)
++			return (AE_NO_MEMORY);
++
+ 		/* Walk the namespace from the root */
+ 
+ 		(void)acpi_walk_namespace(ACPI_TYPE_ANY, ACPI_ROOT_OBJECT,
+diff --git a/drivers/acpi/acpica/dswstate.c b/drivers/acpi/acpica/dswstate.c
+index 0aa735d3b93cc..77076da2029d9 100644
+--- a/drivers/acpi/acpica/dswstate.c
++++ b/drivers/acpi/acpica/dswstate.c
+@@ -576,9 +576,14 @@ acpi_ds_init_aml_walk(struct acpi_walk_state *walk_state,
+ 	ACPI_FUNCTION_TRACE(ds_init_aml_walk);
+ 
+ 	walk_state->parser_state.aml =
+-	    walk_state->parser_state.aml_start = aml_start;
+-	walk_state->parser_state.aml_end =
+-	    walk_state->parser_state.pkg_end = aml_start + aml_length;
++	    walk_state->parser_state.aml_start =
++	    walk_state->parser_state.aml_end =
++	    walk_state->parser_state.pkg_end = aml_start;
++	/* Avoid undefined behavior: applying zero offset to null pointer */
++	if (aml_length != 0) {
++		walk_state->parser_state.aml_end += aml_length;
++		walk_state->parser_state.pkg_end += aml_length;
++	}
+ 
+ 	/* The next_op of the next_walk will be the beginning of the method */
+ 
+diff --git a/drivers/acpi/ec.c b/drivers/acpi/ec.c
+index 105d2e795afad..9658e348dc17d 100644
+--- a/drivers/acpi/ec.c
++++ b/drivers/acpi/ec.c
+@@ -1122,6 +1122,7 @@ static void acpi_ec_remove_query_handlers(struct acpi_ec *ec,
+ void acpi_ec_remove_query_handler(struct acpi_ec *ec, u8 query_bit)
+ {
+ 	acpi_ec_remove_query_handlers(ec, false, query_bit);
++	flush_workqueue(ec_query_wq);
+ }
+ EXPORT_SYMBOL_GPL(acpi_ec_remove_query_handler);
+ 
+diff --git a/drivers/acpi/video_detect.c b/drivers/acpi/video_detect.c
+index 295744fe7c920..bcc25d457581d 100644
+--- a/drivers/acpi/video_detect.c
++++ b/drivers/acpi/video_detect.c
+@@ -130,12 +130,6 @@ static int video_detect_force_native(const struct dmi_system_id *d)
+ 	return 0;
+ }
+ 
+-static int video_detect_force_none(const struct dmi_system_id *d)
+-{
+-	acpi_backlight_dmi = acpi_backlight_none;
+-	return 0;
+-}
+-
+ static const struct dmi_system_id video_detect_dmi_table[] = {
+ 	/*
+ 	 * Models which should use the vendor backlight interface,
+@@ -754,35 +748,6 @@ static const struct dmi_system_id video_detect_dmi_table[] = {
+ 		DMI_MATCH(DMI_PRODUCT_NAME, "Vostro 15 3535"),
+ 		},
+ 	},
+-
+-	/*
+-	 * Desktops which falsely report a backlight and which our heuristics
+-	 * for this do not catch.
+-	 */
+-	{
+-	 .callback = video_detect_force_none,
+-	 /* Dell OptiPlex 9020M */
+-	 .matches = {
+-		DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+-		DMI_MATCH(DMI_PRODUCT_NAME, "OptiPlex 9020M"),
+-		},
+-	},
+-	{
+-	 .callback = video_detect_force_none,
+-	 /* GIGABYTE GB-BXBT-2807 */
+-	 .matches = {
+-		DMI_MATCH(DMI_SYS_VENDOR, "GIGABYTE"),
+-		DMI_MATCH(DMI_PRODUCT_NAME, "GB-BXBT-2807"),
+-		},
+-	},
+-	{
+-	 .callback = video_detect_force_none,
+-	 /* MSI MS-7721 */
+-	 .matches = {
+-		DMI_MATCH(DMI_SYS_VENDOR, "MSI"),
+-		DMI_MATCH(DMI_PRODUCT_NAME, "MS-7721"),
+-		},
+-	},
+ 	{ },
+ };
+ 
+diff --git a/drivers/base/regmap/regcache.c b/drivers/base/regmap/regcache.c
+index 362e043e26d86..8031007b4887d 100644
+--- a/drivers/base/regmap/regcache.c
++++ b/drivers/base/regmap/regcache.c
+@@ -349,6 +349,9 @@ int regcache_sync(struct regmap *map)
+ 	const char *name;
+ 	bool bypass;
+ 
++	if (WARN_ON(map->cache_type == REGCACHE_NONE))
++		return -EINVAL;
++
+ 	BUG_ON(!map->cache_ops);
+ 
+ 	map->lock(map->lock_arg);
+@@ -418,6 +421,9 @@ int regcache_sync_region(struct regmap *map, unsigned int min,
+ 	const char *name;
+ 	bool bypass;
+ 
++	if (WARN_ON(map->cache_type == REGCACHE_NONE))
++		return -EINVAL;
++
+ 	BUG_ON(!map->cache_ops);
+ 
+ 	map->lock(map->lock_arg);
+diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
+index 592cfa8b765a5..e1c954094b6c0 100644
+--- a/drivers/block/nbd.c
++++ b/drivers/block/nbd.c
+@@ -325,6 +325,9 @@ static int nbd_set_size(struct nbd_device *nbd, loff_t bytesize,
+ 	if (blk_validate_block_size(blksize))
+ 		return -EINVAL;
+ 
++	if (bytesize < 0)
++		return -EINVAL;
++
+ 	nbd->config->bytesize = bytesize;
+ 	nbd->config->blksize_bits = __ffs(blksize);
+ 
+@@ -1111,6 +1114,9 @@ static int nbd_add_socket(struct nbd_device *nbd, unsigned long arg,
+ 	struct nbd_sock *nsock;
+ 	int err;
+ 
++	/* Arg will be cast to int, check it to avoid overflow */
++	if (arg > INT_MAX)
++		return -EINVAL;
+ 	sock = nbd_get_socket(nbd, arg, &err);
+ 	if (!sock)
+ 		return err;
+diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
+index 9e6b032c8ecc2..14491952047f5 100644
+--- a/drivers/block/null_blk/main.c
++++ b/drivers/block/null_blk/main.c
+@@ -1964,6 +1964,11 @@ static int null_init_tag_set(struct nullb *nullb, struct blk_mq_tag_set *set)
+ 
+ static int null_validate_conf(struct nullb_device *dev)
+ {
++	if (dev->queue_mode == NULL_Q_RQ) {
++		pr_err("legacy IO path is no longer available\n");
++		return -EINVAL;
++	}
++
+ 	dev->blocksize = round_down(dev->blocksize, 512);
+ 	dev->blocksize = clamp_t(unsigned int, dev->blocksize, 512, 4096);
+ 
+diff --git a/drivers/bluetooth/btbcm.c b/drivers/bluetooth/btbcm.c
+index 43e98a598bd9a..de2ea589aa49b 100644
+--- a/drivers/bluetooth/btbcm.c
++++ b/drivers/bluetooth/btbcm.c
+@@ -6,6 +6,7 @@
+  *  Copyright (C) 2015  Intel Corporation
+  */
+ 
++#include <linux/efi.h>
+ #include <linux/module.h>
+ #include <linux/firmware.h>
+ #include <linux/dmi.h>
+@@ -34,6 +35,43 @@
+ /* For kmalloc-ing the fw-name array instead of putting it on the stack */
+ typedef char bcm_fw_name[BCM_FW_NAME_LEN];
+ 
++#ifdef CONFIG_EFI
++static int btbcm_set_bdaddr_from_efi(struct hci_dev *hdev)
++{
++	efi_guid_t guid = EFI_GUID(0x74b00bd9, 0x805a, 0x4d61, 0xb5, 0x1f,
++				   0x43, 0x26, 0x81, 0x23, 0xd1, 0x13);
++	bdaddr_t efi_bdaddr, bdaddr;
++	efi_status_t status;
++	unsigned long len;
++	int ret;
++
++	if (!efi_rt_services_supported(EFI_RT_SUPPORTED_GET_VARIABLE))
++		return -EOPNOTSUPP;
++
++	len = sizeof(efi_bdaddr);
++	status = efi.get_variable(L"BDADDR", &guid, NULL, &len, &efi_bdaddr);
++	if (status != EFI_SUCCESS)
++		return -ENXIO;
++
++	if (len != sizeof(efi_bdaddr))
++		return -EIO;
++
++	baswap(&bdaddr, &efi_bdaddr);
++
++	ret = btbcm_set_bdaddr(hdev, &bdaddr);
++	if (ret)
++		return ret;
++
++	bt_dev_info(hdev, "BCM: Using EFI device address (%pMR)", &bdaddr);
++	return 0;
++}
++#else
++static int btbcm_set_bdaddr_from_efi(struct hci_dev *hdev)
++{
++	return -EOPNOTSUPP;
++}
++#endif
++
+ int btbcm_check_bdaddr(struct hci_dev *hdev)
+ {
+ 	struct hci_rp_read_bd_addr *bda;
+@@ -87,9 +125,12 @@ int btbcm_check_bdaddr(struct hci_dev *hdev)
+ 	    !bacmp(&bda->bdaddr, BDADDR_BCM4345C5) ||
+ 	    !bacmp(&bda->bdaddr, BDADDR_BCM43430A0) ||
+ 	    !bacmp(&bda->bdaddr, BDADDR_BCM43341B)) {
+-		bt_dev_info(hdev, "BCM: Using default device address (%pMR)",
+-			    &bda->bdaddr);
+-		set_bit(HCI_QUIRK_INVALID_BDADDR, &hdev->quirks);
++		/* Try falling back to BDADDR EFI variable */
++		if (btbcm_set_bdaddr_from_efi(hdev) != 0) {
++			bt_dev_info(hdev, "BCM: Using default device address (%pMR)",
++				    &bda->bdaddr);
++			set_bit(HCI_QUIRK_INVALID_BDADDR, &hdev->quirks);
++		}
+ 	}
+ 
+ 	kfree_skb(skb);
+diff --git a/drivers/bluetooth/btintel.c b/drivers/bluetooth/btintel.c
+index af774688f1c0d..7a6dc05553f13 100644
+--- a/drivers/bluetooth/btintel.c
++++ b/drivers/bluetooth/btintel.c
+@@ -2684,9 +2684,8 @@ static int btintel_setup_combined(struct hci_dev *hdev)
+ 		 */
+ 		set_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED, &hdev->quirks);
+ 
+-		/* Valid LE States quirk for GfP */
+-		if (INTEL_HW_VARIANT(ver_tlv.cnvi_bt) == 0x18)
+-			set_bit(HCI_QUIRK_VALID_LE_STATES, &hdev->quirks);
++		/* Apply LE States quirk from solar onwards */
++		set_bit(HCI_QUIRK_VALID_LE_STATES, &hdev->quirks);
+ 
+ 		/* Setup MSFT Extension support */
+ 		btintel_set_msft_opcode(hdev,
+diff --git a/drivers/bluetooth/btrtl.c b/drivers/bluetooth/btrtl.c
+index 69c3fe649ca7d..4fbb282cac4b5 100644
+--- a/drivers/bluetooth/btrtl.c
++++ b/drivers/bluetooth/btrtl.c
+@@ -17,19 +17,25 @@
+ 
+ #define VERSION "0.1"
+ 
++#define RTL_CHIP_8723CS_CG	3
++#define RTL_CHIP_8723CS_VF	4
++#define RTL_CHIP_8723CS_XX	5
+ #define RTL_EPATCH_SIGNATURE	"Realtech"
++#define RTL_ROM_LMP_8703B	0x8703
+ #define RTL_ROM_LMP_8723A	0x1200
+ #define RTL_ROM_LMP_8723B	0x8723
+ #define RTL_ROM_LMP_8821A	0x8821
+ #define RTL_ROM_LMP_8761A	0x8761
+ #define RTL_ROM_LMP_8822B	0x8822
+ #define RTL_ROM_LMP_8852A	0x8852
++#define RTL_ROM_LMP_8851B	0x8851
+ #define RTL_CONFIG_MAGIC	0x8723ab55
+ 
+ #define IC_MATCH_FL_LMPSUBV	(1 << 0)
+ #define IC_MATCH_FL_HCIREV	(1 << 1)
+ #define IC_MATCH_FL_HCIVER	(1 << 2)
+ #define IC_MATCH_FL_HCIBUS	(1 << 3)
++#define IC_MATCH_FL_CHIP_TYPE	(1 << 4)
+ #define IC_INFO(lmps, hcir, hciv, bus) \
+ 	.match_flags = IC_MATCH_FL_LMPSUBV | IC_MATCH_FL_HCIREV | \
+ 		       IC_MATCH_FL_HCIVER | IC_MATCH_FL_HCIBUS, \
+@@ -51,6 +57,7 @@ enum btrtl_chip_id {
+ 	CHIP_ID_8852A = 18,
+ 	CHIP_ID_8852B = 20,
+ 	CHIP_ID_8852C = 25,
++	CHIP_ID_8851B = 36,
+ };
+ 
+ struct id_table {
+@@ -59,6 +66,7 @@ struct id_table {
+ 	__u16 hci_rev;
+ 	__u8 hci_ver;
+ 	__u8 hci_bus;
++	__u8 chip_type;
+ 	bool config_needed;
+ 	bool has_rom_version;
+ 	bool has_msft_ext;
+@@ -99,6 +107,39 @@ static const struct id_table ic_id_table[] = {
+ 	  .fw_name  = "rtl_bt/rtl8723b_fw.bin",
+ 	  .cfg_name = "rtl_bt/rtl8723b_config" },
+ 
++	/* 8723CS-CG */
++	{ .match_flags = IC_MATCH_FL_LMPSUBV | IC_MATCH_FL_CHIP_TYPE |
++			 IC_MATCH_FL_HCIBUS,
++	  .lmp_subver = RTL_ROM_LMP_8703B,
++	  .chip_type = RTL_CHIP_8723CS_CG,
++	  .hci_bus = HCI_UART,
++	  .config_needed = true,
++	  .has_rom_version = true,
++	  .fw_name  = "rtl_bt/rtl8723cs_cg_fw.bin",
++	  .cfg_name = "rtl_bt/rtl8723cs_cg_config" },
++
++	/* 8723CS-VF */
++	{ .match_flags = IC_MATCH_FL_LMPSUBV | IC_MATCH_FL_CHIP_TYPE |
++			 IC_MATCH_FL_HCIBUS,
++	  .lmp_subver = RTL_ROM_LMP_8703B,
++	  .chip_type = RTL_CHIP_8723CS_VF,
++	  .hci_bus = HCI_UART,
++	  .config_needed = true,
++	  .has_rom_version = true,
++	  .fw_name  = "rtl_bt/rtl8723cs_vf_fw.bin",
++	  .cfg_name = "rtl_bt/rtl8723cs_vf_config" },
++
++	/* 8723CS-XX */
++	{ .match_flags = IC_MATCH_FL_LMPSUBV | IC_MATCH_FL_CHIP_TYPE |
++			 IC_MATCH_FL_HCIBUS,
++	  .lmp_subver = RTL_ROM_LMP_8703B,
++	  .chip_type = RTL_CHIP_8723CS_XX,
++	  .hci_bus = HCI_UART,
++	  .config_needed = true,
++	  .has_rom_version = true,
++	  .fw_name  = "rtl_bt/rtl8723cs_xx_fw.bin",
++	  .cfg_name = "rtl_bt/rtl8723cs_xx_config" },
++
+ 	/* 8723D */
+ 	{ IC_INFO(RTL_ROM_LMP_8723B, 0xd, 0x8, HCI_USB),
+ 	  .config_needed = true,
+@@ -205,10 +246,19 @@ static const struct id_table ic_id_table[] = {
+ 	  .has_msft_ext = true,
+ 	  .fw_name  = "rtl_bt/rtl8852cu_fw.bin",
+ 	  .cfg_name = "rtl_bt/rtl8852cu_config" },
++
++	/* 8851B */
++	{ IC_INFO(RTL_ROM_LMP_8851B, 0xb, 0xc, HCI_USB),
++	  .config_needed = false,
++	  .has_rom_version = true,
++	  .has_msft_ext = false,
++	  .fw_name  = "rtl_bt/rtl8851bu_fw.bin",
++	  .cfg_name = "rtl_bt/rtl8851bu_config" },
+ 	};
+ 
+ static const struct id_table *btrtl_match_ic(u16 lmp_subver, u16 hci_rev,
+-					     u8 hci_ver, u8 hci_bus)
++					     u8 hci_ver, u8 hci_bus,
++					     u8 chip_type)
+ {
+ 	int i;
+ 
+@@ -225,6 +275,9 @@ static const struct id_table *btrtl_match_ic(u16 lmp_subver, u16 hci_rev,
+ 		if ((ic_id_table[i].match_flags & IC_MATCH_FL_HCIBUS) &&
+ 		    (ic_id_table[i].hci_bus != hci_bus))
+ 			continue;
++		if ((ic_id_table[i].match_flags & IC_MATCH_FL_CHIP_TYPE) &&
++		    (ic_id_table[i].chip_type != chip_type))
++			continue;
+ 
+ 		break;
+ 	}
+@@ -307,6 +360,7 @@ static int rtlbt_parse_firmware(struct hci_dev *hdev,
+ 		{ RTL_ROM_LMP_8723B, 1 },
+ 		{ RTL_ROM_LMP_8821A, 2 },
+ 		{ RTL_ROM_LMP_8761A, 3 },
++		{ RTL_ROM_LMP_8703B, 7 },
+ 		{ RTL_ROM_LMP_8822B, 8 },
+ 		{ RTL_ROM_LMP_8723B, 9 },	/* 8723D */
+ 		{ RTL_ROM_LMP_8821A, 10 },	/* 8821C */
+@@ -315,6 +369,7 @@ static int rtlbt_parse_firmware(struct hci_dev *hdev,
+ 		{ RTL_ROM_LMP_8852A, 18 },	/* 8852A */
+ 		{ RTL_ROM_LMP_8852A, 20 },	/* 8852B */
+ 		{ RTL_ROM_LMP_8852A, 25 },	/* 8852C */
++		{ RTL_ROM_LMP_8851B, 36 },	/* 8851B */
+ 	};
+ 
+ 	min_size = sizeof(struct rtl_epatch_header) + sizeof(extension_sig) + 3;
+@@ -587,6 +642,48 @@ out:
+ 	return ret;
+ }
+ 
++static bool rtl_has_chip_type(u16 lmp_subver)
++{
++	switch (lmp_subver) {
++	case RTL_ROM_LMP_8703B:
++		return true;
++	default:
++		break;
++	}
++
++	return  false;
++}
++
++static int rtl_read_chip_type(struct hci_dev *hdev, u8 *type)
++{
++	struct rtl_chip_type_evt *chip_type;
++	struct sk_buff *skb;
++	const unsigned char cmd_buf[] = {0x00, 0x94, 0xa0, 0x00, 0xb0};
++
++	/* Read RTL chip type command */
++	skb = __hci_cmd_sync(hdev, 0xfc61, 5, cmd_buf, HCI_INIT_TIMEOUT);
++	if (IS_ERR(skb)) {
++		rtl_dev_err(hdev, "Read chip type failed (%ld)",
++			    PTR_ERR(skb));
++		return PTR_ERR(skb);
++	}
++
++	chip_type = skb_pull_data(skb, sizeof(*chip_type));
++	if (!chip_type) {
++		rtl_dev_err(hdev, "RTL chip type event length mismatch");
++		kfree_skb(skb);
++		return -EIO;
++	}
++
++	rtl_dev_info(hdev, "chip_type status=%x type=%x",
++		     chip_type->status, chip_type->type);
++
++	*type = chip_type->type & 0x0f;
++
++	kfree_skb(skb);
++	return 0;
++}
++
+ void btrtl_free(struct btrtl_device_info *btrtl_dev)
+ {
+ 	kvfree(btrtl_dev->fw_data);
+@@ -603,7 +700,7 @@ struct btrtl_device_info *btrtl_initialize(struct hci_dev *hdev,
+ 	struct hci_rp_read_local_version *resp;
+ 	char cfg_name[40];
+ 	u16 hci_rev, lmp_subver;
+-	u8 hci_ver;
++	u8 hci_ver, chip_type = 0;
+ 	int ret;
+ 	u16 opcode;
+ 	u8 cmd[2];
+@@ -629,8 +726,14 @@ struct btrtl_device_info *btrtl_initialize(struct hci_dev *hdev,
+ 	hci_rev = le16_to_cpu(resp->hci_rev);
+ 	lmp_subver = le16_to_cpu(resp->lmp_subver);
+ 
++	if (rtl_has_chip_type(lmp_subver)) {
++		ret = rtl_read_chip_type(hdev, &chip_type);
++		if (ret)
++			goto err_free;
++	}
++
+ 	btrtl_dev->ic_info = btrtl_match_ic(lmp_subver, hci_rev, hci_ver,
+-					    hdev->bus);
++					    hdev->bus, chip_type);
+ 
+ 	if (!btrtl_dev->ic_info)
+ 		btrtl_dev->drop_fw = true;
+@@ -673,7 +776,7 @@ struct btrtl_device_info *btrtl_initialize(struct hci_dev *hdev,
+ 		lmp_subver = le16_to_cpu(resp->lmp_subver);
+ 
+ 		btrtl_dev->ic_info = btrtl_match_ic(lmp_subver, hci_rev, hci_ver,
+-						    hdev->bus);
++						    hdev->bus, chip_type);
+ 	}
+ out_free:
+ 	kfree_skb(skb);
+@@ -755,6 +858,8 @@ int btrtl_download_firmware(struct hci_dev *hdev,
+ 	case RTL_ROM_LMP_8761A:
+ 	case RTL_ROM_LMP_8822B:
+ 	case RTL_ROM_LMP_8852A:
++	case RTL_ROM_LMP_8703B:
++	case RTL_ROM_LMP_8851B:
+ 		return btrtl_setup_rtl8723b(hdev, btrtl_dev);
+ 	default:
+ 		rtl_dev_info(hdev, "assuming no firmware upload needed");
+@@ -779,6 +884,7 @@ void btrtl_set_quirks(struct hci_dev *hdev, struct btrtl_device_info *btrtl_dev)
+ 	case CHIP_ID_8852A:
+ 	case CHIP_ID_8852B:
+ 	case CHIP_ID_8852C:
++	case CHIP_ID_8851B:
+ 		set_bit(HCI_QUIRK_VALID_LE_STATES, &hdev->quirks);
+ 		set_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED, &hdev->quirks);
+ 
+@@ -795,6 +901,22 @@ void btrtl_set_quirks(struct hci_dev *hdev, struct btrtl_device_info *btrtl_dev)
+ 		rtl_dev_dbg(hdev, "WBS supported not enabled.");
+ 		break;
+ 	}
++
++	if (!btrtl_dev->ic_info)
++		return;
++
++	switch (btrtl_dev->ic_info->lmp_subver) {
++	case RTL_ROM_LMP_8703B:
++		/* 8723CS reports two pages for local ext features,
++		 * but it doesn't support any features from page 2 -
++		 * it either responds with garbage or with error status
++		 */
++		set_bit(HCI_QUIRK_BROKEN_LOCAL_EXT_FEATURES_PAGE_2,
++			&hdev->quirks);
++		break;
++	default:
++		break;
++	}
+ }
+ EXPORT_SYMBOL_GPL(btrtl_set_quirks);
+ 
+@@ -953,6 +1075,12 @@ MODULE_FIRMWARE("rtl_bt/rtl8723b_fw.bin");
+ MODULE_FIRMWARE("rtl_bt/rtl8723b_config.bin");
+ MODULE_FIRMWARE("rtl_bt/rtl8723bs_fw.bin");
+ MODULE_FIRMWARE("rtl_bt/rtl8723bs_config.bin");
++MODULE_FIRMWARE("rtl_bt/rtl8723cs_cg_fw.bin");
++MODULE_FIRMWARE("rtl_bt/rtl8723cs_cg_config.bin");
++MODULE_FIRMWARE("rtl_bt/rtl8723cs_vf_fw.bin");
++MODULE_FIRMWARE("rtl_bt/rtl8723cs_vf_config.bin");
++MODULE_FIRMWARE("rtl_bt/rtl8723cs_xx_fw.bin");
++MODULE_FIRMWARE("rtl_bt/rtl8723cs_xx_config.bin");
+ MODULE_FIRMWARE("rtl_bt/rtl8723ds_fw.bin");
+ MODULE_FIRMWARE("rtl_bt/rtl8723ds_config.bin");
+ MODULE_FIRMWARE("rtl_bt/rtl8761a_fw.bin");
+@@ -967,3 +1095,5 @@ MODULE_FIRMWARE("rtl_bt/rtl8852bu_fw.bin");
+ MODULE_FIRMWARE("rtl_bt/rtl8852bu_config.bin");
+ MODULE_FIRMWARE("rtl_bt/rtl8852cu_fw.bin");
+ MODULE_FIRMWARE("rtl_bt/rtl8852cu_config.bin");
++MODULE_FIRMWARE("rtl_bt/rtl8851bu_fw.bin");
++MODULE_FIRMWARE("rtl_bt/rtl8851bu_config.bin");
+diff --git a/drivers/bluetooth/btrtl.h b/drivers/bluetooth/btrtl.h
+index ebf0101c959b0..349d72ee571b6 100644
+--- a/drivers/bluetooth/btrtl.h
++++ b/drivers/bluetooth/btrtl.h
+@@ -14,6 +14,11 @@
+ 
+ struct btrtl_device_info;
+ 
++struct rtl_chip_type_evt {
++	__u8 status;
++	__u8 type;
++} __packed;
++
+ struct rtl_download_cmd {
+ 	__u8 index;
+ 	__u8 data[RTL_FRAG_LEN];
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 5c536151ef836..0923582299f3a 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -558,6 +558,9 @@ static const struct usb_device_id blacklist_table[] = {
+ 	{ USB_DEVICE(0x043e, 0x310c), .driver_info = BTUSB_MEDIATEK |
+ 						     BTUSB_WIDEBAND_SPEECH |
+ 						     BTUSB_VALID_LE_STATES },
++	{ USB_DEVICE(0x04ca, 0x3801), .driver_info = BTUSB_MEDIATEK |
++						     BTUSB_WIDEBAND_SPEECH |
++						     BTUSB_VALID_LE_STATES },
+ 
+ 	/* Additional MediaTek MT7668 Bluetooth devices */
+ 	{ USB_DEVICE(0x043e, 0x3109), .driver_info = BTUSB_MEDIATEK |
+@@ -4102,6 +4105,9 @@ static int btusb_probe(struct usb_interface *intf,
+ 	if (id->driver_info & BTUSB_ACTIONS_SEMI) {
+ 		/* Support is advertised, but not implemented */
+ 		set_bit(HCI_QUIRK_BROKEN_ERR_DATA_REPORTING, &hdev->quirks);
++		set_bit(HCI_QUIRK_BROKEN_READ_TRANSMIT_POWER, &hdev->quirks);
++		set_bit(HCI_QUIRK_BROKEN_SET_RPA_TIMEOUT, &hdev->quirks);
++		set_bit(HCI_QUIRK_BROKEN_EXT_SCAN, &hdev->quirks);
+ 	}
+ 
+ 	if (!reset)
+diff --git a/drivers/bluetooth/hci_h5.c b/drivers/bluetooth/hci_h5.c
+index 6455bc4fb5bb3..e90670955df2c 100644
+--- a/drivers/bluetooth/hci_h5.c
++++ b/drivers/bluetooth/hci_h5.c
+@@ -936,6 +936,8 @@ static int h5_btrtl_setup(struct h5 *h5)
+ 	err = btrtl_download_firmware(h5->hu->hdev, btrtl_dev);
+ 	/* Give the device some time before the hci-core sends it a reset */
+ 	usleep_range(10000, 20000);
++	if (err)
++		goto out_free;
+ 
+ 	btrtl_set_quirks(h5->hu->hdev, btrtl_dev);
+ 
+@@ -1100,6 +1102,8 @@ static const struct of_device_id rtl_bluetooth_of_match[] = {
+ 	  .data = (const void *)&h5_data_rtl8822cs },
+ 	{ .compatible = "realtek,rtl8723bs-bt",
+ 	  .data = (const void *)&h5_data_rtl8723bs },
++	{ .compatible = "realtek,rtl8723cs-bt",
++	  .data = (const void *)&h5_data_rtl8723bs },
+ 	{ .compatible = "realtek,rtl8723ds-bt",
+ 	  .data = (const void *)&h5_data_rtl8723bs },
+ #endif
+diff --git a/drivers/char/tpm/tpm_tis.c b/drivers/char/tpm/tpm_tis.c
+index ed5dabd3c72d6..4be19d8f3ca95 100644
+--- a/drivers/char/tpm/tpm_tis.c
++++ b/drivers/char/tpm/tpm_tis.c
+@@ -83,6 +83,22 @@ static const struct dmi_system_id tpm_tis_dmi_table[] = {
+ 			DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad T490s"),
+ 		},
+ 	},
++	{
++		.callback = tpm_tis_disable_irq,
++		.ident = "ThinkStation P360 Tiny",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++			DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkStation P360 Tiny"),
++		},
++	},
++	{
++		.callback = tpm_tis_disable_irq,
++		.ident = "ThinkPad L490",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++			DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad L490"),
++		},
++	},
+ 	{}
+ };
+ 
+diff --git a/drivers/clk/rockchip/clk-rk3588.c b/drivers/clk/rockchip/clk-rk3588.c
+index b7ce3fbd6fa6a..6994165e03957 100644
+--- a/drivers/clk/rockchip/clk-rk3588.c
++++ b/drivers/clk/rockchip/clk-rk3588.c
+@@ -13,15 +13,25 @@
+ #include "clk.h"
+ 
+ /*
+- * GATE with additional linked clock. Downstream enables the linked clock
+- * (via runtime PM) whenever the gate is enabled. The downstream implementation
+- * does this via separate clock nodes for each of the linked gate clocks,
+- * which leaks parts of the clock tree into DT. It is unclear why this is
+- * actually needed and things work without it for simple use cases. Thus
+- * the linked clock is ignored for now.
++ * Recent Rockchip SoCs have a new hardware block called Native Interface
++ * Unit (NIU), which gates clocks to devices behind them. These effectively
++ * need two parent clocks.
++ *
++ * Downstream enables the linked clock via runtime PM whenever the gate is
++ * enabled. This implementation uses separate clock nodes for each of the
++ * linked gate clocks, which leaks parts of the clock tree into DT.
++ *
++ * The GATE_LINK macro instead takes the second parent via 'linkname', but
++ * ignores the information. Once the clock framework is ready to handle it, the
++ * information should be passed on here. But since these clocks are required to
++ * access multiple relevant IP blocks, such as PCIe or USB, we mark all linked
++ * clocks critical until a better solution is available. This will waste some
++ * power, but avoids leaking implementation details into DT or hanging the
++ * system.
+  */
+ #define GATE_LINK(_id, cname, pname, linkname, f, o, b, gf) \
+ 	GATE(_id, cname, pname, f, o, b, gf)
++#define RK3588_LINKED_CLK		CLK_IS_CRITICAL
+ 
+ 
+ #define RK3588_GRF_SOC_STATUS0		0x600
+@@ -1446,7 +1456,7 @@ static struct rockchip_clk_branch rk3588_clk_branches[] __initdata = {
+ 	COMPOSITE_NODIV(HCLK_NVM_ROOT,  "hclk_nvm_root", mux_200m_100m_50m_24m_p, 0,
+ 			RK3588_CLKSEL_CON(77), 0, 2, MFLAGS,
+ 			RK3588_CLKGATE_CON(31), 0, GFLAGS),
+-	COMPOSITE(ACLK_NVM_ROOT, "aclk_nvm_root", gpll_cpll_p, 0,
++	COMPOSITE(ACLK_NVM_ROOT, "aclk_nvm_root", gpll_cpll_p, RK3588_LINKED_CLK,
+ 			RK3588_CLKSEL_CON(77), 7, 1, MFLAGS, 2, 5, DFLAGS,
+ 			RK3588_CLKGATE_CON(31), 1, GFLAGS),
+ 	GATE(ACLK_EMMC, "aclk_emmc", "aclk_nvm_root", 0,
+@@ -1675,13 +1685,13 @@ static struct rockchip_clk_branch rk3588_clk_branches[] __initdata = {
+ 			RK3588_CLKGATE_CON(42), 9, GFLAGS),
+ 
+ 	/* vdpu */
+-	COMPOSITE(ACLK_VDPU_ROOT, "aclk_vdpu_root", gpll_cpll_aupll_p, 0,
++	COMPOSITE(ACLK_VDPU_ROOT, "aclk_vdpu_root", gpll_cpll_aupll_p, RK3588_LINKED_CLK,
+ 			RK3588_CLKSEL_CON(98), 5, 2, MFLAGS, 0, 5, DFLAGS,
+ 			RK3588_CLKGATE_CON(44), 0, GFLAGS),
+ 	COMPOSITE_NODIV(ACLK_VDPU_LOW_ROOT, "aclk_vdpu_low_root", mux_400m_200m_100m_24m_p, 0,
+ 			RK3588_CLKSEL_CON(98), 7, 2, MFLAGS,
+ 			RK3588_CLKGATE_CON(44), 1, GFLAGS),
+-	COMPOSITE_NODIV(HCLK_VDPU_ROOT, "hclk_vdpu_root", mux_200m_100m_50m_24m_p, 0,
++	COMPOSITE_NODIV(HCLK_VDPU_ROOT, "hclk_vdpu_root", mux_200m_100m_50m_24m_p, RK3588_LINKED_CLK,
+ 			RK3588_CLKSEL_CON(98), 9, 2, MFLAGS,
+ 			RK3588_CLKGATE_CON(44), 2, GFLAGS),
+ 	COMPOSITE(ACLK_JPEG_DECODER_ROOT, "aclk_jpeg_decoder_root", gpll_cpll_aupll_spll_p, 0,
+@@ -1732,9 +1742,9 @@ static struct rockchip_clk_branch rk3588_clk_branches[] __initdata = {
+ 	COMPOSITE(ACLK_RKVENC0_ROOT, "aclk_rkvenc0_root", gpll_cpll_npll_p, 0,
+ 			RK3588_CLKSEL_CON(102), 7, 2, MFLAGS, 2, 5, DFLAGS,
+ 			RK3588_CLKGATE_CON(47), 1, GFLAGS),
+-	GATE(HCLK_RKVENC0, "hclk_rkvenc0", "hclk_rkvenc0_root", 0,
++	GATE(HCLK_RKVENC0, "hclk_rkvenc0", "hclk_rkvenc0_root", RK3588_LINKED_CLK,
+ 			RK3588_CLKGATE_CON(47), 4, GFLAGS),
+-	GATE(ACLK_RKVENC0, "aclk_rkvenc0", "aclk_rkvenc0_root", 0,
++	GATE(ACLK_RKVENC0, "aclk_rkvenc0", "aclk_rkvenc0_root", RK3588_LINKED_CLK,
+ 			RK3588_CLKGATE_CON(47), 5, GFLAGS),
+ 	COMPOSITE(CLK_RKVENC0_CORE, "clk_rkvenc0_core", gpll_cpll_aupll_npll_p, 0,
+ 			RK3588_CLKSEL_CON(102), 14, 2, MFLAGS, 9, 5, DFLAGS,
+@@ -1744,10 +1754,10 @@ static struct rockchip_clk_branch rk3588_clk_branches[] __initdata = {
+ 			RK3588_CLKGATE_CON(48), 6, GFLAGS),
+ 
+ 	/* vi */
+-	COMPOSITE(ACLK_VI_ROOT, "aclk_vi_root", gpll_cpll_npll_aupll_spll_p, 0,
++	COMPOSITE(ACLK_VI_ROOT, "aclk_vi_root", gpll_cpll_npll_aupll_spll_p, RK3588_LINKED_CLK,
+ 			RK3588_CLKSEL_CON(106), 5, 3, MFLAGS, 0, 5, DFLAGS,
+ 			RK3588_CLKGATE_CON(49), 0, GFLAGS),
+-	COMPOSITE_NODIV(HCLK_VI_ROOT, "hclk_vi_root", mux_200m_100m_50m_24m_p, 0,
++	COMPOSITE_NODIV(HCLK_VI_ROOT, "hclk_vi_root", mux_200m_100m_50m_24m_p, RK3588_LINKED_CLK,
+ 			RK3588_CLKSEL_CON(106), 8, 2, MFLAGS,
+ 			RK3588_CLKGATE_CON(49), 1, GFLAGS),
+ 	COMPOSITE_NODIV(PCLK_VI_ROOT, "pclk_vi_root", mux_100m_50m_24m_p, 0,
+@@ -1919,10 +1929,10 @@ static struct rockchip_clk_branch rk3588_clk_branches[] __initdata = {
+ 	COMPOSITE(ACLK_VOP_ROOT, "aclk_vop_root", gpll_cpll_dmyaupll_npll_spll_p, 0,
+ 			RK3588_CLKSEL_CON(110), 5, 3, MFLAGS, 0, 5, DFLAGS,
+ 			RK3588_CLKGATE_CON(52), 0, GFLAGS),
+-	COMPOSITE_NODIV(ACLK_VOP_LOW_ROOT, "aclk_vop_low_root", mux_400m_200m_100m_24m_p, 0,
++	COMPOSITE_NODIV(ACLK_VOP_LOW_ROOT, "aclk_vop_low_root", mux_400m_200m_100m_24m_p, RK3588_LINKED_CLK,
+ 			RK3588_CLKSEL_CON(110), 8, 2, MFLAGS,
+ 			RK3588_CLKGATE_CON(52), 1, GFLAGS),
+-	COMPOSITE_NODIV(HCLK_VOP_ROOT, "hclk_vop_root", mux_200m_100m_50m_24m_p, 0,
++	COMPOSITE_NODIV(HCLK_VOP_ROOT, "hclk_vop_root", mux_200m_100m_50m_24m_p, RK3588_LINKED_CLK,
+ 			RK3588_CLKSEL_CON(110), 10, 2, MFLAGS,
+ 			RK3588_CLKGATE_CON(52), 2, GFLAGS),
+ 	COMPOSITE_NODIV(PCLK_VOP_ROOT, "pclk_vop_root", mux_100m_50m_24m_p, 0,
+@@ -2425,7 +2435,7 @@ static struct rockchip_clk_branch rk3588_clk_branches[] __initdata = {
+ 
+ 	GATE_LINK(ACLK_ISP1_PRE, "aclk_isp1_pre", "aclk_isp1_root", "aclk_vi_root", 0, RK3588_CLKGATE_CON(26), 6, GFLAGS),
+ 	GATE_LINK(HCLK_ISP1_PRE, "hclk_isp1_pre", "hclk_isp1_root", "hclk_vi_root", 0, RK3588_CLKGATE_CON(26), 8, GFLAGS),
+-	GATE_LINK(HCLK_NVM, "hclk_nvm", "hclk_nvm_root", "aclk_nvm_root", 0, RK3588_CLKGATE_CON(31), 2, GFLAGS),
++	GATE_LINK(HCLK_NVM, "hclk_nvm", "hclk_nvm_root", "aclk_nvm_root", RK3588_LINKED_CLK, RK3588_CLKGATE_CON(31), 2, GFLAGS),
+ 	GATE_LINK(ACLK_USB, "aclk_usb", "aclk_usb_root", "aclk_vo1usb_top_root", 0, RK3588_CLKGATE_CON(42), 2, GFLAGS),
+ 	GATE_LINK(HCLK_USB, "hclk_usb", "hclk_usb_root", "hclk_vo1usb_top_root", 0, RK3588_CLKGATE_CON(42), 3, GFLAGS),
+ 	GATE_LINK(ACLK_JPEG_DECODER_PRE, "aclk_jpeg_decoder_pre", "aclk_jpeg_decoder_root", "aclk_vdpu_root", 0, RK3588_CLKGATE_CON(44), 7, GFLAGS),
+diff --git a/drivers/clk/tegra/clk-tegra20.c b/drivers/clk/tegra/clk-tegra20.c
+index 422d782475532..dcacc5064d339 100644
+--- a/drivers/clk/tegra/clk-tegra20.c
++++ b/drivers/clk/tegra/clk-tegra20.c
+@@ -21,24 +21,24 @@
+ #define MISC_CLK_ENB 0x48
+ 
+ #define OSC_CTRL 0x50
+-#define OSC_CTRL_OSC_FREQ_MASK (3<<30)
+-#define OSC_CTRL_OSC_FREQ_13MHZ (0<<30)
+-#define OSC_CTRL_OSC_FREQ_19_2MHZ (1<<30)
+-#define OSC_CTRL_OSC_FREQ_12MHZ (2<<30)
+-#define OSC_CTRL_OSC_FREQ_26MHZ (3<<30)
+-#define OSC_CTRL_MASK (0x3f2 | OSC_CTRL_OSC_FREQ_MASK)
+-
+-#define OSC_CTRL_PLL_REF_DIV_MASK (3<<28)
+-#define OSC_CTRL_PLL_REF_DIV_1		(0<<28)
+-#define OSC_CTRL_PLL_REF_DIV_2		(1<<28)
+-#define OSC_CTRL_PLL_REF_DIV_4		(2<<28)
++#define OSC_CTRL_OSC_FREQ_MASK (3u<<30)
++#define OSC_CTRL_OSC_FREQ_13MHZ (0u<<30)
++#define OSC_CTRL_OSC_FREQ_19_2MHZ (1u<<30)
++#define OSC_CTRL_OSC_FREQ_12MHZ (2u<<30)
++#define OSC_CTRL_OSC_FREQ_26MHZ (3u<<30)
++#define OSC_CTRL_MASK (0x3f2u | OSC_CTRL_OSC_FREQ_MASK)
++
++#define OSC_CTRL_PLL_REF_DIV_MASK	(3u<<28)
++#define OSC_CTRL_PLL_REF_DIV_1		(0u<<28)
++#define OSC_CTRL_PLL_REF_DIV_2		(1u<<28)
++#define OSC_CTRL_PLL_REF_DIV_4		(2u<<28)
+ 
+ #define OSC_FREQ_DET 0x58
+-#define OSC_FREQ_DET_TRIG (1<<31)
++#define OSC_FREQ_DET_TRIG (1u<<31)
+ 
+ #define OSC_FREQ_DET_STATUS 0x5c
+-#define OSC_FREQ_DET_BUSY (1<<31)
+-#define OSC_FREQ_DET_CNT_MASK 0xFFFF
++#define OSC_FREQ_DET_BUSYu (1<<31)
++#define OSC_FREQ_DET_CNT_MASK 0xFFFFu
+ 
+ #define TEGRA20_CLK_PERIPH_BANKS	3
+ 
+diff --git a/drivers/firmware/arm_sdei.c b/drivers/firmware/arm_sdei.c
+index 1e1a51510e83b..f9040bd610812 100644
+--- a/drivers/firmware/arm_sdei.c
++++ b/drivers/firmware/arm_sdei.c
+@@ -43,6 +43,8 @@ static asmlinkage void (*sdei_firmware_call)(unsigned long function_id,
+ /* entry point from firmware to arch asm code */
+ static unsigned long sdei_entry_point;
+ 
++static int sdei_hp_state;
++
+ struct sdei_event {
+ 	/* These three are protected by the sdei_list_lock */
+ 	struct list_head	list;
+@@ -301,8 +303,6 @@ int sdei_mask_local_cpu(void)
+ {
+ 	int err;
+ 
+-	WARN_ON_ONCE(preemptible());
+-
+ 	err = invoke_sdei_fn(SDEI_1_0_FN_SDEI_PE_MASK, 0, 0, 0, 0, 0, NULL);
+ 	if (err && err != -EIO) {
+ 		pr_warn_once("failed to mask CPU[%u]: %d\n",
+@@ -315,6 +315,7 @@ int sdei_mask_local_cpu(void)
+ 
+ static void _ipi_mask_cpu(void *ignored)
+ {
++	WARN_ON_ONCE(preemptible());
+ 	sdei_mask_local_cpu();
+ }
+ 
+@@ -322,8 +323,6 @@ int sdei_unmask_local_cpu(void)
+ {
+ 	int err;
+ 
+-	WARN_ON_ONCE(preemptible());
+-
+ 	err = invoke_sdei_fn(SDEI_1_0_FN_SDEI_PE_UNMASK, 0, 0, 0, 0, 0, NULL);
+ 	if (err && err != -EIO) {
+ 		pr_warn_once("failed to unmask CPU[%u]: %d\n",
+@@ -336,6 +335,7 @@ int sdei_unmask_local_cpu(void)
+ 
+ static void _ipi_unmask_cpu(void *ignored)
+ {
++	WARN_ON_ONCE(preemptible());
+ 	sdei_unmask_local_cpu();
+ }
+ 
+@@ -343,6 +343,8 @@ static void _ipi_private_reset(void *ignored)
+ {
+ 	int err;
+ 
++	WARN_ON_ONCE(preemptible());
++
+ 	err = invoke_sdei_fn(SDEI_1_0_FN_SDEI_PRIVATE_RESET, 0, 0, 0, 0, 0,
+ 			     NULL);
+ 	if (err && err != -EIO)
+@@ -389,8 +391,6 @@ static void _local_event_enable(void *data)
+ 	int err;
+ 	struct sdei_crosscall_args *arg = data;
+ 
+-	WARN_ON_ONCE(preemptible());
+-
+ 	err = sdei_api_event_enable(arg->event->event_num);
+ 
+ 	sdei_cross_call_return(arg, err);
+@@ -479,8 +479,6 @@ static void _local_event_unregister(void *data)
+ 	int err;
+ 	struct sdei_crosscall_args *arg = data;
+ 
+-	WARN_ON_ONCE(preemptible());
+-
+ 	err = sdei_api_event_unregister(arg->event->event_num);
+ 
+ 	sdei_cross_call_return(arg, err);
+@@ -561,8 +559,6 @@ static void _local_event_register(void *data)
+ 	struct sdei_registered_event *reg;
+ 	struct sdei_crosscall_args *arg = data;
+ 
+-	WARN_ON(preemptible());
+-
+ 	reg = per_cpu_ptr(arg->event->private_registered, smp_processor_id());
+ 	err = sdei_api_event_register(arg->event->event_num, sdei_entry_point,
+ 				      reg, 0, 0);
+@@ -717,6 +713,8 @@ static int sdei_pm_notifier(struct notifier_block *nb, unsigned long action,
+ {
+ 	int rv;
+ 
++	WARN_ON_ONCE(preemptible());
++
+ 	switch (action) {
+ 	case CPU_PM_ENTER:
+ 		rv = sdei_mask_local_cpu();
+@@ -765,7 +763,7 @@ static int sdei_device_freeze(struct device *dev)
+ 	int err;
+ 
+ 	/* unregister private events */
+-	cpuhp_remove_state(CPUHP_AP_ARM_SDEI_STARTING);
++	cpuhp_remove_state(sdei_entry_point);
+ 
+ 	err = sdei_unregister_shared();
+ 	if (err)
+@@ -786,12 +784,15 @@ static int sdei_device_thaw(struct device *dev)
+ 		return err;
+ 	}
+ 
+-	err = cpuhp_setup_state(CPUHP_AP_ARM_SDEI_STARTING, "SDEI",
++	err = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "SDEI",
+ 				&sdei_cpuhp_up, &sdei_cpuhp_down);
+-	if (err)
++	if (err < 0) {
+ 		pr_warn("Failed to re-register CPU hotplug notifier...\n");
++		return err;
++	}
+ 
+-	return err;
++	sdei_hp_state = err;
++	return 0;
+ }
+ 
+ static int sdei_device_restore(struct device *dev)
+@@ -823,7 +824,7 @@ static int sdei_reboot_notifier(struct notifier_block *nb, unsigned long action,
+ 	 * We are going to reset the interface, after this there is no point
+ 	 * doing work when we take CPUs offline.
+ 	 */
+-	cpuhp_remove_state(CPUHP_AP_ARM_SDEI_STARTING);
++	cpuhp_remove_state(sdei_hp_state);
+ 
+ 	sdei_platform_reset();
+ 
+@@ -1003,13 +1004,15 @@ static int sdei_probe(struct platform_device *pdev)
+ 		goto remove_cpupm;
+ 	}
+ 
+-	err = cpuhp_setup_state(CPUHP_AP_ARM_SDEI_STARTING, "SDEI",
++	err = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "SDEI",
+ 				&sdei_cpuhp_up, &sdei_cpuhp_down);
+-	if (err) {
++	if (err < 0) {
+ 		pr_warn("Failed to register CPU hotplug notifier...\n");
+ 		goto remove_reboot;
+ 	}
+ 
++	sdei_hp_state = err;
++
+ 	return 0;
+ 
+ remove_reboot:
+diff --git a/drivers/firmware/smccc/smccc.c b/drivers/firmware/smccc/smccc.c
+index 60ccf3e90d7de..db818f9dcb8ee 100644
+--- a/drivers/firmware/smccc/smccc.c
++++ b/drivers/firmware/smccc/smccc.c
+@@ -17,9 +17,13 @@ static enum arm_smccc_conduit smccc_conduit = SMCCC_CONDUIT_NONE;
+ 
+ bool __ro_after_init smccc_trng_available = false;
+ u64 __ro_after_init smccc_has_sve_hint = false;
++s32 __ro_after_init smccc_soc_id_version = SMCCC_RET_NOT_SUPPORTED;
++s32 __ro_after_init smccc_soc_id_revision = SMCCC_RET_NOT_SUPPORTED;
+ 
+ void __init arm_smccc_version_init(u32 version, enum arm_smccc_conduit conduit)
+ {
++	struct arm_smccc_res res;
++
+ 	smccc_version = version;
+ 	smccc_conduit = conduit;
+ 
+@@ -27,6 +31,18 @@ void __init arm_smccc_version_init(u32 version, enum arm_smccc_conduit conduit)
+ 	if (IS_ENABLED(CONFIG_ARM64_SVE) &&
+ 	    smccc_version >= ARM_SMCCC_VERSION_1_3)
+ 		smccc_has_sve_hint = true;
++
++	if ((smccc_version >= ARM_SMCCC_VERSION_1_2) &&
++	    (smccc_conduit != SMCCC_CONDUIT_NONE)) {
++		arm_smccc_1_1_invoke(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
++				     ARM_SMCCC_ARCH_SOC_ID, &res);
++		if ((s32)res.a0 >= 0) {
++			arm_smccc_1_1_invoke(ARM_SMCCC_ARCH_SOC_ID, 0, &res);
++			smccc_soc_id_version = (s32)res.a0;
++			arm_smccc_1_1_invoke(ARM_SMCCC_ARCH_SOC_ID, 1, &res);
++			smccc_soc_id_revision = (s32)res.a0;
++		}
++	}
+ }
+ 
+ enum arm_smccc_conduit arm_smccc_1_1_get_conduit(void)
+@@ -44,6 +60,16 @@ u32 arm_smccc_get_version(void)
+ }
+ EXPORT_SYMBOL_GPL(arm_smccc_get_version);
+ 
++s32 arm_smccc_get_soc_id_version(void)
++{
++	return smccc_soc_id_version;
++}
++
++s32 arm_smccc_get_soc_id_revision(void)
++{
++	return smccc_soc_id_revision;
++}
++
+ static int __init smccc_devices_init(void)
+ {
+ 	struct platform_device *pdev;
+diff --git a/drivers/firmware/smccc/soc_id.c b/drivers/firmware/smccc/soc_id.c
+index dd7c3d5e8b0bb..890eb454599a3 100644
+--- a/drivers/firmware/smccc/soc_id.c
++++ b/drivers/firmware/smccc/soc_id.c
+@@ -42,41 +42,23 @@ static int __init smccc_soc_init(void)
+ 	if (arm_smccc_get_version() < ARM_SMCCC_VERSION_1_2)
+ 		return 0;
+ 
+-	if (arm_smccc_1_1_get_conduit() == SMCCC_CONDUIT_NONE) {
+-		pr_err("%s: invalid SMCCC conduit\n", __func__);
+-		return -EOPNOTSUPP;
+-	}
+-
+-	arm_smccc_1_1_invoke(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
+-			     ARM_SMCCC_ARCH_SOC_ID, &res);
+-
+-	if ((int)res.a0 == SMCCC_RET_NOT_SUPPORTED) {
++	soc_id_version = arm_smccc_get_soc_id_version();
++	if (soc_id_version == SMCCC_RET_NOT_SUPPORTED) {
+ 		pr_info("ARCH_SOC_ID not implemented, skipping ....\n");
+ 		return 0;
+ 	}
+ 
+-	if ((int)res.a0 < 0) {
+-		pr_info("ARCH_FEATURES(ARCH_SOC_ID) returned error: %lx\n",
+-			res.a0);
+-		return -EINVAL;
+-	}
+-
+-	arm_smccc_1_1_invoke(ARM_SMCCC_ARCH_SOC_ID, 0, &res);
+-	if ((int)res.a0 < 0) {
++	if (soc_id_version < 0) {
+ 		pr_err("ARCH_SOC_ID(0) returned error: %lx\n", res.a0);
+ 		return -EINVAL;
+ 	}
+ 
+-	soc_id_version = res.a0;
+-
+-	arm_smccc_1_1_invoke(ARM_SMCCC_ARCH_SOC_ID, 1, &res);
+-	if ((int)res.a0 < 0) {
++	soc_id_rev = arm_smccc_get_soc_id_revision();
++	if (soc_id_rev < 0) {
+ 		pr_err("ARCH_SOC_ID(1) returned error: %lx\n", res.a0);
+ 		return -EINVAL;
+ 	}
+ 
+-	soc_id_rev = res.a0;
+-
+ 	soc_dev_attr = kzalloc(sizeof(*soc_dev_attr), GFP_KERNEL);
+ 	if (!soc_dev_attr)
+ 		return -ENOMEM;
+diff --git a/drivers/firmware/sysfb_simplefb.c b/drivers/firmware/sysfb_simplefb.c
+index 82c64cb9f5316..74363ed7501f6 100644
+--- a/drivers/firmware/sysfb_simplefb.c
++++ b/drivers/firmware/sysfb_simplefb.c
+@@ -51,7 +51,8 @@ __init bool sysfb_parse_mode(const struct screen_info *si,
+ 	 *
+ 	 * It's not easily possible to fix this in struct screen_info,
+ 	 * as this could break UAPI. The best solution is to compute
+-	 * bits_per_pixel here and ignore lfb_depth. In the loop below,
++	 * bits_per_pixel from the color bits, reserved bits and
++	 * reported lfb_depth, whichever is highest.  In the loop below,
+ 	 * ignore simplefb formats with alpha bits, as EFI and VESA
+ 	 * don't specify alpha channels.
+ 	 */
+@@ -60,6 +61,7 @@ __init bool sysfb_parse_mode(const struct screen_info *si,
+ 					  si->green_size + si->green_pos,
+ 					  si->blue_size + si->blue_pos),
+ 				     si->rsvd_size + si->rsvd_pos);
++		bits_per_pixel = max_t(u32, bits_per_pixel, si->lfb_depth);
+ 	} else {
+ 		bits_per_pixel = si->lfb_depth;
+ 	}
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.h
+index e9f2c11ea416c..be243adf3e657 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.h
+@@ -98,6 +98,8 @@ struct amdgpu_irq {
+ 	struct irq_domain		*domain; /* GPU irq controller domain */
+ 	unsigned			virq[AMDGPU_MAX_IRQ_SRC_ID];
+ 	uint32_t                        srbm_soft_reset;
++	u32                             retry_cam_doorbell_index;
++	bool                            retry_cam_enabled;
+ };
+ 
+ void amdgpu_irq_disable_all(struct amdgpu_device *adev);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c
+index 82e27bd4f0383..7e8b7171068dc 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c
+@@ -1432,13 +1432,31 @@ int amdgpu_mes_init_microcode(struct amdgpu_device *adev, int pipe)
+ 	struct amdgpu_firmware_info *info;
+ 	char ucode_prefix[30];
+ 	char fw_name[40];
++	bool need_retry = false;
+ 	int r;
+ 
+-	amdgpu_ucode_ip_version_decode(adev, GC_HWIP, ucode_prefix, sizeof(ucode_prefix));
+-	snprintf(fw_name, sizeof(fw_name), "amdgpu/%s_mes%s.bin",
+-		ucode_prefix,
+-		pipe == AMDGPU_MES_SCHED_PIPE ? "" : "1");
++	amdgpu_ucode_ip_version_decode(adev, GC_HWIP, ucode_prefix,
++				       sizeof(ucode_prefix));
++	if (adev->ip_versions[GC_HWIP][0] >= IP_VERSION(11, 0, 0)) {
++		snprintf(fw_name, sizeof(fw_name), "amdgpu/%s_mes%s.bin",
++			 ucode_prefix,
++			 pipe == AMDGPU_MES_SCHED_PIPE ? "_2" : "1");
++		need_retry = true;
++	} else {
++		snprintf(fw_name, sizeof(fw_name), "amdgpu/%s_mes%s.bin",
++			 ucode_prefix,
++			 pipe == AMDGPU_MES_SCHED_PIPE ? "" : "1");
++	}
++
+ 	r = amdgpu_ucode_request(adev, &adev->mes.fw[pipe], fw_name);
++	if (r && need_retry && pipe == AMDGPU_MES_SCHED_PIPE) {
++		snprintf(fw_name, sizeof(fw_name), "amdgpu/%s_mes.bin",
++			 ucode_prefix);
++		DRM_INFO("try to fall back to %s\n", fw_name);
++		r = amdgpu_ucode_request(adev, &adev->mes.fw[pipe],
++					 fw_name);
++	}
++
+ 	if (r)
+ 		goto out;
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+index 6983acc456b28..b1428068fef7f 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+@@ -8159,8 +8159,14 @@ static int gfx_v10_0_set_powergating_state(void *handle,
+ 	case IP_VERSION(10, 3, 3):
+ 	case IP_VERSION(10, 3, 6):
+ 	case IP_VERSION(10, 3, 7):
++		if (!enable)
++			amdgpu_gfx_off_ctrl(adev, false);
++
+ 		gfx_v10_cntl_pg(adev, enable);
+-		amdgpu_gfx_off_ctrl(adev, enable);
++
++		if (enable)
++			amdgpu_gfx_off_ctrl(adev, true);
++
+ 		break;
+ 	default:
+ 		break;
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
+index 7609d206012fa..6b6a06d742fcf 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
+@@ -4663,13 +4663,29 @@ static int gfx_v11_0_post_soft_reset(void *handle)
+ static uint64_t gfx_v11_0_get_gpu_clock_counter(struct amdgpu_device *adev)
+ {
+ 	uint64_t clock;
++	uint64_t clock_counter_lo, clock_counter_hi_pre, clock_counter_hi_after;
++
++	if (amdgpu_sriov_vf(adev)) {
++		amdgpu_gfx_off_ctrl(adev, false);
++		mutex_lock(&adev->gfx.gpu_clock_mutex);
++		clock_counter_hi_pre = (uint64_t)RREG32_SOC15(GC, 0, regCP_MES_MTIME_HI);
++		clock_counter_lo = (uint64_t)RREG32_SOC15(GC, 0, regCP_MES_MTIME_LO);
++		clock_counter_hi_after = (uint64_t)RREG32_SOC15(GC, 0, regCP_MES_MTIME_HI);
++		if (clock_counter_hi_pre != clock_counter_hi_after)
++			clock_counter_lo = (uint64_t)RREG32_SOC15(GC, 0, regCP_MES_MTIME_LO);
++		mutex_unlock(&adev->gfx.gpu_clock_mutex);
++		amdgpu_gfx_off_ctrl(adev, true);
++	} else {
++		preempt_disable();
++		clock_counter_hi_pre = (uint64_t)RREG32_SOC15(SMUIO, 0, regGOLDEN_TSC_COUNT_UPPER);
++		clock_counter_lo = (uint64_t)RREG32_SOC15(SMUIO, 0, regGOLDEN_TSC_COUNT_LOWER);
++		clock_counter_hi_after = (uint64_t)RREG32_SOC15(SMUIO, 0, regGOLDEN_TSC_COUNT_UPPER);
++		if (clock_counter_hi_pre != clock_counter_hi_after)
++			clock_counter_lo = (uint64_t)RREG32_SOC15(SMUIO, 0, regGOLDEN_TSC_COUNT_LOWER);
++		preempt_enable();
++	}
++	clock = clock_counter_lo | (clock_counter_hi_after << 32ULL);
+ 
+-	amdgpu_gfx_off_ctrl(adev, false);
+-	mutex_lock(&adev->gfx.gpu_clock_mutex);
+-	clock = (uint64_t)RREG32_SOC15(SMUIO, 0, regGOLDEN_TSC_COUNT_LOWER) |
+-		((uint64_t)RREG32_SOC15(SMUIO, 0, regGOLDEN_TSC_COUNT_UPPER) << 32ULL);
+-	mutex_unlock(&adev->gfx.gpu_clock_mutex);
+-	amdgpu_gfx_off_ctrl(adev, true);
+ 	return clock;
+ }
+ 
+@@ -5135,8 +5151,14 @@ static int gfx_v11_0_set_powergating_state(void *handle,
+ 		break;
+ 	case IP_VERSION(11, 0, 1):
+ 	case IP_VERSION(11, 0, 4):
++		if (!enable)
++			amdgpu_gfx_off_ctrl(adev, false);
++
+ 		gfx_v11_cntl_pg(adev, enable);
+-		amdgpu_gfx_off_ctrl(adev, enable);
++
++		if (enable)
++			amdgpu_gfx_off_ctrl(adev, true);
++
+ 		break;
+ 	default:
+ 		break;
+diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v11_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v11_0.c
+index abdb8d8421b1b..14ca327b602c4 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gmc_v11_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gmc_v11_0.c
+@@ -31,6 +31,8 @@
+ #include "umc_v8_10.h"
+ #include "athub/athub_3_0_0_sh_mask.h"
+ #include "athub/athub_3_0_0_offset.h"
++#include "dcn/dcn_3_2_0_offset.h"
++#include "dcn/dcn_3_2_0_sh_mask.h"
+ #include "oss/osssys_6_0_0_offset.h"
+ #include "ivsrcid/vmc/irqsrcs_vmc_1_0.h"
+ #include "navi10_enum.h"
+@@ -542,7 +544,24 @@ static void gmc_v11_0_get_vm_pte(struct amdgpu_device *adev,
+ 
+ static unsigned gmc_v11_0_get_vbios_fb_size(struct amdgpu_device *adev)
+ {
+-	return 0;
++	u32 d1vga_control = RREG32_SOC15(DCE, 0, regD1VGA_CONTROL);
++	unsigned size;
++
++	if (REG_GET_FIELD(d1vga_control, D1VGA_CONTROL, D1VGA_MODE_ENABLE)) {
++		size = AMDGPU_VBIOS_VGA_ALLOCATION;
++	} else {
++		u32 viewport;
++		u32 pitch;
++
++		viewport = RREG32_SOC15(DCE, 0, regHUBP0_DCSURF_PRI_VIEWPORT_DIMENSION);
++		pitch = RREG32_SOC15(DCE, 0, regHUBPREQ0_DCSURF_SURFACE_PITCH);
++		size = (REG_GET_FIELD(viewport,
++					HUBP0_DCSURF_PRI_VIEWPORT_DIMENSION, PRI_VIEWPORT_HEIGHT) *
++				REG_GET_FIELD(pitch, HUBPREQ0_DCSURF_SURFACE_PITCH, PITCH) *
++				4);
++	}
++
++	return size;
+ }
+ 
+ static const struct amdgpu_gmc_funcs gmc_v11_0_gmc_funcs = {
+diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+index 83d22dd8b8715..bc8b4e405b7a7 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+@@ -553,32 +553,49 @@ static int gmc_v9_0_process_interrupt(struct amdgpu_device *adev,
+ 	const char *mmhub_cid;
+ 	const char *hub_name;
+ 	u64 addr;
++	uint32_t cam_index = 0;
++	int ret;
+ 
+ 	addr = (u64)entry->src_data[0] << 12;
+ 	addr |= ((u64)entry->src_data[1] & 0xf) << 44;
+ 
+ 	if (retry_fault) {
+-		/* Returning 1 here also prevents sending the IV to the KFD */
++		if (adev->irq.retry_cam_enabled) {
++			/* Delegate it to a different ring if the hardware hasn't
++			 * already done it.
++			 */
++			if (entry->ih == &adev->irq.ih) {
++				amdgpu_irq_delegate(adev, entry, 8);
++				return 1;
++			}
++
++			cam_index = entry->src_data[2] & 0x3ff;
+ 
+-		/* Process it onyl if it's the first fault for this address */
+-		if (entry->ih != &adev->irq.ih_soft &&
+-		    amdgpu_gmc_filter_faults(adev, entry->ih, addr, entry->pasid,
++			ret = amdgpu_vm_handle_fault(adev, entry->pasid, addr, write_fault);
++			WDOORBELL32(adev->irq.retry_cam_doorbell_index, cam_index);
++			if (ret)
++				return 1;
++		} else {
++			/* Process it onyl if it's the first fault for this address */
++			if (entry->ih != &adev->irq.ih_soft &&
++			    amdgpu_gmc_filter_faults(adev, entry->ih, addr, entry->pasid,
+ 					     entry->timestamp))
+-			return 1;
++				return 1;
+ 
+-		/* Delegate it to a different ring if the hardware hasn't
+-		 * already done it.
+-		 */
+-		if (entry->ih == &adev->irq.ih) {
+-			amdgpu_irq_delegate(adev, entry, 8);
+-			return 1;
+-		}
++			/* Delegate it to a different ring if the hardware hasn't
++			 * already done it.
++			 */
++			if (entry->ih == &adev->irq.ih) {
++				amdgpu_irq_delegate(adev, entry, 8);
++				return 1;
++			}
+ 
+-		/* Try to handle the recoverable page faults by filling page
+-		 * tables
+-		 */
+-		if (amdgpu_vm_handle_fault(adev, entry->pasid, addr, write_fault))
+-			return 1;
++			/* Try to handle the recoverable page faults by filling page
++			 * tables
++			 */
++			if (amdgpu_vm_handle_fault(adev, entry->pasid, addr, write_fault))
++				return 1;
++		}
+ 	}
+ 
+ 	if (!printk_ratelimit())
+diff --git a/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c b/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
+index 5826eac270d79..d06b98f821899 100644
+--- a/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
+@@ -33,14 +33,19 @@
+ #include "mes_v11_api_def.h"
+ 
+ MODULE_FIRMWARE("amdgpu/gc_11_0_0_mes.bin");
++MODULE_FIRMWARE("amdgpu/gc_11_0_0_mes_2.bin");
+ MODULE_FIRMWARE("amdgpu/gc_11_0_0_mes1.bin");
+ MODULE_FIRMWARE("amdgpu/gc_11_0_1_mes.bin");
++MODULE_FIRMWARE("amdgpu/gc_11_0_1_mes_2.bin");
+ MODULE_FIRMWARE("amdgpu/gc_11_0_1_mes1.bin");
+ MODULE_FIRMWARE("amdgpu/gc_11_0_2_mes.bin");
++MODULE_FIRMWARE("amdgpu/gc_11_0_2_mes_2.bin");
+ MODULE_FIRMWARE("amdgpu/gc_11_0_2_mes1.bin");
+ MODULE_FIRMWARE("amdgpu/gc_11_0_3_mes.bin");
++MODULE_FIRMWARE("amdgpu/gc_11_0_3_mes_2.bin");
+ MODULE_FIRMWARE("amdgpu/gc_11_0_3_mes1.bin");
+ MODULE_FIRMWARE("amdgpu/gc_11_0_4_mes.bin");
++MODULE_FIRMWARE("amdgpu/gc_11_0_4_mes_2.bin");
+ MODULE_FIRMWARE("amdgpu/gc_11_0_4_mes1.bin");
+ 
+ static int mes_v11_0_hw_fini(void *handle);
+diff --git a/drivers/gpu/drm/amd/amdgpu/nbio_v7_4.c b/drivers/gpu/drm/amd/amdgpu/nbio_v7_4.c
+index 19455a7259391..685abf57ffddc 100644
+--- a/drivers/gpu/drm/amd/amdgpu/nbio_v7_4.c
++++ b/drivers/gpu/drm/amd/amdgpu/nbio_v7_4.c
+@@ -238,7 +238,7 @@ static void nbio_v7_4_ih_doorbell_range(struct amdgpu_device *adev,
+ 
+ 	if (use_doorbell) {
+ 		ih_doorbell_range = REG_SET_FIELD(ih_doorbell_range, BIF_IH_DOORBELL_RANGE, OFFSET, doorbell_index);
+-		ih_doorbell_range = REG_SET_FIELD(ih_doorbell_range, BIF_IH_DOORBELL_RANGE, SIZE, 4);
++		ih_doorbell_range = REG_SET_FIELD(ih_doorbell_range, BIF_IH_DOORBELL_RANGE, SIZE, 8);
+ 	} else
+ 		ih_doorbell_range = REG_SET_FIELD(ih_doorbell_range, BIF_IH_DOORBELL_RANGE, SIZE, 0);
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
+index 8b8ddf0502661..a4d84e3fe9381 100644
+--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
+@@ -1870,7 +1870,7 @@ static int sdma_v4_0_sw_fini(void *handle)
+ 			amdgpu_ring_fini(&adev->sdma.instance[i].page);
+ 	}
+ 
+-	if (adev->ip_versions[SDMA0_HWIP][0] == IP_VERSION(4, 2, 0) ||
++	if (adev->ip_versions[SDMA0_HWIP][0] == IP_VERSION(4, 2, 2) ||
+             adev->ip_versions[SDMA0_HWIP][0] == IP_VERSION(4, 4, 0))
+ 		amdgpu_sdma_destroy_inst_ctx(adev, true);
+ 	else
+diff --git a/drivers/gpu/drm/amd/amdgpu/vega20_ih.c b/drivers/gpu/drm/amd/amdgpu/vega20_ih.c
+index 1706081d054dd..6a8fb1fb48a3d 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vega20_ih.c
++++ b/drivers/gpu/drm/amd/amdgpu/vega20_ih.c
+@@ -38,6 +38,11 @@
+ #define mmIH_CHICKEN_ALDEBARAN			0x18d
+ #define mmIH_CHICKEN_ALDEBARAN_BASE_IDX		0
+ 
++#define mmIH_RETRY_INT_CAM_CNTL_ALDEBARAN		0x00ea
++#define mmIH_RETRY_INT_CAM_CNTL_ALDEBARAN_BASE_IDX	0
++#define IH_RETRY_INT_CAM_CNTL_ALDEBARAN__ENABLE__SHIFT	0x10
++#define IH_RETRY_INT_CAM_CNTL_ALDEBARAN__ENABLE_MASK	0x00010000L
++
+ static void vega20_ih_set_interrupt_funcs(struct amdgpu_device *adev);
+ 
+ /**
+@@ -251,36 +256,14 @@ static int vega20_ih_enable_ring(struct amdgpu_device *adev,
+ 	return 0;
+ }
+ 
+-/**
+- * vega20_ih_reroute_ih - reroute VMC/UTCL2 ih to an ih ring
+- *
+- * @adev: amdgpu_device pointer
+- *
+- * Reroute VMC and UMC interrupts on primary ih ring to
+- * ih ring 1 so they won't lose when bunches of page faults
+- * interrupts overwhelms the interrupt handler(VEGA20)
+- */
+-static void vega20_ih_reroute_ih(struct amdgpu_device *adev)
++static uint32_t vega20_setup_retry_doorbell(u32 doorbell_index)
+ {
+-	uint32_t tmp;
++	u32 val = 0;
+ 
+-	/* vega20 ih reroute will go through psp this
+-	 * function is used for newer asics starting arcturus
+-	 */
+-	if (adev->ip_versions[OSSSYS_HWIP][0] >= IP_VERSION(4, 2, 1)) {
+-		/* Reroute to IH ring 1 for VMC */
+-		WREG32_SOC15(OSSSYS, 0, mmIH_CLIENT_CFG_INDEX, 0x12);
+-		tmp = RREG32_SOC15(OSSSYS, 0, mmIH_CLIENT_CFG_DATA);
+-		tmp = REG_SET_FIELD(tmp, IH_CLIENT_CFG_DATA, CLIENT_TYPE, 1);
+-		tmp = REG_SET_FIELD(tmp, IH_CLIENT_CFG_DATA, RING_ID, 1);
+-		WREG32_SOC15(OSSSYS, 0, mmIH_CLIENT_CFG_DATA, tmp);
+-
+-		/* Reroute IH ring 1 for UTCL2 */
+-		WREG32_SOC15(OSSSYS, 0, mmIH_CLIENT_CFG_INDEX, 0x1B);
+-		tmp = RREG32_SOC15(OSSSYS, 0, mmIH_CLIENT_CFG_DATA);
+-		tmp = REG_SET_FIELD(tmp, IH_CLIENT_CFG_DATA, RING_ID, 1);
+-		WREG32_SOC15(OSSSYS, 0, mmIH_CLIENT_CFG_DATA, tmp);
+-	}
++	val = REG_SET_FIELD(val, IH_DOORBELL_RPTR, OFFSET, doorbell_index);
++	val = REG_SET_FIELD(val, IH_DOORBELL_RPTR, ENABLE, 1);
++
++	return val;
+ }
+ 
+ /**
+@@ -332,8 +315,6 @@ static int vega20_ih_irq_init(struct amdgpu_device *adev)
+ 
+ 	for (i = 0; i < ARRAY_SIZE(ih); i++) {
+ 		if (ih[i]->ring_size) {
+-			if (i == 1)
+-				vega20_ih_reroute_ih(adev);
+ 			ret = vega20_ih_enable_ring(adev, ih[i]);
+ 			if (ret)
+ 				return ret;
+@@ -346,6 +327,20 @@ static int vega20_ih_irq_init(struct amdgpu_device *adev)
+ 
+ 	pci_set_master(adev->pdev);
+ 
++	/* Allocate the doorbell for IH Retry CAM */
++	adev->irq.retry_cam_doorbell_index = (adev->doorbell_index.ih + 3) << 1;
++	WREG32_SOC15(OSSSYS, 0, mmIH_DOORBELL_RETRY_CAM,
++		vega20_setup_retry_doorbell(adev->irq.retry_cam_doorbell_index));
++
++	/* Enable IH Retry CAM */
++	if (adev->ip_versions[OSSSYS_HWIP][0] == IP_VERSION(4, 4, 0))
++		WREG32_FIELD15(OSSSYS, 0, IH_RETRY_INT_CAM_CNTL_ALDEBARAN,
++			       ENABLE, 1);
++	else
++		WREG32_FIELD15(OSSSYS, 0, IH_RETRY_INT_CAM_CNTL, ENABLE, 1);
++
++	adev->irq.retry_cam_enabled = true;
++
+ 	/* enable interrupts */
+ 	ret = vega20_ih_toggle_interrupts(adev, true);
+ 	if (ret)
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
+index dc6fd69670509..96a138a395150 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
+@@ -2172,7 +2172,15 @@ restart:
+ 		pr_debug("drain retry fault gpu %d svms %p\n", i, svms);
+ 
+ 		amdgpu_ih_wait_on_checkpoint_process_ts(pdd->dev->adev,
+-						     &pdd->dev->adev->irq.ih1);
++				pdd->dev->adev->irq.retry_cam_enabled ?
++				&pdd->dev->adev->irq.ih :
++				&pdd->dev->adev->irq.ih1);
++
++		if (pdd->dev->adev->irq.retry_cam_enabled)
++			amdgpu_ih_wait_on_checkpoint_process_ts(pdd->dev->adev,
++				&pdd->dev->adev->irq.ih_soft);
++
++
+ 		pr_debug("drain retry fault gpu %d svms 0x%p done\n", i, svms);
+ 	}
+ 	if (atomic_cmpxchg(&svms->drain_pagefaults, drain, 0) != drain)
+diff --git a/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c b/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
+index e381de2429fa6..ae3783a7d7f45 100644
+--- a/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
++++ b/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
+@@ -515,11 +515,8 @@ static enum bp_result get_gpio_i2c_info(
+ 	info->i2c_slave_address = record->i2c_slave_addr;
+ 
+ 	/* TODO: check how to get register offset for en, Y, etc. */
+-	info->gpio_info.clk_a_register_index =
+-			le16_to_cpu(
+-			header->gpio_pin[table_index].data_a_reg_index);
+-	info->gpio_info.clk_a_shift =
+-			header->gpio_pin[table_index].gpio_bitshift;
++	info->gpio_info.clk_a_register_index = le16_to_cpu(pin->data_a_reg_index);
++	info->gpio_info.clk_a_shift = pin->gpio_bitshift;
+ 
+ 	return BP_RESULT_OK;
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index d406d7b74c6c3..d4a1670a54506 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -73,6 +73,8 @@
+ 
+ #include "dc_trace.h"
+ 
++#include "hw_sequencer_private.h"
++
+ #include "dce/dmub_outbox.h"
+ 
+ #define CTX \
+@@ -1056,6 +1058,44 @@ static void apply_ctx_interdependent_lock(struct dc *dc, struct dc_state *contex
+ 	}
+ }
+ 
++static void phantom_pipe_blank(
++		struct dc *dc,
++		struct timing_generator *tg,
++		int width,
++		int height)
++{
++	struct dce_hwseq *hws = dc->hwseq;
++	enum dc_color_space color_space;
++	struct tg_color black_color = {0};
++	struct output_pixel_processor *opp = NULL;
++	uint32_t num_opps, opp_id_src0, opp_id_src1;
++	uint32_t otg_active_width, otg_active_height;
++
++	/* program opp dpg blank color */
++	color_space = COLOR_SPACE_SRGB;
++	color_space_to_black_color(dc, color_space, &black_color);
++
++	otg_active_width = width;
++	otg_active_height = height;
++
++	/* get the OPTC source */
++	tg->funcs->get_optc_source(tg, &num_opps, &opp_id_src0, &opp_id_src1);
++	ASSERT(opp_id_src0 < dc->res_pool->res_cap->num_opp);
++	opp = dc->res_pool->opps[opp_id_src0];
++
++	opp->funcs->opp_set_disp_pattern_generator(
++			opp,
++			CONTROLLER_DP_TEST_PATTERN_SOLID_COLOR,
++			CONTROLLER_DP_COLOR_SPACE_UDEFINED,
++			COLOR_DEPTH_UNDEFINED,
++			&black_color,
++			otg_active_width,
++			otg_active_height,
++			0);
++
++	hws->funcs.wait_for_blank_complete(opp);
++}
++
+ static void disable_dangling_plane(struct dc *dc, struct dc_state *context)
+ {
+ 	int i, j;
+@@ -1114,8 +1154,13 @@ static void disable_dangling_plane(struct dc *dc, struct dc_state *context)
+ 			 * again for different use.
+ 			 */
+ 			if (old_stream->mall_stream_config.type == SUBVP_PHANTOM) {
+-				if (tg->funcs->enable_crtc)
++				if (tg->funcs->enable_crtc) {
++					int main_pipe_width, main_pipe_height;
++					main_pipe_width = old_stream->mall_stream_config.paired_stream->dst.width;
++					main_pipe_height = old_stream->mall_stream_config.paired_stream->dst.height;
++					phantom_pipe_blank(dc, tg, main_pipe_width, main_pipe_height);
+ 					tg->funcs->enable_crtc(tg);
++				}
+ 			}
+ 			dc_rem_all_planes_for_stream(dc, old_stream, dangling_context);
+ 			disable_all_writeback_pipes_for_stream(dc, old_stream, dangling_context);
+diff --git a/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c b/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c
+index c2092775ca88f..7f27e29fae116 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c
++++ b/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c
+@@ -750,7 +750,8 @@ void dc_dmub_setup_subvp_dmub_command(struct dc *dc,
+ 					!pipe->top_pipe && !pipe->prev_odm_pipe &&
+ 					pipe->stream->mall_stream_config.type == SUBVP_MAIN) {
+ 				populate_subvp_cmd_pipe_info(dc, context, &cmd, pipe, cmd_pipe_index++);
+-			} else if (pipe->plane_state && pipe->stream->mall_stream_config.type == SUBVP_NONE) {
++			} else if (pipe->plane_state && pipe->stream->mall_stream_config.type == SUBVP_NONE &&
++				    !pipe->top_pipe && !pipe->prev_odm_pipe) {
+ 				// Don't need to check for ActiveDRAMClockChangeMargin < 0, not valid in cases where
+ 				// we run through DML without calculating "natural" P-state support
+ 				populate_subvp_cmd_vblank_pipe_info(dc, context, &cmd, pipe, cmd_pipe_index++);
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_transform.c b/drivers/gpu/drm/amd/display/dc/dce/dce_transform.c
+index d9fd4ec60588f..670d5ab9d9984 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dce_transform.c
++++ b/drivers/gpu/drm/amd/display/dc/dce/dce_transform.c
+@@ -1009,7 +1009,7 @@ static void dce_transform_set_pixel_storage_depth(
+ 		color_depth = COLOR_DEPTH_101010;
+ 		pixel_depth = 0;
+ 		expan_mode  = 1;
+-		BREAK_TO_DEBUGGER();
++		DC_LOG_DC("The pixel depth %d is not valid, set COLOR_DEPTH_101010 instead.", depth);
+ 		break;
+ 	}
+ 
+@@ -1023,8 +1023,7 @@ static void dce_transform_set_pixel_storage_depth(
+ 	if (!(xfm_dce->lb_pixel_depth_supported & depth)) {
+ 		/*we should use unsupported capabilities
+ 		 *  unless it is required by w/a*/
+-		DC_LOG_WARNING("%s: Capability not supported",
+-			__func__);
++		DC_LOG_DC("%s: Capability not supported", __func__);
+ 	}
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hwseq.c
+index b4df540c0c61e..36fa413f8b42e 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hwseq.c
+@@ -629,7 +629,8 @@ void dcn30_init_hw(struct dc *dc)
+ 	if (dc->clk_mgr->funcs->notify_wm_ranges)
+ 		dc->clk_mgr->funcs->notify_wm_ranges(dc->clk_mgr);
+ 
+-	if (dc->clk_mgr->funcs->set_hard_max_memclk)
++	//if softmax is enabled then hardmax will be set by a different call
++	if (dc->clk_mgr->funcs->set_hard_max_memclk && !dc->clk_mgr->dc_mode_softmax_enabled)
+ 		dc->clk_mgr->funcs->set_hard_max_memclk(dc->clk_mgr);
+ 
+ 	if (dc->res_pool->hubbub->funcs->force_pstate_change_control)
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_hwseq.c
+index d13e46eeee3c0..6d3f2335b9f1e 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_hwseq.c
+@@ -285,7 +285,7 @@ void dcn31_init_hw(struct dc *dc)
+ 	if (dc->clk_mgr->funcs->notify_wm_ranges)
+ 		dc->clk_mgr->funcs->notify_wm_ranges(dc->clk_mgr);
+ 
+-	if (dc->clk_mgr->funcs->set_hard_max_memclk)
++	if (dc->clk_mgr->funcs->set_hard_max_memclk && !dc->clk_mgr->dc_mode_softmax_enabled)
+ 		dc->clk_mgr->funcs->set_hard_max_memclk(dc->clk_mgr);
+ 
+ 	if (dc->res_pool->hubbub->funcs->force_pstate_change_control)
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hwseq.c
+index 823f29c292d05..184310fa52b1a 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hwseq.c
+@@ -895,7 +895,7 @@ void dcn32_init_hw(struct dc *dc)
+ 	if (dc->clk_mgr->funcs->notify_wm_ranges)
+ 		dc->clk_mgr->funcs->notify_wm_ranges(dc->clk_mgr);
+ 
+-	if (dc->clk_mgr->funcs->set_hard_max_memclk)
++	if (dc->clk_mgr->funcs->set_hard_max_memclk && !dc->clk_mgr->dc_mode_softmax_enabled)
+ 		dc->clk_mgr->funcs->set_hard_max_memclk(dc->clk_mgr);
+ 
+ 	if (dc->res_pool->hubbub->funcs->force_pstate_change_control)
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource_helpers.c b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource_helpers.c
+index 3a2d7bcc4b6d6..8310bcf651728 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource_helpers.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource_helpers.c
+@@ -261,6 +261,8 @@ bool dcn32_is_psr_capable(struct pipe_ctx *pipe)
+ 	return psr_capable;
+ }
+ 
++#define DCN3_2_NEW_DET_OVERRIDE_MIN_MULTIPLIER 7
++
+ /**
+  * *******************************************************************************************
+  * dcn32_determine_det_override: Determine DET allocation for each pipe
+@@ -272,7 +274,6 @@ bool dcn32_is_psr_capable(struct pipe_ctx *pipe)
+  * If there is a plane that's driven by more than 1 pipe (i.e. pipe split), then the
+  * number of DET for that given plane will be split among the pipes driving that plane.
+  *
+- *
+  * High level algorithm:
+  * 1. Split total DET among number of streams
+  * 2. For each stream, split DET among the planes
+@@ -280,6 +281,18 @@ bool dcn32_is_psr_capable(struct pipe_ctx *pipe)
+  *    among those pipes.
+  * 4. Assign the DET override to the DML pipes.
+  *
++ * Special cases:
++ *
++ * For two displays that have a large difference in pixel rate, we may experience
++ *  underflow on the larger display when we divide the DET equally. For this, we
++ *  will implement a modified algorithm to assign more DET to larger display.
++ *
++ * 1. Calculate difference in pixel rates ( multiplier ) between two displays
++ * 2. If the multiplier exceeds DCN3_2_NEW_DET_OVERRIDE_MIN_MULTIPLIER, then
++ *    implement the modified DET override algorithm.
++ * 3. Assign smaller DET size for lower pixel display and higher DET size for
++ *    higher pixel display
++ *
+  * @param [in]: dc: Current DC state
+  * @param [in]: context: New DC state to be programmed
+  * @param [in]: pipes: Array of DML pipes
+@@ -299,18 +312,46 @@ void dcn32_determine_det_override(struct dc *dc,
+ 	struct dc_plane_state *current_plane = NULL;
+ 	uint8_t stream_count = 0;
+ 
++	int phy_pix_clk_mult, lower_mode_stream_index;
++	int phy_pix_clk[MAX_PIPES] = {0};
++	bool use_new_det_override_algorithm = false;
++
+ 	for (i = 0; i < context->stream_count; i++) {
+ 		/* Don't count SubVP streams for DET allocation */
+ 		if (context->streams[i]->mall_stream_config.type != SUBVP_PHANTOM) {
++			phy_pix_clk[i] = context->streams[i]->phy_pix_clk;
+ 			stream_count++;
+ 		}
+ 	}
+ 
++	/* Check for special case with two displays, one with much higher pixel rate */
++	if (stream_count == 2) {
++		ASSERT(!phy_pix_clk[0] || !phy_pix_clk[1]);
++		if (phy_pix_clk[0] < phy_pix_clk[1]) {
++			lower_mode_stream_index = 0;
++			phy_pix_clk_mult = phy_pix_clk[1] / phy_pix_clk[0];
++		} else {
++			lower_mode_stream_index = 1;
++			phy_pix_clk_mult = phy_pix_clk[0] / phy_pix_clk[1];
++		}
++
++		if (phy_pix_clk_mult >= DCN3_2_NEW_DET_OVERRIDE_MIN_MULTIPLIER)
++			use_new_det_override_algorithm = true;
++	}
++
+ 	if (stream_count > 0) {
+ 		stream_segments = 18 / stream_count;
+ 		for (i = 0; i < context->stream_count; i++) {
+ 			if (context->streams[i]->mall_stream_config.type == SUBVP_PHANTOM)
+ 				continue;
++
++			if (use_new_det_override_algorithm) {
++				if (i == lower_mode_stream_index)
++					stream_segments = 4;
++				else
++					stream_segments = 14;
++			}
++
+ 			if (context->stream_status[i].plane_count > 0)
+ 				plane_segments = stream_segments / context->stream_status[i].plane_count;
+ 			else
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c b/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c
+index 899105da04335..111eb978520ac 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c
+@@ -4865,7 +4865,7 @@ void dml30_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l
+ 							v->DETBufferSizeCThisState[k],
+ 							&v->UrgentBurstFactorCursorPre[k],
+ 							&v->UrgentBurstFactorLumaPre[k],
+-							&v->UrgentBurstFactorChroma[k],
++							&v->UrgentBurstFactorChromaPre[k],
+ 							&v->NoUrgentLatencyHidingPre[k]);
+ 				}
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn31/display_mode_vba_31.c b/drivers/gpu/drm/amd/display/dc/dml/dcn31/display_mode_vba_31.c
+index 2b57f5b2362a4..bd674dc30df33 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn31/display_mode_vba_31.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn31/display_mode_vba_31.c
+@@ -4307,11 +4307,11 @@ void dml31_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l
+ 							v->AudioSampleRate[k],
+ 							v->AudioSampleLayout[k],
+ 							v->ODMCombineEnablePerState[i][k]);
+-				} else if (v->Output[k] == dm_dp || v->Output[k] == dm_edp) {
++				} else if (v->Output[k] == dm_dp || v->Output[k] == dm_edp || v->Output[k] == dm_dp2p0) {
+ 					if (v->DSCEnable[k] == true) {
+ 						v->RequiresDSC[i][k] = true;
+ 						v->LinkDSCEnable = true;
+-						if (v->Output[k] == dm_dp) {
++						if (v->Output[k] == dm_dp || v->Output[k] == dm_dp2p0) {
+ 							v->RequiresFEC[i][k] = true;
+ 						} else {
+ 							v->RequiresFEC[i][k] = false;
+@@ -4319,107 +4319,201 @@ void dml31_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l
+ 					} else {
+ 						v->RequiresDSC[i][k] = false;
+ 						v->LinkDSCEnable = false;
+-						v->RequiresFEC[i][k] = false;
+-					}
+-
+-					v->Outbpp = BPP_INVALID;
+-					if (v->PHYCLKPerState[i] >= 270.0) {
+-						v->Outbpp = TruncToValidBPP(
+-								(1.0 - v->Downspreading / 100.0) * 2700,
+-								v->OutputLinkDPLanes[k],
+-								v->HTotal[k],
+-								v->HActive[k],
+-								v->PixelClockBackEnd[k],
+-								v->ForcedOutputLinkBPP[k],
+-								v->LinkDSCEnable,
+-								v->Output[k],
+-								v->OutputFormat[k],
+-								v->DSCInputBitPerComponent[k],
+-								v->NumberOfDSCSlices[k],
+-								v->AudioSampleRate[k],
+-								v->AudioSampleLayout[k],
+-								v->ODMCombineEnablePerState[i][k]);
+-						v->OutputBppPerState[i][k] = v->Outbpp;
+-						// TODO: Need some other way to handle this nonsense
+-						// v->OutputTypeAndRatePerState[i][k] = v->Output[k] & " HBR"
+-					}
+-					if (v->Outbpp == BPP_INVALID && v->PHYCLKPerState[i] >= 540.0) {
+-						v->Outbpp = TruncToValidBPP(
+-								(1.0 - v->Downspreading / 100.0) * 5400,
+-								v->OutputLinkDPLanes[k],
+-								v->HTotal[k],
+-								v->HActive[k],
+-								v->PixelClockBackEnd[k],
+-								v->ForcedOutputLinkBPP[k],
+-								v->LinkDSCEnable,
+-								v->Output[k],
+-								v->OutputFormat[k],
+-								v->DSCInputBitPerComponent[k],
+-								v->NumberOfDSCSlices[k],
+-								v->AudioSampleRate[k],
+-								v->AudioSampleLayout[k],
+-								v->ODMCombineEnablePerState[i][k]);
+-						v->OutputBppPerState[i][k] = v->Outbpp;
+-						// TODO: Need some other way to handle this nonsense
+-						// v->OutputTypeAndRatePerState[i][k] = v->Output[k] & " HBR2"
+-					}
+-					if (v->Outbpp == BPP_INVALID && v->PHYCLKPerState[i] >= 810.0) {
+-						v->Outbpp = TruncToValidBPP(
+-								(1.0 - v->Downspreading / 100.0) * 8100,
+-								v->OutputLinkDPLanes[k],
+-								v->HTotal[k],
+-								v->HActive[k],
+-								v->PixelClockBackEnd[k],
+-								v->ForcedOutputLinkBPP[k],
+-								v->LinkDSCEnable,
+-								v->Output[k],
+-								v->OutputFormat[k],
+-								v->DSCInputBitPerComponent[k],
+-								v->NumberOfDSCSlices[k],
+-								v->AudioSampleRate[k],
+-								v->AudioSampleLayout[k],
+-								v->ODMCombineEnablePerState[i][k]);
+-						v->OutputBppPerState[i][k] = v->Outbpp;
+-						// TODO: Need some other way to handle this nonsense
+-						// v->OutputTypeAndRatePerState[i][k] = v->Output[k] & " HBR3"
+-					}
+-					if (v->Outbpp == BPP_INVALID && v->PHYCLKD18PerState[i] >= 10000.0 / 18) {
+-						v->Outbpp = TruncToValidBPP(
+-								(1.0 - v->Downspreading / 100.0) * 10000,
+-								4,
+-								v->HTotal[k],
+-								v->HActive[k],
+-								v->PixelClockBackEnd[k],
+-								v->ForcedOutputLinkBPP[k],
+-								v->LinkDSCEnable,
+-								v->Output[k],
+-								v->OutputFormat[k],
+-								v->DSCInputBitPerComponent[k],
+-								v->NumberOfDSCSlices[k],
+-								v->AudioSampleRate[k],
+-								v->AudioSampleLayout[k],
+-								v->ODMCombineEnablePerState[i][k]);
+-						v->OutputBppPerState[i][k] = v->Outbpp;
+-						//v->OutputTypeAndRatePerState[i][k] = v->Output[k] & "10x4";
++						if (v->Output[k] == dm_dp2p0) {
++							v->RequiresFEC[i][k] = true;
++						} else {
++							v->RequiresFEC[i][k] = false;
++						}
+ 					}
+-					if (v->Outbpp == BPP_INVALID && v->PHYCLKD18PerState[i] >= 12000.0 / 18) {
+-						v->Outbpp = TruncToValidBPP(
+-								12000,
+-								4,
+-								v->HTotal[k],
+-								v->HActive[k],
+-								v->PixelClockBackEnd[k],
+-								v->ForcedOutputLinkBPP[k],
+-								v->LinkDSCEnable,
+-								v->Output[k],
+-								v->OutputFormat[k],
+-								v->DSCInputBitPerComponent[k],
+-								v->NumberOfDSCSlices[k],
+-								v->AudioSampleRate[k],
+-								v->AudioSampleLayout[k],
+-								v->ODMCombineEnablePerState[i][k]);
+-						v->OutputBppPerState[i][k] = v->Outbpp;
+-						//v->OutputTypeAndRatePerState[i][k] = v->Output[k] & "12x4";
++					if (v->Output[k] == dm_dp2p0) {
++						v->Outbpp = BPP_INVALID;
++						if ((v->OutputLinkDPRate[k] == dm_dp_rate_na || v->OutputLinkDPRate[k] == dm_dp_rate_uhbr10) &&
++							v->PHYCLKD18PerState[k] >= 10000.0 / 18.0) {
++							v->Outbpp = TruncToValidBPP(
++									(1.0 - v->Downspreading / 100.0) * 10000,
++									v->OutputLinkDPLanes[k],
++									v->HTotal[k],
++									v->HActive[k],
++									v->PixelClockBackEnd[k],
++									v->ForcedOutputLinkBPP[k],
++									v->LinkDSCEnable,
++									v->Output[k],
++									v->OutputFormat[k],
++									v->DSCInputBitPerComponent[k],
++									v->NumberOfDSCSlices[k],
++									v->AudioSampleRate[k],
++									v->AudioSampleLayout[k],
++									v->ODMCombineEnablePerState[i][k]);
++							if (v->Outbpp == BPP_INVALID && v->PHYCLKD18PerState[k] < 13500.0 / 18.0 &&
++								v->DSCEnable[k] == true && v->ForcedOutputLinkBPP[k] == 0) {
++								v->RequiresDSC[i][k] = true;
++								v->LinkDSCEnable = true;
++								v->Outbpp = TruncToValidBPP(
++										(1.0 - v->Downspreading / 100.0) * 10000,
++										v->OutputLinkDPLanes[k],
++										v->HTotal[k],
++										v->HActive[k],
++										v->PixelClockBackEnd[k],
++										v->ForcedOutputLinkBPP[k],
++										v->LinkDSCEnable,
++										v->Output[k],
++										v->OutputFormat[k],
++										v->DSCInputBitPerComponent[k],
++										v->NumberOfDSCSlices[k],
++										v->AudioSampleRate[k],
++										v->AudioSampleLayout[k],
++										v->ODMCombineEnablePerState[i][k]);
++							}
++							v->OutputBppPerState[i][k] = v->Outbpp;
++							// TODO: Need some other way to handle this nonsense
++							// v->OutputTypeAndRatePerState[i][k] = v->Output[k] & " UHBR10"
++						}
++						if (v->Outbpp == BPP_INVALID &&
++							(v->OutputLinkDPRate[k] == dm_dp_rate_na || v->OutputLinkDPRate[k] == dm_dp_rate_uhbr13p5) &&
++							v->PHYCLKD18PerState[k] >= 13500.0 / 18.0) {
++							v->Outbpp = TruncToValidBPP(
++									(1.0 - v->Downspreading / 100.0) * 13500,
++									v->OutputLinkDPLanes[k],
++									v->HTotal[k],
++									v->HActive[k],
++									v->PixelClockBackEnd[k],
++									v->ForcedOutputLinkBPP[k],
++									v->LinkDSCEnable,
++									v->Output[k],
++									v->OutputFormat[k],
++									v->DSCInputBitPerComponent[k],
++									v->NumberOfDSCSlices[k],
++									v->AudioSampleRate[k],
++									v->AudioSampleLayout[k],
++									v->ODMCombineEnablePerState[i][k]);
++							if (v->Outbpp == BPP_INVALID && v->PHYCLKD18PerState[k] < 20000.0 / 18.0 &&
++								v->DSCEnable[k] == true && v->ForcedOutputLinkBPP[k] == 0) {
++								v->RequiresDSC[i][k] = true;
++								v->LinkDSCEnable = true;
++								v->Outbpp = TruncToValidBPP(
++										(1.0 - v->Downspreading / 100.0) * 13500,
++										v->OutputLinkDPLanes[k],
++										v->HTotal[k],
++										v->HActive[k],
++										v->PixelClockBackEnd[k],
++										v->ForcedOutputLinkBPP[k],
++										v->LinkDSCEnable,
++										v->Output[k],
++										v->OutputFormat[k],
++										v->DSCInputBitPerComponent[k],
++										v->NumberOfDSCSlices[k],
++										v->AudioSampleRate[k],
++										v->AudioSampleLayout[k],
++										v->ODMCombineEnablePerState[i][k]);
++							}
++							v->OutputBppPerState[i][k] = v->Outbpp;
++							// TODO: Need some other way to handle this nonsense
++							// v->OutputTypeAndRatePerState[i][k] = v->Output[k] & " UHBR13p5"
++						}
++						if (v->Outbpp == BPP_INVALID &&
++							(v->OutputLinkDPRate[k] == dm_dp_rate_na || v->OutputLinkDPRate[k] == dm_dp_rate_uhbr20) &&
++							v->PHYCLKD18PerState[k] >= 20000.0 / 18.0) {
++							v->Outbpp = TruncToValidBPP(
++									(1.0 - v->Downspreading / 100.0) * 20000,
++									v->OutputLinkDPLanes[k],
++									v->HTotal[k],
++									v->HActive[k],
++									v->PixelClockBackEnd[k],
++									v->ForcedOutputLinkBPP[k],
++									v->LinkDSCEnable,
++									v->Output[k],
++									v->OutputFormat[k],
++									v->DSCInputBitPerComponent[k],
++									v->NumberOfDSCSlices[k],
++									v->AudioSampleRate[k],
++									v->AudioSampleLayout[k],
++									v->ODMCombineEnablePerState[i][k]);
++							if (v->Outbpp == BPP_INVALID && v->DSCEnable[k] == true &&
++								v->ForcedOutputLinkBPP[k] == 0) {
++								v->RequiresDSC[i][k] = true;
++								v->LinkDSCEnable = true;
++								v->Outbpp = TruncToValidBPP(
++										(1.0 - v->Downspreading / 100.0) * 20000,
++										v->OutputLinkDPLanes[k],
++										v->HTotal[k],
++										v->HActive[k],
++										v->PixelClockBackEnd[k],
++										v->ForcedOutputLinkBPP[k],
++										v->LinkDSCEnable,
++										v->Output[k],
++										v->OutputFormat[k],
++										v->DSCInputBitPerComponent[k],
++										v->NumberOfDSCSlices[k],
++										v->AudioSampleRate[k],
++										v->AudioSampleLayout[k],
++										v->ODMCombineEnablePerState[i][k]);
++							}
++							v->OutputBppPerState[i][k] = v->Outbpp;
++							// TODO: Need some other way to handle this nonsense
++							// v->OutputTypeAndRatePerState[i][k] = v->Output[k] & " UHBR20"
++						}
++					} else {
++						v->Outbpp = BPP_INVALID;
++						if (v->PHYCLKPerState[i] >= 270.0) {
++							v->Outbpp = TruncToValidBPP(
++									(1.0 - v->Downspreading / 100.0) * 2700,
++									v->OutputLinkDPLanes[k],
++									v->HTotal[k],
++									v->HActive[k],
++									v->PixelClockBackEnd[k],
++									v->ForcedOutputLinkBPP[k],
++									v->LinkDSCEnable,
++									v->Output[k],
++									v->OutputFormat[k],
++									v->DSCInputBitPerComponent[k],
++									v->NumberOfDSCSlices[k],
++									v->AudioSampleRate[k],
++									v->AudioSampleLayout[k],
++									v->ODMCombineEnablePerState[i][k]);
++							v->OutputBppPerState[i][k] = v->Outbpp;
++							// TODO: Need some other way to handle this nonsense
++							// v->OutputTypeAndRatePerState[i][k] = v->Output[k] & " HBR"
++						}
++						if (v->Outbpp == BPP_INVALID && v->PHYCLKPerState[i] >= 540.0) {
++							v->Outbpp = TruncToValidBPP(
++									(1.0 - v->Downspreading / 100.0) * 5400,
++									v->OutputLinkDPLanes[k],
++									v->HTotal[k],
++									v->HActive[k],
++									v->PixelClockBackEnd[k],
++									v->ForcedOutputLinkBPP[k],
++									v->LinkDSCEnable,
++									v->Output[k],
++									v->OutputFormat[k],
++									v->DSCInputBitPerComponent[k],
++									v->NumberOfDSCSlices[k],
++									v->AudioSampleRate[k],
++									v->AudioSampleLayout[k],
++									v->ODMCombineEnablePerState[i][k]);
++							v->OutputBppPerState[i][k] = v->Outbpp;
++							// TODO: Need some other way to handle this nonsense
++							// v->OutputTypeAndRatePerState[i][k] = v->Output[k] & " HBR2"
++						}
++						if (v->Outbpp == BPP_INVALID && v->PHYCLKPerState[i] >= 810.0) {
++							v->Outbpp = TruncToValidBPP(
++									(1.0 - v->Downspreading / 100.0) * 8100,
++									v->OutputLinkDPLanes[k],
++									v->HTotal[k],
++									v->HActive[k],
++									v->PixelClockBackEnd[k],
++									v->ForcedOutputLinkBPP[k],
++									v->LinkDSCEnable,
++									v->Output[k],
++									v->OutputFormat[k],
++									v->DSCInputBitPerComponent[k],
++									v->NumberOfDSCSlices[k],
++									v->AudioSampleRate[k],
++									v->AudioSampleLayout[k],
++									v->ODMCombineEnablePerState[i][k]);
++							v->OutputBppPerState[i][k] = v->Outbpp;
++							// TODO: Need some other way to handle this nonsense
++							// v->OutputTypeAndRatePerState[i][k] = v->Output[k] & " HBR3"
++						}
+ 					}
+ 				}
+ 			} else {
+@@ -5097,7 +5191,7 @@ void dml31_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l
+ 							v->DETBufferSizeCThisState[k],
+ 							&v->UrgentBurstFactorCursorPre[k],
+ 							&v->UrgentBurstFactorLumaPre[k],
+-							&v->UrgentBurstFactorChroma[k],
++							&v->UrgentBurstFactorChromaPre[k],
+ 							&v->NotUrgentLatencyHidingPre[k]);
+ 				}
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn314/dcn314_fpu.c b/drivers/gpu/drm/amd/display/dc/dml/dcn314/dcn314_fpu.c
+index 3bbc46a673355..28163f7acd5b2 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn314/dcn314_fpu.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn314/dcn314_fpu.c
+@@ -308,6 +308,10 @@ int dcn314_populate_dml_pipes_from_context_fpu(struct dc *dc, struct dc_state *c
+ 				pipe->plane_state->src_rect.width < pipe->plane_state->dst_rect.width))
+ 			upscaled = true;
+ 
++		/* Apply HostVM policy - either based on hypervisor globally enabled, or rIOMMU active */
++		if (dc->debug.dml_hostvm_override == DML_HOSTVM_NO_OVERRIDE)
++			pipes[i].pipe.src.hostvm = dc->vm_pa_config.is_hvm_enabled || dc->res_pool->hubbub->riommu_active;
++
+ 		/*
+ 		 * Immediate flip can be set dynamically after enabling the plane.
+ 		 * We need to require support for immediate flip or underflow can be
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn314/display_mode_vba_314.c b/drivers/gpu/drm/amd/display/dc/dml/dcn314/display_mode_vba_314.c
+index 461ab6d2030e2..7eb2173b7691e 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn314/display_mode_vba_314.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn314/display_mode_vba_314.c
+@@ -4405,11 +4405,11 @@ void dml314_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_
+ 							v->AudioSampleRate[k],
+ 							v->AudioSampleLayout[k],
+ 							v->ODMCombineEnablePerState[i][k]);
+-				} else if (v->Output[k] == dm_dp || v->Output[k] == dm_edp) {
++				} else if (v->Output[k] == dm_dp || v->Output[k] == dm_edp || v->Output[k] == dm_dp2p0) {
+ 					if (v->DSCEnable[k] == true) {
+ 						v->RequiresDSC[i][k] = true;
+ 						v->LinkDSCEnable = true;
+-						if (v->Output[k] == dm_dp) {
++						if (v->Output[k] == dm_dp || v->Output[k] == dm_dp2p0) {
+ 							v->RequiresFEC[i][k] = true;
+ 						} else {
+ 							v->RequiresFEC[i][k] = false;
+@@ -4417,107 +4417,201 @@ void dml314_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_
+ 					} else {
+ 						v->RequiresDSC[i][k] = false;
+ 						v->LinkDSCEnable = false;
+-						v->RequiresFEC[i][k] = false;
+-					}
+-
+-					v->Outbpp = BPP_INVALID;
+-					if (v->PHYCLKPerState[i] >= 270.0) {
+-						v->Outbpp = TruncToValidBPP(
+-								(1.0 - v->Downspreading / 100.0) * 2700,
+-								v->OutputLinkDPLanes[k],
+-								v->HTotal[k],
+-								v->HActive[k],
+-								v->PixelClockBackEnd[k],
+-								v->ForcedOutputLinkBPP[k],
+-								v->LinkDSCEnable,
+-								v->Output[k],
+-								v->OutputFormat[k],
+-								v->DSCInputBitPerComponent[k],
+-								v->NumberOfDSCSlices[k],
+-								v->AudioSampleRate[k],
+-								v->AudioSampleLayout[k],
+-								v->ODMCombineEnablePerState[i][k]);
+-						v->OutputBppPerState[i][k] = v->Outbpp;
+-						// TODO: Need some other way to handle this nonsense
+-						// v->OutputTypeAndRatePerState[i][k] = v->Output[k] & " HBR"
+-					}
+-					if (v->Outbpp == BPP_INVALID && v->PHYCLKPerState[i] >= 540.0) {
+-						v->Outbpp = TruncToValidBPP(
+-								(1.0 - v->Downspreading / 100.0) * 5400,
+-								v->OutputLinkDPLanes[k],
+-								v->HTotal[k],
+-								v->HActive[k],
+-								v->PixelClockBackEnd[k],
+-								v->ForcedOutputLinkBPP[k],
+-								v->LinkDSCEnable,
+-								v->Output[k],
+-								v->OutputFormat[k],
+-								v->DSCInputBitPerComponent[k],
+-								v->NumberOfDSCSlices[k],
+-								v->AudioSampleRate[k],
+-								v->AudioSampleLayout[k],
+-								v->ODMCombineEnablePerState[i][k]);
+-						v->OutputBppPerState[i][k] = v->Outbpp;
+-						// TODO: Need some other way to handle this nonsense
+-						// v->OutputTypeAndRatePerState[i][k] = v->Output[k] & " HBR2"
+-					}
+-					if (v->Outbpp == BPP_INVALID && v->PHYCLKPerState[i] >= 810.0) {
+-						v->Outbpp = TruncToValidBPP(
+-								(1.0 - v->Downspreading / 100.0) * 8100,
+-								v->OutputLinkDPLanes[k],
+-								v->HTotal[k],
+-								v->HActive[k],
+-								v->PixelClockBackEnd[k],
+-								v->ForcedOutputLinkBPP[k],
+-								v->LinkDSCEnable,
+-								v->Output[k],
+-								v->OutputFormat[k],
+-								v->DSCInputBitPerComponent[k],
+-								v->NumberOfDSCSlices[k],
+-								v->AudioSampleRate[k],
+-								v->AudioSampleLayout[k],
+-								v->ODMCombineEnablePerState[i][k]);
+-						v->OutputBppPerState[i][k] = v->Outbpp;
+-						// TODO: Need some other way to handle this nonsense
+-						// v->OutputTypeAndRatePerState[i][k] = v->Output[k] & " HBR3"
+-					}
+-					if (v->Outbpp == BPP_INVALID && v->PHYCLKD18PerState[i] >= 10000.0 / 18) {
+-						v->Outbpp = TruncToValidBPP(
+-								(1.0 - v->Downspreading / 100.0) * 10000,
+-								4,
+-								v->HTotal[k],
+-								v->HActive[k],
+-								v->PixelClockBackEnd[k],
+-								v->ForcedOutputLinkBPP[k],
+-								v->LinkDSCEnable,
+-								v->Output[k],
+-								v->OutputFormat[k],
+-								v->DSCInputBitPerComponent[k],
+-								v->NumberOfDSCSlices[k],
+-								v->AudioSampleRate[k],
+-								v->AudioSampleLayout[k],
+-								v->ODMCombineEnablePerState[i][k]);
+-						v->OutputBppPerState[i][k] = v->Outbpp;
+-						//v->OutputTypeAndRatePerState[i][k] = v->Output[k] & "10x4";
++						if (v->Output[k] == dm_dp2p0) {
++							v->RequiresFEC[i][k] = true;
++						} else {
++							v->RequiresFEC[i][k] = false;
++						}
+ 					}
+-					if (v->Outbpp == BPP_INVALID && v->PHYCLKD18PerState[i] >= 12000.0 / 18) {
+-						v->Outbpp = TruncToValidBPP(
+-								12000,
+-								4,
+-								v->HTotal[k],
+-								v->HActive[k],
+-								v->PixelClockBackEnd[k],
+-								v->ForcedOutputLinkBPP[k],
+-								v->LinkDSCEnable,
+-								v->Output[k],
+-								v->OutputFormat[k],
+-								v->DSCInputBitPerComponent[k],
+-								v->NumberOfDSCSlices[k],
+-								v->AudioSampleRate[k],
+-								v->AudioSampleLayout[k],
+-								v->ODMCombineEnablePerState[i][k]);
+-						v->OutputBppPerState[i][k] = v->Outbpp;
+-						//v->OutputTypeAndRatePerState[i][k] = v->Output[k] & "12x4";
++					if (v->Output[k] == dm_dp2p0) {
++						v->Outbpp = BPP_INVALID;
++						if ((v->OutputLinkDPRate[k] == dm_dp_rate_na || v->OutputLinkDPRate[k] == dm_dp_rate_uhbr10) &&
++							v->PHYCLKD18PerState[k] >= 10000.0 / 18.0) {
++							v->Outbpp = TruncToValidBPP(
++									(1.0 - v->Downspreading / 100.0) * 10000,
++									v->OutputLinkDPLanes[k],
++									v->HTotal[k],
++									v->HActive[k],
++									v->PixelClockBackEnd[k],
++									v->ForcedOutputLinkBPP[k],
++									v->LinkDSCEnable,
++									v->Output[k],
++									v->OutputFormat[k],
++									v->DSCInputBitPerComponent[k],
++									v->NumberOfDSCSlices[k],
++									v->AudioSampleRate[k],
++									v->AudioSampleLayout[k],
++									v->ODMCombineEnablePerState[i][k]);
++							if (v->Outbpp == BPP_INVALID && v->PHYCLKD18PerState[k] < 13500.0 / 18.0 &&
++								v->DSCEnable[k] == true && v->ForcedOutputLinkBPP[k] == 0) {
++								v->RequiresDSC[i][k] = true;
++								v->LinkDSCEnable = true;
++								v->Outbpp = TruncToValidBPP(
++										(1.0 - v->Downspreading / 100.0) * 10000,
++										v->OutputLinkDPLanes[k],
++										v->HTotal[k],
++										v->HActive[k],
++										v->PixelClockBackEnd[k],
++										v->ForcedOutputLinkBPP[k],
++										v->LinkDSCEnable,
++										v->Output[k],
++										v->OutputFormat[k],
++										v->DSCInputBitPerComponent[k],
++										v->NumberOfDSCSlices[k],
++										v->AudioSampleRate[k],
++										v->AudioSampleLayout[k],
++										v->ODMCombineEnablePerState[i][k]);
++							}
++							v->OutputBppPerState[i][k] = v->Outbpp;
++							// TODO: Need some other way to handle this nonsense
++							// v->OutputTypeAndRatePerState[i][k] = v->Output[k] & " UHBR10"
++						}
++						if (v->Outbpp == BPP_INVALID &&
++							(v->OutputLinkDPRate[k] == dm_dp_rate_na || v->OutputLinkDPRate[k] == dm_dp_rate_uhbr13p5) &&
++							v->PHYCLKD18PerState[k] >= 13500.0 / 18.0) {
++							v->Outbpp = TruncToValidBPP(
++									(1.0 - v->Downspreading / 100.0) * 13500,
++									v->OutputLinkDPLanes[k],
++									v->HTotal[k],
++									v->HActive[k],
++									v->PixelClockBackEnd[k],
++									v->ForcedOutputLinkBPP[k],
++									v->LinkDSCEnable,
++									v->Output[k],
++									v->OutputFormat[k],
++									v->DSCInputBitPerComponent[k],
++									v->NumberOfDSCSlices[k],
++									v->AudioSampleRate[k],
++									v->AudioSampleLayout[k],
++									v->ODMCombineEnablePerState[i][k]);
++							if (v->Outbpp == BPP_INVALID && v->PHYCLKD18PerState[k] < 20000.0 / 18.0 &&
++								v->DSCEnable[k] == true && v->ForcedOutputLinkBPP[k] == 0) {
++								v->RequiresDSC[i][k] = true;
++								v->LinkDSCEnable = true;
++								v->Outbpp = TruncToValidBPP(
++										(1.0 - v->Downspreading / 100.0) * 13500,
++										v->OutputLinkDPLanes[k],
++										v->HTotal[k],
++										v->HActive[k],
++										v->PixelClockBackEnd[k],
++										v->ForcedOutputLinkBPP[k],
++										v->LinkDSCEnable,
++										v->Output[k],
++										v->OutputFormat[k],
++										v->DSCInputBitPerComponent[k],
++										v->NumberOfDSCSlices[k],
++										v->AudioSampleRate[k],
++										v->AudioSampleLayout[k],
++										v->ODMCombineEnablePerState[i][k]);
++							}
++							v->OutputBppPerState[i][k] = v->Outbpp;
++							// TODO: Need some other way to handle this nonsense
++							// v->OutputTypeAndRatePerState[i][k] = v->Output[k] & " UHBR13p5"
++						}
++						if (v->Outbpp == BPP_INVALID &&
++							(v->OutputLinkDPRate[k] == dm_dp_rate_na || v->OutputLinkDPRate[k] == dm_dp_rate_uhbr20) &&
++							v->PHYCLKD18PerState[k] >= 20000.0 / 18.0) {
++							v->Outbpp = TruncToValidBPP(
++									(1.0 - v->Downspreading / 100.0) * 20000,
++									v->OutputLinkDPLanes[k],
++									v->HTotal[k],
++									v->HActive[k],
++									v->PixelClockBackEnd[k],
++									v->ForcedOutputLinkBPP[k],
++									v->LinkDSCEnable,
++									v->Output[k],
++									v->OutputFormat[k],
++									v->DSCInputBitPerComponent[k],
++									v->NumberOfDSCSlices[k],
++									v->AudioSampleRate[k],
++									v->AudioSampleLayout[k],
++									v->ODMCombineEnablePerState[i][k]);
++							if (v->Outbpp == BPP_INVALID && v->DSCEnable[k] == true &&
++								v->ForcedOutputLinkBPP[k] == 0) {
++								v->RequiresDSC[i][k] = true;
++								v->LinkDSCEnable = true;
++								v->Outbpp = TruncToValidBPP(
++										(1.0 - v->Downspreading / 100.0) * 20000,
++										v->OutputLinkDPLanes[k],
++										v->HTotal[k],
++										v->HActive[k],
++										v->PixelClockBackEnd[k],
++										v->ForcedOutputLinkBPP[k],
++										v->LinkDSCEnable,
++										v->Output[k],
++										v->OutputFormat[k],
++										v->DSCInputBitPerComponent[k],
++										v->NumberOfDSCSlices[k],
++										v->AudioSampleRate[k],
++										v->AudioSampleLayout[k],
++										v->ODMCombineEnablePerState[i][k]);
++							}
++							v->OutputBppPerState[i][k] = v->Outbpp;
++							// TODO: Need some other way to handle this nonsense
++							// v->OutputTypeAndRatePerState[i][k] = v->Output[k] & " UHBR20"
++						}
++					} else {
++						v->Outbpp = BPP_INVALID;
++						if (v->PHYCLKPerState[i] >= 270.0) {
++							v->Outbpp = TruncToValidBPP(
++									(1.0 - v->Downspreading / 100.0) * 2700,
++									v->OutputLinkDPLanes[k],
++									v->HTotal[k],
++									v->HActive[k],
++									v->PixelClockBackEnd[k],
++									v->ForcedOutputLinkBPP[k],
++									v->LinkDSCEnable,
++									v->Output[k],
++									v->OutputFormat[k],
++									v->DSCInputBitPerComponent[k],
++									v->NumberOfDSCSlices[k],
++									v->AudioSampleRate[k],
++									v->AudioSampleLayout[k],
++									v->ODMCombineEnablePerState[i][k]);
++							v->OutputBppPerState[i][k] = v->Outbpp;
++							// TODO: Need some other way to handle this nonsense
++							// v->OutputTypeAndRatePerState[i][k] = v->Output[k] & " HBR"
++						}
++						if (v->Outbpp == BPP_INVALID && v->PHYCLKPerState[i] >= 540.0) {
++							v->Outbpp = TruncToValidBPP(
++									(1.0 - v->Downspreading / 100.0) * 5400,
++									v->OutputLinkDPLanes[k],
++									v->HTotal[k],
++									v->HActive[k],
++									v->PixelClockBackEnd[k],
++									v->ForcedOutputLinkBPP[k],
++									v->LinkDSCEnable,
++									v->Output[k],
++									v->OutputFormat[k],
++									v->DSCInputBitPerComponent[k],
++									v->NumberOfDSCSlices[k],
++									v->AudioSampleRate[k],
++									v->AudioSampleLayout[k],
++									v->ODMCombineEnablePerState[i][k]);
++							v->OutputBppPerState[i][k] = v->Outbpp;
++							// TODO: Need some other way to handle this nonsense
++							// v->OutputTypeAndRatePerState[i][k] = v->Output[k] & " HBR2"
++						}
++						if (v->Outbpp == BPP_INVALID && v->PHYCLKPerState[i] >= 810.0) {
++							v->Outbpp = TruncToValidBPP(
++									(1.0 - v->Downspreading / 100.0) * 8100,
++									v->OutputLinkDPLanes[k],
++									v->HTotal[k],
++									v->HActive[k],
++									v->PixelClockBackEnd[k],
++									v->ForcedOutputLinkBPP[k],
++									v->LinkDSCEnable,
++									v->Output[k],
++									v->OutputFormat[k],
++									v->DSCInputBitPerComponent[k],
++									v->NumberOfDSCSlices[k],
++									v->AudioSampleRate[k],
++									v->AudioSampleLayout[k],
++									v->ODMCombineEnablePerState[i][k]);
++							v->OutputBppPerState[i][k] = v->Outbpp;
++							// TODO: Need some other way to handle this nonsense
++							// v->OutputTypeAndRatePerState[i][k] = v->Output[k] & " HBR3"
++						}
+ 					}
+ 				}
+ 			} else {
+@@ -5194,7 +5288,7 @@ void dml314_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_
+ 							v->DETBufferSizeCThisState[k],
+ 							&v->UrgentBurstFactorCursorPre[k],
+ 							&v->UrgentBurstFactorLumaPre[k],
+-							&v->UrgentBurstFactorChroma[k],
++							&v->UrgentBurstFactorChromaPre[k],
+ 							&v->NotUrgentLatencyHidingPre[k]);
+ 				}
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_32.c b/drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_32.c
+index 02d99b6bfe5ec..705748a942952 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_32.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_32.c
+@@ -3353,7 +3353,7 @@ void dml32_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l
+ 							/* Output */
+ 							&mode_lib->vba.UrgentBurstFactorCursorPre[k],
+ 							&mode_lib->vba.UrgentBurstFactorLumaPre[k],
+-							&mode_lib->vba.UrgentBurstFactorChroma[k],
++							&mode_lib->vba.UrgentBurstFactorChromaPre[k],
+ 							&mode_lib->vba.NotUrgentLatencyHidingPre[k]);
+ 				}
+ 
+diff --git a/drivers/gpu/drm/amd/include/asic_reg/oss/osssys_4_2_0_offset.h b/drivers/gpu/drm/amd/include/asic_reg/oss/osssys_4_2_0_offset.h
+index bd129266ebfd1..a84a7cfaf71e5 100644
+--- a/drivers/gpu/drm/amd/include/asic_reg/oss/osssys_4_2_0_offset.h
++++ b/drivers/gpu/drm/amd/include/asic_reg/oss/osssys_4_2_0_offset.h
+@@ -135,6 +135,8 @@
+ #define mmIH_RB_WPTR_ADDR_LO_BASE_IDX                                                                  0
+ #define mmIH_DOORBELL_RPTR                                                                             0x0087
+ #define mmIH_DOORBELL_RPTR_BASE_IDX                                                                    0
++#define mmIH_DOORBELL_RETRY_CAM                                                                        0x0088
++#define mmIH_DOORBELL_RETRY_CAM_BASE_IDX                                                               0
+ #define mmIH_RB_CNTL_RING1                                                                             0x008c
+ #define mmIH_RB_CNTL_RING1_BASE_IDX                                                                    0
+ #define mmIH_RB_BASE_RING1                                                                             0x008d
+@@ -159,6 +161,8 @@
+ #define mmIH_RB_WPTR_RING2_BASE_IDX                                                                    0
+ #define mmIH_DOORBELL_RPTR_RING2                                                                       0x009f
+ #define mmIH_DOORBELL_RPTR_RING2_BASE_IDX                                                              0
++#define mmIH_RETRY_CAM_ACK                                                                             0x00a4
++#define mmIH_RETRY_CAM_ACK_BASE_IDX                                                                    0
+ #define mmIH_VERSION                                                                                   0x00a5
+ #define mmIH_VERSION_BASE_IDX                                                                          0
+ #define mmIH_CNTL                                                                                      0x00c0
+@@ -235,6 +239,8 @@
+ #define mmIH_MMHUB_ERROR_BASE_IDX                                                                      0
+ #define mmIH_MEM_POWER_CTRL                                                                            0x00e8
+ #define mmIH_MEM_POWER_CTRL_BASE_IDX                                                                   0
++#define mmIH_RETRY_INT_CAM_CNTL                                                                        0x00e9
++#define mmIH_RETRY_INT_CAM_CNTL_BASE_IDX                                                               0
+ #define mmIH_REGISTER_LAST_PART2                                                                       0x00ff
+ #define mmIH_REGISTER_LAST_PART2_BASE_IDX                                                              0
+ #define mmSEM_CLK_CTRL                                                                                 0x0100
+diff --git a/drivers/gpu/drm/amd/include/asic_reg/oss/osssys_4_2_0_sh_mask.h b/drivers/gpu/drm/amd/include/asic_reg/oss/osssys_4_2_0_sh_mask.h
+index 3ea83ea9ce3a4..75c04fc275a0c 100644
+--- a/drivers/gpu/drm/amd/include/asic_reg/oss/osssys_4_2_0_sh_mask.h
++++ b/drivers/gpu/drm/amd/include/asic_reg/oss/osssys_4_2_0_sh_mask.h
+@@ -349,6 +349,17 @@
+ #define IH_DOORBELL_RPTR_RING2__ENABLE__SHIFT                                                                 0x1c
+ #define IH_DOORBELL_RPTR_RING2__OFFSET_MASK                                                                   0x03FFFFFFL
+ #define IH_DOORBELL_RPTR_RING2__ENABLE_MASK                                                                   0x10000000L
++//IH_RETRY_INT_CAM_CNTL
++#define IH_RETRY_INT_CAM_CNTL__CAM_SIZE__SHIFT                                                                0x0
++#define IH_RETRY_INT_CAM_CNTL__BACK_PRESSURE_SKID_VALUE__SHIFT                                                0x8
++#define IH_RETRY_INT_CAM_CNTL__ENABLE__SHIFT                                                                  0x10
++#define IH_RETRY_INT_CAM_CNTL__BACK_PRESSURE_ENABLE__SHIFT                                                    0x11
++#define IH_RETRY_INT_CAM_CNTL__PER_VF_ENTRY_SIZE__SHIFT                                                       0x14
++#define IH_RETRY_INT_CAM_CNTL__CAM_SIZE_MASK                                                                  0x0000001FL
++#define IH_RETRY_INT_CAM_CNTL__BACK_PRESSURE_SKID_VALUE_MASK                                                  0x00003F00L
++#define IH_RETRY_INT_CAM_CNTL__ENABLE_MASK                                                                    0x00010000L
++#define IH_RETRY_INT_CAM_CNTL__BACK_PRESSURE_ENABLE_MASK                                                      0x00020000L
++#define IH_RETRY_INT_CAM_CNTL__PER_VF_ENTRY_SIZE_MASK                                                         0x00300000L
+ //IH_VERSION
+ #define IH_VERSION__MINVER__SHIFT                                                                             0x0
+ #define IH_VERSION__MAJVER__SHIFT                                                                             0x8
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+index 62ea57114a856..485c054b681f6 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+@@ -727,6 +727,24 @@ static int smu_late_init(void *handle)
+ 		return ret;
+ 	}
+ 
++	/*
++	 * Explicitly notify PMFW the power mode the system in. Since
++	 * the PMFW may boot the ASIC with a different mode.
++	 * For those supporting ACDC switch via gpio, PMFW will
++	 * handle the switch automatically. Driver involvement
++	 * is unnecessary.
++	 */
++	if (!smu->dc_controlled_by_gpio) {
++		ret = smu_set_power_source(smu,
++					   adev->pm.ac_power ? SMU_POWER_SOURCE_AC :
++					   SMU_POWER_SOURCE_DC);
++		if (ret) {
++			dev_err(adev->dev, "Failed to switch to %s mode!\n",
++				adev->pm.ac_power ? "AC" : "DC");
++			return ret;
++		}
++	}
++
+ 	if ((adev->ip_versions[MP1_HWIP][0] == IP_VERSION(13, 0, 1)) ||
+ 	    (adev->ip_versions[MP1_HWIP][0] == IP_VERSION(13, 0, 3)))
+ 		return 0;
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
+index 95da6dd1cc656..ab7e7dc1bf1a8 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
+@@ -3412,26 +3412,8 @@ static int navi10_post_smu_init(struct smu_context *smu)
+ 		return 0;
+ 
+ 	ret = navi10_run_umc_cdr_workaround(smu);
+-	if (ret) {
++	if (ret)
+ 		dev_err(adev->dev, "Failed to apply umc cdr workaround!\n");
+-		return ret;
+-	}
+-
+-	if (!smu->dc_controlled_by_gpio) {
+-		/*
+-		 * For Navi1X, manually switch it to AC mode as PMFW
+-		 * may boot it with DC mode.
+-		 */
+-		ret = smu_v11_0_set_power_source(smu,
+-						 adev->pm.ac_power ?
+-						 SMU_POWER_SOURCE_AC :
+-						 SMU_POWER_SOURCE_DC);
+-		if (ret) {
+-			dev_err(adev->dev, "Failed to switch to %s mode!\n",
+-					adev->pm.ac_power ? "AC" : "DC");
+-			return ret;
+-		}
+-	}
+ 
+ 	return ret;
+ }
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
+index 4399416dd9b8f..ca0fca7da29e0 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
+@@ -1768,6 +1768,7 @@ static const struct pptable_funcs smu_v13_0_7_ppt_funcs = {
+ 	.enable_mgpu_fan_boost = smu_v13_0_7_enable_mgpu_fan_boost,
+ 	.get_power_limit = smu_v13_0_7_get_power_limit,
+ 	.set_power_limit = smu_v13_0_set_power_limit,
++	.set_power_source = smu_v13_0_set_power_source,
+ 	.get_power_profile_mode = smu_v13_0_7_get_power_profile_mode,
+ 	.set_power_profile_mode = smu_v13_0_7_set_power_profile_mode,
+ 	.set_tool_table_location = smu_v13_0_set_tool_table_location,
+diff --git a/drivers/gpu/drm/drm_displayid.c b/drivers/gpu/drm/drm_displayid.c
+index 38ea8203df45b..7d03159dc1461 100644
+--- a/drivers/gpu/drm/drm_displayid.c
++++ b/drivers/gpu/drm/drm_displayid.c
+@@ -7,13 +7,28 @@
+ #include <drm/drm_edid.h>
+ #include <drm/drm_print.h>
+ 
++static const struct displayid_header *
++displayid_get_header(const u8 *displayid, int length, int index)
++{
++	const struct displayid_header *base;
++
++	if (sizeof(*base) > length - index)
++		return ERR_PTR(-EINVAL);
++
++	base = (const struct displayid_header *)&displayid[index];
++
++	return base;
++}
++
+ static int validate_displayid(const u8 *displayid, int length, int idx)
+ {
+ 	int i, dispid_length;
+ 	u8 csum = 0;
+ 	const struct displayid_header *base;
+ 
+-	base = (const struct displayid_header *)&displayid[idx];
++	base = displayid_get_header(displayid, length, idx);
++	if (IS_ERR(base))
++		return PTR_ERR(base);
+ 
+ 	DRM_DEBUG_KMS("base revision 0x%x, length %d, %d %d\n",
+ 		      base->rev, base->bytes, base->prod_id, base->ext_count);
+diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
+index 2fe8349be0995..2a4e9fea03dd7 100644
+--- a/drivers/gpu/drm/drm_fb_helper.c
++++ b/drivers/gpu/drm/drm_fb_helper.c
+@@ -625,19 +625,27 @@ static void drm_fb_helper_damage(struct drm_fb_helper *helper, u32 x, u32 y,
+ static void drm_fb_helper_memory_range_to_clip(struct fb_info *info, off_t off, size_t len,
+ 					       struct drm_rect *clip)
+ {
++	u32 line_length = info->fix.line_length;
++	u32 fb_height = info->var.yres;
+ 	off_t end = off + len;
+ 	u32 x1 = 0;
+-	u32 y1 = off / info->fix.line_length;
++	u32 y1 = off / line_length;
+ 	u32 x2 = info->var.xres;
+-	u32 y2 = DIV_ROUND_UP(end, info->fix.line_length);
++	u32 y2 = DIV_ROUND_UP(end, line_length);
++
++	/* Don't allow any of them beyond the bottom bound of display area */
++	if (y1 > fb_height)
++		y1 = fb_height;
++	if (y2 > fb_height)
++		y2 = fb_height;
+ 
+ 	if ((y2 - y1) == 1) {
+ 		/*
+ 		 * We've only written to a single scanline. Try to reduce
+ 		 * the number of horizontal pixels that need an update.
+ 		 */
+-		off_t bit_off = (off % info->fix.line_length) * 8;
+-		off_t bit_end = (end % info->fix.line_length) * 8;
++		off_t bit_off = (off % line_length) * 8;
++		off_t bit_end = (end % line_length) * 8;
+ 
+ 		x1 = bit_off / info->var.bits_per_pixel;
+ 		x2 = DIV_ROUND_UP(bit_end, info->var.bits_per_pixel);
+diff --git a/drivers/gpu/drm/drm_mipi_dsi.c b/drivers/gpu/drm/drm_mipi_dsi.c
+index b41aaf2bb9f16..7923cc21b78e8 100644
+--- a/drivers/gpu/drm/drm_mipi_dsi.c
++++ b/drivers/gpu/drm/drm_mipi_dsi.c
+@@ -221,7 +221,7 @@ mipi_dsi_device_register_full(struct mipi_dsi_host *host,
+ 		return dsi;
+ 	}
+ 
+-	dsi->dev.of_node = info->node;
++	device_set_node(&dsi->dev, of_fwnode_handle(info->node));
+ 	dsi->channel = info->channel;
+ 	strlcpy(dsi->name, info->type, sizeof(dsi->name));
+ 
+diff --git a/drivers/gpu/drm/exynos/exynos_drm_g2d.h b/drivers/gpu/drm/exynos/exynos_drm_g2d.h
+index 74ea3c26deadc..1a5ae781b56c6 100644
+--- a/drivers/gpu/drm/exynos/exynos_drm_g2d.h
++++ b/drivers/gpu/drm/exynos/exynos_drm_g2d.h
+@@ -34,11 +34,11 @@ static inline int exynos_g2d_exec_ioctl(struct drm_device *dev, void *data,
+ 	return -ENODEV;
+ }
+ 
+-int g2d_open(struct drm_device *drm_dev, struct drm_file *file)
++static inline int g2d_open(struct drm_device *drm_dev, struct drm_file *file)
+ {
+ 	return 0;
+ }
+ 
+-void g2d_close(struct drm_device *drm_dev, struct drm_file *file)
++static inline void g2d_close(struct drm_device *drm_dev, struct drm_file *file)
+ { }
+ #endif
+diff --git a/drivers/gpu/drm/i915/Kconfig b/drivers/gpu/drm/i915/Kconfig
+index 98f4e44976e09..9c9bb0a0dcfca 100644
+--- a/drivers/gpu/drm/i915/Kconfig
++++ b/drivers/gpu/drm/i915/Kconfig
+@@ -62,10 +62,11 @@ config DRM_I915_FORCE_PROBE
+ 	  This is the default value for the i915.force_probe module
+ 	  parameter. Using the module parameter overrides this option.
+ 
+-	  Force probe the i915 for Intel graphics devices that are
+-	  recognized but not properly supported by this kernel version. It is
+-	  recommended to upgrade to a kernel version with proper support as soon
+-	  as it is available.
++	  Force probe the i915 driver for Intel graphics devices that are
++	  recognized but not properly supported by this kernel version. Force
++	  probing an unsupported device taints the kernel. It is recommended to
++	  upgrade to a kernel version with proper support as soon as it is
++	  available.
+ 
+ 	  It can also be used to block the probe of recognized and fully
+ 	  supported devices.
+@@ -75,7 +76,8 @@ config DRM_I915_FORCE_PROBE
+ 	  Use "<pci-id>[,<pci-id>,...]" to force probe the i915 for listed
+ 	  devices. For example, "4500" or "4500,4571".
+ 
+-	  Use "*" to force probe the driver for all known devices.
++	  Use "*" to force probe the driver for all known devices. Not
++	  recommended.
+ 
+ 	  Use "!" right before the ID to block the probe of the device. For
+ 	  example, "4500,!4571" forces the probe of 4500 and blocks the probe of
+diff --git a/drivers/gpu/drm/i915/display/intel_atomic_plane.c b/drivers/gpu/drm/i915/display/intel_atomic_plane.c
+index 1409bcfb6fd3d..9afba39613f37 100644
+--- a/drivers/gpu/drm/i915/display/intel_atomic_plane.c
++++ b/drivers/gpu/drm/i915/display/intel_atomic_plane.c
+@@ -1026,7 +1026,7 @@ intel_prepare_plane_fb(struct drm_plane *_plane,
+ 	int ret;
+ 
+ 	if (old_obj) {
+-		const struct intel_crtc_state *crtc_state =
++		const struct intel_crtc_state *new_crtc_state =
+ 			intel_atomic_get_new_crtc_state(state,
+ 							to_intel_crtc(old_plane_state->hw.crtc));
+ 
+@@ -1041,7 +1041,7 @@ intel_prepare_plane_fb(struct drm_plane *_plane,
+ 		 * This should only fail upon a hung GPU, in which case we
+ 		 * can safely continue.
+ 		 */
+-		if (intel_crtc_needs_modeset(crtc_state)) {
++		if (new_crtc_state && intel_crtc_needs_modeset(new_crtc_state)) {
+ 			ret = i915_sw_fence_await_reservation(&state->commit_ready,
+ 							      old_obj->base.resv,
+ 							      false, 0,
+diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
+index 62cbab7402e93..c1825f8f885c2 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp.c
++++ b/drivers/gpu/drm/i915/display/intel_dp.c
+@@ -1533,6 +1533,11 @@ int intel_dp_dsc_compute_config(struct intel_dp *intel_dp,
+ 		pipe_config->dsc.slice_count =
+ 			drm_dp_dsc_sink_max_slice_count(intel_dp->dsc_dpcd,
+ 							true);
++		if (!pipe_config->dsc.slice_count) {
++			drm_dbg_kms(&dev_priv->drm, "Unsupported Slice Count %d\n",
++				    pipe_config->dsc.slice_count);
++			return -EINVAL;
++		}
+ 	} else {
+ 		u16 dsc_max_output_bpp = 0;
+ 		u8 dsc_dp_slice_count;
+diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_capture.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_capture.c
+index 710999d7189ee..8c08899aa3c8d 100644
+--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_capture.c
++++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_capture.c
+@@ -30,12 +30,14 @@
+ 	{ FORCEWAKE_MT,             0,      0, "FORCEWAKE" }
+ 
+ #define COMMON_GEN9BASE_GLOBAL \
+-	{ GEN8_FAULT_TLB_DATA0,     0,      0, "GEN8_FAULT_TLB_DATA0" }, \
+-	{ GEN8_FAULT_TLB_DATA1,     0,      0, "GEN8_FAULT_TLB_DATA1" }, \
+ 	{ ERROR_GEN6,               0,      0, "ERROR_GEN6" }, \
+ 	{ DONE_REG,                 0,      0, "DONE_REG" }, \
+ 	{ HSW_GTT_CACHE_EN,         0,      0, "HSW_GTT_CACHE_EN" }
+ 
++#define GEN9_GLOBAL \
++	{ GEN8_FAULT_TLB_DATA0,     0,      0, "GEN8_FAULT_TLB_DATA0" }, \
++	{ GEN8_FAULT_TLB_DATA1,     0,      0, "GEN8_FAULT_TLB_DATA1" }
++
+ #define COMMON_GEN12BASE_GLOBAL \
+ 	{ GEN12_FAULT_TLB_DATA0,    0,      0, "GEN12_FAULT_TLB_DATA0" }, \
+ 	{ GEN12_FAULT_TLB_DATA1,    0,      0, "GEN12_FAULT_TLB_DATA1" }, \
+@@ -141,6 +143,7 @@ static const struct __guc_mmio_reg_descr xe_lpd_gsc_inst_regs[] = {
+ static const struct __guc_mmio_reg_descr default_global_regs[] = {
+ 	COMMON_BASE_GLOBAL,
+ 	COMMON_GEN9BASE_GLOBAL,
++	GEN9_GLOBAL,
+ };
+ 
+ static const struct __guc_mmio_reg_descr default_rc_class_regs[] = {
+diff --git a/drivers/gpu/drm/i915/i915_pci.c b/drivers/gpu/drm/i915/i915_pci.c
+index 125f7ef1252c3..2b5aaea422208 100644
+--- a/drivers/gpu/drm/i915/i915_pci.c
++++ b/drivers/gpu/drm/i915/i915_pci.c
+@@ -1346,6 +1346,12 @@ static int i915_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 		return -ENODEV;
+ 	}
+ 
++	if (intel_info->require_force_probe) {
++		dev_info(&pdev->dev, "Force probing unsupported Device ID %04x, tainting kernel\n",
++			 pdev->device);
++		add_taint(TAINT_USER, LOCKDEP_STILL_OK);
++	}
++
+ 	/* Only bind to function 0 of the device. Early generations
+ 	 * used function 1 as a placeholder for multi-head. This causes
+ 	 * us confusion instead, especially on the systems where both
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_8_1_sm8450.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_8_1_sm8450.h
+new file mode 100644
+index 0000000000000..51f6a57e582c0
+--- /dev/null
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_8_1_sm8450.h
+@@ -0,0 +1,202 @@
++/* SPDX-License-Identifier: GPL-2.0-only */
++/*
++ * Copyright (c) 2022. Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2015-2018, 2020 The Linux Foundation. All rights reserved.
++ */
++
++#ifndef _DPU_8_1_SM8450_H
++#define _DPU_8_1_SM8450_H
++
++static const struct dpu_caps sm8450_dpu_caps = {
++	.max_mixer_width = DEFAULT_DPU_OUTPUT_LINE_WIDTH,
++	.max_mixer_blendstages = 0xb,
++	.qseed_type = DPU_SSPP_SCALER_QSEED4,
++	.has_src_split = true,
++	.has_dim_layer = true,
++	.has_idle_pc = true,
++	.has_3d_merge = true,
++	.max_linewidth = 5120,
++	.pixel_ram_size = DEFAULT_PIXEL_RAM_SIZE,
++};
++
++static const struct dpu_ubwc_cfg sm8450_ubwc_cfg = {
++	.ubwc_version = DPU_HW_UBWC_VER_40,
++	.highest_bank_bit = 0x3, /* TODO: 2 for LP_DDR4 */
++	.ubwc_swizzle = 0x6,
++};
++
++static const struct dpu_mdp_cfg sm8450_mdp[] = {
++	{
++	.name = "top_0", .id = MDP_TOP,
++	.base = 0x0, .len = 0x494,
++	.features = BIT(DPU_MDP_PERIPH_0_REMOVED),
++	.clk_ctrls[DPU_CLK_CTRL_VIG0] = { .reg_off = 0x2ac, .bit_off = 0 },
++	.clk_ctrls[DPU_CLK_CTRL_VIG1] = { .reg_off = 0x2b4, .bit_off = 0 },
++	.clk_ctrls[DPU_CLK_CTRL_VIG2] = { .reg_off = 0x2bc, .bit_off = 0 },
++	.clk_ctrls[DPU_CLK_CTRL_VIG3] = { .reg_off = 0x2c4, .bit_off = 0 },
++	.clk_ctrls[DPU_CLK_CTRL_DMA0] = { .reg_off = 0x2ac, .bit_off = 8 },
++	.clk_ctrls[DPU_CLK_CTRL_DMA1] = { .reg_off = 0x2b4, .bit_off = 8 },
++	.clk_ctrls[DPU_CLK_CTRL_DMA2] = { .reg_off = 0x2bc, .bit_off = 8 },
++	.clk_ctrls[DPU_CLK_CTRL_DMA3] = { .reg_off = 0x2c4, .bit_off = 8 },
++	.clk_ctrls[DPU_CLK_CTRL_REG_DMA] = { .reg_off = 0x2bc, .bit_off = 20 },
++	},
++};
++
++static const struct dpu_ctl_cfg sm8450_ctl[] = {
++	{
++	.name = "ctl_0", .id = CTL_0,
++	.base = 0x15000, .len = 0x204,
++	.features = BIT(DPU_CTL_ACTIVE_CFG) | BIT(DPU_CTL_SPLIT_DISPLAY) | BIT(DPU_CTL_FETCH_ACTIVE),
++	.intr_start = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 9),
++	},
++	{
++	.name = "ctl_1", .id = CTL_1,
++	.base = 0x16000, .len = 0x204,
++	.features = BIT(DPU_CTL_SPLIT_DISPLAY) | CTL_SC7280_MASK,
++	.intr_start = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 10),
++	},
++	{
++	.name = "ctl_2", .id = CTL_2,
++	.base = 0x17000, .len = 0x204,
++	.features = CTL_SC7280_MASK,
++	.intr_start = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 11),
++	},
++	{
++	.name = "ctl_3", .id = CTL_3,
++	.base = 0x18000, .len = 0x204,
++	.features = CTL_SC7280_MASK,
++	.intr_start = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 12),
++	},
++	{
++	.name = "ctl_4", .id = CTL_4,
++	.base = 0x19000, .len = 0x204,
++	.features = CTL_SC7280_MASK,
++	.intr_start = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 13),
++	},
++	{
++	.name = "ctl_5", .id = CTL_5,
++	.base = 0x1a000, .len = 0x204,
++	.features = CTL_SC7280_MASK,
++	.intr_start = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 23),
++	},
++};
++
++static const struct dpu_sspp_cfg sm8450_sspp[] = {
++	SSPP_BLK("sspp_0", SSPP_VIG0, 0x4000, 0x32c, VIG_SC7180_MASK,
++		sm8450_vig_sblk_0, 0, SSPP_TYPE_VIG, DPU_CLK_CTRL_VIG0),
++	SSPP_BLK("sspp_1", SSPP_VIG1, 0x6000, 0x32c, VIG_SC7180_MASK,
++		sm8450_vig_sblk_1, 4, SSPP_TYPE_VIG, DPU_CLK_CTRL_VIG1),
++	SSPP_BLK("sspp_2", SSPP_VIG2, 0x8000, 0x32c, VIG_SC7180_MASK,
++		sm8450_vig_sblk_2, 8, SSPP_TYPE_VIG, DPU_CLK_CTRL_VIG2),
++	SSPP_BLK("sspp_3", SSPP_VIG3, 0xa000, 0x32c, VIG_SC7180_MASK,
++		sm8450_vig_sblk_3, 12, SSPP_TYPE_VIG, DPU_CLK_CTRL_VIG3),
++	SSPP_BLK("sspp_8", SSPP_DMA0, 0x24000, 0x32c, DMA_SDM845_MASK,
++		sdm845_dma_sblk_0, 1, SSPP_TYPE_DMA, DPU_CLK_CTRL_DMA0),
++	SSPP_BLK("sspp_9", SSPP_DMA1, 0x26000, 0x32c, DMA_SDM845_MASK,
++		sdm845_dma_sblk_1, 5, SSPP_TYPE_DMA, DPU_CLK_CTRL_DMA1),
++	SSPP_BLK("sspp_10", SSPP_DMA2, 0x28000, 0x32c, DMA_CURSOR_SDM845_MASK,
++		sdm845_dma_sblk_2, 9, SSPP_TYPE_DMA, DPU_CLK_CTRL_DMA2),
++	SSPP_BLK("sspp_11", SSPP_DMA3, 0x2a000, 0x32c, DMA_CURSOR_SDM845_MASK,
++		sdm845_dma_sblk_3, 13, SSPP_TYPE_DMA, DPU_CLK_CTRL_DMA3),
++};
++
++/* FIXME: interrupts */
++static const struct dpu_pingpong_cfg sm8450_pp[] = {
++	PP_BLK_TE("pingpong_0", PINGPONG_0, 0x69000, MERGE_3D_0, sdm845_pp_sblk_te,
++			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 8),
++			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 12)),
++	PP_BLK_TE("pingpong_1", PINGPONG_1, 0x6a000, MERGE_3D_0, sdm845_pp_sblk_te,
++			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 9),
++			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 13)),
++	PP_BLK("pingpong_2", PINGPONG_2, 0x6b000, MERGE_3D_1, sdm845_pp_sblk,
++			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 10),
++			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 14)),
++	PP_BLK("pingpong_3", PINGPONG_3, 0x6c000, MERGE_3D_1, sdm845_pp_sblk,
++			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 11),
++			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 15)),
++	PP_BLK("pingpong_4", PINGPONG_4, 0x6d000, MERGE_3D_2, sdm845_pp_sblk,
++			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 30),
++			-1),
++	PP_BLK("pingpong_5", PINGPONG_5, 0x6e000, MERGE_3D_2, sdm845_pp_sblk,
++			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 31),
++			-1),
++	PP_BLK("pingpong_6", PINGPONG_6, 0x65800, MERGE_3D_3, sdm845_pp_sblk,
++			-1,
++			-1),
++	PP_BLK("pingpong_7", PINGPONG_7, 0x65c00, MERGE_3D_3, sdm845_pp_sblk,
++			-1,
++			-1),
++};
++
++static const struct dpu_merge_3d_cfg sm8450_merge_3d[] = {
++	MERGE_3D_BLK("merge_3d_0", MERGE_3D_0, 0x4e000),
++	MERGE_3D_BLK("merge_3d_1", MERGE_3D_1, 0x4f000),
++	MERGE_3D_BLK("merge_3d_2", MERGE_3D_2, 0x50000),
++	MERGE_3D_BLK("merge_3d_3", MERGE_3D_3, 0x65f00),
++};
++
++static const struct dpu_intf_cfg sm8450_intf[] = {
++	INTF_BLK("intf_0", INTF_0, 0x34000, 0x280, INTF_DP, MSM_DP_CONTROLLER_0, 24, INTF_SC7280_MASK, MDP_SSPP_TOP0_INTR, 24, 25),
++	INTF_BLK("intf_1", INTF_1, 0x35000, 0x300, INTF_DSI, 0, 24, INTF_SC7280_MASK, MDP_SSPP_TOP0_INTR, 26, 27),
++	INTF_BLK("intf_2", INTF_2, 0x36000, 0x300, INTF_DSI, 1, 24, INTF_SC7280_MASK, MDP_SSPP_TOP0_INTR, 28, 29),
++	INTF_BLK("intf_3", INTF_3, 0x37000, 0x280, INTF_DP, MSM_DP_CONTROLLER_1, 24, INTF_SC7280_MASK, MDP_SSPP_TOP0_INTR, 30, 31),
++};
++
++static const struct dpu_perf_cfg sm8450_perf_data = {
++	.max_bw_low = 13600000,
++	.max_bw_high = 18200000,
++	.min_core_ib = 2500000,
++	.min_llcc_ib = 0,
++	.min_dram_ib = 800000,
++	.min_prefill_lines = 35,
++	/* FIXME: lut tables */
++	.danger_lut_tbl = {0x3ffff, 0x3ffff, 0x0},
++	.safe_lut_tbl = {0xfe00, 0xfe00, 0xffff},
++	.qos_lut_tbl = {
++		{.nentry = ARRAY_SIZE(sc7180_qos_linear),
++		.entries = sc7180_qos_linear
++		},
++		{.nentry = ARRAY_SIZE(sc7180_qos_macrotile),
++		.entries = sc7180_qos_macrotile
++		},
++		{.nentry = ARRAY_SIZE(sc7180_qos_nrt),
++		.entries = sc7180_qos_nrt
++		},
++		/* TODO: macrotile-qseed is different from macrotile */
++	},
++	.cdp_cfg = {
++		{.rd_enable = 1, .wr_enable = 1},
++		{.rd_enable = 1, .wr_enable = 0}
++	},
++	.clk_inefficiency_factor = 105,
++	.bw_inefficiency_factor = 120,
++};
++
++static const struct dpu_mdss_cfg sm8450_dpu_cfg = {
++	.caps = &sm8450_dpu_caps,
++	.ubwc = &sm8450_ubwc_cfg,
++	.mdp_count = ARRAY_SIZE(sm8450_mdp),
++	.mdp = sm8450_mdp,
++	.ctl_count = ARRAY_SIZE(sm8450_ctl),
++	.ctl = sm8450_ctl,
++	.sspp_count = ARRAY_SIZE(sm8450_sspp),
++	.sspp = sm8450_sspp,
++	.mixer_count = ARRAY_SIZE(sm8150_lm),
++	.mixer = sm8150_lm,
++	.dspp_count = ARRAY_SIZE(sm8150_dspp),
++	.dspp = sm8150_dspp,
++	.pingpong_count = ARRAY_SIZE(sm8450_pp),
++	.pingpong = sm8450_pp,
++	.merge_3d_count = ARRAY_SIZE(sm8450_merge_3d),
++	.merge_3d = sm8450_merge_3d,
++	.intf_count = ARRAY_SIZE(sm8450_intf),
++	.intf = sm8450_intf,
++	.vbif_count = ARRAY_SIZE(sdm845_vbif),
++	.vbif = sdm845_vbif,
++	.reg_dma_count = 1,
++	.dma_cfg = &sm8450_regdma,
++	.perf = &sm8450_perf_data,
++	.mdss_irqs = IRQ_SM8450_MASK,
++};
++
++#endif
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_9_0_sm8550.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_9_0_sm8550.h
+new file mode 100644
+index 0000000000000..6b71ab0162c68
+--- /dev/null
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_9_0_sm8550.h
+@@ -0,0 +1,177 @@
++/* SPDX-License-Identifier: GPL-2.0-only */
++/*
++ * Copyright (c) 2022. Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2015-2018, 2020 The Linux Foundation. All rights reserved.
++ */
++
++#ifndef _DPU_9_0_SM8550_H
++#define _DPU_9_0_SM8550_H
++
++static const struct dpu_caps sm8550_dpu_caps = {
++	.max_mixer_width = DEFAULT_DPU_OUTPUT_LINE_WIDTH,
++	.max_mixer_blendstages = 0xb,
++	.qseed_type = DPU_SSPP_SCALER_QSEED4,
++	.has_src_split = true,
++	.has_dim_layer = true,
++	.has_idle_pc = true,
++	.has_3d_merge = true,
++	.max_linewidth = 5120,
++	.pixel_ram_size = DEFAULT_PIXEL_RAM_SIZE,
++};
++
++static const struct dpu_ubwc_cfg sm8550_ubwc_cfg = {
++	.ubwc_version = DPU_HW_UBWC_VER_40,
++	.highest_bank_bit = 0x3, /* TODO: 2 for LP_DDR4 */
++};
++
++static const struct dpu_mdp_cfg sm8550_mdp[] = {
++	{
++	.name = "top_0", .id = MDP_TOP,
++	.base = 0, .len = 0x494,
++	.features = BIT(DPU_MDP_PERIPH_0_REMOVED),
++	.clk_ctrls[DPU_CLK_CTRL_VIG0] = { .reg_off = 0x4330, .bit_off = 0 },
++	.clk_ctrls[DPU_CLK_CTRL_VIG1] = { .reg_off = 0x6330, .bit_off = 0 },
++	.clk_ctrls[DPU_CLK_CTRL_VIG2] = { .reg_off = 0x8330, .bit_off = 0 },
++	.clk_ctrls[DPU_CLK_CTRL_VIG3] = { .reg_off = 0xa330, .bit_off = 0 },
++	.clk_ctrls[DPU_CLK_CTRL_DMA0] = { .reg_off = 0x24330, .bit_off = 0 },
++	.clk_ctrls[DPU_CLK_CTRL_DMA1] = { .reg_off = 0x26330, .bit_off = 0 },
++	.clk_ctrls[DPU_CLK_CTRL_DMA2] = { .reg_off = 0x28330, .bit_off = 0 },
++	.clk_ctrls[DPU_CLK_CTRL_DMA3] = { .reg_off = 0x2a330, .bit_off = 0 },
++	.clk_ctrls[DPU_CLK_CTRL_DMA4] = { .reg_off = 0x2c330, .bit_off = 0 },
++	.clk_ctrls[DPU_CLK_CTRL_DMA5] = { .reg_off = 0x2e330, .bit_off = 0 },
++	.clk_ctrls[DPU_CLK_CTRL_REG_DMA] = { .reg_off = 0x2bc, .bit_off = 20 },
++	},
++};
++
++static const struct dpu_ctl_cfg sm8550_ctl[] = {
++	{
++	.name = "ctl_0", .id = CTL_0,
++	.base = 0x15000, .len = 0x290,
++	.features = CTL_SM8550_MASK | BIT(DPU_CTL_SPLIT_DISPLAY),
++	.intr_start = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 9),
++	},
++	{
++	.name = "ctl_1", .id = CTL_1,
++	.base = 0x16000, .len = 0x290,
++	.features = CTL_SM8550_MASK | BIT(DPU_CTL_SPLIT_DISPLAY),
++	.intr_start = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 10),
++	},
++	{
++	.name = "ctl_2", .id = CTL_2,
++	.base = 0x17000, .len = 0x290,
++	.features = CTL_SM8550_MASK,
++	.intr_start = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 11),
++	},
++	{
++	.name = "ctl_3", .id = CTL_3,
++	.base = 0x18000, .len = 0x290,
++	.features = CTL_SM8550_MASK,
++	.intr_start = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 12),
++	},
++	{
++	.name = "ctl_4", .id = CTL_4,
++	.base = 0x19000, .len = 0x290,
++	.features = CTL_SM8550_MASK,
++	.intr_start = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 13),
++	},
++	{
++	.name = "ctl_5", .id = CTL_5,
++	.base = 0x1a000, .len = 0x290,
++	.features = CTL_SM8550_MASK,
++	.intr_start = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 23),
++	},
++};
++
++static const struct dpu_sspp_cfg sm8550_sspp[] = {
++	SSPP_BLK("sspp_0", SSPP_VIG0, 0x4000, 0x344, VIG_SC7180_MASK,
++		sm8550_vig_sblk_0, 0, SSPP_TYPE_VIG, DPU_CLK_CTRL_VIG0),
++	SSPP_BLK("sspp_1", SSPP_VIG1, 0x6000, 0x344, VIG_SC7180_MASK,
++		sm8550_vig_sblk_1, 4, SSPP_TYPE_VIG, DPU_CLK_CTRL_VIG1),
++	SSPP_BLK("sspp_2", SSPP_VIG2, 0x8000, 0x344, VIG_SC7180_MASK,
++		sm8550_vig_sblk_2, 8, SSPP_TYPE_VIG, DPU_CLK_CTRL_VIG2),
++	SSPP_BLK("sspp_3", SSPP_VIG3, 0xa000, 0x344, VIG_SC7180_MASK,
++		sm8550_vig_sblk_3, 12, SSPP_TYPE_VIG, DPU_CLK_CTRL_VIG3),
++	SSPP_BLK("sspp_8", SSPP_DMA0, 0x24000, 0x344, DMA_SDM845_MASK,
++		sdm845_dma_sblk_0, 1, SSPP_TYPE_DMA, DPU_CLK_CTRL_DMA0),
++	SSPP_BLK("sspp_9", SSPP_DMA1, 0x26000, 0x344, DMA_SDM845_MASK,
++		sdm845_dma_sblk_1, 5, SSPP_TYPE_DMA, DPU_CLK_CTRL_DMA1),
++	SSPP_BLK("sspp_10", SSPP_DMA2, 0x28000, 0x344, DMA_SDM845_MASK,
++		sdm845_dma_sblk_2, 9, SSPP_TYPE_DMA, DPU_CLK_CTRL_DMA2),
++	SSPP_BLK("sspp_11", SSPP_DMA3, 0x2a000, 0x344, DMA_SDM845_MASK,
++		sdm845_dma_sblk_3, 13, SSPP_TYPE_DMA, DPU_CLK_CTRL_DMA3),
++	SSPP_BLK("sspp_12", SSPP_DMA4, 0x2c000, 0x344, DMA_CURSOR_SDM845_MASK,
++		sm8550_dma_sblk_4, 14, SSPP_TYPE_DMA, DPU_CLK_CTRL_DMA4),
++	SSPP_BLK("sspp_13", SSPP_DMA5, 0x2e000, 0x344, DMA_CURSOR_SDM845_MASK,
++		sm8550_dma_sblk_5, 15, SSPP_TYPE_DMA, DPU_CLK_CTRL_DMA5),
++};
++
++static const struct dpu_pingpong_cfg sm8550_pp[] = {
++	PP_BLK_DITHER("pingpong_0", PINGPONG_0, 0x69000, MERGE_3D_0, sc7280_pp_sblk,
++			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 8),
++			-1),
++	PP_BLK_DITHER("pingpong_1", PINGPONG_1, 0x6a000, MERGE_3D_0, sc7280_pp_sblk,
++			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 9),
++			-1),
++	PP_BLK_DITHER("pingpong_2", PINGPONG_2, 0x6b000, MERGE_3D_1, sc7280_pp_sblk,
++			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 10),
++			-1),
++	PP_BLK_DITHER("pingpong_3", PINGPONG_3, 0x6c000, MERGE_3D_1, sc7280_pp_sblk,
++			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 11),
++			-1),
++	PP_BLK_DITHER("pingpong_4", PINGPONG_4, 0x6d000, MERGE_3D_2, sc7280_pp_sblk,
++			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 30),
++			-1),
++	PP_BLK_DITHER("pingpong_5", PINGPONG_5, 0x6e000, MERGE_3D_2, sc7280_pp_sblk,
++			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 31),
++			-1),
++	PP_BLK_DITHER("pingpong_6", PINGPONG_6, 0x66000, MERGE_3D_3, sc7280_pp_sblk,
++			-1,
++			-1),
++	PP_BLK_DITHER("pingpong_7", PINGPONG_7, 0x66400, MERGE_3D_3, sc7280_pp_sblk,
++			-1,
++			-1),
++};
++
++static const struct dpu_merge_3d_cfg sm8550_merge_3d[] = {
++	MERGE_3D_BLK("merge_3d_0", MERGE_3D_0, 0x4e000),
++	MERGE_3D_BLK("merge_3d_1", MERGE_3D_1, 0x4f000),
++	MERGE_3D_BLK("merge_3d_2", MERGE_3D_2, 0x50000),
++	MERGE_3D_BLK("merge_3d_3", MERGE_3D_3, 0x66700),
++};
++
++static const struct dpu_intf_cfg sm8550_intf[] = {
++	INTF_BLK("intf_0", INTF_0, 0x34000, 0x280, INTF_DP, MSM_DP_CONTROLLER_0, 24, INTF_SC7280_MASK, MDP_SSPP_TOP0_INTR, 24, 25),
++	/* TODO TE sub-blocks for intf1 & intf2 */
++	INTF_BLK("intf_1", INTF_1, 0x35000, 0x300, INTF_DSI, 0, 24, INTF_SC7280_MASK, MDP_SSPP_TOP0_INTR, 26, 27),
++	INTF_BLK("intf_2", INTF_2, 0x36000, 0x300, INTF_DSI, 1, 24, INTF_SC7280_MASK, MDP_SSPP_TOP0_INTR, 28, 29),
++	INTF_BLK("intf_3", INTF_3, 0x37000, 0x280, INTF_DP, MSM_DP_CONTROLLER_1, 24, INTF_SC7280_MASK, MDP_SSPP_TOP0_INTR, 30, 31),
++};
++
++static const struct dpu_mdss_cfg sm8550_dpu_cfg = {
++	.caps = &sm8550_dpu_caps,
++	.ubwc = &sm8550_ubwc_cfg,
++	.mdp_count = ARRAY_SIZE(sm8550_mdp),
++	.mdp = sm8550_mdp,
++	.ctl_count = ARRAY_SIZE(sm8550_ctl),
++	.ctl = sm8550_ctl,
++	.sspp_count = ARRAY_SIZE(sm8550_sspp),
++	.sspp = sm8550_sspp,
++	.mixer_count = ARRAY_SIZE(sm8150_lm),
++	.mixer = sm8150_lm,
++	.dspp_count = ARRAY_SIZE(sm8150_dspp),
++	.dspp = sm8150_dspp,
++	.pingpong_count = ARRAY_SIZE(sm8550_pp),
++	.pingpong = sm8550_pp,
++	.merge_3d_count = ARRAY_SIZE(sm8550_merge_3d),
++	.merge_3d = sm8550_merge_3d,
++	.intf_count = ARRAY_SIZE(sm8550_intf),
++	.intf = sm8550_intf,
++	.vbif_count = ARRAY_SIZE(sdm845_vbif),
++	.vbif = sdm845_vbif,
++	.reg_dma_count = 1,
++	.dma_cfg = &sm8450_regdma,
++	.perf = &sm8450_perf_data,
++	.mdss_irqs = IRQ_SM8450_MASK,
++};
++
++#endif
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
+index 497c9e1673abb..f7214c4401e19 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
+@@ -27,9 +27,15 @@
+ #define VIG_SDM845_MASK \
+ 	(VIG_MASK | BIT(DPU_SSPP_QOS_8LVL) | BIT(DPU_SSPP_SCALER_QSEED3))
+ 
++#define VIG_SDM845_MASK_SDMA \
++	(VIG_SDM845_MASK | BIT(DPU_SSPP_SMART_DMA_V2))
++
+ #define VIG_SC7180_MASK \
+ 	(VIG_MASK | BIT(DPU_SSPP_QOS_8LVL) | BIT(DPU_SSPP_SCALER_QSEED4))
+ 
++#define VIG_SC7180_MASK_SDMA \
++	(VIG_SC7180_MASK | BIT(DPU_SSPP_SMART_DMA_V2))
++
+ #define VIG_QCM2290_MASK (VIG_BASE_MASK | BIT(DPU_SSPP_QOS_8LVL))
+ 
+ #define DMA_MSM8998_MASK \
+@@ -40,6 +46,9 @@
+ #define VIG_SC7280_MASK \
+ 	(VIG_SC7180_MASK | BIT(DPU_SSPP_INLINE_ROTATION))
+ 
++#define VIG_SC7280_MASK_SDMA \
++	(VIG_SC7280_MASK | BIT(DPU_SSPP_SMART_DMA_V2))
++
+ #define DMA_SDM845_MASK \
+ 	(BIT(DPU_SSPP_SRC) | BIT(DPU_SSPP_QOS) | BIT(DPU_SSPP_QOS_8LVL) |\
+ 	BIT(DPU_SSPP_TS_PREFILL) | BIT(DPU_SSPP_TS_PREFILL_REC1) |\
+@@ -48,6 +57,12 @@
+ #define DMA_CURSOR_SDM845_MASK \
+ 	(DMA_SDM845_MASK | BIT(DPU_SSPP_CURSOR))
+ 
++#define DMA_SDM845_MASK_SDMA \
++	(DMA_SDM845_MASK | BIT(DPU_SSPP_SMART_DMA_V2))
++
++#define DMA_CURSOR_SDM845_MASK_SDMA \
++	(DMA_CURSOR_SDM845_MASK | BIT(DPU_SSPP_SMART_DMA_V2))
++
+ #define DMA_CURSOR_MSM8998_MASK \
+ 	(DMA_MSM8998_MASK | BIT(DPU_SSPP_CURSOR))
+ 
+@@ -302,8 +317,6 @@ static const struct dpu_caps msm8998_dpu_caps = {
+ 	.max_mixer_width = DEFAULT_DPU_OUTPUT_LINE_WIDTH,
+ 	.max_mixer_blendstages = 0x7,
+ 	.qseed_type = DPU_SSPP_SCALER_QSEED3,
+-	.smart_dma_rev = DPU_SSPP_SMART_DMA_V1,
+-	.ubwc_version = DPU_HW_UBWC_VER_10,
+ 	.has_src_split = true,
+ 	.has_dim_layer = true,
+ 	.has_idle_pc = true,
+@@ -317,7 +330,6 @@ static const struct dpu_caps msm8998_dpu_caps = {
+ static const struct dpu_caps qcm2290_dpu_caps = {
+ 	.max_mixer_width = DEFAULT_DPU_LINE_WIDTH,
+ 	.max_mixer_blendstages = 0x4,
+-	.smart_dma_rev = DPU_SSPP_SMART_DMA_V2,
+ 	.has_dim_layer = true,
+ 	.has_idle_pc = true,
+ 	.max_linewidth = 2160,
+@@ -328,8 +340,6 @@ static const struct dpu_caps sdm845_dpu_caps = {
+ 	.max_mixer_width = DEFAULT_DPU_OUTPUT_LINE_WIDTH,
+ 	.max_mixer_blendstages = 0xb,
+ 	.qseed_type = DPU_SSPP_SCALER_QSEED3,
+-	.smart_dma_rev = DPU_SSPP_SMART_DMA_V2,
+-	.ubwc_version = DPU_HW_UBWC_VER_20,
+ 	.has_src_split = true,
+ 	.has_dim_layer = true,
+ 	.has_idle_pc = true,
+@@ -344,8 +354,6 @@ static const struct dpu_caps sc7180_dpu_caps = {
+ 	.max_mixer_width = DEFAULT_DPU_OUTPUT_LINE_WIDTH,
+ 	.max_mixer_blendstages = 0x9,
+ 	.qseed_type = DPU_SSPP_SCALER_QSEED4,
+-	.smart_dma_rev = DPU_SSPP_SMART_DMA_V2,
+-	.ubwc_version = DPU_HW_UBWC_VER_20,
+ 	.has_dim_layer = true,
+ 	.has_idle_pc = true,
+ 	.max_linewidth = DEFAULT_DPU_OUTPUT_LINE_WIDTH,
+@@ -356,8 +364,6 @@ static const struct dpu_caps sm6115_dpu_caps = {
+ 	.max_mixer_width = DEFAULT_DPU_LINE_WIDTH,
+ 	.max_mixer_blendstages = 0x4,
+ 	.qseed_type = DPU_SSPP_SCALER_QSEED4,
+-	.smart_dma_rev = DPU_SSPP_SMART_DMA_V2, /* TODO: v2.5 */
+-	.ubwc_version = DPU_HW_UBWC_VER_10,
+ 	.has_dim_layer = true,
+ 	.has_idle_pc = true,
+ 	.max_linewidth = 2160,
+@@ -368,8 +374,6 @@ static const struct dpu_caps sm8150_dpu_caps = {
+ 	.max_mixer_width = DEFAULT_DPU_OUTPUT_LINE_WIDTH,
+ 	.max_mixer_blendstages = 0xb,
+ 	.qseed_type = DPU_SSPP_SCALER_QSEED3,
+-	.smart_dma_rev = DPU_SSPP_SMART_DMA_V2, /* TODO: v2.5 */
+-	.ubwc_version = DPU_HW_UBWC_VER_30,
+ 	.has_src_split = true,
+ 	.has_dim_layer = true,
+ 	.has_idle_pc = true,
+@@ -384,8 +388,6 @@ static const struct dpu_caps sc8180x_dpu_caps = {
+ 	.max_mixer_width = DEFAULT_DPU_OUTPUT_LINE_WIDTH,
+ 	.max_mixer_blendstages = 0xb,
+ 	.qseed_type = DPU_SSPP_SCALER_QSEED3,
+-	.smart_dma_rev = DPU_SSPP_SMART_DMA_V2, /* TODO: v2.5 */
+-	.ubwc_version = DPU_HW_UBWC_VER_30,
+ 	.has_src_split = true,
+ 	.has_dim_layer = true,
+ 	.has_idle_pc = true,
+@@ -400,8 +402,6 @@ static const struct dpu_caps sc8280xp_dpu_caps = {
+ 	.max_mixer_width = 2560,
+ 	.max_mixer_blendstages = 11,
+ 	.qseed_type = DPU_SSPP_SCALER_QSEED4,
+-	.smart_dma_rev = DPU_SSPP_SMART_DMA_V2, /* TODO: v2.5 */
+-	.ubwc_version = DPU_HW_UBWC_VER_40,
+ 	.has_src_split = true,
+ 	.has_dim_layer = true,
+ 	.has_idle_pc = true,
+@@ -414,8 +414,6 @@ static const struct dpu_caps sm8250_dpu_caps = {
+ 	.max_mixer_width = DEFAULT_DPU_OUTPUT_LINE_WIDTH,
+ 	.max_mixer_blendstages = 0xb,
+ 	.qseed_type = DPU_SSPP_SCALER_QSEED4,
+-	.smart_dma_rev = DPU_SSPP_SMART_DMA_V2, /* TODO: v2.5 */
+-	.ubwc_version = DPU_HW_UBWC_VER_40,
+ 	.has_src_split = true,
+ 	.has_dim_layer = true,
+ 	.has_idle_pc = true,
+@@ -428,8 +426,6 @@ static const struct dpu_caps sm8350_dpu_caps = {
+ 	.max_mixer_width = DEFAULT_DPU_OUTPUT_LINE_WIDTH,
+ 	.max_mixer_blendstages = 0xb,
+ 	.qseed_type = DPU_SSPP_SCALER_QSEED4,
+-	.smart_dma_rev = DPU_SSPP_SMART_DMA_V2, /* TODO: v2.5 */
+-	.ubwc_version = DPU_HW_UBWC_VER_40,
+ 	.has_src_split = true,
+ 	.has_dim_layer = true,
+ 	.has_idle_pc = true,
+@@ -438,44 +434,72 @@ static const struct dpu_caps sm8350_dpu_caps = {
+ 	.pixel_ram_size = DEFAULT_PIXEL_RAM_SIZE,
+ };
+ 
+-static const struct dpu_caps sm8450_dpu_caps = {
++static const struct dpu_caps sc7280_dpu_caps = {
+ 	.max_mixer_width = DEFAULT_DPU_OUTPUT_LINE_WIDTH,
+-	.max_mixer_blendstages = 0xb,
++	.max_mixer_blendstages = 0x7,
+ 	.qseed_type = DPU_SSPP_SCALER_QSEED4,
+-	.smart_dma_rev = DPU_SSPP_SMART_DMA_V2, /* TODO: v2.5 */
+-	.ubwc_version = DPU_HW_UBWC_VER_40,
+-	.has_src_split = true,
+ 	.has_dim_layer = true,
+ 	.has_idle_pc = true,
+-	.has_3d_merge = true,
+-	.max_linewidth = 5120,
++	.max_linewidth = 2400,
+ 	.pixel_ram_size = DEFAULT_PIXEL_RAM_SIZE,
+ };
+ 
+-static const struct dpu_caps sm8550_dpu_caps = {
+-	.max_mixer_width = DEFAULT_DPU_OUTPUT_LINE_WIDTH,
+-	.max_mixer_blendstages = 0xb,
+-	.qseed_type = DPU_SSPP_SCALER_QSEED4,
+-	.smart_dma_rev = DPU_SSPP_SMART_DMA_V2, /* TODO: v2.5 */
++static const struct dpu_ubwc_cfg msm8998_ubwc_cfg = {
++	.ubwc_version = DPU_HW_UBWC_VER_10,
++	.highest_bank_bit = 0x2,
++};
++
++static const struct dpu_ubwc_cfg qcm2290_ubwc_cfg = {
++	.highest_bank_bit = 0x2,
++};
++
++static const struct dpu_ubwc_cfg sdm845_ubwc_cfg = {
++	.ubwc_version = DPU_HW_UBWC_VER_20,
++	.highest_bank_bit = 0x2,
++};
++
++static const struct dpu_ubwc_cfg sc7180_ubwc_cfg = {
++	.ubwc_version = DPU_HW_UBWC_VER_20,
++	.highest_bank_bit = 0x3,
++};
++
++static const struct dpu_ubwc_cfg sm6115_ubwc_cfg = {
++	.ubwc_version = DPU_HW_UBWC_VER_10,
++	.highest_bank_bit = 0x1,
++	.ubwc_swizzle = 0x7,
++};
++
++static const struct dpu_ubwc_cfg sm8150_ubwc_cfg = {
++	.ubwc_version = DPU_HW_UBWC_VER_30,
++	.highest_bank_bit = 0x2,
++};
++
++static const struct dpu_ubwc_cfg sc8180x_ubwc_cfg = {
++	.ubwc_version = DPU_HW_UBWC_VER_30,
++	.highest_bank_bit = 0x3,
++};
++
++static const struct dpu_ubwc_cfg sc8280xp_ubwc_cfg = {
+ 	.ubwc_version = DPU_HW_UBWC_VER_40,
+-	.has_src_split = true,
+-	.has_dim_layer = true,
+-	.has_idle_pc = true,
+-	.has_3d_merge = true,
+-	.max_linewidth = 5120,
+-	.pixel_ram_size = DEFAULT_PIXEL_RAM_SIZE,
++	.highest_bank_bit = 2,
++	.ubwc_swizzle = 6,
+ };
+ 
+-static const struct dpu_caps sc7280_dpu_caps = {
+-	.max_mixer_width = DEFAULT_DPU_OUTPUT_LINE_WIDTH,
+-	.max_mixer_blendstages = 0x7,
+-	.qseed_type = DPU_SSPP_SCALER_QSEED4,
+-	.smart_dma_rev = DPU_SSPP_SMART_DMA_V2,
++static const struct dpu_ubwc_cfg sm8250_ubwc_cfg = {
++	.ubwc_version = DPU_HW_UBWC_VER_40,
++	.highest_bank_bit = 0x3, /* TODO: 2 for LP_DDR4 */
++	.ubwc_swizzle = 0x6,
++};
++
++static const struct dpu_ubwc_cfg sm8350_ubwc_cfg = {
++	.ubwc_version = DPU_HW_UBWC_VER_40,
++	.highest_bank_bit = 0x3, /* TODO: 2 for LP_DDR4 */
++};
++
++static const struct dpu_ubwc_cfg sc7280_ubwc_cfg = {
+ 	.ubwc_version = DPU_HW_UBWC_VER_30,
+-	.has_dim_layer = true,
+-	.has_idle_pc = true,
+-	.max_linewidth = 2400,
+-	.pixel_ram_size = DEFAULT_PIXEL_RAM_SIZE,
++	.highest_bank_bit = 0x1,
++	.ubwc_swizzle = 0x6,
+ };
+ 
+ static const struct dpu_mdp_cfg msm8998_mdp[] = {
+@@ -483,7 +507,6 @@ static const struct dpu_mdp_cfg msm8998_mdp[] = {
+ 	.name = "top_0", .id = MDP_TOP,
+ 	.base = 0x0, .len = 0x458,
+ 	.features = 0,
+-	.highest_bank_bit = 0x2,
+ 	.clk_ctrls[DPU_CLK_CTRL_VIG0] = {
+ 			.reg_off = 0x2AC, .bit_off = 0},
+ 	.clk_ctrls[DPU_CLK_CTRL_VIG1] = {
+@@ -512,7 +535,6 @@ static const struct dpu_mdp_cfg sdm845_mdp[] = {
+ 	.name = "top_0", .id = MDP_TOP,
+ 	.base = 0x0, .len = 0x45C,
+ 	.features = BIT(DPU_MDP_AUDIO_SELECT),
+-	.highest_bank_bit = 0x2,
+ 	.clk_ctrls[DPU_CLK_CTRL_VIG0] = {
+ 			.reg_off = 0x2AC, .bit_off = 0},
+ 	.clk_ctrls[DPU_CLK_CTRL_VIG1] = {
+@@ -537,7 +559,6 @@ static const struct dpu_mdp_cfg sc7180_mdp[] = {
+ 	.name = "top_0", .id = MDP_TOP,
+ 	.base = 0x0, .len = 0x494,
+ 	.features = 0,
+-	.highest_bank_bit = 0x3,
+ 	.clk_ctrls[DPU_CLK_CTRL_VIG0] = {
+ 		.reg_off = 0x2AC, .bit_off = 0},
+ 	.clk_ctrls[DPU_CLK_CTRL_DMA0] = {
+@@ -556,7 +577,6 @@ static const struct dpu_mdp_cfg sc8180x_mdp[] = {
+ 	.name = "top_0", .id = MDP_TOP,
+ 	.base = 0x0, .len = 0x45C,
+ 	.features = BIT(DPU_MDP_AUDIO_SELECT),
+-	.highest_bank_bit = 0x3,
+ 	.clk_ctrls[DPU_CLK_CTRL_VIG0] = {
+ 			.reg_off = 0x2AC, .bit_off = 0},
+ 	.clk_ctrls[DPU_CLK_CTRL_VIG1] = {
+@@ -581,8 +601,6 @@ static const struct dpu_mdp_cfg sm6115_mdp[] = {
+ 	.name = "top_0", .id = MDP_TOP,
+ 	.base = 0x0, .len = 0x494,
+ 	.features = 0,
+-	.highest_bank_bit = 0x1,
+-	.ubwc_swizzle = 0x7,
+ 	.clk_ctrls[DPU_CLK_CTRL_VIG0] = {
+ 		.reg_off = 0x2ac, .bit_off = 0},
+ 	.clk_ctrls[DPU_CLK_CTRL_DMA0] = {
+@@ -595,8 +613,6 @@ static const struct dpu_mdp_cfg sm8250_mdp[] = {
+ 	.name = "top_0", .id = MDP_TOP,
+ 	.base = 0x0, .len = 0x494,
+ 	.features = 0,
+-	.highest_bank_bit = 0x3, /* TODO: 2 for LP_DDR4 */
+-	.ubwc_swizzle = 0x6,
+ 	.clk_ctrls[DPU_CLK_CTRL_VIG0] = {
+ 			.reg_off = 0x2AC, .bit_off = 0},
+ 	.clk_ctrls[DPU_CLK_CTRL_VIG1] = {
+@@ -625,7 +641,6 @@ static const struct dpu_mdp_cfg sm8350_mdp[] = {
+ 	.name = "top_0", .id = MDP_TOP,
+ 	.base = 0x0, .len = 0x494,
+ 	.features = 0,
+-	.highest_bank_bit = 0x3, /* TODO: 2 for LP_DDR4 */
+ 	.clk_ctrls[DPU_CLK_CTRL_VIG0] = {
+ 			.reg_off = 0x2ac, .bit_off = 0},
+ 	.clk_ctrls[DPU_CLK_CTRL_VIG1] = {
+@@ -647,40 +662,10 @@ static const struct dpu_mdp_cfg sm8350_mdp[] = {
+ 	},
+ };
+ 
+-static const struct dpu_mdp_cfg sm8450_mdp[] = {
+-	{
+-	.name = "top_0", .id = MDP_TOP,
+-	.base = 0x0, .len = 0x494,
+-	.features = BIT(DPU_MDP_PERIPH_0_REMOVED),
+-	.highest_bank_bit = 0x3, /* TODO: 2 for LP_DDR4 */
+-	.ubwc_swizzle = 0x6,
+-	.clk_ctrls[DPU_CLK_CTRL_VIG0] = {
+-			.reg_off = 0x2AC, .bit_off = 0},
+-	.clk_ctrls[DPU_CLK_CTRL_VIG1] = {
+-			.reg_off = 0x2B4, .bit_off = 0},
+-	.clk_ctrls[DPU_CLK_CTRL_VIG2] = {
+-			.reg_off = 0x2BC, .bit_off = 0},
+-	.clk_ctrls[DPU_CLK_CTRL_VIG3] = {
+-			.reg_off = 0x2C4, .bit_off = 0},
+-	.clk_ctrls[DPU_CLK_CTRL_DMA0] = {
+-			.reg_off = 0x2AC, .bit_off = 8},
+-	.clk_ctrls[DPU_CLK_CTRL_DMA1] = {
+-			.reg_off = 0x2B4, .bit_off = 8},
+-	.clk_ctrls[DPU_CLK_CTRL_DMA2] = {
+-			.reg_off = 0x2BC, .bit_off = 8},
+-	.clk_ctrls[DPU_CLK_CTRL_DMA3] = {
+-			.reg_off = 0x2C4, .bit_off = 8},
+-	.clk_ctrls[DPU_CLK_CTRL_REG_DMA] = {
+-			.reg_off = 0x2BC, .bit_off = 20},
+-	},
+-};
+-
+ static const struct dpu_mdp_cfg sc7280_mdp[] = {
+ 	{
+ 	.name = "top_0", .id = MDP_TOP,
+ 	.base = 0x0, .len = 0x2014,
+-	.highest_bank_bit = 0x1,
+-	.ubwc_swizzle = 0x6,
+ 	.clk_ctrls[DPU_CLK_CTRL_VIG0] = {
+ 		.reg_off = 0x2AC, .bit_off = 0},
+ 	.clk_ctrls[DPU_CLK_CTRL_DMA0] = {
+@@ -697,8 +682,6 @@ static const struct dpu_mdp_cfg sc8280xp_mdp[] = {
+ 	.name = "top_0", .id = MDP_TOP,
+ 	.base = 0x0, .len = 0x494,
+ 	.features = BIT(DPU_MDP_PERIPH_0_REMOVED),
+-	.highest_bank_bit = 2,
+-	.ubwc_swizzle = 6,
+ 	.clk_ctrls[DPU_CLK_CTRL_VIG0] = { .reg_off = 0x2ac, .bit_off = 0},
+ 	.clk_ctrls[DPU_CLK_CTRL_VIG1] = { .reg_off = 0x2b4, .bit_off = 0},
+ 	.clk_ctrls[DPU_CLK_CTRL_VIG2] = { .reg_off = 0x2bc, .bit_off = 0},
+@@ -711,44 +694,11 @@ static const struct dpu_mdp_cfg sc8280xp_mdp[] = {
+ 	},
+ };
+ 
+-static const struct dpu_mdp_cfg sm8550_mdp[] = {
+-	{
+-	.name = "top_0", .id = MDP_TOP,
+-	.base = 0, .len = 0x494,
+-	.features = BIT(DPU_MDP_PERIPH_0_REMOVED),
+-	.highest_bank_bit = 0x3, /* TODO: 2 for LP_DDR4 */
+-	.ubwc_swizzle = 0x6,
+-	.clk_ctrls[DPU_CLK_CTRL_VIG0] = {
+-			.reg_off = 0x4330, .bit_off = 0},
+-	.clk_ctrls[DPU_CLK_CTRL_VIG1] = {
+-			.reg_off = 0x6330, .bit_off = 0},
+-	.clk_ctrls[DPU_CLK_CTRL_VIG2] = {
+-			.reg_off = 0x8330, .bit_off = 0},
+-	.clk_ctrls[DPU_CLK_CTRL_VIG3] = {
+-			.reg_off = 0xa330, .bit_off = 0},
+-	.clk_ctrls[DPU_CLK_CTRL_DMA0] = {
+-			.reg_off = 0x24330, .bit_off = 0},
+-	.clk_ctrls[DPU_CLK_CTRL_DMA1] = {
+-			.reg_off = 0x26330, .bit_off = 0},
+-	.clk_ctrls[DPU_CLK_CTRL_DMA2] = {
+-			.reg_off = 0x28330, .bit_off = 0},
+-	.clk_ctrls[DPU_CLK_CTRL_DMA3] = {
+-			.reg_off = 0x2a330, .bit_off = 0},
+-	.clk_ctrls[DPU_CLK_CTRL_DMA4] = {
+-			.reg_off = 0x2c330, .bit_off = 0},
+-	.clk_ctrls[DPU_CLK_CTRL_DMA5] = {
+-			.reg_off = 0x2e330, .bit_off = 0},
+-	.clk_ctrls[DPU_CLK_CTRL_REG_DMA] = {
+-			.reg_off = 0x2bc, .bit_off = 20},
+-	},
+-};
+-
+ static const struct dpu_mdp_cfg qcm2290_mdp[] = {
+ 	{
+ 	.name = "top_0", .id = MDP_TOP,
+ 	.base = 0x0, .len = 0x494,
+ 	.features = 0,
+-	.highest_bank_bit = 0x2,
+ 	.clk_ctrls[DPU_CLK_CTRL_VIG0] = {
+ 		.reg_off = 0x2AC, .bit_off = 0},
+ 	.clk_ctrls[DPU_CLK_CTRL_DMA0] = {
+@@ -963,84 +913,6 @@ static const struct dpu_ctl_cfg sm8350_ctl[] = {
+ 	},
+ };
+ 
+-static const struct dpu_ctl_cfg sm8450_ctl[] = {
+-	{
+-	.name = "ctl_0", .id = CTL_0,
+-	.base = 0x15000, .len = 0x204,
+-	.features = BIT(DPU_CTL_ACTIVE_CFG) | BIT(DPU_CTL_SPLIT_DISPLAY) | BIT(DPU_CTL_FETCH_ACTIVE),
+-	.intr_start = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 9),
+-	},
+-	{
+-	.name = "ctl_1", .id = CTL_1,
+-	.base = 0x16000, .len = 0x204,
+-	.features = BIT(DPU_CTL_SPLIT_DISPLAY) | CTL_SC7280_MASK,
+-	.intr_start = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 10),
+-	},
+-	{
+-	.name = "ctl_2", .id = CTL_2,
+-	.base = 0x17000, .len = 0x204,
+-	.features = CTL_SC7280_MASK,
+-	.intr_start = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 11),
+-	},
+-	{
+-	.name = "ctl_3", .id = CTL_3,
+-	.base = 0x18000, .len = 0x204,
+-	.features = CTL_SC7280_MASK,
+-	.intr_start = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 12),
+-	},
+-	{
+-	.name = "ctl_4", .id = CTL_4,
+-	.base = 0x19000, .len = 0x204,
+-	.features = CTL_SC7280_MASK,
+-	.intr_start = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 13),
+-	},
+-	{
+-	.name = "ctl_5", .id = CTL_5,
+-	.base = 0x1a000, .len = 0x204,
+-	.features = CTL_SC7280_MASK,
+-	.intr_start = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 23),
+-	},
+-};
+-
+-static const struct dpu_ctl_cfg sm8550_ctl[] = {
+-	{
+-	.name = "ctl_0", .id = CTL_0,
+-	.base = 0x15000, .len = 0x290,
+-	.features = CTL_SM8550_MASK | BIT(DPU_CTL_SPLIT_DISPLAY),
+-	.intr_start = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 9),
+-	},
+-	{
+-	.name = "ctl_1", .id = CTL_1,
+-	.base = 0x16000, .len = 0x290,
+-	.features = CTL_SM8550_MASK | BIT(DPU_CTL_SPLIT_DISPLAY),
+-	.intr_start = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 10),
+-	},
+-	{
+-	.name = "ctl_2", .id = CTL_2,
+-	.base = 0x17000, .len = 0x290,
+-	.features = CTL_SM8550_MASK,
+-	.intr_start = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 11),
+-	},
+-	{
+-	.name = "ctl_3", .id = CTL_3,
+-	.base = 0x18000, .len = 0x290,
+-	.features = CTL_SM8550_MASK,
+-	.intr_start = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 12),
+-	},
+-	{
+-	.name = "ctl_4", .id = CTL_4,
+-	.base = 0x19000, .len = 0x290,
+-	.features = CTL_SM8550_MASK,
+-	.intr_start = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 13),
+-	},
+-	{
+-	.name = "ctl_5", .id = CTL_5,
+-	.base = 0x1a000, .len = 0x290,
+-	.features = CTL_SM8550_MASK,
+-	.intr_start = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 23),
+-	},
+-};
+-
+ static const struct dpu_ctl_cfg sc7280_ctl[] = {
+ 	{
+ 	.name = "ctl_0", .id = CTL_0,
+@@ -1164,11 +1036,11 @@ static const struct dpu_sspp_sub_blks sdm845_dma_sblk_1 = _DMA_SBLK("9", 2);
+ static const struct dpu_sspp_sub_blks sdm845_dma_sblk_2 = _DMA_SBLK("10", 3);
+ static const struct dpu_sspp_sub_blks sdm845_dma_sblk_3 = _DMA_SBLK("11", 4);
+ 
+-#define SSPP_BLK(_name, _id, _base, _features, \
++#define SSPP_BLK(_name, _id, _base, _len, _features, \
+ 		_sblk, _xinid, _type, _clkctrl) \
+ 	{ \
+ 	.name = _name, .id = _id, \
+-	.base = _base, .len = 0x1c8, \
++	.base = _base, .len = _len, \
+ 	.features = _features, \
+ 	.sblk = &_sblk, \
+ 	.xin_id = _xinid, \
+@@ -1177,40 +1049,40 @@ static const struct dpu_sspp_sub_blks sdm845_dma_sblk_3 = _DMA_SBLK("11", 4);
+ 	}
+ 
+ static const struct dpu_sspp_cfg msm8998_sspp[] = {
+-	SSPP_BLK("sspp_0", SSPP_VIG0, 0x4000, VIG_MSM8998_MASK,
++	SSPP_BLK("sspp_0", SSPP_VIG0, 0x4000, 0x1ac, VIG_MSM8998_MASK,
+ 		msm8998_vig_sblk_0, 0,  SSPP_TYPE_VIG, DPU_CLK_CTRL_VIG0),
+-	SSPP_BLK("sspp_1", SSPP_VIG1, 0x6000, VIG_MSM8998_MASK,
++	SSPP_BLK("sspp_1", SSPP_VIG1, 0x6000, 0x1ac, VIG_MSM8998_MASK,
+ 		msm8998_vig_sblk_1, 4,  SSPP_TYPE_VIG, DPU_CLK_CTRL_VIG1),
+-	SSPP_BLK("sspp_2", SSPP_VIG2, 0x8000, VIG_MSM8998_MASK,
++	SSPP_BLK("sspp_2", SSPP_VIG2, 0x8000, 0x1ac, VIG_MSM8998_MASK,
+ 		msm8998_vig_sblk_2, 8, SSPP_TYPE_VIG, DPU_CLK_CTRL_VIG2),
+-	SSPP_BLK("sspp_3", SSPP_VIG3, 0xa000, VIG_MSM8998_MASK,
++	SSPP_BLK("sspp_3", SSPP_VIG3, 0xa000, 0x1ac, VIG_MSM8998_MASK,
+ 		msm8998_vig_sblk_3, 12,  SSPP_TYPE_VIG, DPU_CLK_CTRL_VIG3),
+-	SSPP_BLK("sspp_8", SSPP_DMA0, 0x24000,  DMA_MSM8998_MASK,
++	SSPP_BLK("sspp_8", SSPP_DMA0, 0x24000, 0x1ac, DMA_MSM8998_MASK,
+ 		sdm845_dma_sblk_0, 1, SSPP_TYPE_DMA, DPU_CLK_CTRL_DMA0),
+-	SSPP_BLK("sspp_9", SSPP_DMA1, 0x26000,  DMA_MSM8998_MASK,
++	SSPP_BLK("sspp_9", SSPP_DMA1, 0x26000, 0x1ac, DMA_MSM8998_MASK,
+ 		sdm845_dma_sblk_1, 5, SSPP_TYPE_DMA, DPU_CLK_CTRL_DMA1),
+-	SSPP_BLK("sspp_10", SSPP_DMA2, 0x28000,  DMA_CURSOR_MSM8998_MASK,
++	SSPP_BLK("sspp_10", SSPP_DMA2, 0x28000, 0x1ac, DMA_CURSOR_MSM8998_MASK,
+ 		sdm845_dma_sblk_2, 9, SSPP_TYPE_DMA, DPU_CLK_CTRL_DMA2),
+-	SSPP_BLK("sspp_11", SSPP_DMA3, 0x2a000,  DMA_CURSOR_MSM8998_MASK,
++	SSPP_BLK("sspp_11", SSPP_DMA3, 0x2a000, 0x1ac, DMA_CURSOR_MSM8998_MASK,
+ 		sdm845_dma_sblk_3, 13, SSPP_TYPE_DMA, DPU_CLK_CTRL_DMA3),
+ };
+ 
+ static const struct dpu_sspp_cfg sdm845_sspp[] = {
+-	SSPP_BLK("sspp_0", SSPP_VIG0, 0x4000, VIG_SDM845_MASK,
++	SSPP_BLK("sspp_0", SSPP_VIG0, 0x4000, 0x1c8, VIG_SDM845_MASK_SDMA,
+ 		sdm845_vig_sblk_0, 0,  SSPP_TYPE_VIG, DPU_CLK_CTRL_VIG0),
+-	SSPP_BLK("sspp_1", SSPP_VIG1, 0x6000, VIG_SDM845_MASK,
++	SSPP_BLK("sspp_1", SSPP_VIG1, 0x6000, 0x1c8, VIG_SDM845_MASK_SDMA,
+ 		sdm845_vig_sblk_1, 4,  SSPP_TYPE_VIG, DPU_CLK_CTRL_VIG1),
+-	SSPP_BLK("sspp_2", SSPP_VIG2, 0x8000, VIG_SDM845_MASK,
++	SSPP_BLK("sspp_2", SSPP_VIG2, 0x8000, 0x1c8, VIG_SDM845_MASK_SDMA,
+ 		sdm845_vig_sblk_2, 8, SSPP_TYPE_VIG, DPU_CLK_CTRL_VIG2),
+-	SSPP_BLK("sspp_3", SSPP_VIG3, 0xa000, VIG_SDM845_MASK,
++	SSPP_BLK("sspp_3", SSPP_VIG3, 0xa000, 0x1c8, VIG_SDM845_MASK_SDMA,
+ 		sdm845_vig_sblk_3, 12,  SSPP_TYPE_VIG, DPU_CLK_CTRL_VIG3),
+-	SSPP_BLK("sspp_8", SSPP_DMA0, 0x24000,  DMA_SDM845_MASK,
++	SSPP_BLK("sspp_8", SSPP_DMA0, 0x24000, 0x1c8, DMA_SDM845_MASK_SDMA,
+ 		sdm845_dma_sblk_0, 1, SSPP_TYPE_DMA, DPU_CLK_CTRL_DMA0),
+-	SSPP_BLK("sspp_9", SSPP_DMA1, 0x26000,  DMA_SDM845_MASK,
++	SSPP_BLK("sspp_9", SSPP_DMA1, 0x26000, 0x1c8, DMA_SDM845_MASK_SDMA,
+ 		sdm845_dma_sblk_1, 5, SSPP_TYPE_DMA, DPU_CLK_CTRL_DMA1),
+-	SSPP_BLK("sspp_10", SSPP_DMA2, 0x28000,  DMA_CURSOR_SDM845_MASK,
++	SSPP_BLK("sspp_10", SSPP_DMA2, 0x28000, 0x1c8, DMA_CURSOR_SDM845_MASK_SDMA,
+ 		sdm845_dma_sblk_2, 9, SSPP_TYPE_DMA, DPU_CLK_CTRL_DMA2),
+-	SSPP_BLK("sspp_11", SSPP_DMA3, 0x2a000,  DMA_CURSOR_SDM845_MASK,
++	SSPP_BLK("sspp_11", SSPP_DMA3, 0x2a000, 0x1c8, DMA_CURSOR_SDM845_MASK_SDMA,
+ 		sdm845_dma_sblk_3, 13, SSPP_TYPE_DMA, DPU_CLK_CTRL_DMA3),
+ };
+ 
+@@ -1221,13 +1093,13 @@ static const struct dpu_sspp_sub_blks sc7280_vig_sblk_0 =
+ 			_VIG_SBLK_ROT("0", 4, DPU_SSPP_SCALER_QSEED4, &dpu_rot_sc7280_cfg_v2);
+ 
+ static const struct dpu_sspp_cfg sc7180_sspp[] = {
+-	SSPP_BLK("sspp_0", SSPP_VIG0, 0x4000, VIG_SC7180_MASK,
++	SSPP_BLK("sspp_0", SSPP_VIG0, 0x4000, 0x1f8, VIG_SC7180_MASK,
+ 		sc7180_vig_sblk_0, 0,  SSPP_TYPE_VIG, DPU_CLK_CTRL_VIG0),
+-	SSPP_BLK("sspp_8", SSPP_DMA0, 0x24000,  DMA_SDM845_MASK,
++	SSPP_BLK("sspp_8", SSPP_DMA0, 0x24000, 0x1f8, DMA_SDM845_MASK,
+ 		sdm845_dma_sblk_0, 1, SSPP_TYPE_DMA, DPU_CLK_CTRL_DMA0),
+-	SSPP_BLK("sspp_9", SSPP_DMA1, 0x26000,  DMA_CURSOR_SDM845_MASK,
++	SSPP_BLK("sspp_9", SSPP_DMA1, 0x26000, 0x1f8, DMA_CURSOR_SDM845_MASK,
+ 		sdm845_dma_sblk_1, 5, SSPP_TYPE_DMA, DPU_CLK_CTRL_DMA1),
+-	SSPP_BLK("sspp_10", SSPP_DMA2, 0x28000,  DMA_CURSOR_SDM845_MASK,
++	SSPP_BLK("sspp_10", SSPP_DMA2, 0x28000, 0x1f8, DMA_CURSOR_SDM845_MASK,
+ 		sdm845_dma_sblk_2, 9, SSPP_TYPE_DMA, DPU_CLK_CTRL_DMA2),
+ };
+ 
+@@ -1235,9 +1107,9 @@ static const struct dpu_sspp_sub_blks sm6115_vig_sblk_0 =
+ 				_VIG_SBLK("0", 2, DPU_SSPP_SCALER_QSEED4);
+ 
+ static const struct dpu_sspp_cfg sm6115_sspp[] = {
+-	SSPP_BLK("sspp_0", SSPP_VIG0, 0x4000, VIG_SC7180_MASK,
++	SSPP_BLK("sspp_0", SSPP_VIG0, 0x4000, 0x1f8, VIG_SC7180_MASK,
+ 		sm6115_vig_sblk_0, 0, SSPP_TYPE_VIG, DPU_CLK_CTRL_VIG0),
+-	SSPP_BLK("sspp_8", SSPP_DMA0, 0x24000,  DMA_SDM845_MASK,
++	SSPP_BLK("sspp_8", SSPP_DMA0, 0x24000, 0x1f8, DMA_SDM845_MASK,
+ 		sdm845_dma_sblk_0, 1, SSPP_TYPE_DMA, DPU_CLK_CTRL_DMA0),
+ };
+ 
+@@ -1251,21 +1123,21 @@ static const struct dpu_sspp_sub_blks sm8250_vig_sblk_3 =
+ 				_VIG_SBLK("3", 8, DPU_SSPP_SCALER_QSEED4);
+ 
+ static const struct dpu_sspp_cfg sm8250_sspp[] = {
+-	SSPP_BLK("sspp_0", SSPP_VIG0, 0x4000, VIG_SC7180_MASK,
++	SSPP_BLK("sspp_0", SSPP_VIG0, 0x4000, 0x1f8, VIG_SC7180_MASK_SDMA,
+ 		sm8250_vig_sblk_0, 0,  SSPP_TYPE_VIG, DPU_CLK_CTRL_VIG0),
+-	SSPP_BLK("sspp_1", SSPP_VIG1, 0x6000, VIG_SC7180_MASK,
++	SSPP_BLK("sspp_1", SSPP_VIG1, 0x6000, 0x1f8, VIG_SC7180_MASK_SDMA,
+ 		sm8250_vig_sblk_1, 4,  SSPP_TYPE_VIG, DPU_CLK_CTRL_VIG1),
+-	SSPP_BLK("sspp_2", SSPP_VIG2, 0x8000, VIG_SC7180_MASK,
++	SSPP_BLK("sspp_2", SSPP_VIG2, 0x8000, 0x1f8, VIG_SC7180_MASK_SDMA,
+ 		sm8250_vig_sblk_2, 8, SSPP_TYPE_VIG, DPU_CLK_CTRL_VIG2),
+-	SSPP_BLK("sspp_3", SSPP_VIG3, 0xa000, VIG_SC7180_MASK,
++	SSPP_BLK("sspp_3", SSPP_VIG3, 0xa000, 0x1f8, VIG_SC7180_MASK_SDMA,
+ 		sm8250_vig_sblk_3, 12,  SSPP_TYPE_VIG, DPU_CLK_CTRL_VIG3),
+-	SSPP_BLK("sspp_8", SSPP_DMA0, 0x24000,  DMA_SDM845_MASK,
++	SSPP_BLK("sspp_8", SSPP_DMA0, 0x24000, 0x1f8, DMA_SDM845_MASK_SDMA,
+ 		sdm845_dma_sblk_0, 1, SSPP_TYPE_DMA, DPU_CLK_CTRL_DMA0),
+-	SSPP_BLK("sspp_9", SSPP_DMA1, 0x26000,  DMA_SDM845_MASK,
++	SSPP_BLK("sspp_9", SSPP_DMA1, 0x26000, 0x1f8, DMA_SDM845_MASK_SDMA,
+ 		sdm845_dma_sblk_1, 5, SSPP_TYPE_DMA, DPU_CLK_CTRL_DMA1),
+-	SSPP_BLK("sspp_10", SSPP_DMA2, 0x28000,  DMA_CURSOR_SDM845_MASK,
++	SSPP_BLK("sspp_10", SSPP_DMA2, 0x28000, 0x1f8, DMA_CURSOR_SDM845_MASK_SDMA,
+ 		sdm845_dma_sblk_2, 9, SSPP_TYPE_DMA, DPU_CLK_CTRL_DMA2),
+-	SSPP_BLK("sspp_11", SSPP_DMA3, 0x2a000,  DMA_CURSOR_SDM845_MASK,
++	SSPP_BLK("sspp_11", SSPP_DMA3, 0x2a000, 0x1f8, DMA_CURSOR_SDM845_MASK_SDMA,
+ 		sdm845_dma_sblk_3, 13, SSPP_TYPE_DMA, DPU_CLK_CTRL_DMA3),
+ };
+ 
+@@ -1278,25 +1150,6 @@ static const struct dpu_sspp_sub_blks sm8450_vig_sblk_2 =
+ static const struct dpu_sspp_sub_blks sm8450_vig_sblk_3 =
+ 				_VIG_SBLK("3", 8, DPU_SSPP_SCALER_QSEED4);
+ 
+-static const struct dpu_sspp_cfg sm8450_sspp[] = {
+-	SSPP_BLK("sspp_0", SSPP_VIG0, 0x4000, VIG_SC7180_MASK,
+-		sm8450_vig_sblk_0, 0,  SSPP_TYPE_VIG, DPU_CLK_CTRL_VIG0),
+-	SSPP_BLK("sspp_1", SSPP_VIG1, 0x6000, VIG_SC7180_MASK,
+-		sm8450_vig_sblk_1, 4,  SSPP_TYPE_VIG, DPU_CLK_CTRL_VIG1),
+-	SSPP_BLK("sspp_2", SSPP_VIG2, 0x8000, VIG_SC7180_MASK,
+-		sm8450_vig_sblk_2, 8, SSPP_TYPE_VIG, DPU_CLK_CTRL_VIG2),
+-	SSPP_BLK("sspp_3", SSPP_VIG3, 0xa000, VIG_SC7180_MASK,
+-		sm8450_vig_sblk_3, 12,  SSPP_TYPE_VIG, DPU_CLK_CTRL_VIG3),
+-	SSPP_BLK("sspp_8", SSPP_DMA0, 0x24000,  DMA_SDM845_MASK,
+-		sdm845_dma_sblk_0, 1, SSPP_TYPE_DMA, DPU_CLK_CTRL_DMA0),
+-	SSPP_BLK("sspp_9", SSPP_DMA1, 0x26000,  DMA_SDM845_MASK,
+-		sdm845_dma_sblk_1, 5, SSPP_TYPE_DMA, DPU_CLK_CTRL_DMA1),
+-	SSPP_BLK("sspp_10", SSPP_DMA2, 0x28000,  DMA_CURSOR_SDM845_MASK,
+-		sdm845_dma_sblk_2, 9, SSPP_TYPE_DMA, DPU_CLK_CTRL_DMA2),
+-	SSPP_BLK("sspp_11", SSPP_DMA3, 0x2a000,  DMA_CURSOR_SDM845_MASK,
+-		sdm845_dma_sblk_3, 13, SSPP_TYPE_DMA, DPU_CLK_CTRL_DMA3),
+-};
+-
+ static const struct dpu_sspp_sub_blks sm8550_vig_sblk_0 =
+ 				_VIG_SBLK("0", 7, DPU_SSPP_SCALER_QSEED4);
+ static const struct dpu_sspp_sub_blks sm8550_vig_sblk_1 =
+@@ -1308,37 +1161,14 @@ static const struct dpu_sspp_sub_blks sm8550_vig_sblk_3 =
+ static const struct dpu_sspp_sub_blks sm8550_dma_sblk_4 = _DMA_SBLK("12", 5);
+ static const struct dpu_sspp_sub_blks sm8550_dma_sblk_5 = _DMA_SBLK("13", 6);
+ 
+-static const struct dpu_sspp_cfg sm8550_sspp[] = {
+-	SSPP_BLK("sspp_0", SSPP_VIG0, 0x4000, VIG_SC7180_MASK,
+-		sm8550_vig_sblk_0, 0,  SSPP_TYPE_VIG, DPU_CLK_CTRL_VIG0),
+-	SSPP_BLK("sspp_1", SSPP_VIG1, 0x6000, VIG_SC7180_MASK,
+-		sm8550_vig_sblk_1, 4,  SSPP_TYPE_VIG, DPU_CLK_CTRL_VIG1),
+-	SSPP_BLK("sspp_2", SSPP_VIG2, 0x8000, VIG_SC7180_MASK,
+-		sm8550_vig_sblk_2, 8, SSPP_TYPE_VIG, DPU_CLK_CTRL_VIG2),
+-	SSPP_BLK("sspp_3", SSPP_VIG3, 0xa000, VIG_SC7180_MASK,
+-		sm8550_vig_sblk_3, 12,  SSPP_TYPE_VIG, DPU_CLK_CTRL_VIG3),
+-	SSPP_BLK("sspp_8", SSPP_DMA0, 0x24000,  DMA_SDM845_MASK,
+-		sdm845_dma_sblk_0, 1, SSPP_TYPE_DMA, DPU_CLK_CTRL_DMA0),
+-	SSPP_BLK("sspp_9", SSPP_DMA1, 0x26000,  DMA_SDM845_MASK,
+-		sdm845_dma_sblk_1, 5, SSPP_TYPE_DMA, DPU_CLK_CTRL_DMA1),
+-	SSPP_BLK("sspp_10", SSPP_DMA2, 0x28000,  DMA_SDM845_MASK,
+-		sdm845_dma_sblk_2, 9, SSPP_TYPE_DMA, DPU_CLK_CTRL_DMA2),
+-	SSPP_BLK("sspp_11", SSPP_DMA3, 0x2a000,  DMA_SDM845_MASK,
+-		sdm845_dma_sblk_3, 13, SSPP_TYPE_DMA, DPU_CLK_CTRL_DMA3),
+-	SSPP_BLK("sspp_12", SSPP_DMA4, 0x2c000,  DMA_CURSOR_SDM845_MASK,
+-		sm8550_dma_sblk_4, 14, SSPP_TYPE_DMA, DPU_CLK_CTRL_DMA4),
+-	SSPP_BLK("sspp_13", SSPP_DMA5, 0x2e000,  DMA_CURSOR_SDM845_MASK,
+-		sm8550_dma_sblk_5, 15, SSPP_TYPE_DMA, DPU_CLK_CTRL_DMA5),
+-};
+-
+ static const struct dpu_sspp_cfg sc7280_sspp[] = {
+-	SSPP_BLK("sspp_0", SSPP_VIG0, 0x4000, VIG_SC7280_MASK,
++	SSPP_BLK("sspp_0", SSPP_VIG0, 0x4000, 0x1f8, VIG_SC7280_MASK_SDMA,
+ 		sc7280_vig_sblk_0, 0,  SSPP_TYPE_VIG, DPU_CLK_CTRL_VIG0),
+-	SSPP_BLK("sspp_8", SSPP_DMA0, 0x24000,  DMA_SDM845_MASK,
++	SSPP_BLK("sspp_8", SSPP_DMA0, 0x24000, 0x1f8, DMA_SDM845_MASK_SDMA,
+ 		sdm845_dma_sblk_0, 1, SSPP_TYPE_DMA, DPU_CLK_CTRL_DMA0),
+-	SSPP_BLK("sspp_9", SSPP_DMA1, 0x26000,  DMA_CURSOR_SDM845_MASK,
++	SSPP_BLK("sspp_9", SSPP_DMA1, 0x26000, 0x1f8, DMA_CURSOR_SDM845_MASK_SDMA,
+ 		sdm845_dma_sblk_1, 5, SSPP_TYPE_DMA, DPU_CLK_CTRL_DMA1),
+-	SSPP_BLK("sspp_10", SSPP_DMA2, 0x28000,  DMA_CURSOR_SDM845_MASK,
++	SSPP_BLK("sspp_10", SSPP_DMA2, 0x28000, 0x1f8, DMA_CURSOR_SDM845_MASK_SDMA,
+ 		sdm845_dma_sblk_2, 9, SSPP_TYPE_DMA, DPU_CLK_CTRL_DMA2),
+ };
+ 
+@@ -1352,21 +1182,21 @@ static const struct dpu_sspp_sub_blks sc8280xp_vig_sblk_3 =
+ 				_VIG_SBLK("3", 8, DPU_SSPP_SCALER_QSEED4);
+ 
+ static const struct dpu_sspp_cfg sc8280xp_sspp[] = {
+-	SSPP_BLK("sspp_0", SSPP_VIG0, 0x4000, VIG_SC7180_MASK,
+-		 sc8280xp_vig_sblk_0, 0,  SSPP_TYPE_VIG, DPU_CLK_CTRL_VIG0),
+-	SSPP_BLK("sspp_1", SSPP_VIG1, 0x6000, VIG_SC7180_MASK,
+-		 sc8280xp_vig_sblk_1, 4,  SSPP_TYPE_VIG, DPU_CLK_CTRL_VIG1),
+-	SSPP_BLK("sspp_2", SSPP_VIG2, 0x8000, VIG_SC7180_MASK,
++	SSPP_BLK("sspp_0", SSPP_VIG0, 0x4000, 0x2ac, VIG_SC7180_MASK,
++		 sc8280xp_vig_sblk_0, 0, SSPP_TYPE_VIG, DPU_CLK_CTRL_VIG0),
++	SSPP_BLK("sspp_1", SSPP_VIG1, 0x6000, 0x2ac, VIG_SC7180_MASK,
++		 sc8280xp_vig_sblk_1, 4, SSPP_TYPE_VIG, DPU_CLK_CTRL_VIG1),
++	SSPP_BLK("sspp_2", SSPP_VIG2, 0x8000, 0x2ac, VIG_SC7180_MASK,
+ 		 sc8280xp_vig_sblk_2, 8, SSPP_TYPE_VIG, DPU_CLK_CTRL_VIG2),
+-	SSPP_BLK("sspp_3", SSPP_VIG3, 0xa000, VIG_SC7180_MASK,
+-		 sc8280xp_vig_sblk_3, 12,  SSPP_TYPE_VIG, DPU_CLK_CTRL_VIG3),
+-	SSPP_BLK("sspp_8", SSPP_DMA0, 0x24000, DMA_SDM845_MASK,
++	SSPP_BLK("sspp_3", SSPP_VIG3, 0xa000, 0x2ac, VIG_SC7180_MASK,
++		 sc8280xp_vig_sblk_3, 12, SSPP_TYPE_VIG, DPU_CLK_CTRL_VIG3),
++	SSPP_BLK("sspp_8", SSPP_DMA0, 0x24000, 0x2ac, DMA_SDM845_MASK,
+ 		 sdm845_dma_sblk_0, 1, SSPP_TYPE_DMA, DPU_CLK_CTRL_DMA0),
+-	SSPP_BLK("sspp_9", SSPP_DMA1, 0x26000, DMA_SDM845_MASK,
++	SSPP_BLK("sspp_9", SSPP_DMA1, 0x26000, 0x2ac, DMA_SDM845_MASK,
+ 		 sdm845_dma_sblk_1, 5, SSPP_TYPE_DMA, DPU_CLK_CTRL_DMA1),
+-	SSPP_BLK("sspp_10", SSPP_DMA2, 0x28000, DMA_CURSOR_SDM845_MASK,
++	SSPP_BLK("sspp_10", SSPP_DMA2, 0x28000, 0x2ac, DMA_CURSOR_SDM845_MASK,
+ 		 sdm845_dma_sblk_2, 9, SSPP_TYPE_DMA, DPU_CLK_CTRL_DMA2),
+-	SSPP_BLK("sspp_11", SSPP_DMA3, 0x2a000, DMA_CURSOR_SDM845_MASK,
++	SSPP_BLK("sspp_11", SSPP_DMA3, 0x2a000, 0x2ac, DMA_CURSOR_SDM845_MASK,
+ 		 sdm845_dma_sblk_3, 13, SSPP_TYPE_DMA, DPU_CLK_CTRL_DMA3),
+ };
+ 
+@@ -1387,9 +1217,9 @@ static const struct dpu_sspp_sub_blks qcm2290_vig_sblk_0 = _VIG_SBLK_NOSCALE("0"
+ static const struct dpu_sspp_sub_blks qcm2290_dma_sblk_0 = _DMA_SBLK("8", 1);
+ 
+ static const struct dpu_sspp_cfg qcm2290_sspp[] = {
+-	SSPP_BLK("sspp_0", SSPP_VIG0, 0x4000, VIG_QCM2290_MASK,
++	SSPP_BLK("sspp_0", SSPP_VIG0, 0x4000, 0x1f8, VIG_QCM2290_MASK,
+ 		 qcm2290_vig_sblk_0, 0, SSPP_TYPE_VIG, DPU_CLK_CTRL_VIG0),
+-	SSPP_BLK("sspp_8", SSPP_DMA0, 0x24000,  DMA_SDM845_MASK,
++	SSPP_BLK("sspp_8", SSPP_DMA0, 0x24000, 0x1f8, DMA_SDM845_MASK,
+ 		 qcm2290_dma_sblk_0, 1, SSPP_TYPE_DMA, DPU_CLK_CTRL_DMA0),
+ };
+ 
+@@ -1605,7 +1435,7 @@ static const struct dpu_pingpong_sub_blks sc7280_pp_sblk = {
+ 	.len = 0x20, .version = 0x20000},
+ };
+ 
+-#define PP_BLK_DIPHER(_name, _id, _base, _merge_3d, _sblk, _done, _rdptr) \
++#define PP_BLK_DITHER(_name, _id, _base, _merge_3d, _sblk, _done, _rdptr) \
+ 	{\
+ 	.name = _name, .id = _id, \
+ 	.base = _base, .len = 0, \
+@@ -1726,61 +1556,6 @@ static struct dpu_pingpong_cfg qcm2290_pp[] = {
+ 		DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 12)),
+ };
+ 
+-/* FIXME: interrupts */
+-static const struct dpu_pingpong_cfg sm8450_pp[] = {
+-	PP_BLK_TE("pingpong_0", PINGPONG_0, 0x69000, MERGE_3D_0, sdm845_pp_sblk_te,
+-			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 8),
+-			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 12)),
+-	PP_BLK_TE("pingpong_1", PINGPONG_1, 0x6a000, MERGE_3D_0, sdm845_pp_sblk_te,
+-			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 9),
+-			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 13)),
+-	PP_BLK("pingpong_2", PINGPONG_2, 0x6b000, MERGE_3D_1, sdm845_pp_sblk,
+-			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 10),
+-			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 14)),
+-	PP_BLK("pingpong_3", PINGPONG_3, 0x6c000, MERGE_3D_1, sdm845_pp_sblk,
+-			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 11),
+-			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 15)),
+-	PP_BLK("pingpong_4", PINGPONG_4, 0x6d000, MERGE_3D_2, sdm845_pp_sblk,
+-			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 30),
+-			-1),
+-	PP_BLK("pingpong_5", PINGPONG_5, 0x6e000, MERGE_3D_2, sdm845_pp_sblk,
+-			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 31),
+-			-1),
+-	PP_BLK("pingpong_6", PINGPONG_6, 0x65800, MERGE_3D_3, sdm845_pp_sblk,
+-			-1,
+-			-1),
+-	PP_BLK("pingpong_7", PINGPONG_7, 0x65c00, MERGE_3D_3, sdm845_pp_sblk,
+-			-1,
+-			-1),
+-};
+-
+-static const struct dpu_pingpong_cfg sm8550_pp[] = {
+-	PP_BLK_DIPHER("pingpong_0", PINGPONG_0, 0x69000, MERGE_3D_0, sc7280_pp_sblk,
+-			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 8),
+-			-1),
+-	PP_BLK_DIPHER("pingpong_1", PINGPONG_1, 0x6a000, MERGE_3D_0, sc7280_pp_sblk,
+-			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 9),
+-			-1),
+-	PP_BLK_DIPHER("pingpong_2", PINGPONG_2, 0x6b000, MERGE_3D_1, sc7280_pp_sblk,
+-			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 10),
+-			-1),
+-	PP_BLK_DIPHER("pingpong_3", PINGPONG_3, 0x6c000, MERGE_3D_1, sc7280_pp_sblk,
+-			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 11),
+-			-1),
+-	PP_BLK_DIPHER("pingpong_4", PINGPONG_4, 0x6d000, MERGE_3D_2, sc7280_pp_sblk,
+-			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 30),
+-			-1),
+-	PP_BLK_DIPHER("pingpong_5", PINGPONG_5, 0x6e000, MERGE_3D_2, sc7280_pp_sblk,
+-			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 31),
+-			-1),
+-	PP_BLK_DIPHER("pingpong_6", PINGPONG_6, 0x66000, MERGE_3D_3, sc7280_pp_sblk,
+-			-1,
+-			-1),
+-	PP_BLK_DIPHER("pingpong_7", PINGPONG_7, 0x66400, MERGE_3D_3, sc7280_pp_sblk,
+-			-1,
+-			-1),
+-};
+-
+ /*************************************************************
+  * MERGE_3D sub blocks config
+  *************************************************************/
+@@ -1804,20 +1579,6 @@ static const struct dpu_merge_3d_cfg sm8350_merge_3d[] = {
+ 	MERGE_3D_BLK("merge_3d_2", MERGE_3D_2, 0x50000),
+ };
+ 
+-static const struct dpu_merge_3d_cfg sm8450_merge_3d[] = {
+-	MERGE_3D_BLK("merge_3d_0", MERGE_3D_0, 0x4e000),
+-	MERGE_3D_BLK("merge_3d_1", MERGE_3D_1, 0x4f000),
+-	MERGE_3D_BLK("merge_3d_2", MERGE_3D_2, 0x50000),
+-	MERGE_3D_BLK("merge_3d_3", MERGE_3D_3, 0x65f00),
+-};
+-
+-static const struct dpu_merge_3d_cfg sm8550_merge_3d[] = {
+-	MERGE_3D_BLK("merge_3d_0", MERGE_3D_0, 0x4e000),
+-	MERGE_3D_BLK("merge_3d_1", MERGE_3D_1, 0x4f000),
+-	MERGE_3D_BLK("merge_3d_2", MERGE_3D_2, 0x50000),
+-	MERGE_3D_BLK("merge_3d_3", MERGE_3D_3, 0x66700),
+-};
+-
+ /*************************************************************
+  * DSC sub blocks config
+  *************************************************************/
+@@ -1845,10 +1606,10 @@ static struct dpu_dsc_cfg sm8150_dsc[] = {
+ /*************************************************************
+  * INTF sub blocks config
+  *************************************************************/
+-#define INTF_BLK(_name, _id, _base, _type, _ctrl_id, _progfetch, _features, _reg, _underrun_bit, _vsync_bit) \
++#define INTF_BLK(_name, _id, _base, _len, _type, _ctrl_id, _progfetch, _features, _reg, _underrun_bit, _vsync_bit) \
+ 	{\
+ 	.name = _name, .id = _id, \
+-	.base = _base, .len = 0x280, \
++	.base = _base, .len = _len, \
+ 	.features = _features, \
+ 	.type = _type, \
+ 	.controller_id = _ctrl_id, \
+@@ -1858,85 +1619,70 @@ static struct dpu_dsc_cfg sm8150_dsc[] = {
+ 	}
+ 
+ static const struct dpu_intf_cfg msm8998_intf[] = {
+-	INTF_BLK("intf_0", INTF_0, 0x6A000, INTF_DP, 0, 25, INTF_SDM845_MASK, MDP_SSPP_TOP0_INTR, 24, 25),
+-	INTF_BLK("intf_1", INTF_1, 0x6A800, INTF_DSI, 0, 25, INTF_SDM845_MASK, MDP_SSPP_TOP0_INTR, 26, 27),
+-	INTF_BLK("intf_2", INTF_2, 0x6B000, INTF_DSI, 1, 25, INTF_SDM845_MASK, MDP_SSPP_TOP0_INTR, 28, 29),
+-	INTF_BLK("intf_3", INTF_3, 0x6B800, INTF_HDMI, 0, 25, INTF_SDM845_MASK, MDP_SSPP_TOP0_INTR, 30, 31),
++	INTF_BLK("intf_0", INTF_0, 0x6A000, 0x280, INTF_DP, 0, 25, INTF_SDM845_MASK, MDP_SSPP_TOP0_INTR, 24, 25),
++	INTF_BLK("intf_1", INTF_1, 0x6A800, 0x280, INTF_DSI, 0, 25, INTF_SDM845_MASK, MDP_SSPP_TOP0_INTR, 26, 27),
++	INTF_BLK("intf_2", INTF_2, 0x6B000, 0x280, INTF_DSI, 1, 25, INTF_SDM845_MASK, MDP_SSPP_TOP0_INTR, 28, 29),
++	INTF_BLK("intf_3", INTF_3, 0x6B800, 0x280, INTF_HDMI, 0, 25, INTF_SDM845_MASK, MDP_SSPP_TOP0_INTR, 30, 31),
+ };
+ 
+ static const struct dpu_intf_cfg sdm845_intf[] = {
+-	INTF_BLK("intf_0", INTF_0, 0x6A000, INTF_DP, 0, 24, INTF_SDM845_MASK, MDP_SSPP_TOP0_INTR, 24, 25),
+-	INTF_BLK("intf_1", INTF_1, 0x6A800, INTF_DSI, 0, 24, INTF_SDM845_MASK, MDP_SSPP_TOP0_INTR, 26, 27),
+-	INTF_BLK("intf_2", INTF_2, 0x6B000, INTF_DSI, 1, 24, INTF_SDM845_MASK, MDP_SSPP_TOP0_INTR, 28, 29),
+-	INTF_BLK("intf_3", INTF_3, 0x6B800, INTF_DP, 1, 24, INTF_SDM845_MASK, MDP_SSPP_TOP0_INTR, 30, 31),
++	INTF_BLK("intf_0", INTF_0, 0x6A000, 0x280, INTF_DP, 0, 24, INTF_SDM845_MASK, MDP_SSPP_TOP0_INTR, 24, 25),
++	INTF_BLK("intf_1", INTF_1, 0x6A800, 0x280, INTF_DSI, 0, 24, INTF_SDM845_MASK, MDP_SSPP_TOP0_INTR, 26, 27),
++	INTF_BLK("intf_2", INTF_2, 0x6B000, 0x280, INTF_DSI, 1, 24, INTF_SDM845_MASK, MDP_SSPP_TOP0_INTR, 28, 29),
++	INTF_BLK("intf_3", INTF_3, 0x6B800, 0x280, INTF_DP, 1, 24, INTF_SDM845_MASK, MDP_SSPP_TOP0_INTR, 30, 31),
+ };
+ 
+ static const struct dpu_intf_cfg sc7180_intf[] = {
+-	INTF_BLK("intf_0", INTF_0, 0x6A000, INTF_DP, MSM_DP_CONTROLLER_0, 24, INTF_SC7180_MASK, MDP_SSPP_TOP0_INTR, 24, 25),
+-	INTF_BLK("intf_1", INTF_1, 0x6A800, INTF_DSI, 0, 24, INTF_SC7180_MASK, MDP_SSPP_TOP0_INTR, 26, 27),
++	INTF_BLK("intf_0", INTF_0, 0x6A000, 0x280, INTF_DP, MSM_DP_CONTROLLER_0, 24, INTF_SC7180_MASK, MDP_SSPP_TOP0_INTR, 24, 25),
++	INTF_BLK("intf_1", INTF_1, 0x6A800, 0x2c0, INTF_DSI, 0, 24, INTF_SC7180_MASK, MDP_SSPP_TOP0_INTR, 26, 27),
+ };
+ 
+ static const struct dpu_intf_cfg sm8150_intf[] = {
+-	INTF_BLK("intf_0", INTF_0, 0x6A000, INTF_DP, 0, 24, INTF_SC7180_MASK, MDP_SSPP_TOP0_INTR, 24, 25),
+-	INTF_BLK("intf_1", INTF_1, 0x6A800, INTF_DSI, 0, 24, INTF_SC7180_MASK, MDP_SSPP_TOP0_INTR, 26, 27),
+-	INTF_BLK("intf_2", INTF_2, 0x6B000, INTF_DSI, 1, 24, INTF_SC7180_MASK, MDP_SSPP_TOP0_INTR, 28, 29),
+-	INTF_BLK("intf_3", INTF_3, 0x6B800, INTF_DP, 1, 24, INTF_SC7180_MASK, MDP_SSPP_TOP0_INTR, 30, 31),
++	INTF_BLK("intf_0", INTF_0, 0x6A000, 0x280, INTF_DP, 0, 24, INTF_SC7180_MASK, MDP_SSPP_TOP0_INTR, 24, 25),
++	INTF_BLK("intf_1", INTF_1, 0x6A800, 0x2bc, INTF_DSI, 0, 24, INTF_SC7180_MASK, MDP_SSPP_TOP0_INTR, 26, 27),
++	INTF_BLK("intf_2", INTF_2, 0x6B000, 0x2bc, INTF_DSI, 1, 24, INTF_SC7180_MASK, MDP_SSPP_TOP0_INTR, 28, 29),
++	INTF_BLK("intf_3", INTF_3, 0x6B800, 0x280, INTF_DP, 1, 24, INTF_SC7180_MASK, MDP_SSPP_TOP0_INTR, 30, 31),
+ };
+ 
+ static const struct dpu_intf_cfg sc7280_intf[] = {
+-	INTF_BLK("intf_0", INTF_0, 0x34000, INTF_DP, MSM_DP_CONTROLLER_0, 24, INTF_SC7280_MASK, MDP_SSPP_TOP0_INTR, 24, 25),
+-	INTF_BLK("intf_1", INTF_1, 0x35000, INTF_DSI, 0, 24, INTF_SC7280_MASK, MDP_SSPP_TOP0_INTR, 26, 27),
+-	INTF_BLK("intf_5", INTF_5, 0x39000, INTF_DP, MSM_DP_CONTROLLER_1, 24, INTF_SC7280_MASK, MDP_SSPP_TOP0_INTR, 22, 23),
++	INTF_BLK("intf_0", INTF_0, 0x34000, 0x280, INTF_DP, MSM_DP_CONTROLLER_0, 24, INTF_SC7280_MASK, MDP_SSPP_TOP0_INTR, 24, 25),
++	INTF_BLK("intf_1", INTF_1, 0x35000, 0x2c4, INTF_DSI, 0, 24, INTF_SC7280_MASK, MDP_SSPP_TOP0_INTR, 26, 27),
++	INTF_BLK("intf_5", INTF_5, 0x39000, 0x280, INTF_DP, MSM_DP_CONTROLLER_1, 24, INTF_SC7280_MASK, MDP_SSPP_TOP0_INTR, 22, 23),
+ };
+ 
+ static const struct dpu_intf_cfg sm8350_intf[] = {
+-	INTF_BLK("intf_0", INTF_0, 0x34000, INTF_DP, MSM_DP_CONTROLLER_0, 24, INTF_SC7280_MASK, MDP_SSPP_TOP0_INTR, 24, 25),
+-	INTF_BLK("intf_1", INTF_1, 0x35000, INTF_DSI, 0, 24, INTF_SC7280_MASK, MDP_SSPP_TOP0_INTR, 26, 27),
+-	INTF_BLK("intf_2", INTF_2, 0x36000, INTF_DSI, 1, 24, INTF_SC7280_MASK, MDP_SSPP_TOP0_INTR, 28, 29),
+-	INTF_BLK("intf_3", INTF_3, 0x37000, INTF_DP, MSM_DP_CONTROLLER_1, 24, INTF_SC7280_MASK, MDP_SSPP_TOP0_INTR, 30, 31),
++	INTF_BLK("intf_0", INTF_0, 0x34000, 0x280, INTF_DP, MSM_DP_CONTROLLER_0, 24, INTF_SC7280_MASK, MDP_SSPP_TOP0_INTR, 24, 25),
++	INTF_BLK("intf_1", INTF_1, 0x35000, 0x2c4, INTF_DSI, 0, 24, INTF_SC7280_MASK, MDP_SSPP_TOP0_INTR, 26, 27),
++	INTF_BLK("intf_2", INTF_2, 0x36000, 0x2c4, INTF_DSI, 1, 24, INTF_SC7280_MASK, MDP_SSPP_TOP0_INTR, 28, 29),
++	INTF_BLK("intf_3", INTF_3, 0x37000, 0x280, INTF_DP, MSM_DP_CONTROLLER_1, 24, INTF_SC7280_MASK, MDP_SSPP_TOP0_INTR, 30, 31),
+ };
+ 
+ static const struct dpu_intf_cfg sc8180x_intf[] = {
+-	INTF_BLK("intf_0", INTF_0, 0x6A000, INTF_DP, MSM_DP_CONTROLLER_0, 24, INTF_SC7180_MASK, MDP_SSPP_TOP0_INTR, 24, 25),
+-	INTF_BLK("intf_1", INTF_1, 0x6A800, INTF_DSI, 0, 24, INTF_SC7180_MASK, MDP_SSPP_TOP0_INTR, 26, 27),
+-	INTF_BLK("intf_2", INTF_2, 0x6B000, INTF_DSI, 1, 24, INTF_SC7180_MASK, MDP_SSPP_TOP0_INTR, 28, 29),
++	INTF_BLK("intf_0", INTF_0, 0x6A000, 0x280, INTF_DP, MSM_DP_CONTROLLER_0, 24, INTF_SC7180_MASK, MDP_SSPP_TOP0_INTR, 24, 25),
++	INTF_BLK("intf_1", INTF_1, 0x6A800, 0x2bc, INTF_DSI, 0, 24, INTF_SC7180_MASK, MDP_SSPP_TOP0_INTR, 26, 27),
++	INTF_BLK("intf_2", INTF_2, 0x6B000, 0x2bc, INTF_DSI, 1, 24, INTF_SC7180_MASK, MDP_SSPP_TOP0_INTR, 28, 29),
+ 	/* INTF_3 is for MST, wired to INTF_DP 0 and 1, use dummy index until this is supported */
+-	INTF_BLK("intf_3", INTF_3, 0x6B800, INTF_DP, 999, 24, INTF_SC7180_MASK, MDP_SSPP_TOP0_INTR, 30, 31),
+-	INTF_BLK("intf_4", INTF_4, 0x6C000, INTF_DP, MSM_DP_CONTROLLER_1, 24, INTF_SC7180_MASK, MDP_SSPP_TOP0_INTR, 20, 21),
+-	INTF_BLK("intf_5", INTF_5, 0x6C800, INTF_DP, MSM_DP_CONTROLLER_2, 24, INTF_SC7180_MASK, MDP_SSPP_TOP0_INTR, 22, 23),
++	INTF_BLK("intf_3", INTF_3, 0x6B800, 0x280, INTF_DP, 999, 24, INTF_SC7180_MASK, MDP_SSPP_TOP0_INTR, 30, 31),
++	INTF_BLK("intf_4", INTF_4, 0x6C000, 0x280, INTF_DP, MSM_DP_CONTROLLER_1, 24, INTF_SC7180_MASK, MDP_SSPP_TOP0_INTR, 20, 21),
++	INTF_BLK("intf_5", INTF_5, 0x6C800, 0x280, INTF_DP, MSM_DP_CONTROLLER_2, 24, INTF_SC7180_MASK, MDP_SSPP_TOP0_INTR, 22, 23),
+ };
+ 
+ /* TODO: INTF 3, 8 and 7 are used for MST, marked as INTF_NONE for now */
+ static const struct dpu_intf_cfg sc8280xp_intf[] = {
+-	INTF_BLK("intf_0", INTF_0, 0x34000, INTF_DP, MSM_DP_CONTROLLER_0, 24, INTF_SC7280_MASK, MDP_SSPP_TOP0_INTR, 24, 25),
+-	INTF_BLK("intf_1", INTF_1, 0x35000, INTF_DSI, 0, 24, INTF_SC7280_MASK, MDP_SSPP_TOP0_INTR, 26, 27),
+-	INTF_BLK("intf_2", INTF_2, 0x36000, INTF_DSI, 1, 24, INTF_SC7280_MASK, MDP_SSPP_TOP0_INTR, 28, 29),
+-	INTF_BLK("intf_3", INTF_3, 0x37000, INTF_NONE, MSM_DP_CONTROLLER_0, 24, INTF_SC7280_MASK, MDP_SSPP_TOP0_INTR, 30, 31),
+-	INTF_BLK("intf_4", INTF_4, 0x38000, INTF_DP, MSM_DP_CONTROLLER_1, 24, INTF_SC7280_MASK, MDP_SSPP_TOP0_INTR, 20, 21),
+-	INTF_BLK("intf_5", INTF_5, 0x39000, INTF_DP, MSM_DP_CONTROLLER_3, 24, INTF_SC7280_MASK, MDP_SSPP_TOP0_INTR, 22, 23),
+-	INTF_BLK("intf_6", INTF_6, 0x3a000, INTF_DP, MSM_DP_CONTROLLER_2, 24, INTF_SC7280_MASK, MDP_SSPP_TOP0_INTR, 16, 17),
+-	INTF_BLK("intf_7", INTF_7, 0x3b000, INTF_NONE, MSM_DP_CONTROLLER_2, 24, INTF_SC7280_MASK, MDP_SSPP_TOP0_INTR, 18, 19),
+-	INTF_BLK("intf_8", INTF_8, 0x3c000, INTF_NONE, MSM_DP_CONTROLLER_1, 24, INTF_SC7280_MASK, MDP_SSPP_TOP0_INTR, 12, 13),
++	INTF_BLK("intf_0", INTF_0, 0x34000, 0x280, INTF_DP, MSM_DP_CONTROLLER_0, 24, INTF_SC7280_MASK, MDP_SSPP_TOP0_INTR, 24, 25),
++	INTF_BLK("intf_1", INTF_1, 0x35000, 0x300, INTF_DSI, 0, 24, INTF_SC7280_MASK, MDP_SSPP_TOP0_INTR, 26, 27),
++	INTF_BLK("intf_2", INTF_2, 0x36000, 0x300, INTF_DSI, 1, 24, INTF_SC7280_MASK, MDP_SSPP_TOP0_INTR, 28, 29),
++	INTF_BLK("intf_3", INTF_3, 0x37000, 0x280, INTF_DP, MSM_DP_CONTROLLER_0, 24, INTF_SC7280_MASK, MDP_SSPP_TOP0_INTR, 30, 31),
++	INTF_BLK("intf_4", INTF_4, 0x38000, 0x280, INTF_DP, MSM_DP_CONTROLLER_1, 24, INTF_SC7280_MASK, MDP_SSPP_TOP0_INTR, 20, 21),
++	INTF_BLK("intf_5", INTF_5, 0x39000, 0x280, INTF_DP, MSM_DP_CONTROLLER_3, 24, INTF_SC7280_MASK, MDP_SSPP_TOP0_INTR, 22, 23),
++	INTF_BLK("intf_6", INTF_6, 0x3a000, 0x280, INTF_DP, MSM_DP_CONTROLLER_2, 24, INTF_SC7280_MASK, MDP_SSPP_TOP0_INTR, 16, 17),
++	INTF_BLK("intf_7", INTF_7, 0x3b000, 0x280, INTF_DP, MSM_DP_CONTROLLER_2, 24, INTF_SC7280_MASK, MDP_SSPP_TOP0_INTR, 18, 19),
++	INTF_BLK("intf_8", INTF_8, 0x3c000, 0x280, INTF_DP, MSM_DP_CONTROLLER_1, 24, INTF_SC7280_MASK, MDP_SSPP_TOP0_INTR, 12, 13),
+ };
+ 
+ static const struct dpu_intf_cfg qcm2290_intf[] = {
+-	INTF_BLK("intf_0", INTF_0, 0x00000, INTF_NONE, 0, 0, 0, 0, 0, 0),
+-	INTF_BLK("intf_1", INTF_1, 0x6A800, INTF_DSI, 0, 24, INTF_SC7180_MASK, MDP_SSPP_TOP0_INTR, 26, 27),
+-};
+-
+-static const struct dpu_intf_cfg sm8450_intf[] = {
+-	INTF_BLK("intf_0", INTF_0, 0x34000, INTF_DP, MSM_DP_CONTROLLER_0, 24, INTF_SC7280_MASK, MDP_SSPP_TOP0_INTR, 24, 25),
+-	INTF_BLK("intf_1", INTF_1, 0x35000, INTF_DSI, 0, 24, INTF_SC7280_MASK, MDP_SSPP_TOP0_INTR, 26, 27),
+-	INTF_BLK("intf_2", INTF_2, 0x36000, INTF_DSI, 1, 24, INTF_SC7280_MASK, MDP_SSPP_TOP0_INTR, 28, 29),
+-	INTF_BLK("intf_3", INTF_3, 0x37000, INTF_DP, MSM_DP_CONTROLLER_1, 24, INTF_SC7280_MASK, MDP_SSPP_TOP0_INTR, 30, 31),
+-};
+-
+-static const struct dpu_intf_cfg sm8550_intf[] = {
+-	INTF_BLK("intf_0", INTF_0, 0x34000, INTF_DP, MSM_DP_CONTROLLER_0, 24, INTF_SC7280_MASK, MDP_SSPP_TOP0_INTR, 24, 25),
+-	/* TODO TE sub-blocks for intf1 & intf2 */
+-	INTF_BLK("intf_1", INTF_1, 0x35000, INTF_DSI, 0, 24, INTF_SC7280_MASK, MDP_SSPP_TOP0_INTR, 26, 27),
+-	INTF_BLK("intf_2", INTF_2, 0x36000, INTF_DSI, 1, 24, INTF_SC7280_MASK, MDP_SSPP_TOP0_INTR, 28, 29),
+-	INTF_BLK("intf_3", INTF_3, 0x37000, INTF_DP, MSM_DP_CONTROLLER_1, 24, INTF_SC7280_MASK, MDP_SSPP_TOP0_INTR, 30, 31),
++	INTF_BLK("intf_0", INTF_0, 0x00000, 0x280, INTF_DP, 0, 0, 0, 0, 0, 0),
++	INTF_BLK("intf_1", INTF_1, 0x6A800, 0x2c0, INTF_DSI, 0, 24, INTF_SC7180_MASK, MDP_SSPP_TOP0_INTR, 26, 27),
+ };
+ 
+ /*************************************************************
+@@ -2407,36 +2153,6 @@ static const struct dpu_perf_cfg sm8250_perf_data = {
+ 	.bw_inefficiency_factor = 120,
+ };
+ 
+-static const struct dpu_perf_cfg sm8450_perf_data = {
+-	.max_bw_low = 13600000,
+-	.max_bw_high = 18200000,
+-	.min_core_ib = 2500000,
+-	.min_llcc_ib = 0,
+-	.min_dram_ib = 800000,
+-	.min_prefill_lines = 35,
+-	/* FIXME: lut tables */
+-	.danger_lut_tbl = {0x3ffff, 0x3ffff, 0x0},
+-	.safe_lut_tbl = {0xfe00, 0xfe00, 0xffff},
+-	.qos_lut_tbl = {
+-		{.nentry = ARRAY_SIZE(sc7180_qos_linear),
+-		.entries = sc7180_qos_linear
+-		},
+-		{.nentry = ARRAY_SIZE(sc7180_qos_macrotile),
+-		.entries = sc7180_qos_macrotile
+-		},
+-		{.nentry = ARRAY_SIZE(sc7180_qos_nrt),
+-		.entries = sc7180_qos_nrt
+-		},
+-		/* TODO: macrotile-qseed is different from macrotile */
+-	},
+-	.cdp_cfg = {
+-		{.rd_enable = 1, .wr_enable = 1},
+-		{.rd_enable = 1, .wr_enable = 0}
+-	},
+-	.clk_inefficiency_factor = 105,
+-	.bw_inefficiency_factor = 120,
+-};
+-
+ static const struct dpu_perf_cfg sc7280_perf_data = {
+ 	.max_bw_low = 4700000,
+ 	.max_bw_high = 8800000,
+@@ -2522,6 +2238,7 @@ static const struct dpu_perf_cfg qcm2290_perf_data = {
+ 
+ static const struct dpu_mdss_cfg msm8998_dpu_cfg = {
+ 	.caps = &msm8998_dpu_caps,
++	.ubwc = &msm8998_ubwc_cfg,
+ 	.mdp_count = ARRAY_SIZE(msm8998_mdp),
+ 	.mdp = msm8998_mdp,
+ 	.ctl_count = ARRAY_SIZE(msm8998_ctl),
+@@ -2545,6 +2262,7 @@ static const struct dpu_mdss_cfg msm8998_dpu_cfg = {
+ 
+ static const struct dpu_mdss_cfg sdm845_dpu_cfg = {
+ 	.caps = &sdm845_dpu_caps,
++	.ubwc = &sdm845_ubwc_cfg,
+ 	.mdp_count = ARRAY_SIZE(sdm845_mdp),
+ 	.mdp = sdm845_mdp,
+ 	.ctl_count = ARRAY_SIZE(sdm845_ctl),
+@@ -2569,6 +2287,7 @@ static const struct dpu_mdss_cfg sdm845_dpu_cfg = {
+ 
+ static const struct dpu_mdss_cfg sc7180_dpu_cfg = {
+ 	.caps = &sc7180_dpu_caps,
++	.ubwc = &sc7180_ubwc_cfg,
+ 	.mdp_count = ARRAY_SIZE(sc7180_mdp),
+ 	.mdp = sc7180_mdp,
+ 	.ctl_count = ARRAY_SIZE(sc7180_ctl),
+@@ -2595,6 +2314,7 @@ static const struct dpu_mdss_cfg sc7180_dpu_cfg = {
+ 
+ static const struct dpu_mdss_cfg sm6115_dpu_cfg = {
+ 	.caps = &sm6115_dpu_caps,
++	.ubwc = &sm6115_ubwc_cfg,
+ 	.mdp_count = ARRAY_SIZE(sm6115_mdp),
+ 	.mdp = sm6115_mdp,
+ 	.ctl_count = ARRAY_SIZE(qcm2290_ctl),
+@@ -2617,6 +2337,7 @@ static const struct dpu_mdss_cfg sm6115_dpu_cfg = {
+ 
+ static const struct dpu_mdss_cfg sm8150_dpu_cfg = {
+ 	.caps = &sm8150_dpu_caps,
++	.ubwc = &sm8150_ubwc_cfg,
+ 	.mdp_count = ARRAY_SIZE(sdm845_mdp),
+ 	.mdp = sdm845_mdp,
+ 	.ctl_count = ARRAY_SIZE(sm8150_ctl),
+@@ -2645,6 +2366,7 @@ static const struct dpu_mdss_cfg sm8150_dpu_cfg = {
+ 
+ static const struct dpu_mdss_cfg sc8180x_dpu_cfg = {
+ 	.caps = &sc8180x_dpu_caps,
++	.ubwc = &sc8180x_ubwc_cfg,
+ 	.mdp_count = ARRAY_SIZE(sc8180x_mdp),
+ 	.mdp = sc8180x_mdp,
+ 	.ctl_count = ARRAY_SIZE(sm8150_ctl),
+@@ -2669,6 +2391,7 @@ static const struct dpu_mdss_cfg sc8180x_dpu_cfg = {
+ 
+ static const struct dpu_mdss_cfg sc8280xp_dpu_cfg = {
+ 	.caps = &sc8280xp_dpu_caps,
++	.ubwc = &sc8280xp_ubwc_cfg,
+ 	.mdp_count = ARRAY_SIZE(sc8280xp_mdp),
+ 	.mdp = sc8280xp_mdp,
+ 	.ctl_count = ARRAY_SIZE(sc8280xp_ctl),
+@@ -2695,6 +2418,7 @@ static const struct dpu_mdss_cfg sc8280xp_dpu_cfg = {
+ 
+ static const struct dpu_mdss_cfg sm8250_dpu_cfg = {
+ 	.caps = &sm8250_dpu_caps,
++	.ubwc = &sm8250_ubwc_cfg,
+ 	.mdp_count = ARRAY_SIZE(sm8250_mdp),
+ 	.mdp = sm8250_mdp,
+ 	.ctl_count = ARRAY_SIZE(sm8150_ctl),
+@@ -2725,6 +2449,7 @@ static const struct dpu_mdss_cfg sm8250_dpu_cfg = {
+ 
+ static const struct dpu_mdss_cfg sm8350_dpu_cfg = {
+ 	.caps = &sm8350_dpu_caps,
++	.ubwc = &sm8350_ubwc_cfg,
+ 	.mdp_count = ARRAY_SIZE(sm8350_mdp),
+ 	.mdp = sm8350_mdp,
+ 	.ctl_count = ARRAY_SIZE(sm8350_ctl),
+@@ -2749,60 +2474,9 @@ static const struct dpu_mdss_cfg sm8350_dpu_cfg = {
+ 	.mdss_irqs = IRQ_SM8350_MASK,
+ };
+ 
+-static const struct dpu_mdss_cfg sm8450_dpu_cfg = {
+-	.caps = &sm8450_dpu_caps,
+-	.mdp_count = ARRAY_SIZE(sm8450_mdp),
+-	.mdp = sm8450_mdp,
+-	.ctl_count = ARRAY_SIZE(sm8450_ctl),
+-	.ctl = sm8450_ctl,
+-	.sspp_count = ARRAY_SIZE(sm8450_sspp),
+-	.sspp = sm8450_sspp,
+-	.mixer_count = ARRAY_SIZE(sm8150_lm),
+-	.mixer = sm8150_lm,
+-	.dspp_count = ARRAY_SIZE(sm8150_dspp),
+-	.dspp = sm8150_dspp,
+-	.pingpong_count = ARRAY_SIZE(sm8450_pp),
+-	.pingpong = sm8450_pp,
+-	.merge_3d_count = ARRAY_SIZE(sm8450_merge_3d),
+-	.merge_3d = sm8450_merge_3d,
+-	.intf_count = ARRAY_SIZE(sm8450_intf),
+-	.intf = sm8450_intf,
+-	.vbif_count = ARRAY_SIZE(sdm845_vbif),
+-	.vbif = sdm845_vbif,
+-	.reg_dma_count = 1,
+-	.dma_cfg = &sm8450_regdma,
+-	.perf = &sm8450_perf_data,
+-	.mdss_irqs = IRQ_SM8450_MASK,
+-};
+-
+-static const struct dpu_mdss_cfg sm8550_dpu_cfg = {
+-	.caps = &sm8550_dpu_caps,
+-	.mdp_count = ARRAY_SIZE(sm8550_mdp),
+-	.mdp = sm8550_mdp,
+-	.ctl_count = ARRAY_SIZE(sm8550_ctl),
+-	.ctl = sm8550_ctl,
+-	.sspp_count = ARRAY_SIZE(sm8550_sspp),
+-	.sspp = sm8550_sspp,
+-	.mixer_count = ARRAY_SIZE(sm8150_lm),
+-	.mixer = sm8150_lm,
+-	.dspp_count = ARRAY_SIZE(sm8150_dspp),
+-	.dspp = sm8150_dspp,
+-	.pingpong_count = ARRAY_SIZE(sm8550_pp),
+-	.pingpong = sm8550_pp,
+-	.merge_3d_count = ARRAY_SIZE(sm8550_merge_3d),
+-	.merge_3d = sm8550_merge_3d,
+-	.intf_count = ARRAY_SIZE(sm8550_intf),
+-	.intf = sm8550_intf,
+-	.vbif_count = ARRAY_SIZE(sdm845_vbif),
+-	.vbif = sdm845_vbif,
+-	.reg_dma_count = 1,
+-	.dma_cfg = &sm8450_regdma,
+-	.perf = &sm8450_perf_data,
+-	.mdss_irqs = IRQ_SM8450_MASK,
+-};
+-
+ static const struct dpu_mdss_cfg sc7280_dpu_cfg = {
+ 	.caps = &sc7280_dpu_caps,
++	.ubwc = &sc7280_ubwc_cfg,
+ 	.mdp_count = ARRAY_SIZE(sc7280_mdp),
+ 	.mdp = sc7280_mdp,
+ 	.ctl_count = ARRAY_SIZE(sc7280_ctl),
+@@ -2825,6 +2499,7 @@ static const struct dpu_mdss_cfg sc7280_dpu_cfg = {
+ 
+ static const struct dpu_mdss_cfg qcm2290_dpu_cfg = {
+ 	.caps = &qcm2290_dpu_caps,
++	.ubwc = &qcm2290_ubwc_cfg,
+ 	.mdp_count = ARRAY_SIZE(qcm2290_mdp),
+ 	.mdp = qcm2290_mdp,
+ 	.ctl_count = ARRAY_SIZE(qcm2290_ctl),
+@@ -2845,6 +2520,10 @@ static const struct dpu_mdss_cfg qcm2290_dpu_cfg = {
+ 	.mdss_irqs = IRQ_SC7180_MASK,
+ };
+ 
++#include "catalog/dpu_8_1_sm8450.h"
++
++#include "catalog/dpu_9_0_sm8550.h"
++
+ static const struct dpu_mdss_hw_cfg_handler cfg_handler[] = {
+ 	{ .hw_rev = DPU_HW_VER_300, .dpu_cfg = &msm8998_dpu_cfg},
+ 	{ .hw_rev = DPU_HW_VER_301, .dpu_cfg = &msm8998_dpu_cfg},
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.h
+index 2c5bafacd609c..5f96dd8def092 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.h
+@@ -394,8 +394,6 @@ struct dpu_rotation_cfg {
+  * @max_mixer_blendstages max layer mixer blend stages or
+  *                       supported z order
+  * @qseed_type         qseed2 or qseed3 support.
+- * @smart_dma_rev      Supported version of SmartDMA feature.
+- * @ubwc_version       UBWC feature version (0x0 for not supported)
+  * @has_src_split      source split feature status
+  * @has_dim_layer      dim layer feature status
+  * @has_idle_pc        indicate if idle power collapse feature is supported
+@@ -409,8 +407,6 @@ struct dpu_caps {
+ 	u32 max_mixer_width;
+ 	u32 max_mixer_blendstages;
+ 	u32 qseed_type;
+-	u32 smart_dma_rev;
+-	u32 ubwc_version;
+ 	bool has_src_split;
+ 	bool has_dim_layer;
+ 	bool has_idle_pc;
+@@ -539,15 +535,24 @@ struct dpu_clk_ctrl_reg {
+  * @id:                index identifying this block
+  * @base:              register base offset to mdss
+  * @features           bit mask identifying sub-blocks/features
+- * @highest_bank_bit:  UBWC parameter
+- * @ubwc_swizzle:      ubwc default swizzle setting
+  * @clk_ctrls          clock control register definition
+  */
+ struct dpu_mdp_cfg {
+ 	DPU_HW_BLK_INFO;
++	struct dpu_clk_ctrl_reg clk_ctrls[DPU_CLK_CTRL_MAX];
++};
++
++/**
++ * struct dpu_ubwc_cfg - UBWC and memory configuration
++ *
++ * @ubwc_version       UBWC feature version (0x0 for not supported)
++ * @highest_bank_bit:  UBWC parameter
++ * @ubwc_swizzle:      ubwc default swizzle setting
++ */
++struct dpu_ubwc_cfg {
++	u32 ubwc_version;
+ 	u32 highest_bank_bit;
+ 	u32 ubwc_swizzle;
+-	struct dpu_clk_ctrl_reg clk_ctrls[DPU_CLK_CTRL_MAX];
+ };
+ 
+ /* struct dpu_ctl_cfg : MDP CTL instance info
+@@ -849,6 +854,8 @@ struct dpu_perf_cfg {
+ struct dpu_mdss_cfg {
+ 	const struct dpu_caps *caps;
+ 
++	const struct dpu_ubwc_cfg *ubwc;
++
+ 	u32 mdp_count;
+ 	const struct dpu_mdp_cfg *mdp;
+ 
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_interrupts.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_interrupts.c
+index 53326f25e40ef..17f3e7e4f1941 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_interrupts.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_interrupts.c
+@@ -15,7 +15,7 @@
+ 
+ /*
+  * Register offsets in MDSS register file for the interrupt registers
+- * w.r.t. to the MDP base
++ * w.r.t. the MDP base
+  */
+ #define MDP_SSPP_TOP0_OFF		0x0
+ #define MDP_INTF_0_OFF			0x6A000
+@@ -24,20 +24,23 @@
+ #define MDP_INTF_3_OFF			0x6B800
+ #define MDP_INTF_4_OFF			0x6C000
+ #define MDP_INTF_5_OFF			0x6C800
++#define INTF_INTR_EN			0x1c0
++#define INTF_INTR_STATUS		0x1c4
++#define INTF_INTR_CLEAR			0x1c8
+ #define MDP_AD4_0_OFF			0x7C000
+ #define MDP_AD4_1_OFF			0x7D000
+ #define MDP_AD4_INTR_EN_OFF		0x41c
+ #define MDP_AD4_INTR_CLEAR_OFF		0x424
+ #define MDP_AD4_INTR_STATUS_OFF		0x420
+-#define MDP_INTF_0_OFF_REV_7xxx             0x34000
+-#define MDP_INTF_1_OFF_REV_7xxx             0x35000
+-#define MDP_INTF_2_OFF_REV_7xxx             0x36000
+-#define MDP_INTF_3_OFF_REV_7xxx             0x37000
+-#define MDP_INTF_4_OFF_REV_7xxx             0x38000
+-#define MDP_INTF_5_OFF_REV_7xxx             0x39000
+-#define MDP_INTF_6_OFF_REV_7xxx             0x3a000
+-#define MDP_INTF_7_OFF_REV_7xxx             0x3b000
+-#define MDP_INTF_8_OFF_REV_7xxx             0x3c000
++#define MDP_INTF_0_OFF_REV_7xxx		0x34000
++#define MDP_INTF_1_OFF_REV_7xxx		0x35000
++#define MDP_INTF_2_OFF_REV_7xxx		0x36000
++#define MDP_INTF_3_OFF_REV_7xxx		0x37000
++#define MDP_INTF_4_OFF_REV_7xxx		0x38000
++#define MDP_INTF_5_OFF_REV_7xxx		0x39000
++#define MDP_INTF_6_OFF_REV_7xxx		0x3a000
++#define MDP_INTF_7_OFF_REV_7xxx		0x3b000
++#define MDP_INTF_8_OFF_REV_7xxx		0x3c000
+ 
+ /**
+  * struct dpu_intr_reg - array of DPU register sets
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c
+index 7ce66bf3f4c8d..b2a94b9a3e987 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c
+@@ -56,11 +56,6 @@
+ #define   INTF_TPG_RGB_MAPPING          0x11C
+ #define   INTF_PROG_FETCH_START         0x170
+ #define   INTF_PROG_ROT_START           0x174
+-
+-#define   INTF_FRAME_LINE_COUNT_EN      0x0A8
+-#define   INTF_FRAME_COUNT              0x0AC
+-#define   INTF_LINE_COUNT               0x0B0
+-
+ #define   INTF_MUX                      0x25C
+ 
+ #define INTF_CFG_ACTIVE_H_EN	BIT(29)
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_sspp.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_sspp.c
+index 4246ab0b3beea..a82113b7d632a 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_sspp.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_sspp.c
+@@ -307,25 +307,25 @@ static void dpu_hw_sspp_setup_format(struct dpu_hw_pipe *ctx,
+ 		src_format |= (fmt->fetch_mode & 3) << 30; /*FRAME_FORMAT */
+ 		DPU_REG_WRITE(c, SSPP_FETCH_CONFIG,
+ 			DPU_FETCH_CONFIG_RESET_VALUE |
+-			ctx->mdp->highest_bank_bit << 18);
+-		switch (ctx->catalog->caps->ubwc_version) {
++			ctx->ubwc->highest_bank_bit << 18);
++		switch (ctx->ubwc->ubwc_version) {
+ 		case DPU_HW_UBWC_VER_10:
+ 			fast_clear = fmt->alpha_enable ? BIT(31) : 0;
+ 			DPU_REG_WRITE(c, SSPP_UBWC_STATIC_CTRL,
+-					fast_clear | (ctx->mdp->ubwc_swizzle & 0x1) |
++					fast_clear | (ctx->ubwc->ubwc_swizzle & 0x1) |
+ 					BIT(8) |
+-					(ctx->mdp->highest_bank_bit << 4));
++					(ctx->ubwc->highest_bank_bit << 4));
+ 			break;
+ 		case DPU_HW_UBWC_VER_20:
+ 			fast_clear = fmt->alpha_enable ? BIT(31) : 0;
+ 			DPU_REG_WRITE(c, SSPP_UBWC_STATIC_CTRL,
+-					fast_clear | (ctx->mdp->ubwc_swizzle) |
+-					(ctx->mdp->highest_bank_bit << 4));
++					fast_clear | (ctx->ubwc->ubwc_swizzle) |
++					(ctx->ubwc->highest_bank_bit << 4));
+ 			break;
+ 		case DPU_HW_UBWC_VER_30:
+ 			DPU_REG_WRITE(c, SSPP_UBWC_STATIC_CTRL,
+-					BIT(30) | (ctx->mdp->ubwc_swizzle) |
+-					(ctx->mdp->highest_bank_bit << 4));
++					BIT(30) | (ctx->ubwc->ubwc_swizzle) |
++					(ctx->ubwc->highest_bank_bit << 4));
+ 			break;
+ 		case DPU_HW_UBWC_VER_40:
+ 			DPU_REG_WRITE(c, SSPP_UBWC_STATIC_CTRL,
+@@ -804,7 +804,7 @@ struct dpu_hw_pipe *dpu_hw_sspp_init(enum dpu_sspp idx,
+ 
+ 	/* Assign ops */
+ 	hw_pipe->catalog = catalog;
+-	hw_pipe->mdp = &catalog->mdp[0];
++	hw_pipe->ubwc = catalog->ubwc;
+ 	hw_pipe->idx = idx;
+ 	hw_pipe->cap = cfg;
+ 	_setup_layer_ops(hw_pipe, hw_pipe->cap->features);
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_sspp.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_sspp.h
+index 0c95b7e64f6c2..cc435fa58f382 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_sspp.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_sspp.h
+@@ -351,7 +351,7 @@ struct dpu_hw_sspp_ops {
+  * @base: hardware block base structure
+  * @hw: block hardware details
+  * @catalog: back pointer to catalog
+- * @mdp: pointer to associated mdp portion of the catalog
++ * @ubwc: ubwc configuration data
+  * @idx: pipe index
+  * @cap: pointer to layer_cfg
+  * @ops: pointer to operations possible for this pipe
+@@ -360,7 +360,7 @@ struct dpu_hw_pipe {
+ 	struct dpu_hw_blk base;
+ 	struct dpu_hw_blk_reg_map hw;
+ 	const struct dpu_mdss_cfg *catalog;
+-	const struct dpu_mdp_cfg *mdp;
++	const struct dpu_ubwc_cfg *ubwc;
+ 
+ 	/* Pipe */
+ 	enum dpu_sspp idx;
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_wb.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_wb.c
+index 2d28afdf860ef..a3e413d277175 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_wb.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_wb.c
+@@ -61,6 +61,7 @@ static const struct dpu_wb_cfg *_wb_offset(enum dpu_wb wb,
+ 	for (i = 0; i < m->wb_count; i++) {
+ 		if (wb == m->wb[i].id) {
+ 			b->blk_addr = addr + m->wb[i].base;
++			b->log_mask = DPU_DBG_MASK_WB;
+ 			return &m->wb[i];
+ 		}
+ 	}
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hwio.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_hwio.h
+index feb9a729844a3..5acd5683d25a4 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hwio.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hwio.h
+@@ -21,9 +21,6 @@
+ #define HIST_INTR_EN                    0x01c
+ #define HIST_INTR_STATUS                0x020
+ #define HIST_INTR_CLEAR                 0x024
+-#define INTF_INTR_EN                    0x1C0
+-#define INTF_INTR_STATUS                0x1C4
+-#define INTF_INTR_CLEAR                 0x1C8
+ #define SPLIT_DISPLAY_EN                0x2F4
+ #define SPLIT_DISPLAY_UPPER_PIPE_CTRL   0x2F8
+ #define DSPP_IGC_COLOR0_RAM_LUTN        0x300
+diff --git a/drivers/gpu/drm/msm/dp/dp_audio.c b/drivers/gpu/drm/msm/dp/dp_audio.c
+index 6666783e1468e..1245c7aa49df8 100644
+--- a/drivers/gpu/drm/msm/dp/dp_audio.c
++++ b/drivers/gpu/drm/msm/dp/dp_audio.c
+@@ -593,6 +593,18 @@ static struct hdmi_codec_pdata codec_data = {
+ 	.i2s = 1,
+ };
+ 
++void dp_unregister_audio_driver(struct device *dev, struct dp_audio *dp_audio)
++{
++	struct dp_audio_private *audio_priv;
++
++	audio_priv = container_of(dp_audio, struct dp_audio_private, dp_audio);
++
++	if (audio_priv->audio_pdev) {
++		platform_device_unregister(audio_priv->audio_pdev);
++		audio_priv->audio_pdev = NULL;
++	}
++}
++
+ int dp_register_audio_driver(struct device *dev,
+ 		struct dp_audio *dp_audio)
+ {
+diff --git a/drivers/gpu/drm/msm/dp/dp_audio.h b/drivers/gpu/drm/msm/dp/dp_audio.h
+index 84e5f4a5d26ba..4ab78880af829 100644
+--- a/drivers/gpu/drm/msm/dp/dp_audio.h
++++ b/drivers/gpu/drm/msm/dp/dp_audio.h
+@@ -53,6 +53,8 @@ struct dp_audio *dp_audio_get(struct platform_device *pdev,
+ int dp_register_audio_driver(struct device *dev,
+ 		struct dp_audio *dp_audio);
+ 
++void dp_unregister_audio_driver(struct device *dev, struct dp_audio *dp_audio);
++
+ /**
+  * dp_audio_put()
+  *
+diff --git a/drivers/gpu/drm/msm/dp/dp_aux.c b/drivers/gpu/drm/msm/dp/dp_aux.c
+index cc3efed593aa1..84f9e3e5f9642 100644
+--- a/drivers/gpu/drm/msm/dp/dp_aux.c
++++ b/drivers/gpu/drm/msm/dp/dp_aux.c
+@@ -162,47 +162,6 @@ static ssize_t dp_aux_cmd_fifo_rx(struct dp_aux_private *aux,
+ 	return i;
+ }
+ 
+-static void dp_aux_native_handler(struct dp_aux_private *aux, u32 isr)
+-{
+-	if (isr & DP_INTR_AUX_I2C_DONE)
+-		aux->aux_error_num = DP_AUX_ERR_NONE;
+-	else if (isr & DP_INTR_WRONG_ADDR)
+-		aux->aux_error_num = DP_AUX_ERR_ADDR;
+-	else if (isr & DP_INTR_TIMEOUT)
+-		aux->aux_error_num = DP_AUX_ERR_TOUT;
+-	if (isr & DP_INTR_NACK_DEFER)
+-		aux->aux_error_num = DP_AUX_ERR_NACK;
+-	if (isr & DP_INTR_AUX_ERROR) {
+-		aux->aux_error_num = DP_AUX_ERR_PHY;
+-		dp_catalog_aux_clear_hw_interrupts(aux->catalog);
+-	}
+-}
+-
+-static void dp_aux_i2c_handler(struct dp_aux_private *aux, u32 isr)
+-{
+-	if (isr & DP_INTR_AUX_I2C_DONE) {
+-		if (isr & (DP_INTR_I2C_NACK | DP_INTR_I2C_DEFER))
+-			aux->aux_error_num = DP_AUX_ERR_NACK;
+-		else
+-			aux->aux_error_num = DP_AUX_ERR_NONE;
+-	} else {
+-		if (isr & DP_INTR_WRONG_ADDR)
+-			aux->aux_error_num = DP_AUX_ERR_ADDR;
+-		else if (isr & DP_INTR_TIMEOUT)
+-			aux->aux_error_num = DP_AUX_ERR_TOUT;
+-		if (isr & DP_INTR_NACK_DEFER)
+-			aux->aux_error_num = DP_AUX_ERR_NACK_DEFER;
+-		if (isr & DP_INTR_I2C_NACK)
+-			aux->aux_error_num = DP_AUX_ERR_NACK;
+-		if (isr & DP_INTR_I2C_DEFER)
+-			aux->aux_error_num = DP_AUX_ERR_DEFER;
+-		if (isr & DP_INTR_AUX_ERROR) {
+-			aux->aux_error_num = DP_AUX_ERR_PHY;
+-			dp_catalog_aux_clear_hw_interrupts(aux->catalog);
+-		}
+-	}
+-}
+-
+ static void dp_aux_update_offset_and_segment(struct dp_aux_private *aux,
+ 					     struct drm_dp_aux_msg *input_msg)
+ {
+@@ -427,13 +386,42 @@ void dp_aux_isr(struct drm_dp_aux *dp_aux)
+ 	if (!isr)
+ 		return;
+ 
+-	if (!aux->cmd_busy)
++	if (!aux->cmd_busy) {
++		DRM_ERROR("Unexpected DP AUX IRQ %#010x when not busy\n", isr);
+ 		return;
++	}
+ 
+-	if (aux->native)
+-		dp_aux_native_handler(aux, isr);
+-	else
+-		dp_aux_i2c_handler(aux, isr);
++	/*
++	 * The logic below assumes only one error bit is set (other than "done"
++	 * which can apparently be set at the same time as some of the other
++	 * bits). Warn if more than one get set so we know we need to improve
++	 * the logic.
++	 */
++	if (hweight32(isr & ~DP_INTR_AUX_XFER_DONE) > 1)
++		DRM_WARN("Some DP AUX interrupts unhandled: %#010x\n", isr);
++
++	if (isr & DP_INTR_AUX_ERROR) {
++		aux->aux_error_num = DP_AUX_ERR_PHY;
++		dp_catalog_aux_clear_hw_interrupts(aux->catalog);
++	} else if (isr & DP_INTR_NACK_DEFER) {
++		aux->aux_error_num = DP_AUX_ERR_NACK_DEFER;
++	} else if (isr & DP_INTR_WRONG_ADDR) {
++		aux->aux_error_num = DP_AUX_ERR_ADDR;
++	} else if (isr & DP_INTR_TIMEOUT) {
++		aux->aux_error_num = DP_AUX_ERR_TOUT;
++	} else if (!aux->native && (isr & DP_INTR_I2C_NACK)) {
++		aux->aux_error_num = DP_AUX_ERR_NACK;
++	} else if (!aux->native && (isr & DP_INTR_I2C_DEFER)) {
++		if (isr & DP_INTR_AUX_XFER_DONE)
++			aux->aux_error_num = DP_AUX_ERR_NACK;
++		else
++			aux->aux_error_num = DP_AUX_ERR_DEFER;
++	} else if (isr & DP_INTR_AUX_XFER_DONE) {
++		aux->aux_error_num = DP_AUX_ERR_NONE;
++	} else {
++		DRM_WARN("Unexpected interrupt: %#010x\n", isr);
++		return;
++	}
+ 
+ 	complete(&aux->comp);
+ }
+diff --git a/drivers/gpu/drm/msm/dp/dp_catalog.c b/drivers/gpu/drm/msm/dp/dp_catalog.c
+index 676279d0ca8d9..421391755427d 100644
+--- a/drivers/gpu/drm/msm/dp/dp_catalog.c
++++ b/drivers/gpu/drm/msm/dp/dp_catalog.c
+@@ -27,7 +27,7 @@
+ #define DP_INTF_CONFIG_DATABUS_WIDEN     BIT(4)
+ 
+ #define DP_INTERRUPT_STATUS1 \
+-	(DP_INTR_AUX_I2C_DONE| \
++	(DP_INTR_AUX_XFER_DONE| \
+ 	DP_INTR_WRONG_ADDR | DP_INTR_TIMEOUT | \
+ 	DP_INTR_NACK_DEFER | DP_INTR_WRONG_DATA_CNT | \
+ 	DP_INTR_I2C_NACK | DP_INTR_I2C_DEFER | \
+diff --git a/drivers/gpu/drm/msm/dp/dp_catalog.h b/drivers/gpu/drm/msm/dp/dp_catalog.h
+index 1f717f45c1158..f36b7b372a065 100644
+--- a/drivers/gpu/drm/msm/dp/dp_catalog.h
++++ b/drivers/gpu/drm/msm/dp/dp_catalog.h
+@@ -13,7 +13,7 @@
+ 
+ /* interrupts */
+ #define DP_INTR_HPD		BIT(0)
+-#define DP_INTR_AUX_I2C_DONE	BIT(3)
++#define DP_INTR_AUX_XFER_DONE	BIT(3)
+ #define DP_INTR_WRONG_ADDR	BIT(6)
+ #define DP_INTR_TIMEOUT		BIT(9)
+ #define DP_INTR_NACK_DEFER	BIT(12)
+diff --git a/drivers/gpu/drm/msm/dp/dp_display.c b/drivers/gpu/drm/msm/dp/dp_display.c
+index bde1a7ce442ff..3f9a18410c0bb 100644
+--- a/drivers/gpu/drm/msm/dp/dp_display.c
++++ b/drivers/gpu/drm/msm/dp/dp_display.c
+@@ -326,6 +326,7 @@ static void dp_display_unbind(struct device *dev, struct device *master,
+ 	kthread_stop(dp->ev_tsk);
+ 
+ 	dp_power_client_deinit(dp->power);
++	dp_unregister_audio_driver(dev, dp->audio);
+ 	dp_aux_unregister(dp->aux);
+ 	dp->drm_dev = NULL;
+ 	dp->aux->drm_dev = NULL;
+diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
+index ac8ed731f76d9..842dbf96af291 100644
+--- a/drivers/gpu/drm/msm/msm_gem_submit.c
++++ b/drivers/gpu/drm/msm/msm_gem_submit.c
+@@ -719,7 +719,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
+ 	struct msm_drm_private *priv = dev->dev_private;
+ 	struct drm_msm_gem_submit *args = data;
+ 	struct msm_file_private *ctx = file->driver_priv;
+-	struct msm_gem_submit *submit;
++	struct msm_gem_submit *submit = NULL;
+ 	struct msm_gpu *gpu = priv->gpu;
+ 	struct msm_gpu_submitqueue *queue;
+ 	struct msm_ringbuffer *ring;
+@@ -766,13 +766,15 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
+ 		out_fence_fd = get_unused_fd_flags(O_CLOEXEC);
+ 		if (out_fence_fd < 0) {
+ 			ret = out_fence_fd;
+-			return ret;
++			goto out_post_unlock;
+ 		}
+ 	}
+ 
+ 	submit = submit_create(dev, gpu, queue, args->nr_bos, args->nr_cmds);
+-	if (IS_ERR(submit))
+-		return PTR_ERR(submit);
++	if (IS_ERR(submit)) {
++		ret = PTR_ERR(submit);
++		goto out_post_unlock;
++	}
+ 
+ 	trace_msm_gpu_submit(pid_nr(submit->pid), ring->id, submit->ident,
+ 		args->nr_bos, args->nr_cmds);
+@@ -955,11 +957,20 @@ out:
+ 	if (has_ww_ticket)
+ 		ww_acquire_fini(&submit->ticket);
+ out_unlock:
+-	if (ret && (out_fence_fd >= 0))
+-		put_unused_fd(out_fence_fd);
+ 	mutex_unlock(&queue->lock);
+ out_post_unlock:
+-	msm_gem_submit_put(submit);
++	if (ret && (out_fence_fd >= 0))
++		put_unused_fd(out_fence_fd);
++
++	if (!IS_ERR_OR_NULL(submit)) {
++		msm_gem_submit_put(submit);
++	} else {
++		/*
++		 * If the submit hasn't yet taken ownership of the queue
++		 * then we need to drop the reference ourself:
++		 */
++		msm_submitqueue_put(queue);
++	}
+ 	if (!IS_ERR_OR_NULL(post_deps)) {
+ 		for (i = 0; i < args->nr_out_syncobjs; ++i) {
+ 			kfree(post_deps[i].chain);
+diff --git a/drivers/gpu/drm/nouveau/include/nvif/if0012.h b/drivers/gpu/drm/nouveau/include/nvif/if0012.h
+index eb99d84eb8443..16d4ad5023a3e 100644
+--- a/drivers/gpu/drm/nouveau/include/nvif/if0012.h
++++ b/drivers/gpu/drm/nouveau/include/nvif/if0012.h
+@@ -2,6 +2,8 @@
+ #ifndef __NVIF_IF0012_H__
+ #define __NVIF_IF0012_H__
+ 
++#include <drm/display/drm_dp.h>
++
+ union nvif_outp_args {
+ 	struct nvif_outp_v0 {
+ 		__u8 version;
+@@ -63,7 +65,7 @@ union nvif_outp_acquire_args {
+ 				__u8 hda;
+ 				__u8 mst;
+ 				__u8 pad04[4];
+-				__u8 dpcd[16];
++				__u8 dpcd[DP_RECEIVER_CAP_SIZE];
+ 			} dp;
+ 		};
+ 	} v0;
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/disp/outp.h b/drivers/gpu/drm/nouveau/nvkm/engine/disp/outp.h
+index b7631c1ab2420..4e7f873f66e27 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/disp/outp.h
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/disp/outp.h
+@@ -3,6 +3,7 @@
+ #define __NVKM_DISP_OUTP_H__
+ #include "priv.h"
+ 
++#include <drm/display/drm_dp.h>
+ #include <subdev/bios.h>
+ #include <subdev/bios/dcb.h>
+ #include <subdev/bios/dp.h>
+@@ -42,7 +43,7 @@ struct nvkm_outp {
+ 			bool aux_pwr_pu;
+ 			u8 lttpr[6];
+ 			u8 lttprs;
+-			u8 dpcd[16];
++			u8 dpcd[DP_RECEIVER_CAP_SIZE];
+ 
+ 			struct {
+ 				int dpcd; /* -1, or index into SUPPORTED_LINK_RATES table */
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/disp/uoutp.c b/drivers/gpu/drm/nouveau/nvkm/engine/disp/uoutp.c
+index 4f0ca709c85a4..fc283a4a1522a 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/disp/uoutp.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/disp/uoutp.c
+@@ -146,7 +146,7 @@ nvkm_uoutp_mthd_release(struct nvkm_outp *outp, void *argv, u32 argc)
+ }
+ 
+ static int
+-nvkm_uoutp_mthd_acquire_dp(struct nvkm_outp *outp, u8 dpcd[16],
++nvkm_uoutp_mthd_acquire_dp(struct nvkm_outp *outp, u8 dpcd[DP_RECEIVER_CAP_SIZE],
+ 			   u8 link_nr, u8 link_bw, bool hda, bool mst)
+ {
+ 	int ret;
+diff --git a/drivers/gpu/drm/rockchip/dw_hdmi-rockchip.c b/drivers/gpu/drm/rockchip/dw_hdmi-rockchip.c
+index 2f4b8f64cbad3..ae857bf8bd624 100644
+--- a/drivers/gpu/drm/rockchip/dw_hdmi-rockchip.c
++++ b/drivers/gpu/drm/rockchip/dw_hdmi-rockchip.c
+@@ -640,6 +640,7 @@ static void dw_hdmi_rockchip_unbind(struct device *dev, struct device *master,
+ 	struct rockchip_hdmi *hdmi = dev_get_drvdata(dev);
+ 
+ 	dw_hdmi_unbind(hdmi->hdmi);
++	drm_encoder_cleanup(&hdmi->encoder.encoder);
+ 	clk_disable_unprepare(hdmi->ref_clk);
+ 
+ 	regulator_disable(hdmi->avdd_1v8);
+diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
+index 1e08cc5a17029..78c959eaef0c5 100644
+--- a/drivers/gpu/drm/scheduler/sched_main.c
++++ b/drivers/gpu/drm/scheduler/sched_main.c
+@@ -308,7 +308,7 @@ static void drm_sched_start_timeout(struct drm_gpu_scheduler *sched)
+  */
+ void drm_sched_fault(struct drm_gpu_scheduler *sched)
+ {
+-	if (sched->ready)
++	if (sched->timeout_wq)
+ 		mod_delayed_work(sched->timeout_wq, &sched->work_tdr, 0);
+ }
+ EXPORT_SYMBOL(drm_sched_fault);
+diff --git a/drivers/gpu/drm/tegra/sor.c b/drivers/gpu/drm/tegra/sor.c
+index 8af632740673a..77723d5f1d3fd 100644
+--- a/drivers/gpu/drm/tegra/sor.c
++++ b/drivers/gpu/drm/tegra/sor.c
+@@ -1153,7 +1153,7 @@ static int tegra_sor_compute_config(struct tegra_sor *sor,
+ 				    struct drm_dp_link *link)
+ {
+ 	const u64 f = 100000, link_rate = link->rate * 1000;
+-	const u64 pclk = mode->clock * 1000;
++	const u64 pclk = (u64)mode->clock * 1000;
+ 	u64 input, output, watermark, num;
+ 	struct tegra_sor_params params;
+ 	u32 num_syms_per_line;
+diff --git a/drivers/hid/hid-apple.c b/drivers/hid/hid-apple.c
+index 1ccab8aa326cd..e2c73a78b5972 100644
+--- a/drivers/hid/hid-apple.c
++++ b/drivers/hid/hid-apple.c
+@@ -875,14 +875,16 @@ static const struct hid_device_id apple_devices[] = {
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_GEYSER3_ANSI),
+ 		.driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_GEYSER3_ISO),
+-		.driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN },
++		.driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN |
++			APPLE_ISO_TILDE_QUIRK },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_GEYSER3_JIS),
+ 		.driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN |
+ 			APPLE_RDESC_JIS },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_GEYSER4_ANSI),
+ 		.driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_GEYSER4_ISO),
+-		.driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN },
++		.driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN |
++			APPLE_ISO_TILDE_QUIRK },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_GEYSER4_JIS),
+ 		.driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN |
+ 			APPLE_RDESC_JIS },
+@@ -901,7 +903,8 @@ static const struct hid_device_id apple_devices[] = {
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_GEYSER4_HF_ANSI),
+ 		.driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_GEYSER4_HF_ISO),
+-		.driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN },
++		.driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN |
++			APPLE_ISO_TILDE_QUIRK },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_GEYSER4_HF_JIS),
+ 		.driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN |
+ 			APPLE_RDESC_JIS },
+@@ -942,31 +945,31 @@ static const struct hid_device_id apple_devices[] = {
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING_ANSI),
+ 		.driver_data = APPLE_HAS_FN },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING_ISO),
+-		.driver_data = APPLE_HAS_FN },
++		.driver_data = APPLE_HAS_FN | APPLE_ISO_TILDE_QUIRK },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING_JIS),
+ 		.driver_data = APPLE_HAS_FN | APPLE_RDESC_JIS },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING2_ANSI),
+ 		.driver_data = APPLE_HAS_FN },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING2_ISO),
+-		.driver_data = APPLE_HAS_FN },
++		.driver_data = APPLE_HAS_FN | APPLE_ISO_TILDE_QUIRK },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING2_JIS),
+ 		.driver_data = APPLE_HAS_FN | APPLE_RDESC_JIS },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING3_ANSI),
+ 		.driver_data = APPLE_HAS_FN },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING3_ISO),
+-		.driver_data = APPLE_HAS_FN },
++		.driver_data = APPLE_HAS_FN | APPLE_ISO_TILDE_QUIRK },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING3_JIS),
+ 		.driver_data = APPLE_HAS_FN | APPLE_RDESC_JIS },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING4_ANSI),
+ 		.driver_data = APPLE_HAS_FN },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING4_ISO),
+-		.driver_data = APPLE_HAS_FN },
++		.driver_data = APPLE_HAS_FN | APPLE_ISO_TILDE_QUIRK },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING4_JIS),
+ 		.driver_data = APPLE_HAS_FN | APPLE_RDESC_JIS },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING4A_ANSI),
+ 		.driver_data = APPLE_HAS_FN },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING4A_ISO),
+-		.driver_data = APPLE_HAS_FN },
++		.driver_data = APPLE_HAS_FN | APPLE_ISO_TILDE_QUIRK },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING4A_JIS),
+ 		.driver_data = APPLE_HAS_FN | APPLE_RDESC_JIS },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING5_ANSI),
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index c2e9b6d1fd7d3..8f3e0a5d5f834 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -415,6 +415,7 @@
+ #define I2C_DEVICE_ID_HP_SPECTRE_X360_15	0x2817
+ #define I2C_DEVICE_ID_HP_SPECTRE_X360_13_AW0020NG  0x29DF
+ #define I2C_DEVICE_ID_ASUS_TP420IA_TOUCHSCREEN 0x2BC8
++#define I2C_DEVICE_ID_ASUS_GV301RA_TOUCHSCREEN 0x2C82
+ #define USB_DEVICE_ID_ASUS_UX550VE_TOUCHSCREEN	0x2544
+ #define USB_DEVICE_ID_ASUS_UX550_TOUCHSCREEN	0x2706
+ #define I2C_DEVICE_ID_SURFACE_GO_TOUCHSCREEN	0x261A
+diff --git a/drivers/hid/hid-input.c b/drivers/hid/hid-input.c
+index 5c65a584b3fa0..5a88866505eab 100644
+--- a/drivers/hid/hid-input.c
++++ b/drivers/hid/hid-input.c
+@@ -372,6 +372,8 @@ static const struct hid_device_id hid_battery_quirks[] = {
+ 	  HID_BATTERY_QUIRK_IGNORE },
+ 	{ HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_ASUS_TP420IA_TOUCHSCREEN),
+ 	  HID_BATTERY_QUIRK_IGNORE },
++	{ HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_ASUS_GV301RA_TOUCHSCREEN),
++	  HID_BATTERY_QUIRK_IGNORE },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_ELAN, USB_DEVICE_ID_ASUS_UX550_TOUCHSCREEN),
+ 	  HID_BATTERY_QUIRK_IGNORE },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_ELAN, USB_DEVICE_ID_ASUS_UX550VE_TOUCHSCREEN),
+diff --git a/drivers/hid/hid-logitech-hidpp.c b/drivers/hid/hid-logitech-hidpp.c
+index 5fc88a0632978..da89e84c9cbeb 100644
+--- a/drivers/hid/hid-logitech-hidpp.c
++++ b/drivers/hid/hid-logitech-hidpp.c
+@@ -853,8 +853,7 @@ static int hidpp_unifying_init(struct hidpp_device *hidpp)
+ 	if (ret)
+ 		return ret;
+ 
+-	snprintf(hdev->uniq, sizeof(hdev->uniq), "%04x-%4phD",
+-		 hdev->product, &serial);
++	snprintf(hdev->uniq, sizeof(hdev->uniq), "%4phD", &serial);
+ 	dbg_hid("HID++ Unifying: Got serial: %s\n", hdev->uniq);
+ 
+ 	name = hidpp_unifying_get_name(hidpp);
+@@ -947,6 +946,54 @@ print_version:
+ 	return 0;
+ }
+ 
++/* -------------------------------------------------------------------------- */
++/* 0x0003: Device Information                                                 */
++/* -------------------------------------------------------------------------- */
++
++#define HIDPP_PAGE_DEVICE_INFORMATION			0x0003
++
++#define CMD_GET_DEVICE_INFO				0x00
++
++static int hidpp_get_serial(struct hidpp_device *hidpp, u32 *serial)
++{
++	struct hidpp_report response;
++	u8 feature_type;
++	u8 feature_index;
++	int ret;
++
++	ret = hidpp_root_get_feature(hidpp, HIDPP_PAGE_DEVICE_INFORMATION,
++				     &feature_index,
++				     &feature_type);
++	if (ret)
++		return ret;
++
++	ret = hidpp_send_fap_command_sync(hidpp, feature_index,
++					  CMD_GET_DEVICE_INFO,
++					  NULL, 0, &response);
++	if (ret)
++		return ret;
++
++	/* See hidpp_unifying_get_serial() */
++	*serial = *((u32 *)&response.rap.params[1]);
++	return 0;
++}
++
++static int hidpp_serial_init(struct hidpp_device *hidpp)
++{
++	struct hid_device *hdev = hidpp->hid_dev;
++	u32 serial;
++	int ret;
++
++	ret = hidpp_get_serial(hidpp, &serial);
++	if (ret)
++		return ret;
++
++	snprintf(hdev->uniq, sizeof(hdev->uniq), "%4phD", &serial);
++	dbg_hid("HID++ DeviceInformation: Got serial: %s\n", hdev->uniq);
++
++	return 0;
++}
++
+ /* -------------------------------------------------------------------------- */
+ /* 0x0005: GetDeviceNameType                                                  */
+ /* -------------------------------------------------------------------------- */
+@@ -4210,6 +4257,8 @@ static int hidpp_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ 
+ 	if (hidpp->quirks & HIDPP_QUIRK_UNIFYING)
+ 		hidpp_unifying_init(hidpp);
++	else if (hid_is_usb(hidpp->hid_dev))
++		hidpp_serial_init(hidpp);
+ 
+ 	connected = hidpp_root_get_protocol_version(hidpp) == 0;
+ 	atomic_set(&hidpp->connected, connected);
+diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
+index 0c6a82c665c1d..d2f500242ed40 100644
+--- a/drivers/hid/wacom_wac.c
++++ b/drivers/hid/wacom_wac.c
+@@ -1963,18 +1963,7 @@ static void wacom_map_usage(struct input_dev *input, struct hid_usage *usage,
+ static void wacom_wac_battery_usage_mapping(struct hid_device *hdev,
+ 		struct hid_field *field, struct hid_usage *usage)
+ {
+-	struct wacom *wacom = hid_get_drvdata(hdev);
+-	struct wacom_wac *wacom_wac = &wacom->wacom_wac;
+-	struct wacom_features *features = &wacom_wac->features;
+-	unsigned equivalent_usage = wacom_equivalent_usage(usage->hid);
+-
+-	switch (equivalent_usage) {
+-	case HID_DG_BATTERYSTRENGTH:
+-	case WACOM_HID_WD_BATTERY_LEVEL:
+-	case WACOM_HID_WD_BATTERY_CHARGING:
+-		features->quirks |= WACOM_QUIRK_BATTERY;
+-		break;
+-	}
++	return;
+ }
+ 
+ static void wacom_wac_battery_event(struct hid_device *hdev, struct hid_field *field,
+@@ -1995,18 +1984,21 @@ static void wacom_wac_battery_event(struct hid_device *hdev, struct hid_field *f
+ 			wacom_wac->hid_data.bat_connected = 1;
+ 			wacom_wac->hid_data.bat_status = WACOM_POWER_SUPPLY_STATUS_AUTO;
+ 		}
++		wacom_wac->features.quirks |= WACOM_QUIRK_BATTERY;
+ 		break;
+ 	case WACOM_HID_WD_BATTERY_LEVEL:
+ 		value = value * 100 / (field->logical_maximum - field->logical_minimum);
+ 		wacom_wac->hid_data.battery_capacity = value;
+ 		wacom_wac->hid_data.bat_connected = 1;
+ 		wacom_wac->hid_data.bat_status = WACOM_POWER_SUPPLY_STATUS_AUTO;
++		wacom_wac->features.quirks |= WACOM_QUIRK_BATTERY;
+ 		break;
+ 	case WACOM_HID_WD_BATTERY_CHARGING:
+ 		wacom_wac->hid_data.bat_charging = value;
+ 		wacom_wac->hid_data.ps_connected = value;
+ 		wacom_wac->hid_data.bat_connected = 1;
+ 		wacom_wac->hid_data.bat_status = WACOM_POWER_SUPPLY_STATUS_AUTO;
++		wacom_wac->features.quirks |= WACOM_QUIRK_BATTERY;
+ 		break;
+ 	}
+ }
+@@ -2022,18 +2014,15 @@ static void wacom_wac_battery_report(struct hid_device *hdev,
+ {
+ 	struct wacom *wacom = hid_get_drvdata(hdev);
+ 	struct wacom_wac *wacom_wac = &wacom->wacom_wac;
+-	struct wacom_features *features = &wacom_wac->features;
+ 
+-	if (features->quirks & WACOM_QUIRK_BATTERY) {
+-		int status = wacom_wac->hid_data.bat_status;
+-		int capacity = wacom_wac->hid_data.battery_capacity;
+-		bool charging = wacom_wac->hid_data.bat_charging;
+-		bool connected = wacom_wac->hid_data.bat_connected;
+-		bool powered = wacom_wac->hid_data.ps_connected;
++	int status = wacom_wac->hid_data.bat_status;
++	int capacity = wacom_wac->hid_data.battery_capacity;
++	bool charging = wacom_wac->hid_data.bat_charging;
++	bool connected = wacom_wac->hid_data.bat_connected;
++	bool powered = wacom_wac->hid_data.ps_connected;
+ 
+-		wacom_notify_battery(wacom_wac, status, capacity, charging,
+-				     connected, powered);
+-	}
++	wacom_notify_battery(wacom_wac, status, capacity, charging,
++			     connected, powered);
+ }
+ 
+ static void wacom_wac_pad_usage_mapping(struct hid_device *hdev,
+diff --git a/drivers/hwmon/nzxt-smart2.c b/drivers/hwmon/nzxt-smart2.c
+index 2b93ba89610ae..a8e72d8fd0605 100644
+--- a/drivers/hwmon/nzxt-smart2.c
++++ b/drivers/hwmon/nzxt-smart2.c
+@@ -791,7 +791,8 @@ static const struct hid_device_id nzxt_smart2_hid_id_table[] = {
+ 	{ HID_USB_DEVICE(0x1e71, 0x2009) }, /* NZXT RGB & Fan Controller */
+ 	{ HID_USB_DEVICE(0x1e71, 0x200e) }, /* NZXT RGB & Fan Controller */
+ 	{ HID_USB_DEVICE(0x1e71, 0x2010) }, /* NZXT RGB & Fan Controller */
+-	{ HID_USB_DEVICE(0x1e71, 0x2019) }, /* NZXT RGB & Fan Controller */
++	{ HID_USB_DEVICE(0x1e71, 0x2011) }, /* NZXT RGB & Fan Controller (6 RGB) */
++	{ HID_USB_DEVICE(0x1e71, 0x2019) }, /* NZXT RGB & Fan Controller (6 RGB) */
+ 	{},
+ };
+ 
+diff --git a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx.h b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx.h
+index 499fcf8875b40..8e119d78730ba 100644
+--- a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx.h
++++ b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx.h
+@@ -137,6 +137,13 @@ struct st_lsm6dsx_odr_table_entry {
+ 	int odr_len;
+ };
+ 
++struct st_lsm6dsx_samples_to_discard {
++	struct {
++		u32 milli_hz;
++		u16 samples;
++	} val[ST_LSM6DSX_ODR_LIST_SIZE];
++};
++
+ struct st_lsm6dsx_fs {
+ 	u32 gain;
+ 	u8 val;
+@@ -291,6 +298,7 @@ struct st_lsm6dsx_ext_dev_settings {
+  * @irq_config: interrupts related registers.
+  * @drdy_mask: register info for data-ready mask (addr + mask).
+  * @odr_table: Hw sensors odr table (Hz + val).
++ * @samples_to_discard: Number of samples to discard for filters settling time.
+  * @fs_table: Hw sensors gain table (gain + val).
+  * @decimator: List of decimator register info (addr + mask).
+  * @batch: List of FIFO batching register info (addr + mask).
+@@ -323,6 +331,7 @@ struct st_lsm6dsx_settings {
+ 	} irq_config;
+ 	struct st_lsm6dsx_reg drdy_mask;
+ 	struct st_lsm6dsx_odr_table_entry odr_table[2];
++	struct st_lsm6dsx_samples_to_discard samples_to_discard[2];
+ 	struct st_lsm6dsx_fs_table_entry fs_table[2];
+ 	struct st_lsm6dsx_reg decimator[ST_LSM6DSX_MAX_ID];
+ 	struct st_lsm6dsx_reg batch[ST_LSM6DSX_MAX_ID];
+@@ -353,6 +362,7 @@ enum st_lsm6dsx_fifo_mode {
+  * @hw: Pointer to instance of struct st_lsm6dsx_hw.
+  * @gain: Configured sensor sensitivity.
+  * @odr: Output data rate of the sensor [Hz].
++ * @samples_to_discard: Number of samples to discard for filters settling time.
+  * @watermark: Sensor watermark level.
+  * @decimator: Sensor decimation factor.
+  * @sip: Number of samples in a given pattern.
+@@ -367,6 +377,7 @@ struct st_lsm6dsx_sensor {
+ 	u32 gain;
+ 	u32 odr;
+ 
++	u16 samples_to_discard;
+ 	u16 watermark;
+ 	u8 decimator;
+ 	u8 sip;
+diff --git a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c
+index 7dd5205aea5b4..f6c11d6fb0b0f 100644
+--- a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c
++++ b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c
+@@ -457,17 +457,31 @@ int st_lsm6dsx_read_fifo(struct st_lsm6dsx_hw *hw)
+ 			}
+ 
+ 			if (gyro_sip > 0 && !(sip % gyro_sensor->decimator)) {
+-				iio_push_to_buffers_with_timestamp(
+-					hw->iio_devs[ST_LSM6DSX_ID_GYRO],
+-					&hw->scan[ST_LSM6DSX_ID_GYRO],
+-					gyro_sensor->ts_ref + ts);
++				/*
++				 * We need to discards gyro samples during
++				 * filters settling time
++				 */
++				if (gyro_sensor->samples_to_discard > 0)
++					gyro_sensor->samples_to_discard--;
++				else
++					iio_push_to_buffers_with_timestamp(
++						hw->iio_devs[ST_LSM6DSX_ID_GYRO],
++						&hw->scan[ST_LSM6DSX_ID_GYRO],
++						gyro_sensor->ts_ref + ts);
+ 				gyro_sip--;
+ 			}
+ 			if (acc_sip > 0 && !(sip % acc_sensor->decimator)) {
+-				iio_push_to_buffers_with_timestamp(
+-					hw->iio_devs[ST_LSM6DSX_ID_ACC],
+-					&hw->scan[ST_LSM6DSX_ID_ACC],
+-					acc_sensor->ts_ref + ts);
++				/*
++				 * We need to discards accel samples during
++				 * filters settling time
++				 */
++				if (acc_sensor->samples_to_discard > 0)
++					acc_sensor->samples_to_discard--;
++				else
++					iio_push_to_buffers_with_timestamp(
++						hw->iio_devs[ST_LSM6DSX_ID_ACC],
++						&hw->scan[ST_LSM6DSX_ID_ACC],
++						acc_sensor->ts_ref + ts);
+ 				acc_sip--;
+ 			}
+ 			if (ext_sip > 0 && !(sip % ext_sensor->decimator)) {
+@@ -654,6 +668,30 @@ int st_lsm6dsx_flush_fifo(struct st_lsm6dsx_hw *hw)
+ 	return err;
+ }
+ 
++static void
++st_lsm6dsx_update_samples_to_discard(struct st_lsm6dsx_sensor *sensor)
++{
++	const struct st_lsm6dsx_samples_to_discard *data;
++	struct st_lsm6dsx_hw *hw = sensor->hw;
++	int i;
++
++	if (sensor->id != ST_LSM6DSX_ID_GYRO &&
++	    sensor->id != ST_LSM6DSX_ID_ACC)
++		return;
++
++	/* check if drdy mask is supported in hw */
++	if (hw->settings->drdy_mask.addr)
++		return;
++
++	data = &hw->settings->samples_to_discard[sensor->id];
++	for (i = 0; i < ST_LSM6DSX_ODR_LIST_SIZE; i++) {
++		if (data->val[i].milli_hz == sensor->odr) {
++			sensor->samples_to_discard = data->val[i].samples;
++			return;
++		}
++	}
++}
++
+ int st_lsm6dsx_update_fifo(struct st_lsm6dsx_sensor *sensor, bool enable)
+ {
+ 	struct st_lsm6dsx_hw *hw = sensor->hw;
+@@ -673,6 +711,9 @@ int st_lsm6dsx_update_fifo(struct st_lsm6dsx_sensor *sensor, bool enable)
+ 			goto out;
+ 	}
+ 
++	if (enable)
++		st_lsm6dsx_update_samples_to_discard(sensor);
++
+ 	err = st_lsm6dsx_device_set_enable(sensor, enable);
+ 	if (err < 0)
+ 		goto out;
+diff --git a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c
+index 3f6060c64f32b..966df6ffe8740 100644
+--- a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c
++++ b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c
+@@ -634,6 +634,24 @@ static const struct st_lsm6dsx_settings st_lsm6dsx_sensor_settings[] = {
+ 				.fs_len = 4,
+ 			},
+ 		},
++		.samples_to_discard = {
++			[ST_LSM6DSX_ID_ACC] = {
++				.val[0] = {  12500, 1 },
++				.val[1] = {  26000, 1 },
++				.val[2] = {  52000, 1 },
++				.val[3] = { 104000, 2 },
++				.val[4] = { 208000, 2 },
++				.val[5] = { 416000, 2 },
++			},
++			[ST_LSM6DSX_ID_GYRO] = {
++				.val[0] = {  12500,  2 },
++				.val[1] = {  26000,  5 },
++				.val[2] = {  52000,  7 },
++				.val[3] = { 104000, 12 },
++				.val[4] = { 208000, 20 },
++				.val[5] = { 416000, 36 },
++			},
++		},
+ 		.irq_config = {
+ 			.irq1 = {
+ 				.addr = 0x0d,
+diff --git a/drivers/infiniband/core/user_mad.c b/drivers/infiniband/core/user_mad.c
+index f83954180a338..d21c0a042f0a5 100644
+--- a/drivers/infiniband/core/user_mad.c
++++ b/drivers/infiniband/core/user_mad.c
+@@ -131,6 +131,11 @@ struct ib_umad_packet {
+ 	struct ib_user_mad mad;
+ };
+ 
++struct ib_rmpp_mad_hdr {
++	struct ib_mad_hdr	mad_hdr;
++	struct ib_rmpp_hdr      rmpp_hdr;
++} __packed;
++
+ #define CREATE_TRACE_POINTS
+ #include <trace/events/ib_umad.h>
+ 
+@@ -494,11 +499,11 @@ static ssize_t ib_umad_write(struct file *filp, const char __user *buf,
+ 			     size_t count, loff_t *pos)
+ {
+ 	struct ib_umad_file *file = filp->private_data;
++	struct ib_rmpp_mad_hdr *rmpp_mad_hdr;
+ 	struct ib_umad_packet *packet;
+ 	struct ib_mad_agent *agent;
+ 	struct rdma_ah_attr ah_attr;
+ 	struct ib_ah *ah;
+-	struct ib_rmpp_mad *rmpp_mad;
+ 	__be64 *tid;
+ 	int ret, data_len, hdr_len, copy_offset, rmpp_active;
+ 	u8 base_version;
+@@ -506,7 +511,7 @@ static ssize_t ib_umad_write(struct file *filp, const char __user *buf,
+ 	if (count < hdr_size(file) + IB_MGMT_RMPP_HDR)
+ 		return -EINVAL;
+ 
+-	packet = kzalloc(sizeof *packet + IB_MGMT_RMPP_HDR, GFP_KERNEL);
++	packet = kzalloc(sizeof(*packet) + IB_MGMT_RMPP_HDR, GFP_KERNEL);
+ 	if (!packet)
+ 		return -ENOMEM;
+ 
+@@ -560,13 +565,13 @@ static ssize_t ib_umad_write(struct file *filp, const char __user *buf,
+ 		goto err_up;
+ 	}
+ 
+-	rmpp_mad = (struct ib_rmpp_mad *) packet->mad.data;
+-	hdr_len = ib_get_mad_data_offset(rmpp_mad->mad_hdr.mgmt_class);
++	rmpp_mad_hdr = (struct ib_rmpp_mad_hdr *)packet->mad.data;
++	hdr_len = ib_get_mad_data_offset(rmpp_mad_hdr->mad_hdr.mgmt_class);
+ 
+-	if (ib_is_mad_class_rmpp(rmpp_mad->mad_hdr.mgmt_class)
++	if (ib_is_mad_class_rmpp(rmpp_mad_hdr->mad_hdr.mgmt_class)
+ 	    && ib_mad_kernel_rmpp_agent(agent)) {
+ 		copy_offset = IB_MGMT_RMPP_HDR;
+-		rmpp_active = ib_get_rmpp_flags(&rmpp_mad->rmpp_hdr) &
++		rmpp_active = ib_get_rmpp_flags(&rmpp_mad_hdr->rmpp_hdr) &
+ 						IB_MGMT_RMPP_FLAG_ACTIVE;
+ 	} else {
+ 		copy_offset = IB_MGMT_MAD_HDR;
+@@ -615,12 +620,12 @@ static ssize_t ib_umad_write(struct file *filp, const char __user *buf,
+ 		tid = &((struct ib_mad_hdr *) packet->msg->mad)->tid;
+ 		*tid = cpu_to_be64(((u64) agent->hi_tid) << 32 |
+ 				   (be64_to_cpup(tid) & 0xffffffff));
+-		rmpp_mad->mad_hdr.tid = *tid;
++		rmpp_mad_hdr->mad_hdr.tid = *tid;
+ 	}
+ 
+ 	if (!ib_mad_kernel_rmpp_agent(agent)
+-	   && ib_is_mad_class_rmpp(rmpp_mad->mad_hdr.mgmt_class)
+-	   && (ib_get_rmpp_flags(&rmpp_mad->rmpp_hdr) & IB_MGMT_RMPP_FLAG_ACTIVE)) {
++	    && ib_is_mad_class_rmpp(rmpp_mad_hdr->mad_hdr.mgmt_class)
++	    && (ib_get_rmpp_flags(&rmpp_mad_hdr->rmpp_hdr) & IB_MGMT_RMPP_FLAG_ACTIVE)) {
+ 		spin_lock_irq(&file->send_lock);
+ 		list_add_tail(&packet->list, &file->send_list);
+ 		spin_unlock_irq(&file->send_lock);
+diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
+index 67356f5152616..bd0a818ba1cd8 100644
+--- a/drivers/infiniband/hw/mlx5/mr.c
++++ b/drivers/infiniband/hw/mlx5/mr.c
+@@ -67,11 +67,11 @@ static void set_mkc_access_pd_addr_fields(void *mkc, int acc, u64 start_addr,
+ 	MLX5_SET(mkc, mkc, lw, !!(acc & IB_ACCESS_LOCAL_WRITE));
+ 	MLX5_SET(mkc, mkc, lr, 1);
+ 
+-	if ((acc & IB_ACCESS_RELAXED_ORDERING) &&
+-	    pcie_relaxed_ordering_enabled(dev->mdev->pdev)) {
++	if (acc & IB_ACCESS_RELAXED_ORDERING) {
+ 		if (MLX5_CAP_GEN(dev->mdev, relaxed_ordering_write))
+ 			MLX5_SET(mkc, mkc, relaxed_ordering_write, 1);
+-		if (MLX5_CAP_GEN(dev->mdev, relaxed_ordering_read))
++		if (MLX5_CAP_GEN(dev->mdev, relaxed_ordering_read) &&
++		    pcie_relaxed_ordering_enabled(dev->mdev->pdev))
+ 			MLX5_SET(mkc, mkc, relaxed_ordering_read, 1);
+ 	}
+ 
+diff --git a/drivers/input/joystick/xpad.c b/drivers/input/joystick/xpad.c
+index 29131f1a2f067..f617b2c60819c 100644
+--- a/drivers/input/joystick/xpad.c
++++ b/drivers/input/joystick/xpad.c
+@@ -559,6 +559,9 @@ struct xboxone_init_packet {
+ #define GIP_MOTOR_LT BIT(3)
+ #define GIP_MOTOR_ALL (GIP_MOTOR_R | GIP_MOTOR_L | GIP_MOTOR_RT | GIP_MOTOR_LT)
+ 
++#define GIP_WIRED_INTF_DATA 0
++#define GIP_WIRED_INTF_AUDIO 1
++
+ /*
+  * This packet is required for all Xbox One pads with 2015
+  * or later firmware installed (or present from the factory).
+@@ -2003,7 +2006,7 @@ static int xpad_probe(struct usb_interface *intf, const struct usb_device_id *id
+ 	}
+ 
+ 	if (xpad->xtype == XTYPE_XBOXONE &&
+-	    intf->cur_altsetting->desc.bInterfaceNumber != 0) {
++	    intf->cur_altsetting->desc.bInterfaceNumber != GIP_WIRED_INTF_DATA) {
+ 		/*
+ 		 * The Xbox One controller lists three interfaces all with the
+ 		 * same interface class, subclass and protocol. Differentiate by
+diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+index f2425b0f0cd62..7614739ea2c1b 100644
+--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
++++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+@@ -152,6 +152,18 @@ static void queue_inc_cons(struct arm_smmu_ll_queue *q)
+ 	q->cons = Q_OVF(q->cons) | Q_WRP(q, cons) | Q_IDX(q, cons);
+ }
+ 
++static void queue_sync_cons_ovf(struct arm_smmu_queue *q)
++{
++	struct arm_smmu_ll_queue *llq = &q->llq;
++
++	if (likely(Q_OVF(llq->prod) == Q_OVF(llq->cons)))
++		return;
++
++	llq->cons = Q_OVF(llq->prod) | Q_WRP(llq, llq->cons) |
++		      Q_IDX(llq, llq->cons);
++	queue_sync_cons_out(q);
++}
++
+ static int queue_sync_prod_in(struct arm_smmu_queue *q)
+ {
+ 	u32 prod;
+@@ -1577,8 +1589,7 @@ static irqreturn_t arm_smmu_evtq_thread(int irq, void *dev)
+ 	} while (!queue_empty(llq));
+ 
+ 	/* Sync our overflow flag, as we believe we're up to speed */
+-	llq->cons = Q_OVF(llq->prod) | Q_WRP(llq, llq->cons) |
+-		    Q_IDX(llq, llq->cons);
++	queue_sync_cons_ovf(q);
+ 	return IRQ_HANDLED;
+ }
+ 
+@@ -1636,9 +1647,7 @@ static irqreturn_t arm_smmu_priq_thread(int irq, void *dev)
+ 	} while (!queue_empty(llq));
+ 
+ 	/* Sync our overflow flag, as we believe we're up to speed */
+-	llq->cons = Q_OVF(llq->prod) | Q_WRP(llq, llq->cons) |
+-		      Q_IDX(llq, llq->cons);
+-	queue_sync_cons_out(q);
++	queue_sync_cons_ovf(q);
+ 	return IRQ_HANDLED;
+ }
+ 
+diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
+index d1b296b95c860..c71afda79d644 100644
+--- a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
++++ b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
+@@ -268,12 +268,26 @@ static int qcom_smmu_init_context(struct arm_smmu_domain *smmu_domain,
+ 
+ static int qcom_smmu_cfg_probe(struct arm_smmu_device *smmu)
+ {
+-	unsigned int last_s2cr = ARM_SMMU_GR0_S2CR(smmu->num_mapping_groups - 1);
+ 	struct qcom_smmu *qsmmu = to_qcom_smmu(smmu);
++	unsigned int last_s2cr;
+ 	u32 reg;
+ 	u32 smr;
+ 	int i;
+ 
++	/*
++	 * Some platforms support more than the Arm SMMU architected maximum of
++	 * 128 stream matching groups. For unknown reasons, the additional
++	 * groups don't exhibit the same behavior as the architected registers,
++	 * so limit the groups to 128 until the behavior is fixed for the other
++	 * groups.
++	 */
++	if (smmu->num_mapping_groups > 128) {
++		dev_notice(smmu->dev, "\tLimiting the stream matching groups to 128\n");
++		smmu->num_mapping_groups = 128;
++	}
++
++	last_s2cr = ARM_SMMU_GR0_S2CR(smmu->num_mapping_groups - 1);
++
+ 	/*
+ 	 * With some firmware versions writes to S2CR of type FAULT are
+ 	 * ignored, and writing BYPASS will end up written as FAULT in the
+@@ -503,6 +517,7 @@ static const struct of_device_id __maybe_unused qcom_smmu_impl_of_match[] = {
+ 	{ .compatible = "qcom,qcm2290-smmu-500", .data = &qcom_smmu_500_impl0_data },
+ 	{ .compatible = "qcom,qdu1000-smmu-500", .data = &qcom_smmu_500_impl0_data  },
+ 	{ .compatible = "qcom,sc7180-smmu-500", .data = &qcom_smmu_500_impl0_data },
++	{ .compatible = "qcom,sc7180-smmu-v2", .data = &qcom_smmu_v2_data },
+ 	{ .compatible = "qcom,sc7280-smmu-500", .data = &qcom_smmu_500_impl0_data },
+ 	{ .compatible = "qcom,sc8180x-smmu-500", .data = &qcom_smmu_500_impl0_data },
+ 	{ .compatible = "qcom,sc8280xp-smmu-500", .data = &qcom_smmu_500_impl0_data },
+@@ -547,5 +562,14 @@ struct arm_smmu_device *qcom_smmu_impl_init(struct arm_smmu_device *smmu)
+ 	if (match)
+ 		return qcom_smmu_create(smmu, match->data);
+ 
++	/*
++	 * If you hit this WARN_ON() you are missing an entry in the
++	 * qcom_smmu_impl_of_match[] table, and GPU per-process page-
++	 * tables will be broken.
++	 */
++	WARN(of_device_is_compatible(np, "qcom,adreno-smmu"),
++	     "Missing qcom_smmu_impl_of_match entry for: %s",
++	     dev_name(smmu->dev));
++
+ 	return smmu;
+ }
+diff --git a/drivers/iommu/sprd-iommu.c b/drivers/iommu/sprd-iommu.c
+index ae94d74b73f46..7df1f730c778e 100644
+--- a/drivers/iommu/sprd-iommu.c
++++ b/drivers/iommu/sprd-iommu.c
+@@ -151,13 +151,6 @@ static struct iommu_domain *sprd_iommu_domain_alloc(unsigned int domain_type)
+ 	return &dom->domain;
+ }
+ 
+-static void sprd_iommu_domain_free(struct iommu_domain *domain)
+-{
+-	struct sprd_iommu_domain *dom = to_sprd_domain(domain);
+-
+-	kfree(dom);
+-}
+-
+ static void sprd_iommu_first_vpn(struct sprd_iommu_domain *dom)
+ {
+ 	struct sprd_iommu_device *sdev = dom->sdev;
+@@ -230,6 +223,28 @@ static void sprd_iommu_hw_en(struct sprd_iommu_device *sdev, bool en)
+ 	sprd_iommu_update_bits(sdev, reg_cfg, mask, 0, val);
+ }
+ 
++static void sprd_iommu_cleanup(struct sprd_iommu_domain *dom)
++{
++	size_t pgt_size;
++
++	/* Nothing need to do if the domain hasn't been attached */
++	if (!dom->sdev)
++		return;
++
++	pgt_size = sprd_iommu_pgt_size(&dom->domain);
++	dma_free_coherent(dom->sdev->dev, pgt_size, dom->pgt_va, dom->pgt_pa);
++	dom->sdev = NULL;
++	sprd_iommu_hw_en(dom->sdev, false);
++}
++
++static void sprd_iommu_domain_free(struct iommu_domain *domain)
++{
++	struct sprd_iommu_domain *dom = to_sprd_domain(domain);
++
++	sprd_iommu_cleanup(dom);
++	kfree(dom);
++}
++
+ static int sprd_iommu_attach_device(struct iommu_domain *domain,
+ 				    struct device *dev)
+ {
+diff --git a/drivers/irqchip/Kconfig b/drivers/irqchip/Kconfig
+index 7dc990eb2c9ba..9efb383fc688f 100644
+--- a/drivers/irqchip/Kconfig
++++ b/drivers/irqchip/Kconfig
+@@ -35,6 +35,7 @@ config ARM_GIC_V3
+ 	select IRQ_DOMAIN_HIERARCHY
+ 	select PARTITION_PERCPU
+ 	select GENERIC_IRQ_EFFECTIVE_AFF_MASK if SMP
++	select HAVE_ARM_SMCCC_DISCOVERY
+ 
+ config ARM_GIC_V3_ITS
+ 	bool
+diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
+index fd134e1f481a2..6fcee221f2017 100644
+--- a/drivers/irqchip/irq-gic-v3.c
++++ b/drivers/irqchip/irq-gic-v3.c
+@@ -24,6 +24,9 @@
+ #include <linux/irqchip/arm-gic-common.h>
+ #include <linux/irqchip/arm-gic-v3.h>
+ #include <linux/irqchip/irq-partition-percpu.h>
++#include <linux/bitfield.h>
++#include <linux/bits.h>
++#include <linux/arm-smccc.h>
+ 
+ #include <asm/cputype.h>
+ #include <asm/exception.h>
+@@ -47,6 +50,7 @@ struct redist_region {
+ 
+ struct gic_chip_data {
+ 	struct fwnode_handle	*fwnode;
++	phys_addr_t		dist_phys_base;
+ 	void __iomem		*dist_base;
+ 	struct redist_region	*redist_regions;
+ 	struct rdists		rdists;
+@@ -59,6 +63,10 @@ struct gic_chip_data {
+ 	struct partition_desc	**ppi_descs;
+ };
+ 
++#define T241_CHIPS_MAX		4
++static void __iomem *t241_dist_base_alias[T241_CHIPS_MAX] __read_mostly;
++static DEFINE_STATIC_KEY_FALSE(gic_nvidia_t241_erratum);
++
+ static struct gic_chip_data gic_data __read_mostly;
+ static DEFINE_STATIC_KEY_TRUE(supports_deactivate_key);
+ 
+@@ -179,6 +187,39 @@ static inline bool gic_irq_in_rdist(struct irq_data *d)
+ 	}
+ }
+ 
++static inline void __iomem *gic_dist_base_alias(struct irq_data *d)
++{
++	if (static_branch_unlikely(&gic_nvidia_t241_erratum)) {
++		irq_hw_number_t hwirq = irqd_to_hwirq(d);
++		u32 chip;
++
++		/*
++		 * For the erratum T241-FABRIC-4, read accesses to GICD_In{E}
++		 * registers are directed to the chip that owns the SPI. The
++		 * the alias region can also be used for writes to the
++		 * GICD_In{E} except GICD_ICENABLERn. Each chip has support
++		 * for 320 {E}SPIs. Mappings for all 4 chips:
++		 *    Chip0 = 32-351
++		 *    Chip1 = 352-671
++		 *    Chip2 = 672-991
++		 *    Chip3 = 4096-4415
++		 */
++		switch (__get_intid_range(hwirq)) {
++		case SPI_RANGE:
++			chip = (hwirq - 32) / 320;
++			break;
++		case ESPI_RANGE:
++			chip = 3;
++			break;
++		default:
++			unreachable();
++		}
++		return t241_dist_base_alias[chip];
++	}
++
++	return gic_data.dist_base;
++}
++
+ static inline void __iomem *gic_dist_base(struct irq_data *d)
+ {
+ 	switch (get_intid_range(d)) {
+@@ -337,7 +378,7 @@ static int gic_peek_irq(struct irq_data *d, u32 offset)
+ 	if (gic_irq_in_rdist(d))
+ 		base = gic_data_rdist_sgi_base();
+ 	else
+-		base = gic_data.dist_base;
++		base = gic_dist_base_alias(d);
+ 
+ 	return !!(readl_relaxed(base + offset + (index / 32) * 4) & mask);
+ }
+@@ -588,7 +629,7 @@ static int gic_set_type(struct irq_data *d, unsigned int type)
+ 	if (gic_irq_in_rdist(d))
+ 		base = gic_data_rdist_sgi_base();
+ 	else
+-		base = gic_data.dist_base;
++		base = gic_dist_base_alias(d);
+ 
+ 	offset = convert_offset_index(d, GICD_ICFGR, &index);
+ 
+@@ -1708,6 +1749,43 @@ static bool gic_enable_quirk_hip06_07(void *data)
+ 	return false;
+ }
+ 
++#define T241_CHIPN_MASK		GENMASK_ULL(45, 44)
++#define T241_CHIP_GICDA_OFFSET	0x1580000
++#define SMCCC_SOC_ID_T241	0x036b0241
++
++static bool gic_enable_quirk_nvidia_t241(void *data)
++{
++	s32 soc_id = arm_smccc_get_soc_id_version();
++	unsigned long chip_bmask = 0;
++	phys_addr_t phys;
++	u32 i;
++
++	/* Check JEP106 code for NVIDIA T241 chip (036b:0241) */
++	if ((soc_id < 0) || (soc_id != SMCCC_SOC_ID_T241))
++		return false;
++
++	/* Find the chips based on GICR regions PHYS addr */
++	for (i = 0; i < gic_data.nr_redist_regions; i++) {
++		chip_bmask |= BIT(FIELD_GET(T241_CHIPN_MASK,
++				  (u64)gic_data.redist_regions[i].phys_base));
++	}
++
++	if (hweight32(chip_bmask) < 3)
++		return false;
++
++	/* Setup GICD alias regions */
++	for (i = 0; i < ARRAY_SIZE(t241_dist_base_alias); i++) {
++		if (chip_bmask & BIT(i)) {
++			phys = gic_data.dist_phys_base + T241_CHIP_GICDA_OFFSET;
++			phys |= FIELD_PREP(T241_CHIPN_MASK, i);
++			t241_dist_base_alias[i] = ioremap(phys, SZ_64K);
++			WARN_ON_ONCE(!t241_dist_base_alias[i]);
++		}
++	}
++	static_branch_enable(&gic_nvidia_t241_erratum);
++	return true;
++}
++
+ static const struct gic_quirk gic_quirks[] = {
+ 	{
+ 		.desc	= "GICv3: Qualcomm MSM8996 broken firmware",
+@@ -1739,6 +1817,12 @@ static const struct gic_quirk gic_quirks[] = {
+ 		.mask	= 0xe8f00fff,
+ 		.init	= gic_enable_quirk_cavium_38539,
+ 	},
++	{
++		.desc	= "GICv3: NVIDIA erratum T241-FABRIC-4",
++		.iidr	= 0x0402043b,
++		.mask	= 0xffffffff,
++		.init	= gic_enable_quirk_nvidia_t241,
++	},
+ 	{
+ 	}
+ };
+@@ -1798,7 +1882,8 @@ static void gic_enable_nmi_support(void)
+ 		gic_chip.flags |= IRQCHIP_SUPPORTS_NMI;
+ }
+ 
+-static int __init gic_init_bases(void __iomem *dist_base,
++static int __init gic_init_bases(phys_addr_t dist_phys_base,
++				 void __iomem *dist_base,
+ 				 struct redist_region *rdist_regs,
+ 				 u32 nr_redist_regions,
+ 				 u64 redist_stride,
+@@ -1814,6 +1899,7 @@ static int __init gic_init_bases(void __iomem *dist_base,
+ 		pr_info("GIC: Using split EOI/Deactivate mode\n");
+ 
+ 	gic_data.fwnode = handle;
++	gic_data.dist_phys_base = dist_phys_base;
+ 	gic_data.dist_base = dist_base;
+ 	gic_data.redist_regions = rdist_regs;
+ 	gic_data.nr_redist_regions = nr_redist_regions;
+@@ -1841,10 +1927,13 @@ static int __init gic_init_bases(void __iomem *dist_base,
+ 	gic_data.domain = irq_domain_create_tree(handle, &gic_irq_domain_ops,
+ 						 &gic_data);
+ 	gic_data.rdists.rdist = alloc_percpu(typeof(*gic_data.rdists.rdist));
+-	gic_data.rdists.has_rvpeid = true;
+-	gic_data.rdists.has_vlpis = true;
+-	gic_data.rdists.has_direct_lpi = true;
+-	gic_data.rdists.has_vpend_valid_dirty = true;
++	if (!static_branch_unlikely(&gic_nvidia_t241_erratum)) {
++		/* Disable GICv4.x features for the erratum T241-FABRIC-4 */
++		gic_data.rdists.has_rvpeid = true;
++		gic_data.rdists.has_vlpis = true;
++		gic_data.rdists.has_direct_lpi = true;
++		gic_data.rdists.has_vpend_valid_dirty = true;
++	}
+ 
+ 	if (WARN_ON(!gic_data.domain) || WARN_ON(!gic_data.rdists.rdist)) {
+ 		err = -ENOMEM;
+@@ -2050,6 +2139,7 @@ static void __iomem *gic_of_iomap(struct device_node *node, int idx,
+ 
+ static int __init gic_of_init(struct device_node *node, struct device_node *parent)
+ {
++	phys_addr_t dist_phys_base;
+ 	void __iomem *dist_base;
+ 	struct redist_region *rdist_regs;
+ 	struct resource res;
+@@ -2063,6 +2153,8 @@ static int __init gic_of_init(struct device_node *node, struct device_node *pare
+ 		return PTR_ERR(dist_base);
+ 	}
+ 
++	dist_phys_base = res.start;
++
+ 	err = gic_validate_dist_version(dist_base);
+ 	if (err) {
+ 		pr_err("%pOF: no distributor detected, giving up\n", node);
+@@ -2094,8 +2186,8 @@ static int __init gic_of_init(struct device_node *node, struct device_node *pare
+ 
+ 	gic_enable_of_quirks(node, gic_quirks, &gic_data);
+ 
+-	err = gic_init_bases(dist_base, rdist_regs, nr_redist_regions,
+-			     redist_stride, &node->fwnode);
++	err = gic_init_bases(dist_phys_base, dist_base, rdist_regs,
++			     nr_redist_regions, redist_stride, &node->fwnode);
+ 	if (err)
+ 		goto out_unmap_rdist;
+ 
+@@ -2411,8 +2503,9 @@ gic_acpi_init(union acpi_subtable_headers *header, const unsigned long end)
+ 		goto out_redist_unmap;
+ 	}
+ 
+-	err = gic_init_bases(acpi_data.dist_base, acpi_data.redist_regs,
+-			     acpi_data.nr_redist_regions, 0, gsi_domain_handle);
++	err = gic_init_bases(dist->base_address, acpi_data.dist_base,
++			     acpi_data.redist_regs, acpi_data.nr_redist_regions,
++			     0, gsi_domain_handle);
+ 	if (err)
+ 		goto out_fwhandle_free;
+ 
+diff --git a/drivers/mcb/mcb-pci.c b/drivers/mcb/mcb-pci.c
+index dc88232d9af83..53d9202ff9a7c 100644
+--- a/drivers/mcb/mcb-pci.c
++++ b/drivers/mcb/mcb-pci.c
+@@ -31,7 +31,7 @@ static int mcb_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ {
+ 	struct resource *res;
+ 	struct priv *priv;
+-	int ret;
++	int ret, table_size;
+ 	unsigned long flags;
+ 
+ 	priv = devm_kzalloc(&pdev->dev, sizeof(struct priv), GFP_KERNEL);
+@@ -90,7 +90,30 @@ static int mcb_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 	if (ret < 0)
+ 		goto out_mcb_bus;
+ 
+-	dev_dbg(&pdev->dev, "Found %d cells\n", ret);
++	table_size = ret;
++
++	if (table_size < CHAM_HEADER_SIZE) {
++		/* Release the previous resources */
++		devm_iounmap(&pdev->dev, priv->base);
++		devm_release_mem_region(&pdev->dev, priv->mapbase, CHAM_HEADER_SIZE);
++
++		/* Then, allocate it again with the actual chameleon table size */
++		res = devm_request_mem_region(&pdev->dev, priv->mapbase,
++						table_size,
++						KBUILD_MODNAME);
++		if (!res) {
++			dev_err(&pdev->dev, "Failed to request PCI memory\n");
++			ret = -EBUSY;
++			goto out_mcb_bus;
++		}
++
++		priv->base = devm_ioremap(&pdev->dev, priv->mapbase, table_size);
++		if (!priv->base) {
++			dev_err(&pdev->dev, "Cannot ioremap\n");
++			ret = -ENOMEM;
++			goto out_mcb_bus;
++		}
++	}
+ 
+ 	mcb_bus_add_devices(priv->bus);
+ 
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 13321dbb5fbcf..d479e1656ef33 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -8029,16 +8029,16 @@ static int status_resync(struct seq_file *seq, struct mddev *mddev)
+ 	} else if (resync > max_sectors) {
+ 		resync = max_sectors;
+ 	} else {
+-		resync -= atomic_read(&mddev->recovery_active);
+-		if (resync < MD_RESYNC_ACTIVE) {
+-			/*
+-			 * Resync has started, but the subtraction has
+-			 * yielded one of the special values. Force it
+-			 * to active to ensure the status reports an
+-			 * active resync.
+-			 */
++		res = atomic_read(&mddev->recovery_active);
++		/*
++		 * Resync has started, but the subtraction has overflowed or
++		 * yielded one of the special values. Force it to active to
++		 * ensure the status reports an active resync.
++		 */
++		if (resync < res || resync - res < MD_RESYNC_ACTIVE)
+ 			resync = MD_RESYNC_ACTIVE;
+-		}
++		else
++			resync -= res;
+ 	}
+ 
+ 	if (resync == MD_RESYNC_NONE) {
+diff --git a/drivers/media/pci/cx23885/cx23885-core.c b/drivers/media/pci/cx23885/cx23885-core.c
+index 9232a966bcabb..2ce2914576cf2 100644
+--- a/drivers/media/pci/cx23885/cx23885-core.c
++++ b/drivers/media/pci/cx23885/cx23885-core.c
+@@ -1325,7 +1325,9 @@ void cx23885_free_buffer(struct cx23885_dev *dev, struct cx23885_buffer *buf)
+ {
+ 	struct cx23885_riscmem *risc = &buf->risc;
+ 
+-	dma_free_coherent(&dev->pci->dev, risc->size, risc->cpu, risc->dma);
++	if (risc->cpu)
++		dma_free_coherent(&dev->pci->dev, risc->size, risc->cpu, risc->dma);
++	memset(risc, 0, sizeof(*risc));
+ }
+ 
+ static void cx23885_tsport_reg_dump(struct cx23885_tsport *port)
+diff --git a/drivers/media/pci/cx23885/cx23885-video.c b/drivers/media/pci/cx23885/cx23885-video.c
+index 3d03f5e95786a..671fc0588e431 100644
+--- a/drivers/media/pci/cx23885/cx23885-video.c
++++ b/drivers/media/pci/cx23885/cx23885-video.c
+@@ -342,6 +342,7 @@ static int queue_setup(struct vb2_queue *q,
+ 
+ static int buffer_prepare(struct vb2_buffer *vb)
+ {
++	int ret;
+ 	struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
+ 	struct cx23885_dev *dev = vb->vb2_queue->drv_priv;
+ 	struct cx23885_buffer *buf =
+@@ -358,12 +359,12 @@ static int buffer_prepare(struct vb2_buffer *vb)
+ 
+ 	switch (dev->field) {
+ 	case V4L2_FIELD_TOP:
+-		cx23885_risc_buffer(dev->pci, &buf->risc,
++		ret = cx23885_risc_buffer(dev->pci, &buf->risc,
+ 				sgt->sgl, 0, UNSET,
+ 				buf->bpl, 0, dev->height);
+ 		break;
+ 	case V4L2_FIELD_BOTTOM:
+-		cx23885_risc_buffer(dev->pci, &buf->risc,
++		ret = cx23885_risc_buffer(dev->pci, &buf->risc,
+ 				sgt->sgl, UNSET, 0,
+ 				buf->bpl, 0, dev->height);
+ 		break;
+@@ -391,21 +392,21 @@ static int buffer_prepare(struct vb2_buffer *vb)
+ 			line0_offset = 0;
+ 			line1_offset = buf->bpl;
+ 		}
+-		cx23885_risc_buffer(dev->pci, &buf->risc,
++		ret = cx23885_risc_buffer(dev->pci, &buf->risc,
+ 				sgt->sgl, line0_offset,
+ 				line1_offset,
+ 				buf->bpl, buf->bpl,
+ 				dev->height >> 1);
+ 		break;
+ 	case V4L2_FIELD_SEQ_TB:
+-		cx23885_risc_buffer(dev->pci, &buf->risc,
++		ret = cx23885_risc_buffer(dev->pci, &buf->risc,
+ 				sgt->sgl,
+ 				0, buf->bpl * (dev->height >> 1),
+ 				buf->bpl, 0,
+ 				dev->height >> 1);
+ 		break;
+ 	case V4L2_FIELD_SEQ_BT:
+-		cx23885_risc_buffer(dev->pci, &buf->risc,
++		ret = cx23885_risc_buffer(dev->pci, &buf->risc,
+ 				sgt->sgl,
+ 				buf->bpl * (dev->height >> 1), 0,
+ 				buf->bpl, 0,
+@@ -418,7 +419,7 @@ static int buffer_prepare(struct vb2_buffer *vb)
+ 		buf, buf->vb.vb2_buf.index,
+ 		dev->width, dev->height, dev->fmt->depth, dev->fmt->fourcc,
+ 		(unsigned long)buf->risc.dma);
+-	return 0;
++	return ret;
+ }
+ 
+ static void buffer_finish(struct vb2_buffer *vb)
+diff --git a/drivers/media/pci/intel/ipu3/cio2-bridge.c b/drivers/media/pci/intel/ipu3/cio2-bridge.c
+index dfefe0d8aa959..45427a3a3a252 100644
+--- a/drivers/media/pci/intel/ipu3/cio2-bridge.c
++++ b/drivers/media/pci/intel/ipu3/cio2-bridge.c
+@@ -212,6 +212,7 @@ static void cio2_bridge_create_connection_swnodes(struct cio2_bridge *bridge,
+ 						  struct cio2_sensor *sensor)
+ {
+ 	struct software_node *nodes = sensor->swnodes;
++	char vcm_name[ACPI_ID_LEN + 4];
+ 
+ 	cio2_bridge_init_swnode_names(sensor);
+ 
+@@ -229,9 +230,13 @@ static void cio2_bridge_create_connection_swnodes(struct cio2_bridge *bridge,
+ 						sensor->node_names.endpoint,
+ 						&nodes[SWNODE_CIO2_PORT],
+ 						sensor->cio2_properties);
+-	if (sensor->ssdb.vcmtype)
+-		nodes[SWNODE_VCM] =
+-			NODE_VCM(cio2_vcm_types[sensor->ssdb.vcmtype - 1]);
++	if (sensor->ssdb.vcmtype) {
++		/* append ssdb.link to distinguish VCM nodes with same HID */
++		snprintf(vcm_name, sizeof(vcm_name), "%s-%u",
++			 cio2_vcm_types[sensor->ssdb.vcmtype - 1],
++			 sensor->ssdb.link);
++		nodes[SWNODE_VCM] = NODE_VCM(vcm_name);
++	}
+ 
+ 	cio2_bridge_init_swnode_group(sensor);
+ }
+@@ -295,7 +300,6 @@ static int cio2_bridge_connect_sensor(const struct cio2_sensor_config *cfg,
+ 		}
+ 
+ 		sensor = &bridge->sensors[bridge->n_sensors];
+-		strscpy(sensor->name, cfg->hid, sizeof(sensor->name));
+ 
+ 		ret = cio2_bridge_read_acpi_buffer(adev, "SSDB",
+ 						   &sensor->ssdb,
+@@ -303,6 +307,9 @@ static int cio2_bridge_connect_sensor(const struct cio2_sensor_config *cfg,
+ 		if (ret)
+ 			goto err_put_adev;
+ 
++		snprintf(sensor->name, sizeof(sensor->name), "%s-%u",
++			 cfg->hid, sensor->ssdb.link);
++
+ 		if (sensor->ssdb.vcmtype > ARRAY_SIZE(cio2_vcm_types)) {
+ 			dev_warn(&adev->dev, "Unknown VCM type %d\n",
+ 				 sensor->ssdb.vcmtype);
+diff --git a/drivers/media/pci/intel/ipu3/cio2-bridge.h b/drivers/media/pci/intel/ipu3/cio2-bridge.h
+index b93b749c65bda..b76ed8a641e20 100644
+--- a/drivers/media/pci/intel/ipu3/cio2-bridge.h
++++ b/drivers/media/pci/intel/ipu3/cio2-bridge.h
+@@ -113,7 +113,8 @@ struct cio2_sensor_config {
+ };
+ 
+ struct cio2_sensor {
+-	char name[ACPI_ID_LEN];
++	/* append ssdb.link(u8) in "-%u" format as suffix of HID */
++	char name[ACPI_ID_LEN + 4];
+ 	struct acpi_device *adev;
+ 	struct i2c_client *vcm_i2c_client;
+ 
+diff --git a/drivers/media/pci/netup_unidvb/netup_unidvb_core.c b/drivers/media/pci/netup_unidvb/netup_unidvb_core.c
+index 8287851b5ffdc..aaa1d2dedebdd 100644
+--- a/drivers/media/pci/netup_unidvb/netup_unidvb_core.c
++++ b/drivers/media/pci/netup_unidvb/netup_unidvb_core.c
+@@ -697,7 +697,7 @@ static void netup_unidvb_dma_fini(struct netup_unidvb_dev *ndev, int num)
+ 	netup_unidvb_dma_enable(dma, 0);
+ 	msleep(50);
+ 	cancel_work_sync(&dma->work);
+-	del_timer(&dma->timeout);
++	del_timer_sync(&dma->timeout);
+ }
+ 
+ static int netup_unidvb_dma_setup(struct netup_unidvb_dev *ndev)
+diff --git a/drivers/media/pci/tw68/tw68-video.c b/drivers/media/pci/tw68/tw68-video.c
+index 0cbc5b038073b..773a18702d369 100644
+--- a/drivers/media/pci/tw68/tw68-video.c
++++ b/drivers/media/pci/tw68/tw68-video.c
+@@ -437,6 +437,7 @@ static void tw68_buf_queue(struct vb2_buffer *vb)
+  */
+ static int tw68_buf_prepare(struct vb2_buffer *vb)
+ {
++	int ret;
+ 	struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
+ 	struct vb2_queue *vq = vb->vb2_queue;
+ 	struct tw68_dev *dev = vb2_get_drv_priv(vq);
+@@ -452,30 +453,30 @@ static int tw68_buf_prepare(struct vb2_buffer *vb)
+ 	bpl = (dev->width * dev->fmt->depth) >> 3;
+ 	switch (dev->field) {
+ 	case V4L2_FIELD_TOP:
+-		tw68_risc_buffer(dev->pci, buf, dma->sgl,
++		ret = tw68_risc_buffer(dev->pci, buf, dma->sgl,
+ 				 0, UNSET, bpl, 0, dev->height);
+ 		break;
+ 	case V4L2_FIELD_BOTTOM:
+-		tw68_risc_buffer(dev->pci, buf, dma->sgl,
++		ret = tw68_risc_buffer(dev->pci, buf, dma->sgl,
+ 				 UNSET, 0, bpl, 0, dev->height);
+ 		break;
+ 	case V4L2_FIELD_SEQ_TB:
+-		tw68_risc_buffer(dev->pci, buf, dma->sgl,
++		ret = tw68_risc_buffer(dev->pci, buf, dma->sgl,
+ 				 0, bpl * (dev->height >> 1),
+ 				 bpl, 0, dev->height >> 1);
+ 		break;
+ 	case V4L2_FIELD_SEQ_BT:
+-		tw68_risc_buffer(dev->pci, buf, dma->sgl,
++		ret = tw68_risc_buffer(dev->pci, buf, dma->sgl,
+ 				 bpl * (dev->height >> 1), 0,
+ 				 bpl, 0, dev->height >> 1);
+ 		break;
+ 	case V4L2_FIELD_INTERLACED:
+ 	default:
+-		tw68_risc_buffer(dev->pci, buf, dma->sgl,
++		ret = tw68_risc_buffer(dev->pci, buf, dma->sgl,
+ 				 0, bpl, bpl, bpl, dev->height >> 1);
+ 		break;
+ 	}
+-	return 0;
++	return ret;
+ }
+ 
+ static void tw68_buf_finish(struct vb2_buffer *vb)
+@@ -485,7 +486,8 @@ static void tw68_buf_finish(struct vb2_buffer *vb)
+ 	struct tw68_dev *dev = vb2_get_drv_priv(vq);
+ 	struct tw68_buf *buf = container_of(vbuf, struct tw68_buf, vb);
+ 
+-	dma_free_coherent(&dev->pci->dev, buf->size, buf->cpu, buf->dma);
++	if (buf->cpu)
++		dma_free_coherent(&dev->pci->dev, buf->size, buf->cpu, buf->dma);
+ }
+ 
+ static int tw68_start_streaming(struct vb2_queue *q, unsigned int count)
+diff --git a/drivers/media/platform/mediatek/vcodec/mtk_vcodec_dec.c b/drivers/media/platform/mediatek/vcodec/mtk_vcodec_dec.c
+index c99705681a03e..93fcea821001f 100644
+--- a/drivers/media/platform/mediatek/vcodec/mtk_vcodec_dec.c
++++ b/drivers/media/platform/mediatek/vcodec/mtk_vcodec_dec.c
+@@ -735,6 +735,13 @@ int vb2ops_vdec_queue_setup(struct vb2_queue *vq, unsigned int *nbuffers,
+ 	}
+ 
+ 	if (*nplanes) {
++		if (vq->type == V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE) {
++			if (*nplanes != q_data->fmt->num_planes)
++				return -EINVAL;
++		} else {
++			if (*nplanes != 1)
++				return -EINVAL;
++		}
+ 		for (i = 0; i < *nplanes; i++) {
+ 			if (sizes[i] < q_data->sizeimage[i])
+ 				return -EINVAL;
+diff --git a/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg.c b/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg.c
+index f085f14d676ad..c898116b763a2 100644
+--- a/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg.c
++++ b/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg.c
+@@ -637,6 +637,11 @@ static u32 mxc_jpeg_get_plane_size(struct mxc_jpeg_q_data *q_data, u32 plane_no)
+ 		return q_data->sizeimage[plane_no];
+ 
+ 	size = q_data->sizeimage[fmt->mem_planes - 1];
++
++	/* Should be impossible given mxc_formats. */
++	if (WARN_ON_ONCE(fmt->comp_planes > ARRAY_SIZE(q_data->sizeimage)))
++		return size;
++
+ 	for (i = fmt->mem_planes; i < fmt->comp_planes; i++)
+ 		size += q_data->sizeimage[i];
+ 
+diff --git a/drivers/media/platform/renesas/vsp1/vsp1_drm.c b/drivers/media/platform/renesas/vsp1/vsp1_drm.c
+index c6f25200982c8..7fe375b6322cd 100644
+--- a/drivers/media/platform/renesas/vsp1/vsp1_drm.c
++++ b/drivers/media/platform/renesas/vsp1/vsp1_drm.c
+@@ -66,7 +66,9 @@ static int vsp1_du_insert_uif(struct vsp1_device *vsp1,
+ 			      struct vsp1_entity *prev, unsigned int prev_pad,
+ 			      struct vsp1_entity *next, unsigned int next_pad)
+ {
+-	struct v4l2_subdev_format format;
++	struct v4l2_subdev_format format = {
++		.which = V4L2_SUBDEV_FORMAT_ACTIVE,
++	};
+ 	int ret;
+ 
+ 	if (!uif) {
+@@ -82,8 +84,6 @@ static int vsp1_du_insert_uif(struct vsp1_device *vsp1,
+ 	prev->sink = uif;
+ 	prev->sink_pad = UIF_PAD_SINK;
+ 
+-	memset(&format, 0, sizeof(format));
+-	format.which = V4L2_SUBDEV_FORMAT_ACTIVE;
+ 	format.pad = prev_pad;
+ 
+ 	ret = v4l2_subdev_call(&prev->subdev, pad, get_fmt, NULL, &format);
+@@ -118,8 +118,12 @@ static int vsp1_du_pipeline_setup_rpf(struct vsp1_device *vsp1,
+ 				      struct vsp1_entity *uif,
+ 				      unsigned int brx_input)
+ {
+-	struct v4l2_subdev_selection sel;
+-	struct v4l2_subdev_format format;
++	struct v4l2_subdev_selection sel = {
++		.which = V4L2_SUBDEV_FORMAT_ACTIVE,
++	};
++	struct v4l2_subdev_format format = {
++		.which = V4L2_SUBDEV_FORMAT_ACTIVE,
++	};
+ 	const struct v4l2_rect *crop;
+ 	int ret;
+ 
+@@ -129,8 +133,6 @@ static int vsp1_du_pipeline_setup_rpf(struct vsp1_device *vsp1,
+ 	 */
+ 	crop = &vsp1->drm->inputs[rpf->entity.index].crop;
+ 
+-	memset(&format, 0, sizeof(format));
+-	format.which = V4L2_SUBDEV_FORMAT_ACTIVE;
+ 	format.pad = RWPF_PAD_SINK;
+ 	format.format.width = crop->width + crop->left;
+ 	format.format.height = crop->height + crop->top;
+@@ -147,8 +149,6 @@ static int vsp1_du_pipeline_setup_rpf(struct vsp1_device *vsp1,
+ 		__func__, format.format.width, format.format.height,
+ 		format.format.code, rpf->entity.index);
+ 
+-	memset(&sel, 0, sizeof(sel));
+-	sel.which = V4L2_SUBDEV_FORMAT_ACTIVE;
+ 	sel.pad = RWPF_PAD_SINK;
+ 	sel.target = V4L2_SEL_TGT_CROP;
+ 	sel.r = *crop;
+diff --git a/drivers/media/platform/renesas/vsp1/vsp1_entity.c b/drivers/media/platform/renesas/vsp1/vsp1_entity.c
+index 4c3bd2b1ca287..c31f05a80bb56 100644
+--- a/drivers/media/platform/renesas/vsp1/vsp1_entity.c
++++ b/drivers/media/platform/renesas/vsp1/vsp1_entity.c
+@@ -184,15 +184,14 @@ vsp1_entity_get_pad_selection(struct vsp1_entity *entity,
+ int vsp1_entity_init_cfg(struct v4l2_subdev *subdev,
+ 			 struct v4l2_subdev_state *sd_state)
+ {
+-	struct v4l2_subdev_format format;
+ 	unsigned int pad;
+ 
+ 	for (pad = 0; pad < subdev->entity.num_pads - 1; ++pad) {
+-		memset(&format, 0, sizeof(format));
+-
+-		format.pad = pad;
+-		format.which = sd_state ? V4L2_SUBDEV_FORMAT_TRY
+-			     : V4L2_SUBDEV_FORMAT_ACTIVE;
++		struct v4l2_subdev_format format = {
++			.pad = pad,
++			.which = sd_state ? V4L2_SUBDEV_FORMAT_TRY
++			       : V4L2_SUBDEV_FORMAT_ACTIVE,
++		};
+ 
+ 		v4l2_subdev_call(subdev, pad, set_fmt, sd_state, &format);
+ 	}
+diff --git a/drivers/media/platform/samsung/exynos4-is/fimc-capture.c b/drivers/media/platform/samsung/exynos4-is/fimc-capture.c
+index e3b95a2b7e040..beaee54ee73bf 100644
+--- a/drivers/media/platform/samsung/exynos4-is/fimc-capture.c
++++ b/drivers/media/platform/samsung/exynos4-is/fimc-capture.c
+@@ -763,7 +763,10 @@ static int fimc_pipeline_try_format(struct fimc_ctx *ctx,
+ 	struct fimc_dev *fimc = ctx->fimc_dev;
+ 	struct fimc_pipeline *p = to_fimc_pipeline(fimc->vid_cap.ve.pipe);
+ 	struct v4l2_subdev *sd = p->subdevs[IDX_SENSOR];
+-	struct v4l2_subdev_format sfmt;
++	struct v4l2_subdev_format sfmt = {
++		.which = set ? V4L2_SUBDEV_FORMAT_ACTIVE
++		       : V4L2_SUBDEV_FORMAT_TRY,
++	};
+ 	struct v4l2_mbus_framefmt *mf = &sfmt.format;
+ 	struct media_entity *me;
+ 	struct fimc_fmt *ffmt;
+@@ -774,9 +777,7 @@ static int fimc_pipeline_try_format(struct fimc_ctx *ctx,
+ 	if (WARN_ON(!sd || !tfmt))
+ 		return -EINVAL;
+ 
+-	memset(&sfmt, 0, sizeof(sfmt));
+ 	sfmt.format = *tfmt;
+-	sfmt.which = set ? V4L2_SUBDEV_FORMAT_ACTIVE : V4L2_SUBDEV_FORMAT_TRY;
+ 
+ 	me = fimc_pipeline_get_head(&sd->entity);
+ 
+diff --git a/drivers/media/platform/ti/am437x/am437x-vpfe.c b/drivers/media/platform/ti/am437x/am437x-vpfe.c
+index 2dfae9bc0bba8..dffac89cbd210 100644
+--- a/drivers/media/platform/ti/am437x/am437x-vpfe.c
++++ b/drivers/media/platform/ti/am437x/am437x-vpfe.c
+@@ -1499,7 +1499,9 @@ static int vpfe_enum_size(struct file *file, void  *priv,
+ 			  struct v4l2_frmsizeenum *fsize)
+ {
+ 	struct vpfe_device *vpfe = video_drvdata(file);
+-	struct v4l2_subdev_frame_size_enum fse;
++	struct v4l2_subdev_frame_size_enum fse = {
++		.which = V4L2_SUBDEV_FORMAT_ACTIVE,
++	};
+ 	struct v4l2_subdev *sd = vpfe->current_subdev->sd;
+ 	struct vpfe_fmt *fmt;
+ 	int ret;
+@@ -1514,11 +1516,9 @@ static int vpfe_enum_size(struct file *file, void  *priv,
+ 
+ 	memset(fsize->reserved, 0x0, sizeof(fsize->reserved));
+ 
+-	memset(&fse, 0x0, sizeof(fse));
+ 	fse.index = fsize->index;
+ 	fse.pad = 0;
+ 	fse.code = fmt->code;
+-	fse.which = V4L2_SUBDEV_FORMAT_ACTIVE;
+ 	ret = v4l2_subdev_call(sd, pad, enum_frame_size, NULL, &fse);
+ 	if (ret)
+ 		return ret;
+@@ -2146,7 +2146,6 @@ vpfe_async_bound(struct v4l2_async_notifier *notifier,
+ {
+ 	struct vpfe_device *vpfe = container_of(notifier->v4l2_dev,
+ 					       struct vpfe_device, v4l2_dev);
+-	struct v4l2_subdev_mbus_code_enum mbus_code;
+ 	struct vpfe_subdev_info *sdinfo;
+ 	struct vpfe_fmt *fmt;
+ 	int ret = 0;
+@@ -2173,9 +2172,11 @@ vpfe_async_bound(struct v4l2_async_notifier *notifier,
+ 
+ 	vpfe->num_active_fmt = 0;
+ 	for (j = 0, i = 0; (ret != -EINVAL); ++j) {
+-		memset(&mbus_code, 0, sizeof(mbus_code));
+-		mbus_code.index = j;
+-		mbus_code.which = V4L2_SUBDEV_FORMAT_ACTIVE;
++		struct v4l2_subdev_mbus_code_enum mbus_code = {
++			.index = j,
++			.which = V4L2_SUBDEV_FORMAT_ACTIVE,
++		};
++
+ 		ret = v4l2_subdev_call(subdev, pad, enum_mbus_code,
+ 				       NULL, &mbus_code);
+ 		if (ret)
+diff --git a/drivers/media/platform/ti/cal/cal-video.c b/drivers/media/platform/ti/cal/cal-video.c
+index 4eade409d5d36..bbfd2719725aa 100644
+--- a/drivers/media/platform/ti/cal/cal-video.c
++++ b/drivers/media/platform/ti/cal/cal-video.c
+@@ -811,7 +811,6 @@ static const struct v4l2_file_operations cal_fops = {
+ 
+ static int cal_ctx_v4l2_init_formats(struct cal_ctx *ctx)
+ {
+-	struct v4l2_subdev_mbus_code_enum mbus_code;
+ 	struct v4l2_mbus_framefmt mbus_fmt;
+ 	const struct cal_format_info *fmtinfo;
+ 	unsigned int i, j, k;
+@@ -826,10 +825,11 @@ static int cal_ctx_v4l2_init_formats(struct cal_ctx *ctx)
+ 	ctx->num_active_fmt = 0;
+ 
+ 	for (j = 0, i = 0; ; ++j) {
++		struct v4l2_subdev_mbus_code_enum mbus_code = {
++			.index = j,
++			.which = V4L2_SUBDEV_FORMAT_ACTIVE,
++		};
+ 
+-		memset(&mbus_code, 0, sizeof(mbus_code));
+-		mbus_code.index = j;
+-		mbus_code.which = V4L2_SUBDEV_FORMAT_ACTIVE;
+ 		ret = v4l2_subdev_call(ctx->phy->source, pad, enum_mbus_code,
+ 				       NULL, &mbus_code);
+ 		if (ret == -EINVAL)
+diff --git a/drivers/media/usb/dvb-usb/cxusb-analog.c b/drivers/media/usb/dvb-usb/cxusb-analog.c
+index e93183ddd7975..deba5224cb8df 100644
+--- a/drivers/media/usb/dvb-usb/cxusb-analog.c
++++ b/drivers/media/usb/dvb-usb/cxusb-analog.c
+@@ -1014,7 +1014,10 @@ static int cxusb_medion_try_s_fmt_vid_cap(struct file *file,
+ {
+ 	struct dvb_usb_device *dvbdev = video_drvdata(file);
+ 	struct cxusb_medion_dev *cxdev = dvbdev->priv;
+-	struct v4l2_subdev_format subfmt;
++	struct v4l2_subdev_format subfmt = {
++		.which = isset ? V4L2_SUBDEV_FORMAT_ACTIVE :
++			 V4L2_SUBDEV_FORMAT_TRY,
++	};
+ 	u32 field;
+ 	int ret;
+ 
+@@ -1024,9 +1027,6 @@ static int cxusb_medion_try_s_fmt_vid_cap(struct file *file,
+ 	field = vb2_start_streaming_called(&cxdev->videoqueue) ?
+ 		cxdev->field_order : cxusb_medion_field_order(cxdev);
+ 
+-	memset(&subfmt, 0, sizeof(subfmt));
+-	subfmt.which = isset ? V4L2_SUBDEV_FORMAT_ACTIVE :
+-		V4L2_SUBDEV_FORMAT_TRY;
+ 	subfmt.format.width = f->fmt.pix.width & ~1;
+ 	subfmt.format.height = f->fmt.pix.height & ~1;
+ 	subfmt.format.code = MEDIA_BUS_FMT_FIXED;
+@@ -1464,7 +1464,9 @@ int cxusb_medion_analog_init(struct dvb_usb_device *dvbdev)
+ 					    .buf = tuner_analog_msg_data,
+ 					    .len =
+ 					    sizeof(tuner_analog_msg_data) };
+-	struct v4l2_subdev_format subfmt;
++	struct v4l2_subdev_format subfmt = {
++		.which = V4L2_SUBDEV_FORMAT_ACTIVE,
++	};
+ 	int ret;
+ 
+ 	/* switch tuner to analog mode so IF demod will become accessible */
+@@ -1507,8 +1509,6 @@ int cxusb_medion_analog_init(struct dvb_usb_device *dvbdev)
+ 	v4l2_subdev_call(cxdev->tuner, video, s_std, cxdev->norm);
+ 	v4l2_subdev_call(cxdev->cx25840, video, s_std, cxdev->norm);
+ 
+-	memset(&subfmt, 0, sizeof(subfmt));
+-	subfmt.which = V4L2_SUBDEV_FORMAT_ACTIVE;
+ 	subfmt.format.width = cxdev->width;
+ 	subfmt.format.height = cxdev->height;
+ 	subfmt.format.code = MEDIA_BUS_FMT_FIXED;
+diff --git a/drivers/media/usb/pvrusb2/Kconfig b/drivers/media/usb/pvrusb2/Kconfig
+index f2b64e49c5a20..0df10270dbdfc 100644
+--- a/drivers/media/usb/pvrusb2/Kconfig
++++ b/drivers/media/usb/pvrusb2/Kconfig
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0-only
+ config VIDEO_PVRUSB2
+ 	tristate "Hauppauge WinTV-PVR USB2 support"
+-	depends on VIDEO_DEV && I2C
++	depends on VIDEO_DEV && I2C && DVB_CORE
+ 	select VIDEO_TUNER
+ 	select VIDEO_TVEEPROM
+ 	select VIDEO_CX2341X
+@@ -37,6 +37,7 @@ config VIDEO_PVRUSB2_DVB
+ 	bool "pvrusb2 ATSC/DVB support"
+ 	default y
+ 	depends on VIDEO_PVRUSB2 && DVB_CORE
++	depends on VIDEO_PVRUSB2=m || DVB_CORE=y
+ 	select DVB_LGDT330X if MEDIA_SUBDRV_AUTOSELECT
+ 	select DVB_S5H1409 if MEDIA_SUBDRV_AUTOSELECT
+ 	select DVB_S5H1411 if MEDIA_SUBDRV_AUTOSELECT
+diff --git a/drivers/memstick/host/r592.c b/drivers/memstick/host/r592.c
+index 1d35d147552d4..42bfc46842b82 100644
+--- a/drivers/memstick/host/r592.c
++++ b/drivers/memstick/host/r592.c
+@@ -829,7 +829,7 @@ static void r592_remove(struct pci_dev *pdev)
+ 	/* Stop the processing thread.
+ 	That ensures that we won't take any more requests */
+ 	kthread_stop(dev->io_thread);
+-
++	del_timer_sync(&dev->detect_timer);
+ 	r592_enable_device(dev, false);
+ 
+ 	while (!error && dev->req) {
+diff --git a/drivers/message/fusion/mptlan.c b/drivers/message/fusion/mptlan.c
+index 142eb5d5d9df6..de2e7bcf47847 100644
+--- a/drivers/message/fusion/mptlan.c
++++ b/drivers/message/fusion/mptlan.c
+@@ -1433,7 +1433,9 @@ mptlan_remove(struct pci_dev *pdev)
+ {
+ 	MPT_ADAPTER 		*ioc = pci_get_drvdata(pdev);
+ 	struct net_device	*dev = ioc->netdev;
++	struct mpt_lan_priv *priv = netdev_priv(dev);
+ 
++	cancel_delayed_work_sync(&priv->post_buckets_task);
+ 	if(dev != NULL) {
+ 		unregister_netdev(dev);
+ 		free_netdev(dev);
+diff --git a/drivers/mfd/dln2.c b/drivers/mfd/dln2.c
+index 6cd0b0c752d6e..c3149729cec2e 100644
+--- a/drivers/mfd/dln2.c
++++ b/drivers/mfd/dln2.c
+@@ -827,6 +827,7 @@ out_stop_rx:
+ 	dln2_stop_rx_urbs(dln2);
+ 
+ out_free:
++	usb_put_dev(dln2->usb_dev);
+ 	dln2_free(dln2);
+ 
+ 	return ret;
+diff --git a/drivers/mfd/intel-lpss-pci.c b/drivers/mfd/intel-lpss-pci.c
+index dde31c50a6320..699f44ffff0e4 100644
+--- a/drivers/mfd/intel-lpss-pci.c
++++ b/drivers/mfd/intel-lpss-pci.c
+@@ -447,6 +447,21 @@ static const struct pci_device_id intel_lpss_pci_ids[] = {
+ 	{ PCI_VDEVICE(INTEL, 0x7e79), (kernel_ulong_t)&bxt_i2c_info },
+ 	{ PCI_VDEVICE(INTEL, 0x7e7a), (kernel_ulong_t)&bxt_i2c_info },
+ 	{ PCI_VDEVICE(INTEL, 0x7e7b), (kernel_ulong_t)&bxt_i2c_info },
++	/* MTP-S */
++	{ PCI_VDEVICE(INTEL, 0x7f28), (kernel_ulong_t)&bxt_uart_info },
++	{ PCI_VDEVICE(INTEL, 0x7f29), (kernel_ulong_t)&bxt_uart_info },
++	{ PCI_VDEVICE(INTEL, 0x7f2a), (kernel_ulong_t)&tgl_info },
++	{ PCI_VDEVICE(INTEL, 0x7f2b), (kernel_ulong_t)&tgl_info },
++	{ PCI_VDEVICE(INTEL, 0x7f4c), (kernel_ulong_t)&bxt_i2c_info },
++	{ PCI_VDEVICE(INTEL, 0x7f4d), (kernel_ulong_t)&bxt_i2c_info },
++	{ PCI_VDEVICE(INTEL, 0x7f4e), (kernel_ulong_t)&bxt_i2c_info },
++	{ PCI_VDEVICE(INTEL, 0x7f4f), (kernel_ulong_t)&bxt_i2c_info },
++	{ PCI_VDEVICE(INTEL, 0x7f5c), (kernel_ulong_t)&bxt_uart_info },
++	{ PCI_VDEVICE(INTEL, 0x7f5d), (kernel_ulong_t)&bxt_uart_info },
++	{ PCI_VDEVICE(INTEL, 0x7f5e), (kernel_ulong_t)&tgl_info },
++	{ PCI_VDEVICE(INTEL, 0x7f5f), (kernel_ulong_t)&tgl_info },
++	{ PCI_VDEVICE(INTEL, 0x7f7a), (kernel_ulong_t)&bxt_i2c_info },
++	{ PCI_VDEVICE(INTEL, 0x7f7b), (kernel_ulong_t)&bxt_i2c_info },
+ 	/* LKF */
+ 	{ PCI_VDEVICE(INTEL, 0x98a8), (kernel_ulong_t)&bxt_uart_info },
+ 	{ PCI_VDEVICE(INTEL, 0x98a9), (kernel_ulong_t)&bxt_uart_info },
+diff --git a/drivers/mfd/intel_soc_pmic_chtwc.c b/drivers/mfd/intel_soc_pmic_chtwc.c
+index d53dae2554906..871776d511e31 100644
+--- a/drivers/mfd/intel_soc_pmic_chtwc.c
++++ b/drivers/mfd/intel_soc_pmic_chtwc.c
+@@ -159,11 +159,19 @@ static const struct dmi_system_id cht_wc_model_dmi_ids[] = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "Mipad2"),
+ 		},
+ 	}, {
+-		/* Lenovo Yoga Book X90F / X91F / X91L */
++		/* Lenovo Yoga Book X90F / X90L */
+ 		.driver_data = (void *)(long)INTEL_CHT_WC_LENOVO_YOGABOOK1,
+ 		.matches = {
+-			/* Non exact match to match all versions */
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Lenovo YB1-X9"),
++			DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Intel Corporation"),
++			DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "CHERRYVIEW D1 PLATFORM"),
++			DMI_EXACT_MATCH(DMI_PRODUCT_VERSION, "YETI-11"),
++		},
++	}, {
++		/* Lenovo Yoga Book X91F / X91L */
++		.driver_data = (void *)(long)INTEL_CHT_WC_LENOVO_YOGABOOK1,
++		.matches = {
++			/* Non exact match to match F + L versions */
++			DMI_MATCH(DMI_PRODUCT_NAME, "Lenovo YB1-X91"),
+ 		},
+ 	}, {
+ 		/* Lenovo Yoga Tab 3 Pro YT3-X90F */
+diff --git a/drivers/misc/lkdtm/stackleak.c b/drivers/misc/lkdtm/stackleak.c
+index 025b133297a6b..f1d0221609138 100644
+--- a/drivers/misc/lkdtm/stackleak.c
++++ b/drivers/misc/lkdtm/stackleak.c
+@@ -43,12 +43,14 @@ static void noinstr check_stackleak_irqoff(void)
+ 	 * STACK_END_MAGIC, and in either casee something is seriously wrong.
+ 	 */
+ 	if (current_sp < task_stack_low || current_sp >= task_stack_high) {
++		instrumentation_begin();
+ 		pr_err("FAIL: current_stack_pointer (0x%lx) outside of task stack bounds [0x%lx..0x%lx]\n",
+ 		       current_sp, task_stack_low, task_stack_high - 1);
+ 		test_failed = true;
+ 		goto out;
+ 	}
+ 	if (lowest_sp < task_stack_low || lowest_sp >= task_stack_high) {
++		instrumentation_begin();
+ 		pr_err("FAIL: current->lowest_stack (0x%lx) outside of task stack bounds [0x%lx..0x%lx]\n",
+ 		       lowest_sp, task_stack_low, task_stack_high - 1);
+ 		test_failed = true;
+@@ -86,11 +88,14 @@ static void noinstr check_stackleak_irqoff(void)
+ 		if (*(unsigned long *)poison_low == STACKLEAK_POISON)
+ 			continue;
+ 
++		instrumentation_begin();
+ 		pr_err("FAIL: non-poison value %lu bytes below poison boundary: 0x%lx\n",
+ 		       poison_high - poison_low, *(unsigned long *)poison_low);
+ 		test_failed = true;
++		goto out;
+ 	}
+ 
++	instrumentation_begin();
+ 	pr_info("stackleak stack usage:\n"
+ 		"  high offset: %lu bytes\n"
+ 		"  current:     %lu bytes\n"
+@@ -113,6 +118,7 @@ out:
+ 	} else {
+ 		pr_info("OK: the rest of the thread stack is properly erased\n");
+ 	}
++	instrumentation_end();
+ }
+ 
+ static void lkdtm_STACKLEAK_ERASING(void)
+diff --git a/drivers/net/bonding/bond_netlink.c b/drivers/net/bonding/bond_netlink.c
+index c2d080fc4fc4e..27cbe148f0db5 100644
+--- a/drivers/net/bonding/bond_netlink.c
++++ b/drivers/net/bonding/bond_netlink.c
+@@ -84,6 +84,11 @@ nla_put_failure:
+ 	return -EMSGSIZE;
+ }
+ 
++/* Limit the max delay range to 300s */
++static struct netlink_range_validation delay_range = {
++	.max = 300000,
++};
++
+ static const struct nla_policy bond_policy[IFLA_BOND_MAX + 1] = {
+ 	[IFLA_BOND_MODE]		= { .type = NLA_U8 },
+ 	[IFLA_BOND_ACTIVE_SLAVE]	= { .type = NLA_U32 },
+@@ -114,7 +119,7 @@ static const struct nla_policy bond_policy[IFLA_BOND_MAX + 1] = {
+ 	[IFLA_BOND_AD_ACTOR_SYSTEM]	= { .type = NLA_BINARY,
+ 					    .len  = ETH_ALEN },
+ 	[IFLA_BOND_TLB_DYNAMIC_LB]	= { .type = NLA_U8 },
+-	[IFLA_BOND_PEER_NOTIF_DELAY]    = { .type = NLA_U32 },
++	[IFLA_BOND_PEER_NOTIF_DELAY]    = NLA_POLICY_FULL_RANGE(NLA_U32, &delay_range),
+ 	[IFLA_BOND_MISSED_MAX]		= { .type = NLA_U8 },
+ 	[IFLA_BOND_NS_IP6_TARGET]	= { .type = NLA_NESTED },
+ };
+diff --git a/drivers/net/bonding/bond_options.c b/drivers/net/bonding/bond_options.c
+index f71d5517f8293..5310cb488f11d 100644
+--- a/drivers/net/bonding/bond_options.c
++++ b/drivers/net/bonding/bond_options.c
+@@ -169,6 +169,12 @@ static const struct bond_opt_value bond_num_peer_notif_tbl[] = {
+ 	{ NULL,      -1,  0}
+ };
+ 
++static const struct bond_opt_value bond_peer_notif_delay_tbl[] = {
++	{ "off",     0,   0},
++	{ "maxval",  300000, BOND_VALFLAG_MAX},
++	{ NULL,      -1,  0}
++};
++
+ static const struct bond_opt_value bond_primary_reselect_tbl[] = {
+ 	{ "always",  BOND_PRI_RESELECT_ALWAYS,  BOND_VALFLAG_DEFAULT},
+ 	{ "better",  BOND_PRI_RESELECT_BETTER,  0},
+@@ -488,7 +494,7 @@ static const struct bond_option bond_opts[BOND_OPT_LAST] = {
+ 		.id = BOND_OPT_PEER_NOTIF_DELAY,
+ 		.name = "peer_notif_delay",
+ 		.desc = "Delay between each peer notification on failover event, in milliseconds",
+-		.values = bond_intmax_tbl,
++		.values = bond_peer_notif_delay_tbl,
+ 		.set = bond_option_peer_notif_delay_set
+ 	}
+ };
+diff --git a/drivers/net/can/dev/skb.c b/drivers/net/can/dev/skb.c
+index 241ec636e91fd..f6d05b3ef59ab 100644
+--- a/drivers/net/can/dev/skb.c
++++ b/drivers/net/can/dev/skb.c
+@@ -54,7 +54,8 @@ int can_put_echo_skb(struct sk_buff *skb, struct net_device *dev,
+ 	/* check flag whether this packet has to be looped back */
+ 	if (!(dev->flags & IFF_ECHO) ||
+ 	    (skb->protocol != htons(ETH_P_CAN) &&
+-	     skb->protocol != htons(ETH_P_CANFD))) {
++	     skb->protocol != htons(ETH_P_CANFD) &&
++	     skb->protocol != htons(ETH_P_CANXL))) {
+ 		kfree_skb(skb);
+ 		return 0;
+ 	}
+diff --git a/drivers/net/can/kvaser_pciefd.c b/drivers/net/can/kvaser_pciefd.c
+index bcad11709bc98..956a4a57396f9 100644
+--- a/drivers/net/can/kvaser_pciefd.c
++++ b/drivers/net/can/kvaser_pciefd.c
+@@ -71,10 +71,12 @@ MODULE_DESCRIPTION("CAN driver for Kvaser CAN/PCIe devices");
+ #define KVASER_PCIEFD_SYSID_BUILD_REG (KVASER_PCIEFD_SYSID_BASE + 0x14)
+ /* Shared receive buffer registers */
+ #define KVASER_PCIEFD_SRB_BASE 0x1f200
++#define KVASER_PCIEFD_SRB_FIFO_LAST_REG (KVASER_PCIEFD_SRB_BASE + 0x1f4)
+ #define KVASER_PCIEFD_SRB_CMD_REG (KVASER_PCIEFD_SRB_BASE + 0x200)
+ #define KVASER_PCIEFD_SRB_IEN_REG (KVASER_PCIEFD_SRB_BASE + 0x204)
+ #define KVASER_PCIEFD_SRB_IRQ_REG (KVASER_PCIEFD_SRB_BASE + 0x20c)
+ #define KVASER_PCIEFD_SRB_STAT_REG (KVASER_PCIEFD_SRB_BASE + 0x210)
++#define KVASER_PCIEFD_SRB_RX_NR_PACKETS_REG (KVASER_PCIEFD_SRB_BASE + 0x214)
+ #define KVASER_PCIEFD_SRB_CTRL_REG (KVASER_PCIEFD_SRB_BASE + 0x218)
+ /* EPCS flash controller registers */
+ #define KVASER_PCIEFD_SPI_BASE 0x1fc00
+@@ -111,6 +113,9 @@ MODULE_DESCRIPTION("CAN driver for Kvaser CAN/PCIe devices");
+ /* DMA support */
+ #define KVASER_PCIEFD_SRB_STAT_DMA BIT(24)
+ 
++/* SRB current packet level */
++#define KVASER_PCIEFD_SRB_RX_NR_PACKETS_MASK 0xff
++
+ /* DMA Enable */
+ #define KVASER_PCIEFD_SRB_CTRL_DMA_ENABLE BIT(0)
+ 
+@@ -526,7 +531,7 @@ static int kvaser_pciefd_set_tx_irq(struct kvaser_pciefd_can *can)
+ 	      KVASER_PCIEFD_KCAN_IRQ_TOF | KVASER_PCIEFD_KCAN_IRQ_ABD |
+ 	      KVASER_PCIEFD_KCAN_IRQ_TAE | KVASER_PCIEFD_KCAN_IRQ_TAL |
+ 	      KVASER_PCIEFD_KCAN_IRQ_FDIC | KVASER_PCIEFD_KCAN_IRQ_BPP |
+-	      KVASER_PCIEFD_KCAN_IRQ_TAR | KVASER_PCIEFD_KCAN_IRQ_TFD;
++	      KVASER_PCIEFD_KCAN_IRQ_TAR;
+ 
+ 	iowrite32(msk, can->reg_base + KVASER_PCIEFD_KCAN_IEN_REG);
+ 
+@@ -554,6 +559,8 @@ static void kvaser_pciefd_setup_controller(struct kvaser_pciefd_can *can)
+ 
+ 	if (can->can.ctrlmode & CAN_CTRLMODE_LISTENONLY)
+ 		mode |= KVASER_PCIEFD_KCAN_MODE_LOM;
++	else
++		mode &= ~KVASER_PCIEFD_KCAN_MODE_LOM;
+ 
+ 	mode |= KVASER_PCIEFD_KCAN_MODE_EEN;
+ 	mode |= KVASER_PCIEFD_KCAN_MODE_EPEN;
+@@ -572,7 +579,7 @@ static void kvaser_pciefd_start_controller_flush(struct kvaser_pciefd_can *can)
+ 
+ 	spin_lock_irqsave(&can->lock, irq);
+ 	iowrite32(-1, can->reg_base + KVASER_PCIEFD_KCAN_IRQ_REG);
+-	iowrite32(KVASER_PCIEFD_KCAN_IRQ_ABD | KVASER_PCIEFD_KCAN_IRQ_TFD,
++	iowrite32(KVASER_PCIEFD_KCAN_IRQ_ABD,
+ 		  can->reg_base + KVASER_PCIEFD_KCAN_IEN_REG);
+ 
+ 	status = ioread32(can->reg_base + KVASER_PCIEFD_KCAN_STAT_REG);
+@@ -615,7 +622,7 @@ static int kvaser_pciefd_bus_on(struct kvaser_pciefd_can *can)
+ 	iowrite32(0, can->reg_base + KVASER_PCIEFD_KCAN_IEN_REG);
+ 	iowrite32(-1, can->reg_base + KVASER_PCIEFD_KCAN_IRQ_REG);
+ 
+-	iowrite32(KVASER_PCIEFD_KCAN_IRQ_ABD | KVASER_PCIEFD_KCAN_IRQ_TFD,
++	iowrite32(KVASER_PCIEFD_KCAN_IRQ_ABD,
+ 		  can->reg_base + KVASER_PCIEFD_KCAN_IEN_REG);
+ 
+ 	mode = ioread32(can->reg_base + KVASER_PCIEFD_KCAN_MODE_REG);
+@@ -719,6 +726,7 @@ static int kvaser_pciefd_stop(struct net_device *netdev)
+ 		iowrite32(0, can->reg_base + KVASER_PCIEFD_KCAN_IEN_REG);
+ 		del_timer(&can->bec_poll_timer);
+ 	}
++	can->can.state = CAN_STATE_STOPPED;
+ 	close_candev(netdev);
+ 
+ 	return ret;
+@@ -1007,8 +1015,7 @@ static int kvaser_pciefd_setup_can_ctrls(struct kvaser_pciefd *pcie)
+ 		SET_NETDEV_DEV(netdev, &pcie->pci->dev);
+ 
+ 		iowrite32(-1, can->reg_base + KVASER_PCIEFD_KCAN_IRQ_REG);
+-		iowrite32(KVASER_PCIEFD_KCAN_IRQ_ABD |
+-			  KVASER_PCIEFD_KCAN_IRQ_TFD,
++		iowrite32(KVASER_PCIEFD_KCAN_IRQ_ABD,
+ 			  can->reg_base + KVASER_PCIEFD_KCAN_IEN_REG);
+ 
+ 		pcie->can[i] = can;
+@@ -1058,6 +1065,7 @@ static int kvaser_pciefd_setup_dma(struct kvaser_pciefd *pcie)
+ {
+ 	int i;
+ 	u32 srb_status;
++	u32 srb_packet_count;
+ 	dma_addr_t dma_addr[KVASER_PCIEFD_DMA_COUNT];
+ 
+ 	/* Disable the DMA */
+@@ -1085,6 +1093,15 @@ static int kvaser_pciefd_setup_dma(struct kvaser_pciefd *pcie)
+ 		  KVASER_PCIEFD_SRB_CMD_RDB1,
+ 		  pcie->reg_base + KVASER_PCIEFD_SRB_CMD_REG);
+ 
++	/* Empty Rx FIFO */
++	srb_packet_count = ioread32(pcie->reg_base + KVASER_PCIEFD_SRB_RX_NR_PACKETS_REG) &
++			   KVASER_PCIEFD_SRB_RX_NR_PACKETS_MASK;
++	while (srb_packet_count) {
++		/* Drop current packet in FIFO */
++		ioread32(pcie->reg_base + KVASER_PCIEFD_SRB_FIFO_LAST_REG);
++		srb_packet_count--;
++	}
++
+ 	srb_status = ioread32(pcie->reg_base + KVASER_PCIEFD_SRB_STAT_REG);
+ 	if (!(srb_status & KVASER_PCIEFD_SRB_STAT_DI)) {
+ 		dev_err(&pcie->pci->dev, "DMA not idle before enabling\n");
+@@ -1425,9 +1442,6 @@ static int kvaser_pciefd_handle_status_packet(struct kvaser_pciefd *pcie,
+ 		cmd = KVASER_PCIEFD_KCAN_CMD_AT;
+ 		cmd |= ++can->cmd_seq << KVASER_PCIEFD_KCAN_CMD_SEQ_SHIFT;
+ 		iowrite32(cmd, can->reg_base + KVASER_PCIEFD_KCAN_CMD_REG);
+-
+-		iowrite32(KVASER_PCIEFD_KCAN_IRQ_TFD,
+-			  can->reg_base + KVASER_PCIEFD_KCAN_IEN_REG);
+ 	} else if (p->header[0] & KVASER_PCIEFD_SPACK_IDET &&
+ 		   p->header[0] & KVASER_PCIEFD_SPACK_IRM &&
+ 		   cmdseq == (p->header[1] & KVASER_PCIEFD_PACKET_SEQ_MSK) &&
+@@ -1714,15 +1728,6 @@ static int kvaser_pciefd_transmit_irq(struct kvaser_pciefd_can *can)
+ 	if (irq & KVASER_PCIEFD_KCAN_IRQ_TOF)
+ 		netdev_err(can->can.dev, "Tx FIFO overflow\n");
+ 
+-	if (irq & KVASER_PCIEFD_KCAN_IRQ_TFD) {
+-		u8 count = ioread32(can->reg_base +
+-				    KVASER_PCIEFD_KCAN_TX_NPACKETS_REG) & 0xff;
+-
+-		if (count == 0)
+-			iowrite32(KVASER_PCIEFD_KCAN_CTRL_EFLUSH,
+-				  can->reg_base + KVASER_PCIEFD_KCAN_CTRL_REG);
+-	}
+-
+ 	if (irq & KVASER_PCIEFD_KCAN_IRQ_BPP)
+ 		netdev_err(can->can.dev,
+ 			   "Fail to change bittiming, when not in reset mode\n");
+@@ -1824,6 +1829,11 @@ static int kvaser_pciefd_probe(struct pci_dev *pdev,
+ 	if (err)
+ 		goto err_teardown_can_ctrls;
+ 
++	err = request_irq(pcie->pci->irq, kvaser_pciefd_irq_handler,
++			  IRQF_SHARED, KVASER_PCIEFD_DRV_NAME, pcie);
++	if (err)
++		goto err_teardown_can_ctrls;
++
+ 	iowrite32(KVASER_PCIEFD_SRB_IRQ_DPD0 | KVASER_PCIEFD_SRB_IRQ_DPD1,
+ 		  pcie->reg_base + KVASER_PCIEFD_SRB_IRQ_REG);
+ 
+@@ -1844,11 +1854,6 @@ static int kvaser_pciefd_probe(struct pci_dev *pdev,
+ 	iowrite32(KVASER_PCIEFD_SRB_CMD_RDB1,
+ 		  pcie->reg_base + KVASER_PCIEFD_SRB_CMD_REG);
+ 
+-	err = request_irq(pcie->pci->irq, kvaser_pciefd_irq_handler,
+-			  IRQF_SHARED, KVASER_PCIEFD_DRV_NAME, pcie);
+-	if (err)
+-		goto err_teardown_can_ctrls;
+-
+ 	err = kvaser_pciefd_reg_candev(pcie);
+ 	if (err)
+ 		goto err_free_irq;
+@@ -1856,6 +1861,8 @@ static int kvaser_pciefd_probe(struct pci_dev *pdev,
+ 	return 0;
+ 
+ err_free_irq:
++	/* Disable PCI interrupts */
++	iowrite32(0, pcie->reg_base + KVASER_PCIEFD_IEN_REG);
+ 	free_irq(pcie->pci->irq, pcie);
+ 
+ err_teardown_can_ctrls:
+diff --git a/drivers/net/dsa/mv88e6xxx/port.h b/drivers/net/dsa/mv88e6xxx/port.h
+index aec9d4fd20e36..d19b6303b91f0 100644
+--- a/drivers/net/dsa/mv88e6xxx/port.h
++++ b/drivers/net/dsa/mv88e6xxx/port.h
+@@ -276,7 +276,7 @@
+ /* Offset 0x10: Extended Port Control Command */
+ #define MV88E6393X_PORT_EPC_CMD		0x10
+ #define MV88E6393X_PORT_EPC_CMD_BUSY	0x8000
+-#define MV88E6393X_PORT_EPC_CMD_WRITE	0x0300
++#define MV88E6393X_PORT_EPC_CMD_WRITE	0x3000
+ #define MV88E6393X_PORT_EPC_INDEX_PORT_ETYPE	0x02
+ 
+ /* Offset 0x11: Extended Port Control Data */
+diff --git a/drivers/net/dsa/rzn1_a5psw.c b/drivers/net/dsa/rzn1_a5psw.c
+index 919027cf20124..c37d2e5372302 100644
+--- a/drivers/net/dsa/rzn1_a5psw.c
++++ b/drivers/net/dsa/rzn1_a5psw.c
+@@ -120,6 +120,22 @@ static void a5psw_port_mgmtfwd_set(struct a5psw *a5psw, int port, bool enable)
+ 	a5psw_port_pattern_set(a5psw, port, A5PSW_PATTERN_MGMTFWD, enable);
+ }
+ 
++static void a5psw_port_tx_enable(struct a5psw *a5psw, int port, bool enable)
++{
++	u32 mask = A5PSW_PORT_ENA_TX(port);
++	u32 reg = enable ? mask : 0;
++
++	/* Even though the port TX is disabled through TXENA bit in the
++	 * PORT_ENA register, it can still send BPDUs. This depends on the tag
++	 * configuration added when sending packets from the CPU port to the
++	 * switch port. Indeed, when using forced forwarding without filtering,
++	 * even disabled ports will be able to send packets that are tagged.
++	 * This allows to implement STP support when ports are in a state where
++	 * forwarding traffic should be stopped but BPDUs should still be sent.
++	 */
++	a5psw_reg_rmw(a5psw, A5PSW_PORT_ENA, mask, reg);
++}
++
+ static void a5psw_port_enable_set(struct a5psw *a5psw, int port, bool enable)
+ {
+ 	u32 port_ena = 0;
+@@ -292,6 +308,22 @@ static int a5psw_set_ageing_time(struct dsa_switch *ds, unsigned int msecs)
+ 	return 0;
+ }
+ 
++static void a5psw_port_learning_set(struct a5psw *a5psw, int port, bool learn)
++{
++	u32 mask = A5PSW_INPUT_LEARN_DIS(port);
++	u32 reg = !learn ? mask : 0;
++
++	a5psw_reg_rmw(a5psw, A5PSW_INPUT_LEARN, mask, reg);
++}
++
++static void a5psw_port_rx_block_set(struct a5psw *a5psw, int port, bool block)
++{
++	u32 mask = A5PSW_INPUT_LEARN_BLOCK(port);
++	u32 reg = block ? mask : 0;
++
++	a5psw_reg_rmw(a5psw, A5PSW_INPUT_LEARN, mask, reg);
++}
++
+ static void a5psw_flooding_set_resolution(struct a5psw *a5psw, int port,
+ 					  bool set)
+ {
+@@ -308,6 +340,14 @@ static void a5psw_flooding_set_resolution(struct a5psw *a5psw, int port,
+ 		a5psw_reg_writel(a5psw, offsets[i], a5psw->bridged_ports);
+ }
+ 
++static void a5psw_port_set_standalone(struct a5psw *a5psw, int port,
++				      bool standalone)
++{
++	a5psw_port_learning_set(a5psw, port, !standalone);
++	a5psw_flooding_set_resolution(a5psw, port, !standalone);
++	a5psw_port_mgmtfwd_set(a5psw, port, standalone);
++}
++
+ static int a5psw_port_bridge_join(struct dsa_switch *ds, int port,
+ 				  struct dsa_bridge bridge,
+ 				  bool *tx_fwd_offload,
+@@ -323,8 +363,7 @@ static int a5psw_port_bridge_join(struct dsa_switch *ds, int port,
+ 	}
+ 
+ 	a5psw->br_dev = bridge.dev;
+-	a5psw_flooding_set_resolution(a5psw, port, true);
+-	a5psw_port_mgmtfwd_set(a5psw, port, false);
++	a5psw_port_set_standalone(a5psw, port, false);
+ 
+ 	return 0;
+ }
+@@ -334,8 +373,7 @@ static void a5psw_port_bridge_leave(struct dsa_switch *ds, int port,
+ {
+ 	struct a5psw *a5psw = ds->priv;
+ 
+-	a5psw_flooding_set_resolution(a5psw, port, false);
+-	a5psw_port_mgmtfwd_set(a5psw, port, true);
++	a5psw_port_set_standalone(a5psw, port, true);
+ 
+ 	/* No more ports bridged */
+ 	if (a5psw->bridged_ports == BIT(A5PSW_CPU_PORT))
+@@ -344,28 +382,35 @@ static void a5psw_port_bridge_leave(struct dsa_switch *ds, int port,
+ 
+ static void a5psw_port_stp_state_set(struct dsa_switch *ds, int port, u8 state)
+ {
+-	u32 mask = A5PSW_INPUT_LEARN_DIS(port) | A5PSW_INPUT_LEARN_BLOCK(port);
++	bool learning_enabled, rx_enabled, tx_enabled;
+ 	struct a5psw *a5psw = ds->priv;
+-	u32 reg = 0;
+ 
+ 	switch (state) {
+ 	case BR_STATE_DISABLED:
+ 	case BR_STATE_BLOCKING:
+-		reg |= A5PSW_INPUT_LEARN_DIS(port);
+-		reg |= A5PSW_INPUT_LEARN_BLOCK(port);
+-		break;
+ 	case BR_STATE_LISTENING:
+-		reg |= A5PSW_INPUT_LEARN_DIS(port);
++		rx_enabled = false;
++		tx_enabled = false;
++		learning_enabled = false;
+ 		break;
+ 	case BR_STATE_LEARNING:
+-		reg |= A5PSW_INPUT_LEARN_BLOCK(port);
++		rx_enabled = false;
++		tx_enabled = false;
++		learning_enabled = true;
+ 		break;
+ 	case BR_STATE_FORWARDING:
+-	default:
++		rx_enabled = true;
++		tx_enabled = true;
++		learning_enabled = true;
+ 		break;
++	default:
++		dev_err(ds->dev, "invalid STP state: %d\n", state);
++		return;
+ 	}
+ 
+-	a5psw_reg_rmw(a5psw, A5PSW_INPUT_LEARN, mask, reg);
++	a5psw_port_learning_set(a5psw, port, learning_enabled);
++	a5psw_port_rx_block_set(a5psw, port, !rx_enabled);
++	a5psw_port_tx_enable(a5psw, port, tx_enabled);
+ }
+ 
+ static void a5psw_port_fast_age(struct dsa_switch *ds, int port)
+@@ -673,7 +718,7 @@ static int a5psw_setup(struct dsa_switch *ds)
+ 	}
+ 
+ 	/* Configure management port */
+-	reg = A5PSW_CPU_PORT | A5PSW_MGMT_CFG_DISCARD;
++	reg = A5PSW_CPU_PORT | A5PSW_MGMT_CFG_ENABLE;
+ 	a5psw_reg_writel(a5psw, A5PSW_MGMT_CFG, reg);
+ 
+ 	/* Set pattern 0 to forward all frame to mgmt port */
+@@ -722,13 +767,15 @@ static int a5psw_setup(struct dsa_switch *ds)
+ 		if (dsa_port_is_unused(dp))
+ 			continue;
+ 
+-		/* Enable egress flooding for CPU port */
+-		if (dsa_port_is_cpu(dp))
++		/* Enable egress flooding and learning for CPU port */
++		if (dsa_port_is_cpu(dp)) {
+ 			a5psw_flooding_set_resolution(a5psw, port, true);
++			a5psw_port_learning_set(a5psw, port, true);
++		}
+ 
+-		/* Enable management forward only for user ports */
++		/* Enable standalone mode for user ports */
+ 		if (dsa_port_is_user(dp))
+-			a5psw_port_mgmtfwd_set(a5psw, port, true);
++			a5psw_port_set_standalone(a5psw, port, true);
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/net/dsa/rzn1_a5psw.h b/drivers/net/dsa/rzn1_a5psw.h
+index c67abd49c013d..b869192eef3f7 100644
+--- a/drivers/net/dsa/rzn1_a5psw.h
++++ b/drivers/net/dsa/rzn1_a5psw.h
+@@ -19,6 +19,7 @@
+ #define A5PSW_PORT_OFFSET(port)		(0x400 * (port))
+ 
+ #define A5PSW_PORT_ENA			0x8
++#define A5PSW_PORT_ENA_TX(port)		BIT(port)
+ #define A5PSW_PORT_ENA_RX_SHIFT		16
+ #define A5PSW_PORT_ENA_TX_RX(port)	(BIT((port) + A5PSW_PORT_ENA_RX_SHIFT) | \
+ 					 BIT(port))
+@@ -36,7 +37,7 @@
+ #define A5PSW_INPUT_LEARN_BLOCK(p)	BIT(p)
+ 
+ #define A5PSW_MGMT_CFG			0x20
+-#define A5PSW_MGMT_CFG_DISCARD		BIT(7)
++#define A5PSW_MGMT_CFG_ENABLE		BIT(6)
+ 
+ #define A5PSW_MODE_CFG			0x24
+ #define A5PSW_MODE_STATS_RESET		BIT(31)
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+index 6bd18eb5137f4..2dd8ee4a6f75b 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+@@ -2864,7 +2864,7 @@ static int bnxt_get_nvram_directory(struct net_device *dev, u32 len, u8 *data)
+ 	if (rc)
+ 		return rc;
+ 
+-	buflen = dir_entries * entry_length;
++	buflen = mul_u32_u32(dir_entries, entry_length);
+ 	buf = hwrm_req_dma_slice(bp, req, buflen, &dma_handle);
+ 	if (!buf) {
+ 		hwrm_req_drop(bp, req);
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+index d937daa8ee883..eca0c92c0c84d 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+@@ -3450,7 +3450,7 @@ err_clk_disable:
+ 	return ret;
+ }
+ 
+-static void bcmgenet_netif_stop(struct net_device *dev)
++static void bcmgenet_netif_stop(struct net_device *dev, bool stop_phy)
+ {
+ 	struct bcmgenet_priv *priv = netdev_priv(dev);
+ 
+@@ -3465,7 +3465,8 @@ static void bcmgenet_netif_stop(struct net_device *dev)
+ 	/* Disable MAC transmit. TX DMA disabled must be done before this */
+ 	umac_enable_set(priv, CMD_TX_EN, false);
+ 
+-	phy_stop(dev->phydev);
++	if (stop_phy)
++		phy_stop(dev->phydev);
+ 	bcmgenet_disable_rx_napi(priv);
+ 	bcmgenet_intr_disable(priv);
+ 
+@@ -3486,7 +3487,7 @@ static int bcmgenet_close(struct net_device *dev)
+ 
+ 	netif_dbg(priv, ifdown, dev, "bcmgenet_close\n");
+ 
+-	bcmgenet_netif_stop(dev);
++	bcmgenet_netif_stop(dev, false);
+ 
+ 	/* Really kill the PHY state machine and disconnect from it */
+ 	phy_disconnect(dev->phydev);
+@@ -4304,7 +4305,7 @@ static int bcmgenet_suspend(struct device *d)
+ 
+ 	netif_device_detach(dev);
+ 
+-	bcmgenet_netif_stop(dev);
++	bcmgenet_netif_stop(dev, true);
+ 
+ 	if (!device_may_wakeup(d))
+ 		phy_suspend(dev->phydev);
+diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
+index 42ec6ca3bf035..577d94821b3e7 100644
+--- a/drivers/net/ethernet/freescale/fec_main.c
++++ b/drivers/net/ethernet/freescale/fec_main.c
+@@ -3798,7 +3798,6 @@ static int fec_enet_txq_xmit_frame(struct fec_enet_private *fep,
+ 	entries_free = fec_enet_get_free_txdesc_num(txq);
+ 	if (entries_free < MAX_SKB_FRAGS + 1) {
+ 		netdev_err(fep->netdev, "NOT enough BD for SG!\n");
+-		xdp_return_frame(frame);
+ 		return NETDEV_TX_BUSY;
+ 	}
+ 
+@@ -4478,9 +4477,11 @@ fec_drv_remove(struct platform_device *pdev)
+ 	struct device_node *np = pdev->dev.of_node;
+ 	int ret;
+ 
+-	ret = pm_runtime_resume_and_get(&pdev->dev);
++	ret = pm_runtime_get_sync(&pdev->dev);
+ 	if (ret < 0)
+-		return ret;
++		dev_err(&pdev->dev,
++			"Failed to resume device in remove callback (%pe)\n",
++			ERR_PTR(ret));
+ 
+ 	cancel_work_sync(&fep->tx_timeout_work);
+ 	fec_ptp_stop(pdev);
+@@ -4493,8 +4494,13 @@ fec_drv_remove(struct platform_device *pdev)
+ 		of_phy_deregister_fixed_link(np);
+ 	of_node_put(fep->phy_node);
+ 
+-	clk_disable_unprepare(fep->clk_ahb);
+-	clk_disable_unprepare(fep->clk_ipg);
++	/* After pm_runtime_get_sync() failed, the clks are still off, so skip
++	 * disabling them again.
++	 */
++	if (ret >= 0) {
++		clk_disable_unprepare(fep->clk_ahb);
++		clk_disable_unprepare(fep->clk_ipg);
++	}
+ 	pm_runtime_put_noidle(&pdev->dev);
+ 	pm_runtime_disable(&pdev->dev);
+ 
+diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
+index 07111c241e0eb..60bf0e3fb2176 100644
+--- a/drivers/net/ethernet/google/gve/gve_main.c
++++ b/drivers/net/ethernet/google/gve/gve_main.c
+@@ -284,19 +284,6 @@ static int gve_napi_poll_dqo(struct napi_struct *napi, int budget)
+ 	bool reschedule = false;
+ 	int work_done = 0;
+ 
+-	/* Clear PCI MSI-X Pending Bit Array (PBA)
+-	 *
+-	 * This bit is set if an interrupt event occurs while the vector is
+-	 * masked. If this bit is set and we reenable the interrupt, it will
+-	 * fire again. Since we're just about to poll the queue state, we don't
+-	 * need it to fire again.
+-	 *
+-	 * Under high softirq load, it's possible that the interrupt condition
+-	 * is triggered twice before we got the chance to process it.
+-	 */
+-	gve_write_irq_doorbell_dqo(priv, block,
+-				   GVE_ITR_NO_UPDATE_DQO | GVE_ITR_CLEAR_PBA_BIT_DQO);
+-
+ 	if (block->tx)
+ 		reschedule |= gve_tx_poll_dqo(block, /*do_clean=*/true);
+ 
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_common/hclge_comm_cmd.c b/drivers/net/ethernet/hisilicon/hns3/hns3_common/hclge_comm_cmd.c
+index f671a63cecde4..c797d54f98caa 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_common/hclge_comm_cmd.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_common/hclge_comm_cmd.c
+@@ -330,9 +330,25 @@ static int hclge_comm_cmd_csq_done(struct hclge_comm_hw *hw)
+ 	return head == hw->cmq.csq.next_to_use;
+ }
+ 
+-static void hclge_comm_wait_for_resp(struct hclge_comm_hw *hw,
++static u32 hclge_get_cmdq_tx_timeout(u16 opcode, u32 tx_timeout)
++{
++	static const struct hclge_cmdq_tx_timeout_map cmdq_tx_timeout_map[] = {
++		{HCLGE_OPC_CFG_RST_TRIGGER, HCLGE_COMM_CMDQ_TX_TIMEOUT_500MS},
++	};
++	u32 i;
++
++	for (i = 0; i < ARRAY_SIZE(cmdq_tx_timeout_map); i++)
++		if (cmdq_tx_timeout_map[i].opcode == opcode)
++			return cmdq_tx_timeout_map[i].tx_timeout;
++
++	return tx_timeout;
++}
++
++static void hclge_comm_wait_for_resp(struct hclge_comm_hw *hw, u16 opcode,
+ 				     bool *is_completed)
+ {
++	u32 cmdq_tx_timeout = hclge_get_cmdq_tx_timeout(opcode,
++							hw->cmq.tx_timeout);
+ 	u32 timeout = 0;
+ 
+ 	do {
+@@ -342,7 +358,7 @@ static void hclge_comm_wait_for_resp(struct hclge_comm_hw *hw,
+ 		}
+ 		udelay(1);
+ 		timeout++;
+-	} while (timeout < hw->cmq.tx_timeout);
++	} while (timeout < cmdq_tx_timeout);
+ }
+ 
+ static int hclge_comm_cmd_convert_err_code(u16 desc_ret)
+@@ -406,7 +422,8 @@ static int hclge_comm_cmd_check_result(struct hclge_comm_hw *hw,
+ 	 * if multi descriptors to be sent, use the first one to check
+ 	 */
+ 	if (HCLGE_COMM_SEND_SYNC(le16_to_cpu(desc->flag)))
+-		hclge_comm_wait_for_resp(hw, &is_completed);
++		hclge_comm_wait_for_resp(hw, le16_to_cpu(desc->opcode),
++					 &is_completed);
+ 
+ 	if (!is_completed)
+ 		ret = -EBADE;
+@@ -528,7 +545,7 @@ int hclge_comm_cmd_queue_init(struct pci_dev *pdev, struct hclge_comm_hw *hw)
+ 	cmdq->crq.desc_num = HCLGE_COMM_NIC_CMQ_DESC_NUM;
+ 
+ 	/* Setup Tx write back timeout */
+-	cmdq->tx_timeout = HCLGE_COMM_CMDQ_TX_TIMEOUT;
++	cmdq->tx_timeout = HCLGE_COMM_CMDQ_TX_TIMEOUT_DEFAULT;
+ 
+ 	/* Setup queue rings */
+ 	ret = hclge_comm_alloc_cmd_queue(hw, HCLGE_COMM_TYPE_CSQ);
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_common/hclge_comm_cmd.h b/drivers/net/ethernet/hisilicon/hns3/hns3_common/hclge_comm_cmd.h
+index b1f9383b418f4..2b2928c6dccfc 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_common/hclge_comm_cmd.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_common/hclge_comm_cmd.h
+@@ -54,7 +54,8 @@
+ #define HCLGE_COMM_NIC_SW_RST_RDY		BIT(HCLGE_COMM_NIC_SW_RST_RDY_B)
+ #define HCLGE_COMM_NIC_CMQ_DESC_NUM_S		3
+ #define HCLGE_COMM_NIC_CMQ_DESC_NUM		1024
+-#define HCLGE_COMM_CMDQ_TX_TIMEOUT		30000
++#define HCLGE_COMM_CMDQ_TX_TIMEOUT_DEFAULT	30000
++#define HCLGE_COMM_CMDQ_TX_TIMEOUT_500MS	500000
+ 
+ enum hclge_opcode_type {
+ 	/* Generic commands */
+@@ -357,6 +358,11 @@ struct hclge_comm_caps_bit_map {
+ 	u16 local_bit;
+ };
+ 
++struct hclge_cmdq_tx_timeout_map {
++	u32 opcode;
++	u32 tx_timeout;
++};
++
+ struct hclge_comm_firmware_compat_cmd {
+ 	__le32 compat;
+ 	u8 rsv[20];
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c b/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c
+index 66feb23f7b7b6..bcccd82a2620f 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c
+@@ -130,7 +130,7 @@ static struct hns3_dbg_cmd_info hns3_dbg_cmd[] = {
+ 		.name = "tx_bd_queue",
+ 		.cmd = HNAE3_DBG_CMD_TX_BD,
+ 		.dentry = HNS3_DBG_DENTRY_TX_BD,
+-		.buf_len = HNS3_DBG_READ_LEN_4MB,
++		.buf_len = HNS3_DBG_READ_LEN_5MB,
+ 		.init = hns3_dbg_bd_file_init,
+ 	},
+ 	{
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.h b/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.h
+index 97578eabb7d8b..4a5ef8a90a104 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.h
+@@ -10,6 +10,7 @@
+ #define HNS3_DBG_READ_LEN_128KB	0x20000
+ #define HNS3_DBG_READ_LEN_1MB	0x100000
+ #define HNS3_DBG_READ_LEN_4MB	0x400000
++#define HNS3_DBG_READ_LEN_5MB	0x500000
+ #define HNS3_DBG_WRITE_LEN	1024
+ 
+ #define HNS3_DBG_DATA_STR_LEN	32
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+index 07ad5f35219e2..50e956d6c3b25 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+@@ -8053,12 +8053,15 @@ static void hclge_ae_stop(struct hnae3_handle *handle)
+ 	/* If it is not PF reset or FLR, the firmware will disable the MAC,
+ 	 * so it only need to stop phy here.
+ 	 */
+-	if (test_bit(HCLGE_STATE_RST_HANDLING, &hdev->state) &&
+-	    hdev->reset_type != HNAE3_FUNC_RESET &&
+-	    hdev->reset_type != HNAE3_FLR_RESET) {
+-		hclge_mac_stop_phy(hdev);
+-		hclge_update_link_status(hdev);
+-		return;
++	if (test_bit(HCLGE_STATE_RST_HANDLING, &hdev->state)) {
++		hclge_pfc_pause_en_cfg(hdev, HCLGE_PFC_TX_RX_DISABLE,
++				       HCLGE_PFC_DISABLE);
++		if (hdev->reset_type != HNAE3_FUNC_RESET &&
++		    hdev->reset_type != HNAE3_FLR_RESET) {
++			hclge_mac_stop_phy(hdev);
++			hclge_update_link_status(hdev);
++			return;
++		}
+ 	}
+ 
+ 	hclge_reset_tqp(handle);
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
+index 4a33f65190e2b..922c0da3660c7 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
+@@ -171,8 +171,8 @@ int hclge_mac_pause_en_cfg(struct hclge_dev *hdev, bool tx, bool rx)
+ 	return hclge_cmd_send(&hdev->hw, &desc, 1);
+ }
+ 
+-static int hclge_pfc_pause_en_cfg(struct hclge_dev *hdev, u8 tx_rx_bitmap,
+-				  u8 pfc_bitmap)
++int hclge_pfc_pause_en_cfg(struct hclge_dev *hdev, u8 tx_rx_bitmap,
++			   u8 pfc_bitmap)
+ {
+ 	struct hclge_desc desc;
+ 	struct hclge_pfc_en_cmd *pfc = (struct hclge_pfc_en_cmd *)desc.data;
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.h
+index 68f28a98e380b..dd6f1fd486cf2 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.h
+@@ -164,6 +164,9 @@ struct hclge_bp_to_qs_map_cmd {
+ 	u32 rsvd1;
+ };
+ 
++#define HCLGE_PFC_DISABLE	0
++#define HCLGE_PFC_TX_RX_DISABLE	0
++
+ struct hclge_pfc_en_cmd {
+ 	u8 tx_rx_en_bitmap;
+ 	u8 pri_en_bitmap;
+@@ -235,6 +238,8 @@ void hclge_tm_schd_info_update(struct hclge_dev *hdev, u8 num_tc);
+ void hclge_tm_pfc_info_update(struct hclge_dev *hdev);
+ int hclge_tm_dwrr_cfg(struct hclge_dev *hdev);
+ int hclge_tm_init_hw(struct hclge_dev *hdev, bool init);
++int hclge_pfc_pause_en_cfg(struct hclge_dev *hdev, u8 tx_rx_bitmap,
++			   u8 pfc_bitmap);
+ int hclge_mac_pause_en_cfg(struct hclge_dev *hdev, bool tx, bool rx);
+ int hclge_pause_addr_cfg(struct hclge_dev *hdev, const u8 *mac_addr);
+ void hclge_pfc_rx_stats_get(struct hclge_dev *hdev, u64 *stats);
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+index e84e5be8e59ed..b1b14850e958f 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+@@ -1436,7 +1436,10 @@ static int hclgevf_reset_wait(struct hclgevf_dev *hdev)
+ 	 * might happen in case reset assertion was made by PF. Yes, this also
+ 	 * means we might end up waiting bit more even for VF reset.
+ 	 */
+-	msleep(5000);
++	if (hdev->reset_type == HNAE3_VF_FULL_RESET)
++		msleep(5000);
++	else
++		msleep(500);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
+index 9afbbdac35903..7c0578b5457b9 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
+@@ -2238,11 +2238,6 @@ void iavf_virtchnl_completion(struct iavf_adapter *adapter,
+ 		iavf_process_config(adapter);
+ 		adapter->flags |= IAVF_FLAG_SETUP_NETDEV_FEATURES;
+ 
+-		/* Request VLAN offload settings */
+-		if (VLAN_V2_ALLOWED(adapter))
+-			iavf_set_vlan_offload_features(adapter, 0,
+-						       netdev->features);
+-
+ 		iavf_set_queue_vlan_tag_loc(adapter);
+ 
+ 		was_mac_changed = !ether_addr_equal(netdev->dev_addr,
+diff --git a/drivers/net/ethernet/intel/ice/ice_dcb_lib.c b/drivers/net/ethernet/intel/ice/ice_dcb_lib.c
+index c6d4926f0fcf5..850db8e0e6b00 100644
+--- a/drivers/net/ethernet/intel/ice/ice_dcb_lib.c
++++ b/drivers/net/ethernet/intel/ice/ice_dcb_lib.c
+@@ -932,10 +932,9 @@ ice_tx_prepare_vlan_flags_dcb(struct ice_tx_ring *tx_ring,
+ 	if ((first->tx_flags & ICE_TX_FLAGS_HW_VLAN ||
+ 	     first->tx_flags & ICE_TX_FLAGS_HW_OUTER_SINGLE_VLAN) ||
+ 	    skb->priority != TC_PRIO_CONTROL) {
+-		first->tx_flags &= ~ICE_TX_FLAGS_VLAN_PR_M;
++		first->vid &= ~VLAN_PRIO_MASK;
+ 		/* Mask the lower 3 bits to set the 802.1p priority */
+-		first->tx_flags |= (skb->priority & 0x7) <<
+-				   ICE_TX_FLAGS_VLAN_PR_S;
++		first->vid |= (skb->priority << VLAN_PRIO_SHIFT) & VLAN_PRIO_MASK;
+ 		/* if this is not already set it means a VLAN 0 + priority needs
+ 		 * to be offloaded
+ 		 */
+diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
+index 450317dfcca73..11ae0e41f518a 100644
+--- a/drivers/net/ethernet/intel/ice/ice_lib.c
++++ b/drivers/net/ethernet/intel/ice/ice_lib.c
+@@ -2745,6 +2745,8 @@ ice_vsi_cfg_def(struct ice_vsi *vsi, struct ice_vsi_cfg_params *params)
+ 			goto unroll_vector_base;
+ 
+ 		ice_vsi_map_rings_to_vectors(vsi);
++		vsi->stat_offsets_loaded = false;
++
+ 		if (ice_is_xdp_ena_vsi(vsi)) {
+ 			ret = ice_vsi_determine_xdp_res(vsi);
+ 			if (ret)
+@@ -2793,6 +2795,9 @@ ice_vsi_cfg_def(struct ice_vsi *vsi, struct ice_vsi_cfg_params *params)
+ 		ret = ice_vsi_alloc_ring_stats(vsi);
+ 		if (ret)
+ 			goto unroll_vector_base;
++
++		vsi->stat_offsets_loaded = false;
++
+ 		/* Do not exit if configuring RSS had an issue, at least
+ 		 * receive traffic on first queue. Hence no need to capture
+ 		 * return value
+diff --git a/drivers/net/ethernet/intel/ice/ice_sriov.c b/drivers/net/ethernet/intel/ice/ice_sriov.c
+index 0cc05e54a7815..d4206db7d6d54 100644
+--- a/drivers/net/ethernet/intel/ice/ice_sriov.c
++++ b/drivers/net/ethernet/intel/ice/ice_sriov.c
+@@ -1181,7 +1181,7 @@ int ice_set_vf_spoofchk(struct net_device *netdev, int vf_id, bool ena)
+ 	if (!vf)
+ 		return -EINVAL;
+ 
+-	ret = ice_check_vf_ready_for_cfg(vf);
++	ret = ice_check_vf_ready_for_reset(vf);
+ 	if (ret)
+ 		goto out_put_vf;
+ 
+@@ -1296,7 +1296,7 @@ int ice_set_vf_mac(struct net_device *netdev, int vf_id, u8 *mac)
+ 		goto out_put_vf;
+ 	}
+ 
+-	ret = ice_check_vf_ready_for_cfg(vf);
++	ret = ice_check_vf_ready_for_reset(vf);
+ 	if (ret)
+ 		goto out_put_vf;
+ 
+@@ -1350,7 +1350,7 @@ int ice_set_vf_trust(struct net_device *netdev, int vf_id, bool trusted)
+ 		return -EOPNOTSUPP;
+ 	}
+ 
+-	ret = ice_check_vf_ready_for_cfg(vf);
++	ret = ice_check_vf_ready_for_reset(vf);
+ 	if (ret)
+ 		goto out_put_vf;
+ 
+@@ -1663,7 +1663,7 @@ ice_set_vf_port_vlan(struct net_device *netdev, int vf_id, u16 vlan_id, u8 qos,
+ 	if (!vf)
+ 		return -EINVAL;
+ 
+-	ret = ice_check_vf_ready_for_cfg(vf);
++	ret = ice_check_vf_ready_for_reset(vf);
+ 	if (ret)
+ 		goto out_put_vf;
+ 
+diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
+index 4fcf2d07eb853..059bd911c51d8 100644
+--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
++++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
+@@ -1664,8 +1664,7 @@ ice_tx_map(struct ice_tx_ring *tx_ring, struct ice_tx_buf *first,
+ 
+ 	if (first->tx_flags & ICE_TX_FLAGS_HW_VLAN) {
+ 		td_cmd |= (u64)ICE_TX_DESC_CMD_IL2TAG1;
+-		td_tag = (first->tx_flags & ICE_TX_FLAGS_VLAN_M) >>
+-			  ICE_TX_FLAGS_VLAN_S;
++		td_tag = first->vid;
+ 	}
+ 
+ 	dma = dma_map_single(tx_ring->dev, skb->data, size, DMA_TO_DEVICE);
+@@ -1998,7 +1997,7 @@ ice_tx_prepare_vlan_flags(struct ice_tx_ring *tx_ring, struct ice_tx_buf *first)
+ 	 * VLAN offloads exclusively so we only care about the VLAN ID here
+ 	 */
+ 	if (skb_vlan_tag_present(skb)) {
+-		first->tx_flags |= skb_vlan_tag_get(skb) << ICE_TX_FLAGS_VLAN_S;
++		first->vid = skb_vlan_tag_get(skb);
+ 		if (tx_ring->flags & ICE_TX_FLAGS_RING_VLAN_L2TAG2)
+ 			first->tx_flags |= ICE_TX_FLAGS_HW_OUTER_SINGLE_VLAN;
+ 		else
+@@ -2388,8 +2387,7 @@ ice_xmit_frame_ring(struct sk_buff *skb, struct ice_tx_ring *tx_ring)
+ 		offload.cd_qw1 |= (u64)(ICE_TX_DESC_DTYPE_CTX |
+ 					(ICE_TX_CTX_DESC_IL2TAG2 <<
+ 					ICE_TXD_CTX_QW1_CMD_S));
+-		offload.cd_l2tag2 = (first->tx_flags & ICE_TX_FLAGS_VLAN_M) >>
+-			ICE_TX_FLAGS_VLAN_S;
++		offload.cd_l2tag2 = first->vid;
+ 	}
+ 
+ 	/* set up TSO offload */
+diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.h b/drivers/net/ethernet/intel/ice/ice_txrx.h
+index fff0efe28373a..166413fc33f48 100644
+--- a/drivers/net/ethernet/intel/ice/ice_txrx.h
++++ b/drivers/net/ethernet/intel/ice/ice_txrx.h
+@@ -127,10 +127,6 @@ static inline int ice_skb_pad(void)
+ #define ICE_TX_FLAGS_IPV6	BIT(6)
+ #define ICE_TX_FLAGS_TUNNEL	BIT(7)
+ #define ICE_TX_FLAGS_HW_OUTER_SINGLE_VLAN	BIT(8)
+-#define ICE_TX_FLAGS_VLAN_M	0xffff0000
+-#define ICE_TX_FLAGS_VLAN_PR_M	0xe0000000
+-#define ICE_TX_FLAGS_VLAN_PR_S	29
+-#define ICE_TX_FLAGS_VLAN_S	16
+ 
+ #define ICE_XDP_PASS		0
+ #define ICE_XDP_CONSUMED	BIT(0)
+@@ -182,8 +178,9 @@ struct ice_tx_buf {
+ 		unsigned int gso_segs;
+ 		unsigned int nr_frags;	/* used for mbuf XDP */
+ 	};
+-	u32 type:16;			/* &ice_tx_buf_type */
+-	u32 tx_flags:16;
++	u32 tx_flags:12;
++	u32 type:4;			/* &ice_tx_buf_type */
++	u32 vid:16;
+ 	DEFINE_DMA_UNMAP_LEN(len);
+ 	DEFINE_DMA_UNMAP_ADDR(dma);
+ };
+diff --git a/drivers/net/ethernet/intel/ice/ice_vf_lib.c b/drivers/net/ethernet/intel/ice/ice_vf_lib.c
+index 0e57bd1b85fd4..59524a7c88c5f 100644
+--- a/drivers/net/ethernet/intel/ice/ice_vf_lib.c
++++ b/drivers/net/ethernet/intel/ice/ice_vf_lib.c
+@@ -185,6 +185,25 @@ int ice_check_vf_ready_for_cfg(struct ice_vf *vf)
+ 	return 0;
+ }
+ 
++/**
++ * ice_check_vf_ready_for_reset - check if VF is ready to be reset
++ * @vf: VF to check if it's ready to be reset
++ *
++ * The purpose of this function is to ensure that the VF is not in reset,
++ * disabled, and is both initialized and active, thus enabling us to safely
++ * initialize another reset.
++ */
++int ice_check_vf_ready_for_reset(struct ice_vf *vf)
++{
++	int ret;
++
++	ret = ice_check_vf_ready_for_cfg(vf);
++	if (!ret && !test_bit(ICE_VF_STATE_ACTIVE, vf->vf_states))
++		ret = -EAGAIN;
++
++	return ret;
++}
++
+ /**
+  * ice_trigger_vf_reset - Reset a VF on HW
+  * @vf: pointer to the VF structure
+diff --git a/drivers/net/ethernet/intel/ice/ice_vf_lib.h b/drivers/net/ethernet/intel/ice/ice_vf_lib.h
+index ef30f05b5d02e..3fc6a0a8d9554 100644
+--- a/drivers/net/ethernet/intel/ice/ice_vf_lib.h
++++ b/drivers/net/ethernet/intel/ice/ice_vf_lib.h
+@@ -215,6 +215,7 @@ u16 ice_get_num_vfs(struct ice_pf *pf);
+ struct ice_vsi *ice_get_vf_vsi(struct ice_vf *vf);
+ bool ice_is_vf_disabled(struct ice_vf *vf);
+ int ice_check_vf_ready_for_cfg(struct ice_vf *vf);
++int ice_check_vf_ready_for_reset(struct ice_vf *vf);
+ void ice_set_vf_state_dis(struct ice_vf *vf);
+ bool ice_is_any_vf_in_unicast_promisc(struct ice_pf *pf);
+ void
+diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl.c b/drivers/net/ethernet/intel/ice/ice_virtchnl.c
+index e24e3f5017ca6..d8c66baf4eb41 100644
+--- a/drivers/net/ethernet/intel/ice/ice_virtchnl.c
++++ b/drivers/net/ethernet/intel/ice/ice_virtchnl.c
+@@ -3908,6 +3908,7 @@ error_handler:
+ 		ice_vc_notify_vf_link_state(vf);
+ 		break;
+ 	case VIRTCHNL_OP_RESET_VF:
++		clear_bit(ICE_VF_STATE_ACTIVE, vf->vf_states);
+ 		ops->reset_vf(vf);
+ 		break;
+ 	case VIRTCHNL_OP_ADD_ETH_ADDR:
+diff --git a/drivers/net/ethernet/intel/igb/e1000_mac.c b/drivers/net/ethernet/intel/igb/e1000_mac.c
+index 205d577bdbbaa..caf91c6f52b4d 100644
+--- a/drivers/net/ethernet/intel/igb/e1000_mac.c
++++ b/drivers/net/ethernet/intel/igb/e1000_mac.c
+@@ -426,7 +426,7 @@ void igb_mta_set(struct e1000_hw *hw, u32 hash_value)
+ static u32 igb_hash_mc_addr(struct e1000_hw *hw, u8 *mc_addr)
+ {
+ 	u32 hash_value, hash_mask;
+-	u8 bit_shift = 0;
++	u8 bit_shift = 1;
+ 
+ 	/* Register count multiplied by bits per register */
+ 	hash_mask = (hw->mac.mta_reg_count * 32) - 1;
+@@ -434,7 +434,7 @@ static u32 igb_hash_mc_addr(struct e1000_hw *hw, u8 *mc_addr)
+ 	/* For a mc_filter_type of 0, bit_shift is the number of left-shifts
+ 	 * where 0xFF would still fall within the hash mask.
+ 	 */
+-	while (hash_mask >> bit_shift != 0xFF)
++	while (hash_mask >> bit_shift != 0xFF && bit_shift < 4)
+ 		bit_shift++;
+ 
+ 	/* The portion of the address that is used for the hash table
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/params.c b/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
+index a21bd1179477b..d840a59aec88a 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
+@@ -867,8 +867,7 @@ static void mlx5e_build_rx_cq_param(struct mlx5_core_dev *mdev,
+ static u8 rq_end_pad_mode(struct mlx5_core_dev *mdev, struct mlx5e_params *params)
+ {
+ 	bool lro_en = params->packet_merge.type == MLX5E_PACKET_MERGE_LRO;
+-	bool ro = pcie_relaxed_ordering_enabled(mdev->pdev) &&
+-		MLX5_CAP_GEN(mdev, relaxed_ordering_write);
++	bool ro = MLX5_CAP_GEN(mdev, relaxed_ordering_write);
+ 
+ 	return ro && lro_en ?
+ 		MLX5_WQ_END_PAD_MODE_NONE : MLX5_WQ_END_PAD_MODE_ALIGN;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_common.c b/drivers/net/ethernet/mellanox/mlx5/core/en_common.c
+index 4c9a3210600c2..993af4c12d909 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_common.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_common.c
+@@ -44,7 +44,7 @@ void mlx5e_mkey_set_relaxed_ordering(struct mlx5_core_dev *mdev, void *mkc)
+ 	bool ro_read = MLX5_CAP_GEN(mdev, relaxed_ordering_read);
+ 
+ 	MLX5_SET(mkc, mkc, relaxed_ordering_read, ro_pci_enable && ro_read);
+-	MLX5_SET(mkc, mkc, relaxed_ordering_write, ro_pci_enable && ro_write);
++	MLX5_SET(mkc, mkc, relaxed_ordering_write, ro_write);
+ }
+ 
+ int mlx5e_create_mkey(struct mlx5_core_dev *mdev, u32 pdn, u32 *mkey)
+diff --git a/drivers/net/ethernet/mscc/vsc7514_regs.c b/drivers/net/ethernet/mscc/vsc7514_regs.c
+index ef6fd3f6be309..5595bfe84bbbb 100644
+--- a/drivers/net/ethernet/mscc/vsc7514_regs.c
++++ b/drivers/net/ethernet/mscc/vsc7514_regs.c
+@@ -307,15 +307,15 @@ static const u32 vsc7514_sys_regmap[] = {
+ 	REG(SYS_COUNT_DROP_YELLOW_PRIO_4,		0x000218),
+ 	REG(SYS_COUNT_DROP_YELLOW_PRIO_5,		0x00021c),
+ 	REG(SYS_COUNT_DROP_YELLOW_PRIO_6,		0x000220),
+-	REG(SYS_COUNT_DROP_YELLOW_PRIO_7,		0x000214),
+-	REG(SYS_COUNT_DROP_GREEN_PRIO_0,		0x000218),
+-	REG(SYS_COUNT_DROP_GREEN_PRIO_1,		0x00021c),
+-	REG(SYS_COUNT_DROP_GREEN_PRIO_2,		0x000220),
+-	REG(SYS_COUNT_DROP_GREEN_PRIO_3,		0x000224),
+-	REG(SYS_COUNT_DROP_GREEN_PRIO_4,		0x000228),
+-	REG(SYS_COUNT_DROP_GREEN_PRIO_5,		0x00022c),
+-	REG(SYS_COUNT_DROP_GREEN_PRIO_6,		0x000230),
+-	REG(SYS_COUNT_DROP_GREEN_PRIO_7,		0x000234),
++	REG(SYS_COUNT_DROP_YELLOW_PRIO_7,		0x000224),
++	REG(SYS_COUNT_DROP_GREEN_PRIO_0,		0x000228),
++	REG(SYS_COUNT_DROP_GREEN_PRIO_1,		0x00022c),
++	REG(SYS_COUNT_DROP_GREEN_PRIO_2,		0x000230),
++	REG(SYS_COUNT_DROP_GREEN_PRIO_3,		0x000234),
++	REG(SYS_COUNT_DROP_GREEN_PRIO_4,		0x000238),
++	REG(SYS_COUNT_DROP_GREEN_PRIO_5,		0x00023c),
++	REG(SYS_COUNT_DROP_GREEN_PRIO_6,		0x000240),
++	REG(SYS_COUNT_DROP_GREEN_PRIO_7,		0x000244),
+ 	REG(SYS_RESET_CFG,				0x000508),
+ 	REG(SYS_CMID,					0x00050c),
+ 	REG(SYS_VLAN_ETYPE_CFG,				0x000510),
+diff --git a/drivers/net/ethernet/netronome/nfp/nic/main.h b/drivers/net/ethernet/netronome/nfp/nic/main.h
+index 094374df42b8c..38b8b10b03cd3 100644
+--- a/drivers/net/ethernet/netronome/nfp/nic/main.h
++++ b/drivers/net/ethernet/netronome/nfp/nic/main.h
+@@ -8,7 +8,7 @@
+ 
+ #ifdef CONFIG_DCB
+ /* DCB feature definitions */
+-#define NFP_NET_MAX_DSCP	4
++#define NFP_NET_MAX_DSCP	64
+ #define NFP_NET_MAX_TC		IEEE_8021QAZ_MAX_TCS
+ #define NFP_NET_MAX_PRIO	8
+ #define NFP_DCB_CFG_STRIDE	256
+diff --git a/drivers/net/ethernet/pasemi/pasemi_mac.c b/drivers/net/ethernet/pasemi/pasemi_mac.c
+index aaab590ef548d..ed7dd0a042355 100644
+--- a/drivers/net/ethernet/pasemi/pasemi_mac.c
++++ b/drivers/net/ethernet/pasemi/pasemi_mac.c
+@@ -1423,7 +1423,7 @@ static void pasemi_mac_queue_csdesc(const struct sk_buff *skb,
+ 	write_dma_reg(PAS_DMA_TXCHAN_INCR(txring->chan.chno), 2);
+ }
+ 
+-static int pasemi_mac_start_tx(struct sk_buff *skb, struct net_device *dev)
++static netdev_tx_t pasemi_mac_start_tx(struct sk_buff *skb, struct net_device *dev)
+ {
+ 	struct pasemi_mac * const mac = netdev_priv(dev);
+ 	struct pasemi_mac_txring * const txring = tx_ring(mac);
+diff --git a/drivers/net/ethernet/sfc/ef100_netdev.c b/drivers/net/ethernet/sfc/ef100_netdev.c
+index d916877b5a9ad..be395cd8770bc 100644
+--- a/drivers/net/ethernet/sfc/ef100_netdev.c
++++ b/drivers/net/ethernet/sfc/ef100_netdev.c
+@@ -378,7 +378,9 @@ int ef100_probe_netdev(struct efx_probe_data *probe_data)
+ 	efx->net_dev = net_dev;
+ 	SET_NETDEV_DEV(net_dev, &efx->pci_dev->dev);
+ 
+-	net_dev->features |= efx->type->offload_features;
++	/* enable all supported features except rx-fcs and rx-all */
++	net_dev->features |= efx->type->offload_features &
++			     ~(NETIF_F_RXFCS | NETIF_F_RXALL);
+ 	net_dev->hw_features |= efx->type->offload_features;
+ 	net_dev->hw_enc_features |= efx->type->offload_features;
+ 	net_dev->vlan_features |= NETIF_F_HW_CSUM | NETIF_F_SG |
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4.h b/drivers/net/ethernet/stmicro/stmmac/dwmac4.h
+index ccd49346d3b30..a70b0d8a622d6 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac4.h
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4.h
+@@ -181,6 +181,7 @@ enum power_event {
+ #define GMAC4_LPI_CTRL_STATUS	0xd0
+ #define GMAC4_LPI_TIMER_CTRL	0xd4
+ #define GMAC4_LPI_ENTRY_TIMER	0xd8
++#define GMAC4_MAC_ONEUS_TIC_COUNTER	0xdc
+ 
+ /* LPI control and status defines */
+ #define GMAC4_LPI_CTRL_STATUS_LPITCSE	BIT(21)	/* LPI Tx Clock Stop Enable */
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c b/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
+index 36251ec2589c9..24d6ec06732d9 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
+@@ -25,6 +25,7 @@ static void dwmac4_core_init(struct mac_device_info *hw,
+ 	struct stmmac_priv *priv = netdev_priv(dev);
+ 	void __iomem *ioaddr = hw->pcsr;
+ 	u32 value = readl(ioaddr + GMAC_CONFIG);
++	u32 clk_rate;
+ 
+ 	value |= GMAC_CORE_INIT;
+ 
+@@ -47,6 +48,10 @@ static void dwmac4_core_init(struct mac_device_info *hw,
+ 
+ 	writel(value, ioaddr + GMAC_CONFIG);
+ 
++	/* Configure LPI 1us counter to number of CSR clock ticks in 1us - 1 */
++	clk_rate = clk_get_rate(priv->plat->stmmac_clk);
++	writel((clk_rate / 1000000) - 1, ioaddr + GMAC4_MAC_ONEUS_TIC_COUNTER);
++
+ 	/* Enable GMAC interrupts */
+ 	value = GMAC_INT_DEFAULT_ENABLE;
+ 
+diff --git a/drivers/net/ethernet/sun/cassini.c b/drivers/net/ethernet/sun/cassini.c
+index 4ef05bad4613c..d61dfa250feb7 100644
+--- a/drivers/net/ethernet/sun/cassini.c
++++ b/drivers/net/ethernet/sun/cassini.c
+@@ -5077,6 +5077,8 @@ err_out_iounmap:
+ 		cas_shutdown(cp);
+ 	mutex_unlock(&cp->pm_mutex);
+ 
++	vfree(cp->fw_data);
++
+ 	pci_iounmap(pdev, cp->regs);
+ 
+ 
+diff --git a/drivers/net/ipvlan/ipvlan_core.c b/drivers/net/ipvlan/ipvlan_core.c
+index 460b3d4f2245f..ab5133eb1d517 100644
+--- a/drivers/net/ipvlan/ipvlan_core.c
++++ b/drivers/net/ipvlan/ipvlan_core.c
+@@ -436,6 +436,9 @@ static int ipvlan_process_v4_outbound(struct sk_buff *skb)
+ 		goto err;
+ 	}
+ 	skb_dst_set(skb, &rt->dst);
++
++	memset(IPCB(skb), 0, sizeof(*IPCB(skb)));
++
+ 	err = ip_local_out(net, skb->sk, skb);
+ 	if (unlikely(net_xmit_eval(err)))
+ 		dev->stats.tx_errors++;
+@@ -474,6 +477,9 @@ static int ipvlan_process_v6_outbound(struct sk_buff *skb)
+ 		goto err;
+ 	}
+ 	skb_dst_set(skb, dst);
++
++	memset(IP6CB(skb), 0, sizeof(*IP6CB(skb)));
++
+ 	err = ip6_local_out(net, skb->sk, skb);
+ 	if (unlikely(net_xmit_eval(err)))
+ 		dev->stats.tx_errors++;
+diff --git a/drivers/net/mdio/mdio-mvusb.c b/drivers/net/mdio/mdio-mvusb.c
+index 68fc55906e788..554837c21e73c 100644
+--- a/drivers/net/mdio/mdio-mvusb.c
++++ b/drivers/net/mdio/mdio-mvusb.c
+@@ -67,6 +67,7 @@ static int mvusb_mdio_probe(struct usb_interface *interface,
+ 	struct device *dev = &interface->dev;
+ 	struct mvusb_mdio *mvusb;
+ 	struct mii_bus *mdio;
++	int ret;
+ 
+ 	mdio = devm_mdiobus_alloc_size(dev, sizeof(*mvusb));
+ 	if (!mdio)
+@@ -87,7 +88,15 @@ static int mvusb_mdio_probe(struct usb_interface *interface,
+ 	mdio->write = mvusb_mdio_write;
+ 
+ 	usb_set_intfdata(interface, mvusb);
+-	return of_mdiobus_register(mdio, dev->of_node);
++	ret = of_mdiobus_register(mdio, dev->of_node);
++	if (ret)
++		goto put_dev;
++
++	return 0;
++
++put_dev:
++	usb_put_dev(mvusb->udev);
++	return ret;
+ }
+ 
+ static void mvusb_mdio_disconnect(struct usb_interface *interface)
+diff --git a/drivers/net/pcs/pcs-xpcs.c b/drivers/net/pcs/pcs-xpcs.c
+index 04a6853530418..2b84a46622be4 100644
+--- a/drivers/net/pcs/pcs-xpcs.c
++++ b/drivers/net/pcs/pcs-xpcs.c
+@@ -873,7 +873,7 @@ int xpcs_do_config(struct dw_xpcs *xpcs, phy_interface_t interface,
+ 
+ 	switch (compat->an_mode) {
+ 	case DW_AN_C73:
+-		if (phylink_autoneg_inband(mode)) {
++		if (test_bit(ETHTOOL_LINK_MODE_Autoneg_BIT, advertising)) {
+ 			ret = xpcs_config_aneg_c73(xpcs, compat);
+ 			if (ret)
+ 				return ret;
+diff --git a/drivers/net/phy/bcm-phy-lib.h b/drivers/net/phy/bcm-phy-lib.h
+index 9902fb1820997..729db441797a0 100644
+--- a/drivers/net/phy/bcm-phy-lib.h
++++ b/drivers/net/phy/bcm-phy-lib.h
+@@ -40,6 +40,11 @@ static inline int bcm_phy_write_exp_sel(struct phy_device *phydev,
+ 	return bcm_phy_write_exp(phydev, reg | MII_BCM54XX_EXP_SEL_ER, val);
+ }
+ 
++static inline int bcm_phy_read_exp_sel(struct phy_device *phydev, u16 reg)
++{
++	return bcm_phy_read_exp(phydev, reg | MII_BCM54XX_EXP_SEL_ER);
++}
++
+ int bcm54xx_auxctl_write(struct phy_device *phydev, u16 regnum, u16 val);
+ int bcm54xx_auxctl_read(struct phy_device *phydev, u16 regnum);
+ 
+diff --git a/drivers/net/phy/bcm7xxx.c b/drivers/net/phy/bcm7xxx.c
+index 75593e7d1118f..6cebf3aaa621f 100644
+--- a/drivers/net/phy/bcm7xxx.c
++++ b/drivers/net/phy/bcm7xxx.c
+@@ -487,7 +487,7 @@ static int bcm7xxx_16nm_ephy_afe_config(struct phy_device *phydev)
+ 	bcm_phy_write_misc(phydev, 0x0038, 0x0002, 0xede0);
+ 
+ 	/* Read CORE_EXPA9 */
+-	tmp = bcm_phy_read_exp(phydev, 0x00a9);
++	tmp = bcm_phy_read_exp_sel(phydev, 0x00a9);
+ 	/* CORE_EXPA9[6:1] is rcalcode[5:0] */
+ 	rcalcode = (tmp & 0x7e) / 2;
+ 	/* Correct RCAL code + 1 is -1% rprogr, LP: +16 */
+diff --git a/drivers/net/phy/dp83867.c b/drivers/net/phy/dp83867.c
+index 89cd821f1f466..9f7ff88200484 100644
+--- a/drivers/net/phy/dp83867.c
++++ b/drivers/net/phy/dp83867.c
+@@ -42,6 +42,7 @@
+ #define DP83867_STRAP_STS1	0x006E
+ #define DP83867_STRAP_STS2	0x006f
+ #define DP83867_RGMIIDCTL	0x0086
++#define DP83867_DSP_FFE_CFG	0x012c
+ #define DP83867_RXFCFG		0x0134
+ #define DP83867_RXFPMD1	0x0136
+ #define DP83867_RXFPMD2	0x0137
+@@ -910,8 +911,27 @@ static int dp83867_phy_reset(struct phy_device *phydev)
+ 
+ 	usleep_range(10, 20);
+ 
+-	return phy_modify(phydev, MII_DP83867_PHYCTRL,
++	err = phy_modify(phydev, MII_DP83867_PHYCTRL,
+ 			 DP83867_PHYCR_FORCE_LINK_GOOD, 0);
++	if (err < 0)
++		return err;
++
++	/* Configure the DSP Feedforward Equalizer Configuration register to
++	 * improve short cable (< 1 meter) performance. This will not affect
++	 * long cable performance.
++	 */
++	err = phy_write_mmd(phydev, DP83867_DEVADDR, DP83867_DSP_FFE_CFG,
++			    0x0e81);
++	if (err < 0)
++		return err;
++
++	err = phy_write(phydev, DP83867_CTRL, DP83867_SW_RESTART);
++	if (err < 0)
++		return err;
++
++	usleep_range(10, 20);
++
++	return 0;
+ }
+ 
+ static void dp83867_link_change_notify(struct phy_device *phydev)
+diff --git a/drivers/net/tap.c b/drivers/net/tap.c
+index 8941aa199ea33..456de9c3ea169 100644
+--- a/drivers/net/tap.c
++++ b/drivers/net/tap.c
+@@ -739,7 +739,7 @@ static ssize_t tap_get_user(struct tap_queue *q, void *msg_control,
+ 
+ 	/* Move network header to the right position for VLAN tagged packets */
+ 	if (eth_type_vlan(skb->protocol) &&
+-	    __vlan_get_protocol(skb, skb->protocol, &depth) != 0)
++	    vlan_get_protocol_and_depth(skb, skb->protocol, &depth) != 0)
+ 		skb_set_network_header(skb, depth);
+ 
+ 	/* copy skb_ubuf_info for callback when skb has no error */
+@@ -1186,7 +1186,7 @@ static int tap_get_user_xdp(struct tap_queue *q, struct xdp_buff *xdp)
+ 
+ 	/* Move network header to the right position for VLAN tagged packets */
+ 	if (eth_type_vlan(skb->protocol) &&
+-	    __vlan_get_protocol(skb, skb->protocol, &depth) != 0)
++	    vlan_get_protocol_and_depth(skb, skb->protocol, &depth) != 0)
+ 		skb_set_network_header(skb, depth);
+ 
+ 	rcu_read_lock();
+diff --git a/drivers/net/tun.c b/drivers/net/tun.c
+index ad653b32b2f00..44087db2a0595 100644
+--- a/drivers/net/tun.c
++++ b/drivers/net/tun.c
+@@ -1976,6 +1976,14 @@ napi_busy:
+ 		int queue_len;
+ 
+ 		spin_lock_bh(&queue->lock);
++
++		if (unlikely(tfile->detached)) {
++			spin_unlock_bh(&queue->lock);
++			rcu_read_unlock();
++			err = -EBUSY;
++			goto free_skb;
++		}
++
+ 		__skb_queue_tail(queue, skb);
+ 		queue_len = skb_queue_len(queue);
+ 		spin_unlock(&queue->lock);
+@@ -2511,6 +2519,13 @@ build:
+ 	if (tfile->napi_enabled) {
+ 		queue = &tfile->sk.sk_write_queue;
+ 		spin_lock(&queue->lock);
++
++		if (unlikely(tfile->detached)) {
++			spin_unlock(&queue->lock);
++			kfree_skb(skb);
++			return -EBUSY;
++		}
++
+ 		__skb_queue_tail(queue, skb);
+ 		spin_unlock(&queue->lock);
+ 		ret = 1;
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index 744bdc8a1abd2..13ac7f1c7ae8c 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -1867,6 +1867,38 @@ static int virtnet_poll(struct napi_struct *napi, int budget)
+ 	return received;
+ }
+ 
++static void virtnet_disable_queue_pair(struct virtnet_info *vi, int qp_index)
++{
++	virtnet_napi_tx_disable(&vi->sq[qp_index].napi);
++	napi_disable(&vi->rq[qp_index].napi);
++	xdp_rxq_info_unreg(&vi->rq[qp_index].xdp_rxq);
++}
++
++static int virtnet_enable_queue_pair(struct virtnet_info *vi, int qp_index)
++{
++	struct net_device *dev = vi->dev;
++	int err;
++
++	err = xdp_rxq_info_reg(&vi->rq[qp_index].xdp_rxq, dev, qp_index,
++			       vi->rq[qp_index].napi.napi_id);
++	if (err < 0)
++		return err;
++
++	err = xdp_rxq_info_reg_mem_model(&vi->rq[qp_index].xdp_rxq,
++					 MEM_TYPE_PAGE_SHARED, NULL);
++	if (err < 0)
++		goto err_xdp_reg_mem_model;
++
++	virtnet_napi_enable(vi->rq[qp_index].vq, &vi->rq[qp_index].napi);
++	virtnet_napi_tx_enable(vi, vi->sq[qp_index].vq, &vi->sq[qp_index].napi);
++
++	return 0;
++
++err_xdp_reg_mem_model:
++	xdp_rxq_info_unreg(&vi->rq[qp_index].xdp_rxq);
++	return err;
++}
++
+ static int virtnet_open(struct net_device *dev)
+ {
+ 	struct virtnet_info *vi = netdev_priv(dev);
+@@ -1880,22 +1912,20 @@ static int virtnet_open(struct net_device *dev)
+ 			if (!try_fill_recv(vi, &vi->rq[i], GFP_KERNEL))
+ 				schedule_delayed_work(&vi->refill, 0);
+ 
+-		err = xdp_rxq_info_reg(&vi->rq[i].xdp_rxq, dev, i, vi->rq[i].napi.napi_id);
++		err = virtnet_enable_queue_pair(vi, i);
+ 		if (err < 0)
+-			return err;
+-
+-		err = xdp_rxq_info_reg_mem_model(&vi->rq[i].xdp_rxq,
+-						 MEM_TYPE_PAGE_SHARED, NULL);
+-		if (err < 0) {
+-			xdp_rxq_info_unreg(&vi->rq[i].xdp_rxq);
+-			return err;
+-		}
+-
+-		virtnet_napi_enable(vi->rq[i].vq, &vi->rq[i].napi);
+-		virtnet_napi_tx_enable(vi, vi->sq[i].vq, &vi->sq[i].napi);
++			goto err_enable_qp;
+ 	}
+ 
+ 	return 0;
++
++err_enable_qp:
++	disable_delayed_refill(vi);
++	cancel_delayed_work_sync(&vi->refill);
++
++	for (i--; i >= 0; i--)
++		virtnet_disable_queue_pair(vi, i);
++	return err;
+ }
+ 
+ static int virtnet_poll_tx(struct napi_struct *napi, int budget)
+@@ -2304,11 +2334,8 @@ static int virtnet_close(struct net_device *dev)
+ 	/* Make sure refill_work doesn't re-enable napi! */
+ 	cancel_delayed_work_sync(&vi->refill);
+ 
+-	for (i = 0; i < vi->max_queue_pairs; i++) {
+-		virtnet_napi_tx_disable(&vi->sq[i].napi);
+-		napi_disable(&vi->rq[i].napi);
+-		xdp_rxq_info_unreg(&vi->rq[i].xdp_rxq);
+-	}
++	for (i = 0; i < vi->max_queue_pairs; i++)
++		virtnet_disable_queue_pair(vi, i);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/wireless/ath/ath.h b/drivers/net/wireless/ath/ath.h
+index f083fb9038c36..f02a308a9ffc5 100644
+--- a/drivers/net/wireless/ath/ath.h
++++ b/drivers/net/wireless/ath/ath.h
+@@ -96,11 +96,13 @@ struct ath_keyval {
+ 	u8 kv_type;
+ 	u8 kv_pad;
+ 	u16 kv_len;
+-	u8 kv_val[16]; /* TK */
+-	u8 kv_mic[8]; /* Michael MIC key */
+-	u8 kv_txmic[8]; /* Michael MIC TX key (used only if the hardware
+-			 * supports both MIC keys in the same key cache entry;
+-			 * in that case, kv_mic is the RX key) */
++	struct_group(kv_values,
++		u8 kv_val[16]; /* TK */
++		u8 kv_mic[8]; /* Michael MIC key */
++		u8 kv_txmic[8]; /* Michael MIC TX key (used only if the hardware
++				 * supports both MIC keys in the same key cache entry;
++				 * in that case, kv_mic is the RX key) */
++	);
+ };
+ 
+ enum ath_cipher {
+diff --git a/drivers/net/wireless/ath/ath11k/dp.c b/drivers/net/wireless/ath/ath11k/dp.c
+index f5156a7fbdd7a..d070bcb3fe247 100644
+--- a/drivers/net/wireless/ath/ath11k/dp.c
++++ b/drivers/net/wireless/ath/ath11k/dp.c
+@@ -36,6 +36,7 @@ void ath11k_dp_peer_cleanup(struct ath11k *ar, int vdev_id, const u8 *addr)
+ 	}
+ 
+ 	ath11k_peer_rx_tid_cleanup(ar, peer);
++	peer->dp_setup_done = false;
+ 	crypto_free_shash(peer->tfm_mmic);
+ 	spin_unlock_bh(&ab->base_lock);
+ }
+@@ -72,7 +73,8 @@ int ath11k_dp_peer_setup(struct ath11k *ar, int vdev_id, const u8 *addr)
+ 	ret = ath11k_peer_rx_frag_setup(ar, addr, vdev_id);
+ 	if (ret) {
+ 		ath11k_warn(ab, "failed to setup rx defrag context\n");
+-		return ret;
++		tid--;
++		goto peer_clean;
+ 	}
+ 
+ 	/* TODO: Setup other peer specific resource used in data path */
+diff --git a/drivers/net/wireless/ath/ath11k/dp_rx.c b/drivers/net/wireless/ath/ath11k/dp_rx.c
+index b65a84a882641..32a4f88861d58 100644
+--- a/drivers/net/wireless/ath/ath11k/dp_rx.c
++++ b/drivers/net/wireless/ath/ath11k/dp_rx.c
+@@ -389,10 +389,10 @@ int ath11k_dp_rxbufs_replenish(struct ath11k_base *ab, int mac_id,
+ 			goto fail_free_skb;
+ 
+ 		spin_lock_bh(&rx_ring->idr_lock);
+-		buf_id = idr_alloc(&rx_ring->bufs_idr, skb, 0,
+-				   rx_ring->bufs_max * 3, GFP_ATOMIC);
++		buf_id = idr_alloc(&rx_ring->bufs_idr, skb, 1,
++				   (rx_ring->bufs_max * 3) + 1, GFP_ATOMIC);
+ 		spin_unlock_bh(&rx_ring->idr_lock);
+-		if (buf_id < 0)
++		if (buf_id <= 0)
+ 			goto fail_dma_unmap;
+ 
+ 		desc = ath11k_hal_srng_src_get_next_entry(ab, srng);
+@@ -2665,6 +2665,9 @@ try_again:
+ 				   cookie);
+ 		mac_id = FIELD_GET(DP_RXDMA_BUF_COOKIE_PDEV_ID, cookie);
+ 
++		if (unlikely(buf_id == 0))
++			continue;
++
+ 		ar = ab->pdevs[mac_id].ar;
+ 		rx_ring = &ar->dp.rx_refill_buf_ring;
+ 		spin_lock_bh(&rx_ring->idr_lock);
+@@ -3138,6 +3141,7 @@ int ath11k_peer_rx_frag_setup(struct ath11k *ar, const u8 *peer_mac, int vdev_id
+ 	}
+ 
+ 	peer->tfm_mmic = tfm;
++	peer->dp_setup_done = true;
+ 	spin_unlock_bh(&ab->base_lock);
+ 
+ 	return 0;
+@@ -3583,6 +3587,13 @@ static int ath11k_dp_rx_frag_h_mpdu(struct ath11k *ar,
+ 		ret = -ENOENT;
+ 		goto out_unlock;
+ 	}
++	if (!peer->dp_setup_done) {
++		ath11k_warn(ab, "The peer %pM [%d] has uninitialized datapath\n",
++			    peer->addr, peer_id);
++		ret = -ENOENT;
++		goto out_unlock;
++	}
++
+ 	rx_tid = &peer->rx_tid[tid];
+ 
+ 	if ((!skb_queue_empty(&rx_tid->rx_frags) && seqno != rx_tid->cur_sn) ||
+diff --git a/drivers/net/wireless/ath/ath11k/peer.h b/drivers/net/wireless/ath/ath11k/peer.h
+index 6dd17bafe3a0c..9bd385d0a38c9 100644
+--- a/drivers/net/wireless/ath/ath11k/peer.h
++++ b/drivers/net/wireless/ath/ath11k/peer.h
+@@ -35,6 +35,7 @@ struct ath11k_peer {
+ 	u16 sec_type;
+ 	u16 sec_type_grp;
+ 	bool is_authorized;
++	bool dp_setup_done;
+ };
+ 
+ void ath11k_peer_unmap_event(struct ath11k_base *ab, u16 peer_id);
+diff --git a/drivers/net/wireless/ath/ath12k/dp_rx.c b/drivers/net/wireless/ath/ath12k/dp_rx.c
+index 83a43ad48c512..de9a4ca66c664 100644
+--- a/drivers/net/wireless/ath/ath12k/dp_rx.c
++++ b/drivers/net/wireless/ath/ath12k/dp_rx.c
+@@ -3494,11 +3494,14 @@ static int ath12k_dp_rx_h_null_q_desc(struct ath12k *ar, struct sk_buff *msdu,
+ 	msdu_len = ath12k_dp_rx_h_msdu_len(ab, desc);
+ 	peer_id = ath12k_dp_rx_h_peer_id(ab, desc);
+ 
++	spin_lock(&ab->base_lock);
+ 	if (!ath12k_peer_find_by_id(ab, peer_id)) {
++		spin_unlock(&ab->base_lock);
+ 		ath12k_dbg(ab, ATH12K_DBG_DATA, "invalid peer id received in wbm err pkt%d\n",
+ 			   peer_id);
+ 		return -EINVAL;
+ 	}
++	spin_unlock(&ab->base_lock);
+ 
+ 	if (!rxcb->is_frag && ((msdu_len + hal_rx_desc_sz) > DP_RX_BUFFER_SIZE)) {
+ 		/* First buffer will be freed by the caller, so deduct it's length */
+diff --git a/drivers/net/wireless/ath/ath12k/pci.c b/drivers/net/wireless/ath/ath12k/pci.c
+index f523aa15885f6..00b0080dbac38 100644
+--- a/drivers/net/wireless/ath/ath12k/pci.c
++++ b/drivers/net/wireless/ath/ath12k/pci.c
+@@ -119,6 +119,30 @@ static const char *irq_name[ATH12K_IRQ_NUM_MAX] = {
+ 	"tcl2host-status-ring",
+ };
+ 
++static int ath12k_pci_bus_wake_up(struct ath12k_base *ab)
++{
++	struct ath12k_pci *ab_pci = ath12k_pci_priv(ab);
++
++	return mhi_device_get_sync(ab_pci->mhi_ctrl->mhi_dev);
++}
++
++static void ath12k_pci_bus_release(struct ath12k_base *ab)
++{
++	struct ath12k_pci *ab_pci = ath12k_pci_priv(ab);
++
++	mhi_device_put(ab_pci->mhi_ctrl->mhi_dev);
++}
++
++static const struct ath12k_pci_ops ath12k_pci_ops_qcn9274 = {
++	.wakeup = NULL,
++	.release = NULL,
++};
++
++static const struct ath12k_pci_ops ath12k_pci_ops_wcn7850 = {
++	.wakeup = ath12k_pci_bus_wake_up,
++	.release = ath12k_pci_bus_release,
++};
++
+ static void ath12k_pci_select_window(struct ath12k_pci *ab_pci, u32 offset)
+ {
+ 	struct ath12k_base *ab = ab_pci->ab;
+@@ -989,13 +1013,14 @@ u32 ath12k_pci_read32(struct ath12k_base *ab, u32 offset)
+ {
+ 	struct ath12k_pci *ab_pci = ath12k_pci_priv(ab);
+ 	u32 val, window_start;
++	int ret = 0;
+ 
+ 	/* for offset beyond BAR + 4K - 32, may
+ 	 * need to wakeup MHI to access.
+ 	 */
+ 	if (test_bit(ATH12K_PCI_FLAG_INIT_DONE, &ab_pci->flags) &&
+-	    offset >= ACCESS_ALWAYS_OFF)
+-		mhi_device_get_sync(ab_pci->mhi_ctrl->mhi_dev);
++	    offset >= ACCESS_ALWAYS_OFF && ab_pci->pci_ops->wakeup)
++		ret = ab_pci->pci_ops->wakeup(ab);
+ 
+ 	if (offset < WINDOW_START) {
+ 		val = ioread32(ab->mem + offset);
+@@ -1023,9 +1048,9 @@ u32 ath12k_pci_read32(struct ath12k_base *ab, u32 offset)
+ 	}
+ 
+ 	if (test_bit(ATH12K_PCI_FLAG_INIT_DONE, &ab_pci->flags) &&
+-	    offset >= ACCESS_ALWAYS_OFF)
+-		mhi_device_put(ab_pci->mhi_ctrl->mhi_dev);
+-
++	    offset >= ACCESS_ALWAYS_OFF && ab_pci->pci_ops->release &&
++	    !ret)
++		ab_pci->pci_ops->release(ab);
+ 	return val;
+ }
+ 
+@@ -1033,13 +1058,14 @@ void ath12k_pci_write32(struct ath12k_base *ab, u32 offset, u32 value)
+ {
+ 	struct ath12k_pci *ab_pci = ath12k_pci_priv(ab);
+ 	u32 window_start;
++	int ret = 0;
+ 
+ 	/* for offset beyond BAR + 4K - 32, may
+ 	 * need to wakeup MHI to access.
+ 	 */
+ 	if (test_bit(ATH12K_PCI_FLAG_INIT_DONE, &ab_pci->flags) &&
+-	    offset >= ACCESS_ALWAYS_OFF)
+-		mhi_device_get_sync(ab_pci->mhi_ctrl->mhi_dev);
++	    offset >= ACCESS_ALWAYS_OFF && ab_pci->pci_ops->wakeup)
++		ret = ab_pci->pci_ops->wakeup(ab);
+ 
+ 	if (offset < WINDOW_START) {
+ 		iowrite32(value, ab->mem + offset);
+@@ -1067,8 +1093,9 @@ void ath12k_pci_write32(struct ath12k_base *ab, u32 offset, u32 value)
+ 	}
+ 
+ 	if (test_bit(ATH12K_PCI_FLAG_INIT_DONE, &ab_pci->flags) &&
+-	    offset >= ACCESS_ALWAYS_OFF)
+-		mhi_device_put(ab_pci->mhi_ctrl->mhi_dev);
++	    offset >= ACCESS_ALWAYS_OFF && ab_pci->pci_ops->release &&
++	    !ret)
++		ab_pci->pci_ops->release(ab);
+ }
+ 
+ int ath12k_pci_power_up(struct ath12k_base *ab)
+@@ -1182,6 +1209,7 @@ static int ath12k_pci_probe(struct pci_dev *pdev,
+ 	case QCN9274_DEVICE_ID:
+ 		ab_pci->msi_config = &ath12k_msi_config[0];
+ 		ab->static_window_map = true;
++		ab_pci->pci_ops = &ath12k_pci_ops_qcn9274;
+ 		ath12k_pci_read_hw_version(ab, &soc_hw_version_major,
+ 					   &soc_hw_version_minor);
+ 		switch (soc_hw_version_major) {
+@@ -1203,6 +1231,7 @@ static int ath12k_pci_probe(struct pci_dev *pdev,
+ 		ab_pci->msi_config = &ath12k_msi_config[0];
+ 		ab->static_window_map = false;
+ 		ab->hw_rev = ATH12K_HW_WCN7850_HW20;
++		ab_pci->pci_ops = &ath12k_pci_ops_wcn7850;
+ 		break;
+ 
+ 	default:
+diff --git a/drivers/net/wireless/ath/ath12k/pci.h b/drivers/net/wireless/ath/ath12k/pci.h
+index 0d9e40ab31f26..0f24fd9395cd9 100644
+--- a/drivers/net/wireless/ath/ath12k/pci.h
++++ b/drivers/net/wireless/ath/ath12k/pci.h
+@@ -86,6 +86,11 @@ enum ath12k_pci_flags {
+ 	ATH12K_PCI_ASPM_RESTORE,
+ };
+ 
++struct ath12k_pci_ops {
++	int (*wakeup)(struct ath12k_base *ab);
++	void (*release)(struct ath12k_base *ab);
++};
++
+ struct ath12k_pci {
+ 	struct pci_dev *pdev;
+ 	struct ath12k_base *ab;
+@@ -103,6 +108,7 @@ struct ath12k_pci {
+ 	/* enum ath12k_pci_flags */
+ 	unsigned long flags;
+ 	u16 link_ctl;
++	const struct ath12k_pci_ops *pci_ops;
+ };
+ 
+ static inline struct ath12k_pci *ath12k_pci_priv(struct ath12k_base *ab)
+diff --git a/drivers/net/wireless/ath/ath12k/qmi.c b/drivers/net/wireless/ath/ath12k/qmi.c
+index 979a63f2e2ab8..03ba245fbee92 100644
+--- a/drivers/net/wireless/ath/ath12k/qmi.c
++++ b/drivers/net/wireless/ath/ath12k/qmi.c
+@@ -2991,7 +2991,7 @@ static void ath12k_qmi_driver_event_work(struct work_struct *work)
+ 		spin_unlock(&qmi->event_lock);
+ 
+ 		if (test_bit(ATH12K_FLAG_UNREGISTERING, &ab->dev_flags))
+-			return;
++			goto skip;
+ 
+ 		switch (event->type) {
+ 		case ATH12K_QMI_EVENT_SERVER_ARRIVE:
+@@ -3032,6 +3032,8 @@ static void ath12k_qmi_driver_event_work(struct work_struct *work)
+ 			ath12k_warn(ab, "invalid event type: %d", event->type);
+ 			break;
+ 		}
++
++skip:
+ 		kfree(event);
+ 		spin_lock(&qmi->event_lock);
+ 	}
+diff --git a/drivers/net/wireless/ath/key.c b/drivers/net/wireless/ath/key.c
+index 61b59a804e308..b7b61d4f02bae 100644
+--- a/drivers/net/wireless/ath/key.c
++++ b/drivers/net/wireless/ath/key.c
+@@ -503,7 +503,7 @@ int ath_key_config(struct ath_common *common,
+ 
+ 	hk.kv_len = key->keylen;
+ 	if (key->keylen)
+-		memcpy(hk.kv_val, key->key, key->keylen);
++		memcpy(&hk.kv_values, key->key, key->keylen);
+ 
+ 	if (!(key->flags & IEEE80211_KEY_FLAG_PAIRWISE)) {
+ 		switch (vif->type) {
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c
+index ff710b0b5071a..00679a990e3da 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c
+@@ -1039,6 +1039,11 @@ static int brcmf_ops_sdio_probe(struct sdio_func *func,
+ 	struct brcmf_sdio_dev *sdiodev;
+ 	struct brcmf_bus *bus_if;
+ 
++	if (!id) {
++		dev_err(&func->dev, "Error no sdio_device_id passed for %x:%x\n", func->vendor, func->device);
++		return -ENODEV;
++	}
++
+ 	brcmf_dbg(SDIO, "Enter\n");
+ 	brcmf_dbg(SDIO, "Class=%x\n", func->class);
+ 	brcmf_dbg(SDIO, "sdio vendor ID: 0x%04x\n", func->vendor);
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+index 3d33e687964ad..06c362e0b12fc 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+@@ -1617,13 +1617,14 @@ static int brcmf_set_pmk(struct brcmf_if *ifp, const u8 *pmk_data, u16 pmk_len)
+ {
+ 	struct brcmf_pub *drvr = ifp->drvr;
+ 	struct brcmf_wsec_pmk_le pmk;
+-	int i, err;
++	int err;
++
++	memset(&pmk, 0, sizeof(pmk));
+ 
+-	/* convert to firmware key format */
+-	pmk.key_len = cpu_to_le16(pmk_len << 1);
+-	pmk.flags = cpu_to_le16(BRCMF_WSEC_PASSPHRASE);
+-	for (i = 0; i < pmk_len; i++)
+-		snprintf(&pmk.key[2 * i], 3, "%02x", pmk_data[i]);
++	/* pass pmk directly */
++	pmk.key_len = cpu_to_le16(pmk_len);
++	pmk.flags = cpu_to_le16(0);
++	memcpy(pmk.key, pmk_data, pmk_len);
+ 
+ 	/* store psk in firmware */
+ 	err = brcmf_fil_cmd_data_set(ifp, BRCMF_C_SET_WSEC_PMK,
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/chip.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/chip.c
+index 8073f31be27d9..9cdbd8d438439 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/chip.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/chip.c
+@@ -737,6 +737,8 @@ static u32 brcmf_chip_tcm_rambase(struct brcmf_chip_priv *ci)
+ 		return 0x170000;
+ 	case BRCM_CC_4378_CHIP_ID:
+ 		return 0x352000;
++	case BRCM_CC_4387_CHIP_ID:
++		return 0x740000;
+ 	default:
+ 		brcmf_err("unknown chip: %s\n", ci->pub.name);
+ 		break;
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
+index a9b9b2dc62d4f..3fb333d5d58f1 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
+@@ -15,6 +15,7 @@
+ #include <linux/sched/signal.h>
+ #include <linux/kthread.h>
+ #include <linux/io.h>
++#include <linux/random.h>
+ #include <asm/unaligned.h>
+ 
+ #include <soc.h>
+@@ -66,6 +67,7 @@ BRCMF_FW_DEF(4366C, "brcmfmac4366c-pcie");
+ BRCMF_FW_DEF(4371, "brcmfmac4371-pcie");
+ BRCMF_FW_CLM_DEF(4377B3, "brcmfmac4377b3-pcie");
+ BRCMF_FW_CLM_DEF(4378B1, "brcmfmac4378b1-pcie");
++BRCMF_FW_CLM_DEF(4387C2, "brcmfmac4387c2-pcie");
+ 
+ /* firmware config files */
+ MODULE_FIRMWARE(BRCMF_FW_DEFAULT_PATH "brcmfmac*-pcie.txt");
+@@ -100,6 +102,7 @@ static const struct brcmf_firmware_mapping brcmf_pcie_fwnames[] = {
+ 	BRCMF_FW_ENTRY(BRCM_CC_4371_CHIP_ID, 0xFFFFFFFF, 4371),
+ 	BRCMF_FW_ENTRY(BRCM_CC_4377_CHIP_ID, 0xFFFFFFFF, 4377B3), /* revision ID 4 */
+ 	BRCMF_FW_ENTRY(BRCM_CC_4378_CHIP_ID, 0xFFFFFFFF, 4378B1), /* revision ID 3 */
++	BRCMF_FW_ENTRY(BRCM_CC_4387_CHIP_ID, 0xFFFFFFFF, 4387C2), /* revision ID 7 */
+ };
+ 
+ #define BRCMF_PCIE_FW_UP_TIMEOUT		5000 /* msec */
+@@ -1653,6 +1656,13 @@ brcmf_pcie_init_share_ram_info(struct brcmf_pciedev_info *devinfo,
+ 	return 0;
+ }
+ 
++struct brcmf_random_seed_footer {
++	__le32 length;
++	__le32 magic;
++};
++
++#define BRCMF_RANDOM_SEED_MAGIC		0xfeedc0de
++#define BRCMF_RANDOM_SEED_LENGTH	0x100
+ 
+ static int brcmf_pcie_download_fw_nvram(struct brcmf_pciedev_info *devinfo,
+ 					const struct firmware *fw, void *nvram,
+@@ -1689,6 +1699,30 @@ static int brcmf_pcie_download_fw_nvram(struct brcmf_pciedev_info *devinfo,
+ 			  nvram_len;
+ 		memcpy_toio(devinfo->tcm + address, nvram, nvram_len);
+ 		brcmf_fw_nvram_free(nvram);
++
++		if (devinfo->otp.valid) {
++			size_t rand_len = BRCMF_RANDOM_SEED_LENGTH;
++			struct brcmf_random_seed_footer footer = {
++				.length = cpu_to_le32(rand_len),
++				.magic = cpu_to_le32(BRCMF_RANDOM_SEED_MAGIC),
++			};
++			void *randbuf;
++
++			/* Some Apple chips/firmwares expect a buffer of random
++			 * data to be present before NVRAM
++			 */
++			brcmf_dbg(PCIE, "Download random seed\n");
++
++			address -= sizeof(footer);
++			memcpy_toio(devinfo->tcm + address, &footer,
++				    sizeof(footer));
++
++			address -= rand_len;
++			randbuf = kzalloc(rand_len, GFP_KERNEL);
++			get_random_bytes(randbuf, rand_len);
++			memcpy_toio(devinfo->tcm + address, randbuf, rand_len);
++			kfree(randbuf);
++		}
+ 	} else {
+ 		brcmf_dbg(PCIE, "No matching NVRAM file found %s\n",
+ 			  devinfo->nvram_name);
+@@ -2016,6 +2050,11 @@ static int brcmf_pcie_read_otp(struct brcmf_pciedev_info *devinfo)
+ 		base = 0x1120;
+ 		words = 0x170;
+ 		break;
++	case BRCM_CC_4387_CHIP_ID:
++		coreid = BCMA_CORE_GCI;
++		base = 0x113c;
++		words = 0x170;
++		break;
+ 	default:
+ 		/* OTP not supported on this chip */
+ 		return 0;
+@@ -2339,6 +2378,9 @@ static void brcmf_pcie_debugfs_create(struct device *dev)
+ }
+ #endif
+ 
++/* Forward declaration for pci_match_id() call */
++static const struct pci_device_id brcmf_pcie_devid_table[];
++
+ static int
+ brcmf_pcie_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ {
+@@ -2349,6 +2391,14 @@ brcmf_pcie_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 	struct brcmf_core *core;
+ 	struct brcmf_bus *bus;
+ 
++	if (!id) {
++		id = pci_match_id(brcmf_pcie_devid_table, pdev);
++		if (!id) {
++			pci_err(pdev, "Error could not find pci_device_id for %x:%x\n", pdev->vendor, pdev->device);
++			return -ENODEV;
++		}
++	}
++
+ 	brcmf_dbg(PCIE, "Enter %x:%x\n", pdev->vendor, pdev->device);
+ 
+ 	ret = -ENOMEM;
+@@ -2630,6 +2680,7 @@ static const struct pci_device_id brcmf_pcie_devid_table[] = {
+ 	BRCMF_PCIE_DEVICE(BRCM_PCIE_43596_DEVICE_ID, CYW),
+ 	BRCMF_PCIE_DEVICE(BRCM_PCIE_4377_DEVICE_ID, WCC),
+ 	BRCMF_PCIE_DEVICE(BRCM_PCIE_4378_DEVICE_ID, WCC),
++	BRCMF_PCIE_DEVICE(BRCM_PCIE_4387_DEVICE_ID, WCC),
+ 
+ 	{ /* end: all zeroes */ }
+ };
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/usb.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/usb.c
+index 246843aeb6964..2178675ae1a44 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/usb.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/usb.c
+@@ -1331,6 +1331,9 @@ brcmf_usb_disconnect_cb(struct brcmf_usbdev_info *devinfo)
+ 	brcmf_usb_detach(devinfo);
+ }
+ 
++/* Forward declaration for usb_match_id() call */
++static const struct usb_device_id brcmf_usb_devid_table[];
++
+ static int
+ brcmf_usb_probe(struct usb_interface *intf, const struct usb_device_id *id)
+ {
+@@ -1342,6 +1345,14 @@ brcmf_usb_probe(struct usb_interface *intf, const struct usb_device_id *id)
+ 	u32 num_of_eps;
+ 	u8 endpoint_num, ep;
+ 
++	if (!id) {
++		id = usb_match_id(intf, brcmf_usb_devid_table);
++		if (!id) {
++			dev_err(&intf->dev, "Error could not find matching usb_device_id\n");
++			return -ENODEV;
++		}
++	}
++
+ 	brcmf_dbg(USB, "Enter 0x%04x:0x%04x\n", id->idVendor, id->idProduct);
+ 
+ 	devinfo = kzalloc(sizeof(*devinfo), GFP_ATOMIC);
+diff --git a/drivers/net/wireless/broadcom/brcm80211/include/brcm_hw_ids.h b/drivers/net/wireless/broadcom/brcm80211/include/brcm_hw_ids.h
+index 896615f579522..44684bf1b9acc 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/include/brcm_hw_ids.h
++++ b/drivers/net/wireless/broadcom/brcm80211/include/brcm_hw_ids.h
+@@ -54,6 +54,7 @@
+ #define BRCM_CC_4371_CHIP_ID		0x4371
+ #define BRCM_CC_4377_CHIP_ID		0x4377
+ #define BRCM_CC_4378_CHIP_ID		0x4378
++#define BRCM_CC_4387_CHIP_ID		0x4387
+ #define CY_CC_4373_CHIP_ID		0x4373
+ #define CY_CC_43012_CHIP_ID		43012
+ #define CY_CC_43439_CHIP_ID		43439
+@@ -95,6 +96,7 @@
+ #define BRCM_PCIE_43596_DEVICE_ID	0x4415
+ #define BRCM_PCIE_4377_DEVICE_ID	0x4488
+ #define BRCM_PCIE_4378_DEVICE_ID	0x4425
++#define BRCM_PCIE_4387_DEVICE_ID	0x4433
+ 
+ /* brcmsmac IDs */
+ #define BCM4313_D11N2G_ID	0x4727	/* 4313 802.11n 2.4G device */
+diff --git a/drivers/net/wireless/intel/iwlwifi/dvm/sta.c b/drivers/net/wireless/intel/iwlwifi/dvm/sta.c
+index cef43cf80620a..8b01ab986cb13 100644
+--- a/drivers/net/wireless/intel/iwlwifi/dvm/sta.c
++++ b/drivers/net/wireless/intel/iwlwifi/dvm/sta.c
+@@ -1081,6 +1081,7 @@ static int iwlagn_send_sta_key(struct iwl_priv *priv,
+ {
+ 	__le16 key_flags;
+ 	struct iwl_addsta_cmd sta_cmd;
++	size_t to_copy;
+ 	int i;
+ 
+ 	spin_lock_bh(&priv->sta_lock);
+@@ -1100,7 +1101,9 @@ static int iwlagn_send_sta_key(struct iwl_priv *priv,
+ 		sta_cmd.key.tkip_rx_tsc_byte2 = tkip_iv32;
+ 		for (i = 0; i < 5; i++)
+ 			sta_cmd.key.tkip_rx_ttak[i] = cpu_to_le16(tkip_p1k[i]);
+-		memcpy(sta_cmd.key.key, keyconf->key, keyconf->keylen);
++		/* keyconf may contain MIC rx/tx keys which iwl does not use */
++		to_copy = min_t(size_t, sizeof(sta_cmd.key.key), keyconf->keylen);
++		memcpy(sta_cmd.key.key, keyconf->key, to_copy);
+ 		break;
+ 	case WLAN_CIPHER_SUITE_WEP104:
+ 		key_flags |= STA_KEY_FLG_KEY_SIZE_MSK;
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/acpi.c b/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
+index a02e5a67b7066..585e8cd2d332d 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
++++ b/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
+@@ -38,7 +38,7 @@ static const struct dmi_system_id dmi_ppag_approved_list[] = {
+ 	},
+ 	{ .ident = "ASUS",
+ 	  .matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTek COMPUTER INC."),
++			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+ 		},
+ 	},
+ 	{}
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/dbg.c b/drivers/net/wireless/intel/iwlwifi/fw/dbg.c
+index 027360e63b926..3ef0b776b7727 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/dbg.c
++++ b/drivers/net/wireless/intel/iwlwifi/fw/dbg.c
+@@ -1664,14 +1664,10 @@ static __le32 iwl_get_mon_reg(struct iwl_fw_runtime *fwrt, u32 alloc_id,
+ }
+ 
+ static void *
+-iwl_dump_ini_mon_fill_header(struct iwl_fw_runtime *fwrt,
+-			     struct iwl_dump_ini_region_data *reg_data,
++iwl_dump_ini_mon_fill_header(struct iwl_fw_runtime *fwrt, u32 alloc_id,
+ 			     struct iwl_fw_ini_monitor_dump *data,
+ 			     const struct iwl_fw_mon_regs *addrs)
+ {
+-	struct iwl_fw_ini_region_tlv *reg = (void *)reg_data->reg_tlv->data;
+-	u32 alloc_id = le32_to_cpu(reg->dram_alloc_id);
+-
+ 	if (!iwl_trans_grab_nic_access(fwrt->trans)) {
+ 		IWL_ERR(fwrt, "Failed to get monitor header\n");
+ 		return NULL;
+@@ -1702,8 +1698,10 @@ iwl_dump_ini_mon_dram_fill_header(struct iwl_fw_runtime *fwrt,
+ 				  void *data, u32 data_len)
+ {
+ 	struct iwl_fw_ini_monitor_dump *mon_dump = (void *)data;
++	struct iwl_fw_ini_region_tlv *reg = (void *)reg_data->reg_tlv->data;
++	u32 alloc_id = le32_to_cpu(reg->dram_alloc_id);
+ 
+-	return iwl_dump_ini_mon_fill_header(fwrt, reg_data, mon_dump,
++	return iwl_dump_ini_mon_fill_header(fwrt, alloc_id, mon_dump,
+ 					    &fwrt->trans->cfg->mon_dram_regs);
+ }
+ 
+@@ -1713,8 +1711,10 @@ iwl_dump_ini_mon_smem_fill_header(struct iwl_fw_runtime *fwrt,
+ 				  void *data, u32 data_len)
+ {
+ 	struct iwl_fw_ini_monitor_dump *mon_dump = (void *)data;
++	struct iwl_fw_ini_region_tlv *reg = (void *)reg_data->reg_tlv->data;
++	u32 alloc_id = le32_to_cpu(reg->internal_buffer.alloc_id);
+ 
+-	return iwl_dump_ini_mon_fill_header(fwrt, reg_data, mon_dump,
++	return iwl_dump_ini_mon_fill_header(fwrt, alloc_id, mon_dump,
+ 					    &fwrt->trans->cfg->mon_smem_regs);
+ }
+ 
+@@ -1725,7 +1725,10 @@ iwl_dump_ini_mon_dbgi_fill_header(struct iwl_fw_runtime *fwrt,
+ {
+ 	struct iwl_fw_ini_monitor_dump *mon_dump = (void *)data;
+ 
+-	return iwl_dump_ini_mon_fill_header(fwrt, reg_data, mon_dump,
++	return iwl_dump_ini_mon_fill_header(fwrt,
++					    /* no offset calculation later */
++					    IWL_FW_INI_ALLOCATION_ID_DBGC1,
++					    mon_dump,
+ 					    &fwrt->trans->cfg->mon_dbgi_regs);
+ }
+ 
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
+index 0c6b49fcb00d4..0ce0f228c9bdf 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
+@@ -1076,7 +1076,7 @@ static const struct dmi_system_id dmi_tas_approved_list[] = {
+ 	},
+ 		{ .ident = "LENOVO",
+ 	  .matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Lenovo"),
++			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+ 		},
+ 	},
+ 	{ .ident = "DELL",
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+index 9fc2d5d8b7d75..a25fd90816f5b 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+@@ -3587,7 +3587,7 @@ static int __iwl_mvm_mac_set_key(struct ieee80211_hw *hw,
+ 	struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+ 	struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
+ 	struct iwl_mvm_sta *mvmsta = NULL;
+-	struct iwl_mvm_key_pn *ptk_pn;
++	struct iwl_mvm_key_pn *ptk_pn = NULL;
+ 	int keyidx = key->keyidx;
+ 	u32 sec_key_id = WIDE_ID(DATA_PATH_GROUP, SEC_KEY_CMD);
+ 	u8 sec_key_ver = iwl_fw_lookup_cmd_ver(mvm->fw, sec_key_id, 0);
+@@ -3739,6 +3739,10 @@ static int __iwl_mvm_mac_set_key(struct ieee80211_hw *hw,
+ 		if (ret) {
+ 			IWL_WARN(mvm, "set key failed\n");
+ 			key->hw_key_idx = STA_KEY_IDX_INVALID;
++			if (ptk_pn) {
++				RCU_INIT_POINTER(mvmsta->ptk_pn[keyidx], NULL);
++				kfree(ptk_pn);
++			}
+ 			/*
+ 			 * can't add key for RX, but we don't need it
+ 			 * in the device for TX so still return 0,
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/nvm.c b/drivers/net/wireless/intel/iwlwifi/mvm/nvm.c
+index 6d18a1fd649b9..fdf60afb0f3f2 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/nvm.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/nvm.c
+@@ -445,6 +445,11 @@ iwl_mvm_update_mcc(struct iwl_mvm *mvm, const char *alpha2,
+ 		struct iwl_mcc_update_resp *mcc_resp = (void *)pkt->data;
+ 
+ 		n_channels =  __le32_to_cpu(mcc_resp->n_channels);
++		if (iwl_rx_packet_payload_len(pkt) !=
++		    struct_size(mcc_resp, channels, n_channels)) {
++			resp_cp = ERR_PTR(-EINVAL);
++			goto exit;
++		}
+ 		resp_len = sizeof(struct iwl_mcc_update_resp) +
+ 			   n_channels * sizeof(__le32);
+ 		resp_cp = kmemdup(mcc_resp, resp_len, GFP_KERNEL);
+@@ -456,6 +461,11 @@ iwl_mvm_update_mcc(struct iwl_mvm *mvm, const char *alpha2,
+ 		struct iwl_mcc_update_resp_v3 *mcc_resp_v3 = (void *)pkt->data;
+ 
+ 		n_channels =  __le32_to_cpu(mcc_resp_v3->n_channels);
++		if (iwl_rx_packet_payload_len(pkt) !=
++		    struct_size(mcc_resp_v3, channels, n_channels)) {
++			resp_cp = ERR_PTR(-EINVAL);
++			goto exit;
++		}
+ 		resp_len = sizeof(struct iwl_mcc_update_resp) +
+ 			   n_channels * sizeof(__le32);
+ 		resp_cp = kzalloc(resp_len, GFP_KERNEL);
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c b/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
+index e685113172c52..ad410b6efce73 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
+@@ -1967,7 +1967,7 @@ void iwl_mvm_rx_mpdu_mq(struct iwl_mvm *mvm, struct napi_struct *napi,
+ 				RCU_INIT_POINTER(mvm->csa_tx_blocked_vif, NULL);
+ 				/* Unblock BCAST / MCAST station */
+ 				iwl_mvm_modify_all_sta_disable_tx(mvm, mvmvif, false);
+-				cancel_delayed_work_sync(&mvm->cs_tx_unblock_dwork);
++				cancel_delayed_work(&mvm->cs_tx_unblock_dwork);
+ 			}
+ 		}
+ 
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
+index 9813d7fa18007..1c454392de0be 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
+@@ -791,10 +791,11 @@ unsigned int iwl_mvm_max_amsdu_size(struct iwl_mvm *mvm,
+ 				    struct ieee80211_sta *sta, unsigned int tid)
+ {
+ 	struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta);
+-	enum nl80211_band band = mvmsta->vif->bss_conf.chandef.chan->band;
+ 	u8 ac = tid_to_mac80211_ac[tid];
++	enum nl80211_band band;
+ 	unsigned int txf;
+-	int lmac = iwl_mvm_get_lmac_id(mvm->fw, band);
++	unsigned int val;
++	int lmac;
+ 
+ 	/* For HE redirect to trigger based fifos */
+ 	if (sta->deflink.he_cap.has_he && !WARN_ON(!iwl_mvm_has_new_tx_api(mvm)))
+@@ -808,7 +809,37 @@ unsigned int iwl_mvm_max_amsdu_size(struct iwl_mvm *mvm,
+ 	 * We also want to have the start of the next packet inside the
+ 	 * fifo to be able to send bursts.
+ 	 */
+-	return min_t(unsigned int, mvmsta->max_amsdu_len,
++	val = mvmsta->max_amsdu_len;
++
++	if (hweight16(sta->valid_links) <= 1) {
++		if (sta->valid_links) {
++			struct ieee80211_bss_conf *link_conf;
++			unsigned int link = ffs(sta->valid_links) - 1;
++
++			rcu_read_lock();
++			link_conf = rcu_dereference(mvmsta->vif->link_conf[link]);
++			if (WARN_ON(!link_conf))
++				band = NL80211_BAND_2GHZ;
++			else
++				band = link_conf->chandef.chan->band;
++			rcu_read_unlock();
++		} else {
++			band = mvmsta->vif->bss_conf.chandef.chan->band;
++		}
++
++		lmac = iwl_mvm_get_lmac_id(mvm->fw, band);
++	} else if (fw_has_capa(&mvm->fw->ucode_capa,
++			       IWL_UCODE_TLV_CAPA_CDB_SUPPORT)) {
++		/* for real MLO restrict to both LMACs if they exist */
++		lmac = IWL_LMAC_5G_INDEX;
++		val = min_t(unsigned int, val,
++			    mvm->fwrt.smem_cfg.lmac[lmac].txfifo_size[txf] - 256);
++		lmac = IWL_LMAC_24G_INDEX;
++	} else {
++		lmac = IWL_LMAC_24G_INDEX;
++	}
++
++	return min_t(unsigned int, val,
+ 		     mvm->fwrt.smem_cfg.lmac[lmac].txfifo_size[txf] - 256);
+ }
+ 
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+index a0bf19b18635c..25b2d41de4c1d 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+@@ -504,6 +504,7 @@ static const struct pci_device_id iwl_hw_card_ids[] = {
+ 
+ /* Bz devices */
+ 	{IWL_PCI_DEVICE(0x2727, PCI_ANY_ID, iwl_bz_trans_cfg)},
++	{IWL_PCI_DEVICE(0x272b, PCI_ANY_ID, iwl_bz_trans_cfg)},
+ 	{IWL_PCI_DEVICE(0xA840, PCI_ANY_ID, iwl_bz_trans_cfg)},
+ 	{IWL_PCI_DEVICE(0x7740, PCI_ANY_ID, iwl_bz_trans_cfg)},
+ #endif /* CONFIG_IWLMVM */
+@@ -1698,6 +1699,9 @@ static void iwl_pci_remove(struct pci_dev *pdev)
+ {
+ 	struct iwl_trans *trans = pci_get_drvdata(pdev);
+ 
++	if (!trans)
++		return;
++
+ 	iwl_drv_stop(trans->drv);
+ 
+ 	iwl_trans_pcie_free(trans);
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
+index 171b6bf4a65a0..3cc61c30cca16 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
+@@ -2861,7 +2861,7 @@ static bool iwl_write_to_user_buf(char __user *user_buf, ssize_t count,
+ 				  void *buf, ssize_t *size,
+ 				  ssize_t *bytes_copied)
+ {
+-	int buf_size_left = count - *bytes_copied;
++	ssize_t buf_size_left = count - *bytes_copied;
+ 
+ 	buf_size_left = buf_size_left - (buf_size_left % sizeof(u32));
+ 	if (*size > buf_size_left)
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76_connac2_mac.h b/drivers/net/wireless/mediatek/mt76/mt76_connac2_mac.h
+index f33171bcd3432..c3b692eac6f65 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76_connac2_mac.h
++++ b/drivers/net/wireless/mediatek/mt76/mt76_connac2_mac.h
+@@ -163,7 +163,7 @@ enum {
+ #define MT_TXS5_MPDU_TX_CNT		GENMASK(31, 23)
+ 
+ #define MT_TXS6_MPDU_FAIL_CNT		GENMASK(31, 23)
+-
++#define MT_TXS7_MPDU_RETRY_BYTE		GENMASK(22, 0)
+ #define MT_TXS7_MPDU_RETRY_CNT		GENMASK(31, 23)
+ 
+ /* RXD DW1 */
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76_connac_mac.c b/drivers/net/wireless/mediatek/mt76/mt76_connac_mac.c
+index 82aac0a04655f..e57eade24ae56 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76_connac_mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76_connac_mac.c
+@@ -576,7 +576,8 @@ bool mt76_connac2_mac_fill_txs(struct mt76_dev *dev, struct mt76_wcid *wcid,
+ 	/* PPDU based reporting */
+ 	if (FIELD_GET(MT_TXS0_TXS_FORMAT, txs) > 1) {
+ 		stats->tx_bytes +=
+-			le32_get_bits(txs_data[5], MT_TXS5_MPDU_TX_BYTE);
++			le32_get_bits(txs_data[5], MT_TXS5_MPDU_TX_BYTE) -
++			le32_get_bits(txs_data[7], MT_TXS7_MPDU_RETRY_BYTE);
+ 		stats->tx_packets +=
+ 			le32_get_bits(txs_data[5], MT_TXS5_MPDU_TX_CNT);
+ 		stats->tx_failed +=
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/usb.c b/drivers/net/wireless/mediatek/mt76/mt7921/usb.c
+index 70c9bbdbf60e9..09ab9b83c2011 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/usb.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/usb.c
+@@ -18,6 +18,9 @@ static const struct usb_device_id mt7921u_device_table[] = {
+ 	/* Comfast CF-952AX */
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(0x3574, 0x6211, 0xff, 0xff, 0xff),
+ 		.driver_info = (kernel_ulong_t)MT7921_FIRMWARE_WM },
++	/* Netgear, Inc. [A8000,AXE3000] */
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x0846, 0x9060, 0xff, 0xff, 0xff),
++		.driver_info = (kernel_ulong_t)MT7921_FIRMWARE_WM },
+ 	{ },
+ };
+ 
+diff --git a/drivers/net/wireless/realtek/rtw88/mac80211.c b/drivers/net/wireless/realtek/rtw88/mac80211.c
+index 3b92ac611d3fd..e29ca5dcf105b 100644
+--- a/drivers/net/wireless/realtek/rtw88/mac80211.c
++++ b/drivers/net/wireless/realtek/rtw88/mac80211.c
+@@ -893,7 +893,7 @@ static void rtw_ops_sta_rc_update(struct ieee80211_hw *hw,
+ 	struct rtw_sta_info *si = (struct rtw_sta_info *)sta->drv_priv;
+ 
+ 	if (changed & IEEE80211_RC_BW_CHANGED)
+-		rtw_update_sta_info(rtwdev, si, true);
++		ieee80211_queue_work(rtwdev->hw, &si->rc_work);
+ }
+ 
+ const struct ieee80211_ops rtw_ops = {
+diff --git a/drivers/net/wireless/realtek/rtw88/main.c b/drivers/net/wireless/realtek/rtw88/main.c
+index b2e78737bd5d0..76f7aadef77c5 100644
+--- a/drivers/net/wireless/realtek/rtw88/main.c
++++ b/drivers/net/wireless/realtek/rtw88/main.c
+@@ -298,6 +298,17 @@ static u8 rtw_acquire_macid(struct rtw_dev *rtwdev)
+ 	return mac_id;
+ }
+ 
++static void rtw_sta_rc_work(struct work_struct *work)
++{
++	struct rtw_sta_info *si = container_of(work, struct rtw_sta_info,
++					       rc_work);
++	struct rtw_dev *rtwdev = si->rtwdev;
++
++	mutex_lock(&rtwdev->mutex);
++	rtw_update_sta_info(rtwdev, si, true);
++	mutex_unlock(&rtwdev->mutex);
++}
++
+ int rtw_sta_add(struct rtw_dev *rtwdev, struct ieee80211_sta *sta,
+ 		struct ieee80211_vif *vif)
+ {
+@@ -308,12 +319,14 @@ int rtw_sta_add(struct rtw_dev *rtwdev, struct ieee80211_sta *sta,
+ 	if (si->mac_id >= RTW_MAX_MAC_ID_NUM)
+ 		return -ENOSPC;
+ 
++	si->rtwdev = rtwdev;
+ 	si->sta = sta;
+ 	si->vif = vif;
+ 	si->init_ra_lv = 1;
+ 	ewma_rssi_init(&si->avg_rssi);
+ 	for (i = 0; i < ARRAY_SIZE(sta->txq); i++)
+ 		rtw_txq_init(rtwdev, sta->txq[i]);
++	INIT_WORK(&si->rc_work, rtw_sta_rc_work);
+ 
+ 	rtw_update_sta_info(rtwdev, si, true);
+ 	rtw_fw_media_status_report(rtwdev, si->mac_id, true);
+@@ -332,6 +345,8 @@ void rtw_sta_remove(struct rtw_dev *rtwdev, struct ieee80211_sta *sta,
+ 	struct rtw_sta_info *si = (struct rtw_sta_info *)sta->drv_priv;
+ 	int i;
+ 
++	cancel_work_sync(&si->rc_work);
++
+ 	rtw_release_macid(rtwdev, si->mac_id);
+ 	if (fw_exist)
+ 		rtw_fw_media_status_report(rtwdev, si->mac_id, false);
+diff --git a/drivers/net/wireless/realtek/rtw88/main.h b/drivers/net/wireless/realtek/rtw88/main.h
+index d4a53d5567451..cdbcea28802d2 100644
+--- a/drivers/net/wireless/realtek/rtw88/main.h
++++ b/drivers/net/wireless/realtek/rtw88/main.h
+@@ -734,6 +734,7 @@ struct rtw_txq {
+ DECLARE_EWMA(rssi, 10, 16);
+ 
+ struct rtw_sta_info {
++	struct rtw_dev *rtwdev;
+ 	struct ieee80211_sta *sta;
+ 	struct ieee80211_vif *vif;
+ 
+@@ -758,6 +759,8 @@ struct rtw_sta_info {
+ 
+ 	bool use_cfg_mask;
+ 	struct cfg80211_bitrate_mask *mask;
++
++	struct work_struct rc_work;
+ };
+ 
+ enum rtw_bfee_role {
+diff --git a/drivers/net/wireless/realtek/rtw88/usb.c b/drivers/net/wireless/realtek/rtw88/usb.c
+index a10d6fef4ffaf..44a5fafb99055 100644
+--- a/drivers/net/wireless/realtek/rtw88/usb.c
++++ b/drivers/net/wireless/realtek/rtw88/usb.c
+@@ -804,6 +804,7 @@ static void rtw_usb_intf_deinit(struct rtw_dev *rtwdev,
+ 	struct rtw_usb *rtwusb = rtw_get_usb_priv(rtwdev);
+ 
+ 	usb_put_dev(rtwusb->udev);
++	kfree(rtwusb->usb_data);
+ 	usb_set_intfdata(intf, NULL);
+ }
+ 
+@@ -832,7 +833,7 @@ int rtw_usb_probe(struct usb_interface *intf, const struct usb_device_id *id)
+ 
+ 	ret = rtw_usb_alloc_rx_bufs(rtwusb);
+ 	if (ret)
+-		return ret;
++		goto err_release_hw;
+ 
+ 	ret = rtw_core_init(rtwdev);
+ 	if (ret)
+diff --git a/drivers/net/wireless/realtek/rtw88/usb.h b/drivers/net/wireless/realtek/rtw88/usb.h
+index 30647f0dd61c6..ad1d7955c6a51 100644
+--- a/drivers/net/wireless/realtek/rtw88/usb.h
++++ b/drivers/net/wireless/realtek/rtw88/usb.h
+@@ -78,7 +78,7 @@ struct rtw_usb {
+ 	u8 pipe_interrupt;
+ 	u8 pipe_in;
+ 	u8 out_ep[RTW_USB_EP_MAX];
+-	u8 qsel_to_ep[TX_DESC_QSEL_MAX];
++	int qsel_to_ep[TX_DESC_QSEL_MAX];
+ 	u8 usb_txagg_num;
+ 
+ 	struct workqueue_struct *txwq, *rxwq;
+diff --git a/drivers/net/wwan/iosm/iosm_ipc_imem.c b/drivers/net/wwan/iosm/iosm_ipc_imem.c
+index c066b0040a3fe..829515a601b37 100644
+--- a/drivers/net/wwan/iosm/iosm_ipc_imem.c
++++ b/drivers/net/wwan/iosm/iosm_ipc_imem.c
+@@ -565,24 +565,32 @@ static void ipc_imem_run_state_worker(struct work_struct *instance)
+ 	struct ipc_mux_config mux_cfg;
+ 	struct iosm_imem *ipc_imem;
+ 	u8 ctrl_chl_idx = 0;
++	int ret;
+ 
+ 	ipc_imem = container_of(instance, struct iosm_imem, run_state_worker);
+ 
+ 	if (ipc_imem->phase != IPC_P_RUN) {
+ 		dev_err(ipc_imem->dev,
+ 			"Modem link down. Exit run state worker.");
+-		return;
++		goto err_out;
+ 	}
+ 
+ 	if (test_and_clear_bit(IOSM_DEVLINK_INIT, &ipc_imem->flag))
+ 		ipc_devlink_deinit(ipc_imem->ipc_devlink);
+ 
+-	if (!ipc_imem_setup_cp_mux_cap_init(ipc_imem, &mux_cfg))
+-		ipc_imem->mux = ipc_mux_init(&mux_cfg, ipc_imem);
++	ret = ipc_imem_setup_cp_mux_cap_init(ipc_imem, &mux_cfg);
++	if (ret < 0)
++		goto err_out;
++
++	ipc_imem->mux = ipc_mux_init(&mux_cfg, ipc_imem);
++	if (!ipc_imem->mux)
++		goto err_out;
++
++	ret = ipc_imem_wwan_channel_init(ipc_imem, mux_cfg.protocol);
++	if (ret < 0)
++		goto err_ipc_mux_deinit;
+ 
+-	ipc_imem_wwan_channel_init(ipc_imem, mux_cfg.protocol);
+-	if (ipc_imem->mux)
+-		ipc_imem->mux->wwan = ipc_imem->wwan;
++	ipc_imem->mux->wwan = ipc_imem->wwan;
+ 
+ 	while (ctrl_chl_idx < IPC_MEM_MAX_CHANNELS) {
+ 		if (!ipc_chnl_cfg_get(&chnl_cfg_port, ctrl_chl_idx)) {
+@@ -622,6 +630,13 @@ static void ipc_imem_run_state_worker(struct work_struct *instance)
+ 
+ 	/* Complete all memory stores after setting bit */
+ 	smp_mb__after_atomic();
++
++	return;
++
++err_ipc_mux_deinit:
++	ipc_mux_deinit(ipc_imem->mux);
++err_out:
++	ipc_uevent_send(ipc_imem->dev, UEVENT_CD_READY_LINK_DOWN);
+ }
+ 
+ static void ipc_imem_handle_irq(struct iosm_imem *ipc_imem, int irq)
+diff --git a/drivers/net/wwan/iosm/iosm_ipc_imem_ops.c b/drivers/net/wwan/iosm/iosm_ipc_imem_ops.c
+index 66b90cc4c3460..109cf89304888 100644
+--- a/drivers/net/wwan/iosm/iosm_ipc_imem_ops.c
++++ b/drivers/net/wwan/iosm/iosm_ipc_imem_ops.c
+@@ -77,8 +77,8 @@ out:
+ }
+ 
+ /* Initialize wwan channel */
+-void ipc_imem_wwan_channel_init(struct iosm_imem *ipc_imem,
+-				enum ipc_mux_protocol mux_type)
++int ipc_imem_wwan_channel_init(struct iosm_imem *ipc_imem,
++			       enum ipc_mux_protocol mux_type)
+ {
+ 	struct ipc_chnl_cfg chnl_cfg = { 0 };
+ 
+@@ -87,7 +87,7 @@ void ipc_imem_wwan_channel_init(struct iosm_imem *ipc_imem,
+ 	/* If modem version is invalid (0xffffffff), do not initialize WWAN. */
+ 	if (ipc_imem->cp_version == -1) {
+ 		dev_err(ipc_imem->dev, "invalid CP version");
+-		return;
++		return -EIO;
+ 	}
+ 
+ 	ipc_chnl_cfg_get(&chnl_cfg, ipc_imem->nr_of_channels);
+@@ -104,9 +104,13 @@ void ipc_imem_wwan_channel_init(struct iosm_imem *ipc_imem,
+ 
+ 	/* WWAN registration. */
+ 	ipc_imem->wwan = ipc_wwan_init(ipc_imem, ipc_imem->dev);
+-	if (!ipc_imem->wwan)
++	if (!ipc_imem->wwan) {
+ 		dev_err(ipc_imem->dev,
+ 			"failed to register the ipc_wwan interfaces");
++		return -ENOMEM;
++	}
++
++	return 0;
+ }
+ 
+ /* Map SKB to DMA for transfer */
+diff --git a/drivers/net/wwan/iosm/iosm_ipc_imem_ops.h b/drivers/net/wwan/iosm/iosm_ipc_imem_ops.h
+index f8afb217d9e2f..026c5bd0f9992 100644
+--- a/drivers/net/wwan/iosm/iosm_ipc_imem_ops.h
++++ b/drivers/net/wwan/iosm/iosm_ipc_imem_ops.h
+@@ -91,9 +91,11 @@ int ipc_imem_sys_wwan_transmit(struct iosm_imem *ipc_imem, int if_id,
+  *				MUX.
+  * @ipc_imem:		Pointer to iosm_imem struct.
+  * @mux_type:		Type of mux protocol.
++ *
++ * Return: 0 on success and failure value on error
+  */
+-void ipc_imem_wwan_channel_init(struct iosm_imem *ipc_imem,
+-				enum ipc_mux_protocol mux_type);
++int ipc_imem_wwan_channel_init(struct iosm_imem *ipc_imem,
++			       enum ipc_mux_protocol mux_type);
+ 
+ /**
+  * ipc_imem_sys_devlink_open - Open a Flash/CD Channel link to CP
+diff --git a/drivers/parisc/power.c b/drivers/parisc/power.c
+index 456776bd8ee66..6f5e5f0230d39 100644
+--- a/drivers/parisc/power.c
++++ b/drivers/parisc/power.c
+@@ -37,7 +37,6 @@
+ #include <linux/module.h>
+ #include <linux/init.h>
+ #include <linux/kernel.h>
+-#include <linux/notifier.h>
+ #include <linux/panic_notifier.h>
+ #include <linux/reboot.h>
+ #include <linux/sched/signal.h>
+@@ -175,16 +174,21 @@ static void powerfail_interrupt(int code, void *x)
+ 
+ 
+ 
+-/* parisc_panic_event() is called by the panic handler.
+- * As soon as a panic occurs, our tasklets above will not be
+- * executed any longer. This function then re-enables the 
+- * soft-power switch and allows the user to switch off the system
++/*
++ * parisc_panic_event() is called by the panic handler.
++ *
++ * As soon as a panic occurs, our tasklets above will not
++ * be executed any longer. This function then re-enables
++ * the soft-power switch and allows the user to switch off
++ * the system. We rely in pdc_soft_power_button_panic()
++ * since this version spin_trylocks (instead of regular
++ * spinlock), preventing deadlocks on panic path.
+  */
+ static int parisc_panic_event(struct notifier_block *this,
+ 		unsigned long event, void *ptr)
+ {
+ 	/* re-enable the soft-power switch */
+-	pdc_soft_power_button(0);
++	pdc_soft_power_button_panic(0);
+ 	return NOTIFY_DONE;
+ }
+ 
+diff --git a/drivers/phy/st/phy-miphy28lp.c b/drivers/phy/st/phy-miphy28lp.c
+index 068160a34f5cc..e30305b77f0d1 100644
+--- a/drivers/phy/st/phy-miphy28lp.c
++++ b/drivers/phy/st/phy-miphy28lp.c
+@@ -9,6 +9,7 @@
+ 
+ #include <linux/platform_device.h>
+ #include <linux/io.h>
++#include <linux/iopoll.h>
+ #include <linux/kernel.h>
+ #include <linux/module.h>
+ #include <linux/of.h>
+@@ -484,19 +485,11 @@ static inline void miphy28lp_pcie_config_gen(struct miphy28lp_phy *miphy_phy)
+ 
+ static inline int miphy28lp_wait_compensation(struct miphy28lp_phy *miphy_phy)
+ {
+-	unsigned long finish = jiffies + 5 * HZ;
+ 	u8 val;
+ 
+ 	/* Waiting for Compensation to complete */
+-	do {
+-		val = readb_relaxed(miphy_phy->base + MIPHY_COMP_FSM_6);
+-
+-		if (time_after_eq(jiffies, finish))
+-			return -EBUSY;
+-		cpu_relax();
+-	} while (!(val & COMP_DONE));
+-
+-	return 0;
++	return readb_relaxed_poll_timeout(miphy_phy->base + MIPHY_COMP_FSM_6,
++					  val, val & COMP_DONE, 1, 5 * USEC_PER_SEC);
+ }
+ 
+ 
+@@ -805,7 +798,6 @@ static inline void miphy28lp_configure_usb3(struct miphy28lp_phy *miphy_phy)
+ 
+ static inline int miphy_is_ready(struct miphy28lp_phy *miphy_phy)
+ {
+-	unsigned long finish = jiffies + 5 * HZ;
+ 	u8 mask = HFC_PLL | HFC_RDY;
+ 	u8 val;
+ 
+@@ -816,21 +808,14 @@ static inline int miphy_is_ready(struct miphy28lp_phy *miphy_phy)
+ 	if (miphy_phy->type == PHY_TYPE_SATA)
+ 		mask |= PHY_RDY;
+ 
+-	do {
+-		val = readb_relaxed(miphy_phy->base + MIPHY_STATUS_1);
+-		if ((val & mask) != mask)
+-			cpu_relax();
+-		else
+-			return 0;
+-	} while (!time_after_eq(jiffies, finish));
+-
+-	return -EBUSY;
++	return readb_relaxed_poll_timeout(miphy_phy->base + MIPHY_STATUS_1,
++					  val, (val & mask) == mask, 1,
++					  5 * USEC_PER_SEC);
+ }
+ 
+ static int miphy_osc_is_ready(struct miphy28lp_phy *miphy_phy)
+ {
+ 	struct miphy28lp_dev *miphy_dev = miphy_phy->phydev;
+-	unsigned long finish = jiffies + 5 * HZ;
+ 	u32 val;
+ 
+ 	if (!miphy_phy->osc_rdy)
+@@ -839,17 +824,10 @@ static int miphy_osc_is_ready(struct miphy28lp_phy *miphy_phy)
+ 	if (!miphy_phy->syscfg_reg[SYSCFG_STATUS])
+ 		return -EINVAL;
+ 
+-	do {
+-		regmap_read(miphy_dev->regmap,
+-				miphy_phy->syscfg_reg[SYSCFG_STATUS], &val);
+-
+-		if ((val & MIPHY_OSC_RDY) != MIPHY_OSC_RDY)
+-			cpu_relax();
+-		else
+-			return 0;
+-	} while (!time_after_eq(jiffies, finish));
+-
+-	return -EBUSY;
++	return regmap_read_poll_timeout(miphy_dev->regmap,
++					miphy_phy->syscfg_reg[SYSCFG_STATUS],
++					val, val & MIPHY_OSC_RDY, 1,
++					5 * USEC_PER_SEC);
+ }
+ 
+ static int miphy28lp_get_resource_byname(struct device_node *child,
+diff --git a/drivers/pinctrl/pinctrl-at91.c b/drivers/pinctrl/pinctrl-at91.c
+index 735c501e7a06c..9fa68ca4a412d 100644
+--- a/drivers/pinctrl/pinctrl-at91.c
++++ b/drivers/pinctrl/pinctrl-at91.c
+@@ -18,6 +18,7 @@
+ #include <linux/pm.h>
+ #include <linux/seq_file.h>
+ #include <linux/slab.h>
++#include <linux/string_helpers.h>
+ 
+ /* Since we request GPIOs from ourself */
+ #include <linux/pinctrl/consumer.h>
+@@ -1371,6 +1372,7 @@ static int at91_pinctrl_probe_dt(struct platform_device *pdev,
+ 
+ static int at91_pinctrl_probe(struct platform_device *pdev)
+ {
++	struct device *dev = &pdev->dev;
+ 	struct at91_pinctrl *info;
+ 	struct pinctrl_pin_desc *pdesc;
+ 	int ret, i, j, k;
+@@ -1394,9 +1396,19 @@ static int at91_pinctrl_probe(struct platform_device *pdev)
+ 		return -ENOMEM;
+ 
+ 	for (i = 0, k = 0; i < gpio_banks; i++) {
++		char **names;
++
++		names = devm_kasprintf_strarray(dev, "pio", MAX_NB_GPIO_PER_BANK);
++		if (!names)
++			return -ENOMEM;
++
+ 		for (j = 0; j < MAX_NB_GPIO_PER_BANK; j++, k++) {
++			char *name = names[j];
++
++			strreplace(name, '-', i + 'A');
++
+ 			pdesc->number = k;
+-			pdesc->name = kasprintf(GFP_KERNEL, "pio%c%d", i + 'A', j);
++			pdesc->name = name;
+ 			pdesc++;
+ 		}
+ 	}
+@@ -1797,7 +1809,8 @@ static const struct of_device_id at91_gpio_of_match[] = {
+ 
+ static int at91_gpio_probe(struct platform_device *pdev)
+ {
+-	struct device_node *np = pdev->dev.of_node;
++	struct device *dev = &pdev->dev;
++	struct device_node *np = dev->of_node;
+ 	struct at91_gpio_chip *at91_chip = NULL;
+ 	struct gpio_chip *chip;
+ 	struct pinctrl_gpio_range *range;
+@@ -1866,16 +1879,14 @@ static int at91_gpio_probe(struct platform_device *pdev)
+ 			chip->ngpio = ngpio;
+ 	}
+ 
+-	names = devm_kcalloc(&pdev->dev, chip->ngpio, sizeof(char *),
+-			     GFP_KERNEL);
+-
++	names = devm_kasprintf_strarray(dev, "pio", chip->ngpio);
+ 	if (!names) {
+ 		ret = -ENOMEM;
+ 		goto clk_enable_err;
+ 	}
+ 
+ 	for (i = 0; i < chip->ngpio; i++)
+-		names[i] = devm_kasprintf(&pdev->dev, GFP_KERNEL, "pio%c%d", alias_idx + 'A', i);
++		strreplace(names[i], '-', alias_idx + 'A');
+ 
+ 	chip->names = (const char *const *)names;
+ 
+diff --git a/drivers/platform/x86/amd/pmc.c b/drivers/platform/x86/amd/pmc.c
+index 69f305496643f..73dedc9950144 100644
+--- a/drivers/platform/x86/amd/pmc.c
++++ b/drivers/platform/x86/amd/pmc.c
+@@ -265,6 +265,7 @@ static int amd_pmc_stb_debugfs_open_v2(struct inode *inode, struct file *filp)
+ 	dev->msg_port = 0;
+ 	if (ret) {
+ 		dev_err(dev->dev, "error: S2D_NUM_SAMPLES not supported : %d\n", ret);
++		kfree(buf);
+ 		return ret;
+ 	}
+ 
+diff --git a/drivers/platform/x86/intel/vsec.c b/drivers/platform/x86/intel/vsec.c
+index 2311c16cb975d..91be391bba3f7 100644
+--- a/drivers/platform/x86/intel/vsec.c
++++ b/drivers/platform/x86/intel/vsec.c
+@@ -67,14 +67,6 @@ enum intel_vsec_id {
+ 	VSEC_ID_TPMI		= 66,
+ };
+ 
+-static enum intel_vsec_id intel_vsec_allow_list[] = {
+-	VSEC_ID_TELEMETRY,
+-	VSEC_ID_WATCHER,
+-	VSEC_ID_CRASHLOG,
+-	VSEC_ID_SDSI,
+-	VSEC_ID_TPMI,
+-};
+-
+ static const char *intel_vsec_name(enum intel_vsec_id id)
+ {
+ 	switch (id) {
+@@ -98,26 +90,19 @@ static const char *intel_vsec_name(enum intel_vsec_id id)
+ 	}
+ }
+ 
+-static bool intel_vsec_allowed(u16 id)
+-{
+-	int i;
+-
+-	for (i = 0; i < ARRAY_SIZE(intel_vsec_allow_list); i++)
+-		if (intel_vsec_allow_list[i] == id)
+-			return true;
+-
+-	return false;
+-}
+-
+-static bool intel_vsec_disabled(u16 id, unsigned long quirks)
++static bool intel_vsec_supported(u16 id, unsigned long caps)
+ {
+ 	switch (id) {
++	case VSEC_ID_TELEMETRY:
++		return !!(caps & VSEC_CAP_TELEMETRY);
+ 	case VSEC_ID_WATCHER:
+-		return !!(quirks & VSEC_QUIRK_NO_WATCHER);
+-
++		return !!(caps & VSEC_CAP_WATCHER);
+ 	case VSEC_ID_CRASHLOG:
+-		return !!(quirks & VSEC_QUIRK_NO_CRASHLOG);
+-
++		return !!(caps & VSEC_CAP_CRASHLOG);
++	case VSEC_ID_SDSI:
++		return !!(caps & VSEC_CAP_SDSI);
++	case VSEC_ID_TPMI:
++		return !!(caps & VSEC_CAP_TPMI);
+ 	default:
+ 		return false;
+ 	}
+@@ -206,7 +191,7 @@ static int intel_vsec_add_dev(struct pci_dev *pdev, struct intel_vsec_header *he
+ 	unsigned long quirks = info->quirks;
+ 	int i;
+ 
+-	if (!intel_vsec_allowed(header->id) || intel_vsec_disabled(header->id, quirks))
++	if (!intel_vsec_supported(header->id, info->caps))
+ 		return -EINVAL;
+ 
+ 	if (!header->num_entries) {
+@@ -261,14 +246,14 @@ static int intel_vsec_add_dev(struct pci_dev *pdev, struct intel_vsec_header *he
+ static bool intel_vsec_walk_header(struct pci_dev *pdev,
+ 				   struct intel_vsec_platform_info *info)
+ {
+-	struct intel_vsec_header **header = info->capabilities;
++	struct intel_vsec_header **header = info->headers;
+ 	bool have_devices = false;
+ 	int ret;
+ 
+ 	for ( ; *header; header++) {
+ 		ret = intel_vsec_add_dev(pdev, *header, info);
+ 		if (ret)
+-			dev_info(&pdev->dev, "Could not add device for DVSEC id %d\n",
++			dev_info(&pdev->dev, "Could not add device for VSEC id %d\n",
+ 				 (*header)->id);
+ 		else
+ 			have_devices = true;
+@@ -403,14 +388,8 @@ static int intel_vsec_pci_probe(struct pci_dev *pdev, const struct pci_device_id
+ 	return 0;
+ }
+ 
+-/* TGL info */
+-static const struct intel_vsec_platform_info tgl_info = {
+-	.quirks = VSEC_QUIRK_NO_WATCHER | VSEC_QUIRK_NO_CRASHLOG |
+-		  VSEC_QUIRK_TABLE_SHIFT | VSEC_QUIRK_EARLY_HW,
+-};
+-
+ /* DG1 info */
+-static struct intel_vsec_header dg1_telemetry = {
++static struct intel_vsec_header dg1_header = {
+ 	.length = 0x10,
+ 	.id = 2,
+ 	.num_entries = 1,
+@@ -419,19 +398,31 @@ static struct intel_vsec_header dg1_telemetry = {
+ 	.offset = 0x466000,
+ };
+ 
+-static struct intel_vsec_header *dg1_capabilities[] = {
+-	&dg1_telemetry,
++static struct intel_vsec_header *dg1_headers[] = {
++	&dg1_header,
+ 	NULL
+ };
+ 
+ static const struct intel_vsec_platform_info dg1_info = {
+-	.capabilities = dg1_capabilities,
++	.caps = VSEC_CAP_TELEMETRY,
++	.headers = dg1_headers,
+ 	.quirks = VSEC_QUIRK_NO_DVSEC | VSEC_QUIRK_EARLY_HW,
+ };
+ 
+ /* MTL info */
+ static const struct intel_vsec_platform_info mtl_info = {
+-	.quirks = VSEC_QUIRK_NO_WATCHER | VSEC_QUIRK_NO_CRASHLOG,
++	.caps = VSEC_CAP_TELEMETRY,
++};
++
++/* OOBMSM info */
++static const struct intel_vsec_platform_info oobmsm_info = {
++	.caps = VSEC_CAP_TELEMETRY | VSEC_CAP_SDSI | VSEC_CAP_TPMI,
++};
++
++/* TGL info */
++static const struct intel_vsec_platform_info tgl_info = {
++	.caps = VSEC_CAP_TELEMETRY,
++	.quirks = VSEC_QUIRK_TABLE_SHIFT | VSEC_QUIRK_EARLY_HW,
+ };
+ 
+ #define PCI_DEVICE_ID_INTEL_VSEC_ADL		0x467d
+@@ -446,7 +437,7 @@ static const struct pci_device_id intel_vsec_pci_ids[] = {
+ 	{ PCI_DEVICE_DATA(INTEL, VSEC_DG1, &dg1_info) },
+ 	{ PCI_DEVICE_DATA(INTEL, VSEC_MTL_M, &mtl_info) },
+ 	{ PCI_DEVICE_DATA(INTEL, VSEC_MTL_S, &mtl_info) },
+-	{ PCI_DEVICE_DATA(INTEL, VSEC_OOBMSM, &(struct intel_vsec_platform_info) {}) },
++	{ PCI_DEVICE_DATA(INTEL, VSEC_OOBMSM, &oobmsm_info) },
+ 	{ PCI_DEVICE_DATA(INTEL, VSEC_RPL, &tgl_info) },
+ 	{ PCI_DEVICE_DATA(INTEL, VSEC_TGL, &tgl_info) },
+ 	{ }
+diff --git a/drivers/platform/x86/intel/vsec.h b/drivers/platform/x86/intel/vsec.h
+index ae8fe92c5595b..0fd042c171ba0 100644
+--- a/drivers/platform/x86/intel/vsec.h
++++ b/drivers/platform/x86/intel/vsec.h
+@@ -5,6 +5,12 @@
+ #include <linux/auxiliary_bus.h>
+ #include <linux/bits.h>
+ 
++#define VSEC_CAP_TELEMETRY	BIT(0)
++#define VSEC_CAP_WATCHER	BIT(1)
++#define VSEC_CAP_CRASHLOG	BIT(2)
++#define VSEC_CAP_SDSI		BIT(3)
++#define VSEC_CAP_TPMI		BIT(4)
++
+ struct pci_dev;
+ struct resource;
+ 
+@@ -27,7 +33,8 @@ enum intel_vsec_quirks {
+ 
+ /* Platform specific data */
+ struct intel_vsec_platform_info {
+-	struct intel_vsec_header **capabilities;
++	struct intel_vsec_header **headers;
++	unsigned long caps;
+ 	unsigned long quirks;
+ };
+ 
+diff --git a/drivers/platform/x86/x86-android-tablets.c b/drivers/platform/x86/x86-android-tablets.c
+index 111b007656fc4..8405e1c58d520 100644
+--- a/drivers/platform/x86/x86-android-tablets.c
++++ b/drivers/platform/x86/x86-android-tablets.c
+@@ -265,6 +265,88 @@ static struct gpiod_lookup_table int3496_gpo2_pin22_gpios = {
+ 	},
+ };
+ 
++static struct gpiod_lookup_table int3496_reference_gpios = {
++	.dev_id = "intel-int3496",
++	.table = {
++		GPIO_LOOKUP("INT33FC:01", 15, "vbus", GPIO_ACTIVE_HIGH),
++		GPIO_LOOKUP("INT33FC:02", 1, "mux", GPIO_ACTIVE_HIGH),
++		GPIO_LOOKUP("INT33FC:02", 18, "id", GPIO_ACTIVE_HIGH),
++		{ }
++	},
++};
++
++/* Acer Iconia One 7 B1-750 has an Android factory img with everything hardcoded */
++static const char * const acer_b1_750_mount_matrix[] = {
++	"-1", "0", "0",
++	"0", "1", "0",
++	"0", "0", "1"
++};
++
++static const struct property_entry acer_b1_750_bma250e_props[] = {
++	PROPERTY_ENTRY_STRING_ARRAY("mount-matrix", acer_b1_750_mount_matrix),
++	{ }
++};
++
++static const struct software_node acer_b1_750_bma250e_node = {
++	.properties = acer_b1_750_bma250e_props,
++};
++
++static const struct x86_i2c_client_info acer_b1_750_i2c_clients[] __initconst = {
++	{
++		/* Novatek NVT-ts touchscreen */
++		.board_info = {
++			.type = "NVT-ts",
++			.addr = 0x34,
++			.dev_name = "NVT-ts",
++		},
++		.adapter_path = "\\_SB_.I2C4",
++		.irq_data = {
++			.type = X86_ACPI_IRQ_TYPE_GPIOINT,
++			.chip = "INT33FC:02",
++			.index = 3,
++			.trigger = ACPI_EDGE_SENSITIVE,
++			.polarity = ACPI_ACTIVE_LOW,
++		},
++	}, {
++		/* BMA250E accelerometer */
++		.board_info = {
++			.type = "bma250e",
++			.addr = 0x18,
++			.swnode = &acer_b1_750_bma250e_node,
++		},
++		.adapter_path = "\\_SB_.I2C3",
++		.irq_data = {
++			.type = X86_ACPI_IRQ_TYPE_GPIOINT,
++			.chip = "INT33FC:02",
++			.index = 25,
++			.trigger = ACPI_LEVEL_SENSITIVE,
++			.polarity = ACPI_ACTIVE_HIGH,
++		},
++	},
++};
++
++static struct gpiod_lookup_table acer_b1_750_goodix_gpios = {
++	.dev_id = "i2c-NVT-ts",
++	.table = {
++		GPIO_LOOKUP("INT33FC:01", 26, "reset", GPIO_ACTIVE_LOW),
++		{ }
++	},
++};
++
++static struct gpiod_lookup_table * const acer_b1_750_gpios[] = {
++	&acer_b1_750_goodix_gpios,
++	&int3496_reference_gpios,
++	NULL
++};
++
++static const struct x86_dev_info acer_b1_750_info __initconst = {
++	.i2c_client_info = acer_b1_750_i2c_clients,
++	.i2c_client_count = ARRAY_SIZE(acer_b1_750_i2c_clients),
++	.pdev_info = int3496_pdevs,
++	.pdev_count = ARRAY_SIZE(int3496_pdevs),
++	.gpiod_lookup_tables = acer_b1_750_gpios,
++};
++
+ /*
+  * Advantech MICA-071
+  * This is a standard Windows tablet, but it has an extra "quick launch" button
+@@ -1298,17 +1380,8 @@ static const struct x86_i2c_client_info nextbook_ares8_i2c_clients[] __initconst
+ 	},
+ };
+ 
+-static struct gpiod_lookup_table nextbook_ares8_int3496_gpios = {
+-	.dev_id = "intel-int3496",
+-	.table = {
+-		GPIO_LOOKUP("INT33FC:02", 1, "mux", GPIO_ACTIVE_HIGH),
+-		GPIO_LOOKUP("INT33FC:02", 18, "id", GPIO_ACTIVE_HIGH),
+-		{ }
+-	},
+-};
+-
+ static struct gpiod_lookup_table * const nextbook_ares8_gpios[] = {
+-	&nextbook_ares8_int3496_gpios,
++	&int3496_reference_gpios,
+ 	NULL
+ };
+ 
+@@ -1435,6 +1508,14 @@ static const struct x86_dev_info xiaomi_mipad2_info __initconst = {
+ };
+ 
+ static const struct dmi_system_id x86_android_tablet_ids[] __initconst = {
++	{
++		/* Acer Iconia One 7 B1-750 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Insyde"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "VESPA2"),
++		},
++		.driver_data = (void *)&acer_b1_750_info,
++	},
+ 	{
+ 		/* Advantech MICA-071 */
+ 		.matches = {
+diff --git a/drivers/power/supply/axp288_charger.c b/drivers/power/supply/axp288_charger.c
+index 15219ed43ce95..b5903193e2f96 100644
+--- a/drivers/power/supply/axp288_charger.c
++++ b/drivers/power/supply/axp288_charger.c
+@@ -836,6 +836,7 @@ static int axp288_charger_probe(struct platform_device *pdev)
+ 	struct device *dev = &pdev->dev;
+ 	struct axp20x_dev *axp20x = dev_get_drvdata(pdev->dev.parent);
+ 	struct power_supply_config charger_cfg = {};
++	const char *extcon_name = NULL;
+ 	unsigned int val;
+ 
+ 	/*
+@@ -872,8 +873,18 @@ static int axp288_charger_probe(struct platform_device *pdev)
+ 		return PTR_ERR(info->cable.edev);
+ 	}
+ 
+-	if (acpi_dev_present(USB_HOST_EXTCON_HID, NULL, -1)) {
+-		info->otg.cable = extcon_get_extcon_dev(USB_HOST_EXTCON_NAME);
++	/*
++	 * On devices with broken ACPI GPIO event handlers there also is no ACPI
++	 * "INT3496" (USB_HOST_EXTCON_HID) device. x86-android-tablets.ko
++	 * instantiates an "intel-int3496" extcon on these devs as a workaround.
++	 */
++	if (acpi_quirk_skip_gpio_event_handlers())
++		extcon_name = "intel-int3496";
++	else if (acpi_dev_present(USB_HOST_EXTCON_HID, NULL, -1))
++		extcon_name = USB_HOST_EXTCON_NAME;
++
++	if (extcon_name) {
++		info->otg.cable = extcon_get_extcon_dev(extcon_name);
+ 		if (IS_ERR(info->otg.cable)) {
+ 			dev_err_probe(dev, PTR_ERR(info->otg.cable),
+ 				      "extcon_get_extcon_dev(%s) failed\n",
+diff --git a/drivers/remoteproc/imx_dsp_rproc.c b/drivers/remoteproc/imx_dsp_rproc.c
+index 506ec9565716b..dcd07a6a5e945 100644
+--- a/drivers/remoteproc/imx_dsp_rproc.c
++++ b/drivers/remoteproc/imx_dsp_rproc.c
+@@ -721,6 +721,191 @@ static void imx_dsp_rproc_kick(struct rproc *rproc, int vqid)
+ 		dev_err(dev, "%s: failed (%d, err:%d)\n", __func__, vqid, err);
+ }
+ 
++/*
++ * Custom memory copy implementation for i.MX DSP Cores
++ *
++ * The IRAM is part of the HiFi DSP.
++ * According to hw specs only 32-bits writes are allowed.
++ */
++static int imx_dsp_rproc_memcpy(void *dst, const void *src, size_t size)
++{
++	void __iomem *dest = (void __iomem *)dst;
++	const u8 *src_byte = src;
++	const u32 *source = src;
++	u32 affected_mask;
++	int i, q, r;
++	u32 tmp;
++
++	/* destination must be 32bit aligned */
++	if (!IS_ALIGNED((uintptr_t)dest, 4))
++		return -EINVAL;
++
++	q = size / 4;
++	r = size % 4;
++
++	/* copy data in units of 32 bits at a time */
++	for (i = 0; i < q; i++)
++		writel(source[i], dest + i * 4);
++
++	if (r) {
++		affected_mask = GENMASK(8 * r, 0);
++
++		/*
++		 * first read the 32bit data of dest, then change affected
++		 * bytes, and write back to dest.
++		 * For unaffected bytes, it should not be changed
++		 */
++		tmp = readl(dest + q * 4);
++		tmp &= ~affected_mask;
++
++		/* avoid reading after end of source */
++		for (i = 0; i < r; i++)
++			tmp |= (src_byte[q * 4 + i] << (8 * i));
++
++		writel(tmp, dest + q * 4);
++	}
++
++	return 0;
++}
++
++/*
++ * Custom memset implementation for i.MX DSP Cores
++ *
++ * The IRAM is part of the HiFi DSP.
++ * According to hw specs only 32-bits writes are allowed.
++ */
++static int imx_dsp_rproc_memset(void *addr, u8 value, size_t size)
++{
++	void __iomem *tmp_dst = (void __iomem *)addr;
++	u32 tmp_val = value;
++	u32 affected_mask;
++	int q, r;
++	u32 tmp;
++
++	/* destination must be 32bit aligned */
++	if (!IS_ALIGNED((uintptr_t)addr, 4))
++		return -EINVAL;
++
++	tmp_val |= tmp_val << 8;
++	tmp_val |= tmp_val << 16;
++
++	q = size / 4;
++	r = size % 4;
++
++	while (q--)
++		writel(tmp_val, tmp_dst++);
++
++	if (r) {
++		affected_mask = GENMASK(8 * r, 0);
++
++		/*
++		 * first read the 32bit data of addr, then change affected
++		 * bytes, and write back to addr.
++		 * For unaffected bytes, it should not be changed
++		 */
++		tmp = readl(tmp_dst);
++		tmp &= ~affected_mask;
++
++		tmp |= (tmp_val & affected_mask);
++		writel(tmp, tmp_dst);
++	}
++
++	return 0;
++}
++
++/*
++ * imx_dsp_rproc_elf_load_segments() - load firmware segments to memory
++ * @rproc: remote processor which will be booted using these fw segments
++ * @fw: the ELF firmware image
++ *
++ * This function loads the firmware segments to memory, where the remote
++ * processor expects them.
++ *
++ * Return: 0 on success and an appropriate error code otherwise
++ */
++static int imx_dsp_rproc_elf_load_segments(struct rproc *rproc, const struct firmware *fw)
++{
++	struct device *dev = &rproc->dev;
++	const void *ehdr, *phdr;
++	int i, ret = 0;
++	u16 phnum;
++	const u8 *elf_data = fw->data;
++	u8 class = fw_elf_get_class(fw);
++	u32 elf_phdr_get_size = elf_size_of_phdr(class);
++
++	ehdr = elf_data;
++	phnum = elf_hdr_get_e_phnum(class, ehdr);
++	phdr = elf_data + elf_hdr_get_e_phoff(class, ehdr);
++
++	/* go through the available ELF segments */
++	for (i = 0; i < phnum; i++, phdr += elf_phdr_get_size) {
++		u64 da = elf_phdr_get_p_paddr(class, phdr);
++		u64 memsz = elf_phdr_get_p_memsz(class, phdr);
++		u64 filesz = elf_phdr_get_p_filesz(class, phdr);
++		u64 offset = elf_phdr_get_p_offset(class, phdr);
++		u32 type = elf_phdr_get_p_type(class, phdr);
++		void *ptr;
++
++		if (type != PT_LOAD || !memsz)
++			continue;
++
++		dev_dbg(dev, "phdr: type %d da 0x%llx memsz 0x%llx filesz 0x%llx\n",
++			type, da, memsz, filesz);
++
++		if (filesz > memsz) {
++			dev_err(dev, "bad phdr filesz 0x%llx memsz 0x%llx\n",
++				filesz, memsz);
++			ret = -EINVAL;
++			break;
++		}
++
++		if (offset + filesz > fw->size) {
++			dev_err(dev, "truncated fw: need 0x%llx avail 0x%zx\n",
++				offset + filesz, fw->size);
++			ret = -EINVAL;
++			break;
++		}
++
++		if (!rproc_u64_fit_in_size_t(memsz)) {
++			dev_err(dev, "size (%llx) does not fit in size_t type\n",
++				memsz);
++			ret = -EOVERFLOW;
++			break;
++		}
++
++		/* grab the kernel address for this device address */
++		ptr = rproc_da_to_va(rproc, da, memsz, NULL);
++		if (!ptr) {
++			dev_err(dev, "bad phdr da 0x%llx mem 0x%llx\n", da,
++				memsz);
++			ret = -EINVAL;
++			break;
++		}
++
++		/* put the segment where the remote processor expects it */
++		if (filesz) {
++			ret = imx_dsp_rproc_memcpy(ptr, elf_data + offset, filesz);
++			if (ret) {
++				dev_err(dev, "memory copy failed for da 0x%llx memsz 0x%llx\n",
++					da, memsz);
++				break;
++			}
++		}
++
++		/* zero out remaining memory for this segment */
++		if (memsz > filesz) {
++			ret = imx_dsp_rproc_memset(ptr + filesz, 0, memsz - filesz);
++			if (ret) {
++				dev_err(dev, "memset failed for da 0x%llx memsz 0x%llx\n",
++					da, memsz);
++				break;
++			}
++		}
++	}
++
++	return ret;
++}
++
+ static int imx_dsp_rproc_parse_fw(struct rproc *rproc, const struct firmware *fw)
+ {
+ 	if (rproc_elf_load_rsc_table(rproc, fw))
+@@ -735,7 +920,7 @@ static const struct rproc_ops imx_dsp_rproc_ops = {
+ 	.start		= imx_dsp_rproc_start,
+ 	.stop		= imx_dsp_rproc_stop,
+ 	.kick		= imx_dsp_rproc_kick,
+-	.load		= rproc_elf_load_segments,
++	.load		= imx_dsp_rproc_elf_load_segments,
+ 	.parse_fw	= imx_dsp_rproc_parse_fw,
+ 	.sanity_check	= rproc_elf_sanity_check,
+ 	.get_boot_addr	= rproc_elf_get_boot_addr,
+diff --git a/drivers/remoteproc/stm32_rproc.c b/drivers/remoteproc/stm32_rproc.c
+index 23c1690b8d73f..8746cbb1f168d 100644
+--- a/drivers/remoteproc/stm32_rproc.c
++++ b/drivers/remoteproc/stm32_rproc.c
+@@ -291,8 +291,16 @@ static void stm32_rproc_mb_vq_work(struct work_struct *work)
+ 	struct stm32_mbox *mb = container_of(work, struct stm32_mbox, vq_work);
+ 	struct rproc *rproc = dev_get_drvdata(mb->client.dev);
+ 
++	mutex_lock(&rproc->lock);
++
++	if (rproc->state != RPROC_RUNNING)
++		goto unlock_mutex;
++
+ 	if (rproc_vq_interrupt(rproc, mb->vq_id) == IRQ_NONE)
+ 		dev_dbg(&rproc->dev, "no message found in vq%d\n", mb->vq_id);
++
++unlock_mutex:
++	mutex_unlock(&rproc->lock);
+ }
+ 
+ static void stm32_rproc_mb_callback(struct mbox_client *cl, void *data)
+diff --git a/drivers/s390/block/dasd_eckd.c b/drivers/s390/block/dasd_eckd.c
+index 1a69f97e88fbb..1baf548cd71c5 100644
+--- a/drivers/s390/block/dasd_eckd.c
++++ b/drivers/s390/block/dasd_eckd.c
+@@ -127,6 +127,8 @@ static int prepare_itcw(struct itcw *, unsigned int, unsigned int, int,
+ 			struct dasd_device *, struct dasd_device *,
+ 			unsigned int, int, unsigned int, unsigned int,
+ 			unsigned int, unsigned int);
++static int dasd_eckd_query_pprc_status(struct dasd_device *,
++				       struct dasd_pprc_data_sc4 *);
+ 
+ /* initial attempt at a probe function. this can be simplified once
+  * the other detection code is gone */
+@@ -3732,6 +3734,26 @@ static int count_exts(unsigned int from, unsigned int to, int trks_per_ext)
+ 	return count;
+ }
+ 
++static int dasd_in_copy_relation(struct dasd_device *device)
++{
++	struct dasd_pprc_data_sc4 *temp;
++	int rc;
++
++	if (!dasd_eckd_pprc_enabled(device))
++		return 0;
++
++	temp = kzalloc(sizeof(*temp), GFP_KERNEL);
++	if (!temp)
++		return -ENOMEM;
++
++	rc = dasd_eckd_query_pprc_status(device, temp);
++	if (!rc)
++		rc = temp->dev_info[0].state;
++
++	kfree(temp);
++	return rc;
++}
++
+ /*
+  * Release allocated space for a given range or an entire volume.
+  */
+@@ -3748,6 +3770,7 @@ dasd_eckd_dso_ras(struct dasd_device *device, struct dasd_block *block,
+ 	int cur_to_trk, cur_from_trk;
+ 	struct dasd_ccw_req *cqr;
+ 	u32 beg_cyl, end_cyl;
++	int copy_relation;
+ 	struct ccw1 *ccw;
+ 	int trks_per_ext;
+ 	size_t ras_size;
+@@ -3759,6 +3782,10 @@ dasd_eckd_dso_ras(struct dasd_device *device, struct dasd_block *block,
+ 	if (dasd_eckd_ras_sanity_checks(device, first_trk, last_trk))
+ 		return ERR_PTR(-EINVAL);
+ 
++	copy_relation = dasd_in_copy_relation(device);
++	if (copy_relation < 0)
++		return ERR_PTR(copy_relation);
++
+ 	rq = req ? blk_mq_rq_to_pdu(req) : NULL;
+ 
+ 	features = &private->features;
+@@ -3787,9 +3814,11 @@ dasd_eckd_dso_ras(struct dasd_device *device, struct dasd_block *block,
+ 	/*
+ 	 * This bit guarantees initialisation of tracks within an extent that is
+ 	 * not fully specified, but is only supported with a certain feature
+-	 * subset.
++	 * subset and for devices not in a copy relation.
+ 	 */
+-	ras_data->op_flags.guarantee_init = !!(features->feature[56] & 0x01);
++	if (features->feature[56] & 0x01 && !copy_relation)
++		ras_data->op_flags.guarantee_init = 1;
++
+ 	ras_data->lss = private->conf.ned->ID;
+ 	ras_data->dev_addr = private->conf.ned->unit_addr;
+ 	ras_data->nr_exts = nr_exts;
+diff --git a/drivers/s390/cio/device.c b/drivers/s390/cio/device.c
+index 8eb089b99cde9..d5c43e9b51289 100644
+--- a/drivers/s390/cio/device.c
++++ b/drivers/s390/cio/device.c
+@@ -1111,6 +1111,8 @@ static void io_subchannel_verify(struct subchannel *sch)
+ 	cdev = sch_get_cdev(sch);
+ 	if (cdev)
+ 		dev_fsm_event(cdev, DEV_EVENT_VERIFY);
++	else
++		css_schedule_eval(sch->schid);
+ }
+ 
+ static void io_subchannel_terminate_path(struct subchannel *sch, u8 mask)
+diff --git a/drivers/s390/cio/qdio.h b/drivers/s390/cio/qdio.h
+index 5ea6249d81803..641f0dbb65a90 100644
+--- a/drivers/s390/cio/qdio.h
++++ b/drivers/s390/cio/qdio.h
+@@ -95,7 +95,7 @@ static inline int do_sqbs(u64 token, unsigned char state, int queue,
+ 		"	lgr	1,%[token]\n"
+ 		"	.insn	rsy,0xeb000000008a,%[qs],%[ccq],0(%[state])"
+ 		: [ccq] "+&d" (_ccq), [qs] "+&d" (_queuestart)
+-		: [state] "d" ((unsigned long)state), [token] "d" (token)
++		: [state] "a" ((unsigned long)state), [token] "d" (token)
+ 		: "memory", "cc", "1");
+ 	*count = _ccq & 0xff;
+ 	*start = _queuestart & 0xff;
+diff --git a/drivers/scsi/hisi_sas/hisi_sas.h b/drivers/scsi/hisi_sas/hisi_sas.h
+index 6f8a52a1b8087..423af1dc36487 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas.h
++++ b/drivers/scsi/hisi_sas/hisi_sas.h
+@@ -653,7 +653,8 @@ extern void hisi_sas_phy_down(struct hisi_hba *hisi_hba, int phy_no, int rdy,
+ extern void hisi_sas_phy_bcast(struct hisi_sas_phy *phy);
+ extern void hisi_sas_slot_task_free(struct hisi_hba *hisi_hba,
+ 				    struct sas_task *task,
+-				    struct hisi_sas_slot *slot);
++				    struct hisi_sas_slot *slot,
++				    bool need_lock);
+ extern void hisi_sas_init_mem(struct hisi_hba *hisi_hba);
+ extern void hisi_sas_rst_work_handler(struct work_struct *work);
+ extern void hisi_sas_sync_rst_work_handler(struct work_struct *work);
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_main.c b/drivers/scsi/hisi_sas/hisi_sas_main.c
+index 8c038ccf1c095..2093c1e828177 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_main.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_main.c
+@@ -205,7 +205,7 @@ static int hisi_sas_slot_index_alloc(struct hisi_hba *hisi_hba,
+ }
+ 
+ void hisi_sas_slot_task_free(struct hisi_hba *hisi_hba, struct sas_task *task,
+-			     struct hisi_sas_slot *slot)
++			     struct hisi_sas_slot *slot, bool need_lock)
+ {
+ 	int device_id = slot->device_id;
+ 	struct hisi_sas_device *sas_dev = &hisi_hba->devices[device_id];
+@@ -239,9 +239,13 @@ void hisi_sas_slot_task_free(struct hisi_hba *hisi_hba, struct sas_task *task,
+ 		}
+ 	}
+ 
+-	spin_lock(&sas_dev->lock);
+-	list_del_init(&slot->entry);
+-	spin_unlock(&sas_dev->lock);
++	if (need_lock) {
++		spin_lock(&sas_dev->lock);
++		list_del_init(&slot->entry);
++		spin_unlock(&sas_dev->lock);
++	} else {
++		list_del_init(&slot->entry);
++	}
+ 
+ 	memset(slot, 0, offsetof(struct hisi_sas_slot, buf));
+ 
+@@ -1021,7 +1025,7 @@ static void hisi_sas_port_notify_formed(struct asd_sas_phy *sas_phy)
+ }
+ 
+ static void hisi_sas_do_release_task(struct hisi_hba *hisi_hba, struct sas_task *task,
+-				     struct hisi_sas_slot *slot)
++				     struct hisi_sas_slot *slot, bool need_lock)
+ {
+ 	if (task) {
+ 		unsigned long flags;
+@@ -1038,7 +1042,7 @@ static void hisi_sas_do_release_task(struct hisi_hba *hisi_hba, struct sas_task
+ 		spin_unlock_irqrestore(&task->task_state_lock, flags);
+ 	}
+ 
+-	hisi_sas_slot_task_free(hisi_hba, task, slot);
++	hisi_sas_slot_task_free(hisi_hba, task, slot, need_lock);
+ }
+ 
+ static void hisi_sas_release_task(struct hisi_hba *hisi_hba,
+@@ -1047,8 +1051,11 @@ static void hisi_sas_release_task(struct hisi_hba *hisi_hba,
+ 	struct hisi_sas_slot *slot, *slot2;
+ 	struct hisi_sas_device *sas_dev = device->lldd_dev;
+ 
++	spin_lock(&sas_dev->lock);
+ 	list_for_each_entry_safe(slot, slot2, &sas_dev->list, entry)
+-		hisi_sas_do_release_task(hisi_hba, slot->task, slot);
++		hisi_sas_do_release_task(hisi_hba, slot->task, slot, false);
++
++	spin_unlock(&sas_dev->lock);
+ }
+ 
+ void hisi_sas_release_tasks(struct hisi_hba *hisi_hba)
+@@ -1574,7 +1581,7 @@ static int hisi_sas_abort_task(struct sas_task *task)
+ 		 */
+ 		if (rc == TMF_RESP_FUNC_COMPLETE && rc2 != TMF_RESP_FUNC_SUCC) {
+ 			if (task->lldd_task)
+-				hisi_sas_do_release_task(hisi_hba, task, slot);
++				hisi_sas_do_release_task(hisi_hba, task, slot, true);
+ 		}
+ 	} else if (task->task_proto & SAS_PROTOCOL_SATA ||
+ 		task->task_proto & SAS_PROTOCOL_STP) {
+@@ -1594,7 +1601,7 @@ static int hisi_sas_abort_task(struct sas_task *task)
+ 			 */
+ 			if ((sas_dev->dev_status == HISI_SAS_DEV_NCQ_ERR) &&
+ 			    qc && qc->scsicmd) {
+-				hisi_sas_do_release_task(hisi_hba, task, slot);
++				hisi_sas_do_release_task(hisi_hba, task, slot, true);
+ 				rc = TMF_RESP_FUNC_COMPLETE;
+ 			} else {
+ 				rc = hisi_sas_softreset_ata_disk(device);
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c
+index 70c24377c6a19..76176b1fc035d 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c
+@@ -1310,7 +1310,7 @@ static void slot_complete_v1_hw(struct hisi_hba *hisi_hba,
+ 	}
+ 
+ out:
+-	hisi_sas_slot_task_free(hisi_hba, task, slot);
++	hisi_sas_slot_task_free(hisi_hba, task, slot, true);
+ 
+ 	if (task->task_done)
+ 		task->task_done(task);
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
+index 02575d81afca2..746e4d77de04a 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
+@@ -2466,7 +2466,7 @@ out:
+ 	}
+ 	task->task_state_flags |= SAS_TASK_STATE_DONE;
+ 	spin_unlock_irqrestore(&task->task_state_lock, flags);
+-	hisi_sas_slot_task_free(hisi_hba, task, slot);
++	hisi_sas_slot_task_free(hisi_hba, task, slot, true);
+ 
+ 	if (!is_internal && (task->task_proto != SAS_PROTOCOL_SMP)) {
+ 		spin_lock_irqsave(&device->done_lock, flags);
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+index 9afc23e3a80fc..71820a1170b4f 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+@@ -883,6 +883,7 @@ static void dereg_device_v3_hw(struct hisi_hba *hisi_hba,
+ 
+ 	cfg_abt_set_query_iptt = hisi_sas_read32(hisi_hba,
+ 		CFG_ABT_SET_QUERY_IPTT);
++	spin_lock(&sas_dev->lock);
+ 	list_for_each_entry_safe(slot, slot2, &sas_dev->list, entry) {
+ 		cfg_abt_set_query_iptt &= ~CFG_SET_ABORTED_IPTT_MSK;
+ 		cfg_abt_set_query_iptt |= (1 << CFG_SET_ABORTED_EN_OFF) |
+@@ -890,6 +891,7 @@ static void dereg_device_v3_hw(struct hisi_hba *hisi_hba,
+ 		hisi_sas_write32(hisi_hba, CFG_ABT_SET_QUERY_IPTT,
+ 			cfg_abt_set_query_iptt);
+ 	}
++	spin_unlock(&sas_dev->lock);
+ 	cfg_abt_set_query_iptt &= ~(1 << CFG_SET_ABORTED_EN_OFF);
+ 	hisi_sas_write32(hisi_hba, CFG_ABT_SET_QUERY_IPTT,
+ 		cfg_abt_set_query_iptt);
+@@ -2378,7 +2380,7 @@ out:
+ 	}
+ 	task->task_state_flags |= SAS_TASK_STATE_DONE;
+ 	spin_unlock_irqrestore(&task->task_state_lock, flags);
+-	hisi_sas_slot_task_free(hisi_hba, task, slot);
++	hisi_sas_slot_task_free(hisi_hba, task, slot, true);
+ 
+ 	if (!is_internal && (task->task_proto != SAS_PROTOCOL_SMP)) {
+ 		spin_lock_irqsave(&device->done_lock, flags);
+diff --git a/drivers/scsi/lpfc/lpfc_debugfs.c b/drivers/scsi/lpfc/lpfc_debugfs.c
+index f5252e45a48a2..3e365e5e194a2 100644
+--- a/drivers/scsi/lpfc/lpfc_debugfs.c
++++ b/drivers/scsi/lpfc/lpfc_debugfs.c
+@@ -2157,10 +2157,13 @@ lpfc_debugfs_lockstat_write(struct file *file, const char __user *buf,
+ 	char mybuf[64];
+ 	char *pbuf;
+ 	int i;
++	size_t bsize;
+ 
+ 	memset(mybuf, 0, sizeof(mybuf));
+ 
+-	if (copy_from_user(mybuf, buf, nbytes))
++	bsize = min(nbytes, (sizeof(mybuf) - 1));
++
++	if (copy_from_user(mybuf, buf, bsize))
+ 		return -EFAULT;
+ 	pbuf = &mybuf[0];
+ 
+@@ -2181,7 +2184,7 @@ lpfc_debugfs_lockstat_write(struct file *file, const char __user *buf,
+ 			qp->lock_conflict.wq_access = 0;
+ 		}
+ 	}
+-	return nbytes;
++	return bsize;
+ }
+ #endif
+ 
+diff --git a/drivers/scsi/lpfc/lpfc_els.c b/drivers/scsi/lpfc/lpfc_els.c
+index 35b252f1ef731..62d2ca688cd14 100644
+--- a/drivers/scsi/lpfc/lpfc_els.c
++++ b/drivers/scsi/lpfc/lpfc_els.c
+@@ -5455,18 +5455,20 @@ out:
+ 	 * these conditions and release the RPI.
+ 	 */
+ 	if (phba->sli_rev == LPFC_SLI_REV4 &&
+-	    (vport && vport->port_type == LPFC_NPIV_PORT) &&
+-	    !(ndlp->fc4_xpt_flags & SCSI_XPT_REGD) &&
+-	    ndlp->nlp_flag & NLP_RELEASE_RPI) {
+-		if (ndlp->nlp_state !=  NLP_STE_PLOGI_ISSUE &&
+-		    ndlp->nlp_state != NLP_STE_REG_LOGIN_ISSUE) {
+-			lpfc_sli4_free_rpi(phba, ndlp->nlp_rpi);
+-			spin_lock_irq(&ndlp->lock);
+-			ndlp->nlp_rpi = LPFC_RPI_ALLOC_ERROR;
+-			ndlp->nlp_flag &= ~NLP_RELEASE_RPI;
+-			spin_unlock_irq(&ndlp->lock);
+-			lpfc_drop_node(vport, ndlp);
++	    vport && vport->port_type == LPFC_NPIV_PORT &&
++	    !(ndlp->fc4_xpt_flags & SCSI_XPT_REGD)) {
++		if (ndlp->nlp_flag & NLP_RELEASE_RPI) {
++			if (ndlp->nlp_state != NLP_STE_PLOGI_ISSUE &&
++			    ndlp->nlp_state != NLP_STE_REG_LOGIN_ISSUE) {
++				lpfc_sli4_free_rpi(phba, ndlp->nlp_rpi);
++				spin_lock_irq(&ndlp->lock);
++				ndlp->nlp_rpi = LPFC_RPI_ALLOC_ERROR;
++				ndlp->nlp_flag &= ~NLP_RELEASE_RPI;
++				spin_unlock_irq(&ndlp->lock);
++			}
+ 		}
++
++		lpfc_drop_node(vport, ndlp);
+ 	}
+ 
+ 	/* Release the originating I/O reference. */
+diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
+index b7c569a42aa47..03964b26f3f27 100644
+--- a/drivers/scsi/scsi_lib.c
++++ b/drivers/scsi/scsi_lib.c
+@@ -1463,6 +1463,8 @@ static int scsi_dispatch_cmd(struct scsi_cmnd *cmd)
+ 	struct Scsi_Host *host = cmd->device->host;
+ 	int rtn = 0;
+ 
++	atomic_inc(&cmd->device->iorequest_cnt);
++
+ 	/* check if the device is still usable */
+ 	if (unlikely(cmd->device->sdev_state == SDEV_DEL)) {
+ 		/* in SDEV_DEL we error all commands. DID_NO_CONNECT
+@@ -1761,7 +1763,6 @@ static blk_status_t scsi_queue_rq(struct blk_mq_hw_ctx *hctx,
+ 		goto out_dec_host_busy;
+ 	}
+ 
+-	atomic_inc(&cmd->device->iorequest_cnt);
+ 	return BLK_STS_OK;
+ 
+ out_dec_host_busy:
+diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
+index d9ce379c4d2e8..e6bc622954cfa 100644
+--- a/drivers/scsi/storvsc_drv.c
++++ b/drivers/scsi/storvsc_drv.c
+@@ -1780,7 +1780,7 @@ static int storvsc_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *scmnd)
+ 
+ 	length = scsi_bufflen(scmnd);
+ 	payload = (struct vmbus_packet_mpb_array *)&cmd_request->mpb;
+-	payload_sz = sizeof(cmd_request->mpb);
++	payload_sz = 0;
+ 
+ 	if (scsi_sg_count(scmnd)) {
+ 		unsigned long offset_in_hvpg = offset_in_hvpage(sgl->offset);
+@@ -1789,10 +1789,10 @@ static int storvsc_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *scmnd)
+ 		unsigned long hvpfn, hvpfns_to_add;
+ 		int j, i = 0, sg_count;
+ 
+-		if (hvpg_count > MAX_PAGE_BUFFER_COUNT) {
++		payload_sz = (hvpg_count * sizeof(u64) +
++			      sizeof(struct vmbus_packet_mpb_array));
+ 
+-			payload_sz = (hvpg_count * sizeof(u64) +
+-				      sizeof(struct vmbus_packet_mpb_array));
++		if (hvpg_count > MAX_PAGE_BUFFER_COUNT) {
+ 			payload = kzalloc(payload_sz, GFP_ATOMIC);
+ 			if (!payload)
+ 				return SCSI_MLQUEUE_DEVICE_BUSY;
+diff --git a/drivers/soundwire/bus.c b/drivers/soundwire/bus.c
+index b6aca59c31300..7fd99e581a574 100644
+--- a/drivers/soundwire/bus.c
++++ b/drivers/soundwire/bus.c
+@@ -546,9 +546,11 @@ int sdw_nread(struct sdw_slave *slave, u32 addr, size_t count, u8 *val)
+ {
+ 	int ret;
+ 
+-	ret = pm_runtime_resume_and_get(&slave->dev);
+-	if (ret < 0 && ret != -EACCES)
++	ret = pm_runtime_get_sync(&slave->dev);
++	if (ret < 0 && ret != -EACCES) {
++		pm_runtime_put_noidle(&slave->dev);
+ 		return ret;
++	}
+ 
+ 	ret = sdw_nread_no_pm(slave, addr, count, val);
+ 
+@@ -570,9 +572,11 @@ int sdw_nwrite(struct sdw_slave *slave, u32 addr, size_t count, const u8 *val)
+ {
+ 	int ret;
+ 
+-	ret = pm_runtime_resume_and_get(&slave->dev);
+-	if (ret < 0 && ret != -EACCES)
++	ret = pm_runtime_get_sync(&slave->dev);
++	if (ret < 0 && ret != -EACCES) {
++		pm_runtime_put_noidle(&slave->dev);
+ 		return ret;
++	}
+ 
+ 	ret = sdw_nwrite_no_pm(slave, addr, count, val);
+ 
+@@ -1541,9 +1545,10 @@ static int sdw_handle_slave_alerts(struct sdw_slave *slave)
+ 
+ 	sdw_modify_slave_status(slave, SDW_SLAVE_ALERT);
+ 
+-	ret = pm_runtime_resume_and_get(&slave->dev);
++	ret = pm_runtime_get_sync(&slave->dev);
+ 	if (ret < 0 && ret != -EACCES) {
+ 		dev_err(&slave->dev, "Failed to resume device: %d\n", ret);
++		pm_runtime_put_noidle(&slave->dev);
+ 		return ret;
+ 	}
+ 
+diff --git a/drivers/soundwire/dmi-quirks.c b/drivers/soundwire/dmi-quirks.c
+index 7969881f126dc..58ea013fa918a 100644
+--- a/drivers/soundwire/dmi-quirks.c
++++ b/drivers/soundwire/dmi-quirks.c
+@@ -73,6 +73,23 @@ static const struct adr_remap hp_omen_16[] = {
+ 	{}
+ };
+ 
++/*
++ * Intel NUC M15 LAPRC510 and LAPRC710
++ */
++static const struct adr_remap intel_rooks_county[] = {
++	/* rt711-sdca on link0 */
++	{
++		0x000020025d071100ull,
++		0x000030025d071101ull
++	},
++	/* rt1316-sdca on link2 */
++	{
++		0x000120025d071100ull,
++		0x000230025d131601ull
++	},
++	{}
++};
++
+ static const struct dmi_system_id adr_remap_quirk_table[] = {
+ 	/* TGL devices */
+ 	{
+@@ -98,6 +115,14 @@ static const struct dmi_system_id adr_remap_quirk_table[] = {
+ 		},
+ 		.driver_data = (void *)intel_tgl_bios,
+ 	},
++	{
++		/* quirk used for NUC15 'Rooks County' LAPRC510 and LAPRC710 skews */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Intel(R) Client Systems"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "LAPRC"),
++		},
++		.driver_data = (void *)intel_rooks_county,
++	},
+ 	{
+ 		.matches = {
+ 			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc"),
+diff --git a/drivers/soundwire/qcom.c b/drivers/soundwire/qcom.c
+index ba502129150d5..30575ed20947e 100644
+--- a/drivers/soundwire/qcom.c
++++ b/drivers/soundwire/qcom.c
+@@ -1217,6 +1217,9 @@ static int qcom_swrm_get_port_config(struct qcom_swrm_ctrl *ctrl)
+ 	ctrl->num_dout_ports = val;
+ 
+ 	nports = ctrl->num_dout_ports + ctrl->num_din_ports;
++	if (nports > QCOM_SDW_MAX_PORTS)
++		return -EINVAL;
++
+ 	/* Valid port numbers are from 1-14, so mask out port 0 explicitly */
+ 	set_bit(0, &ctrl->dout_port_mask);
+ 	set_bit(0, &ctrl->din_port_mask);
+diff --git a/drivers/spi/spi-imx.c b/drivers/spi/spi-imx.c
+index 6c9c87cd14cae..a2a5ada61c3af 100644
+--- a/drivers/spi/spi-imx.c
++++ b/drivers/spi/spi-imx.c
+@@ -252,6 +252,18 @@ static bool spi_imx_can_dma(struct spi_controller *controller, struct spi_device
+ 	return true;
+ }
+ 
++/*
++ * Note the number of natively supported chip selects for MX51 is 4. Some
++ * devices may have less actual SS pins but the register map supports 4. When
++ * using gpio chip selects the cs values passed into the macros below can go
++ * outside the range 0 - 3. We therefore need to limit the cs value to avoid
++ * corrupting bits outside the allocated locations.
++ *
++ * The simplest way to do this is to just mask the cs bits to 2 bits. This
++ * still allows all 4 native chip selects to work as well as gpio chip selects
++ * (which can use any of the 4 chip select configurations).
++ */
++
+ #define MX51_ECSPI_CTRL		0x08
+ #define MX51_ECSPI_CTRL_ENABLE		(1 <<  0)
+ #define MX51_ECSPI_CTRL_XCH		(1 <<  2)
+@@ -260,16 +272,16 @@ static bool spi_imx_can_dma(struct spi_controller *controller, struct spi_device
+ #define MX51_ECSPI_CTRL_DRCTL(drctl)	((drctl) << 16)
+ #define MX51_ECSPI_CTRL_POSTDIV_OFFSET	8
+ #define MX51_ECSPI_CTRL_PREDIV_OFFSET	12
+-#define MX51_ECSPI_CTRL_CS(cs)		((cs) << 18)
++#define MX51_ECSPI_CTRL_CS(cs)		((cs & 3) << 18)
+ #define MX51_ECSPI_CTRL_BL_OFFSET	20
+ #define MX51_ECSPI_CTRL_BL_MASK		(0xfff << 20)
+ 
+ #define MX51_ECSPI_CONFIG	0x0c
+-#define MX51_ECSPI_CONFIG_SCLKPHA(cs)	(1 << ((cs) +  0))
+-#define MX51_ECSPI_CONFIG_SCLKPOL(cs)	(1 << ((cs) +  4))
+-#define MX51_ECSPI_CONFIG_SBBCTRL(cs)	(1 << ((cs) +  8))
+-#define MX51_ECSPI_CONFIG_SSBPOL(cs)	(1 << ((cs) + 12))
+-#define MX51_ECSPI_CONFIG_SCLKCTL(cs)	(1 << ((cs) + 20))
++#define MX51_ECSPI_CONFIG_SCLKPHA(cs)	(1 << ((cs & 3) +  0))
++#define MX51_ECSPI_CONFIG_SCLKPOL(cs)	(1 << ((cs & 3) +  4))
++#define MX51_ECSPI_CONFIG_SBBCTRL(cs)	(1 << ((cs & 3) +  8))
++#define MX51_ECSPI_CONFIG_SSBPOL(cs)	(1 << ((cs & 3) + 12))
++#define MX51_ECSPI_CONFIG_SCLKCTL(cs)	(1 << ((cs & 3) + 20))
+ 
+ #define MX51_ECSPI_INT		0x10
+ #define MX51_ECSPI_INT_TEEN		(1 <<  0)
+diff --git a/drivers/spi/spi-intel-pci.c b/drivers/spi/spi-intel-pci.c
+index 4d69e320d0185..a7381e774b953 100644
+--- a/drivers/spi/spi-intel-pci.c
++++ b/drivers/spi/spi-intel-pci.c
+@@ -83,6 +83,7 @@ static const struct pci_device_id intel_spi_pci_ids[] = {
+ 	{ PCI_VDEVICE(INTEL, 0xa2a4), (unsigned long)&cnl_info },
+ 	{ PCI_VDEVICE(INTEL, 0xa324), (unsigned long)&cnl_info },
+ 	{ PCI_VDEVICE(INTEL, 0xa3a4), (unsigned long)&cnl_info },
++	{ PCI_VDEVICE(INTEL, 0xae23), (unsigned long)&cnl_info },
+ 	{ },
+ };
+ MODULE_DEVICE_TABLE(pci, intel_spi_pci_ids);
+diff --git a/drivers/staging/axis-fifo/axis-fifo.c b/drivers/staging/axis-fifo/axis-fifo.c
+index dfd2b357f484b..0a85ea667a1b5 100644
+--- a/drivers/staging/axis-fifo/axis-fifo.c
++++ b/drivers/staging/axis-fifo/axis-fifo.c
+@@ -103,17 +103,17 @@
+  *           globals
+  * ----------------------------
+  */
+-static int read_timeout = 1000; /* ms to wait before read() times out */
+-static int write_timeout = 1000; /* ms to wait before write() times out */
++static long read_timeout = 1000; /* ms to wait before read() times out */
++static long write_timeout = 1000; /* ms to wait before write() times out */
+ 
+ /* ----------------------------
+  * module command-line arguments
+  * ----------------------------
+  */
+ 
+-module_param(read_timeout, int, 0444);
++module_param(read_timeout, long, 0444);
+ MODULE_PARM_DESC(read_timeout, "ms to wait before blocking read() timing out; set to -1 for no timeout");
+-module_param(write_timeout, int, 0444);
++module_param(write_timeout, long, 0444);
+ MODULE_PARM_DESC(write_timeout, "ms to wait before blocking write() timing out; set to -1 for no timeout");
+ 
+ /* ----------------------------
+@@ -384,9 +384,7 @@ static ssize_t axis_fifo_read(struct file *f, char __user *buf,
+ 		mutex_lock(&fifo->read_lock);
+ 		ret = wait_event_interruptible_timeout(fifo->read_queue,
+ 			ioread32(fifo->base_addr + XLLF_RDFO_OFFSET),
+-				 (read_timeout >= 0) ?
+-				  msecs_to_jiffies(read_timeout) :
+-				  MAX_SCHEDULE_TIMEOUT);
++			read_timeout);
+ 
+ 		if (ret <= 0) {
+ 			if (ret == 0) {
+@@ -528,9 +526,7 @@ static ssize_t axis_fifo_write(struct file *f, const char __user *buf,
+ 		ret = wait_event_interruptible_timeout(fifo->write_queue,
+ 			ioread32(fifo->base_addr + XLLF_TDFV_OFFSET)
+ 				 >= words_to_write,
+-				 (write_timeout >= 0) ?
+-				  msecs_to_jiffies(write_timeout) :
+-				  MAX_SCHEDULE_TIMEOUT);
++			write_timeout);
+ 
+ 		if (ret <= 0) {
+ 			if (ret == 0) {
+@@ -948,7 +944,17 @@ static struct platform_driver axis_fifo_driver = {
+ 
+ static int __init axis_fifo_init(void)
+ {
+-	pr_info("axis-fifo driver loaded with parameters read_timeout = %i, write_timeout = %i\n",
++	if (read_timeout >= 0)
++		read_timeout = msecs_to_jiffies(read_timeout);
++	else
++		read_timeout = MAX_SCHEDULE_TIMEOUT;
++
++	if (write_timeout >= 0)
++		write_timeout = msecs_to_jiffies(write_timeout);
++	else
++		write_timeout = MAX_SCHEDULE_TIMEOUT;
++
++	pr_info("axis-fifo driver loaded with parameters read_timeout = %li, write_timeout = %li\n",
+ 		read_timeout, write_timeout);
+ 	return platform_driver_register(&axis_fifo_driver);
+ }
+diff --git a/drivers/staging/media/imx/imx-media-capture.c b/drivers/staging/media/imx/imx-media-capture.c
+index 93ba092360105..5cc67786b9169 100644
+--- a/drivers/staging/media/imx/imx-media-capture.c
++++ b/drivers/staging/media/imx/imx-media-capture.c
+@@ -501,14 +501,14 @@ static int capture_legacy_g_parm(struct file *file, void *fh,
+ 				 struct v4l2_streamparm *a)
+ {
+ 	struct capture_priv *priv = video_drvdata(file);
+-	struct v4l2_subdev_frame_interval fi;
++	struct v4l2_subdev_frame_interval fi = {
++		.pad = priv->src_sd_pad,
++	};
+ 	int ret;
+ 
+ 	if (a->type != V4L2_BUF_TYPE_VIDEO_CAPTURE)
+ 		return -EINVAL;
+ 
+-	memset(&fi, 0, sizeof(fi));
+-	fi.pad = priv->src_sd_pad;
+ 	ret = v4l2_subdev_call(priv->src_sd, video, g_frame_interval, &fi);
+ 	if (ret < 0)
+ 		return ret;
+@@ -523,14 +523,14 @@ static int capture_legacy_s_parm(struct file *file, void *fh,
+ 				 struct v4l2_streamparm *a)
+ {
+ 	struct capture_priv *priv = video_drvdata(file);
+-	struct v4l2_subdev_frame_interval fi;
++	struct v4l2_subdev_frame_interval fi = {
++		.pad = priv->src_sd_pad,
++	};
+ 	int ret;
+ 
+ 	if (a->type != V4L2_BUF_TYPE_VIDEO_CAPTURE)
+ 		return -EINVAL;
+ 
+-	memset(&fi, 0, sizeof(fi));
+-	fi.pad = priv->src_sd_pad;
+ 	fi.interval = a->parm.capture.timeperframe;
+ 	ret = v4l2_subdev_call(priv->src_sd, video, s_frame_interval, &fi);
+ 	if (ret < 0)
+diff --git a/drivers/staging/media/imx/imx-media-utils.c b/drivers/staging/media/imx/imx-media-utils.c
+index 411e907b68eba..b545750ca5262 100644
+--- a/drivers/staging/media/imx/imx-media-utils.c
++++ b/drivers/staging/media/imx/imx-media-utils.c
+@@ -432,15 +432,15 @@ int imx_media_init_cfg(struct v4l2_subdev *sd,
+ 		       struct v4l2_subdev_state *sd_state)
+ {
+ 	struct v4l2_mbus_framefmt *mf_try;
+-	struct v4l2_subdev_format format;
+ 	unsigned int pad;
+ 	int ret;
+ 
+ 	for (pad = 0; pad < sd->entity.num_pads; pad++) {
+-		memset(&format, 0, sizeof(format));
++		struct v4l2_subdev_format format = {
++			.pad = pad,
++			.which = V4L2_SUBDEV_FORMAT_ACTIVE,
++		};
+ 
+-		format.pad = pad;
+-		format.which = V4L2_SUBDEV_FORMAT_ACTIVE;
+ 		ret = v4l2_subdev_call(sd, pad, get_fmt, NULL, &format);
+ 		if (ret)
+ 			continue;
+diff --git a/drivers/staging/media/omap4iss/iss_video.c b/drivers/staging/media/omap4iss/iss_video.c
+index 05548eab7daad..74fb0d185a8ff 100644
+--- a/drivers/staging/media/omap4iss/iss_video.c
++++ b/drivers/staging/media/omap4iss/iss_video.c
+@@ -237,7 +237,9 @@ static int
+ __iss_video_get_format(struct iss_video *video,
+ 		       struct v4l2_mbus_framefmt *format)
+ {
+-	struct v4l2_subdev_format fmt;
++	struct v4l2_subdev_format fmt = {
++		.which = V4L2_SUBDEV_FORMAT_ACTIVE,
++	};
+ 	struct v4l2_subdev *subdev;
+ 	u32 pad;
+ 	int ret;
+@@ -246,9 +248,7 @@ __iss_video_get_format(struct iss_video *video,
+ 	if (!subdev)
+ 		return -EINVAL;
+ 
+-	memset(&fmt, 0, sizeof(fmt));
+ 	fmt.pad = pad;
+-	fmt.which = V4L2_SUBDEV_FORMAT_ACTIVE;
+ 
+ 	mutex_lock(&video->mutex);
+ 	ret = v4l2_subdev_call(subdev, pad, get_fmt, NULL, &fmt);
+diff --git a/drivers/staging/rtl8192e/rtl8192e/rtl_core.c b/drivers/staging/rtl8192e/rtl8192e/rtl_core.c
+index 72d76dc7df781..92552ce30cd58 100644
+--- a/drivers/staging/rtl8192e/rtl8192e/rtl_core.c
++++ b/drivers/staging/rtl8192e/rtl8192e/rtl_core.c
+@@ -48,9 +48,9 @@ static const struct rtl819x_ops rtl819xp_ops = {
+ };
+ 
+ static struct pci_device_id rtl8192_pci_id_tbl[] = {
+-	{RTL_PCI_DEVICE(0x10ec, 0x8192, rtl819xp_ops)},
+-	{RTL_PCI_DEVICE(0x07aa, 0x0044, rtl819xp_ops)},
+-	{RTL_PCI_DEVICE(0x07aa, 0x0047, rtl819xp_ops)},
++	{PCI_DEVICE(0x10ec, 0x8192)},
++	{PCI_DEVICE(0x07aa, 0x0044)},
++	{PCI_DEVICE(0x07aa, 0x0047)},
+ 	{}
+ };
+ 
+diff --git a/drivers/staging/rtl8192e/rtl8192e/rtl_core.h b/drivers/staging/rtl8192e/rtl8192e/rtl_core.h
+index fd96eef90c7fa..bbc1c4bac3588 100644
+--- a/drivers/staging/rtl8192e/rtl8192e/rtl_core.h
++++ b/drivers/staging/rtl8192e/rtl8192e/rtl_core.h
+@@ -55,11 +55,6 @@
+ #define IS_HARDWARE_TYPE_8192SE(_priv)		\
+ 	(((struct r8192_priv *)rtllib_priv(dev))->card_8192 == NIC_8192SE)
+ 
+-#define RTL_PCI_DEVICE(vend, dev, cfg) \
+-	.vendor = (vend), .device = (dev), \
+-	.subvendor = PCI_ANY_ID, .subdevice = PCI_ANY_ID, \
+-	.driver_data = (kernel_ulong_t)&(cfg)
+-
+ #define TOTAL_CAM_ENTRY		32
+ #define CAM_CONTENT_COUNT	8
+ 
+diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c
+index 3f7a9f7f5f4e3..07e196b44b91d 100644
+--- a/drivers/target/iscsi/iscsi_target.c
++++ b/drivers/target/iscsi/iscsi_target.c
+@@ -4531,6 +4531,9 @@ int iscsit_close_session(struct iscsit_session *sess, bool can_sleep)
+ 	iscsit_stop_time2retain_timer(sess);
+ 	spin_unlock_bh(&se_tpg->session_lock);
+ 
++	if (sess->sess_ops->ErrorRecoveryLevel == 2)
++		iscsit_free_connection_recovery_entries(sess);
++
+ 	/*
+ 	 * transport_deregister_session_configfs() will clear the
+ 	 * struct se_node_acl->nacl_sess pointer now as a iscsi_np process context
+@@ -4554,9 +4557,6 @@ int iscsit_close_session(struct iscsit_session *sess, bool can_sleep)
+ 
+ 	transport_deregister_session(sess->se_sess);
+ 
+-	if (sess->sess_ops->ErrorRecoveryLevel == 2)
+-		iscsit_free_connection_recovery_entries(sess);
+-
+ 	iscsit_free_all_ooo_cmdsns(sess);
+ 
+ 	spin_lock_bh(&se_tpg->session_lock);
+diff --git a/drivers/thunderbolt/nhi.c b/drivers/thunderbolt/nhi.c
+index cfebec107f3fc..0a525f44ea316 100644
+--- a/drivers/thunderbolt/nhi.c
++++ b/drivers/thunderbolt/nhi.c
+@@ -54,6 +54,21 @@ static int ring_interrupt_index(const struct tb_ring *ring)
+ 	return bit;
+ }
+ 
++static void nhi_mask_interrupt(struct tb_nhi *nhi, int mask, int ring)
++{
++	if (nhi->quirks & QUIRK_AUTO_CLEAR_INT)
++		return;
++	iowrite32(mask, nhi->iobase + REG_RING_INTERRUPT_MASK_CLEAR_BASE + ring);
++}
++
++static void nhi_clear_interrupt(struct tb_nhi *nhi, int ring)
++{
++	if (nhi->quirks & QUIRK_AUTO_CLEAR_INT)
++		ioread32(nhi->iobase + REG_RING_NOTIFY_BASE + ring);
++	else
++		iowrite32(~0, nhi->iobase + REG_RING_INT_CLEAR + ring);
++}
++
+ /*
+  * ring_interrupt_active() - activate/deactivate interrupts for a single ring
+  *
+@@ -61,8 +76,8 @@ static int ring_interrupt_index(const struct tb_ring *ring)
+  */
+ static void ring_interrupt_active(struct tb_ring *ring, bool active)
+ {
+-	int reg = REG_RING_INTERRUPT_BASE +
+-		  ring_interrupt_index(ring) / 32 * 4;
++	int index = ring_interrupt_index(ring) / 32 * 4;
++	int reg = REG_RING_INTERRUPT_BASE + index;
+ 	int interrupt_bit = ring_interrupt_index(ring) & 31;
+ 	int mask = 1 << interrupt_bit;
+ 	u32 old, new;
+@@ -123,7 +138,11 @@ static void ring_interrupt_active(struct tb_ring *ring, bool active)
+ 					 "interrupt for %s %d is already %s\n",
+ 					 RING_TYPE(ring), ring->hop,
+ 					 active ? "enabled" : "disabled");
+-	iowrite32(new, ring->nhi->iobase + reg);
++
++	if (active)
++		iowrite32(new, ring->nhi->iobase + reg);
++	else
++		nhi_mask_interrupt(ring->nhi, mask, index);
+ }
+ 
+ /*
+@@ -136,11 +155,11 @@ static void nhi_disable_interrupts(struct tb_nhi *nhi)
+ 	int i = 0;
+ 	/* disable interrupts */
+ 	for (i = 0; i < RING_INTERRUPT_REG_COUNT(nhi); i++)
+-		iowrite32(0, nhi->iobase + REG_RING_INTERRUPT_BASE + 4 * i);
++		nhi_mask_interrupt(nhi, ~0, 4 * i);
+ 
+ 	/* clear interrupt status bits */
+ 	for (i = 0; i < RING_NOTIFY_REG_COUNT(nhi); i++)
+-		ioread32(nhi->iobase + REG_RING_NOTIFY_BASE + 4 * i);
++		nhi_clear_interrupt(nhi, 4 * i);
+ }
+ 
+ /* ring helper methods */
+diff --git a/drivers/thunderbolt/nhi_regs.h b/drivers/thunderbolt/nhi_regs.h
+index faef165a919cc..6ba2958154770 100644
+--- a/drivers/thunderbolt/nhi_regs.h
++++ b/drivers/thunderbolt/nhi_regs.h
+@@ -93,6 +93,8 @@ struct ring_desc {
+ #define REG_RING_INTERRUPT_BASE	0x38200
+ #define RING_INTERRUPT_REG_COUNT(nhi) ((31 + 2 * nhi->hop_count) / 32)
+ 
++#define REG_RING_INTERRUPT_MASK_CLEAR_BASE	0x38208
++
+ #define REG_INT_THROTTLING_RATE	0x38c00
+ 
+ /* Interrupt Vector Allocation */
+diff --git a/drivers/tty/serial/8250/8250_bcm7271.c b/drivers/tty/serial/8250/8250_bcm7271.c
+index f801b1f5b46c0..af0e1c0701879 100644
+--- a/drivers/tty/serial/8250/8250_bcm7271.c
++++ b/drivers/tty/serial/8250/8250_bcm7271.c
+@@ -1012,7 +1012,7 @@ static int brcmuart_probe(struct platform_device *pdev)
+ 	of_property_read_u32(np, "clock-frequency", &clk_rate);
+ 
+ 	/* See if a Baud clock has been specified */
+-	baud_mux_clk = of_clk_get_by_name(np, "sw_baud");
++	baud_mux_clk = devm_clk_get(dev, "sw_baud");
+ 	if (IS_ERR(baud_mux_clk)) {
+ 		if (PTR_ERR(baud_mux_clk) == -EPROBE_DEFER) {
+ 			ret = -EPROBE_DEFER;
+@@ -1032,7 +1032,7 @@ static int brcmuart_probe(struct platform_device *pdev)
+ 	if (clk_rate == 0) {
+ 		dev_err(dev, "clock-frequency or clk not defined\n");
+ 		ret = -EINVAL;
+-		goto release_dma;
++		goto err_clk_disable;
+ 	}
+ 
+ 	dev_dbg(dev, "DMA is %senabled\n", priv->dma_enabled ? "" : "not ");
+@@ -1119,6 +1119,8 @@ err1:
+ 	serial8250_unregister_port(priv->line);
+ err:
+ 	brcmuart_free_bufs(dev, priv);
++err_clk_disable:
++	clk_disable_unprepare(baud_mux_clk);
+ release_dma:
+ 	if (priv->dma_enabled)
+ 		brcmuart_arbitration(priv, 0);
+@@ -1133,6 +1135,7 @@ static int brcmuart_remove(struct platform_device *pdev)
+ 	hrtimer_cancel(&priv->hrt);
+ 	serial8250_unregister_port(priv->line);
+ 	brcmuart_free_bufs(&pdev->dev, priv);
++	clk_disable_unprepare(priv->baud_mux_clk);
+ 	if (priv->dma_enabled)
+ 		brcmuart_arbitration(priv, 0);
+ 	return 0;
+diff --git a/drivers/tty/serial/8250/8250_core.c b/drivers/tty/serial/8250/8250_core.c
+index ab63c308be0a2..13bf535eedcd5 100644
+--- a/drivers/tty/serial/8250/8250_core.c
++++ b/drivers/tty/serial/8250/8250_core.c
+@@ -1158,6 +1158,7 @@ void serial8250_unregister_port(int line)
+ 		uart->port.type = PORT_UNKNOWN;
+ 		uart->port.dev = &serial8250_isa_devs->dev;
+ 		uart->capabilities = 0;
++		serial8250_init_port(uart);
+ 		serial8250_apply_quirks(uart);
+ 		uart_add_one_port(&serial8250_reg, &uart->port);
+ 	} else {
+diff --git a/drivers/tty/serial/8250/8250_exar.c b/drivers/tty/serial/8250/8250_exar.c
+index 64770c62bbec5..b406cba10b0eb 100644
+--- a/drivers/tty/serial/8250/8250_exar.c
++++ b/drivers/tty/serial/8250/8250_exar.c
+@@ -40,9 +40,13 @@
+ #define PCI_DEVICE_ID_COMMTECH_4224PCIE		0x0020
+ #define PCI_DEVICE_ID_COMMTECH_4228PCIE		0x0021
+ #define PCI_DEVICE_ID_COMMTECH_4222PCIE		0x0022
++
+ #define PCI_DEVICE_ID_EXAR_XR17V4358		0x4358
+ #define PCI_DEVICE_ID_EXAR_XR17V8358		0x8358
+ 
++#define PCI_SUBDEVICE_ID_USR_2980		0x0128
++#define PCI_SUBDEVICE_ID_USR_2981		0x0129
++
+ #define PCI_DEVICE_ID_SEALEVEL_710xC		0x1001
+ #define PCI_DEVICE_ID_SEALEVEL_720xC		0x1002
+ #define PCI_DEVICE_ID_SEALEVEL_740xC		0x1004
+@@ -829,6 +833,15 @@ static const struct exar8250_board pbn_exar_XR17V8358 = {
+ 		(kernel_ulong_t)&bd			\
+ 	}
+ 
++#define USR_DEVICE(devid, sdevid, bd) {			\
++	PCI_DEVICE_SUB(					\
++		PCI_VENDOR_ID_USR,			\
++		PCI_DEVICE_ID_EXAR_##devid,		\
++		PCI_VENDOR_ID_EXAR,			\
++		PCI_SUBDEVICE_ID_USR_##sdevid), 0, 0,	\
++		(kernel_ulong_t)&bd			\
++	}
++
+ static const struct pci_device_id exar_pci_tbl[] = {
+ 	EXAR_DEVICE(ACCESSIO, COM_2S, pbn_exar_XR17C15x),
+ 	EXAR_DEVICE(ACCESSIO, COM_4S, pbn_exar_XR17C15x),
+@@ -853,6 +866,10 @@ static const struct pci_device_id exar_pci_tbl[] = {
+ 
+ 	IBM_DEVICE(XR17C152, SATURN_SERIAL_ONE_PORT, pbn_exar_ibm_saturn),
+ 
++	/* USRobotics USR298x-OEM PCI Modems */
++	USR_DEVICE(XR17C152, 2980, pbn_exar_XR17C15x),
++	USR_DEVICE(XR17C152, 2981, pbn_exar_XR17C15x),
++
+ 	/* Exar Corp. XR17C15[248] Dual/Quad/Octal UART */
+ 	EXAR_DEVICE(EXAR, XR17C152, pbn_exar_XR17C15x),
+ 	EXAR_DEVICE(EXAR, XR17C154, pbn_exar_XR17C15x),
+diff --git a/drivers/tty/serial/8250/8250_pci.c b/drivers/tty/serial/8250/8250_pci.c
+index c55be6fda0cab..e80c4f6551a1c 100644
+--- a/drivers/tty/serial/8250/8250_pci.c
++++ b/drivers/tty/serial/8250/8250_pci.c
+@@ -1920,6 +1920,8 @@ pci_moxa_setup(struct serial_private *priv,
+ #define PCI_SUBDEVICE_ID_SIIG_DUAL_30	0x2530
+ #define PCI_VENDOR_ID_ADVANTECH		0x13fe
+ #define PCI_DEVICE_ID_INTEL_CE4100_UART 0x2e66
++#define PCI_DEVICE_ID_ADVANTECH_PCI1600	0x1600
++#define PCI_DEVICE_ID_ADVANTECH_PCI1600_1611	0x1611
+ #define PCI_DEVICE_ID_ADVANTECH_PCI3620	0x3620
+ #define PCI_DEVICE_ID_ADVANTECH_PCI3618	0x3618
+ #define PCI_DEVICE_ID_ADVANTECH_PCIf618	0xf618
+@@ -4085,6 +4087,9 @@ static SIMPLE_DEV_PM_OPS(pciserial_pm_ops, pciserial_suspend_one,
+ 			 pciserial_resume_one);
+ 
+ static const struct pci_device_id serial_pci_tbl[] = {
++	{	PCI_VENDOR_ID_ADVANTECH, PCI_DEVICE_ID_ADVANTECH_PCI1600,
++		PCI_DEVICE_ID_ADVANTECH_PCI1600_1611, PCI_ANY_ID, 0, 0,
++		pbn_b0_4_921600 },
+ 	/* Advantech use PCI_DEVICE_ID_ADVANTECH_PCI3620 (0x3620) as 'PCI_SUBVENDOR_ID' */
+ 	{	PCI_VENDOR_ID_ADVANTECH, PCI_DEVICE_ID_ADVANTECH_PCI3620,
+ 		PCI_DEVICE_ID_ADVANTECH_PCI3620, 0x0001, 0, 0,
+diff --git a/drivers/tty/serial/arc_uart.c b/drivers/tty/serial/arc_uart.c
+index 59e25f2b66322..4b2512eef577b 100644
+--- a/drivers/tty/serial/arc_uart.c
++++ b/drivers/tty/serial/arc_uart.c
+@@ -606,10 +606,11 @@ static int arc_serial_probe(struct platform_device *pdev)
+ 	}
+ 	uart->baud = val;
+ 
+-	port->membase = of_iomap(np, 0);
+-	if (!port->membase)
++	port->membase = devm_platform_ioremap_resource(pdev, 0);
++	if (IS_ERR(port->membase)) {
+ 		/* No point of dev_err since UART itself is hosed here */
+-		return -ENXIO;
++		return PTR_ERR(port->membase);
++	}
+ 
+ 	port->irq = irq_of_parse_and_map(np, 0);
+ 
+diff --git a/drivers/tty/serial/qcom_geni_serial.c b/drivers/tty/serial/qcom_geni_serial.c
+index 28fbc927a5465..0ae297239f85b 100644
+--- a/drivers/tty/serial/qcom_geni_serial.c
++++ b/drivers/tty/serial/qcom_geni_serial.c
+@@ -1665,19 +1665,18 @@ static int qcom_geni_serial_probe(struct platform_device *pdev)
+ 	uport->private_data = &port->private_data;
+ 	platform_set_drvdata(pdev, port);
+ 
+-	ret = uart_add_one_port(drv, uport);
+-	if (ret)
+-		return ret;
+-
+ 	irq_set_status_flags(uport->irq, IRQ_NOAUTOEN);
+ 	ret = devm_request_irq(uport->dev, uport->irq, qcom_geni_serial_isr,
+ 			IRQF_TRIGGER_HIGH, port->name, uport);
+ 	if (ret) {
+ 		dev_err(uport->dev, "Failed to get IRQ ret %d\n", ret);
+-		uart_remove_one_port(drv, uport);
+ 		return ret;
+ 	}
+ 
++	ret = uart_add_one_port(drv, uport);
++	if (ret)
++		return ret;
++
+ 	/*
+ 	 * Set pm_runtime status as ACTIVE so that wakeup_irq gets
+ 	 * enabled/disabled from dev_pm_arm_wake_irq during system
+diff --git a/drivers/tty/vt/vc_screen.c b/drivers/tty/vt/vc_screen.c
+index 1dc07f9214d57..01c96537fa36b 100644
+--- a/drivers/tty/vt/vc_screen.c
++++ b/drivers/tty/vt/vc_screen.c
+@@ -656,10 +656,17 @@ vcs_write(struct file *file, const char __user *buf, size_t count, loff_t *ppos)
+ 			}
+ 		}
+ 
+-		/* The vcs_size might have changed while we slept to grab
+-		 * the user buffer, so recheck.
++		/* The vc might have been freed or vcs_size might have changed
++		 * while we slept to grab the user buffer, so recheck.
+ 		 * Return data written up to now on failure.
+ 		 */
++		vc = vcs_vc(inode, &viewed);
++		if (!vc) {
++			if (written)
++				break;
++			ret = -ENXIO;
++			goto unlock_out;
++		}
+ 		size = vcs_size(vc, attr, false);
+ 		if (size < 0) {
+ 			if (written)
+diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
+index 70b112038792a..8ac2945e849f4 100644
+--- a/drivers/ufs/core/ufshcd.c
++++ b/drivers/ufs/core/ufshcd.c
+@@ -9428,8 +9428,16 @@ static int __ufshcd_wl_suspend(struct ufs_hba *hba, enum ufs_pm_op pm_op)
+ 			 * that performance might be impacted.
+ 			 */
+ 			ret = ufshcd_urgent_bkops(hba);
+-			if (ret)
++			if (ret) {
++				/*
++				 * If return err in suspend flow, IO will hang.
++				 * Trigger error handler and break suspend for
++				 * error recovery.
++				 */
++				ufshcd_force_error_recovery(hba);
++				ret = -EBUSY;
+ 				goto enable_scaling;
++			}
+ 		} else {
+ 			/* make sure that auto bkops is disabled */
+ 			ufshcd_disable_auto_bkops(hba);
+diff --git a/drivers/ufs/host/ufshcd-pci.c b/drivers/ufs/host/ufshcd-pci.c
+index 1c91f43e15c8e..9c911787f84c6 100644
+--- a/drivers/ufs/host/ufshcd-pci.c
++++ b/drivers/ufs/host/ufshcd-pci.c
+@@ -607,6 +607,7 @@ static const struct pci_device_id ufshcd_pci_tbl[] = {
+ 	{ PCI_VDEVICE(INTEL, 0x51FF), (kernel_ulong_t)&ufs_intel_adl_hba_vops },
+ 	{ PCI_VDEVICE(INTEL, 0x54FF), (kernel_ulong_t)&ufs_intel_adl_hba_vops },
+ 	{ PCI_VDEVICE(INTEL, 0x7E47), (kernel_ulong_t)&ufs_intel_mtl_hba_vops },
++	{ PCI_VDEVICE(INTEL, 0xA847), (kernel_ulong_t)&ufs_intel_mtl_hba_vops },
+ 	{ }	/* terminate list */
+ };
+ 
+diff --git a/drivers/usb/class/usbtmc.c b/drivers/usb/class/usbtmc.c
+index 4bb6d304eb4b2..311007b1d9046 100644
+--- a/drivers/usb/class/usbtmc.c
++++ b/drivers/usb/class/usbtmc.c
+@@ -1928,6 +1928,8 @@ static int usbtmc_ioctl_request(struct usbtmc_device_data *data,
+ 
+ 	if (request.req.wLength > USBTMC_BUFSIZE)
+ 		return -EMSGSIZE;
++	if (request.req.wLength == 0)	/* Length-0 requests are never IN */
++		request.req.bRequestType &= ~USB_DIR_IN;
+ 
+ 	is_in = request.req.bRequestType & USB_DIR_IN;
+ 
+diff --git a/drivers/usb/dwc3/debugfs.c b/drivers/usb/dwc3/debugfs.c
+index 850df0e6bcabf..f0ffd2e5c6429 100644
+--- a/drivers/usb/dwc3/debugfs.c
++++ b/drivers/usb/dwc3/debugfs.c
+@@ -327,6 +327,11 @@ static int dwc3_lsp_show(struct seq_file *s, void *unused)
+ 	unsigned int		current_mode;
+ 	unsigned long		flags;
+ 	u32			reg;
++	int			ret;
++
++	ret = pm_runtime_resume_and_get(dwc->dev);
++	if (ret < 0)
++		return ret;
+ 
+ 	spin_lock_irqsave(&dwc->lock, flags);
+ 	reg = dwc3_readl(dwc->regs, DWC3_GSTS);
+@@ -345,6 +350,8 @@ static int dwc3_lsp_show(struct seq_file *s, void *unused)
+ 	}
+ 	spin_unlock_irqrestore(&dwc->lock, flags);
+ 
++	pm_runtime_put_sync(dwc->dev);
++
+ 	return 0;
+ }
+ 
+@@ -390,6 +397,11 @@ static int dwc3_mode_show(struct seq_file *s, void *unused)
+ 	struct dwc3		*dwc = s->private;
+ 	unsigned long		flags;
+ 	u32			reg;
++	int			ret;
++
++	ret = pm_runtime_resume_and_get(dwc->dev);
++	if (ret < 0)
++		return ret;
+ 
+ 	spin_lock_irqsave(&dwc->lock, flags);
+ 	reg = dwc3_readl(dwc->regs, DWC3_GCTL);
+@@ -409,6 +421,8 @@ static int dwc3_mode_show(struct seq_file *s, void *unused)
+ 		seq_printf(s, "UNKNOWN %08x\n", DWC3_GCTL_PRTCAP(reg));
+ 	}
+ 
++	pm_runtime_put_sync(dwc->dev);
++
+ 	return 0;
+ }
+ 
+@@ -458,6 +472,11 @@ static int dwc3_testmode_show(struct seq_file *s, void *unused)
+ 	struct dwc3		*dwc = s->private;
+ 	unsigned long		flags;
+ 	u32			reg;
++	int			ret;
++
++	ret = pm_runtime_resume_and_get(dwc->dev);
++	if (ret < 0)
++		return ret;
+ 
+ 	spin_lock_irqsave(&dwc->lock, flags);
+ 	reg = dwc3_readl(dwc->regs, DWC3_DCTL);
+@@ -488,6 +507,8 @@ static int dwc3_testmode_show(struct seq_file *s, void *unused)
+ 		seq_printf(s, "UNKNOWN %d\n", reg);
+ 	}
+ 
++	pm_runtime_put_sync(dwc->dev);
++
+ 	return 0;
+ }
+ 
+@@ -504,6 +525,7 @@ static ssize_t dwc3_testmode_write(struct file *file,
+ 	unsigned long		flags;
+ 	u32			testmode = 0;
+ 	char			buf[32];
++	int			ret;
+ 
+ 	if (copy_from_user(&buf, ubuf, min_t(size_t, sizeof(buf) - 1, count)))
+ 		return -EFAULT;
+@@ -521,10 +543,16 @@ static ssize_t dwc3_testmode_write(struct file *file,
+ 	else
+ 		testmode = 0;
+ 
++	ret = pm_runtime_resume_and_get(dwc->dev);
++	if (ret < 0)
++		return ret;
++
+ 	spin_lock_irqsave(&dwc->lock, flags);
+ 	dwc3_gadget_set_test_mode(dwc, testmode);
+ 	spin_unlock_irqrestore(&dwc->lock, flags);
+ 
++	pm_runtime_put_sync(dwc->dev);
++
+ 	return count;
+ }
+ 
+@@ -543,12 +571,18 @@ static int dwc3_link_state_show(struct seq_file *s, void *unused)
+ 	enum dwc3_link_state	state;
+ 	u32			reg;
+ 	u8			speed;
++	int			ret;
++
++	ret = pm_runtime_resume_and_get(dwc->dev);
++	if (ret < 0)
++		return ret;
+ 
+ 	spin_lock_irqsave(&dwc->lock, flags);
+ 	reg = dwc3_readl(dwc->regs, DWC3_GSTS);
+ 	if (DWC3_GSTS_CURMOD(reg) != DWC3_GSTS_CURMOD_DEVICE) {
+ 		seq_puts(s, "Not available\n");
+ 		spin_unlock_irqrestore(&dwc->lock, flags);
++		pm_runtime_put_sync(dwc->dev);
+ 		return 0;
+ 	}
+ 
+@@ -561,6 +595,8 @@ static int dwc3_link_state_show(struct seq_file *s, void *unused)
+ 		   dwc3_gadget_hs_link_string(state));
+ 	spin_unlock_irqrestore(&dwc->lock, flags);
+ 
++	pm_runtime_put_sync(dwc->dev);
++
+ 	return 0;
+ }
+ 
+@@ -579,6 +615,7 @@ static ssize_t dwc3_link_state_write(struct file *file,
+ 	char			buf[32];
+ 	u32			reg;
+ 	u8			speed;
++	int			ret;
+ 
+ 	if (copy_from_user(&buf, ubuf, min_t(size_t, sizeof(buf) - 1, count)))
+ 		return -EFAULT;
+@@ -598,10 +635,15 @@ static ssize_t dwc3_link_state_write(struct file *file,
+ 	else
+ 		return -EINVAL;
+ 
++	ret = pm_runtime_resume_and_get(dwc->dev);
++	if (ret < 0)
++		return ret;
++
+ 	spin_lock_irqsave(&dwc->lock, flags);
+ 	reg = dwc3_readl(dwc->regs, DWC3_GSTS);
+ 	if (DWC3_GSTS_CURMOD(reg) != DWC3_GSTS_CURMOD_DEVICE) {
+ 		spin_unlock_irqrestore(&dwc->lock, flags);
++		pm_runtime_put_sync(dwc->dev);
+ 		return -EINVAL;
+ 	}
+ 
+@@ -611,12 +653,15 @@ static ssize_t dwc3_link_state_write(struct file *file,
+ 	if (speed < DWC3_DSTS_SUPERSPEED &&
+ 	    state != DWC3_LINK_STATE_RECOV) {
+ 		spin_unlock_irqrestore(&dwc->lock, flags);
++		pm_runtime_put_sync(dwc->dev);
+ 		return -EINVAL;
+ 	}
+ 
+ 	dwc3_gadget_set_link_state(dwc, state);
+ 	spin_unlock_irqrestore(&dwc->lock, flags);
+ 
++	pm_runtime_put_sync(dwc->dev);
++
+ 	return count;
+ }
+ 
+@@ -640,6 +685,11 @@ static int dwc3_tx_fifo_size_show(struct seq_file *s, void *unused)
+ 	unsigned long		flags;
+ 	u32			mdwidth;
+ 	u32			val;
++	int			ret;
++
++	ret = pm_runtime_resume_and_get(dwc->dev);
++	if (ret < 0)
++		return ret;
+ 
+ 	spin_lock_irqsave(&dwc->lock, flags);
+ 	val = dwc3_core_fifo_space(dep, DWC3_TXFIFO);
+@@ -652,6 +702,8 @@ static int dwc3_tx_fifo_size_show(struct seq_file *s, void *unused)
+ 	seq_printf(s, "%u\n", val);
+ 	spin_unlock_irqrestore(&dwc->lock, flags);
+ 
++	pm_runtime_put_sync(dwc->dev);
++
+ 	return 0;
+ }
+ 
+@@ -662,6 +714,11 @@ static int dwc3_rx_fifo_size_show(struct seq_file *s, void *unused)
+ 	unsigned long		flags;
+ 	u32			mdwidth;
+ 	u32			val;
++	int			ret;
++
++	ret = pm_runtime_resume_and_get(dwc->dev);
++	if (ret < 0)
++		return ret;
+ 
+ 	spin_lock_irqsave(&dwc->lock, flags);
+ 	val = dwc3_core_fifo_space(dep, DWC3_RXFIFO);
+@@ -674,6 +731,8 @@ static int dwc3_rx_fifo_size_show(struct seq_file *s, void *unused)
+ 	seq_printf(s, "%u\n", val);
+ 	spin_unlock_irqrestore(&dwc->lock, flags);
+ 
++	pm_runtime_put_sync(dwc->dev);
++
+ 	return 0;
+ }
+ 
+@@ -683,12 +742,19 @@ static int dwc3_tx_request_queue_show(struct seq_file *s, void *unused)
+ 	struct dwc3		*dwc = dep->dwc;
+ 	unsigned long		flags;
+ 	u32			val;
++	int			ret;
++
++	ret = pm_runtime_resume_and_get(dwc->dev);
++	if (ret < 0)
++		return ret;
+ 
+ 	spin_lock_irqsave(&dwc->lock, flags);
+ 	val = dwc3_core_fifo_space(dep, DWC3_TXREQQ);
+ 	seq_printf(s, "%u\n", val);
+ 	spin_unlock_irqrestore(&dwc->lock, flags);
+ 
++	pm_runtime_put_sync(dwc->dev);
++
+ 	return 0;
+ }
+ 
+@@ -698,12 +764,19 @@ static int dwc3_rx_request_queue_show(struct seq_file *s, void *unused)
+ 	struct dwc3		*dwc = dep->dwc;
+ 	unsigned long		flags;
+ 	u32			val;
++	int			ret;
++
++	ret = pm_runtime_resume_and_get(dwc->dev);
++	if (ret < 0)
++		return ret;
+ 
+ 	spin_lock_irqsave(&dwc->lock, flags);
+ 	val = dwc3_core_fifo_space(dep, DWC3_RXREQQ);
+ 	seq_printf(s, "%u\n", val);
+ 	spin_unlock_irqrestore(&dwc->lock, flags);
+ 
++	pm_runtime_put_sync(dwc->dev);
++
+ 	return 0;
+ }
+ 
+@@ -713,12 +786,19 @@ static int dwc3_rx_info_queue_show(struct seq_file *s, void *unused)
+ 	struct dwc3		*dwc = dep->dwc;
+ 	unsigned long		flags;
+ 	u32			val;
++	int			ret;
++
++	ret = pm_runtime_resume_and_get(dwc->dev);
++	if (ret < 0)
++		return ret;
+ 
+ 	spin_lock_irqsave(&dwc->lock, flags);
+ 	val = dwc3_core_fifo_space(dep, DWC3_RXINFOQ);
+ 	seq_printf(s, "%u\n", val);
+ 	spin_unlock_irqrestore(&dwc->lock, flags);
+ 
++	pm_runtime_put_sync(dwc->dev);
++
+ 	return 0;
+ }
+ 
+@@ -728,12 +808,19 @@ static int dwc3_descriptor_fetch_queue_show(struct seq_file *s, void *unused)
+ 	struct dwc3		*dwc = dep->dwc;
+ 	unsigned long		flags;
+ 	u32			val;
++	int			ret;
++
++	ret = pm_runtime_resume_and_get(dwc->dev);
++	if (ret < 0)
++		return ret;
+ 
+ 	spin_lock_irqsave(&dwc->lock, flags);
+ 	val = dwc3_core_fifo_space(dep, DWC3_DESCFETCHQ);
+ 	seq_printf(s, "%u\n", val);
+ 	spin_unlock_irqrestore(&dwc->lock, flags);
+ 
++	pm_runtime_put_sync(dwc->dev);
++
+ 	return 0;
+ }
+ 
+@@ -743,12 +830,19 @@ static int dwc3_event_queue_show(struct seq_file *s, void *unused)
+ 	struct dwc3		*dwc = dep->dwc;
+ 	unsigned long		flags;
+ 	u32			val;
++	int			ret;
++
++	ret = pm_runtime_resume_and_get(dwc->dev);
++	if (ret < 0)
++		return ret;
+ 
+ 	spin_lock_irqsave(&dwc->lock, flags);
+ 	val = dwc3_core_fifo_space(dep, DWC3_EVENTQ);
+ 	seq_printf(s, "%u\n", val);
+ 	spin_unlock_irqrestore(&dwc->lock, flags);
+ 
++	pm_runtime_put_sync(dwc->dev);
++
+ 	return 0;
+ }
+ 
+@@ -793,6 +887,11 @@ static int dwc3_trb_ring_show(struct seq_file *s, void *unused)
+ 	struct dwc3		*dwc = dep->dwc;
+ 	unsigned long		flags;
+ 	int			i;
++	int			ret;
++
++	ret = pm_runtime_resume_and_get(dwc->dev);
++	if (ret < 0)
++		return ret;
+ 
+ 	spin_lock_irqsave(&dwc->lock, flags);
+ 	if (dep->number <= 1) {
+@@ -822,6 +921,8 @@ static int dwc3_trb_ring_show(struct seq_file *s, void *unused)
+ out:
+ 	spin_unlock_irqrestore(&dwc->lock, flags);
+ 
++	pm_runtime_put_sync(dwc->dev);
++
+ 	return 0;
+ }
+ 
+@@ -834,6 +935,11 @@ static int dwc3_ep_info_register_show(struct seq_file *s, void *unused)
+ 	u32			lower_32_bits;
+ 	u32			upper_32_bits;
+ 	u32			reg;
++	int			ret;
++
++	ret = pm_runtime_resume_and_get(dwc->dev);
++	if (ret < 0)
++		return ret;
+ 
+ 	spin_lock_irqsave(&dwc->lock, flags);
+ 	reg = DWC3_GDBGLSPMUX_EPSELECT(dep->number);
+@@ -846,6 +952,8 @@ static int dwc3_ep_info_register_show(struct seq_file *s, void *unused)
+ 	seq_printf(s, "0x%016llx\n", ep_info);
+ 	spin_unlock_irqrestore(&dwc->lock, flags);
+ 
++	pm_runtime_put_sync(dwc->dev);
++
+ 	return 0;
+ }
+ 
+@@ -905,6 +1013,7 @@ void dwc3_debugfs_init(struct dwc3 *dwc)
+ 	dwc->regset->regs = dwc3_regs;
+ 	dwc->regset->nregs = ARRAY_SIZE(dwc3_regs);
+ 	dwc->regset->base = dwc->regs - DWC3_GLOBALS_REGS_START;
++	dwc->regset->dev = dwc->dev;
+ 
+ 	root = debugfs_create_dir(dev_name(dwc->dev), usb_debug_root);
+ 	dwc->debug_root = root;
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index e63700937ba8c..80ae308448451 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -2597,6 +2597,21 @@ static int dwc3_gadget_soft_disconnect(struct dwc3 *dwc)
+ 	return ret;
+ }
+ 
++static int dwc3_gadget_soft_connect(struct dwc3 *dwc)
++{
++	/*
++	 * In the Synopsys DWC_usb31 1.90a programming guide section
++	 * 4.1.9, it specifies that for a reconnect after a
++	 * device-initiated disconnect requires a core soft reset
++	 * (DCTL.CSftRst) before enabling the run/stop bit.
++	 */
++	dwc3_core_soft_reset(dwc);
++
++	dwc3_event_buffers_setup(dwc);
++	__dwc3_gadget_start(dwc);
++	return dwc3_gadget_run_stop(dwc, true);
++}
++
+ static int dwc3_gadget_pullup(struct usb_gadget *g, int is_on)
+ {
+ 	struct dwc3		*dwc = gadget_to_dwc(g);
+@@ -2635,21 +2650,10 @@ static int dwc3_gadget_pullup(struct usb_gadget *g, int is_on)
+ 
+ 	synchronize_irq(dwc->irq_gadget);
+ 
+-	if (!is_on) {
++	if (!is_on)
+ 		ret = dwc3_gadget_soft_disconnect(dwc);
+-	} else {
+-		/*
+-		 * In the Synopsys DWC_usb31 1.90a programming guide section
+-		 * 4.1.9, it specifies that for a reconnect after a
+-		 * device-initiated disconnect requires a core soft reset
+-		 * (DCTL.CSftRst) before enabling the run/stop bit.
+-		 */
+-		dwc3_core_soft_reset(dwc);
+-
+-		dwc3_event_buffers_setup(dwc);
+-		__dwc3_gadget_start(dwc);
+-		ret = dwc3_gadget_run_stop(dwc, true);
+-	}
++	else
++		ret = dwc3_gadget_soft_connect(dwc);
+ 
+ 	pm_runtime_put(dwc->dev);
+ 
+@@ -4565,42 +4569,39 @@ void dwc3_gadget_exit(struct dwc3 *dwc)
+ int dwc3_gadget_suspend(struct dwc3 *dwc)
+ {
+ 	unsigned long flags;
++	int ret;
+ 
+ 	if (!dwc->gadget_driver)
+ 		return 0;
+ 
+-	dwc3_gadget_run_stop(dwc, false);
++	ret = dwc3_gadget_soft_disconnect(dwc);
++	if (ret)
++		goto err;
+ 
+ 	spin_lock_irqsave(&dwc->lock, flags);
+ 	dwc3_disconnect_gadget(dwc);
+-	__dwc3_gadget_stop(dwc);
+ 	spin_unlock_irqrestore(&dwc->lock, flags);
+ 
+ 	return 0;
++
++err:
++	/*
++	 * Attempt to reset the controller's state. Likely no
++	 * communication can be established until the host
++	 * performs a port reset.
++	 */
++	if (dwc->softconnect)
++		dwc3_gadget_soft_connect(dwc);
++
++	return ret;
+ }
+ 
+ int dwc3_gadget_resume(struct dwc3 *dwc)
+ {
+-	int			ret;
+-
+ 	if (!dwc->gadget_driver || !dwc->softconnect)
+ 		return 0;
+ 
+-	ret = __dwc3_gadget_start(dwc);
+-	if (ret < 0)
+-		goto err0;
+-
+-	ret = dwc3_gadget_run_stop(dwc, true);
+-	if (ret < 0)
+-		goto err1;
+-
+-	return 0;
+-
+-err1:
+-	__dwc3_gadget_stop(dwc);
+-
+-err0:
+-	return ret;
++	return dwc3_gadget_soft_connect(dwc);
+ }
+ 
+ void dwc3_gadget_process_pending_events(struct dwc3 *dwc)
+diff --git a/drivers/usb/gadget/function/u_ether.c b/drivers/usb/gadget/function/u_ether.c
+index f259975dfba43..d67d071c85ac0 100644
+--- a/drivers/usb/gadget/function/u_ether.c
++++ b/drivers/usb/gadget/function/u_ether.c
+@@ -17,6 +17,7 @@
+ #include <linux/etherdevice.h>
+ #include <linux/ethtool.h>
+ #include <linux/if_vlan.h>
++#include <linux/string_helpers.h>
+ #include <linux/usb/composite.h>
+ 
+ #include "u_ether.h"
+@@ -942,6 +943,8 @@ int gether_get_host_addr_cdc(struct net_device *net, char *host_addr, int len)
+ 	dev = netdev_priv(net);
+ 	snprintf(host_addr, len, "%pm", dev->host_mac);
+ 
++	string_upper(host_addr, host_addr);
++
+ 	return strlen(host_addr);
+ }
+ EXPORT_SYMBOL_GPL(gether_get_host_addr_cdc);
+diff --git a/drivers/usb/gadget/udc/core.c b/drivers/usb/gadget/udc/core.c
+index b14cbd0a6d013..23b0629a87743 100644
+--- a/drivers/usb/gadget/udc/core.c
++++ b/drivers/usb/gadget/udc/core.c
+@@ -37,10 +37,6 @@ static struct bus_type gadget_bus_type;
+  * @vbus: for udcs who care about vbus status, this value is real vbus status;
+  * for udcs who do not care about vbus status, this value is always true
+  * @started: the UDC's started state. True if the UDC had started.
+- * @connect_lock: protects udc->vbus, udc->started, gadget->connect, gadget->deactivate related
+- * functions. usb_gadget_connect_locked, usb_gadget_disconnect_locked,
+- * usb_udc_connect_control_locked, usb_gadget_udc_start_locked, usb_gadget_udc_stop_locked are
+- * called with this lock held.
+  *
+  * This represents the internal data structure which is used by the UDC-class
+  * to hold information about udc driver and gadget together.
+@@ -52,7 +48,6 @@ struct usb_udc {
+ 	struct list_head		list;
+ 	bool				vbus;
+ 	bool				started;
+-	struct mutex			connect_lock;
+ };
+ 
+ static struct class *udc_class;
+@@ -665,9 +660,17 @@ out:
+ }
+ EXPORT_SYMBOL_GPL(usb_gadget_vbus_disconnect);
+ 
+-/* Internal version of usb_gadget_connect needs to be called with connect_lock held. */
+-static int usb_gadget_connect_locked(struct usb_gadget *gadget)
+-	__must_hold(&gadget->udc->connect_lock)
++/**
++ * usb_gadget_connect - software-controlled connect to USB host
++ * @gadget:the peripheral being connected
++ *
++ * Enables the D+ (or potentially D-) pullup.  The host will start
++ * enumerating this gadget when the pullup is active and a VBUS session
++ * is active (the link is powered).
++ *
++ * Returns zero on success, else negative errno.
++ */
++int usb_gadget_connect(struct usb_gadget *gadget)
+ {
+ 	int ret = 0;
+ 
+@@ -676,15 +679,10 @@ static int usb_gadget_connect_locked(struct usb_gadget *gadget)
+ 		goto out;
+ 	}
+ 
+-	if (gadget->connected)
+-		goto out;
+-
+-	if (gadget->deactivated || !gadget->udc->started) {
++	if (gadget->deactivated) {
+ 		/*
+ 		 * If gadget is deactivated we only save new state.
+ 		 * Gadget will be connected automatically after activation.
+-		 *
+-		 * udc first needs to be started before gadget can be pulled up.
+ 		 */
+ 		gadget->connected = true;
+ 		goto out;
+@@ -699,32 +697,22 @@ out:
+ 
+ 	return ret;
+ }
++EXPORT_SYMBOL_GPL(usb_gadget_connect);
+ 
+ /**
+- * usb_gadget_connect - software-controlled connect to USB host
+- * @gadget:the peripheral being connected
++ * usb_gadget_disconnect - software-controlled disconnect from USB host
++ * @gadget:the peripheral being disconnected
+  *
+- * Enables the D+ (or potentially D-) pullup.  The host will start
+- * enumerating this gadget when the pullup is active and a VBUS session
+- * is active (the link is powered).
++ * Disables the D+ (or potentially D-) pullup, which the host may see
++ * as a disconnect (when a VBUS session is active).  Not all systems
++ * support software pullup controls.
++ *
++ * Following a successful disconnect, invoke the ->disconnect() callback
++ * for the current gadget driver so that UDC drivers don't need to.
+  *
+  * Returns zero on success, else negative errno.
+  */
+-int usb_gadget_connect(struct usb_gadget *gadget)
+-{
+-	int ret;
+-
+-	mutex_lock(&gadget->udc->connect_lock);
+-	ret = usb_gadget_connect_locked(gadget);
+-	mutex_unlock(&gadget->udc->connect_lock);
+-
+-	return ret;
+-}
+-EXPORT_SYMBOL_GPL(usb_gadget_connect);
+-
+-/* Internal version of usb_gadget_disconnect needs to be called with connect_lock held. */
+-static int usb_gadget_disconnect_locked(struct usb_gadget *gadget)
+-	__must_hold(&gadget->udc->connect_lock)
++int usb_gadget_disconnect(struct usb_gadget *gadget)
+ {
+ 	int ret = 0;
+ 
+@@ -736,12 +724,10 @@ static int usb_gadget_disconnect_locked(struct usb_gadget *gadget)
+ 	if (!gadget->connected)
+ 		goto out;
+ 
+-	if (gadget->deactivated || !gadget->udc->started) {
++	if (gadget->deactivated) {
+ 		/*
+ 		 * If gadget is deactivated we only save new state.
+ 		 * Gadget will stay disconnected after activation.
+-		 *
+-		 * udc should have been started before gadget being pulled down.
+ 		 */
+ 		gadget->connected = false;
+ 		goto out;
+@@ -761,30 +747,6 @@ out:
+ 
+ 	return ret;
+ }
+-
+-/**
+- * usb_gadget_disconnect - software-controlled disconnect from USB host
+- * @gadget:the peripheral being disconnected
+- *
+- * Disables the D+ (or potentially D-) pullup, which the host may see
+- * as a disconnect (when a VBUS session is active).  Not all systems
+- * support software pullup controls.
+- *
+- * Following a successful disconnect, invoke the ->disconnect() callback
+- * for the current gadget driver so that UDC drivers don't need to.
+- *
+- * Returns zero on success, else negative errno.
+- */
+-int usb_gadget_disconnect(struct usb_gadget *gadget)
+-{
+-	int ret;
+-
+-	mutex_lock(&gadget->udc->connect_lock);
+-	ret = usb_gadget_disconnect_locked(gadget);
+-	mutex_unlock(&gadget->udc->connect_lock);
+-
+-	return ret;
+-}
+ EXPORT_SYMBOL_GPL(usb_gadget_disconnect);
+ 
+ /**
+@@ -805,11 +767,10 @@ int usb_gadget_deactivate(struct usb_gadget *gadget)
+ 	if (gadget->deactivated)
+ 		goto out;
+ 
+-	mutex_lock(&gadget->udc->connect_lock);
+ 	if (gadget->connected) {
+-		ret = usb_gadget_disconnect_locked(gadget);
++		ret = usb_gadget_disconnect(gadget);
+ 		if (ret)
+-			goto unlock;
++			goto out;
+ 
+ 		/*
+ 		 * If gadget was being connected before deactivation, we want
+@@ -819,8 +780,6 @@ int usb_gadget_deactivate(struct usb_gadget *gadget)
+ 	}
+ 	gadget->deactivated = true;
+ 
+-unlock:
+-	mutex_unlock(&gadget->udc->connect_lock);
+ out:
+ 	trace_usb_gadget_deactivate(gadget, ret);
+ 
+@@ -844,7 +803,6 @@ int usb_gadget_activate(struct usb_gadget *gadget)
+ 	if (!gadget->deactivated)
+ 		goto out;
+ 
+-	mutex_lock(&gadget->udc->connect_lock);
+ 	gadget->deactivated = false;
+ 
+ 	/*
+@@ -852,8 +810,7 @@ int usb_gadget_activate(struct usb_gadget *gadget)
+ 	 * while it was being deactivated, we call usb_gadget_connect().
+ 	 */
+ 	if (gadget->connected)
+-		ret = usb_gadget_connect_locked(gadget);
+-	mutex_unlock(&gadget->udc->connect_lock);
++		ret = usb_gadget_connect(gadget);
+ 
+ out:
+ 	trace_usb_gadget_activate(gadget, ret);
+@@ -1094,13 +1051,12 @@ EXPORT_SYMBOL_GPL(usb_gadget_set_state);
+ 
+ /* ------------------------------------------------------------------------- */
+ 
+-/* Acquire connect_lock before calling this function. */
+-static void usb_udc_connect_control_locked(struct usb_udc *udc) __must_hold(&udc->connect_lock)
++static void usb_udc_connect_control(struct usb_udc *udc)
+ {
+-	if (udc->vbus && udc->started)
+-		usb_gadget_connect_locked(udc->gadget);
++	if (udc->vbus)
++		usb_gadget_connect(udc->gadget);
+ 	else
+-		usb_gadget_disconnect_locked(udc->gadget);
++		usb_gadget_disconnect(udc->gadget);
+ }
+ 
+ /**
+@@ -1116,12 +1072,10 @@ void usb_udc_vbus_handler(struct usb_gadget *gadget, bool status)
+ {
+ 	struct usb_udc *udc = gadget->udc;
+ 
+-	mutex_lock(&udc->connect_lock);
+ 	if (udc) {
+ 		udc->vbus = status;
+-		usb_udc_connect_control_locked(udc);
++		usb_udc_connect_control(udc);
+ 	}
+-	mutex_unlock(&udc->connect_lock);
+ }
+ EXPORT_SYMBOL_GPL(usb_udc_vbus_handler);
+ 
+@@ -1143,7 +1097,7 @@ void usb_gadget_udc_reset(struct usb_gadget *gadget,
+ EXPORT_SYMBOL_GPL(usb_gadget_udc_reset);
+ 
+ /**
+- * usb_gadget_udc_start_locked - tells usb device controller to start up
++ * usb_gadget_udc_start - tells usb device controller to start up
+  * @udc: The UDC to be started
+  *
+  * This call is issued by the UDC Class driver when it's about
+@@ -1154,11 +1108,8 @@ EXPORT_SYMBOL_GPL(usb_gadget_udc_reset);
+  * necessary to have it powered on.
+  *
+  * Returns zero on success, else negative errno.
+- *
+- * Caller should acquire connect_lock before invoking this function.
+  */
+-static inline int usb_gadget_udc_start_locked(struct usb_udc *udc)
+-	__must_hold(&udc->connect_lock)
++static inline int usb_gadget_udc_start(struct usb_udc *udc)
+ {
+ 	int ret;
+ 
+@@ -1175,7 +1126,7 @@ static inline int usb_gadget_udc_start_locked(struct usb_udc *udc)
+ }
+ 
+ /**
+- * usb_gadget_udc_stop_locked - tells usb device controller we don't need it anymore
++ * usb_gadget_udc_stop - tells usb device controller we don't need it anymore
+  * @udc: The UDC to be stopped
+  *
+  * This call is issued by the UDC Class driver after calling
+@@ -1184,11 +1135,8 @@ static inline int usb_gadget_udc_start_locked(struct usb_udc *udc)
+  * The details are implementation specific, but it can go as
+  * far as powering off UDC completely and disable its data
+  * line pullups.
+- *
+- * Caller should acquire connect lock before invoking this function.
+  */
+-static inline void usb_gadget_udc_stop_locked(struct usb_udc *udc)
+-	__must_hold(&udc->connect_lock)
++static inline void usb_gadget_udc_stop(struct usb_udc *udc)
+ {
+ 	if (!udc->started) {
+ 		dev_err(&udc->dev, "UDC had already stopped\n");
+@@ -1347,7 +1295,6 @@ int usb_add_gadget(struct usb_gadget *gadget)
+ 
+ 	udc->gadget = gadget;
+ 	gadget->udc = udc;
+-	mutex_init(&udc->connect_lock);
+ 
+ 	udc->started = false;
+ 
+@@ -1549,15 +1496,11 @@ static int gadget_bind_driver(struct device *dev)
+ 	if (ret)
+ 		goto err_bind;
+ 
+-	mutex_lock(&udc->connect_lock);
+-	ret = usb_gadget_udc_start_locked(udc);
+-	if (ret) {
+-		mutex_unlock(&udc->connect_lock);
++	ret = usb_gadget_udc_start(udc);
++	if (ret)
+ 		goto err_start;
+-	}
+ 	usb_gadget_enable_async_callbacks(udc);
+-	usb_udc_connect_control_locked(udc);
+-	mutex_unlock(&udc->connect_lock);
++	usb_udc_connect_control(udc);
+ 
+ 	kobject_uevent(&udc->dev.kobj, KOBJ_CHANGE);
+ 	return 0;
+@@ -1588,14 +1531,12 @@ static void gadget_unbind_driver(struct device *dev)
+ 
+ 	kobject_uevent(&udc->dev.kobj, KOBJ_CHANGE);
+ 
+-	mutex_lock(&udc->connect_lock);
+-	usb_gadget_disconnect_locked(gadget);
++	usb_gadget_disconnect(gadget);
+ 	usb_gadget_disable_async_callbacks(udc);
+ 	if (gadget->irq)
+ 		synchronize_irq(gadget->irq);
+ 	udc->driver->unbind(gadget);
+-	usb_gadget_udc_stop_locked(udc);
+-	mutex_unlock(&udc->connect_lock);
++	usb_gadget_udc_stop(udc);
+ 
+ 	mutex_lock(&udc_lock);
+ 	driver->is_bound = false;
+@@ -1681,15 +1622,11 @@ static ssize_t soft_connect_store(struct device *dev,
+ 	}
+ 
+ 	if (sysfs_streq(buf, "connect")) {
+-		mutex_lock(&udc->connect_lock);
+-		usb_gadget_udc_start_locked(udc);
+-		usb_gadget_connect_locked(udc->gadget);
+-		mutex_unlock(&udc->connect_lock);
++		usb_gadget_udc_start(udc);
++		usb_gadget_connect(udc->gadget);
+ 	} else if (sysfs_streq(buf, "disconnect")) {
+-		mutex_lock(&udc->connect_lock);
+-		usb_gadget_disconnect_locked(udc->gadget);
+-		usb_gadget_udc_stop_locked(udc);
+-		mutex_unlock(&udc->connect_lock);
++		usb_gadget_disconnect(udc->gadget);
++		usb_gadget_udc_stop(udc);
+ 	} else {
+ 		dev_err(dev, "unsupported command '%s'\n", buf);
+ 		ret = -EINVAL;
+diff --git a/drivers/usb/host/uhci-pci.c b/drivers/usb/host/uhci-pci.c
+index 3592f757fe05d..7bd2fddde770a 100644
+--- a/drivers/usb/host/uhci-pci.c
++++ b/drivers/usb/host/uhci-pci.c
+@@ -119,11 +119,13 @@ static int uhci_pci_init(struct usb_hcd *hcd)
+ 
+ 	uhci->rh_numports = uhci_count_ports(hcd);
+ 
+-	/* Intel controllers report the OverCurrent bit active on.
+-	 * VIA controllers report it active off, so we'll adjust the
+-	 * bit value.  (It's not standardized in the UHCI spec.)
++	/*
++	 * Intel controllers report the OverCurrent bit active on.  VIA
++	 * and ZHAOXIN controllers report it active off, so we'll adjust
++	 * the bit value.  (It's not standardized in the UHCI spec.)
+ 	 */
+-	if (to_pci_dev(uhci_dev(uhci))->vendor == PCI_VENDOR_ID_VIA)
++	if (to_pci_dev(uhci_dev(uhci))->vendor == PCI_VENDOR_ID_VIA ||
++			to_pci_dev(uhci_dev(uhci))->vendor == PCI_VENDOR_ID_ZHAOXIN)
+ 		uhci->oc_low = 1;
+ 
+ 	/* HP's server management chip requires a longer port reset delay. */
+diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
+index d0a9467aa5fc4..c385513ad00b6 100644
+--- a/drivers/usb/host/xhci-mem.c
++++ b/drivers/usb/host/xhci-mem.c
+@@ -9,6 +9,7 @@
+  */
+ 
+ #include <linux/usb.h>
++#include <linux/overflow.h>
+ #include <linux/pci.h>
+ #include <linux/slab.h>
+ #include <linux/dmapool.h>
+@@ -568,7 +569,7 @@ static struct xhci_stream_ctx *xhci_alloc_stream_ctx(struct xhci_hcd *xhci,
+ 		gfp_t mem_flags)
+ {
+ 	struct device *dev = xhci_to_hcd(xhci)->self.sysdev;
+-	size_t size = sizeof(struct xhci_stream_ctx) * num_stream_ctxs;
++	size_t size = size_mul(sizeof(struct xhci_stream_ctx), num_stream_ctxs);
+ 
+ 	if (size > MEDIUM_STREAM_ARRAY_SIZE)
+ 		return dma_alloc_coherent(dev, size,
+@@ -1660,7 +1661,7 @@ static int scratchpad_alloc(struct xhci_hcd *xhci, gfp_t flags)
+ 		goto fail_sp;
+ 
+ 	xhci->scratchpad->sp_array = dma_alloc_coherent(dev,
+-				     num_sp * sizeof(u64),
++				     size_mul(sizeof(u64), num_sp),
+ 				     &xhci->scratchpad->sp_dma, flags);
+ 	if (!xhci->scratchpad->sp_array)
+ 		goto fail_sp2;
+@@ -1799,7 +1800,7 @@ int xhci_alloc_erst(struct xhci_hcd *xhci,
+ 	struct xhci_segment *seg;
+ 	struct xhci_erst_entry *entry;
+ 
+-	size = sizeof(struct xhci_erst_entry) * evt_ring->num_segs;
++	size = size_mul(sizeof(struct xhci_erst_entry), evt_ring->num_segs);
+ 	erst->entries = dma_alloc_coherent(xhci_to_hcd(xhci)->self.sysdev,
+ 					   size, &erst->erst_dma_addr, flags);
+ 	if (!erst->entries)
+@@ -1830,7 +1831,7 @@ xhci_free_interrupter(struct xhci_hcd *xhci, struct xhci_interrupter *ir)
+ 	if (!ir)
+ 		return;
+ 
+-	erst_size = sizeof(struct xhci_erst_entry) * (ir->erst.num_entries);
++	erst_size = sizeof(struct xhci_erst_entry) * ir->erst.num_entries;
+ 	if (ir->erst.entries)
+ 		dma_free_coherent(dev, erst_size,
+ 				  ir->erst.entries,
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index 6db07ca419c31..1fb727f5c4966 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -13,6 +13,7 @@
+ #include <linux/module.h>
+ #include <linux/acpi.h>
+ #include <linux/reset.h>
++#include <linux/suspend.h>
+ 
+ #include "xhci.h"
+ #include "xhci-trace.h"
+@@ -194,7 +195,7 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ 
+ 	if (pdev->vendor == PCI_VENDOR_ID_AMD &&
+ 		pdev->device == PCI_DEVICE_ID_AMD_RENOIR_XHCI)
+-		xhci->quirks |= XHCI_BROKEN_D3COLD;
++		xhci->quirks |= XHCI_BROKEN_D3COLD_S2I;
+ 
+ 	if (pdev->vendor == PCI_VENDOR_ID_INTEL) {
+ 		xhci->quirks |= XHCI_LPM_SUPPORT;
+@@ -609,9 +610,16 @@ static int xhci_pci_suspend(struct usb_hcd *hcd, bool do_wakeup)
+ 	 * Systems with the TI redriver that loses port status change events
+ 	 * need to have the registers polled during D3, so avoid D3cold.
+ 	 */
+-	if (xhci->quirks & (XHCI_COMP_MODE_QUIRK | XHCI_BROKEN_D3COLD))
++	if (xhci->quirks & XHCI_COMP_MODE_QUIRK)
+ 		pci_d3cold_disable(pdev);
+ 
++#ifdef CONFIG_SUSPEND
++	/* d3cold is broken, but only when s2idle is used */
++	if (pm_suspend_target_state == PM_SUSPEND_TO_IDLE &&
++	    xhci->quirks & (XHCI_BROKEN_D3COLD_S2I))
++		pci_d3cold_disable(pdev);
++#endif
++
+ 	if (xhci->quirks & XHCI_PME_STUCK_QUIRK)
+ 		xhci_pme_quirk(hcd);
+ 
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index eb788c60c1c09..5bd97c5a551c1 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -276,6 +276,26 @@ static void inc_enq(struct xhci_hcd *xhci, struct xhci_ring *ring,
+ 	trace_xhci_inc_enq(ring);
+ }
+ 
++static int xhci_num_trbs_to(struct xhci_segment *start_seg, union xhci_trb *start,
++			    struct xhci_segment *end_seg, union xhci_trb *end,
++			    unsigned int num_segs)
++{
++	union xhci_trb *last_on_seg;
++	int num = 0;
++	int i = 0;
++
++	do {
++		if (start_seg == end_seg && end >= start)
++			return num + (end - start);
++		last_on_seg = &start_seg->trbs[TRBS_PER_SEGMENT - 1];
++		num += last_on_seg - start;
++		start_seg = start_seg->next;
++		start = start_seg->trbs;
++	} while (i++ <= num_segs);
++
++	return -EINVAL;
++}
++
+ /*
+  * Check to see if there's room to enqueue num_trbs on the ring and make sure
+  * enqueue pointer will not advance into dequeue segment. See rules above.
+@@ -2140,6 +2160,7 @@ static int finish_td(struct xhci_hcd *xhci, struct xhci_virt_ep *ep,
+ 		     u32 trb_comp_code)
+ {
+ 	struct xhci_ep_ctx *ep_ctx;
++	int trbs_freed;
+ 
+ 	ep_ctx = xhci_get_ep_ctx(xhci, ep->vdev->out_ctx, ep->ep_index);
+ 
+@@ -2209,9 +2230,15 @@ static int finish_td(struct xhci_hcd *xhci, struct xhci_virt_ep *ep,
+ 	}
+ 
+ 	/* Update ring dequeue pointer */
++	trbs_freed = xhci_num_trbs_to(ep_ring->deq_seg, ep_ring->dequeue,
++				      td->last_trb_seg, td->last_trb,
++				      ep_ring->num_segs);
++	if (trbs_freed < 0)
++		xhci_dbg(xhci, "Failed to count freed trbs at TD finish\n");
++	else
++		ep_ring->num_trbs_free += trbs_freed;
+ 	ep_ring->dequeue = td->last_trb;
+ 	ep_ring->deq_seg = td->last_trb_seg;
+-	ep_ring->num_trbs_free += td->num_trbs - 1;
+ 	inc_deq(xhci, ep_ring);
+ 
+ 	return xhci_td_cleanup(xhci, td, ep_ring, td->status);
+diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
+index 786002bb35db0..3818359603cc2 100644
+--- a/drivers/usb/host/xhci.h
++++ b/drivers/usb/host/xhci.h
+@@ -1901,7 +1901,7 @@ struct xhci_hcd {
+ #define XHCI_DISABLE_SPARSE	BIT_ULL(38)
+ #define XHCI_SG_TRB_CACHE_SIZE_QUIRK	BIT_ULL(39)
+ #define XHCI_NO_SOFT_RETRY	BIT_ULL(40)
+-#define XHCI_BROKEN_D3COLD	BIT_ULL(41)
++#define XHCI_BROKEN_D3COLD_S2I	BIT_ULL(41)
+ #define XHCI_EP_CTX_BROKEN_DCS	BIT_ULL(42)
+ #define XHCI_SUSPEND_RESUME_CLKS	BIT_ULL(43)
+ #define XHCI_RESET_TO_DEFAULT	BIT_ULL(44)
+diff --git a/drivers/usb/storage/scsiglue.c b/drivers/usb/storage/scsiglue.c
+index 8931df5a85fd9..c54e9805da536 100644
+--- a/drivers/usb/storage/scsiglue.c
++++ b/drivers/usb/storage/scsiglue.c
+@@ -406,22 +406,25 @@ static DEF_SCSI_QCMD(queuecommand)
+  ***********************************************************************/
+ 
+ /* Command timeout and abort */
+-static int command_abort(struct scsi_cmnd *srb)
++static int command_abort_matching(struct us_data *us, struct scsi_cmnd *srb_match)
+ {
+-	struct us_data *us = host_to_us(srb->device->host);
+-
+-	usb_stor_dbg(us, "%s called\n", __func__);
+-
+ 	/*
+ 	 * us->srb together with the TIMED_OUT, RESETTING, and ABORTING
+ 	 * bits are protected by the host lock.
+ 	 */
+ 	scsi_lock(us_to_host(us));
+ 
+-	/* Is this command still active? */
+-	if (us->srb != srb) {
++	/* is there any active pending command to abort ? */
++	if (!us->srb) {
+ 		scsi_unlock(us_to_host(us));
+ 		usb_stor_dbg(us, "-- nothing to abort\n");
++		return SUCCESS;
++	}
++
++	/* Does the command match the passed srb if any ? */
++	if (srb_match && us->srb != srb_match) {
++		scsi_unlock(us_to_host(us));
++		usb_stor_dbg(us, "-- pending command mismatch\n");
+ 		return FAILED;
+ 	}
+ 
+@@ -444,6 +447,14 @@ static int command_abort(struct scsi_cmnd *srb)
+ 	return SUCCESS;
+ }
+ 
++static int command_abort(struct scsi_cmnd *srb)
++{
++	struct us_data *us = host_to_us(srb->device->host);
++
++	usb_stor_dbg(us, "%s called\n", __func__);
++	return command_abort_matching(us, srb);
++}
++
+ /*
+  * This invokes the transport reset mechanism to reset the state of the
+  * device
+@@ -455,6 +466,9 @@ static int device_reset(struct scsi_cmnd *srb)
+ 
+ 	usb_stor_dbg(us, "%s called\n", __func__);
+ 
++	/* abort any pending command before reset */
++	command_abort_matching(us, NULL);
++
+ 	/* lock the device pointers and do the reset */
+ 	mutex_lock(&(us->dev_mutex));
+ 	result = us->transport_reset(us);
+diff --git a/drivers/usb/typec/altmodes/displayport.c b/drivers/usb/typec/altmodes/displayport.c
+index 8f3e884222ade..66de880b28d01 100644
+--- a/drivers/usb/typec/altmodes/displayport.c
++++ b/drivers/usb/typec/altmodes/displayport.c
+@@ -516,6 +516,10 @@ static ssize_t pin_assignment_show(struct device *dev,
+ 
+ 	mutex_unlock(&dp->lock);
+ 
++	/* get_current_pin_assignments can return 0 when no matching pin assignments are found */
++	if (len == 0)
++		len++;
++
+ 	buf[len - 1] = '\n';
+ 	return len;
+ }
+diff --git a/drivers/usb/typec/tcpm/tcpm.c b/drivers/usb/typec/tcpm/tcpm.c
+index 1ee774c263f08..be1708e30e917 100644
+--- a/drivers/usb/typec/tcpm/tcpm.c
++++ b/drivers/usb/typec/tcpm/tcpm.c
+@@ -1523,7 +1523,21 @@ static bool svdm_consume_svids(struct tcpm_port *port, const u32 *p, int cnt)
+ 		pmdata->svids[pmdata->nsvids++] = svid;
+ 		tcpm_log(port, "SVID %d: 0x%x", pmdata->nsvids, svid);
+ 	}
+-	return true;
++
++	/*
++	 * PD3.0 Spec 6.4.4.3.2: The SVIDs are returned 2 per VDO (see Table
++	 * 6-43), and can be returned maximum 6 VDOs per response (see Figure
++	 * 6-19). If the Respondersupports 12 or more SVID then the Discover
++	 * SVIDs Command Shall be executed multiple times until a Discover
++	 * SVIDs VDO is returned ending either with a SVID value of 0x0000 in
++	 * the last part of the last VDO or with a VDO containing two SVIDs
++	 * with values of 0x0000.
++	 *
++	 * However, some odd dockers support SVIDs less than 12 but without
++	 * 0x0000 in the last VDO, so we need to break the Discover SVIDs
++	 * request and return false here.
++	 */
++	return cnt == 7;
+ abort:
+ 	tcpm_log(port, "SVID_DISCOVERY_MAX(%d) too low!", SVID_DISCOVERY_MAX);
+ 	return false;
+diff --git a/drivers/usb/typec/ucsi/ucsi_acpi.c b/drivers/usb/typec/ucsi/ucsi_acpi.c
+index 62206a6b8ea75..217355f1f9b94 100644
+--- a/drivers/usb/typec/ucsi/ucsi_acpi.c
++++ b/drivers/usb/typec/ucsi/ucsi_acpi.c
+@@ -9,6 +9,7 @@
+ #include <linux/platform_device.h>
+ #include <linux/module.h>
+ #include <linux/acpi.h>
++#include <linux/dmi.h>
+ 
+ #include "ucsi.h"
+ 
+@@ -23,6 +24,7 @@ struct ucsi_acpi {
+ 	struct completion complete;
+ 	unsigned long flags;
+ 	guid_t guid;
++	u64 cmd;
+ };
+ 
+ static int ucsi_acpi_dsm(struct ucsi_acpi *ua, int func)
+@@ -62,6 +64,7 @@ static int ucsi_acpi_async_write(struct ucsi *ucsi, unsigned int offset,
+ 	struct ucsi_acpi *ua = ucsi_get_drvdata(ucsi);
+ 
+ 	memcpy(ua->base + offset, val, val_len);
++	ua->cmd = *(u64 *)val;
+ 
+ 	return ucsi_acpi_dsm(ua, UCSI_DSM_FUNC_WRITE);
+ }
+@@ -93,13 +96,46 @@ static const struct ucsi_operations ucsi_acpi_ops = {
+ 	.async_write = ucsi_acpi_async_write
+ };
+ 
++static int
++ucsi_zenbook_read(struct ucsi *ucsi, unsigned int offset, void *val, size_t val_len)
++{
++	struct ucsi_acpi *ua = ucsi_get_drvdata(ucsi);
++	int ret;
++
++	if (offset == UCSI_VERSION || UCSI_COMMAND(ua->cmd) == UCSI_PPM_RESET) {
++		ret = ucsi_acpi_dsm(ua, UCSI_DSM_FUNC_READ);
++		if (ret)
++			return ret;
++	}
++
++	memcpy(val, ua->base + offset, val_len);
++
++	return 0;
++}
++
++static const struct ucsi_operations ucsi_zenbook_ops = {
++	.read = ucsi_zenbook_read,
++	.sync_write = ucsi_acpi_sync_write,
++	.async_write = ucsi_acpi_async_write
++};
++
++static const struct dmi_system_id zenbook_dmi_id[] = {
++	{
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "ZenBook UX325UA_UM325UA"),
++		},
++	},
++	{ }
++};
++
+ static void ucsi_acpi_notify(acpi_handle handle, u32 event, void *data)
+ {
+ 	struct ucsi_acpi *ua = data;
+ 	u32 cci;
+ 	int ret;
+ 
+-	ret = ucsi_acpi_read(ua->ucsi, UCSI_CCI, &cci, sizeof(cci));
++	ret = ua->ucsi->ops->read(ua->ucsi, UCSI_CCI, &cci, sizeof(cci));
+ 	if (ret)
+ 		return;
+ 
+@@ -114,6 +150,7 @@ static void ucsi_acpi_notify(acpi_handle handle, u32 event, void *data)
+ static int ucsi_acpi_probe(struct platform_device *pdev)
+ {
+ 	struct acpi_device *adev = ACPI_COMPANION(&pdev->dev);
++	const struct ucsi_operations *ops = &ucsi_acpi_ops;
+ 	struct ucsi_acpi *ua;
+ 	struct resource *res;
+ 	acpi_status status;
+@@ -143,7 +180,10 @@ static int ucsi_acpi_probe(struct platform_device *pdev)
+ 	init_completion(&ua->complete);
+ 	ua->dev = &pdev->dev;
+ 
+-	ua->ucsi = ucsi_create(&pdev->dev, &ucsi_acpi_ops);
++	if (dmi_check_system(zenbook_dmi_id))
++		ops = &ucsi_zenbook_ops;
++
++	ua->ucsi = ucsi_create(&pdev->dev, ops);
+ 	if (IS_ERR(ua->ucsi))
+ 		return PTR_ERR(ua->ucsi);
+ 
+diff --git a/drivers/video/fbdev/arcfb.c b/drivers/video/fbdev/arcfb.c
+index 45e64016db328..024d0ee4f04f9 100644
+--- a/drivers/video/fbdev/arcfb.c
++++ b/drivers/video/fbdev/arcfb.c
+@@ -523,7 +523,7 @@ static int arcfb_probe(struct platform_device *dev)
+ 
+ 	info = framebuffer_alloc(sizeof(struct arcfb_par), &dev->dev);
+ 	if (!info)
+-		goto err;
++		goto err_fb_alloc;
+ 
+ 	info->screen_base = (char __iomem *)videomemory;
+ 	info->fbops = &arcfb_ops;
+@@ -535,7 +535,7 @@ static int arcfb_probe(struct platform_device *dev)
+ 
+ 	if (!dio_addr || !cio_addr || !c2io_addr) {
+ 		printk(KERN_WARNING "no IO addresses supplied\n");
+-		goto err1;
++		goto err_addr;
+ 	}
+ 	par->dio_addr = dio_addr;
+ 	par->cio_addr = cio_addr;
+@@ -551,12 +551,12 @@ static int arcfb_probe(struct platform_device *dev)
+ 			printk(KERN_INFO
+ 				"arcfb: Failed req IRQ %d\n", par->irq);
+ 			retval = -EBUSY;
+-			goto err1;
++			goto err_addr;
+ 		}
+ 	}
+ 	retval = register_framebuffer(info);
+ 	if (retval < 0)
+-		goto err1;
++		goto err_register_fb;
+ 	platform_set_drvdata(dev, info);
+ 	fb_info(info, "Arc frame buffer device, using %dK of video memory\n",
+ 		videomemorysize >> 10);
+@@ -580,9 +580,12 @@ static int arcfb_probe(struct platform_device *dev)
+ 	}
+ 
+ 	return 0;
+-err1:
++
++err_register_fb:
++	free_irq(par->irq, info);
++err_addr:
+ 	framebuffer_release(info);
+-err:
++err_fb_alloc:
+ 	vfree(videomemory);
+ 	return retval;
+ }
+diff --git a/fs/ceph/snap.c b/fs/ceph/snap.c
+index 87007203f130e..0b236ebd989fc 100644
+--- a/fs/ceph/snap.c
++++ b/fs/ceph/snap.c
+@@ -1111,6 +1111,19 @@ skip_inode:
+ 				continue;
+ 			adjust_snap_realm_parent(mdsc, child, realm->ino);
+ 		}
++	} else {
++		/*
++		 * In the non-split case both 'num_split_inos' and
++		 * 'num_split_realms' should be 0, making this a no-op.
++		 * However the MDS happens to populate 'split_realms' list
++		 * in one of the UPDATE op cases by mistake.
++		 *
++		 * Skip both lists just in case to ensure that 'p' is
++		 * positioned at the start of realm info, as expected by
++		 * ceph_update_snap_trace().
++		 */
++		p += sizeof(u64) * num_split_inos;
++		p += sizeof(u64) * num_split_realms;
+ 	}
+ 
+ 	/*
+diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h
+index 414685c5d5306..5f8fd20951af3 100644
+--- a/fs/cifs/cifsglob.h
++++ b/fs/cifs/cifsglob.h
+@@ -424,8 +424,8 @@ struct smb_version_operations {
+ 	/* check for STATUS_NETWORK_SESSION_EXPIRED */
+ 	bool (*is_session_expired)(char *);
+ 	/* send oplock break response */
+-	int (*oplock_response)(struct cifs_tcon *, struct cifs_fid *,
+-			       struct cifsInodeInfo *);
++	int (*oplock_response)(struct cifs_tcon *tcon, __u64 persistent_fid, __u64 volatile_fid,
++			__u16 net_fid, struct cifsInodeInfo *cifs_inode);
+ 	/* query remote filesystem */
+ 	int (*queryfs)(const unsigned int, struct cifs_tcon *,
+ 		       struct cifs_sb_info *, struct kstatfs *);
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index 59a10330e299b..8e9a672320ab7 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -1918,18 +1918,22 @@ void __cifs_put_smb_ses(struct cifs_ses *ses)
+ 	/* ses_count can never go negative */
+ 	WARN_ON(ses->ses_count < 0);
+ 
++	spin_lock(&ses->ses_lock);
+ 	if (ses->ses_status == SES_GOOD)
+ 		ses->ses_status = SES_EXITING;
+ 
+-	cifs_free_ipc(ses);
+-
+ 	if (ses->ses_status == SES_EXITING && server->ops->logoff) {
++		spin_unlock(&ses->ses_lock);
++		cifs_free_ipc(ses);
+ 		xid = get_xid();
+ 		rc = server->ops->logoff(xid, ses);
+ 		if (rc)
+ 			cifs_server_dbg(VFS, "%s: Session Logoff failure rc=%d\n",
+ 				__func__, rc);
+ 		_free_xid(xid);
++	} else {
++		spin_unlock(&ses->ses_lock);
++		cifs_free_ipc(ses);
+ 	}
+ 
+ 	spin_lock(&cifs_tcp_ses_lock);
+diff --git a/fs/cifs/file.c b/fs/cifs/file.c
+index c5fcefdfd7976..ba7f2e09d6c8e 100644
+--- a/fs/cifs/file.c
++++ b/fs/cifs/file.c
+@@ -4881,9 +4881,9 @@ void cifs_oplock_break(struct work_struct *work)
+ 	struct cifs_tcon *tcon = tlink_tcon(cfile->tlink);
+ 	struct TCP_Server_Info *server = tcon->ses->server;
+ 	int rc = 0;
+-	bool purge_cache = false;
+-	struct cifs_deferred_close *dclose;
+-	bool is_deferred = false;
++	bool purge_cache = false, oplock_break_cancelled;
++	__u64 persistent_fid, volatile_fid;
++	__u16 net_fid;
+ 
+ 	wait_on_bit(&cinode->flags, CIFS_INODE_PENDING_WRITERS,
+ 			TASK_UNINTERRUPTIBLE);
+@@ -4924,28 +4924,28 @@ oplock_break_ack:
+ 	 * file handles but cached, then schedule deferred close immediately.
+ 	 * So, new open will not use cached handle.
+ 	 */
+-	spin_lock(&CIFS_I(inode)->deferred_lock);
+-	is_deferred = cifs_is_deferred_close(cfile, &dclose);
+-	spin_unlock(&CIFS_I(inode)->deferred_lock);
+ 
+-	if (!CIFS_CACHE_HANDLE(cinode) && is_deferred &&
+-			cfile->deferred_close_scheduled && delayed_work_pending(&cfile->deferred)) {
++	if (!CIFS_CACHE_HANDLE(cinode) && !list_empty(&cinode->deferred_closes))
+ 		cifs_close_deferred_file(cinode);
+-	}
+ 
++	persistent_fid = cfile->fid.persistent_fid;
++	volatile_fid = cfile->fid.volatile_fid;
++	net_fid = cfile->fid.netfid;
++	oplock_break_cancelled = cfile->oplock_break_cancelled;
++
++	_cifsFileInfo_put(cfile, false /* do not wait for ourself */, false);
+ 	/*
+ 	 * releasing stale oplock after recent reconnect of smb session using
+ 	 * a now incorrect file handle is not a data integrity issue but do
+ 	 * not bother sending an oplock release if session to server still is
+ 	 * disconnected since oplock already released by the server
+ 	 */
+-	if (!cfile->oplock_break_cancelled) {
+-		rc = tcon->ses->server->ops->oplock_response(tcon, &cfile->fid,
+-							     cinode);
++	if (!oplock_break_cancelled) {
++		rc = tcon->ses->server->ops->oplock_response(tcon, persistent_fid,
++				volatile_fid, net_fid, cinode);
+ 		cifs_dbg(FYI, "Oplock release rc = %d\n", rc);
+ 	}
+ 
+-	_cifsFileInfo_put(cfile, false /* do not wait for ourself */, false);
+ 	cifs_done_oplock_break(cinode);
+ }
+ 
+diff --git a/fs/cifs/smb1ops.c b/fs/cifs/smb1ops.c
+index abda6148be10f..7d1b3fc014d94 100644
+--- a/fs/cifs/smb1ops.c
++++ b/fs/cifs/smb1ops.c
+@@ -897,12 +897,11 @@ cifs_close_dir(const unsigned int xid, struct cifs_tcon *tcon,
+ }
+ 
+ static int
+-cifs_oplock_response(struct cifs_tcon *tcon, struct cifs_fid *fid,
+-		     struct cifsInodeInfo *cinode)
++cifs_oplock_response(struct cifs_tcon *tcon, __u64 persistent_fid,
++		__u64 volatile_fid, __u16 net_fid, struct cifsInodeInfo *cinode)
+ {
+-	return CIFSSMBLock(0, tcon, fid->netfid, current->tgid, 0, 0, 0, 0,
+-			   LOCKING_ANDX_OPLOCK_RELEASE, false,
+-			   CIFS_CACHE_READ(cinode) ? 1 : 0);
++	return CIFSSMBLock(0, tcon, net_fid, current->tgid, 0, 0, 0, 0,
++			   LOCKING_ANDX_OPLOCK_RELEASE, false, CIFS_CACHE_READ(cinode) ? 1 : 0);
+ }
+ 
+ static int
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index a295e4c2d54e3..5065398665f11 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -2383,15 +2383,14 @@ smb2_is_network_name_deleted(char *buf, struct TCP_Server_Info *server)
+ }
+ 
+ static int
+-smb2_oplock_response(struct cifs_tcon *tcon, struct cifs_fid *fid,
+-		     struct cifsInodeInfo *cinode)
++smb2_oplock_response(struct cifs_tcon *tcon, __u64 persistent_fid,
++		__u64 volatile_fid, __u16 net_fid, struct cifsInodeInfo *cinode)
+ {
+ 	if (tcon->ses->server->capabilities & SMB2_GLOBAL_CAP_LEASING)
+ 		return SMB2_lease_break(0, tcon, cinode->lease_key,
+ 					smb2_get_lease_state(cinode));
+ 
+-	return SMB2_oplock_break(0, tcon, fid->persistent_fid,
+-				 fid->volatile_fid,
++	return SMB2_oplock_break(0, tcon, persistent_fid, volatile_fid,
+ 				 CIFS_CACHE_READ(cinode) ? 1 : 0);
+ }
+ 
+diff --git a/fs/ext2/ext2.h b/fs/ext2/ext2.h
+index cb78d7dcfb952..c60cb900bb2f4 100644
+--- a/fs/ext2/ext2.h
++++ b/fs/ext2/ext2.h
+@@ -180,6 +180,7 @@ static inline struct ext2_sb_info *EXT2_SB(struct super_block *sb)
+ #define EXT2_MIN_BLOCK_SIZE		1024
+ #define	EXT2_MAX_BLOCK_SIZE		4096
+ #define EXT2_MIN_BLOCK_LOG_SIZE		  10
++#define EXT2_MAX_BLOCK_LOG_SIZE		  16
+ #define EXT2_BLOCK_SIZE(s)		((s)->s_blocksize)
+ #define	EXT2_ADDR_PER_BLOCK(s)		(EXT2_BLOCK_SIZE(s) / sizeof (__u32))
+ #define EXT2_BLOCK_SIZE_BITS(s)		((s)->s_blocksize_bits)
+diff --git a/fs/ext2/super.c b/fs/ext2/super.c
+index 69c88facfe90e..f342f347a695f 100644
+--- a/fs/ext2/super.c
++++ b/fs/ext2/super.c
+@@ -945,6 +945,13 @@ static int ext2_fill_super(struct super_block *sb, void *data, int silent)
+ 		goto failed_mount;
+ 	}
+ 
++	if (le32_to_cpu(es->s_log_block_size) >
++	    (EXT2_MAX_BLOCK_LOG_SIZE - BLOCK_SIZE_BITS)) {
++		ext2_msg(sb, KERN_ERR,
++			 "Invalid log block size: %u",
++			 le32_to_cpu(es->s_log_block_size));
++		goto failed_mount;
++	}
+ 	blocksize = BLOCK_SIZE << le32_to_cpu(sbi->s_es->s_log_block_size);
+ 
+ 	if (test_opt(sb, DAX)) {
+diff --git a/fs/ext4/balloc.c b/fs/ext4/balloc.c
+index f2c415f31b755..a38aa33af08ef 100644
+--- a/fs/ext4/balloc.c
++++ b/fs/ext4/balloc.c
+@@ -319,6 +319,22 @@ static ext4_fsblk_t ext4_valid_block_bitmap_padding(struct super_block *sb,
+ 	return (next_zero_bit < bitmap_size ? next_zero_bit : 0);
+ }
+ 
++struct ext4_group_info *ext4_get_group_info(struct super_block *sb,
++					    ext4_group_t group)
++{
++	 struct ext4_group_info **grp_info;
++	 long indexv, indexh;
++
++	 if (unlikely(group >= EXT4_SB(sb)->s_groups_count)) {
++		 ext4_error(sb, "invalid group %u", group);
++		 return NULL;
++	 }
++	 indexv = group >> (EXT4_DESC_PER_BLOCK_BITS(sb));
++	 indexh = group & ((EXT4_DESC_PER_BLOCK(sb)) - 1);
++	 grp_info = sbi_array_rcu_deref(EXT4_SB(sb), s_group_info, indexv);
++	 return grp_info[indexh];
++}
++
+ /*
+  * Return the block number which was discovered to be invalid, or 0 if
+  * the block bitmap is valid.
+@@ -393,7 +409,7 @@ static int ext4_validate_block_bitmap(struct super_block *sb,
+ 
+ 	if (buffer_verified(bh))
+ 		return 0;
+-	if (EXT4_MB_GRP_BBITMAP_CORRUPT(grp))
++	if (!grp || EXT4_MB_GRP_BBITMAP_CORRUPT(grp))
+ 		return -EFSCORRUPTED;
+ 
+ 	ext4_lock_group(sb, block_group);
+diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
+index df0255b7d1faa..68228bd60e836 100644
+--- a/fs/ext4/ext4.h
++++ b/fs/ext4/ext4.h
+@@ -2740,6 +2740,8 @@ extern void ext4_check_blocks_bitmap(struct super_block *);
+ extern struct ext4_group_desc * ext4_get_group_desc(struct super_block * sb,
+ 						    ext4_group_t block_group,
+ 						    struct buffer_head ** bh);
++extern struct ext4_group_info *ext4_get_group_info(struct super_block *sb,
++						   ext4_group_t group);
+ extern int ext4_should_retry_alloc(struct super_block *sb, int *retries);
+ 
+ extern struct buffer_head *ext4_read_block_bitmap_nowait(struct super_block *sb,
+@@ -3347,19 +3349,6 @@ static inline void ext4_isize_set(struct ext4_inode *raw_inode, loff_t i_size)
+ 	raw_inode->i_size_high = cpu_to_le32(i_size >> 32);
+ }
+ 
+-static inline
+-struct ext4_group_info *ext4_get_group_info(struct super_block *sb,
+-					    ext4_group_t group)
+-{
+-	 struct ext4_group_info **grp_info;
+-	 long indexv, indexh;
+-	 BUG_ON(group >= EXT4_SB(sb)->s_groups_count);
+-	 indexv = group >> (EXT4_DESC_PER_BLOCK_BITS(sb));
+-	 indexh = group & ((EXT4_DESC_PER_BLOCK(sb)) - 1);
+-	 grp_info = sbi_array_rcu_deref(EXT4_SB(sb), s_group_info, indexv);
+-	 return grp_info[indexh];
+-}
+-
+ /*
+  * Reading s_groups_count requires using smp_rmb() afterwards.  See
+  * the locking protocol documented in the comments of ext4_group_add()
+diff --git a/fs/ext4/ialloc.c b/fs/ext4/ialloc.c
+index 157663031f8c9..2354538a430e3 100644
+--- a/fs/ext4/ialloc.c
++++ b/fs/ext4/ialloc.c
+@@ -91,7 +91,7 @@ static int ext4_validate_inode_bitmap(struct super_block *sb,
+ 
+ 	if (buffer_verified(bh))
+ 		return 0;
+-	if (EXT4_MB_GRP_IBITMAP_CORRUPT(grp))
++	if (!grp || EXT4_MB_GRP_IBITMAP_CORRUPT(grp))
+ 		return -EFSCORRUPTED;
+ 
+ 	ext4_lock_group(sb, block_group);
+@@ -293,7 +293,7 @@ void ext4_free_inode(handle_t *handle, struct inode *inode)
+ 	}
+ 	if (!(sbi->s_mount_state & EXT4_FC_REPLAY)) {
+ 		grp = ext4_get_group_info(sb, block_group);
+-		if (unlikely(EXT4_MB_GRP_IBITMAP_CORRUPT(grp))) {
++		if (!grp || unlikely(EXT4_MB_GRP_IBITMAP_CORRUPT(grp))) {
+ 			fatal = -EFSCORRUPTED;
+ 			goto error_return;
+ 		}
+@@ -1047,7 +1047,7 @@ got_group:
+ 			 * Skip groups with already-known suspicious inode
+ 			 * tables
+ 			 */
+-			if (EXT4_MB_GRP_IBITMAP_CORRUPT(grp))
++			if (!grp || EXT4_MB_GRP_IBITMAP_CORRUPT(grp))
+ 				goto next_group;
+ 		}
+ 
+@@ -1185,6 +1185,10 @@ got:
+ 
+ 		if (!(sbi->s_mount_state & EXT4_FC_REPLAY)) {
+ 			grp = ext4_get_group_info(sb, group);
++			if (!grp) {
++				err = -EFSCORRUPTED;
++				goto out;
++			}
+ 			down_read(&grp->alloc_sem); /*
+ 						     * protect vs itable
+ 						     * lazyinit
+@@ -1528,7 +1532,7 @@ int ext4_init_inode_table(struct super_block *sb, ext4_group_t group,
+ 	}
+ 
+ 	gdp = ext4_get_group_desc(sb, group, &group_desc_bh);
+-	if (!gdp)
++	if (!gdp || !grp)
+ 		goto out;
+ 
+ 	/*
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index 5639a4cf7ff98..9d495cd63ea27 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -745,6 +745,8 @@ static int __mb_check_buddy(struct ext4_buddy *e4b, char *file,
+ 	MB_CHECK_ASSERT(e4b->bd_info->bb_fragments == fragments);
+ 
+ 	grp = ext4_get_group_info(sb, e4b->bd_group);
++	if (!grp)
++		return NULL;
+ 	list_for_each(cur, &grp->bb_prealloc_list) {
+ 		ext4_group_t groupnr;
+ 		struct ext4_prealloc_space *pa;
+@@ -1060,9 +1062,9 @@ mb_set_largest_free_order(struct super_block *sb, struct ext4_group_info *grp)
+ 
+ static noinline_for_stack
+ void ext4_mb_generate_buddy(struct super_block *sb,
+-				void *buddy, void *bitmap, ext4_group_t group)
++			    void *buddy, void *bitmap, ext4_group_t group,
++			    struct ext4_group_info *grp)
+ {
+-	struct ext4_group_info *grp = ext4_get_group_info(sb, group);
+ 	struct ext4_sb_info *sbi = EXT4_SB(sb);
+ 	ext4_grpblk_t max = EXT4_CLUSTERS_PER_GROUP(sb);
+ 	ext4_grpblk_t i = 0;
+@@ -1183,6 +1185,8 @@ static int ext4_mb_init_cache(struct page *page, char *incore, gfp_t gfp)
+ 			break;
+ 
+ 		grinfo = ext4_get_group_info(sb, group);
++		if (!grinfo)
++			continue;
+ 		/*
+ 		 * If page is uptodate then we came here after online resize
+ 		 * which added some new uninitialized group info structs, so
+@@ -1248,6 +1252,10 @@ static int ext4_mb_init_cache(struct page *page, char *incore, gfp_t gfp)
+ 				group, page->index, i * blocksize);
+ 			trace_ext4_mb_buddy_bitmap_load(sb, group);
+ 			grinfo = ext4_get_group_info(sb, group);
++			if (!grinfo) {
++				err = -EFSCORRUPTED;
++				goto out;
++			}
+ 			grinfo->bb_fragments = 0;
+ 			memset(grinfo->bb_counters, 0,
+ 			       sizeof(*grinfo->bb_counters) *
+@@ -1258,7 +1266,7 @@ static int ext4_mb_init_cache(struct page *page, char *incore, gfp_t gfp)
+ 			ext4_lock_group(sb, group);
+ 			/* init the buddy */
+ 			memset(data, 0xff, blocksize);
+-			ext4_mb_generate_buddy(sb, data, incore, group);
++			ext4_mb_generate_buddy(sb, data, incore, group, grinfo);
+ 			ext4_unlock_group(sb, group);
+ 			incore = NULL;
+ 		} else {
+@@ -1372,6 +1380,9 @@ int ext4_mb_init_group(struct super_block *sb, ext4_group_t group, gfp_t gfp)
+ 	might_sleep();
+ 	mb_debug(sb, "init group %u\n", group);
+ 	this_grp = ext4_get_group_info(sb, group);
++	if (!this_grp)
++		return -EFSCORRUPTED;
++
+ 	/*
+ 	 * This ensures that we don't reinit the buddy cache
+ 	 * page which map to the group from which we are already
+@@ -1446,6 +1457,8 @@ ext4_mb_load_buddy_gfp(struct super_block *sb, ext4_group_t group,
+ 
+ 	blocks_per_page = PAGE_SIZE / sb->s_blocksize;
+ 	grp = ext4_get_group_info(sb, group);
++	if (!grp)
++		return -EFSCORRUPTED;
+ 
+ 	e4b->bd_blkbits = sb->s_blocksize_bits;
+ 	e4b->bd_info = grp;
+@@ -2162,7 +2175,9 @@ int ext4_mb_find_by_goal(struct ext4_allocation_context *ac,
+ 	struct ext4_group_info *grp = ext4_get_group_info(ac->ac_sb, group);
+ 	struct ext4_free_extent ex;
+ 
+-	if (!(ac->ac_flags & EXT4_MB_HINT_TRY_GOAL))
++	if (!grp)
++		return -EFSCORRUPTED;
++	if (!(ac->ac_flags & (EXT4_MB_HINT_TRY_GOAL | EXT4_MB_HINT_GOAL_ONLY)))
+ 		return 0;
+ 	if (grp->bb_free == 0)
+ 		return 0;
+@@ -2386,7 +2401,7 @@ static bool ext4_mb_good_group(struct ext4_allocation_context *ac,
+ 
+ 	BUG_ON(cr < 0 || cr >= 4);
+ 
+-	if (unlikely(EXT4_MB_GRP_BBITMAP_CORRUPT(grp)))
++	if (unlikely(EXT4_MB_GRP_BBITMAP_CORRUPT(grp) || !grp))
+ 		return false;
+ 
+ 	free = grp->bb_free;
+@@ -2455,6 +2470,8 @@ static int ext4_mb_good_group_nolock(struct ext4_allocation_context *ac,
+ 	ext4_grpblk_t free;
+ 	int ret = 0;
+ 
++	if (!grp)
++		return -EFSCORRUPTED;
+ 	if (sbi->s_mb_stats)
+ 		atomic64_inc(&sbi->s_bal_cX_groups_considered[ac->ac_criteria]);
+ 	if (should_lock) {
+@@ -2535,7 +2552,7 @@ ext4_group_t ext4_mb_prefetch(struct super_block *sb, ext4_group_t group,
+ 		 * prefetch once, so we avoid getblk() call, which can
+ 		 * be expensive.
+ 		 */
+-		if (!EXT4_MB_GRP_TEST_AND_SET_READ(grp) &&
++		if (gdp && grp && !EXT4_MB_GRP_TEST_AND_SET_READ(grp) &&
+ 		    EXT4_MB_GRP_NEED_INIT(grp) &&
+ 		    ext4_free_group_clusters(sb, gdp) > 0 &&
+ 		    !(ext4_has_group_desc_csum(sb) &&
+@@ -2579,7 +2596,7 @@ void ext4_mb_prefetch_fini(struct super_block *sb, ext4_group_t group,
+ 		group--;
+ 		grp = ext4_get_group_info(sb, group);
+ 
+-		if (EXT4_MB_GRP_NEED_INIT(grp) &&
++		if (grp && gdp && EXT4_MB_GRP_NEED_INIT(grp) &&
+ 		    ext4_free_group_clusters(sb, gdp) > 0 &&
+ 		    !(ext4_has_group_desc_csum(sb) &&
+ 		      (gdp->bg_flags & cpu_to_le16(EXT4_BG_BLOCK_UNINIT)))) {
+@@ -2838,6 +2855,8 @@ static int ext4_mb_seq_groups_show(struct seq_file *seq, void *v)
+ 		sizeof(struct ext4_group_info);
+ 
+ 	grinfo = ext4_get_group_info(sb, group);
++	if (!grinfo)
++		return 0;
+ 	/* Load the group info in memory only if not already loaded. */
+ 	if (unlikely(EXT4_MB_GRP_NEED_INIT(grinfo))) {
+ 		err = ext4_mb_load_buddy(sb, group, &e4b);
+@@ -2848,7 +2867,7 @@ static int ext4_mb_seq_groups_show(struct seq_file *seq, void *v)
+ 		buddy_loaded = 1;
+ 	}
+ 
+-	memcpy(&sg, ext4_get_group_info(sb, group), i);
++	memcpy(&sg, grinfo, i);
+ 
+ 	if (buddy_loaded)
+ 		ext4_mb_unload_buddy(&e4b);
+@@ -3210,8 +3229,12 @@ static int ext4_mb_init_backend(struct super_block *sb)
+ 
+ err_freebuddy:
+ 	cachep = get_groupinfo_cache(sb->s_blocksize_bits);
+-	while (i-- > 0)
+-		kmem_cache_free(cachep, ext4_get_group_info(sb, i));
++	while (i-- > 0) {
++		struct ext4_group_info *grp = ext4_get_group_info(sb, i);
++
++		if (grp)
++			kmem_cache_free(cachep, grp);
++	}
+ 	i = sbi->s_group_info_size;
+ 	rcu_read_lock();
+ 	group_info = rcu_dereference(sbi->s_group_info);
+@@ -3525,6 +3548,8 @@ int ext4_mb_release(struct super_block *sb)
+ 		for (i = 0; i < ngroups; i++) {
+ 			cond_resched();
+ 			grinfo = ext4_get_group_info(sb, i);
++			if (!grinfo)
++				continue;
+ 			mb_group_bb_bitmap_free(grinfo);
+ 			ext4_lock_group(sb, i);
+ 			count = ext4_mb_cleanup_pa(grinfo);
+@@ -3993,6 +4018,7 @@ ext4_mb_normalize_request(struct ext4_allocation_context *ac,
+ 				struct ext4_allocation_request *ar)
+ {
+ 	struct ext4_sb_info *sbi = EXT4_SB(ac->ac_sb);
++	struct ext4_super_block *es = sbi->s_es;
+ 	int bsbits, max;
+ 	ext4_lblk_t end;
+ 	loff_t size, start_off;
+@@ -4188,18 +4214,21 @@ ext4_mb_normalize_request(struct ext4_allocation_context *ac,
+ 	ac->ac_g_ex.fe_len = EXT4_NUM_B2C(sbi, size);
+ 
+ 	/* define goal start in order to merge */
+-	if (ar->pright && (ar->lright == (start + size))) {
++	if (ar->pright && (ar->lright == (start + size)) &&
++	    ar->pright >= size &&
++	    ar->pright - size >= le32_to_cpu(es->s_first_data_block)) {
+ 		/* merge to the right */
+ 		ext4_get_group_no_and_offset(ac->ac_sb, ar->pright - size,
+-						&ac->ac_f_ex.fe_group,
+-						&ac->ac_f_ex.fe_start);
++						&ac->ac_g_ex.fe_group,
++						&ac->ac_g_ex.fe_start);
+ 		ac->ac_flags |= EXT4_MB_HINT_TRY_GOAL;
+ 	}
+-	if (ar->pleft && (ar->lleft + 1 == start)) {
++	if (ar->pleft && (ar->lleft + 1 == start) &&
++	    ar->pleft + 1 < ext4_blocks_count(es)) {
+ 		/* merge to the left */
+ 		ext4_get_group_no_and_offset(ac->ac_sb, ar->pleft + 1,
+-						&ac->ac_f_ex.fe_group,
+-						&ac->ac_f_ex.fe_start);
++						&ac->ac_g_ex.fe_group,
++						&ac->ac_g_ex.fe_start);
+ 		ac->ac_flags |= EXT4_MB_HINT_TRY_GOAL;
+ 	}
+ 
+@@ -4292,6 +4321,7 @@ static void ext4_mb_use_inode_pa(struct ext4_allocation_context *ac,
+ 	BUG_ON(start < pa->pa_pstart);
+ 	BUG_ON(end > pa->pa_pstart + EXT4_C2B(sbi, pa->pa_len));
+ 	BUG_ON(pa->pa_free < len);
++	BUG_ON(ac->ac_b_ex.fe_len <= 0);
+ 	pa->pa_free -= len;
+ 
+ 	mb_debug(ac->ac_sb, "use %llu/%d from inode pa %p\n", start, len, pa);
+@@ -4454,6 +4484,8 @@ static void ext4_mb_generate_from_freelist(struct super_block *sb, void *bitmap,
+ 	struct ext4_free_data *entry;
+ 
+ 	grp = ext4_get_group_info(sb, group);
++	if (!grp)
++		return;
+ 	n = rb_first(&(grp->bb_free_root));
+ 
+ 	while (n) {
+@@ -4481,6 +4513,9 @@ void ext4_mb_generate_from_pa(struct super_block *sb, void *bitmap,
+ 	int preallocated = 0;
+ 	int len;
+ 
++	if (!grp)
++		return;
++
+ 	/* all form of preallocation discards first load group,
+ 	 * so the only competing code is preallocation use.
+ 	 * we don't need any locking here
+@@ -4616,10 +4651,8 @@ ext4_mb_new_inode_pa(struct ext4_allocation_context *ac)
+ 	pa = ac->ac_pa;
+ 
+ 	if (ac->ac_b_ex.fe_len < ac->ac_g_ex.fe_len) {
+-		int winl;
+-		int wins;
+-		int win;
+-		int offs;
++		int new_bex_start;
++		int new_bex_end;
+ 
+ 		/* we can't allocate as much as normalizer wants.
+ 		 * so, found space must get proper lstart
+@@ -4627,26 +4660,40 @@ ext4_mb_new_inode_pa(struct ext4_allocation_context *ac)
+ 		BUG_ON(ac->ac_g_ex.fe_logical > ac->ac_o_ex.fe_logical);
+ 		BUG_ON(ac->ac_g_ex.fe_len < ac->ac_o_ex.fe_len);
+ 
+-		/* we're limited by original request in that
+-		 * logical block must be covered any way
+-		 * winl is window we can move our chunk within */
+-		winl = ac->ac_o_ex.fe_logical - ac->ac_g_ex.fe_logical;
++		/*
++		 * Use the below logic for adjusting best extent as it keeps
++		 * fragmentation in check while ensuring logical range of best
++		 * extent doesn't overflow out of goal extent:
++		 *
++		 * 1. Check if best ex can be kept at end of goal and still
++		 *    cover original start
++		 * 2. Else, check if best ex can be kept at start of goal and
++		 *    still cover original start
++		 * 3. Else, keep the best ex at start of original request.
++		 */
++		new_bex_end = ac->ac_g_ex.fe_logical +
++			EXT4_C2B(sbi, ac->ac_g_ex.fe_len);
++		new_bex_start = new_bex_end - EXT4_C2B(sbi, ac->ac_b_ex.fe_len);
++		if (ac->ac_o_ex.fe_logical >= new_bex_start)
++			goto adjust_bex;
+ 
+-		/* also, we should cover whole original request */
+-		wins = EXT4_C2B(sbi, ac->ac_b_ex.fe_len - ac->ac_o_ex.fe_len);
++		new_bex_start = ac->ac_g_ex.fe_logical;
++		new_bex_end =
++			new_bex_start + EXT4_C2B(sbi, ac->ac_b_ex.fe_len);
++		if (ac->ac_o_ex.fe_logical < new_bex_end)
++			goto adjust_bex;
+ 
+-		/* the smallest one defines real window */
+-		win = min(winl, wins);
++		new_bex_start = ac->ac_o_ex.fe_logical;
++		new_bex_end =
++			new_bex_start + EXT4_C2B(sbi, ac->ac_b_ex.fe_len);
+ 
+-		offs = ac->ac_o_ex.fe_logical %
+-			EXT4_C2B(sbi, ac->ac_b_ex.fe_len);
+-		if (offs && offs < win)
+-			win = offs;
++adjust_bex:
++		ac->ac_b_ex.fe_logical = new_bex_start;
+ 
+-		ac->ac_b_ex.fe_logical = ac->ac_o_ex.fe_logical -
+-			EXT4_NUM_B2C(sbi, win);
+ 		BUG_ON(ac->ac_o_ex.fe_logical < ac->ac_b_ex.fe_logical);
+ 		BUG_ON(ac->ac_o_ex.fe_len > ac->ac_b_ex.fe_len);
++		BUG_ON(new_bex_end > (ac->ac_g_ex.fe_logical +
++				      EXT4_C2B(sbi, ac->ac_g_ex.fe_len)));
+ 	}
+ 
+ 	/* preallocation can change ac_b_ex, thus we store actually
+@@ -4672,6 +4719,8 @@ ext4_mb_new_inode_pa(struct ext4_allocation_context *ac)
+ 
+ 	ei = EXT4_I(ac->ac_inode);
+ 	grp = ext4_get_group_info(sb, ac->ac_b_ex.fe_group);
++	if (!grp)
++		return;
+ 
+ 	pa->pa_obj_lock = &ei->i_prealloc_lock;
+ 	pa->pa_inode = ac->ac_inode;
+@@ -4725,6 +4774,8 @@ ext4_mb_new_group_pa(struct ext4_allocation_context *ac)
+ 	atomic_add(pa->pa_free, &EXT4_SB(sb)->s_mb_preallocated);
+ 
+ 	grp = ext4_get_group_info(sb, ac->ac_b_ex.fe_group);
++	if (!grp)
++		return;
+ 	lg = ac->ac_lg;
+ 	BUG_ON(lg == NULL);
+ 
+@@ -4853,6 +4904,8 @@ ext4_mb_discard_group_preallocations(struct super_block *sb,
+ 	int err;
+ 	int free = 0;
+ 
++	if (!grp)
++		return 0;
+ 	mb_debug(sb, "discard preallocation for group %u\n", group);
+ 	if (list_empty(&grp->bb_prealloc_list))
+ 		goto out_dbg;
+@@ -5090,6 +5143,9 @@ static inline void ext4_mb_show_pa(struct super_block *sb)
+ 		struct ext4_prealloc_space *pa;
+ 		ext4_grpblk_t start;
+ 		struct list_head *cur;
++
++		if (!grp)
++			continue;
+ 		ext4_lock_group(sb, i);
+ 		list_for_each(cur, &grp->bb_prealloc_list) {
+ 			pa = list_entry(cur, struct ext4_prealloc_space,
+@@ -5889,6 +5945,7 @@ static void ext4_mb_clear_bb(handle_t *handle, struct inode *inode,
+ 	struct buffer_head *bitmap_bh = NULL;
+ 	struct super_block *sb = inode->i_sb;
+ 	struct ext4_group_desc *gdp;
++	struct ext4_group_info *grp;
+ 	unsigned int overflow;
+ 	ext4_grpblk_t bit;
+ 	struct buffer_head *gd_bh;
+@@ -5914,8 +5971,8 @@ do_more:
+ 	overflow = 0;
+ 	ext4_get_group_no_and_offset(sb, block, &block_group, &bit);
+ 
+-	if (unlikely(EXT4_MB_GRP_BBITMAP_CORRUPT(
+-			ext4_get_group_info(sb, block_group))))
++	grp = ext4_get_group_info(sb, block_group);
++	if (unlikely(!grp || EXT4_MB_GRP_BBITMAP_CORRUPT(grp)))
+ 		return;
+ 
+ 	/*
+@@ -6517,6 +6574,8 @@ int ext4_trim_fs(struct super_block *sb, struct fstrim_range *range)
+ 
+ 	for (group = first_group; group <= last_group; group++) {
+ 		grp = ext4_get_group_info(sb, group);
++		if (!grp)
++			continue;
+ 		/* We only do this if the grp has never been initialized */
+ 		if (unlikely(EXT4_MB_GRP_NEED_INIT(grp))) {
+ 			ret = ext4_mb_init_group(sb, group, GFP_NOFS);
+diff --git a/fs/ext4/mmp.c b/fs/ext4/mmp.c
+index 46735ce315b5a..0aaf38ffcb6ec 100644
+--- a/fs/ext4/mmp.c
++++ b/fs/ext4/mmp.c
+@@ -290,6 +290,7 @@ int ext4_multi_mount_protect(struct super_block *sb,
+ 	if (mmp_block < le32_to_cpu(es->s_first_data_block) ||
+ 	    mmp_block >= ext4_blocks_count(es)) {
+ 		ext4_warning(sb, "Invalid MMP block in superblock");
++		retval = -EINVAL;
+ 		goto failed;
+ 	}
+ 
+@@ -315,6 +316,7 @@ int ext4_multi_mount_protect(struct super_block *sb,
+ 
+ 	if (seq == EXT4_MMP_SEQ_FSCK) {
+ 		dump_mmp_msg(sb, mmp, "fsck is running on the filesystem");
++		retval = -EBUSY;
+ 		goto failed;
+ 	}
+ 
+@@ -328,6 +330,7 @@ int ext4_multi_mount_protect(struct super_block *sb,
+ 
+ 	if (schedule_timeout_interruptible(HZ * wait_time) != 0) {
+ 		ext4_warning(sb, "MMP startup interrupted, failing mount\n");
++		retval = -ETIMEDOUT;
+ 		goto failed;
+ 	}
+ 
+@@ -338,6 +341,7 @@ int ext4_multi_mount_protect(struct super_block *sb,
+ 	if (seq != le32_to_cpu(mmp->mmp_seq)) {
+ 		dump_mmp_msg(sb, mmp,
+ 			     "Device is already active on another node.");
++		retval = -EBUSY;
+ 		goto failed;
+ 	}
+ 
+@@ -361,6 +365,7 @@ skip:
+ 	 */
+ 	if (schedule_timeout_interruptible(HZ * wait_time) != 0) {
+ 		ext4_warning(sb, "MMP startup interrupted, failing mount");
++		retval = -ETIMEDOUT;
+ 		goto failed;
+ 	}
+ 
+@@ -371,6 +376,7 @@ skip:
+ 	if (seq != le32_to_cpu(mmp->mmp_seq)) {
+ 		dump_mmp_msg(sb, mmp,
+ 			     "Device is already active on another node.");
++		retval = -EBUSY;
+ 		goto failed;
+ 	}
+ 
+@@ -390,6 +396,7 @@ skip:
+ 		EXT4_SB(sb)->s_mmp_tsk = NULL;
+ 		ext4_warning(sb, "Unable to create kmmpd thread for %s.",
+ 			     sb->s_id);
++		retval = -ENOMEM;
+ 		goto failed;
+ 	}
+ 
+@@ -397,5 +404,5 @@ skip:
+ 
+ failed:
+ 	brelse(bh);
+-	return 1;
++	return retval;
+ }
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index d6ac61f43ac35..d34afa8e0c158 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -1048,6 +1048,8 @@ void ext4_mark_group_bitmap_corrupted(struct super_block *sb,
+ 	struct ext4_group_desc *gdp = ext4_get_group_desc(sb, group, NULL);
+ 	int ret;
+ 
++	if (!grp || !gdp)
++		return;
+ 	if (flags & EXT4_GROUP_INFO_BBITMAP_CORRUPT) {
+ 		ret = ext4_test_and_set_bit(EXT4_GROUP_INFO_BBITMAP_CORRUPT_BIT,
+ 					    &grp->bb_state);
+@@ -5264,9 +5266,11 @@ static int __ext4_fill_super(struct fs_context *fc, struct super_block *sb)
+ 			  ext4_has_feature_orphan_present(sb) ||
+ 			  ext4_has_feature_journal_needs_recovery(sb));
+ 
+-	if (ext4_has_feature_mmp(sb) && !sb_rdonly(sb))
+-		if (ext4_multi_mount_protect(sb, le64_to_cpu(es->s_mmp_block)))
++	if (ext4_has_feature_mmp(sb) && !sb_rdonly(sb)) {
++		err = ext4_multi_mount_protect(sb, le64_to_cpu(es->s_mmp_block));
++		if (err)
+ 			goto failed_mount3a;
++	}
+ 
+ 	/*
+ 	 * The first inode we look at is the journal inode.  Don't try
+@@ -6350,6 +6354,7 @@ static int __ext4_remount(struct fs_context *fc, struct super_block *sb)
+ 	struct ext4_mount_options old_opts;
+ 	ext4_group_t g;
+ 	int err = 0;
++	int enable_rw = 0;
+ #ifdef CONFIG_QUOTA
+ 	int enable_quota = 0;
+ 	int i, j;
+@@ -6536,13 +6541,13 @@ static int __ext4_remount(struct fs_context *fc, struct super_block *sb)
+ 			if (err)
+ 				goto restore_opts;
+ 
+-			sb->s_flags &= ~SB_RDONLY;
+-			if (ext4_has_feature_mmp(sb))
+-				if (ext4_multi_mount_protect(sb,
+-						le64_to_cpu(es->s_mmp_block))) {
+-					err = -EROFS;
++			enable_rw = 1;
++			if (ext4_has_feature_mmp(sb)) {
++				err = ext4_multi_mount_protect(sb,
++						le64_to_cpu(es->s_mmp_block));
++				if (err)
+ 					goto restore_opts;
+-				}
++			}
+ #ifdef CONFIG_QUOTA
+ 			enable_quota = 1;
+ #endif
+@@ -6595,6 +6600,9 @@ static int __ext4_remount(struct fs_context *fc, struct super_block *sb)
+ 	if (!test_opt(sb, BLOCK_VALIDITY) && sbi->s_system_blks)
+ 		ext4_release_system_zone(sb);
+ 
++	if (enable_rw)
++		sb->s_flags &= ~SB_RDONLY;
++
+ 	if (!ext4_has_feature_mmp(sb) || sb_rdonly(sb))
+ 		ext4_stop_mmpd(sbi);
+ 
+diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
+index c3e058e0a0188..d4c862ccd1f72 100644
+--- a/fs/f2fs/checkpoint.c
++++ b/fs/f2fs/checkpoint.c
+@@ -152,6 +152,11 @@ static bool __is_bitmap_valid(struct f2fs_sb_info *sbi, block_t blkaddr,
+ 	se = get_seg_entry(sbi, segno);
+ 
+ 	exist = f2fs_test_bit(offset, se->cur_valid_map);
++
++	/* skip data, if we already have an error in checkpoint. */
++	if (unlikely(f2fs_cp_error(sbi)))
++		return exist;
++
+ 	if (exist && type == DATA_GENERIC_ENHANCE_UPDATE) {
+ 		f2fs_err(sbi, "Inconsistent error blkaddr:%u, sit bitmap:%d",
+ 			 blkaddr, exist);
+@@ -202,6 +207,11 @@ bool f2fs_is_valid_blkaddr(struct f2fs_sb_info *sbi,
+ 	case DATA_GENERIC_ENHANCE_UPDATE:
+ 		if (unlikely(blkaddr >= MAX_BLKADDR(sbi) ||
+ 				blkaddr < MAIN_BLKADDR(sbi))) {
++
++			/* Skip to emit an error message. */
++			if (unlikely(f2fs_cp_error(sbi)))
++				return false;
++
+ 			f2fs_warn(sbi, "access invalid blkaddr:%u",
+ 				  blkaddr);
+ 			set_sbi_flag(sbi, SBI_NEED_FSCK);
+@@ -325,8 +335,15 @@ static int __f2fs_write_meta_page(struct page *page,
+ 
+ 	trace_f2fs_writepage(page, META);
+ 
+-	if (unlikely(f2fs_cp_error(sbi)))
++	if (unlikely(f2fs_cp_error(sbi))) {
++		if (is_sbi_flag_set(sbi, SBI_IS_CLOSE)) {
++			ClearPageUptodate(page);
++			dec_page_count(sbi, F2FS_DIRTY_META);
++			unlock_page(page);
++			return 0;
++		}
+ 		goto redirty_out;
++	}
+ 	if (unlikely(is_sbi_flag_set(sbi, SBI_POR_DOING)))
+ 		goto redirty_out;
+ 	if (wbc->for_reclaim && page->index < GET_SUM_BLOCK(sbi, 0))
+@@ -1306,7 +1323,8 @@ void f2fs_wait_on_all_pages(struct f2fs_sb_info *sbi, int type)
+ 		if (!get_pages(sbi, type))
+ 			break;
+ 
+-		if (unlikely(f2fs_cp_error(sbi)))
++		if (unlikely(f2fs_cp_error(sbi) &&
++			!is_sbi_flag_set(sbi, SBI_IS_CLOSE)))
+ 			break;
+ 
+ 		if (type == F2FS_DIRTY_META)
+diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
+index 1034912a61b30..92bcdbd8e4f21 100644
+--- a/fs/f2fs/data.c
++++ b/fs/f2fs/data.c
+@@ -2237,6 +2237,10 @@ int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret,
+ 	if (ret)
+ 		goto out;
+ 
++	if (unlikely(f2fs_cp_error(sbi))) {
++		ret = -EIO;
++		goto out_put_dnode;
++	}
+ 	f2fs_bug_on(sbi, dn.data_blkaddr != COMPRESS_ADDR);
+ 
+ skip_reading_dnode:
+@@ -2800,7 +2804,8 @@ int f2fs_write_single_data_page(struct page *page, int *submitted,
+ 		 * don't drop any dirty dentry pages for keeping lastest
+ 		 * directory structure.
+ 		 */
+-		if (S_ISDIR(inode->i_mode))
++		if (S_ISDIR(inode->i_mode) &&
++				!is_sbi_flag_set(sbi, SBI_IS_CLOSE))
+ 			goto redirty_out;
+ 		goto out;
+ 	}
+diff --git a/fs/f2fs/extent_cache.c b/fs/f2fs/extent_cache.c
+index 9a8153895d203..bea6ab9d846ae 100644
+--- a/fs/f2fs/extent_cache.c
++++ b/fs/f2fs/extent_cache.c
+@@ -23,18 +23,26 @@ bool sanity_check_extent_cache(struct inode *inode)
+ {
+ 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ 	struct f2fs_inode_info *fi = F2FS_I(inode);
++	struct extent_tree *et = fi->extent_tree[EX_READ];
+ 	struct extent_info *ei;
+ 
+-	if (!fi->extent_tree[EX_READ])
++	if (!et)
++		return true;
++
++	ei = &et->largest;
++	if (!ei->len)
+ 		return true;
+ 
+-	ei = &fi->extent_tree[EX_READ]->largest;
++	/* Let's drop, if checkpoint got corrupted. */
++	if (is_set_ckpt_flags(sbi, CP_ERROR_FLAG)) {
++		ei->len = 0;
++		et->largest_updated = true;
++		return true;
++	}
+ 
+-	if (ei->len &&
+-		(!f2fs_is_valid_blkaddr(sbi, ei->blk,
+-					DATA_GENERIC_ENHANCE) ||
+-		!f2fs_is_valid_blkaddr(sbi, ei->blk + ei->len - 1,
+-					DATA_GENERIC_ENHANCE))) {
++	if (!f2fs_is_valid_blkaddr(sbi, ei->blk, DATA_GENERIC_ENHANCE) ||
++	    !f2fs_is_valid_blkaddr(sbi, ei->blk + ei->len - 1,
++					DATA_GENERIC_ENHANCE)) {
+ 		set_sbi_flag(sbi, SBI_NEED_FSCK);
+ 		f2fs_warn(sbi, "%s: inode (ino=%lx) extent info [%u, %u, %u] is incorrect, run fsck to fix",
+ 			  __func__, inode->i_ino,
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index d6f9d6e0f13b9..47eff365f536c 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -4427,6 +4427,11 @@ static inline bool f2fs_hw_is_readonly(struct f2fs_sb_info *sbi)
+ 	return false;
+ }
+ 
++static inline bool f2fs_dev_is_readonly(struct f2fs_sb_info *sbi)
++{
++	return f2fs_sb_has_readonly(sbi) || f2fs_hw_is_readonly(sbi);
++}
++
+ static inline bool f2fs_lfs_mode(struct f2fs_sb_info *sbi)
+ {
+ 	return F2FS_OPTION(sbi).fs_mode == FS_MODE_LFS;
+diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
+index 2996d38aa89c3..f984d9f05f808 100644
+--- a/fs/f2fs/gc.c
++++ b/fs/f2fs/gc.c
+@@ -1810,6 +1810,7 @@ int f2fs_gc(struct f2fs_sb_info *sbi, struct f2fs_gc_control *gc_control)
+ 		.iroot = RADIX_TREE_INIT(gc_list.iroot, GFP_NOFS),
+ 	};
+ 	unsigned int skipped_round = 0, round = 0;
++	unsigned int upper_secs;
+ 
+ 	trace_f2fs_gc_begin(sbi->sb, gc_type, gc_control->no_bg_gc,
+ 				gc_control->nr_free_secs,
+@@ -1895,8 +1896,13 @@ retry:
+ 		}
+ 	}
+ 
+-	/* Write checkpoint to reclaim prefree segments */
+-	if (free_sections(sbi) < NR_CURSEG_PERSIST_TYPE &&
++	__get_secs_required(sbi, NULL, &upper_secs, NULL);
++
++	/*
++	 * Write checkpoint to reclaim prefree segments.
++	 * We need more three extra sections for writer's data/node/dentry.
++	 */
++	if (free_sections(sbi) <= upper_secs + NR_GC_CHECKPOINT_SECS &&
+ 				prefree_segments(sbi)) {
+ 		ret = f2fs_write_checkpoint(sbi, &cpc);
+ 		if (ret)
+diff --git a/fs/f2fs/gc.h b/fs/f2fs/gc.h
+index 5ad6ac63e13f3..28a00942802c2 100644
+--- a/fs/f2fs/gc.h
++++ b/fs/f2fs/gc.h
+@@ -30,6 +30,8 @@
+ /* Search max. number of dirty segments to select a victim segment */
+ #define DEF_MAX_VICTIM_SEARCH 4096 /* covers 8GB */
+ 
++#define NR_GC_CHECKPOINT_SECS (3)	/* data/node/dentry sections */
++
+ struct f2fs_gc_kthread {
+ 	struct task_struct *f2fs_gc_task;
+ 	wait_queue_head_t gc_wait_queue_head;
+diff --git a/fs/f2fs/segment.h b/fs/f2fs/segment.h
+index babb29a1c0347..9728bdeccb2cc 100644
+--- a/fs/f2fs/segment.h
++++ b/fs/f2fs/segment.h
+@@ -602,8 +602,12 @@ static inline bool has_curseg_enough_space(struct f2fs_sb_info *sbi,
+ 	return true;
+ }
+ 
+-static inline bool has_not_enough_free_secs(struct f2fs_sb_info *sbi,
+-					int freed, int needed)
++/*
++ * calculate needed sections for dirty node/dentry
++ * and call has_curseg_enough_space
++ */
++static inline void __get_secs_required(struct f2fs_sb_info *sbi,
++		unsigned int *lower_p, unsigned int *upper_p, bool *curseg_p)
+ {
+ 	unsigned int total_node_blocks = get_pages(sbi, F2FS_DIRTY_NODES) +
+ 					get_pages(sbi, F2FS_DIRTY_DENTS) +
+@@ -613,20 +617,37 @@ static inline bool has_not_enough_free_secs(struct f2fs_sb_info *sbi,
+ 	unsigned int dent_secs = total_dent_blocks / CAP_BLKS_PER_SEC(sbi);
+ 	unsigned int node_blocks = total_node_blocks % CAP_BLKS_PER_SEC(sbi);
+ 	unsigned int dent_blocks = total_dent_blocks % CAP_BLKS_PER_SEC(sbi);
+-	unsigned int free, need_lower, need_upper;
++
++	if (lower_p)
++		*lower_p = node_secs + dent_secs;
++	if (upper_p)
++		*upper_p = node_secs + dent_secs +
++			(node_blocks ? 1 : 0) + (dent_blocks ? 1 : 0);
++	if (curseg_p)
++		*curseg_p = has_curseg_enough_space(sbi,
++				node_blocks, dent_blocks);
++}
++
++static inline bool has_not_enough_free_secs(struct f2fs_sb_info *sbi,
++					int freed, int needed)
++{
++	unsigned int free_secs, lower_secs, upper_secs;
++	bool curseg_space;
+ 
+ 	if (unlikely(is_sbi_flag_set(sbi, SBI_POR_DOING)))
+ 		return false;
+ 
+-	free = free_sections(sbi) + freed;
+-	need_lower = node_secs + dent_secs + reserved_sections(sbi) + needed;
+-	need_upper = need_lower + (node_blocks ? 1 : 0) + (dent_blocks ? 1 : 0);
++	__get_secs_required(sbi, &lower_secs, &upper_secs, &curseg_space);
++
++	free_secs = free_sections(sbi) + freed;
++	lower_secs += needed + reserved_sections(sbi);
++	upper_secs += needed + reserved_sections(sbi);
+ 
+-	if (free > need_upper)
++	if (free_secs > upper_secs)
+ 		return false;
+-	else if (free <= need_lower)
++	else if (free_secs <= lower_secs)
+ 		return true;
+-	return !has_curseg_enough_space(sbi, node_blocks, dent_blocks);
++	return !curseg_space;
+ }
+ 
+ static inline bool f2fs_is_checkpoint_ready(struct f2fs_sb_info *sbi)
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index 5c1c3a84501fe..333ea095c8c50 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -2274,7 +2274,7 @@ static int f2fs_remount(struct super_block *sb, int *flags, char *data)
+ 	if (f2fs_readonly(sb) && (*flags & SB_RDONLY))
+ 		goto skip;
+ 
+-	if (f2fs_sb_has_readonly(sbi) && !(*flags & SB_RDONLY)) {
++	if (f2fs_dev_is_readonly(sbi) && !(*flags & SB_RDONLY)) {
+ 		err = -EROFS;
+ 		goto restore_opts;
+ 	}
+diff --git a/fs/gfs2/glops.c b/fs/gfs2/glops.c
+index 4d99cc77a29b7..b65950e76be5a 100644
+--- a/fs/gfs2/glops.c
++++ b/fs/gfs2/glops.c
+@@ -396,6 +396,7 @@ static int inode_go_demote_ok(const struct gfs2_glock *gl)
+ 
+ static int gfs2_dinode_in(struct gfs2_inode *ip, const void *buf)
+ {
++	struct gfs2_sbd *sdp = GFS2_SB(&ip->i_inode);
+ 	const struct gfs2_dinode *str = buf;
+ 	struct timespec64 atime;
+ 	u16 height, depth;
+@@ -442,7 +443,7 @@ static int gfs2_dinode_in(struct gfs2_inode *ip, const void *buf)
+ 	/* i_diskflags and i_eattr must be set before gfs2_set_inode_flags() */
+ 	gfs2_set_inode_flags(inode);
+ 	height = be16_to_cpu(str->di_height);
+-	if (unlikely(height > GFS2_MAX_META_HEIGHT))
++	if (unlikely(height > sdp->sd_max_height))
+ 		goto corrupt;
+ 	ip->i_height = (u8)height;
+ 
+diff --git a/fs/hfsplus/inode.c b/fs/hfsplus/inode.c
+index abb91f5fae921..b21660475ac1c 100644
+--- a/fs/hfsplus/inode.c
++++ b/fs/hfsplus/inode.c
+@@ -511,7 +511,11 @@ int hfsplus_cat_read_inode(struct inode *inode, struct hfs_find_data *fd)
+ 	if (type == HFSPLUS_FOLDER) {
+ 		struct hfsplus_cat_folder *folder = &entry.folder;
+ 
+-		WARN_ON(fd->entrylength < sizeof(struct hfsplus_cat_folder));
++		if (fd->entrylength < sizeof(struct hfsplus_cat_folder)) {
++			pr_err("bad catalog folder entry\n");
++			res = -EIO;
++			goto out;
++		}
+ 		hfs_bnode_read(fd->bnode, &entry, fd->entryoffset,
+ 					sizeof(struct hfsplus_cat_folder));
+ 		hfsplus_get_perms(inode, &folder->permissions, 1);
+@@ -531,7 +535,11 @@ int hfsplus_cat_read_inode(struct inode *inode, struct hfs_find_data *fd)
+ 	} else if (type == HFSPLUS_FILE) {
+ 		struct hfsplus_cat_file *file = &entry.file;
+ 
+-		WARN_ON(fd->entrylength < sizeof(struct hfsplus_cat_file));
++		if (fd->entrylength < sizeof(struct hfsplus_cat_file)) {
++			pr_err("bad catalog file entry\n");
++			res = -EIO;
++			goto out;
++		}
+ 		hfs_bnode_read(fd->bnode, &entry, fd->entryoffset,
+ 					sizeof(struct hfsplus_cat_file));
+ 
+@@ -562,6 +570,7 @@ int hfsplus_cat_read_inode(struct inode *inode, struct hfs_find_data *fd)
+ 		pr_err("bad catalog entry used to create inode\n");
+ 		res = -EIO;
+ 	}
++out:
+ 	return res;
+ }
+ 
+@@ -570,6 +579,7 @@ int hfsplus_cat_write_inode(struct inode *inode)
+ 	struct inode *main_inode = inode;
+ 	struct hfs_find_data fd;
+ 	hfsplus_cat_entry entry;
++	int res = 0;
+ 
+ 	if (HFSPLUS_IS_RSRC(inode))
+ 		main_inode = HFSPLUS_I(inode)->rsrc_inode;
+@@ -588,7 +598,11 @@ int hfsplus_cat_write_inode(struct inode *inode)
+ 	if (S_ISDIR(main_inode->i_mode)) {
+ 		struct hfsplus_cat_folder *folder = &entry.folder;
+ 
+-		WARN_ON(fd.entrylength < sizeof(struct hfsplus_cat_folder));
++		if (fd.entrylength < sizeof(struct hfsplus_cat_folder)) {
++			pr_err("bad catalog folder entry\n");
++			res = -EIO;
++			goto out;
++		}
+ 		hfs_bnode_read(fd.bnode, &entry, fd.entryoffset,
+ 					sizeof(struct hfsplus_cat_folder));
+ 		/* simple node checks? */
+@@ -613,7 +627,11 @@ int hfsplus_cat_write_inode(struct inode *inode)
+ 	} else {
+ 		struct hfsplus_cat_file *file = &entry.file;
+ 
+-		WARN_ON(fd.entrylength < sizeof(struct hfsplus_cat_file));
++		if (fd.entrylength < sizeof(struct hfsplus_cat_file)) {
++			pr_err("bad catalog file entry\n");
++			res = -EIO;
++			goto out;
++		}
+ 		hfs_bnode_read(fd.bnode, &entry, fd.entryoffset,
+ 					sizeof(struct hfsplus_cat_file));
+ 		hfsplus_inode_write_fork(inode, &file->data_fork);
+@@ -634,7 +652,7 @@ int hfsplus_cat_write_inode(struct inode *inode)
+ 	set_bit(HFSPLUS_I_CAT_DIRTY, &HFSPLUS_I(inode)->flags);
+ out:
+ 	hfs_find_exit(&fd);
+-	return 0;
++	return res;
+ }
+ 
+ int hfsplus_fileattr_get(struct dentry *dentry, struct fileattr *fa)
+diff --git a/fs/ksmbd/connection.c b/fs/ksmbd/connection.c
+index 4ed379f9b1aa6..4882a812ea867 100644
+--- a/fs/ksmbd/connection.c
++++ b/fs/ksmbd/connection.c
+@@ -351,7 +351,8 @@ int ksmbd_conn_handler_loop(void *p)
+ 			break;
+ 
+ 		/* 4 for rfc1002 length field */
+-		size = pdu_size + 4;
++		/* 1 for implied bcc[0] */
++		size = pdu_size + 4 + 1;
+ 		conn->request_buf = kvmalloc(size, GFP_KERNEL);
+ 		if (!conn->request_buf)
+ 			break;
+diff --git a/fs/ksmbd/oplock.c b/fs/ksmbd/oplock.c
+index 2e54ded4d92c2..6d1ccb9998939 100644
+--- a/fs/ksmbd/oplock.c
++++ b/fs/ksmbd/oplock.c
+@@ -1449,11 +1449,12 @@ struct lease_ctx_info *parse_lease_state(void *open_req)
+  * smb2_find_context_vals() - find a particular context info in open request
+  * @open_req:	buffer containing smb2 file open(create) request
+  * @tag:	context name to search for
++ * @tag_len:	the length of tag
+  *
+  * Return:	pointer to requested context, NULL if @str context not found
+  *		or error pointer if name length is invalid.
+  */
+-struct create_context *smb2_find_context_vals(void *open_req, const char *tag)
++struct create_context *smb2_find_context_vals(void *open_req, const char *tag, int tag_len)
+ {
+ 	struct create_context *cc;
+ 	unsigned int next = 0;
+@@ -1492,7 +1493,7 @@ struct create_context *smb2_find_context_vals(void *open_req, const char *tag)
+ 			return ERR_PTR(-EINVAL);
+ 
+ 		name = (char *)cc + name_off;
+-		if (memcmp(name, tag, name_len) == 0)
++		if (name_len == tag_len && !memcmp(name, tag, name_len))
+ 			return cc;
+ 
+ 		remain_len -= next;
+diff --git a/fs/ksmbd/oplock.h b/fs/ksmbd/oplock.h
+index 09753448f7798..4b0fe6da76940 100644
+--- a/fs/ksmbd/oplock.h
++++ b/fs/ksmbd/oplock.h
+@@ -118,7 +118,7 @@ void create_durable_v2_rsp_buf(char *cc, struct ksmbd_file *fp);
+ void create_mxac_rsp_buf(char *cc, int maximal_access);
+ void create_disk_id_rsp_buf(char *cc, __u64 file_id, __u64 vol_id);
+ void create_posix_rsp_buf(char *cc, struct ksmbd_file *fp);
+-struct create_context *smb2_find_context_vals(void *open_req, const char *str);
++struct create_context *smb2_find_context_vals(void *open_req, const char *tag, int tag_len);
+ struct oplock_info *lookup_lease_in_table(struct ksmbd_conn *conn,
+ 					  char *lease_key);
+ int find_same_lease_key(struct ksmbd_session *sess, struct ksmbd_inode *ci,
+diff --git a/fs/ksmbd/smb2misc.c b/fs/ksmbd/smb2misc.c
+index fbdde426dd01d..0ffe663b75906 100644
+--- a/fs/ksmbd/smb2misc.c
++++ b/fs/ksmbd/smb2misc.c
+@@ -416,8 +416,11 @@ int ksmbd_smb2_check_message(struct ksmbd_work *work)
+ 
+ 		/*
+ 		 * Allow a message that padded to 8byte boundary.
++		 * Linux 4.19.217 with smb 3.0.2 are sometimes
++		 * sending messages where the cls_len is exactly
++		 * 8 bytes less than len.
+ 		 */
+-		if (clc_len < len && (len - clc_len) < 8)
++		if (clc_len < len && (len - clc_len) <= 8)
+ 			goto validate_credit;
+ 
+ 		pr_err_ratelimited(
+diff --git a/fs/ksmbd/smb2pdu.c b/fs/ksmbd/smb2pdu.c
+index da66abc7c4e18..cca593e340967 100644
+--- a/fs/ksmbd/smb2pdu.c
++++ b/fs/ksmbd/smb2pdu.c
+@@ -1384,7 +1384,7 @@ static struct ksmbd_user *session_user(struct ksmbd_conn *conn,
+ 	struct authenticate_message *authblob;
+ 	struct ksmbd_user *user;
+ 	char *name;
+-	unsigned int auth_msg_len, name_off, name_len, secbuf_len;
++	unsigned int name_off, name_len, secbuf_len;
+ 
+ 	secbuf_len = le16_to_cpu(req->SecurityBufferLength);
+ 	if (secbuf_len < sizeof(struct authenticate_message)) {
+@@ -1394,9 +1394,8 @@ static struct ksmbd_user *session_user(struct ksmbd_conn *conn,
+ 	authblob = user_authblob(conn, req);
+ 	name_off = le32_to_cpu(authblob->UserName.BufferOffset);
+ 	name_len = le16_to_cpu(authblob->UserName.Length);
+-	auth_msg_len = le16_to_cpu(req->SecurityBufferOffset) + secbuf_len;
+ 
+-	if (auth_msg_len < (u64)name_off + name_len)
++	if (secbuf_len < (u64)name_off + name_len)
+ 		return NULL;
+ 
+ 	name = smb_strndup_from_utf16((const char *)authblob + name_off,
+@@ -2492,7 +2491,7 @@ static int smb2_create_sd_buffer(struct ksmbd_work *work,
+ 		return -ENOENT;
+ 
+ 	/* Parse SD BUFFER create contexts */
+-	context = smb2_find_context_vals(req, SMB2_CREATE_SD_BUFFER);
++	context = smb2_find_context_vals(req, SMB2_CREATE_SD_BUFFER, 4);
+ 	if (!context)
+ 		return -ENOENT;
+ 	else if (IS_ERR(context))
+@@ -2694,7 +2693,7 @@ int smb2_open(struct ksmbd_work *work)
+ 
+ 	if (req->CreateContextsOffset) {
+ 		/* Parse non-durable handle create contexts */
+-		context = smb2_find_context_vals(req, SMB2_CREATE_EA_BUFFER);
++		context = smb2_find_context_vals(req, SMB2_CREATE_EA_BUFFER, 4);
+ 		if (IS_ERR(context)) {
+ 			rc = PTR_ERR(context);
+ 			goto err_out1;
+@@ -2714,7 +2713,7 @@ int smb2_open(struct ksmbd_work *work)
+ 		}
+ 
+ 		context = smb2_find_context_vals(req,
+-						 SMB2_CREATE_QUERY_MAXIMAL_ACCESS_REQUEST);
++						 SMB2_CREATE_QUERY_MAXIMAL_ACCESS_REQUEST, 4);
+ 		if (IS_ERR(context)) {
+ 			rc = PTR_ERR(context);
+ 			goto err_out1;
+@@ -2725,7 +2724,7 @@ int smb2_open(struct ksmbd_work *work)
+ 		}
+ 
+ 		context = smb2_find_context_vals(req,
+-						 SMB2_CREATE_TIMEWARP_REQUEST);
++						 SMB2_CREATE_TIMEWARP_REQUEST, 4);
+ 		if (IS_ERR(context)) {
+ 			rc = PTR_ERR(context);
+ 			goto err_out1;
+@@ -2737,7 +2736,7 @@ int smb2_open(struct ksmbd_work *work)
+ 
+ 		if (tcon->posix_extensions) {
+ 			context = smb2_find_context_vals(req,
+-							 SMB2_CREATE_TAG_POSIX);
++							 SMB2_CREATE_TAG_POSIX, 16);
+ 			if (IS_ERR(context)) {
+ 				rc = PTR_ERR(context);
+ 				goto err_out1;
+@@ -3136,7 +3135,7 @@ int smb2_open(struct ksmbd_work *work)
+ 		struct create_alloc_size_req *az_req;
+ 
+ 		az_req = (struct create_alloc_size_req *)smb2_find_context_vals(req,
+-					SMB2_CREATE_ALLOCATION_SIZE);
++					SMB2_CREATE_ALLOCATION_SIZE, 4);
+ 		if (IS_ERR(az_req)) {
+ 			rc = PTR_ERR(az_req);
+ 			goto err_out;
+@@ -3163,7 +3162,7 @@ int smb2_open(struct ksmbd_work *work)
+ 					    err);
+ 		}
+ 
+-		context = smb2_find_context_vals(req, SMB2_CREATE_QUERY_ON_DISK_ID);
++		context = smb2_find_context_vals(req, SMB2_CREATE_QUERY_ON_DISK_ID, 4);
+ 		if (IS_ERR(context)) {
+ 			rc = PTR_ERR(context);
+ 			goto err_out;
+diff --git a/fs/nilfs2/inode.c b/fs/nilfs2/inode.c
+index 1310d2d5feb38..a8ce522ac7479 100644
+--- a/fs/nilfs2/inode.c
++++ b/fs/nilfs2/inode.c
+@@ -917,6 +917,7 @@ void nilfs_evict_inode(struct inode *inode)
+ 	struct nilfs_transaction_info ti;
+ 	struct super_block *sb = inode->i_sb;
+ 	struct nilfs_inode_info *ii = NILFS_I(inode);
++	struct the_nilfs *nilfs;
+ 	int ret;
+ 
+ 	if (inode->i_nlink || !ii->i_root || unlikely(is_bad_inode(inode))) {
+@@ -929,6 +930,23 @@ void nilfs_evict_inode(struct inode *inode)
+ 
+ 	truncate_inode_pages_final(&inode->i_data);
+ 
++	nilfs = sb->s_fs_info;
++	if (unlikely(sb_rdonly(sb) || !nilfs->ns_writer)) {
++		/*
++		 * If this inode is about to be disposed after the file system
++		 * has been degraded to read-only due to file system corruption
++		 * or after the writer has been detached, do not make any
++		 * changes that cause writes, just clear it.
++		 * Do this check after read-locking ns_segctor_sem by
++		 * nilfs_transaction_begin() in order to avoid a race with
++		 * the writer detach operation.
++		 */
++		clear_inode(inode);
++		nilfs_clear_inode(inode);
++		nilfs_transaction_abort(sb);
++		return;
++	}
++
+ 	/* TODO: some of the following operations may fail.  */
+ 	nilfs_truncate_bmap(ii, 0);
+ 	nilfs_mark_inode_dirty(inode);
+diff --git a/fs/ntfs3/frecord.c b/fs/ntfs3/frecord.c
+index 7d0473da12c33..9e7dfee303e8a 100644
+--- a/fs/ntfs3/frecord.c
++++ b/fs/ntfs3/frecord.c
+@@ -102,7 +102,7 @@ void ni_clear(struct ntfs_inode *ni)
+ {
+ 	struct rb_node *node;
+ 
+-	if (!ni->vfs_inode.i_nlink && is_rec_inuse(ni->mi.mrec))
++	if (!ni->vfs_inode.i_nlink && ni->mi.mrec && is_rec_inuse(ni->mi.mrec))
+ 		ni_delete_all(ni);
+ 
+ 	al_destroy(ni);
+@@ -3258,6 +3258,9 @@ int ni_write_inode(struct inode *inode, int sync, const char *hint)
+ 		return 0;
+ 	}
+ 
++	if (!ni->mi.mrec)
++		goto out;
++
+ 	if (is_rec_inuse(ni->mi.mrec) &&
+ 	    !(sbi->flags & NTFS_FLAGS_LOG_REPLAYING) && inode->i_nlink) {
+ 		bool modified = false;
+diff --git a/fs/ntfs3/fsntfs.c b/fs/ntfs3/fsntfs.c
+index 24c9aeb5a49e0..2c0ce364808a9 100644
+--- a/fs/ntfs3/fsntfs.c
++++ b/fs/ntfs3/fsntfs.c
+@@ -1683,6 +1683,7 @@ struct ntfs_inode *ntfs_new_inode(struct ntfs_sb_info *sbi, CLST rno, bool dir)
+ 
+ out:
+ 	if (err) {
++		make_bad_inode(inode);
+ 		iput(inode);
+ 		ni = ERR_PTR(err);
+ 	}
+diff --git a/fs/ntfs3/index.c b/fs/ntfs3/index.c
+index 7a1e01a2ed9ae..f716487ec8a05 100644
+--- a/fs/ntfs3/index.c
++++ b/fs/ntfs3/index.c
+@@ -994,6 +994,7 @@ struct INDEX_ROOT *indx_get_root(struct ntfs_index *indx, struct ntfs_inode *ni,
+ 	struct ATTR_LIST_ENTRY *le = NULL;
+ 	struct ATTRIB *a;
+ 	const struct INDEX_NAMES *in = &s_index_names[indx->type];
++	struct INDEX_ROOT *root = NULL;
+ 
+ 	a = ni_find_attr(ni, NULL, &le, ATTR_ROOT, in->name, in->name_len, NULL,
+ 			 mi);
+@@ -1003,7 +1004,15 @@ struct INDEX_ROOT *indx_get_root(struct ntfs_index *indx, struct ntfs_inode *ni,
+ 	if (attr)
+ 		*attr = a;
+ 
+-	return resident_data_ex(a, sizeof(struct INDEX_ROOT));
++	root = resident_data_ex(a, sizeof(struct INDEX_ROOT));
++
++	/* length check */
++	if (root && offsetof(struct INDEX_ROOT, ihdr) + le32_to_cpu(root->ihdr.used) >
++			le32_to_cpu(a->res.data_size)) {
++		return NULL;
++	}
++
++	return root;
+ }
+ 
+ static int indx_write(struct ntfs_index *indx, struct ntfs_inode *ni,
+diff --git a/fs/ntfs3/inode.c b/fs/ntfs3/inode.c
+index ce6bb3bd86b6e..059f288784580 100644
+--- a/fs/ntfs3/inode.c
++++ b/fs/ntfs3/inode.c
+@@ -100,6 +100,12 @@ static struct inode *ntfs_read_mft(struct inode *inode,
+ 	/* Record should contain $I30 root. */
+ 	is_dir = rec->flags & RECORD_FLAG_DIR;
+ 
++	/* MFT_REC_MFT is not a dir */
++	if (is_dir && ino == MFT_REC_MFT) {
++		err = -EINVAL;
++		goto out;
++	}
++
+ 	inode->i_generation = le16_to_cpu(rec->seq);
+ 
+ 	/* Enumerate all struct Attributes MFT. */
+diff --git a/fs/ntfs3/record.c b/fs/ntfs3/record.c
+index defce6a5c8e1b..abfe004774c03 100644
+--- a/fs/ntfs3/record.c
++++ b/fs/ntfs3/record.c
+@@ -220,11 +220,6 @@ struct ATTRIB *mi_enum_attr(struct mft_inode *mi, struct ATTRIB *attr)
+ 			return NULL;
+ 		}
+ 
+-		if (off + asize < off) {
+-			/* overflow check */
+-			return NULL;
+-		}
+-
+ 		attr = Add2Ptr(attr, asize);
+ 		off += asize;
+ 	}
+@@ -247,8 +242,8 @@ struct ATTRIB *mi_enum_attr(struct mft_inode *mi, struct ATTRIB *attr)
+ 	if ((t32 & 0xf) || (t32 > 0x100))
+ 		return NULL;
+ 
+-	/* Check boundary. */
+-	if (off + asize > used)
++	/* Check overflow and boundary. */
++	if (off + asize < off || off + asize > used)
+ 		return NULL;
+ 
+ 	/* Check size of attribute. */
+diff --git a/fs/open.c b/fs/open.c
+index 4401a73d4032d..4478adcc4f3a0 100644
+--- a/fs/open.c
++++ b/fs/open.c
+@@ -1196,13 +1196,21 @@ inline int build_open_flags(const struct open_how *how, struct open_flags *op)
+ 	}
+ 
+ 	/*
+-	 * In order to ensure programs get explicit errors when trying to use
+-	 * O_TMPFILE on old kernels, O_TMPFILE is implemented such that it
+-	 * looks like (O_DIRECTORY|O_RDWR & ~O_CREAT) to old kernels. But we
+-	 * have to require userspace to explicitly set it.
++	 * Block bugs where O_DIRECTORY | O_CREAT created regular files.
++	 * Note, that blocking O_DIRECTORY | O_CREAT here also protects
++	 * O_TMPFILE below which requires O_DIRECTORY being raised.
+ 	 */
++	if ((flags & (O_DIRECTORY | O_CREAT)) == (O_DIRECTORY | O_CREAT))
++		return -EINVAL;
++
++	/* Now handle the creative implementation of O_TMPFILE. */
+ 	if (flags & __O_TMPFILE) {
+-		if ((flags & O_TMPFILE_MASK) != O_TMPFILE)
++		/*
++		 * In order to ensure programs get explicit errors when trying
++		 * to use O_TMPFILE on old kernels we enforce that O_DIRECTORY
++		 * is raised alongside __O_TMPFILE.
++		 */
++		if (!(flags & O_DIRECTORY))
+ 			return -EINVAL;
+ 		if (!(acc_mode & MAY_WRITE))
+ 			return -EINVAL;
+diff --git a/fs/statfs.c b/fs/statfs.c
+index 0ba34c1355932..96d1c3edf289c 100644
+--- a/fs/statfs.c
++++ b/fs/statfs.c
+@@ -130,6 +130,7 @@ static int do_statfs_native(struct kstatfs *st, struct statfs __user *p)
+ 	if (sizeof(buf) == sizeof(*st))
+ 		memcpy(&buf, st, sizeof(*st));
+ 	else {
++		memset(&buf, 0, sizeof(buf));
+ 		if (sizeof buf.f_blocks == 4) {
+ 			if ((st->f_blocks | st->f_bfree | st->f_bavail |
+ 			     st->f_bsize | st->f_frsize) &
+@@ -158,7 +159,6 @@ static int do_statfs_native(struct kstatfs *st, struct statfs __user *p)
+ 		buf.f_namelen = st->f_namelen;
+ 		buf.f_frsize = st->f_frsize;
+ 		buf.f_flags = st->f_flags;
+-		memset(buf.f_spare, 0, sizeof(buf.f_spare));
+ 	}
+ 	if (copy_to_user(p, &buf, sizeof(buf)))
+ 		return -EFAULT;
+@@ -171,6 +171,7 @@ static int do_statfs64(struct kstatfs *st, struct statfs64 __user *p)
+ 	if (sizeof(buf) == sizeof(*st))
+ 		memcpy(&buf, st, sizeof(*st));
+ 	else {
++		memset(&buf, 0, sizeof(buf));
+ 		buf.f_type = st->f_type;
+ 		buf.f_bsize = st->f_bsize;
+ 		buf.f_blocks = st->f_blocks;
+@@ -182,7 +183,6 @@ static int do_statfs64(struct kstatfs *st, struct statfs64 __user *p)
+ 		buf.f_namelen = st->f_namelen;
+ 		buf.f_frsize = st->f_frsize;
+ 		buf.f_flags = st->f_flags;
+-		memset(buf.f_spare, 0, sizeof(buf.f_spare));
+ 	}
+ 	if (copy_to_user(p, &buf, sizeof(buf)))
+ 		return -EFAULT;
+diff --git a/include/drm/display/drm_dp.h b/include/drm/display/drm_dp.h
+index 4545ed6109584..b8b7f990d67f6 100644
+--- a/include/drm/display/drm_dp.h
++++ b/include/drm/display/drm_dp.h
+@@ -286,8 +286,8 @@
+ 
+ #define DP_DSC_MAX_BITS_PER_PIXEL_HI        0x068   /* eDP 1.4 */
+ # define DP_DSC_MAX_BITS_PER_PIXEL_HI_MASK  (0x3 << 0)
+-# define DP_DSC_MAX_BPP_DELTA_VERSION_MASK  0x06
+-# define DP_DSC_MAX_BPP_DELTA_AVAILABILITY  0x08
++# define DP_DSC_MAX_BPP_DELTA_VERSION_MASK  (0x3 << 5)	/* eDP 1.5 & DP 2.0 */
++# define DP_DSC_MAX_BPP_DELTA_AVAILABILITY  (1 << 7)	/* eDP 1.5 & DP 2.0 */
+ 
+ #define DP_DSC_DEC_COLOR_FORMAT_CAP         0x069
+ # define DP_DSC_RGB                         (1 << 0)
+diff --git a/include/linux/arm-smccc.h b/include/linux/arm-smccc.h
+index 220c8c60e021a..f196c19f8e55c 100644
+--- a/include/linux/arm-smccc.h
++++ b/include/linux/arm-smccc.h
+@@ -226,6 +226,24 @@ void __init arm_smccc_version_init(u32 version, enum arm_smccc_conduit conduit);
+ 
+ extern u64 smccc_has_sve_hint;
+ 
++/**
++ * arm_smccc_get_soc_id_version()
++ *
++ * Returns the SOC ID version.
++ *
++ * When ARM_SMCCC_ARCH_SOC_ID is not present, returns SMCCC_RET_NOT_SUPPORTED.
++ */
++s32 arm_smccc_get_soc_id_version(void);
++
++/**
++ * arm_smccc_get_soc_id_revision()
++ *
++ * Returns the SOC ID revision.
++ *
++ * When ARM_SMCCC_ARCH_SOC_ID is not present, returns SMCCC_RET_NOT_SUPPORTED.
++ */
++s32 arm_smccc_get_soc_id_revision(void);
++
+ /**
+  * struct arm_smccc_res - Result from SMC/HVC call
+  * @a0-a3 result values from registers 0 to 3
+diff --git a/include/linux/cpuhotplug.h b/include/linux/cpuhotplug.h
+index 5b2f8147d1ae3..0f1001dca0e00 100644
+--- a/include/linux/cpuhotplug.h
++++ b/include/linux/cpuhotplug.h
+@@ -163,7 +163,6 @@ enum cpuhp_state {
+ 	CPUHP_AP_PERF_X86_CSTATE_STARTING,
+ 	CPUHP_AP_PERF_XTENSA_STARTING,
+ 	CPUHP_AP_MIPS_OP_LOONGSON3_STARTING,
+-	CPUHP_AP_ARM_SDEI_STARTING,
+ 	CPUHP_AP_ARM_VFP_STARTING,
+ 	CPUHP_AP_ARM64_DEBUG_MONITORS_STARTING,
+ 	CPUHP_AP_PERF_ARM_HW_BREAKPOINT_STARTING,
+diff --git a/include/linux/dim.h b/include/linux/dim.h
+index 6c5733981563e..f343bc9aa2ec9 100644
+--- a/include/linux/dim.h
++++ b/include/linux/dim.h
+@@ -236,8 +236,9 @@ void dim_park_tired(struct dim *dim);
+  *
+  * Calculate the delta between two samples (in data rates).
+  * Takes into consideration counter wrap-around.
++ * Returned boolean indicates whether curr_stats are reliable.
+  */
+-void dim_calc_stats(struct dim_sample *start, struct dim_sample *end,
++bool dim_calc_stats(struct dim_sample *start, struct dim_sample *end,
+ 		    struct dim_stats *curr_stats);
+ 
+ /**
+diff --git a/include/linux/if_vlan.h b/include/linux/if_vlan.h
+index 6864b89ef8681..7ad09082f56c3 100644
+--- a/include/linux/if_vlan.h
++++ b/include/linux/if_vlan.h
+@@ -628,6 +628,23 @@ static inline __be16 vlan_get_protocol(const struct sk_buff *skb)
+ 	return __vlan_get_protocol(skb, skb->protocol, NULL);
+ }
+ 
++/* This version of __vlan_get_protocol() also pulls mac header in skb->head */
++static inline __be16 vlan_get_protocol_and_depth(struct sk_buff *skb,
++						 __be16 type, int *depth)
++{
++	int maclen;
++
++	type = __vlan_get_protocol(skb, type, &maclen);
++
++	if (type) {
++		if (!pskb_may_pull(skb, maclen))
++			type = 0;
++		else if (depth)
++			*depth = maclen;
++	}
++	return type;
++}
++
+ /* A getter for the SKB protocol field which will handle VLAN tags consistently
+  * whether VLAN acceleration is enabled or not.
+  */
+diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
+index c35f04f636f15..7db9f960221d3 100644
+--- a/include/linux/netdevice.h
++++ b/include/linux/netdevice.h
+@@ -2463,6 +2463,7 @@ static inline
+ struct netdev_queue *netdev_get_tx_queue(const struct net_device *dev,
+ 					 unsigned int index)
+ {
++	DEBUG_NET_WARN_ON_ONCE(index >= dev->num_tx_queues);
+ 	return &dev->_tx[index];
+ }
+ 
+diff --git a/include/linux/sched/task_stack.h b/include/linux/sched/task_stack.h
+index 5e799a47431e8..f158b025c1750 100644
+--- a/include/linux/sched/task_stack.h
++++ b/include/linux/sched/task_stack.h
+@@ -23,7 +23,7 @@ static __always_inline void *task_stack_page(const struct task_struct *task)
+ 
+ #define setup_thread_stack(new,old)	do { } while(0)
+ 
+-static inline unsigned long *end_of_stack(const struct task_struct *task)
++static __always_inline unsigned long *end_of_stack(const struct task_struct *task)
+ {
+ #ifdef CONFIG_STACK_GROWSUP
+ 	return (unsigned long *)((unsigned long)task->stack + THREAD_SIZE) - 1;
+diff --git a/include/linux/sunrpc/svc_rdma.h b/include/linux/sunrpc/svc_rdma.h
+index 24aa159d29a7f..fbc4bd423b355 100644
+--- a/include/linux/sunrpc/svc_rdma.h
++++ b/include/linux/sunrpc/svc_rdma.h
+@@ -176,7 +176,7 @@ extern struct svc_rdma_recv_ctxt *
+ extern void svc_rdma_recv_ctxt_put(struct svcxprt_rdma *rdma,
+ 				   struct svc_rdma_recv_ctxt *ctxt);
+ extern void svc_rdma_flush_recv_queues(struct svcxprt_rdma *rdma);
+-extern void svc_rdma_release_rqst(struct svc_rqst *rqstp);
++extern void svc_rdma_release_ctxt(struct svc_xprt *xprt, void *ctxt);
+ extern int svc_rdma_recvfrom(struct svc_rqst *);
+ 
+ /* svc_rdma_rw.c */
+diff --git a/include/linux/sunrpc/svc_xprt.h b/include/linux/sunrpc/svc_xprt.h
+index 775368802762e..f725a3ac3406a 100644
+--- a/include/linux/sunrpc/svc_xprt.h
++++ b/include/linux/sunrpc/svc_xprt.h
+@@ -23,7 +23,7 @@ struct svc_xprt_ops {
+ 	int		(*xpo_sendto)(struct svc_rqst *);
+ 	int		(*xpo_result_payload)(struct svc_rqst *, unsigned int,
+ 					      unsigned int);
+-	void		(*xpo_release_rqst)(struct svc_rqst *);
++	void		(*xpo_release_ctxt)(struct svc_xprt *xprt, void *ctxt);
+ 	void		(*xpo_detach)(struct svc_xprt *);
+ 	void		(*xpo_free)(struct svc_xprt *);
+ 	void		(*xpo_kill_temp_xprt)(struct svc_xprt *);
+diff --git a/include/net/bluetooth/hci.h b/include/net/bluetooth/hci.h
+index 400f8a7d0c3fe..07df96c47ef4f 100644
+--- a/include/net/bluetooth/hci.h
++++ b/include/net/bluetooth/hci.h
+@@ -294,6 +294,21 @@ enum {
+ 	 * during the hdev->setup vendor callback.
+ 	 */
+ 	HCI_QUIRK_BROKEN_MWS_TRANSPORT_CONFIG,
++
++	/* When this quirk is set, max_page for local extended features
++	 * is set to 1, even if controller reports higher number. Some
++	 * controllers (e.g. RTL8723CS) report more pages, but they
++	 * don't actually support features declared there.
++	 */
++	HCI_QUIRK_BROKEN_LOCAL_EXT_FEATURES_PAGE_2,
++
++	/*
++	 * When this quirk is set, the HCI_OP_LE_SET_RPA_TIMEOUT command is
++	 * skipped during initialization. This is required for the Actions
++	 * Semiconductor ATS2851 based controllers, which erroneously claims
++	 * to support it.
++	 */
++	HCI_QUIRK_BROKEN_SET_RPA_TIMEOUT,
+ };
+ 
+ /* HCI device flags */
+diff --git a/include/net/bonding.h b/include/net/bonding.h
+index c3843239517d5..2d034e07b796c 100644
+--- a/include/net/bonding.h
++++ b/include/net/bonding.h
+@@ -233,7 +233,7 @@ struct bonding {
+ 	 */
+ 	spinlock_t mode_lock;
+ 	spinlock_t stats_lock;
+-	u8	 send_peer_notif;
++	u32	 send_peer_notif;
+ 	u8       igmp_retrans;
+ #ifdef CONFIG_PROC_FS
+ 	struct   proc_dir_entry *proc_entry;
+diff --git a/include/net/ip_vs.h b/include/net/ip_vs.h
+index 6d71a5ff52dfd..e20f1f92066d1 100644
+--- a/include/net/ip_vs.h
++++ b/include/net/ip_vs.h
+@@ -630,8 +630,10 @@ struct ip_vs_conn {
+ 	 */
+ 	struct ip_vs_app        *app;           /* bound ip_vs_app object */
+ 	void                    *app_data;      /* Application private data */
+-	struct ip_vs_seq        in_seq;         /* incoming seq. struct */
+-	struct ip_vs_seq        out_seq;        /* outgoing seq. struct */
++	struct_group(sync_conn_opt,
++		struct ip_vs_seq  in_seq;       /* incoming seq. struct */
++		struct ip_vs_seq  out_seq;      /* outgoing seq. struct */
++	);
+ 
+ 	const struct ip_vs_pe	*pe;
+ 	char			*pe_data;
+diff --git a/include/net/pkt_sched.h b/include/net/pkt_sched.h
+index 2016839991a42..fc688c7e95951 100644
+--- a/include/net/pkt_sched.h
++++ b/include/net/pkt_sched.h
+@@ -167,6 +167,7 @@ struct tc_mqprio_caps {
+ struct tc_mqprio_qopt_offload {
+ 	/* struct tc_mqprio_qopt must always be the first element */
+ 	struct tc_mqprio_qopt qopt;
++	struct netlink_ext_ack *extack;
+ 	u16 mode;
+ 	u16 shaper;
+ 	u32 flags;
+@@ -194,6 +195,7 @@ struct tc_taprio_sched_entry {
+ 
+ struct tc_taprio_qopt_offload {
+ 	struct tc_mqprio_qopt_offload mqprio;
++	struct netlink_ext_ack *extack;
+ 	u8 enable;
+ 	ktime_t base_time;
+ 	u64 cycle_time;
+diff --git a/include/net/sock.h b/include/net/sock.h
+index 573f2bf7e0de7..9cd0354221507 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -2718,7 +2718,7 @@ static inline void sock_recv_cmsgs(struct msghdr *msg, struct sock *sk,
+ 		__sock_recv_cmsgs(msg, sk, skb);
+ 	else if (unlikely(sock_flag(sk, SOCK_TIMESTAMP)))
+ 		sock_write_timestamp(sk, skb->tstamp);
+-	else if (unlikely(sk->sk_stamp == SK_DEFAULT_STAMP))
++	else if (unlikely(sock_read_timestamp(sk) == SK_DEFAULT_STAMP))
+ 		sock_write_timestamp(sk, 0);
+ }
+ 
+diff --git a/include/uapi/asm-generic/fcntl.h b/include/uapi/asm-generic/fcntl.h
+index 1ecdb911add8d..80f37a0d40d7d 100644
+--- a/include/uapi/asm-generic/fcntl.h
++++ b/include/uapi/asm-generic/fcntl.h
+@@ -91,7 +91,6 @@
+ 
+ /* a horrid kludge trying to make sure that this will fail on old kernels */
+ #define O_TMPFILE (__O_TMPFILE | O_DIRECTORY)
+-#define O_TMPFILE_MASK (__O_TMPFILE | O_DIRECTORY | O_CREAT)      
+ 
+ #ifndef O_NDELAY
+ #define O_NDELAY	O_NONBLOCK
+diff --git a/kernel/bpf/bpf_local_storage.c b/kernel/bpf/bpf_local_storage.c
+index 35f4138a54dc1..58da17ae51241 100644
+--- a/kernel/bpf/bpf_local_storage.c
++++ b/kernel/bpf/bpf_local_storage.c
+@@ -51,11 +51,21 @@ owner_storage(struct bpf_local_storage_map *smap, void *owner)
+ 	return map->ops->map_owner_storage_ptr(owner);
+ }
+ 
++static bool selem_linked_to_storage_lockless(const struct bpf_local_storage_elem *selem)
++{
++	return !hlist_unhashed_lockless(&selem->snode);
++}
++
+ static bool selem_linked_to_storage(const struct bpf_local_storage_elem *selem)
+ {
+ 	return !hlist_unhashed(&selem->snode);
+ }
+ 
++static bool selem_linked_to_map_lockless(const struct bpf_local_storage_elem *selem)
++{
++	return !hlist_unhashed_lockless(&selem->map_node);
++}
++
+ static bool selem_linked_to_map(const struct bpf_local_storage_elem *selem)
+ {
+ 	return !hlist_unhashed(&selem->map_node);
+@@ -174,7 +184,7 @@ static void __bpf_selem_unlink_storage(struct bpf_local_storage_elem *selem,
+ 	bool free_local_storage = false;
+ 	unsigned long flags;
+ 
+-	if (unlikely(!selem_linked_to_storage(selem)))
++	if (unlikely(!selem_linked_to_storage_lockless(selem)))
+ 		/* selem has already been unlinked from sk */
+ 		return;
+ 
+@@ -208,7 +218,7 @@ void bpf_selem_unlink_map(struct bpf_local_storage_elem *selem)
+ 	struct bpf_local_storage_map_bucket *b;
+ 	unsigned long flags;
+ 
+-	if (unlikely(!selem_linked_to_map(selem)))
++	if (unlikely(!selem_linked_to_map_lockless(selem)))
+ 		/* selem has already be unlinked from smap */
+ 		return;
+ 
+@@ -420,7 +430,7 @@ bpf_local_storage_update(void *owner, struct bpf_local_storage_map *smap,
+ 		err = check_flags(old_sdata, map_flags);
+ 		if (err)
+ 			return ERR_PTR(err);
+-		if (old_sdata && selem_linked_to_storage(SELEM(old_sdata))) {
++		if (old_sdata && selem_linked_to_storage_lockless(SELEM(old_sdata))) {
+ 			copy_map_value_locked(&smap->map, old_sdata->data,
+ 					      value, false);
+ 			return old_sdata;
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 64600acbb4e76..f4b267082cbf9 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -17579,6 +17579,10 @@ BTF_ID(func, migrate_enable)
+ #if !defined CONFIG_PREEMPT_RCU && !defined CONFIG_TINY_RCU
+ BTF_ID(func, rcu_read_unlock_strict)
+ #endif
++#if defined(CONFIG_DEBUG_PREEMPT) || defined(CONFIG_TRACE_PREEMPT_TOGGLE)
++BTF_ID(func, preempt_count_add)
++BTF_ID(func, preempt_count_sub)
++#endif
+ BTF_SET_END(btf_id_deny)
+ 
+ static bool can_be_sleepable(struct bpf_prog *prog)
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index 68baa8194d9f8..db016e4189319 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -10150,8 +10150,20 @@ void perf_tp_event(u16 event_type, u64 count, void *record, int entry_size,
+ 	perf_trace_buf_update(record, event_type);
+ 
+ 	hlist_for_each_entry_rcu(event, head, hlist_entry) {
+-		if (perf_tp_event_match(event, &data, regs))
++		if (perf_tp_event_match(event, &data, regs)) {
+ 			perf_swevent_event(event, count, &data, regs);
++
++			/*
++			 * Here use the same on-stack perf_sample_data,
++			 * some members in data are event-specific and
++			 * need to be re-computed for different sweveents.
++			 * Re-initialize data->sample_flags safely to avoid
++			 * the problem that next event skips preparing data
++			 * because data->sample_flags is set.
++			 */
++			perf_sample_data_init(&data, 0, 0);
++			perf_sample_save_raw_data(&data, &raw);
++		}
+ 	}
+ 
+ 	/*
+diff --git a/kernel/rcu/refscale.c b/kernel/rcu/refscale.c
+index afa3e1a2f6902..1970ce5f22d40 100644
+--- a/kernel/rcu/refscale.c
++++ b/kernel/rcu/refscale.c
+@@ -1031,7 +1031,7 @@ ref_scale_cleanup(void)
+ static int
+ ref_scale_shutdown(void *arg)
+ {
+-	wait_event(shutdown_wq, shutdown_start);
++	wait_event_idle(shutdown_wq, shutdown_start);
+ 
+ 	smp_mb(); // Wake before output.
+ 	ref_scale_cleanup();
+diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h
+index 249c2967d9e6c..54e8fb258c98d 100644
+--- a/kernel/rcu/tree_exp.h
++++ b/kernel/rcu/tree_exp.h
+@@ -802,9 +802,11 @@ static int rcu_print_task_exp_stall(struct rcu_node *rnp)
+ 	int ndetected = 0;
+ 	struct task_struct *t;
+ 
+-	if (!READ_ONCE(rnp->exp_tasks))
+-		return 0;
+ 	raw_spin_lock_irqsave_rcu_node(rnp, flags);
++	if (!rnp->exp_tasks) {
++		raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
++		return 0;
++	}
+ 	t = list_entry(rnp->exp_tasks->prev,
+ 		       struct task_struct, rcu_node_entry);
+ 	list_for_each_entry_continue(t, &rnp->blkd_tasks, rcu_node_entry) {
+diff --git a/kernel/time/tick-broadcast.c b/kernel/time/tick-broadcast.c
+index 93bf2b4e47e56..771d1e040303b 100644
+--- a/kernel/time/tick-broadcast.c
++++ b/kernel/time/tick-broadcast.c
+@@ -35,14 +35,15 @@ static __cacheline_aligned_in_smp DEFINE_RAW_SPINLOCK(tick_broadcast_lock);
+ #ifdef CONFIG_TICK_ONESHOT
+ static DEFINE_PER_CPU(struct clock_event_device *, tick_oneshot_wakeup_device);
+ 
+-static void tick_broadcast_setup_oneshot(struct clock_event_device *bc);
++static void tick_broadcast_setup_oneshot(struct clock_event_device *bc, bool from_periodic);
+ static void tick_broadcast_clear_oneshot(int cpu);
+ static void tick_resume_broadcast_oneshot(struct clock_event_device *bc);
+ # ifdef CONFIG_HOTPLUG_CPU
+ static void tick_broadcast_oneshot_offline(unsigned int cpu);
+ # endif
+ #else
+-static inline void tick_broadcast_setup_oneshot(struct clock_event_device *bc) { BUG(); }
++static inline void
++tick_broadcast_setup_oneshot(struct clock_event_device *bc, bool from_periodic) { BUG(); }
+ static inline void tick_broadcast_clear_oneshot(int cpu) { }
+ static inline void tick_resume_broadcast_oneshot(struct clock_event_device *bc) { }
+ # ifdef CONFIG_HOTPLUG_CPU
+@@ -264,7 +265,7 @@ int tick_device_uses_broadcast(struct clock_event_device *dev, int cpu)
+ 		if (tick_broadcast_device.mode == TICKDEV_MODE_PERIODIC)
+ 			tick_broadcast_start_periodic(bc);
+ 		else
+-			tick_broadcast_setup_oneshot(bc);
++			tick_broadcast_setup_oneshot(bc, false);
+ 		ret = 1;
+ 	} else {
+ 		/*
+@@ -500,7 +501,7 @@ void tick_broadcast_control(enum tick_broadcast_mode mode)
+ 			if (tick_broadcast_device.mode == TICKDEV_MODE_PERIODIC)
+ 				tick_broadcast_start_periodic(bc);
+ 			else
+-				tick_broadcast_setup_oneshot(bc);
++				tick_broadcast_setup_oneshot(bc, false);
+ 		}
+ 	}
+ out:
+@@ -1020,48 +1021,101 @@ static inline ktime_t tick_get_next_period(void)
+ /**
+  * tick_broadcast_setup_oneshot - setup the broadcast device
+  */
+-static void tick_broadcast_setup_oneshot(struct clock_event_device *bc)
++static void tick_broadcast_setup_oneshot(struct clock_event_device *bc,
++					 bool from_periodic)
+ {
+ 	int cpu = smp_processor_id();
++	ktime_t nexttick = 0;
+ 
+ 	if (!bc)
+ 		return;
+ 
+-	/* Set it up only once ! */
+-	if (bc->event_handler != tick_handle_oneshot_broadcast) {
+-		int was_periodic = clockevent_state_periodic(bc);
+-
+-		bc->event_handler = tick_handle_oneshot_broadcast;
+-
++	/*
++	 * When the broadcast device was switched to oneshot by the first
++	 * CPU handling the NOHZ change, the other CPUs will reach this
++	 * code via hrtimer_run_queues() -> tick_check_oneshot_change()
++	 * too. Set up the broadcast device only once!
++	 */
++	if (bc->event_handler == tick_handle_oneshot_broadcast) {
+ 		/*
+-		 * We must be careful here. There might be other CPUs
+-		 * waiting for periodic broadcast. We need to set the
+-		 * oneshot_mask bits for those and program the
+-		 * broadcast device to fire.
++		 * The CPU which switched from periodic to oneshot mode
++		 * set the broadcast oneshot bit for all other CPUs which
++		 * are in the general (periodic) broadcast mask to ensure
++		 * that CPUs which wait for the periodic broadcast are
++		 * woken up.
++		 *
++		 * Clear the bit for the local CPU as the set bit would
++		 * prevent the first tick_broadcast_enter() after this CPU
++		 * switched to oneshot state to program the broadcast
++		 * device.
++		 *
++		 * This code can also be reached via tick_broadcast_control(),
++		 * but this cannot avoid the tick_broadcast_clear_oneshot()
++		 * as that would break the periodic to oneshot transition of
++		 * secondary CPUs. But that's harmless as the below only
++		 * clears already cleared bits.
+ 		 */
++		tick_broadcast_clear_oneshot(cpu);
++		return;
++	}
++
++
++	bc->event_handler = tick_handle_oneshot_broadcast;
++	bc->next_event = KTIME_MAX;
++
++	/*
++	 * When the tick mode is switched from periodic to oneshot it must
++	 * be ensured that CPUs which are waiting for periodic broadcast
++	 * get their wake-up at the next tick.  This is achieved by ORing
++	 * tick_broadcast_mask into tick_broadcast_oneshot_mask.
++	 *
++	 * For other callers, e.g. broadcast device replacement,
++	 * tick_broadcast_oneshot_mask must not be touched as this would
++	 * set bits for CPUs which are already NOHZ, but not idle. Their
++	 * next tick_broadcast_enter() would observe the bit set and fail
++	 * to update the expiry time and the broadcast event device.
++	 */
++	if (from_periodic) {
+ 		cpumask_copy(tmpmask, tick_broadcast_mask);
++		/* Remove the local CPU as it is obviously not idle */
+ 		cpumask_clear_cpu(cpu, tmpmask);
+-		cpumask_or(tick_broadcast_oneshot_mask,
+-			   tick_broadcast_oneshot_mask, tmpmask);
++		cpumask_or(tick_broadcast_oneshot_mask, tick_broadcast_oneshot_mask, tmpmask);
+ 
+-		if (was_periodic && !cpumask_empty(tmpmask)) {
+-			ktime_t nextevt = tick_get_next_period();
++		/*
++		 * Ensure that the oneshot broadcast handler will wake the
++		 * CPUs which are still waiting for periodic broadcast.
++		 */
++		nexttick = tick_get_next_period();
++		tick_broadcast_init_next_event(tmpmask, nexttick);
+ 
+-			clockevents_switch_state(bc, CLOCK_EVT_STATE_ONESHOT);
+-			tick_broadcast_init_next_event(tmpmask, nextevt);
+-			tick_broadcast_set_event(bc, cpu, nextevt);
+-		} else
+-			bc->next_event = KTIME_MAX;
+-	} else {
+ 		/*
+-		 * The first cpu which switches to oneshot mode sets
+-		 * the bit for all other cpus which are in the general
+-		 * (periodic) broadcast mask. So the bit is set and
+-		 * would prevent the first broadcast enter after this
+-		 * to program the bc device.
++		 * If the underlying broadcast clock event device is
++		 * already in oneshot state, then there is nothing to do.
++		 * The device was already armed for the next tick
++		 * in tick_handle_broadcast_periodic()
+ 		 */
+-		tick_broadcast_clear_oneshot(cpu);
++		if (clockevent_state_oneshot(bc))
++			return;
+ 	}
++
++	/*
++	 * When switching from periodic to oneshot mode arm the broadcast
++	 * device for the next tick.
++	 *
++	 * If the broadcast device has been replaced in oneshot mode and
++	 * the oneshot broadcast mask is not empty, then arm it to expire
++	 * immediately in order to reevaluate the next expiring timer.
++	 * @nexttick is 0 and therefore in the past which will cause the
++	 * clockevent code to force an event.
++	 *
++	 * For both cases the programming can be avoided when the oneshot
++	 * broadcast mask is empty.
++	 *
++	 * tick_broadcast_set_event() implicitly switches the broadcast
++	 * device to oneshot state.
++	 */
++	if (!cpumask_empty(tick_broadcast_oneshot_mask))
++		tick_broadcast_set_event(bc, cpu, nexttick);
+ }
+ 
+ /*
+@@ -1070,14 +1124,16 @@ static void tick_broadcast_setup_oneshot(struct clock_event_device *bc)
+ void tick_broadcast_switch_to_oneshot(void)
+ {
+ 	struct clock_event_device *bc;
++	enum tick_device_mode oldmode;
+ 	unsigned long flags;
+ 
+ 	raw_spin_lock_irqsave(&tick_broadcast_lock, flags);
+ 
++	oldmode = tick_broadcast_device.mode;
+ 	tick_broadcast_device.mode = TICKDEV_MODE_ONESHOT;
+ 	bc = tick_broadcast_device.evtdev;
+ 	if (bc)
+-		tick_broadcast_setup_oneshot(bc);
++		tick_broadcast_setup_oneshot(bc, oldmode == TICKDEV_MODE_PERIODIC);
+ 
+ 	raw_spin_unlock_irqrestore(&tick_broadcast_lock, flags);
+ }
+diff --git a/kernel/trace/rethook.c b/kernel/trace/rethook.c
+index 32c3dfdb4d6a7..60f6cb2b486bf 100644
+--- a/kernel/trace/rethook.c
++++ b/kernel/trace/rethook.c
+@@ -288,7 +288,7 @@ unsigned long rethook_trampoline_handler(struct pt_regs *regs,
+ 	 * These loops must be protected from rethook_free_rcu() because those
+ 	 * are accessing 'rhn->rethook'.
+ 	 */
+-	preempt_disable();
++	preempt_disable_notrace();
+ 
+ 	/*
+ 	 * Run the handler on the shadow stack. Do not unlink the list here because
+@@ -321,7 +321,7 @@ unsigned long rethook_trampoline_handler(struct pt_regs *regs,
+ 		first = first->next;
+ 		rethook_recycle(rhn);
+ 	}
+-	preempt_enable();
++	preempt_enable_notrace();
+ 
+ 	return correct_ret_addr;
+ }
+diff --git a/lib/cpu_rmap.c b/lib/cpu_rmap.c
+index f08d9c56f712e..e77f12bb3c774 100644
+--- a/lib/cpu_rmap.c
++++ b/lib/cpu_rmap.c
+@@ -232,7 +232,8 @@ void free_irq_cpu_rmap(struct cpu_rmap *rmap)
+ 
+ 	for (index = 0; index < rmap->used; index++) {
+ 		glue = rmap->obj[index];
+-		irq_set_affinity_notifier(glue->notify.irq, NULL);
++		if (glue)
++			irq_set_affinity_notifier(glue->notify.irq, NULL);
+ 	}
+ 
+ 	cpu_rmap_put(rmap);
+@@ -268,6 +269,7 @@ static void irq_cpu_rmap_release(struct kref *ref)
+ 		container_of(ref, struct irq_glue, notify.kref);
+ 
+ 	cpu_rmap_put(glue->rmap);
++	glue->rmap->obj[glue->index] = NULL;
+ 	kfree(glue);
+ }
+ 
+@@ -297,6 +299,7 @@ int irq_cpu_rmap_add(struct cpu_rmap *rmap, int irq)
+ 	rc = irq_set_affinity_notifier(irq, &glue->notify);
+ 	if (rc) {
+ 		cpu_rmap_put(glue->rmap);
++		rmap->obj[glue->index] = NULL;
+ 		kfree(glue);
+ 	}
+ 	return rc;
+diff --git a/lib/dim/dim.c b/lib/dim/dim.c
+index 38045d6d05381..e89aaf07bde50 100644
+--- a/lib/dim/dim.c
++++ b/lib/dim/dim.c
+@@ -54,7 +54,7 @@ void dim_park_tired(struct dim *dim)
+ }
+ EXPORT_SYMBOL(dim_park_tired);
+ 
+-void dim_calc_stats(struct dim_sample *start, struct dim_sample *end,
++bool dim_calc_stats(struct dim_sample *start, struct dim_sample *end,
+ 		    struct dim_stats *curr_stats)
+ {
+ 	/* u32 holds up to 71 minutes, should be enough */
+@@ -66,7 +66,7 @@ void dim_calc_stats(struct dim_sample *start, struct dim_sample *end,
+ 			     start->comp_ctr);
+ 
+ 	if (!delta_us)
+-		return;
++		return false;
+ 
+ 	curr_stats->ppms = DIV_ROUND_UP(npkts * USEC_PER_MSEC, delta_us);
+ 	curr_stats->bpms = DIV_ROUND_UP(nbytes * USEC_PER_MSEC, delta_us);
+@@ -79,5 +79,6 @@ void dim_calc_stats(struct dim_sample *start, struct dim_sample *end,
+ 	else
+ 		curr_stats->cpe_ratio = 0;
+ 
++	return true;
+ }
+ EXPORT_SYMBOL(dim_calc_stats);
+diff --git a/lib/dim/net_dim.c b/lib/dim/net_dim.c
+index 53f6b9c6e9366..4e32f7aaac86c 100644
+--- a/lib/dim/net_dim.c
++++ b/lib/dim/net_dim.c
+@@ -227,7 +227,8 @@ void net_dim(struct dim *dim, struct dim_sample end_sample)
+ 				  dim->start_sample.event_ctr);
+ 		if (nevents < DIM_NEVENTS)
+ 			break;
+-		dim_calc_stats(&dim->start_sample, &end_sample, &curr_stats);
++		if (!dim_calc_stats(&dim->start_sample, &end_sample, &curr_stats))
++			break;
+ 		if (net_dim_decision(&curr_stats, dim)) {
+ 			dim->state = DIM_APPLY_NEW_PROFILE;
+ 			schedule_work(&dim->work);
+diff --git a/lib/dim/rdma_dim.c b/lib/dim/rdma_dim.c
+index 15462d54758d3..88f7794867078 100644
+--- a/lib/dim/rdma_dim.c
++++ b/lib/dim/rdma_dim.c
+@@ -88,7 +88,8 @@ void rdma_dim(struct dim *dim, u64 completions)
+ 		nevents = curr_sample->event_ctr - dim->start_sample.event_ctr;
+ 		if (nevents < DIM_NEVENTS)
+ 			break;
+-		dim_calc_stats(&dim->start_sample, curr_sample, &curr_stats);
++		if (!dim_calc_stats(&dim->start_sample, curr_sample, &curr_stats))
++			break;
+ 		if (rdma_dim_decision(&curr_stats, dim)) {
+ 			dim->state = DIM_APPLY_NEW_PROFILE;
+ 			schedule_work(&dim->work);
+diff --git a/lib/maple_tree.c b/lib/maple_tree.c
+index 1281a40d5735c..a614129d43394 100644
+--- a/lib/maple_tree.c
++++ b/lib/maple_tree.c
+@@ -5340,15 +5340,9 @@ int mas_empty_area(struct ma_state *mas, unsigned long min,
+ 
+ 	mt = mte_node_type(mas->node);
+ 	pivots = ma_pivots(mas_mn(mas), mt);
+-	if (offset)
+-		mas->min = pivots[offset - 1] + 1;
+-
+-	if (offset < mt_pivots[mt])
+-		mas->max = pivots[offset];
+-
+-	if (mas->index < mas->min)
+-		mas->index = mas->min;
+-
++	min = mas_safe_min(mas, pivots, offset);
++	if (mas->index < min)
++		mas->index = min;
+ 	mas->last = mas->index + size - 1;
+ 	return 0;
+ }
+diff --git a/mm/zswap.c b/mm/zswap.c
+index f6c89049cf700..5d5977c9ea45b 100644
+--- a/mm/zswap.c
++++ b/mm/zswap.c
+@@ -995,6 +995,22 @@ static int zswap_writeback_entry(struct zpool *pool, unsigned long handle)
+ 		goto fail;
+ 
+ 	case ZSWAP_SWAPCACHE_NEW: /* page is locked */
++		/*
++		 * Having a local reference to the zswap entry doesn't exclude
++		 * swapping from invalidating and recycling the swap slot. Once
++		 * the swapcache is secured against concurrent swapping to and
++		 * from the slot, recheck that the entry is still current before
++		 * writing.
++		 */
++		spin_lock(&tree->lock);
++		if (zswap_rb_search(&tree->rbroot, entry->offset) != entry) {
++			spin_unlock(&tree->lock);
++			delete_from_swap_cache(page_folio(page));
++			ret = -ENOMEM;
++			goto fail;
++		}
++		spin_unlock(&tree->lock);
++
+ 		/* decompress */
+ 		acomp_ctx = raw_cpu_ptr(entry->pool->acomp_ctx);
+ 		dlen = PAGE_SIZE;
+diff --git a/net/8021q/vlan_dev.c b/net/8021q/vlan_dev.c
+index 5920544e93e82..0fa52bcc296bf 100644
+--- a/net/8021q/vlan_dev.c
++++ b/net/8021q/vlan_dev.c
+@@ -108,8 +108,8 @@ static netdev_tx_t vlan_dev_hard_start_xmit(struct sk_buff *skb,
+ 	 * NOTE: THIS ASSUMES DIX ETHERNET, SPECIFICALLY NOT SUPPORTING
+ 	 * OTHER THINGS LIKE FDDI/TokenRing/802.3 SNAPs...
+ 	 */
+-	if (veth->h_vlan_proto != vlan->vlan_proto ||
+-	    vlan->flags & VLAN_FLAG_REORDER_HDR) {
++	if (vlan->flags & VLAN_FLAG_REORDER_HDR ||
++	    veth->h_vlan_proto != vlan->vlan_proto) {
+ 		u16 vlan_tci;
+ 		vlan_tci = vlan->vlan_id;
+ 		vlan_tci |= vlan_dev_get_egress_qos_mask(dev, skb->priority);
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index e87c928c9e17a..51f13518dba9b 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -886,8 +886,13 @@ static u8 hci_cc_read_local_ext_features(struct hci_dev *hdev, void *data,
+ 	if (rp->status)
+ 		return rp->status;
+ 
+-	if (hdev->max_page < rp->max_page)
+-		hdev->max_page = rp->max_page;
++	if (hdev->max_page < rp->max_page) {
++		if (test_bit(HCI_QUIRK_BROKEN_LOCAL_EXT_FEATURES_PAGE_2,
++			     &hdev->quirks))
++			bt_dev_warn(hdev, "broken local ext features page 2");
++		else
++			hdev->max_page = rp->max_page;
++	}
+ 
+ 	if (rp->page < HCI_MAX_PAGES)
+ 		memcpy(hdev->features[rp->page], rp->features, 8);
+diff --git a/net/bluetooth/hci_sync.c b/net/bluetooth/hci_sync.c
+index 632be12672887..b65ee3a32e5d7 100644
+--- a/net/bluetooth/hci_sync.c
++++ b/net/bluetooth/hci_sync.c
+@@ -4093,7 +4093,8 @@ static int hci_le_set_rpa_timeout_sync(struct hci_dev *hdev)
+ {
+ 	__le16 timeout = cpu_to_le16(hdev->rpa_timeout);
+ 
+-	if (!(hdev->commands[35] & 0x04))
++	if (!(hdev->commands[35] & 0x04) ||
++	    test_bit(HCI_QUIRK_BROKEN_SET_RPA_TIMEOUT, &hdev->quirks))
+ 		return 0;
+ 
+ 	return __hci_cmd_sync_status(hdev, HCI_OP_LE_SET_RPA_TIMEOUT,
+@@ -4533,6 +4534,9 @@ static const struct {
+ 			 "HCI Set Event Filter command not supported."),
+ 	HCI_QUIRK_BROKEN(ENHANCED_SETUP_SYNC_CONN,
+ 			 "HCI Enhanced Setup Synchronous Connection command is "
++			 "advertised, but not supported."),
++	HCI_QUIRK_BROKEN(SET_RPA_TIMEOUT,
++			 "HCI LE Set Random Private Address Timeout command is "
+ 			 "advertised, but not supported.")
+ };
+ 
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index 55a7226233f96..24d075282996c 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -4694,7 +4694,6 @@ static inline int l2cap_disconnect_rsp(struct l2cap_conn *conn,
+ 
+ 	chan = l2cap_get_chan_by_scid(conn, scid);
+ 	if (!chan) {
+-		mutex_unlock(&conn->chan_lock);
+ 		return 0;
+ 	}
+ 
+diff --git a/net/bridge/br_forward.c b/net/bridge/br_forward.c
+index 02bb620d3b8da..bd54f17e3c3d8 100644
+--- a/net/bridge/br_forward.c
++++ b/net/bridge/br_forward.c
+@@ -42,7 +42,7 @@ int br_dev_queue_push_xmit(struct net *net, struct sock *sk, struct sk_buff *skb
+ 	    eth_type_vlan(skb->protocol)) {
+ 		int depth;
+ 
+-		if (!__vlan_get_protocol(skb, skb->protocol, &depth))
++		if (!vlan_get_protocol_and_depth(skb, skb->protocol, &depth))
+ 			goto drop;
+ 
+ 		skb_set_network_header(skb, depth);
+diff --git a/net/bridge/br_private_tunnel.h b/net/bridge/br_private_tunnel.h
+index 2b053289f0166..efb096025151a 100644
+--- a/net/bridge/br_private_tunnel.h
++++ b/net/bridge/br_private_tunnel.h
+@@ -27,6 +27,10 @@ int br_process_vlan_tunnel_info(const struct net_bridge *br,
+ int br_get_vlan_tunnel_info_size(struct net_bridge_vlan_group *vg);
+ int br_fill_vlan_tunnel_info(struct sk_buff *skb,
+ 			     struct net_bridge_vlan_group *vg);
++bool vlan_tunid_inrange(const struct net_bridge_vlan *v_curr,
++			const struct net_bridge_vlan *v_last);
++int br_vlan_tunnel_info(const struct net_bridge_port *p, int cmd,
++			u16 vid, u32 tun_id, bool *changed);
+ 
+ #ifdef CONFIG_BRIDGE_VLAN_FILTERING
+ /* br_vlan_tunnel.c */
+@@ -43,10 +47,6 @@ void br_handle_ingress_vlan_tunnel(struct sk_buff *skb,
+ 				   struct net_bridge_vlan_group *vg);
+ int br_handle_egress_vlan_tunnel(struct sk_buff *skb,
+ 				 struct net_bridge_vlan *vlan);
+-bool vlan_tunid_inrange(const struct net_bridge_vlan *v_curr,
+-			const struct net_bridge_vlan *v_last);
+-int br_vlan_tunnel_info(const struct net_bridge_port *p, int cmd,
+-			u16 vid, u32 tun_id, bool *changed);
+ #else
+ static inline int vlan_tunnel_init(struct net_bridge_vlan_group *vg)
+ {
+diff --git a/net/can/isotp.c b/net/can/isotp.c
+index 5761d4ab839dd..1af623839bffa 100644
+--- a/net/can/isotp.c
++++ b/net/can/isotp.c
+@@ -1106,7 +1106,7 @@ static int isotp_recvmsg(struct socket *sock, struct msghdr *msg, size_t size,
+ 	struct isotp_sock *so = isotp_sk(sk);
+ 	int ret = 0;
+ 
+-	if (flags & ~(MSG_DONTWAIT | MSG_TRUNC | MSG_PEEK))
++	if (flags & ~(MSG_DONTWAIT | MSG_TRUNC | MSG_PEEK | MSG_CMSG_COMPAT))
+ 		return -EINVAL;
+ 
+ 	if (!so->bound)
+diff --git a/net/can/j1939/socket.c b/net/can/j1939/socket.c
+index 7e90f9e61d9bc..1790469b25808 100644
+--- a/net/can/j1939/socket.c
++++ b/net/can/j1939/socket.c
+@@ -798,7 +798,7 @@ static int j1939_sk_recvmsg(struct socket *sock, struct msghdr *msg,
+ 	struct j1939_sk_buff_cb *skcb;
+ 	int ret = 0;
+ 
+-	if (flags & ~(MSG_DONTWAIT | MSG_ERRQUEUE))
++	if (flags & ~(MSG_DONTWAIT | MSG_ERRQUEUE | MSG_CMSG_COMPAT))
+ 		return -EINVAL;
+ 
+ 	if (flags & MSG_ERRQUEUE)
+diff --git a/net/core/datagram.c b/net/core/datagram.c
+index e4ff2db40c981..8dabb9a74cb17 100644
+--- a/net/core/datagram.c
++++ b/net/core/datagram.c
+@@ -799,18 +799,21 @@ __poll_t datagram_poll(struct file *file, struct socket *sock,
+ {
+ 	struct sock *sk = sock->sk;
+ 	__poll_t mask;
++	u8 shutdown;
+ 
+ 	sock_poll_wait(file, sock, wait);
+ 	mask = 0;
+ 
+ 	/* exceptional events? */
+-	if (sk->sk_err || !skb_queue_empty_lockless(&sk->sk_error_queue))
++	if (READ_ONCE(sk->sk_err) ||
++	    !skb_queue_empty_lockless(&sk->sk_error_queue))
+ 		mask |= EPOLLERR |
+ 			(sock_flag(sk, SOCK_SELECT_ERR_QUEUE) ? EPOLLPRI : 0);
+ 
+-	if (sk->sk_shutdown & RCV_SHUTDOWN)
++	shutdown = READ_ONCE(sk->sk_shutdown);
++	if (shutdown & RCV_SHUTDOWN)
+ 		mask |= EPOLLRDHUP | EPOLLIN | EPOLLRDNORM;
+-	if (sk->sk_shutdown == SHUTDOWN_MASK)
++	if (shutdown == SHUTDOWN_MASK)
+ 		mask |= EPOLLHUP;
+ 
+ 	/* readable? */
+@@ -819,10 +822,12 @@ __poll_t datagram_poll(struct file *file, struct socket *sock,
+ 
+ 	/* Connection-based need to check for termination and startup */
+ 	if (connection_based(sk)) {
+-		if (sk->sk_state == TCP_CLOSE)
++		int state = READ_ONCE(sk->sk_state);
++
++		if (state == TCP_CLOSE)
+ 			mask |= EPOLLHUP;
+ 		/* connection hasn't started yet? */
+-		if (sk->sk_state == TCP_SYN_SENT)
++		if (state == TCP_SYN_SENT)
+ 			return mask;
+ 	}
+ 
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 1488f700bf819..b3d8e74fcaf06 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -2535,6 +2535,8 @@ int __netif_set_xps_queue(struct net_device *dev, const unsigned long *mask,
+ 	struct xps_map *map, *new_map;
+ 	unsigned int nr_ids;
+ 
++	WARN_ON_ONCE(index >= dev->num_tx_queues);
++
+ 	if (dev->num_tc) {
+ 		/* Do not allow XPS on subordinate device directly */
+ 		num_tc = dev->num_tc;
+@@ -3338,7 +3340,7 @@ __be16 skb_network_protocol(struct sk_buff *skb, int *depth)
+ 		type = eth->h_proto;
+ 	}
+ 
+-	return __vlan_get_protocol(skb, type, depth);
++	return vlan_get_protocol_and_depth(skb, type, depth);
+ }
+ 
+ /* openvswitch calls this on rx path, so we need a different check.
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index 14bb41aafee30..afec5e2c21ac0 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -5243,7 +5243,7 @@ bool skb_partial_csum_set(struct sk_buff *skb, u16 start, u16 off)
+ 	u32 csum_end = (u32)start + (u32)off + sizeof(__sum16);
+ 	u32 csum_start = skb_headroom(skb) + (u32)start;
+ 
+-	if (unlikely(csum_start > U16_MAX || csum_end > skb_headlen(skb))) {
++	if (unlikely(csum_start >= U16_MAX || csum_end > skb_headlen(skb))) {
+ 		net_warn_ratelimited("bad partial csum: csum=%u/%u headroom=%u headlen=%u\n",
+ 				     start, off, skb_headroom(skb), skb_headlen(skb));
+ 		return false;
+@@ -5251,7 +5251,7 @@ bool skb_partial_csum_set(struct sk_buff *skb, u16 start, u16 off)
+ 	skb->ip_summed = CHECKSUM_PARTIAL;
+ 	skb->csum_start = csum_start;
+ 	skb->csum_offset = off;
+-	skb_set_transport_header(skb, start);
++	skb->transport_header = csum_start;
+ 	return true;
+ }
+ EXPORT_SYMBOL_GPL(skb_partial_csum_set);
+diff --git a/net/core/stream.c b/net/core/stream.c
+index 434446ab14c57..f5c4e47df1650 100644
+--- a/net/core/stream.c
++++ b/net/core/stream.c
+@@ -73,8 +73,8 @@ int sk_stream_wait_connect(struct sock *sk, long *timeo_p)
+ 		add_wait_queue(sk_sleep(sk), &wait);
+ 		sk->sk_write_pending++;
+ 		done = sk_wait_event(sk, timeo_p,
+-				     !sk->sk_err &&
+-				     !((1 << sk->sk_state) &
++				     !READ_ONCE(sk->sk_err) &&
++				     !((1 << READ_ONCE(sk->sk_state)) &
+ 				       ~(TCPF_ESTABLISHED | TCPF_CLOSE_WAIT)), &wait);
+ 		remove_wait_queue(sk_sleep(sk), &wait);
+ 		sk->sk_write_pending--;
+@@ -87,9 +87,9 @@ EXPORT_SYMBOL(sk_stream_wait_connect);
+  * sk_stream_closing - Return 1 if we still have things to send in our buffers.
+  * @sk: socket to verify
+  */
+-static inline int sk_stream_closing(struct sock *sk)
++static int sk_stream_closing(const struct sock *sk)
+ {
+-	return (1 << sk->sk_state) &
++	return (1 << READ_ONCE(sk->sk_state)) &
+ 	       (TCPF_FIN_WAIT1 | TCPF_CLOSING | TCPF_LAST_ACK);
+ }
+ 
+@@ -142,8 +142,8 @@ int sk_stream_wait_memory(struct sock *sk, long *timeo_p)
+ 
+ 		set_bit(SOCK_NOSPACE, &sk->sk_socket->flags);
+ 		sk->sk_write_pending++;
+-		sk_wait_event(sk, &current_timeo, sk->sk_err ||
+-						  (sk->sk_shutdown & SEND_SHUTDOWN) ||
++		sk_wait_event(sk, &current_timeo, READ_ONCE(sk->sk_err) ||
++						  (READ_ONCE(sk->sk_shutdown) & SEND_SHUTDOWN) ||
+ 						  (sk_stream_memory_free(sk) &&
+ 						  !vm_wait), &wait);
+ 		sk->sk_write_pending--;
+diff --git a/net/devlink/core.c b/net/devlink/core.c
+index 777b091ef74df..c23ebabadc526 100644
+--- a/net/devlink/core.c
++++ b/net/devlink/core.c
+@@ -204,11 +204,6 @@ struct devlink *devlink_alloc_ns(const struct devlink_ops *ops,
+ 	if (ret < 0)
+ 		goto err_xa_alloc;
+ 
+-	devlink->netdevice_nb.notifier_call = devlink_port_netdevice_event;
+-	ret = register_netdevice_notifier(&devlink->netdevice_nb);
+-	if (ret)
+-		goto err_register_netdevice_notifier;
+-
+ 	devlink->dev = dev;
+ 	devlink->ops = ops;
+ 	xa_init_flags(&devlink->ports, XA_FLAGS_ALLOC);
+@@ -233,8 +228,6 @@ struct devlink *devlink_alloc_ns(const struct devlink_ops *ops,
+ 
+ 	return devlink;
+ 
+-err_register_netdevice_notifier:
+-	xa_erase(&devlinks, devlink->index);
+ err_xa_alloc:
+ 	kfree(devlink);
+ 	return NULL;
+@@ -266,8 +259,6 @@ void devlink_free(struct devlink *devlink)
+ 	xa_destroy(&devlink->params);
+ 	xa_destroy(&devlink->ports);
+ 
+-	WARN_ON_ONCE(unregister_netdevice_notifier(&devlink->netdevice_nb));
+-
+ 	xa_erase(&devlinks, devlink->index);
+ 
+ 	devlink_put(devlink);
+@@ -303,6 +294,10 @@ static struct pernet_operations devlink_pernet_ops __net_initdata = {
+ 	.pre_exit = devlink_pernet_pre_exit,
+ };
+ 
++static struct notifier_block devlink_port_netdevice_nb = {
++	.notifier_call = devlink_port_netdevice_event,
++};
++
+ static int __init devlink_init(void)
+ {
+ 	int err;
+@@ -311,6 +306,9 @@ static int __init devlink_init(void)
+ 	if (err)
+ 		goto out;
+ 	err = register_pernet_subsys(&devlink_pernet_ops);
++	if (err)
++		goto out;
++	err = register_netdevice_notifier(&devlink_port_netdevice_nb);
+ 
+ out:
+ 	WARN_ON(err);
+diff --git a/net/devlink/devl_internal.h b/net/devlink/devl_internal.h
+index e133f423294a2..62921b2eb0d3f 100644
+--- a/net/devlink/devl_internal.h
++++ b/net/devlink/devl_internal.h
+@@ -50,7 +50,6 @@ struct devlink {
+ 	u8 reload_failed:1;
+ 	refcount_t refcount;
+ 	struct rcu_work rwork;
+-	struct notifier_block netdevice_nb;
+ 	char priv[] __aligned(NETDEV_ALIGN);
+ };
+ 
+diff --git a/net/devlink/leftover.c b/net/devlink/leftover.c
+index dffca2f9bfa7f..cd02549680767 100644
+--- a/net/devlink/leftover.c
++++ b/net/devlink/leftover.c
+@@ -7073,10 +7073,9 @@ int devlink_port_netdevice_event(struct notifier_block *nb,
+ 	struct devlink_port *devlink_port = netdev->devlink_port;
+ 	struct devlink *devlink;
+ 
+-	devlink = container_of(nb, struct devlink, netdevice_nb);
+-
+-	if (!devlink_port || devlink_port->devlink != devlink)
++	if (!devlink_port)
+ 		return NOTIFY_OK;
++	devlink = devlink_port->devlink;
+ 
+ 	switch (event) {
+ 	case NETDEV_POST_INIT:
+diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c
+index 8db6747f892f8..70fd769f1174b 100644
+--- a/net/ipv4/af_inet.c
++++ b/net/ipv4/af_inet.c
+@@ -894,7 +894,7 @@ int inet_shutdown(struct socket *sock, int how)
+ 		   EPOLLHUP, even on eg. unconnected UDP sockets -- RR */
+ 		fallthrough;
+ 	default:
+-		sk->sk_shutdown |= how;
++		WRITE_ONCE(sk->sk_shutdown, sk->sk_shutdown | how);
+ 		if (sk->sk_prot->shutdown)
+ 			sk->sk_prot->shutdown(sk, how);
+ 		break;
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index 288693981b006..6c7c666554ced 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -498,6 +498,7 @@ __poll_t tcp_poll(struct file *file, struct socket *sock, poll_table *wait)
+ 	__poll_t mask;
+ 	struct sock *sk = sock->sk;
+ 	const struct tcp_sock *tp = tcp_sk(sk);
++	u8 shutdown;
+ 	int state;
+ 
+ 	sock_poll_wait(file, sock, wait);
+@@ -540,9 +541,10 @@ __poll_t tcp_poll(struct file *file, struct socket *sock, poll_table *wait)
+ 	 * NOTE. Check for TCP_CLOSE is added. The goal is to prevent
+ 	 * blocking on fresh not-connected or disconnected socket. --ANK
+ 	 */
+-	if (sk->sk_shutdown == SHUTDOWN_MASK || state == TCP_CLOSE)
++	shutdown = READ_ONCE(sk->sk_shutdown);
++	if (shutdown == SHUTDOWN_MASK || state == TCP_CLOSE)
+ 		mask |= EPOLLHUP;
+-	if (sk->sk_shutdown & RCV_SHUTDOWN)
++	if (shutdown & RCV_SHUTDOWN)
+ 		mask |= EPOLLIN | EPOLLRDNORM | EPOLLRDHUP;
+ 
+ 	/* Connected or passive Fast Open socket? */
+@@ -559,7 +561,7 @@ __poll_t tcp_poll(struct file *file, struct socket *sock, poll_table *wait)
+ 		if (tcp_stream_is_readable(sk, target))
+ 			mask |= EPOLLIN | EPOLLRDNORM;
+ 
+-		if (!(sk->sk_shutdown & SEND_SHUTDOWN)) {
++		if (!(shutdown & SEND_SHUTDOWN)) {
+ 			if (__sk_stream_is_writeable(sk, 1)) {
+ 				mask |= EPOLLOUT | EPOLLWRNORM;
+ 			} else {  /* send SIGIO later */
+@@ -2866,7 +2868,7 @@ void __tcp_close(struct sock *sk, long timeout)
+ 	int data_was_unread = 0;
+ 	int state;
+ 
+-	sk->sk_shutdown = SHUTDOWN_MASK;
++	WRITE_ONCE(sk->sk_shutdown, SHUTDOWN_MASK);
+ 
+ 	if (sk->sk_state == TCP_LISTEN) {
+ 		tcp_set_state(sk, TCP_CLOSE);
+@@ -3118,7 +3120,7 @@ int tcp_disconnect(struct sock *sk, int flags)
+ 
+ 	inet_bhash2_reset_saddr(sk);
+ 
+-	sk->sk_shutdown = 0;
++	WRITE_ONCE(sk->sk_shutdown, 0);
+ 	sock_reset_flag(sk, SOCK_DONE);
+ 	tp->srtt_us = 0;
+ 	tp->mdev_us = jiffies_to_usecs(TCP_TIMEOUT_INIT);
+@@ -4648,7 +4650,7 @@ void tcp_done(struct sock *sk)
+ 	if (req)
+ 		reqsk_fastopen_remove(sk, req, false);
+ 
+-	sk->sk_shutdown = SHUTDOWN_MASK;
++	WRITE_ONCE(sk->sk_shutdown, SHUTDOWN_MASK);
+ 
+ 	if (!sock_flag(sk, SOCK_DEAD))
+ 		sk->sk_state_change(sk);
+diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
+index ebf9175119370..2e9547467edbe 100644
+--- a/net/ipv4/tcp_bpf.c
++++ b/net/ipv4/tcp_bpf.c
+@@ -168,7 +168,7 @@ static int tcp_msg_wait_data(struct sock *sk, struct sk_psock *psock,
+ 	sk_set_bit(SOCKWQ_ASYNC_WAITDATA, sk);
+ 	ret = sk_wait_event(sk, &timeo,
+ 			    !list_empty(&psock->ingress_msg) ||
+-			    !skb_queue_empty(&sk->sk_receive_queue), &wait);
++			    !skb_queue_empty_lockless(&sk->sk_receive_queue), &wait);
+ 	sk_clear_bit(SOCKWQ_ASYNC_WAITDATA, sk);
+ 	remove_wait_queue(sk_sleep(sk), &wait);
+ 	return ret;
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index cc072d2cfcd82..10776c54ff784 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -4362,7 +4362,7 @@ void tcp_fin(struct sock *sk)
+ 
+ 	inet_csk_schedule_ack(sk);
+ 
+-	sk->sk_shutdown |= RCV_SHUTDOWN;
++	WRITE_ONCE(sk->sk_shutdown, sk->sk_shutdown | RCV_SHUTDOWN);
+ 	sock_set_flag(sk, SOCK_DONE);
+ 
+ 	switch (sk->sk_state) {
+@@ -6597,7 +6597,7 @@ int tcp_rcv_state_process(struct sock *sk, struct sk_buff *skb)
+ 			break;
+ 
+ 		tcp_set_state(sk, TCP_FIN_WAIT2);
+-		sk->sk_shutdown |= SEND_SHUTDOWN;
++		WRITE_ONCE(sk->sk_shutdown, sk->sk_shutdown | SEND_SHUTDOWN);
+ 
+ 		sk_dst_confirm(sk);
+ 
+diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
+index b9d55277cb858..c87958f979f0a 100644
+--- a/net/ipv4/tcp_ipv4.c
++++ b/net/ipv4/tcp_ipv4.c
+@@ -829,6 +829,9 @@ static void tcp_v4_send_reset(const struct sock *sk, struct sk_buff *skb)
+ 				   inet_twsk(sk)->tw_priority : sk->sk_priority;
+ 		transmit_time = tcp_transmit_time(sk);
+ 		xfrm_sk_clone_policy(ctl_sk, sk);
++	} else {
++		ctl_sk->sk_mark = 0;
++		ctl_sk->sk_priority = 0;
+ 	}
+ 	ip_send_unicast_reply(ctl_sk,
+ 			      skb, &TCP_SKB_CB(skb)->header.h4.opt,
+@@ -836,7 +839,6 @@ static void tcp_v4_send_reset(const struct sock *sk, struct sk_buff *skb)
+ 			      &arg, arg.iov[0].iov_len,
+ 			      transmit_time);
+ 
+-	ctl_sk->sk_mark = 0;
+ 	xfrm_sk_free_policy(ctl_sk);
+ 	sock_net_set(ctl_sk, &init_net);
+ 	__TCP_INC_STATS(net, TCP_MIB_OUTSEGS);
+@@ -935,7 +937,6 @@ static void tcp_v4_send_ack(const struct sock *sk,
+ 			      &arg, arg.iov[0].iov_len,
+ 			      transmit_time);
+ 
+-	ctl_sk->sk_mark = 0;
+ 	sock_net_set(ctl_sk, &init_net);
+ 	__TCP_INC_STATS(net, TCP_MIB_OUTSEGS);
+ 	local_bh_enable();
+diff --git a/net/ipv6/ip6_gre.c b/net/ipv6/ip6_gre.c
+index a4ecfc9d25930..da80974ad23ae 100644
+--- a/net/ipv6/ip6_gre.c
++++ b/net/ipv6/ip6_gre.c
+@@ -1015,12 +1015,14 @@ static netdev_tx_t ip6erspan_tunnel_xmit(struct sk_buff *skb,
+ 					    ntohl(tun_id),
+ 					    ntohl(md->u.index), truncate,
+ 					    false);
++			proto = htons(ETH_P_ERSPAN);
+ 		} else if (md->version == 2) {
+ 			erspan_build_header_v2(skb,
+ 					       ntohl(tun_id),
+ 					       md->u.md2.dir,
+ 					       get_hwid(&md->u.md2),
+ 					       truncate, false);
++			proto = htons(ETH_P_ERSPAN2);
+ 		} else {
+ 			goto tx_err;
+ 		}
+@@ -1043,24 +1045,25 @@ static netdev_tx_t ip6erspan_tunnel_xmit(struct sk_buff *skb,
+ 			break;
+ 		}
+ 
+-		if (t->parms.erspan_ver == 1)
++		if (t->parms.erspan_ver == 1) {
+ 			erspan_build_header(skb, ntohl(t->parms.o_key),
+ 					    t->parms.index,
+ 					    truncate, false);
+-		else if (t->parms.erspan_ver == 2)
++			proto = htons(ETH_P_ERSPAN);
++		} else if (t->parms.erspan_ver == 2) {
+ 			erspan_build_header_v2(skb, ntohl(t->parms.o_key),
+ 					       t->parms.dir,
+ 					       t->parms.hwid,
+ 					       truncate, false);
+-		else
++			proto = htons(ETH_P_ERSPAN2);
++		} else {
+ 			goto tx_err;
++		}
+ 
+ 		fl6.daddr = t->parms.raddr;
+ 	}
+ 
+ 	/* Push GRE header. */
+-	proto = (t->parms.erspan_ver == 1) ? htons(ETH_P_ERSPAN)
+-					   : htons(ETH_P_ERSPAN2);
+ 	gre_build_header(skb, 8, TUNNEL_SEQ, proto, 0, htonl(atomic_fetch_inc(&t->o_seqno)));
+ 
+ 	/* TooBig packet may have updated dst->dev's mtu */
+diff --git a/net/key/af_key.c b/net/key/af_key.c
+index a815f5ab4c49a..31ab12fd720ae 100644
+--- a/net/key/af_key.c
++++ b/net/key/af_key.c
+@@ -1940,7 +1940,8 @@ static u32 gen_reqid(struct net *net)
+ }
+ 
+ static int
+-parse_ipsecrequest(struct xfrm_policy *xp, struct sadb_x_ipsecrequest *rq)
++parse_ipsecrequest(struct xfrm_policy *xp, struct sadb_x_policy *pol,
++		   struct sadb_x_ipsecrequest *rq)
+ {
+ 	struct net *net = xp_net(xp);
+ 	struct xfrm_tmpl *t = xp->xfrm_vec + xp->xfrm_nr;
+@@ -1958,9 +1959,12 @@ parse_ipsecrequest(struct xfrm_policy *xp, struct sadb_x_ipsecrequest *rq)
+ 	if ((mode = pfkey_mode_to_xfrm(rq->sadb_x_ipsecrequest_mode)) < 0)
+ 		return -EINVAL;
+ 	t->mode = mode;
+-	if (rq->sadb_x_ipsecrequest_level == IPSEC_LEVEL_USE)
++	if (rq->sadb_x_ipsecrequest_level == IPSEC_LEVEL_USE) {
++		if ((mode == XFRM_MODE_TUNNEL || mode == XFRM_MODE_BEET) &&
++		    pol->sadb_x_policy_dir == IPSEC_DIR_OUTBOUND)
++			return -EINVAL;
+ 		t->optional = 1;
+-	else if (rq->sadb_x_ipsecrequest_level == IPSEC_LEVEL_UNIQUE) {
++	} else if (rq->sadb_x_ipsecrequest_level == IPSEC_LEVEL_UNIQUE) {
+ 		t->reqid = rq->sadb_x_ipsecrequest_reqid;
+ 		if (t->reqid > IPSEC_MANUAL_REQID_MAX)
+ 			t->reqid = 0;
+@@ -2002,7 +2006,7 @@ parse_ipsecrequests(struct xfrm_policy *xp, struct sadb_x_policy *pol)
+ 		    rq->sadb_x_ipsecrequest_len < sizeof(*rq))
+ 			return -EINVAL;
+ 
+-		if ((err = parse_ipsecrequest(xp, rq)) < 0)
++		if ((err = parse_ipsecrequest(xp, pol, rq)) < 0)
+ 			return err;
+ 		len -= rq->sadb_x_ipsecrequest_len;
+ 		rq = (void*)((u8*)rq + rq->sadb_x_ipsecrequest_len);
+diff --git a/net/llc/af_llc.c b/net/llc/af_llc.c
+index da7fe94bea2eb..9ffbc667be6cf 100644
+--- a/net/llc/af_llc.c
++++ b/net/llc/af_llc.c
+@@ -583,7 +583,8 @@ static int llc_ui_wait_for_disc(struct sock *sk, long timeout)
+ 
+ 	add_wait_queue(sk_sleep(sk), &wait);
+ 	while (1) {
+-		if (sk_wait_event(sk, &timeout, sk->sk_state == TCP_CLOSE, &wait))
++		if (sk_wait_event(sk, &timeout,
++				  READ_ONCE(sk->sk_state) == TCP_CLOSE, &wait))
+ 			break;
+ 		rc = -ERESTARTSYS;
+ 		if (signal_pending(current))
+@@ -603,7 +604,8 @@ static bool llc_ui_wait_for_conn(struct sock *sk, long timeout)
+ 
+ 	add_wait_queue(sk_sleep(sk), &wait);
+ 	while (1) {
+-		if (sk_wait_event(sk, &timeout, sk->sk_state != TCP_SYN_SENT, &wait))
++		if (sk_wait_event(sk, &timeout,
++				  READ_ONCE(sk->sk_state) != TCP_SYN_SENT, &wait))
+ 			break;
+ 		if (signal_pending(current) || !timeout)
+ 			break;
+@@ -622,7 +624,7 @@ static int llc_ui_wait_for_busy_core(struct sock *sk, long timeout)
+ 	while (1) {
+ 		rc = 0;
+ 		if (sk_wait_event(sk, &timeout,
+-				  (sk->sk_shutdown & RCV_SHUTDOWN) ||
++				  (READ_ONCE(sk->sk_shutdown) & RCV_SHUTDOWN) ||
+ 				  (!llc_data_accept_state(llc->state) &&
+ 				   !llc->remote_busy_flag &&
+ 				   !llc->p_flag), &wait))
+diff --git a/net/mac80211/cfg.c b/net/mac80211/cfg.c
+index d3d861911ed65..5ddbe0c8cfaa1 100644
+--- a/net/mac80211/cfg.c
++++ b/net/mac80211/cfg.c
+@@ -1512,9 +1512,10 @@ static int ieee80211_stop_ap(struct wiphy *wiphy, struct net_device *dev,
+ 		sdata_dereference(link->u.ap.unsol_bcast_probe_resp,
+ 				  sdata);
+ 
+-	/* abort any running channel switch */
++	/* abort any running channel switch or color change */
+ 	mutex_lock(&local->mtx);
+ 	link_conf->csa_active = false;
++	link_conf->color_change_active = false;
+ 	if (link->csa_block_tx) {
+ 		ieee80211_wake_vif_queues(local, sdata,
+ 					  IEEE80211_QUEUE_STOP_REASON_CSA);
+@@ -3502,7 +3503,7 @@ void ieee80211_channel_switch_disconnect(struct ieee80211_vif *vif, bool block_t
+ EXPORT_SYMBOL(ieee80211_channel_switch_disconnect);
+ 
+ static int ieee80211_set_after_csa_beacon(struct ieee80211_sub_if_data *sdata,
+-					  u32 *changed)
++					  u64 *changed)
+ {
+ 	int err;
+ 
+@@ -3545,7 +3546,7 @@ static int ieee80211_set_after_csa_beacon(struct ieee80211_sub_if_data *sdata,
+ static int __ieee80211_csa_finalize(struct ieee80211_sub_if_data *sdata)
+ {
+ 	struct ieee80211_local *local = sdata->local;
+-	u32 changed = 0;
++	u64 changed = 0;
+ 	int err;
+ 
+ 	sdata_assert_lock(sdata);
+diff --git a/net/mac80211/trace.h b/net/mac80211/trace.h
+index 9f4377566c425..c85367a4757a9 100644
+--- a/net/mac80211/trace.h
++++ b/net/mac80211/trace.h
+@@ -67,7 +67,7 @@
+ 			__entry->min_freq_offset = (c)->chan ? (c)->chan->freq_offset : 0;	\
+ 			__entry->min_chan_width = (c)->width;				\
+ 			__entry->min_center_freq1 = (c)->center_freq1;			\
+-			__entry->freq1_offset = (c)->freq1_offset;			\
++			__entry->min_freq1_offset = (c)->freq1_offset;			\
+ 			__entry->min_center_freq2 = (c)->center_freq2;
+ #define MIN_CHANDEF_PR_FMT	" min_control:%d.%03d MHz min_width:%d min_center: %d.%03d/%d MHz"
+ #define MIN_CHANDEF_PR_ARG	__entry->min_control_freq, __entry->min_freq_offset,	\
+diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c
+index 7699fb4106701..45cb8e7bcc613 100644
+--- a/net/mac80211/tx.c
++++ b/net/mac80211/tx.c
+@@ -3781,6 +3781,7 @@ struct sk_buff *ieee80211_tx_dequeue(struct ieee80211_hw *hw,
+ 	ieee80211_tx_result r;
+ 	struct ieee80211_vif *vif = txq->vif;
+ 	int q = vif->hw_queue[txq->ac];
++	unsigned long flags;
+ 	bool q_stopped;
+ 
+ 	WARN_ON_ONCE(softirq_count() == 0);
+@@ -3789,9 +3790,9 @@ struct sk_buff *ieee80211_tx_dequeue(struct ieee80211_hw *hw,
+ 		return NULL;
+ 
+ begin:
+-	spin_lock(&local->queue_stop_reason_lock);
++	spin_lock_irqsave(&local->queue_stop_reason_lock, flags);
+ 	q_stopped = local->queue_stop_reasons[q];
+-	spin_unlock(&local->queue_stop_reason_lock);
++	spin_unlock_irqrestore(&local->queue_stop_reason_lock, flags);
+ 
+ 	if (unlikely(q_stopped)) {
+ 		/* mark for waking later */
+diff --git a/net/netfilter/core.c b/net/netfilter/core.c
+index 358220b585215..edf92074221e2 100644
+--- a/net/netfilter/core.c
++++ b/net/netfilter/core.c
+@@ -699,9 +699,11 @@ void nf_conntrack_destroy(struct nf_conntrack *nfct)
+ 
+ 	rcu_read_lock();
+ 	ct_hook = rcu_dereference(nf_ct_hook);
+-	BUG_ON(ct_hook == NULL);
+-	ct_hook->destroy(nfct);
++	if (ct_hook)
++		ct_hook->destroy(nfct);
+ 	rcu_read_unlock();
++
++	WARN_ON(!ct_hook);
+ }
+ EXPORT_SYMBOL(nf_conntrack_destroy);
+ 
+diff --git a/net/netfilter/ipvs/ip_vs_sync.c b/net/netfilter/ipvs/ip_vs_sync.c
+index 4963fec815da3..d4fe7bb4f853a 100644
+--- a/net/netfilter/ipvs/ip_vs_sync.c
++++ b/net/netfilter/ipvs/ip_vs_sync.c
+@@ -603,7 +603,7 @@ static void ip_vs_sync_conn_v0(struct netns_ipvs *ipvs, struct ip_vs_conn *cp,
+ 	if (cp->flags & IP_VS_CONN_F_SEQ_MASK) {
+ 		struct ip_vs_sync_conn_options *opt =
+ 			(struct ip_vs_sync_conn_options *)&s[1];
+-		memcpy(opt, &cp->in_seq, sizeof(*opt));
++		memcpy(opt, &cp->sync_conn_opt, sizeof(*opt));
+ 	}
+ 
+ 	m->nr_conns++;
+diff --git a/net/netfilter/nf_conntrack_standalone.c b/net/netfilter/nf_conntrack_standalone.c
+index 57f6724c99a76..169e16fc2bceb 100644
+--- a/net/netfilter/nf_conntrack_standalone.c
++++ b/net/netfilter/nf_conntrack_standalone.c
+@@ -1218,11 +1218,12 @@ static int __init nf_conntrack_standalone_init(void)
+ 	nf_conntrack_htable_size_user = nf_conntrack_htable_size;
+ #endif
+ 
++	nf_conntrack_init_end();
++
+ 	ret = register_pernet_subsys(&nf_conntrack_net_ops);
+ 	if (ret < 0)
+ 		goto out_pernet;
+ 
+-	nf_conntrack_init_end();
+ 	return 0;
+ 
+ out_pernet:
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 45f701fd86f06..ef80504c3ccd2 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -3802,12 +3802,10 @@ static struct nft_rule *nft_rule_lookup_byid(const struct net *net,
+ 	struct nft_trans *trans;
+ 
+ 	list_for_each_entry(trans, &nft_net->commit_list, list) {
+-		struct nft_rule *rule = nft_trans_rule(trans);
+-
+ 		if (trans->msg_type == NFT_MSG_NEWRULE &&
+ 		    trans->ctx.chain == chain &&
+ 		    id == nft_trans_rule_id(trans))
+-			return rule;
++			return nft_trans_rule(trans);
+ 	}
+ 	return ERR_PTR(-ENOENT);
+ }
+diff --git a/net/netfilter/nft_chain_filter.c b/net/netfilter/nft_chain_filter.c
+index c3563f0be2692..680fe557686e4 100644
+--- a/net/netfilter/nft_chain_filter.c
++++ b/net/netfilter/nft_chain_filter.c
+@@ -344,6 +344,12 @@ static void nft_netdev_event(unsigned long event, struct net_device *dev,
+ 		return;
+ 	}
+ 
++	/* UNREGISTER events are also happening on netns exit.
++	 *
++	 * Although nf_tables core releases all tables/chains, only this event
++	 * handler provides guarantee that hook->ops.dev is still accessible,
++	 * so we cannot skip exiting net namespaces.
++	 */
+ 	__nft_release_basechain(ctx);
+ }
+ 
+@@ -362,9 +368,6 @@ static int nf_tables_netdev_event(struct notifier_block *this,
+ 	    event != NETDEV_CHANGENAME)
+ 		return NOTIFY_DONE;
+ 
+-	if (!check_net(ctx.net))
+-		return NOTIFY_DONE;
+-
+ 	nft_net = nft_pernet(ctx.net);
+ 	mutex_lock(&nft_net->commit_mutex);
+ 	list_for_each_entry(table, &nft_net->tables, list) {
+diff --git a/net/netfilter/nft_set_rbtree.c b/net/netfilter/nft_set_rbtree.c
+index 19ea4d3c35535..2f114aa10f1a7 100644
+--- a/net/netfilter/nft_set_rbtree.c
++++ b/net/netfilter/nft_set_rbtree.c
+@@ -221,7 +221,7 @@ static int nft_rbtree_gc_elem(const struct nft_set *__set,
+ {
+ 	struct nft_set *set = (struct nft_set *)__set;
+ 	struct rb_node *prev = rb_prev(&rbe->node);
+-	struct nft_rbtree_elem *rbe_prev;
++	struct nft_rbtree_elem *rbe_prev = NULL;
+ 	struct nft_set_gc_batch *gcb;
+ 
+ 	gcb = nft_set_gc_batch_check(set, NULL, GFP_ATOMIC);
+@@ -229,17 +229,21 @@ static int nft_rbtree_gc_elem(const struct nft_set *__set,
+ 		return -ENOMEM;
+ 
+ 	/* search for expired end interval coming before this element. */
+-	do {
++	while (prev) {
+ 		rbe_prev = rb_entry(prev, struct nft_rbtree_elem, node);
+ 		if (nft_rbtree_interval_end(rbe_prev))
+ 			break;
+ 
+ 		prev = rb_prev(prev);
+-	} while (prev != NULL);
++	}
++
++	if (rbe_prev) {
++		rb_erase(&rbe_prev->node, &priv->root);
++		atomic_dec(&set->nelems);
++	}
+ 
+-	rb_erase(&rbe_prev->node, &priv->root);
+ 	rb_erase(&rbe->node, &priv->root);
+-	atomic_sub(2, &set->nelems);
++	atomic_dec(&set->nelems);
+ 
+ 	nft_set_gc_batch_add(gcb, rbe);
+ 	nft_set_gc_batch_complete(gcb);
+@@ -268,7 +272,7 @@ static int __nft_rbtree_insert(const struct net *net, const struct nft_set *set,
+ 			       struct nft_set_ext **ext)
+ {
+ 	struct nft_rbtree_elem *rbe, *rbe_le = NULL, *rbe_ge = NULL;
+-	struct rb_node *node, *parent, **p, *first = NULL;
++	struct rb_node *node, *next, *parent, **p, *first = NULL;
+ 	struct nft_rbtree *priv = nft_set_priv(set);
+ 	u8 genmask = nft_genmask_next(net);
+ 	int d, err;
+@@ -307,7 +311,9 @@ static int __nft_rbtree_insert(const struct net *net, const struct nft_set *set,
+ 	 * Values stored in the tree are in reversed order, starting from
+ 	 * highest to lowest value.
+ 	 */
+-	for (node = first; node != NULL; node = rb_next(node)) {
++	for (node = first; node != NULL; node = next) {
++		next = rb_next(node);
++
+ 		rbe = rb_entry(node, struct nft_rbtree_elem, node);
+ 
+ 		if (!nft_set_elem_active(&rbe->ext, genmask))
+diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
+index 9b6eb28e6e94f..45d47b39de225 100644
+--- a/net/netlink/af_netlink.c
++++ b/net/netlink/af_netlink.c
+@@ -1990,7 +1990,7 @@ static int netlink_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
+ 
+ 	skb_free_datagram(sk, skb);
+ 
+-	if (nlk->cb_running &&
++	if (READ_ONCE(nlk->cb_running) &&
+ 	    atomic_read(&sk->sk_rmem_alloc) <= sk->sk_rcvbuf / 2) {
+ 		ret = netlink_dump(sk);
+ 		if (ret) {
+@@ -2304,7 +2304,7 @@ static int netlink_dump(struct sock *sk)
+ 	if (cb->done)
+ 		cb->done(cb);
+ 
+-	nlk->cb_running = false;
++	WRITE_ONCE(nlk->cb_running, false);
+ 	module = cb->module;
+ 	skb = cb->skb;
+ 	mutex_unlock(nlk->cb_mutex);
+@@ -2367,7 +2367,7 @@ int __netlink_dump_start(struct sock *ssk, struct sk_buff *skb,
+ 			goto error_put;
+ 	}
+ 
+-	nlk->cb_running = true;
++	WRITE_ONCE(nlk->cb_running, true);
+ 	nlk->dump_done_errno = INT_MAX;
+ 
+ 	mutex_unlock(nlk->cb_mutex);
+@@ -2705,7 +2705,7 @@ static int netlink_native_seq_show(struct seq_file *seq, void *v)
+ 			   nlk->groups ? (u32)nlk->groups[0] : 0,
+ 			   sk_rmem_alloc_get(s),
+ 			   sk_wmem_alloc_get(s),
+-			   nlk->cb_running,
++			   READ_ONCE(nlk->cb_running),
+ 			   refcount_read(&s->sk_refcnt),
+ 			   atomic_read(&s->sk_drops),
+ 			   sock_i_ino(s)
+diff --git a/net/nsh/nsh.c b/net/nsh/nsh.c
+index e9ca007718b7e..0f23e5e8e03eb 100644
+--- a/net/nsh/nsh.c
++++ b/net/nsh/nsh.c
+@@ -77,13 +77,12 @@ static struct sk_buff *nsh_gso_segment(struct sk_buff *skb,
+ 				       netdev_features_t features)
+ {
+ 	struct sk_buff *segs = ERR_PTR(-EINVAL);
++	u16 mac_offset = skb->mac_header;
+ 	unsigned int nsh_len, mac_len;
+ 	__be16 proto;
+-	int nhoff;
+ 
+ 	skb_reset_network_header(skb);
+ 
+-	nhoff = skb->network_header - skb->mac_header;
+ 	mac_len = skb->mac_len;
+ 
+ 	if (unlikely(!pskb_may_pull(skb, NSH_BASE_HDR_LEN)))
+@@ -108,15 +107,14 @@ static struct sk_buff *nsh_gso_segment(struct sk_buff *skb,
+ 	segs = skb_mac_gso_segment(skb, features);
+ 	if (IS_ERR_OR_NULL(segs)) {
+ 		skb_gso_error_unwind(skb, htons(ETH_P_NSH), nsh_len,
+-				     skb->network_header - nhoff,
+-				     mac_len);
++				     mac_offset, mac_len);
+ 		goto out;
+ 	}
+ 
+ 	for (skb = segs; skb; skb = skb->next) {
+ 		skb->protocol = htons(ETH_P_NSH);
+ 		__skb_push(skb, nsh_len);
+-		skb_set_mac_header(skb, -nhoff);
++		skb->mac_header = mac_offset;
+ 		skb->network_header = skb->mac_header + mac_len;
+ 		skb->mac_len = mac_len;
+ 	}
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index b8c62d88567ba..db9c2fa71c50c 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -1935,10 +1935,8 @@ static void packet_parse_headers(struct sk_buff *skb, struct socket *sock)
+ 	/* Move network header to the right position for VLAN tagged packets */
+ 	if (likely(skb->dev->type == ARPHRD_ETHER) &&
+ 	    eth_type_vlan(skb->protocol) &&
+-	    __vlan_get_protocol(skb, skb->protocol, &depth) != 0) {
+-		if (pskb_may_pull(skb, depth))
+-			skb_set_network_header(skb, depth);
+-	}
++	    vlan_get_protocol_and_depth(skb, skb->protocol, &depth) != 0)
++		skb_set_network_header(skb, depth);
+ 
+ 	skb_probe_transport_header(skb);
+ }
+diff --git a/net/sched/sch_mqprio.c b/net/sched/sch_mqprio.c
+index 48ed87b91086e..fc6225f15fcdb 100644
+--- a/net/sched/sch_mqprio.c
++++ b/net/sched/sch_mqprio.c
+@@ -33,9 +33,12 @@ static int mqprio_enable_offload(struct Qdisc *sch,
+ 				 const struct tc_mqprio_qopt *qopt,
+ 				 struct netlink_ext_ack *extack)
+ {
+-	struct tc_mqprio_qopt_offload mqprio = {.qopt = *qopt};
+ 	struct mqprio_sched *priv = qdisc_priv(sch);
+ 	struct net_device *dev = qdisc_dev(sch);
++	struct tc_mqprio_qopt_offload mqprio = {
++		.qopt = *qopt,
++		.extack = extack,
++	};
+ 	int err, i;
+ 
+ 	switch (priv->mode) {
+diff --git a/net/sched/sch_taprio.c b/net/sched/sch_taprio.c
+index 1f469861eae32..cbad430191721 100644
+--- a/net/sched/sch_taprio.c
++++ b/net/sched/sch_taprio.c
+@@ -1520,7 +1520,9 @@ static int taprio_enable_offload(struct net_device *dev,
+ 		return -ENOMEM;
+ 	}
+ 	offload->enable = 1;
++	offload->extack = extack;
+ 	mqprio_qopt_reconstruct(dev, &offload->mqprio.qopt);
++	offload->mqprio.extack = extack;
+ 	taprio_sched_to_offload(dev, sched, offload, &caps);
+ 
+ 	for (tc = 0; tc < TC_MAX_QUEUE; tc++)
+@@ -1528,14 +1530,20 @@ static int taprio_enable_offload(struct net_device *dev,
+ 
+ 	err = ops->ndo_setup_tc(dev, TC_SETUP_QDISC_TAPRIO, offload);
+ 	if (err < 0) {
+-		NL_SET_ERR_MSG(extack,
+-			       "Device failed to setup taprio offload");
++		NL_SET_ERR_MSG_WEAK(extack,
++				    "Device failed to setup taprio offload");
+ 		goto done;
+ 	}
+ 
+ 	q->offloaded = true;
+ 
+ done:
++	/* The offload structure may linger around via a reference taken by the
++	 * device driver, so clear up the netlink extack pointer so that the
++	 * driver isn't tempted to dereference data which stopped being valid
++	 */
++	offload->extack = NULL;
++	offload->mqprio.extack = NULL;
+ 	taprio_offload_free(offload);
+ 
+ 	return err;
+diff --git a/net/smc/smc_close.c b/net/smc/smc_close.c
+index 31db7438857c9..dbdf03e8aa5b5 100644
+--- a/net/smc/smc_close.c
++++ b/net/smc/smc_close.c
+@@ -67,8 +67,8 @@ static void smc_close_stream_wait(struct smc_sock *smc, long timeout)
+ 
+ 		rc = sk_wait_event(sk, &timeout,
+ 				   !smc_tx_prepared_sends(&smc->conn) ||
+-				   sk->sk_err == ECONNABORTED ||
+-				   sk->sk_err == ECONNRESET ||
++				   READ_ONCE(sk->sk_err) == ECONNABORTED ||
++				   READ_ONCE(sk->sk_err) == ECONNRESET ||
+ 				   smc->conn.killed,
+ 				   &wait);
+ 		if (rc)
+diff --git a/net/smc/smc_rx.c b/net/smc/smc_rx.c
+index 4380d32f5a5f9..9a2f3638d161d 100644
+--- a/net/smc/smc_rx.c
++++ b/net/smc/smc_rx.c
+@@ -267,9 +267,9 @@ int smc_rx_wait(struct smc_sock *smc, long *timeo,
+ 	sk_set_bit(SOCKWQ_ASYNC_WAITDATA, sk);
+ 	add_wait_queue(sk_sleep(sk), &wait);
+ 	rc = sk_wait_event(sk, timeo,
+-			   sk->sk_err ||
++			   READ_ONCE(sk->sk_err) ||
+ 			   cflags->peer_conn_abort ||
+-			   sk->sk_shutdown & RCV_SHUTDOWN ||
++			   READ_ONCE(sk->sk_shutdown) & RCV_SHUTDOWN ||
+ 			   conn->killed ||
+ 			   fcrit(conn),
+ 			   &wait);
+diff --git a/net/smc/smc_tx.c b/net/smc/smc_tx.c
+index f4b6a71ac488a..45128443f1f10 100644
+--- a/net/smc/smc_tx.c
++++ b/net/smc/smc_tx.c
+@@ -113,8 +113,8 @@ static int smc_tx_wait(struct smc_sock *smc, int flags)
+ 			break; /* at least 1 byte of free & no urgent data */
+ 		set_bit(SOCK_NOSPACE, &sk->sk_socket->flags);
+ 		sk_wait_event(sk, &timeo,
+-			      sk->sk_err ||
+-			      (sk->sk_shutdown & SEND_SHUTDOWN) ||
++			      READ_ONCE(sk->sk_err) ||
++			      (READ_ONCE(sk->sk_shutdown) & SEND_SHUTDOWN) ||
+ 			      smc_cdc_rxed_any_close(conn) ||
+ 			      (atomic_read(&conn->sndbuf_space) &&
+ 			       !conn->urg_tx_pend),
+diff --git a/net/socket.c b/net/socket.c
+index 9c92c0e6c4da8..263fab8e49010 100644
+--- a/net/socket.c
++++ b/net/socket.c
+@@ -2909,7 +2909,7 @@ static int do_recvmmsg(int fd, struct mmsghdr __user *mmsg,
+ 		 * error to return on the next call or if the
+ 		 * app asks about it using getsockopt(SO_ERROR).
+ 		 */
+-		sock->sk->sk_err = -err;
++		WRITE_ONCE(sock->sk->sk_err, -err);
+ 	}
+ out_put:
+ 	fput_light(sock->file, fput_needed);
+diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
+index fea7ce8fba14e..84d0b8798dfcc 100644
+--- a/net/sunrpc/svc.c
++++ b/net/sunrpc/svc.c
+@@ -1018,7 +1018,7 @@ static int __svc_register(struct net *net, const char *progname,
+ #endif
+ 	}
+ 
+-	trace_svc_register(progname, version, protocol, port, family, error);
++	trace_svc_register(progname, version, family, protocol, port, error);
+ 	return error;
+ }
+ 
+@@ -1382,7 +1382,7 @@ err_bad_rpc:
+ 	/* Only RPCv2 supported */
+ 	xdr_stream_encode_u32(xdr, RPC_VERSION);
+ 	xdr_stream_encode_u32(xdr, RPC_VERSION);
+-	goto sendit;
++	return 1;	/* don't wrap */
+ 
+ err_bad_auth:
+ 	dprintk("svc: authentication failed (%d)\n",
+@@ -1398,7 +1398,7 @@ err_bad_auth:
+ err_bad_prog:
+ 	dprintk("svc: unknown program %d\n", rqstp->rq_prog);
+ 	serv->sv_stats->rpcbadfmt++;
+-	xdr_stream_encode_u32(xdr, RPC_PROG_UNAVAIL);
++	*rqstp->rq_accept_statp = rpc_prog_unavail;
+ 	goto sendit;
+ 
+ err_bad_vers:
+@@ -1406,7 +1406,12 @@ err_bad_vers:
+ 		       rqstp->rq_vers, rqstp->rq_prog, progp->pg_name);
+ 
+ 	serv->sv_stats->rpcbadfmt++;
+-	xdr_stream_encode_u32(xdr, RPC_PROG_MISMATCH);
++	*rqstp->rq_accept_statp = rpc_prog_mismatch;
++
++	/*
++	 * svc_authenticate() has already added the verifier and
++	 * advanced the stream just past rq_accept_statp.
++	 */
+ 	xdr_stream_encode_u32(xdr, process.mismatch.lovers);
+ 	xdr_stream_encode_u32(xdr, process.mismatch.hivers);
+ 	goto sendit;
+@@ -1415,19 +1420,19 @@ err_bad_proc:
+ 	svc_printk(rqstp, "unknown procedure (%d)\n", rqstp->rq_proc);
+ 
+ 	serv->sv_stats->rpcbadfmt++;
+-	xdr_stream_encode_u32(xdr, RPC_PROC_UNAVAIL);
++	*rqstp->rq_accept_statp = rpc_proc_unavail;
+ 	goto sendit;
+ 
+ err_garbage_args:
+ 	svc_printk(rqstp, "failed to decode RPC header\n");
+ 
+ 	serv->sv_stats->rpcbadfmt++;
+-	xdr_stream_encode_u32(xdr, RPC_GARBAGE_ARGS);
++	*rqstp->rq_accept_statp = rpc_garbage_args;
+ 	goto sendit;
+ 
+ err_system_err:
+ 	serv->sv_stats->rpcbadfmt++;
+-	xdr_stream_encode_u32(xdr, RPC_SYSTEM_ERR);
++	*rqstp->rq_accept_statp = rpc_system_err;
+ 	goto sendit;
+ }
+ 
+diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c
+index ba629297da4e2..767628776dc01 100644
+--- a/net/sunrpc/svc_xprt.c
++++ b/net/sunrpc/svc_xprt.c
+@@ -532,13 +532,23 @@ void svc_reserve(struct svc_rqst *rqstp, int space)
+ }
+ EXPORT_SYMBOL_GPL(svc_reserve);
+ 
++static void free_deferred(struct svc_xprt *xprt, struct svc_deferred_req *dr)
++{
++	if (!dr)
++		return;
++
++	xprt->xpt_ops->xpo_release_ctxt(xprt, dr->xprt_ctxt);
++	kfree(dr);
++}
++
+ static void svc_xprt_release(struct svc_rqst *rqstp)
+ {
+ 	struct svc_xprt	*xprt = rqstp->rq_xprt;
+ 
+-	xprt->xpt_ops->xpo_release_rqst(rqstp);
++	xprt->xpt_ops->xpo_release_ctxt(xprt, rqstp->rq_xprt_ctxt);
++	rqstp->rq_xprt_ctxt = NULL;
+ 
+-	kfree(rqstp->rq_deferred);
++	free_deferred(xprt, rqstp->rq_deferred);
+ 	rqstp->rq_deferred = NULL;
+ 
+ 	pagevec_release(&rqstp->rq_pvec);
+@@ -1055,7 +1065,7 @@ static void svc_delete_xprt(struct svc_xprt *xprt)
+ 	spin_unlock_bh(&serv->sv_lock);
+ 
+ 	while ((dr = svc_deferred_dequeue(xprt)) != NULL)
+-		kfree(dr);
++		free_deferred(xprt, dr);
+ 
+ 	call_xpt_users(xprt);
+ 	svc_xprt_put(xprt);
+@@ -1177,8 +1187,8 @@ static void svc_revisit(struct cache_deferred_req *dreq, int too_many)
+ 	if (too_many || test_bit(XPT_DEAD, &xprt->xpt_flags)) {
+ 		spin_unlock(&xprt->xpt_lock);
+ 		trace_svc_defer_drop(dr);
++		free_deferred(xprt, dr);
+ 		svc_xprt_put(xprt);
+-		kfree(dr);
+ 		return;
+ 	}
+ 	dr->xprt = NULL;
+@@ -1223,14 +1233,14 @@ static struct cache_deferred_req *svc_defer(struct cache_req *req)
+ 		dr->addrlen = rqstp->rq_addrlen;
+ 		dr->daddr = rqstp->rq_daddr;
+ 		dr->argslen = rqstp->rq_arg.len >> 2;
+-		dr->xprt_ctxt = rqstp->rq_xprt_ctxt;
+-		rqstp->rq_xprt_ctxt = NULL;
+ 
+ 		/* back up head to the start of the buffer and copy */
+ 		skip = rqstp->rq_arg.len - rqstp->rq_arg.head[0].iov_len;
+ 		memcpy(dr->args, rqstp->rq_arg.head[0].iov_base - skip,
+ 		       dr->argslen << 2);
+ 	}
++	dr->xprt_ctxt = rqstp->rq_xprt_ctxt;
++	rqstp->rq_xprt_ctxt = NULL;
+ 	trace_svc_defer(rqstp);
+ 	svc_xprt_get(rqstp->rq_xprt);
+ 	dr->xprt = rqstp->rq_xprt;
+@@ -1263,6 +1273,8 @@ static noinline int svc_deferred_recv(struct svc_rqst *rqstp)
+ 	rqstp->rq_daddr       = dr->daddr;
+ 	rqstp->rq_respages    = rqstp->rq_pages;
+ 	rqstp->rq_xprt_ctxt   = dr->xprt_ctxt;
++
++	dr->xprt_ctxt = NULL;
+ 	svc_xprt_received(rqstp->rq_xprt);
+ 	return dr->argslen << 2;
+ }
+diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
+index 03a4f56150865..bf2d2cdca1185 100644
+--- a/net/sunrpc/svcsock.c
++++ b/net/sunrpc/svcsock.c
+@@ -112,27 +112,27 @@ static void svc_reclassify_socket(struct socket *sock)
+ #endif
+ 
+ /**
+- * svc_tcp_release_rqst - Release transport-related resources
+- * @rqstp: request structure with resources to be released
++ * svc_tcp_release_ctxt - Release transport-related resources
++ * @xprt: the transport which owned the context
++ * @ctxt: the context from rqstp->rq_xprt_ctxt or dr->xprt_ctxt
+  *
+  */
+-static void svc_tcp_release_rqst(struct svc_rqst *rqstp)
++static void svc_tcp_release_ctxt(struct svc_xprt *xprt, void *ctxt)
+ {
+ }
+ 
+ /**
+- * svc_udp_release_rqst - Release transport-related resources
+- * @rqstp: request structure with resources to be released
++ * svc_udp_release_ctxt - Release transport-related resources
++ * @xprt: the transport which owned the context
++ * @ctxt: the context from rqstp->rq_xprt_ctxt or dr->xprt_ctxt
+  *
+  */
+-static void svc_udp_release_rqst(struct svc_rqst *rqstp)
++static void svc_udp_release_ctxt(struct svc_xprt *xprt, void *ctxt)
+ {
+-	struct sk_buff *skb = rqstp->rq_xprt_ctxt;
++	struct sk_buff *skb = ctxt;
+ 
+-	if (skb) {
+-		rqstp->rq_xprt_ctxt = NULL;
++	if (skb)
+ 		consume_skb(skb);
+-	}
+ }
+ 
+ union svc_pktinfo_u {
+@@ -560,7 +560,8 @@ static int svc_udp_sendto(struct svc_rqst *rqstp)
+ 	unsigned int sent;
+ 	int err;
+ 
+-	svc_udp_release_rqst(rqstp);
++	svc_udp_release_ctxt(xprt, rqstp->rq_xprt_ctxt);
++	rqstp->rq_xprt_ctxt = NULL;
+ 
+ 	svc_set_cmsg_data(rqstp, cmh);
+ 
+@@ -632,7 +633,7 @@ static const struct svc_xprt_ops svc_udp_ops = {
+ 	.xpo_recvfrom = svc_udp_recvfrom,
+ 	.xpo_sendto = svc_udp_sendto,
+ 	.xpo_result_payload = svc_sock_result_payload,
+-	.xpo_release_rqst = svc_udp_release_rqst,
++	.xpo_release_ctxt = svc_udp_release_ctxt,
+ 	.xpo_detach = svc_sock_detach,
+ 	.xpo_free = svc_sock_free,
+ 	.xpo_has_wspace = svc_udp_has_wspace,
+@@ -1162,7 +1163,8 @@ static int svc_tcp_sendto(struct svc_rqst *rqstp)
+ 	unsigned int sent;
+ 	int err;
+ 
+-	svc_tcp_release_rqst(rqstp);
++	svc_tcp_release_ctxt(xprt, rqstp->rq_xprt_ctxt);
++	rqstp->rq_xprt_ctxt = NULL;
+ 
+ 	atomic_inc(&svsk->sk_sendqlen);
+ 	mutex_lock(&xprt->xpt_mutex);
+@@ -1207,7 +1209,7 @@ static const struct svc_xprt_ops svc_tcp_ops = {
+ 	.xpo_recvfrom = svc_tcp_recvfrom,
+ 	.xpo_sendto = svc_tcp_sendto,
+ 	.xpo_result_payload = svc_sock_result_payload,
+-	.xpo_release_rqst = svc_tcp_release_rqst,
++	.xpo_release_ctxt = svc_tcp_release_ctxt,
+ 	.xpo_detach = svc_tcp_sock_detach,
+ 	.xpo_free = svc_sock_free,
+ 	.xpo_has_wspace = svc_tcp_has_wspace,
+diff --git a/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
+index 1c658fa430633..a22fe7587fa6f 100644
+--- a/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
++++ b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
+@@ -239,21 +239,20 @@ void svc_rdma_recv_ctxt_put(struct svcxprt_rdma *rdma,
+ }
+ 
+ /**
+- * svc_rdma_release_rqst - Release transport-specific per-rqst resources
+- * @rqstp: svc_rqst being released
++ * svc_rdma_release_ctxt - Release transport-specific per-rqst resources
++ * @xprt: the transport which owned the context
++ * @vctxt: the context from rqstp->rq_xprt_ctxt or dr->xprt_ctxt
+  *
+  * Ensure that the recv_ctxt is released whether or not a Reply
+  * was sent. For example, the client could close the connection,
+  * or svc_process could drop an RPC, before the Reply is sent.
+  */
+-void svc_rdma_release_rqst(struct svc_rqst *rqstp)
++void svc_rdma_release_ctxt(struct svc_xprt *xprt, void *vctxt)
+ {
+-	struct svc_rdma_recv_ctxt *ctxt = rqstp->rq_xprt_ctxt;
+-	struct svc_xprt *xprt = rqstp->rq_xprt;
++	struct svc_rdma_recv_ctxt *ctxt = vctxt;
+ 	struct svcxprt_rdma *rdma =
+ 		container_of(xprt, struct svcxprt_rdma, sc_xprt);
+ 
+-	rqstp->rq_xprt_ctxt = NULL;
+ 	if (ctxt)
+ 		svc_rdma_recv_ctxt_put(rdma, ctxt);
+ }
+diff --git a/net/sunrpc/xprtrdma/svc_rdma_transport.c b/net/sunrpc/xprtrdma/svc_rdma_transport.c
+index 416b298f74ddb..ca04f7a6a085c 100644
+--- a/net/sunrpc/xprtrdma/svc_rdma_transport.c
++++ b/net/sunrpc/xprtrdma/svc_rdma_transport.c
+@@ -80,7 +80,7 @@ static const struct svc_xprt_ops svc_rdma_ops = {
+ 	.xpo_recvfrom = svc_rdma_recvfrom,
+ 	.xpo_sendto = svc_rdma_sendto,
+ 	.xpo_result_payload = svc_rdma_result_payload,
+-	.xpo_release_rqst = svc_rdma_release_rqst,
++	.xpo_release_ctxt = svc_rdma_release_ctxt,
+ 	.xpo_detach = svc_rdma_detach,
+ 	.xpo_free = svc_rdma_free,
+ 	.xpo_has_wspace = svc_rdma_has_wspace,
+diff --git a/net/tipc/bearer.c b/net/tipc/bearer.c
+index 35cac7733fd3a..53881406e2006 100644
+--- a/net/tipc/bearer.c
++++ b/net/tipc/bearer.c
+@@ -541,6 +541,19 @@ int tipc_bearer_mtu(struct net *net, u32 bearer_id)
+ 	return mtu;
+ }
+ 
++int tipc_bearer_min_mtu(struct net *net, u32 bearer_id)
++{
++	int mtu = TIPC_MIN_BEARER_MTU;
++	struct tipc_bearer *b;
++
++	rcu_read_lock();
++	b = bearer_get(net, bearer_id);
++	if (b)
++		mtu += b->encap_hlen;
++	rcu_read_unlock();
++	return mtu;
++}
++
+ /* tipc_bearer_xmit_skb - sends buffer to destination over bearer
+  */
+ void tipc_bearer_xmit_skb(struct net *net, u32 bearer_id,
+@@ -1138,8 +1151,8 @@ int __tipc_nl_bearer_set(struct sk_buff *skb, struct genl_info *info)
+ 				return -EINVAL;
+ 			}
+ #ifdef CONFIG_TIPC_MEDIA_UDP
+-			if (tipc_udp_mtu_bad(nla_get_u32
+-					     (props[TIPC_NLA_PROP_MTU]))) {
++			if (nla_get_u32(props[TIPC_NLA_PROP_MTU]) <
++			    b->encap_hlen + TIPC_MIN_BEARER_MTU) {
+ 				NL_SET_ERR_MSG(info->extack,
+ 					       "MTU value is out-of-range");
+ 				return -EINVAL;
+diff --git a/net/tipc/bearer.h b/net/tipc/bearer.h
+index 490ad6e5f7a3c..bd0cc5c287ef8 100644
+--- a/net/tipc/bearer.h
++++ b/net/tipc/bearer.h
+@@ -146,6 +146,7 @@ struct tipc_media {
+  * @identity: array index of this bearer within TIPC bearer array
+  * @disc: ptr to link setup request
+  * @net_plane: network plane ('A' through 'H') currently associated with bearer
++ * @encap_hlen: encap headers length
+  * @up: bearer up flag (bit 0)
+  * @refcnt: tipc_bearer reference counter
+  *
+@@ -170,6 +171,7 @@ struct tipc_bearer {
+ 	u32 identity;
+ 	struct tipc_discoverer *disc;
+ 	char net_plane;
++	u16 encap_hlen;
+ 	unsigned long up;
+ 	refcount_t refcnt;
+ };
+@@ -232,6 +234,7 @@ int tipc_bearer_setup(void);
+ void tipc_bearer_cleanup(void);
+ void tipc_bearer_stop(struct net *net);
+ int tipc_bearer_mtu(struct net *net, u32 bearer_id);
++int tipc_bearer_min_mtu(struct net *net, u32 bearer_id);
+ bool tipc_bearer_bcast_support(struct net *net, u32 bearer_id);
+ void tipc_bearer_xmit_skb(struct net *net, u32 bearer_id,
+ 			  struct sk_buff *skb,
+diff --git a/net/tipc/link.c b/net/tipc/link.c
+index b3ce24823f503..2eff1c7949cbc 100644
+--- a/net/tipc/link.c
++++ b/net/tipc/link.c
+@@ -2200,7 +2200,7 @@ static int tipc_link_proto_rcv(struct tipc_link *l, struct sk_buff *skb,
+ 	struct tipc_msg *hdr = buf_msg(skb);
+ 	struct tipc_gap_ack_blks *ga = NULL;
+ 	bool reply = msg_probe(hdr), retransmitted = false;
+-	u32 dlen = msg_data_sz(hdr), glen = 0;
++	u32 dlen = msg_data_sz(hdr), glen = 0, msg_max;
+ 	u16 peers_snd_nxt =  msg_next_sent(hdr);
+ 	u16 peers_tol = msg_link_tolerance(hdr);
+ 	u16 peers_prio = msg_linkprio(hdr);
+@@ -2239,6 +2239,9 @@ static int tipc_link_proto_rcv(struct tipc_link *l, struct sk_buff *skb,
+ 	switch (mtyp) {
+ 	case RESET_MSG:
+ 	case ACTIVATE_MSG:
++		msg_max = msg_max_pkt(hdr);
++		if (msg_max < tipc_bearer_min_mtu(l->net, l->bearer_id))
++			break;
+ 		/* Complete own link name with peer's interface name */
+ 		if_name =  strrchr(l->name, ':') + 1;
+ 		if (sizeof(l->name) - (if_name - l->name) <= TIPC_MAX_IF_NAME)
+@@ -2283,8 +2286,8 @@ static int tipc_link_proto_rcv(struct tipc_link *l, struct sk_buff *skb,
+ 		l->peer_session = msg_session(hdr);
+ 		l->in_session = true;
+ 		l->peer_bearer_id = msg_bearer_id(hdr);
+-		if (l->mtu > msg_max_pkt(hdr))
+-			l->mtu = msg_max_pkt(hdr);
++		if (l->mtu > msg_max)
++			l->mtu = msg_max;
+ 		break;
+ 
+ 	case STATE_MSG:
+diff --git a/net/tipc/socket.c b/net/tipc/socket.c
+index 37edfe10f8c6f..dd73d71c02a99 100644
+--- a/net/tipc/socket.c
++++ b/net/tipc/socket.c
+@@ -314,9 +314,9 @@ static void tsk_rej_rx_queue(struct sock *sk, int error)
+ 		tipc_sk_respond(sk, skb, error);
+ }
+ 
+-static bool tipc_sk_connected(struct sock *sk)
++static bool tipc_sk_connected(const struct sock *sk)
+ {
+-	return sk->sk_state == TIPC_ESTABLISHED;
++	return READ_ONCE(sk->sk_state) == TIPC_ESTABLISHED;
+ }
+ 
+ /* tipc_sk_type_connectionless - check if the socket is datagram socket
+diff --git a/net/tipc/udp_media.c b/net/tipc/udp_media.c
+index c2bb818704c8f..0a85244fd6188 100644
+--- a/net/tipc/udp_media.c
++++ b/net/tipc/udp_media.c
+@@ -738,8 +738,8 @@ static int tipc_udp_enable(struct net *net, struct tipc_bearer *b,
+ 			udp_conf.local_ip.s_addr = local.ipv4.s_addr;
+ 		udp_conf.use_udp_checksums = false;
+ 		ub->ifindex = dev->ifindex;
+-		if (tipc_mtu_bad(dev, sizeof(struct iphdr) +
+-				      sizeof(struct udphdr))) {
++		b->encap_hlen = sizeof(struct iphdr) + sizeof(struct udphdr);
++		if (tipc_mtu_bad(dev, b->encap_hlen)) {
+ 			err = -EINVAL;
+ 			goto err;
+ 		}
+@@ -760,6 +760,7 @@ static int tipc_udp_enable(struct net *net, struct tipc_bearer *b,
+ 		else
+ 			udp_conf.local_ip6 = local.ipv6;
+ 		ub->ifindex = dev->ifindex;
++		b->encap_hlen = sizeof(struct ipv6hdr) + sizeof(struct udphdr);
+ 		b->mtu = 1280;
+ #endif
+ 	} else {
+diff --git a/net/tls/tls_main.c b/net/tls/tls_main.c
+index b32c112984dd9..f2e7302a4d96b 100644
+--- a/net/tls/tls_main.c
++++ b/net/tls/tls_main.c
+@@ -111,7 +111,8 @@ int wait_on_pending_writer(struct sock *sk, long *timeo)
+ 			break;
+ 		}
+ 
+-		if (sk_wait_event(sk, timeo, !sk->sk_write_pending, &wait))
++		if (sk_wait_event(sk, timeo,
++				  !READ_ONCE(sk->sk_write_pending), &wait))
+ 			break;
+ 	}
+ 	remove_wait_queue(sk_sleep(sk), &wait);
+diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
+index 0b0f18ecce447..29c6083a37daf 100644
+--- a/net/unix/af_unix.c
++++ b/net/unix/af_unix.c
+@@ -603,7 +603,7 @@ static void unix_release_sock(struct sock *sk, int embrion)
+ 	/* Clear state */
+ 	unix_state_lock(sk);
+ 	sock_orphan(sk);
+-	sk->sk_shutdown = SHUTDOWN_MASK;
++	WRITE_ONCE(sk->sk_shutdown, SHUTDOWN_MASK);
+ 	path	     = u->path;
+ 	u->path.dentry = NULL;
+ 	u->path.mnt = NULL;
+@@ -628,7 +628,7 @@ static void unix_release_sock(struct sock *sk, int embrion)
+ 		if (sk->sk_type == SOCK_STREAM || sk->sk_type == SOCK_SEQPACKET) {
+ 			unix_state_lock(skpair);
+ 			/* No more writes */
+-			skpair->sk_shutdown = SHUTDOWN_MASK;
++			WRITE_ONCE(skpair->sk_shutdown, SHUTDOWN_MASK);
+ 			if (!skb_queue_empty(&sk->sk_receive_queue) || embrion)
+ 				skpair->sk_err = ECONNRESET;
+ 			unix_state_unlock(skpair);
+@@ -1442,7 +1442,7 @@ static long unix_wait_for_peer(struct sock *other, long timeo)
+ 
+ 	sched = !sock_flag(other, SOCK_DEAD) &&
+ 		!(other->sk_shutdown & RCV_SHUTDOWN) &&
+-		unix_recvq_full(other);
++		unix_recvq_full_lockless(other);
+ 
+ 	unix_state_unlock(other);
+ 
+@@ -3008,7 +3008,7 @@ static int unix_shutdown(struct socket *sock, int mode)
+ 	++mode;
+ 
+ 	unix_state_lock(sk);
+-	sk->sk_shutdown |= mode;
++	WRITE_ONCE(sk->sk_shutdown, sk->sk_shutdown | mode);
+ 	other = unix_peer(sk);
+ 	if (other)
+ 		sock_hold(other);
+@@ -3028,7 +3028,7 @@ static int unix_shutdown(struct socket *sock, int mode)
+ 		if (mode&SEND_SHUTDOWN)
+ 			peer_mode |= RCV_SHUTDOWN;
+ 		unix_state_lock(other);
+-		other->sk_shutdown |= peer_mode;
++		WRITE_ONCE(other->sk_shutdown, other->sk_shutdown | peer_mode);
+ 		unix_state_unlock(other);
+ 		other->sk_state_change(other);
+ 		if (peer_mode == SHUTDOWN_MASK)
+@@ -3160,16 +3160,18 @@ static __poll_t unix_poll(struct file *file, struct socket *sock, poll_table *wa
+ {
+ 	struct sock *sk = sock->sk;
+ 	__poll_t mask;
++	u8 shutdown;
+ 
+ 	sock_poll_wait(file, sock, wait);
+ 	mask = 0;
++	shutdown = READ_ONCE(sk->sk_shutdown);
+ 
+ 	/* exceptional events? */
+ 	if (sk->sk_err)
+ 		mask |= EPOLLERR;
+-	if (sk->sk_shutdown == SHUTDOWN_MASK)
++	if (shutdown == SHUTDOWN_MASK)
+ 		mask |= EPOLLHUP;
+-	if (sk->sk_shutdown & RCV_SHUTDOWN)
++	if (shutdown & RCV_SHUTDOWN)
+ 		mask |= EPOLLRDHUP | EPOLLIN | EPOLLRDNORM;
+ 
+ 	/* readable? */
+@@ -3203,18 +3205,20 @@ static __poll_t unix_dgram_poll(struct file *file, struct socket *sock,
+ 	struct sock *sk = sock->sk, *other;
+ 	unsigned int writable;
+ 	__poll_t mask;
++	u8 shutdown;
+ 
+ 	sock_poll_wait(file, sock, wait);
+ 	mask = 0;
++	shutdown = READ_ONCE(sk->sk_shutdown);
+ 
+ 	/* exceptional events? */
+ 	if (sk->sk_err || !skb_queue_empty_lockless(&sk->sk_error_queue))
+ 		mask |= EPOLLERR |
+ 			(sock_flag(sk, SOCK_SELECT_ERR_QUEUE) ? EPOLLPRI : 0);
+ 
+-	if (sk->sk_shutdown & RCV_SHUTDOWN)
++	if (shutdown & RCV_SHUTDOWN)
+ 		mask |= EPOLLRDHUP | EPOLLIN | EPOLLRDNORM;
+-	if (sk->sk_shutdown == SHUTDOWN_MASK)
++	if (shutdown == SHUTDOWN_MASK)
+ 		mask |= EPOLLHUP;
+ 
+ 	/* readable? */
+diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
+index 19aea7cba26ef..5d48017482bca 100644
+--- a/net/vmw_vsock/af_vsock.c
++++ b/net/vmw_vsock/af_vsock.c
+@@ -1426,7 +1426,7 @@ static int vsock_connect(struct socket *sock, struct sockaddr *addr,
+ 			vsock_transport_cancel_pkt(vsk);
+ 			vsock_remove_connected(vsk);
+ 			goto out_wait;
+-		} else if (timeout == 0) {
++		} else if ((sk->sk_state != TCP_ESTABLISHED) && (timeout == 0)) {
+ 			err = -ETIMEDOUT;
+ 			sk->sk_state = TCP_CLOSE;
+ 			sock->state = SS_UNCONNECTED;
+diff --git a/net/wireless/scan.c b/net/wireless/scan.c
+index 790bc31cf82ea..b3829ed844f84 100644
+--- a/net/wireless/scan.c
++++ b/net/wireless/scan.c
+@@ -5,7 +5,7 @@
+  * Copyright 2008 Johannes Berg <johannes@sipsolutions.net>
+  * Copyright 2013-2014  Intel Mobile Communications GmbH
+  * Copyright 2016	Intel Deutschland GmbH
+- * Copyright (C) 2018-2022 Intel Corporation
++ * Copyright (C) 2018-2023 Intel Corporation
+  */
+ #include <linux/kernel.h>
+ #include <linux/slab.h>
+@@ -540,6 +540,10 @@ static int cfg80211_parse_ap_info(struct cfg80211_colocated_ap *entry,
+ 	/* skip the TBTT offset */
+ 	pos++;
+ 
++	/* ignore entries with invalid BSSID */
++	if (!is_valid_ether_addr(pos))
++		return -EINVAL;
++
+ 	memcpy(entry->bssid, pos, ETH_ALEN);
+ 	pos += ETH_ALEN;
+ 
+diff --git a/net/xfrm/xfrm_device.c b/net/xfrm/xfrm_device.c
+index 95f1436bf6a2e..e2ca50bfca24f 100644
+--- a/net/xfrm/xfrm_device.c
++++ b/net/xfrm/xfrm_device.c
+@@ -378,7 +378,7 @@ int xfrm_dev_policy_add(struct net *net, struct xfrm_policy *xp,
+ 		break;
+ 	default:
+ 		xdo->dev = NULL;
+-		dev_put(dev);
++		netdev_put(dev, &xdo->dev_tracker);
+ 		NL_SET_ERR_MSG(extack, "Unrecognized offload direction");
+ 		return -EINVAL;
+ 	}
+diff --git a/net/xfrm/xfrm_interface_core.c b/net/xfrm/xfrm_interface_core.c
+index 35279c220bd78..1f99dc4690271 100644
+--- a/net/xfrm/xfrm_interface_core.c
++++ b/net/xfrm/xfrm_interface_core.c
+@@ -310,52 +310,6 @@ static void xfrmi_scrub_packet(struct sk_buff *skb, bool xnet)
+ 	skb->mark = 0;
+ }
+ 
+-static int xfrmi_input(struct sk_buff *skb, int nexthdr, __be32 spi,
+-		       int encap_type, unsigned short family)
+-{
+-	struct sec_path *sp;
+-
+-	sp = skb_sec_path(skb);
+-	if (sp && (sp->len || sp->olen) &&
+-	    !xfrm_policy_check(NULL, XFRM_POLICY_IN, skb, family))
+-		goto discard;
+-
+-	XFRM_SPI_SKB_CB(skb)->family = family;
+-	if (family == AF_INET) {
+-		XFRM_SPI_SKB_CB(skb)->daddroff = offsetof(struct iphdr, daddr);
+-		XFRM_TUNNEL_SKB_CB(skb)->tunnel.ip4 = NULL;
+-	} else {
+-		XFRM_SPI_SKB_CB(skb)->daddroff = offsetof(struct ipv6hdr, daddr);
+-		XFRM_TUNNEL_SKB_CB(skb)->tunnel.ip6 = NULL;
+-	}
+-
+-	return xfrm_input(skb, nexthdr, spi, encap_type);
+-discard:
+-	kfree_skb(skb);
+-	return 0;
+-}
+-
+-static int xfrmi4_rcv(struct sk_buff *skb)
+-{
+-	return xfrmi_input(skb, ip_hdr(skb)->protocol, 0, 0, AF_INET);
+-}
+-
+-static int xfrmi6_rcv(struct sk_buff *skb)
+-{
+-	return xfrmi_input(skb, skb_network_header(skb)[IP6CB(skb)->nhoff],
+-			   0, 0, AF_INET6);
+-}
+-
+-static int xfrmi4_input(struct sk_buff *skb, int nexthdr, __be32 spi, int encap_type)
+-{
+-	return xfrmi_input(skb, nexthdr, spi, encap_type, AF_INET);
+-}
+-
+-static int xfrmi6_input(struct sk_buff *skb, int nexthdr, __be32 spi, int encap_type)
+-{
+-	return xfrmi_input(skb, nexthdr, spi, encap_type, AF_INET6);
+-}
+-
+ static int xfrmi_rcv_cb(struct sk_buff *skb, int err)
+ {
+ 	const struct xfrm_mode *inner_mode;
+@@ -991,8 +945,8 @@ static struct pernet_operations xfrmi_net_ops = {
+ };
+ 
+ static struct xfrm6_protocol xfrmi_esp6_protocol __read_mostly = {
+-	.handler	=	xfrmi6_rcv,
+-	.input_handler	=	xfrmi6_input,
++	.handler	=	xfrm6_rcv,
++	.input_handler	=	xfrm_input,
+ 	.cb_handler	=	xfrmi_rcv_cb,
+ 	.err_handler	=	xfrmi6_err,
+ 	.priority	=	10,
+@@ -1042,8 +996,8 @@ static struct xfrm6_tunnel xfrmi_ip6ip_handler __read_mostly = {
+ #endif
+ 
+ static struct xfrm4_protocol xfrmi_esp4_protocol __read_mostly = {
+-	.handler	=	xfrmi4_rcv,
+-	.input_handler	=	xfrmi4_input,
++	.handler	=	xfrm4_rcv,
++	.input_handler	=	xfrm_input,
+ 	.cb_handler	=	xfrmi_rcv_cb,
+ 	.err_handler	=	xfrmi4_err,
+ 	.priority	=	10,
+diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c
+index 5c61ec04b839b..21a3a1cd3d6de 100644
+--- a/net/xfrm/xfrm_policy.c
++++ b/net/xfrm/xfrm_policy.c
+@@ -3712,12 +3712,6 @@ int __xfrm_policy_check(struct sock *sk, int dir, struct sk_buff *skb,
+ 		}
+ 		xfrm_nr = ti;
+ 
+-		if (net->xfrm.policy_default[dir] == XFRM_USERPOLICY_BLOCK &&
+-		    !xfrm_nr) {
+-			XFRM_INC_STATS(net, LINUX_MIB_XFRMINNOSTATES);
+-			goto reject;
+-		}
+-
+ 		if (npols > 1) {
+ 			xfrm_tmpl_sort(stp, tpp, xfrm_nr, family);
+ 			tpp = stp;
+@@ -3745,9 +3739,6 @@ int __xfrm_policy_check(struct sock *sk, int dir, struct sk_buff *skb,
+ 			goto reject;
+ 		}
+ 
+-		if (if_id)
+-			secpath_reset(skb);
+-
+ 		xfrm_pols_put(pols, npols);
+ 		return 1;
+ 	}
+diff --git a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c
+index 103af2b3e986f..6794b9dea27aa 100644
+--- a/net/xfrm/xfrm_user.c
++++ b/net/xfrm/xfrm_user.c
+@@ -1768,7 +1768,7 @@ static void copy_templates(struct xfrm_policy *xp, struct xfrm_user_tmpl *ut,
+ }
+ 
+ static int validate_tmpl(int nr, struct xfrm_user_tmpl *ut, u16 family,
+-			 struct netlink_ext_ack *extack)
++			 int dir, struct netlink_ext_ack *extack)
+ {
+ 	u16 prev_family;
+ 	int i;
+@@ -1794,6 +1794,10 @@ static int validate_tmpl(int nr, struct xfrm_user_tmpl *ut, u16 family,
+ 		switch (ut[i].mode) {
+ 		case XFRM_MODE_TUNNEL:
+ 		case XFRM_MODE_BEET:
++			if (ut[i].optional && dir == XFRM_POLICY_OUT) {
++				NL_SET_ERR_MSG(extack, "Mode in optional template not allowed in outbound policy");
++				return -EINVAL;
++			}
+ 			break;
+ 		default:
+ 			if (ut[i].family != prev_family) {
+@@ -1831,7 +1835,7 @@ static int validate_tmpl(int nr, struct xfrm_user_tmpl *ut, u16 family,
+ }
+ 
+ static int copy_from_user_tmpl(struct xfrm_policy *pol, struct nlattr **attrs,
+-			       struct netlink_ext_ack *extack)
++			       int dir, struct netlink_ext_ack *extack)
+ {
+ 	struct nlattr *rt = attrs[XFRMA_TMPL];
+ 
+@@ -1842,7 +1846,7 @@ static int copy_from_user_tmpl(struct xfrm_policy *pol, struct nlattr **attrs,
+ 		int nr = nla_len(rt) / sizeof(*utmpl);
+ 		int err;
+ 
+-		err = validate_tmpl(nr, utmpl, pol->family, extack);
++		err = validate_tmpl(nr, utmpl, pol->family, dir, extack);
+ 		if (err)
+ 			return err;
+ 
+@@ -1919,7 +1923,7 @@ static struct xfrm_policy *xfrm_policy_construct(struct net *net,
+ 	if (err)
+ 		goto error;
+ 
+-	if (!(err = copy_from_user_tmpl(xp, attrs, extack)))
++	if (!(err = copy_from_user_tmpl(xp, attrs, p->dir, extack)))
+ 		err = copy_from_user_sec_ctx(xp, attrs);
+ 	if (err)
+ 		goto error;
+@@ -1978,6 +1982,7 @@ static int xfrm_add_policy(struct sk_buff *skb, struct nlmsghdr *nlh,
+ 
+ 	if (err) {
+ 		xfrm_dev_policy_delete(xp);
++		xfrm_dev_policy_free(xp);
+ 		security_xfrm_policy_free(xp->security);
+ 		kfree(xp);
+ 		return err;
+@@ -3497,7 +3502,7 @@ static struct xfrm_policy *xfrm_compile_policy(struct sock *sk, int opt,
+ 		return NULL;
+ 
+ 	nr = ((len - sizeof(*p)) / sizeof(*ut));
+-	if (validate_tmpl(nr, ut, p->sel.family, NULL))
++	if (validate_tmpl(nr, ut, p->sel.family, p->dir, NULL))
+ 		return NULL;
+ 
+ 	if (p->dir > XFRM_POLICY_OUT)
+diff --git a/samples/bpf/hbm.c b/samples/bpf/hbm.c
+index 516fbac28b716..7f89700a17b69 100644
+--- a/samples/bpf/hbm.c
++++ b/samples/bpf/hbm.c
+@@ -315,6 +315,7 @@ static int run_bpf_prog(char *prog, int cg_id)
+ 		fout = fopen(fname, "w");
+ 		fprintf(fout, "id:%d\n", cg_id);
+ 		fprintf(fout, "ERROR: Could not lookup queue_stats\n");
++		fclose(fout);
+ 	} else if (stats_flag && qstats.lastPacketTime >
+ 		   qstats.firstPacketTime) {
+ 		long long delta_us = (qstats.lastPacketTime -
+diff --git a/scripts/recordmcount.c b/scripts/recordmcount.c
+index e30216525325b..40ae6b2c7a6da 100644
+--- a/scripts/recordmcount.c
++++ b/scripts/recordmcount.c
+@@ -110,6 +110,7 @@ static ssize_t uwrite(void const *const buf, size_t const count)
+ {
+ 	size_t cnt = count;
+ 	off_t idx = 0;
++	void *p = NULL;
+ 
+ 	file_updated = 1;
+ 
+@@ -117,7 +118,10 @@ static ssize_t uwrite(void const *const buf, size_t const count)
+ 		off_t aoffset = (file_ptr + count) - file_end;
+ 
+ 		if (aoffset > file_append_size) {
+-			file_append = realloc(file_append, aoffset);
++			p = realloc(file_append, aoffset);
++			if (!p)
++				free(file_append);
++			file_append = p;
+ 			file_append_size = aoffset;
+ 		}
+ 		if (!file_append) {
+diff --git a/sound/firewire/digi00x/digi00x-stream.c b/sound/firewire/digi00x/digi00x-stream.c
+index a15f55b0dce37..295163bb8abb6 100644
+--- a/sound/firewire/digi00x/digi00x-stream.c
++++ b/sound/firewire/digi00x/digi00x-stream.c
+@@ -259,8 +259,10 @@ int snd_dg00x_stream_init_duplex(struct snd_dg00x *dg00x)
+ 		return err;
+ 
+ 	err = init_stream(dg00x, &dg00x->tx_stream);
+-	if (err < 0)
++	if (err < 0) {
+ 		destroy_stream(dg00x, &dg00x->rx_stream);
++		return err;
++	}
+ 
+ 	err = amdtp_domain_init(&dg00x->domain);
+ 	if (err < 0) {
+diff --git a/sound/pci/hda/hda_generic.c b/sound/pci/hda/hda_generic.c
+index fc114e5224806..dbf7aa88e0e31 100644
+--- a/sound/pci/hda/hda_generic.c
++++ b/sound/pci/hda/hda_generic.c
+@@ -1155,8 +1155,8 @@ static bool path_has_mixer(struct hda_codec *codec, int path_idx, int ctl_type)
+ 	return path && path->ctls[ctl_type];
+ }
+ 
+-static const char * const channel_name[4] = {
+-	"Front", "Surround", "CLFE", "Side"
++static const char * const channel_name[] = {
++	"Front", "Surround", "CLFE", "Side", "Back",
+ };
+ 
+ /* give some appropriate ctl name prefix for the given line out channel */
+@@ -1182,7 +1182,7 @@ static const char *get_line_out_pfx(struct hda_codec *codec, int ch,
+ 
+ 	/* multi-io channels */
+ 	if (ch >= cfg->line_outs)
+-		return channel_name[ch];
++		goto fixed_name;
+ 
+ 	switch (cfg->line_out_type) {
+ 	case AUTO_PIN_SPEAKER_OUT:
+@@ -1234,6 +1234,7 @@ static const char *get_line_out_pfx(struct hda_codec *codec, int ch,
+ 	if (cfg->line_outs == 1 && !spec->multi_ios)
+ 		return "Line Out";
+ 
++ fixed_name:
+ 	if (ch >= ARRAY_SIZE(channel_name)) {
+ 		snd_BUG();
+ 		return "PCM";
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 77a592f219472..881b2f3a1551f 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -2528,6 +2528,9 @@ static const struct pci_device_id azx_ids[] = {
+ 	/* Meteorlake-P */
+ 	{ PCI_DEVICE(0x8086, 0x7e28),
+ 	  .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE},
++	/* Lunarlake-P */
++	{ PCI_DEVICE(0x8086, 0xa828),
++	  .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE},
+ 	/* Broxton-P(Apollolake) */
+ 	{ PCI_DEVICE(0x8086, 0x5a98),
+ 	  .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_BROXTON },
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index 5c6980394dcec..be2c6cff77011 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -4577,6 +4577,11 @@ HDA_CODEC_ENTRY(0x10de009d, "GPU 9d HDMI/DP",	patch_nvhdmi),
+ HDA_CODEC_ENTRY(0x10de009e, "GPU 9e HDMI/DP",	patch_nvhdmi),
+ HDA_CODEC_ENTRY(0x10de009f, "GPU 9f HDMI/DP",	patch_nvhdmi),
+ HDA_CODEC_ENTRY(0x10de00a0, "GPU a0 HDMI/DP",	patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de00a3, "GPU a3 HDMI/DP",	patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de00a4, "GPU a4 HDMI/DP",	patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de00a5, "GPU a5 HDMI/DP",	patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de00a6, "GPU a6 HDMI/DP",	patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de00a7, "GPU a7 HDMI/DP",	patch_nvhdmi),
+ HDA_CODEC_ENTRY(0x10de8001, "MCP73 HDMI",	patch_nvhdmi_2ch),
+ HDA_CODEC_ENTRY(0x10de8067, "MCP67/68 HDMI",	patch_nvhdmi_2ch),
+ HDA_CODEC_ENTRY(0x11069f80, "VX900 HDMI/DP",	patch_via_hdmi),
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 172ffc2c332b7..c757607177368 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -9363,7 +9363,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x802f, "HP Z240", ALC221_FIXUP_HP_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x103c, 0x8077, "HP", ALC256_FIXUP_HP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x103c, 0x8158, "HP", ALC256_FIXUP_HP_HEADSET_MIC),
+-	SND_PCI_QUIRK(0x103c, 0x820d, "HP Pavilion 15", ALC269_FIXUP_HP_MUTE_LED_MIC3),
++	SND_PCI_QUIRK(0x103c, 0x820d, "HP Pavilion 15", ALC295_FIXUP_HP_X360),
+ 	SND_PCI_QUIRK(0x103c, 0x8256, "HP", ALC221_FIXUP_HP_FRONT_MIC),
+ 	SND_PCI_QUIRK(0x103c, 0x827e, "HP x360", ALC295_FIXUP_HP_X360),
+ 	SND_PCI_QUIRK(0x103c, 0x827f, "HP x360", ALC269_FIXUP_HP_MUTE_LED_MIC3),
+@@ -9458,7 +9458,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x8aa3, "HP ProBook 450 G9 (MB 8AA1)", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8aa8, "HP EliteBook 640 G9 (MB 8AA6)", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8aab, "HP EliteBook 650 G9 (MB 8AA9)", ALC236_FIXUP_HP_GPIO_LED),
+-	 SND_PCI_QUIRK(0x103c, 0x8abb, "HP ZBook Firefly 14 G9", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
++	SND_PCI_QUIRK(0x103c, 0x8abb, "HP ZBook Firefly 14 G9", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8ad1, "HP EliteBook 840 14 inch G9 Notebook PC", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8ad2, "HP EliteBook 860 16 inch G9 Notebook PC", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8b42, "HP", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
+@@ -9469,8 +9469,13 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x8b47, "HP", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8b5d, "HP", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
+ 	SND_PCI_QUIRK(0x103c, 0x8b5e, "HP", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
++	SND_PCI_QUIRK(0x103c, 0x8b63, "HP Elite Dragonfly 13.5 inch G4", ALC245_FIXUP_CS35L41_SPI_4_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8b65, "HP ProBook 455 15.6 inch G10 Notebook PC", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
+ 	SND_PCI_QUIRK(0x103c, 0x8b66, "HP", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
++	SND_PCI_QUIRK(0x103c, 0x8b70, "HP EliteBook 835 G10", ALC287_FIXUP_CS35L41_I2C_2),
++	SND_PCI_QUIRK(0x103c, 0x8b72, "HP EliteBook 845 G10", ALC287_FIXUP_CS35L41_I2C_2),
++	SND_PCI_QUIRK(0x103c, 0x8b74, "HP EliteBook 845W G10", ALC287_FIXUP_CS35L41_I2C_2),
++	SND_PCI_QUIRK(0x103c, 0x8b77, "HP ElieBook 865 G10", ALC287_FIXUP_CS35L41_I2C_2),
+ 	SND_PCI_QUIRK(0x103c, 0x8b7a, "HP", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8b7d, "HP", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8b87, "HP", ALC236_FIXUP_HP_GPIO_LED),
+@@ -9480,7 +9485,9 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x8b8f, "HP", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8b92, "HP", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8b96, "HP", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
++	SND_PCI_QUIRK(0x103c, 0x8b97, "HP", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
+ 	SND_PCI_QUIRK(0x103c, 0x8bf0, "HP", ALC236_FIXUP_HP_GPIO_LED),
++	SND_PCI_QUIRK(0x103c, 0x8c26, "HP HP EliteBook 800G11", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x1043, 0x103e, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x103f, "ASUS TX300", ALC282_FIXUP_ASUS_TX300),
+ 	SND_PCI_QUIRK(0x1043, 0x106d, "Asus K53BE", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+@@ -9522,6 +9529,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1043, 0x1b13, "Asus U41SV", ALC269_FIXUP_INV_DMIC),
+ 	SND_PCI_QUIRK(0x1043, 0x1bbd, "ASUS Z550MA", ALC255_FIXUP_ASUS_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1043, 0x1c23, "Asus X55U", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
++	SND_PCI_QUIRK(0x1043, 0x1c62, "ASUS GU603", ALC289_FIXUP_ASUS_GA401),
+ 	SND_PCI_QUIRK(0x1043, 0x1c92, "ASUS ROG Strix G15", ALC285_FIXUP_ASUS_G533Z_PINS),
+ 	SND_PCI_QUIRK(0x1043, 0x1ccd, "ASUS X555UB", ALC256_FIXUP_ASUS_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x1d42, "ASUS Zephyrus G14 2022", ALC289_FIXUP_ASUS_GA401),
+@@ -9618,6 +9626,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1558, 0x7716, "Clevo NS50PU", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x7717, "Clevo NS70PU", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x7718, "Clevo L140PU", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1558, 0x7724, "Clevo L140AU", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x8228, "Clevo NR40BU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x8520, "Clevo NH50D[CD]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x8521, "Clevo NH77D[CD]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+@@ -11663,6 +11672,7 @@ static const struct snd_pci_quirk alc662_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x1632, "HP RP5800", ALC662_FIXUP_HP_RP5800),
+ 	SND_PCI_QUIRK(0x103c, 0x870c, "HP", ALC897_FIXUP_HP_HSMIC_VERB),
+ 	SND_PCI_QUIRK(0x103c, 0x8719, "HP", ALC897_FIXUP_HP_HSMIC_VERB),
++	SND_PCI_QUIRK(0x103c, 0x872b, "HP", ALC897_FIXUP_HP_HSMIC_VERB),
+ 	SND_PCI_QUIRK(0x103c, 0x873e, "HP", ALC671_FIXUP_HP_HEADSET_MIC2),
+ 	SND_PCI_QUIRK(0x103c, 0x877e, "HP 288 Pro G6", ALC671_FIXUP_HP_HEADSET_MIC2),
+ 	SND_PCI_QUIRK(0x103c, 0x885f, "HP 288 Pro G8", ALC671_FIXUP_HP_HEADSET_MIC2),
+diff --git a/sound/soc/amd/Kconfig b/sound/soc/amd/Kconfig
+index c88ebd84bdd50..08e42082f5e96 100644
+--- a/sound/soc/amd/Kconfig
++++ b/sound/soc/amd/Kconfig
+@@ -90,6 +90,7 @@ config SND_SOC_AMD_VANGOGH_MACH
+ 
+ config SND_SOC_AMD_ACP6x
+ 	tristate "AMD Audio Coprocessor-v6.x Yellow Carp support"
++	select SND_AMD_ACP_CONFIG
+ 	depends on X86 && PCI
+ 	help
+ 	  This option enables Audio Coprocessor i.e ACP v6.x support on
+@@ -130,6 +131,7 @@ config SND_SOC_AMD_RPL_ACP6x
+ 
+ config SND_SOC_AMD_PS
+         tristate "AMD Audio Coprocessor-v6.3 Pink Sardine support"
++	select SND_AMD_ACP_CONFIG
+         depends on X86 && PCI && ACPI
+         help
+           This option enables Audio Coprocessor i.e ACP v6.3 support on
+diff --git a/sound/soc/amd/ps/acp63.h b/sound/soc/amd/ps/acp63.h
+index 6bf29b520511d..dd36790b25aef 100644
+--- a/sound/soc/amd/ps/acp63.h
++++ b/sound/soc/amd/ps/acp63.h
+@@ -111,3 +111,5 @@ struct acp63_dev_data {
+ 	u16 pdev_count;
+ 	u16 pdm_dev_index;
+ };
++
++int snd_amd_acp_find_config(struct pci_dev *pci);
+diff --git a/sound/soc/amd/ps/pci-ps.c b/sound/soc/amd/ps/pci-ps.c
+index 688a1d4643d91..afddb9a77ba49 100644
+--- a/sound/soc/amd/ps/pci-ps.c
++++ b/sound/soc/amd/ps/pci-ps.c
+@@ -247,11 +247,17 @@ static int snd_acp63_probe(struct pci_dev *pci,
+ {
+ 	struct acp63_dev_data *adata;
+ 	u32 addr;
+-	u32 irqflags;
++	u32 irqflags, flag;
+ 	int val;
+ 	int ret;
+ 
+ 	irqflags = IRQF_SHARED;
++
++	/* Return if acp config flag is defined */
++	flag = snd_amd_acp_find_config(pci);
++	if (flag)
++		return -ENODEV;
++
+ 	/* Pink Sardine device check */
+ 	switch (pci->revision) {
+ 	case 0x63:
+diff --git a/sound/soc/amd/yc/acp6x-mach.c b/sound/soc/amd/yc/acp6x-mach.c
+index 0acdf0156f075..b9958e5553674 100644
+--- a/sound/soc/amd/yc/acp6x-mach.c
++++ b/sound/soc/amd/yc/acp6x-mach.c
+@@ -45,6 +45,13 @@ static struct snd_soc_card acp6x_card = {
+ };
+ 
+ static const struct dmi_system_id yc_acp_quirk_table[] = {
++	{
++		.driver_data = &acp6x_card,
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "Dell Inc."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Dell G15 5525"),
++		}
++	},
+ 	{
+ 		.driver_data = &acp6x_card,
+ 		.matches = {
+@@ -178,6 +185,13 @@ static const struct dmi_system_id yc_acp_quirk_table[] = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "21EN"),
+ 		}
+ 	},
++	{
++		.driver_data = &acp6x_card,
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "21HY"),
++		}
++	},
+ 	{
+ 		.driver_data = &acp6x_card,
+ 		.matches = {
+@@ -262,6 +276,13 @@ static const struct dmi_system_id yc_acp_quirk_table[] = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "OMEN by HP Gaming Laptop 16z-n000"),
+ 		}
+ 	},
++	{
++		.driver_data = &acp6x_card,
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "HP"),
++			DMI_MATCH(DMI_BOARD_NAME, "8A42"),
++		}
++	},
+ 	{
+ 		.driver_data = &acp6x_card,
+ 		.matches = {
+diff --git a/sound/soc/amd/yc/acp6x.h b/sound/soc/amd/yc/acp6x.h
+index 036207568c048..2de7d1edf00b7 100644
+--- a/sound/soc/amd/yc/acp6x.h
++++ b/sound/soc/amd/yc/acp6x.h
+@@ -105,3 +105,6 @@ static inline void acp6x_writel(u32 val, void __iomem *base_addr)
+ {
+ 	writel(val, base_addr - ACP6x_PHY_BASE_ADDRESS);
+ }
++
++int snd_amd_acp_find_config(struct pci_dev *pci);
++
+diff --git a/sound/soc/amd/yc/pci-acp6x.c b/sound/soc/amd/yc/pci-acp6x.c
+index 77c5fa1f7af14..7af6a349b1d41 100644
+--- a/sound/soc/amd/yc/pci-acp6x.c
++++ b/sound/soc/amd/yc/pci-acp6x.c
+@@ -149,10 +149,16 @@ static int snd_acp6x_probe(struct pci_dev *pci,
+ 	int index = 0;
+ 	int val = 0x00;
+ 	u32 addr;
+-	unsigned int irqflags;
++	unsigned int irqflags, flag;
+ 	int ret;
+ 
+ 	irqflags = IRQF_SHARED;
++
++	/* Return if acp config flag is defined */
++	flag = snd_amd_acp_find_config(pci);
++	if (flag)
++		return -ENODEV;
++
+ 	/* Yellow Carp device check */
+ 	switch (pci->revision) {
+ 	case 0x60:
+diff --git a/sound/soc/fsl/fsl_micfil.c b/sound/soc/fsl/fsl_micfil.c
+index 94341e4352b3c..3f08082a55bec 100644
+--- a/sound/soc/fsl/fsl_micfil.c
++++ b/sound/soc/fsl/fsl_micfil.c
+@@ -1159,7 +1159,7 @@ static int fsl_micfil_probe(struct platform_device *pdev)
+ 	ret = devm_snd_dmaengine_pcm_register(&pdev->dev, NULL, 0);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "failed to pcm register\n");
+-		return ret;
++		goto err_pm_disable;
+ 	}
+ 
+ 	fsl_micfil_dai.capture.formats = micfil->soc->formats;
+@@ -1169,9 +1169,20 @@ static int fsl_micfil_probe(struct platform_device *pdev)
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "failed to register component %s\n",
+ 			fsl_micfil_component.name);
++		goto err_pm_disable;
+ 	}
+ 
+ 	return ret;
++
++err_pm_disable:
++	pm_runtime_disable(&pdev->dev);
++
++	return ret;
++}
++
++static void fsl_micfil_remove(struct platform_device *pdev)
++{
++	pm_runtime_disable(&pdev->dev);
+ }
+ 
+ static int __maybe_unused fsl_micfil_runtime_suspend(struct device *dev)
+@@ -1232,6 +1243,7 @@ static const struct dev_pm_ops fsl_micfil_pm_ops = {
+ 
+ static struct platform_driver fsl_micfil_driver = {
+ 	.probe = fsl_micfil_probe,
++	.remove_new = fsl_micfil_remove,
+ 	.driver = {
+ 		.name = "fsl-micfil-dai",
+ 		.pm = &fsl_micfil_pm_ops,
+diff --git a/sound/soc/mediatek/mt8186/mt8186-afe-clk.c b/sound/soc/mediatek/mt8186/mt8186-afe-clk.c
+index a6b4f29049bbc..539e3a023bc4e 100644
+--- a/sound/soc/mediatek/mt8186/mt8186-afe-clk.c
++++ b/sound/soc/mediatek/mt8186/mt8186-afe-clk.c
+@@ -644,9 +644,3 @@ int mt8186_init_clock(struct mtk_base_afe *afe)
+ 
+ 	return 0;
+ }
+-
+-void mt8186_deinit_clock(void *priv)
+-{
+-	struct mtk_base_afe *afe = priv;
+-	mt8186_audsys_clk_unregister(afe);
+-}
+diff --git a/sound/soc/mediatek/mt8186/mt8186-afe-clk.h b/sound/soc/mediatek/mt8186/mt8186-afe-clk.h
+index d5988717d8f2d..a9d59e506d9af 100644
+--- a/sound/soc/mediatek/mt8186/mt8186-afe-clk.h
++++ b/sound/soc/mediatek/mt8186/mt8186-afe-clk.h
+@@ -81,7 +81,6 @@ enum {
+ struct mtk_base_afe;
+ int mt8186_set_audio_int_bus_parent(struct mtk_base_afe *afe, int clk_id);
+ int mt8186_init_clock(struct mtk_base_afe *afe);
+-void mt8186_deinit_clock(void *priv);
+ int mt8186_afe_enable_cgs(struct mtk_base_afe *afe);
+ void mt8186_afe_disable_cgs(struct mtk_base_afe *afe);
+ int mt8186_afe_enable_clock(struct mtk_base_afe *afe);
+diff --git a/sound/soc/mediatek/mt8186/mt8186-afe-pcm.c b/sound/soc/mediatek/mt8186/mt8186-afe-pcm.c
+index 41172a82103ee..a868a04ed4e7a 100644
+--- a/sound/soc/mediatek/mt8186/mt8186-afe-pcm.c
++++ b/sound/soc/mediatek/mt8186/mt8186-afe-pcm.c
+@@ -2848,10 +2848,6 @@ static int mt8186_afe_pcm_dev_probe(struct platform_device *pdev)
+ 		return ret;
+ 	}
+ 
+-	ret = devm_add_action_or_reset(dev, mt8186_deinit_clock, (void *)afe);
+-	if (ret)
+-		return ret;
+-
+ 	/* init memif */
+ 	afe->memif_32bit_supported = 0;
+ 	afe->memif_size = MT8186_MEMIF_NUM;
+diff --git a/sound/soc/mediatek/mt8186/mt8186-audsys-clk.c b/sound/soc/mediatek/mt8186/mt8186-audsys-clk.c
+index 578969ca91c8e..5666be6b1bd2e 100644
+--- a/sound/soc/mediatek/mt8186/mt8186-audsys-clk.c
++++ b/sound/soc/mediatek/mt8186/mt8186-audsys-clk.c
+@@ -84,6 +84,29 @@ static const struct afe_gate aud_clks[CLK_AUD_NR_CLK] = {
+ 	GATE_AUD2(CLK_AUD_ETDM_OUT1_BCLK, "aud_etdm_out1_bclk", "top_audio", 24),
+ };
+ 
++static void mt8186_audsys_clk_unregister(void *data)
++{
++	struct mtk_base_afe *afe = data;
++	struct mt8186_afe_private *afe_priv = afe->platform_priv;
++	struct clk *clk;
++	struct clk_lookup *cl;
++	int i;
++
++	if (!afe_priv)
++		return;
++
++	for (i = 0; i < CLK_AUD_NR_CLK; i++) {
++		cl = afe_priv->lookup[i];
++		if (!cl)
++			continue;
++
++		clk = cl->clk;
++		clk_unregister_gate(clk);
++
++		clkdev_drop(cl);
++	}
++}
++
+ int mt8186_audsys_clk_register(struct mtk_base_afe *afe)
+ {
+ 	struct mt8186_afe_private *afe_priv = afe->platform_priv;
+@@ -124,27 +147,6 @@ int mt8186_audsys_clk_register(struct mtk_base_afe *afe)
+ 		afe_priv->lookup[i] = cl;
+ 	}
+ 
+-	return 0;
++	return devm_add_action_or_reset(afe->dev, mt8186_audsys_clk_unregister, afe);
+ }
+ 
+-void mt8186_audsys_clk_unregister(struct mtk_base_afe *afe)
+-{
+-	struct mt8186_afe_private *afe_priv = afe->platform_priv;
+-	struct clk *clk;
+-	struct clk_lookup *cl;
+-	int i;
+-
+-	if (!afe_priv)
+-		return;
+-
+-	for (i = 0; i < CLK_AUD_NR_CLK; i++) {
+-		cl = afe_priv->lookup[i];
+-		if (!cl)
+-			continue;
+-
+-		clk = cl->clk;
+-		clk_unregister_gate(clk);
+-
+-		clkdev_drop(cl);
+-	}
+-}
+diff --git a/sound/soc/mediatek/mt8186/mt8186-audsys-clk.h b/sound/soc/mediatek/mt8186/mt8186-audsys-clk.h
+index b8d6a06e11e8d..897a2914dc191 100644
+--- a/sound/soc/mediatek/mt8186/mt8186-audsys-clk.h
++++ b/sound/soc/mediatek/mt8186/mt8186-audsys-clk.h
+@@ -10,6 +10,5 @@
+ #define _MT8186_AUDSYS_CLK_H_
+ 
+ int mt8186_audsys_clk_register(struct mtk_base_afe *afe);
+-void mt8186_audsys_clk_unregister(struct mtk_base_afe *afe);
+ 
+ #endif
+diff --git a/sound/soc/sof/ipc3-topology.c b/sound/soc/sof/ipc3-topology.c
+index b1f425b39db94..ffa4c6dea752a 100644
+--- a/sound/soc/sof/ipc3-topology.c
++++ b/sound/soc/sof/ipc3-topology.c
+@@ -2111,10 +2111,13 @@ static int sof_ipc3_dai_config(struct snd_sof_dev *sdev, struct snd_sof_widget *
+ 	 * For the case of PAUSE/HW_FREE, since there are no quirks, flags can be used as is.
+ 	 */
+ 
+-	if (flags & SOF_DAI_CONFIG_FLAGS_HW_PARAMS)
++	if (flags & SOF_DAI_CONFIG_FLAGS_HW_PARAMS) {
++		/* Clear stale command */
++		config->flags &= ~SOF_DAI_CONFIG_FLAGS_CMD_MASK;
+ 		config->flags |= flags;
+-	else
++	} else {
+ 		config->flags = flags;
++	}
+ 
+ 	/* only send the IPC if the widget is set up in the DSP */
+ 	if (swidget->use_count > 0) {
+diff --git a/sound/soc/sof/topology.c b/sound/soc/sof/topology.c
+index 9f3a038fe21ad..93ab58cea14f8 100644
+--- a/sound/soc/sof/topology.c
++++ b/sound/soc/sof/topology.c
+@@ -586,6 +586,10 @@ static int sof_copy_tuples(struct snd_sof_dev *sdev, struct snd_soc_tplg_vendor_
+ 				if (*num_copied_tuples == tuples_size)
+ 					return 0;
+ 			}
++
++			/* stop when we've found the required token instances */
++			if (found == num_tokens * token_instance_num)
++				return 0;
+ 		}
+ 
+ 		/* next array */
+diff --git a/sound/usb/format.c b/sound/usb/format.c
+index 4b1c5ba121f39..ab5fed9f55b60 100644
+--- a/sound/usb/format.c
++++ b/sound/usb/format.c
+@@ -423,6 +423,7 @@ static int line6_parse_audio_format_rates_quirk(struct snd_usb_audio *chip,
+ 	case USB_ID(0x0e41, 0x4248): /* Line6 Helix >= fw 2.82 */
+ 	case USB_ID(0x0e41, 0x4249): /* Line6 Helix Rack >= fw 2.82 */
+ 	case USB_ID(0x0e41, 0x424a): /* Line6 Helix LT >= fw 2.82 */
++	case USB_ID(0x0e41, 0x424b): /* Line6 Pod Go */
+ 	case USB_ID(0x19f7, 0x0011): /* Rode Rodecaster Pro */
+ 		return set_fixed_rate(fp, 48000, SNDRV_PCM_RATE_48000);
+ 	}
+diff --git a/tools/include/uapi/asm-generic/fcntl.h b/tools/include/uapi/asm-generic/fcntl.h
+index b02c8e0f40575..1c7a0f6632c09 100644
+--- a/tools/include/uapi/asm-generic/fcntl.h
++++ b/tools/include/uapi/asm-generic/fcntl.h
+@@ -91,7 +91,6 @@
+ 
+ /* a horrid kludge trying to make sure that this will fail on old kernels */
+ #define O_TMPFILE (__O_TMPFILE | O_DIRECTORY)
+-#define O_TMPFILE_MASK (__O_TMPFILE | O_DIRECTORY | O_CREAT)      
+ 
+ #ifndef O_NDELAY
+ #define O_NDELAY	O_NONBLOCK
+diff --git a/tools/perf/builtin-script.c b/tools/perf/builtin-script.c
+index 6f085602b7bd3..d8c174a719383 100644
+--- a/tools/perf/builtin-script.c
++++ b/tools/perf/builtin-script.c
+@@ -3652,6 +3652,13 @@ static int process_stat_config_event(struct perf_session *session __maybe_unused
+ 				     union perf_event *event)
+ {
+ 	perf_event__read_stat_config(&stat_config, &event->stat_config);
++
++	/*
++	 * Aggregation modes are not used since post-processing scripts are
++	 * supposed to take care of such requirements
++	 */
++	stat_config.aggr_mode = AGGR_NONE;
++
+ 	return 0;
+ }
+ 
+diff --git a/tools/power/cpupower/utils/idle_monitor/mperf_monitor.c b/tools/power/cpupower/utils/idle_monitor/mperf_monitor.c
+index e7d48cb563c0e..ae6af354a81db 100644
+--- a/tools/power/cpupower/utils/idle_monitor/mperf_monitor.c
++++ b/tools/power/cpupower/utils/idle_monitor/mperf_monitor.c
+@@ -70,8 +70,8 @@ static int max_freq_mode;
+  */
+ static unsigned long max_frequency;
+ 
+-static unsigned long long tsc_at_measure_start;
+-static unsigned long long tsc_at_measure_end;
++static unsigned long long *tsc_at_measure_start;
++static unsigned long long *tsc_at_measure_end;
+ static unsigned long long *mperf_previous_count;
+ static unsigned long long *aperf_previous_count;
+ static unsigned long long *mperf_current_count;
+@@ -169,7 +169,7 @@ static int mperf_get_count_percent(unsigned int id, double *percent,
+ 	aperf_diff = aperf_current_count[cpu] - aperf_previous_count[cpu];
+ 
+ 	if (max_freq_mode == MAX_FREQ_TSC_REF) {
+-		tsc_diff = tsc_at_measure_end - tsc_at_measure_start;
++		tsc_diff = tsc_at_measure_end[cpu] - tsc_at_measure_start[cpu];
+ 		*percent = 100.0 * mperf_diff / tsc_diff;
+ 		dprint("%s: TSC Ref - mperf_diff: %llu, tsc_diff: %llu\n",
+ 		       mperf_cstates[id].name, mperf_diff, tsc_diff);
+@@ -206,7 +206,7 @@ static int mperf_get_count_freq(unsigned int id, unsigned long long *count,
+ 
+ 	if (max_freq_mode == MAX_FREQ_TSC_REF) {
+ 		/* Calculate max_freq from TSC count */
+-		tsc_diff = tsc_at_measure_end - tsc_at_measure_start;
++		tsc_diff = tsc_at_measure_end[cpu] - tsc_at_measure_start[cpu];
+ 		time_diff = timespec_diff_us(time_start, time_end);
+ 		max_frequency = tsc_diff / time_diff;
+ 	}
+@@ -225,33 +225,27 @@ static int mperf_get_count_freq(unsigned int id, unsigned long long *count,
+ static int mperf_start(void)
+ {
+ 	int cpu;
+-	unsigned long long dbg;
+ 
+ 	clock_gettime(CLOCK_REALTIME, &time_start);
+-	mperf_get_tsc(&tsc_at_measure_start);
+ 
+-	for (cpu = 0; cpu < cpu_count; cpu++)
++	for (cpu = 0; cpu < cpu_count; cpu++) {
++		mperf_get_tsc(&tsc_at_measure_start[cpu]);
+ 		mperf_init_stats(cpu);
++	}
+ 
+-	mperf_get_tsc(&dbg);
+-	dprint("TSC diff: %llu\n", dbg - tsc_at_measure_start);
+ 	return 0;
+ }
+ 
+ static int mperf_stop(void)
+ {
+-	unsigned long long dbg;
+ 	int cpu;
+ 
+-	for (cpu = 0; cpu < cpu_count; cpu++)
++	for (cpu = 0; cpu < cpu_count; cpu++) {
+ 		mperf_measure_stats(cpu);
++		mperf_get_tsc(&tsc_at_measure_end[cpu]);
++	}
+ 
+-	mperf_get_tsc(&tsc_at_measure_end);
+ 	clock_gettime(CLOCK_REALTIME, &time_end);
+-
+-	mperf_get_tsc(&dbg);
+-	dprint("TSC diff: %llu\n", dbg - tsc_at_measure_end);
+-
+ 	return 0;
+ }
+ 
+@@ -353,7 +347,8 @@ struct cpuidle_monitor *mperf_register(void)
+ 	aperf_previous_count = calloc(cpu_count, sizeof(unsigned long long));
+ 	mperf_current_count = calloc(cpu_count, sizeof(unsigned long long));
+ 	aperf_current_count = calloc(cpu_count, sizeof(unsigned long long));
+-
++	tsc_at_measure_start = calloc(cpu_count, sizeof(unsigned long long));
++	tsc_at_measure_end = calloc(cpu_count, sizeof(unsigned long long));
+ 	mperf_monitor.name_len = strlen(mperf_monitor.name);
+ 	return &mperf_monitor;
+ }
+@@ -364,6 +359,8 @@ void mperf_unregister(void)
+ 	free(aperf_previous_count);
+ 	free(mperf_current_count);
+ 	free(aperf_current_count);
++	free(tsc_at_measure_start);
++	free(tsc_at_measure_end);
+ 	free(is_valid);
+ }
+ 
+diff --git a/tools/testing/selftests/cgroup/test_memcontrol.c b/tools/testing/selftests/cgroup/test_memcontrol.c
+index 1e616a8c6a9cf..f4f7c0aef702b 100644
+--- a/tools/testing/selftests/cgroup/test_memcontrol.c
++++ b/tools/testing/selftests/cgroup/test_memcontrol.c
+@@ -98,6 +98,11 @@ static int alloc_anon_50M_check(const char *cgroup, void *arg)
+ 	int ret = -1;
+ 
+ 	buf = malloc(size);
++	if (buf == NULL) {
++		fprintf(stderr, "malloc() failed\n");
++		return -1;
++	}
++
+ 	for (ptr = buf; ptr < buf + size; ptr += PAGE_SIZE)
+ 		*ptr = 0;
+ 
+@@ -211,6 +216,11 @@ static int alloc_anon_noexit(const char *cgroup, void *arg)
+ 	char *buf, *ptr;
+ 
+ 	buf = malloc(size);
++	if (buf == NULL) {
++		fprintf(stderr, "malloc() failed\n");
++		return -1;
++	}
++
+ 	for (ptr = buf; ptr < buf + size; ptr += PAGE_SIZE)
+ 		*ptr = 0;
+ 
+@@ -778,6 +788,11 @@ static int alloc_anon_50M_check_swap(const char *cgroup, void *arg)
+ 	int ret = -1;
+ 
+ 	buf = malloc(size);
++	if (buf == NULL) {
++		fprintf(stderr, "malloc() failed\n");
++		return -1;
++	}
++
+ 	for (ptr = buf; ptr < buf + size; ptr += PAGE_SIZE)
+ 		*ptr = 0;
+ 
+diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
+index c39a4353ba194..827647ff3d41b 100644
+--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
++++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
+@@ -954,6 +954,7 @@ struct kvm_x86_state *vcpu_save_state(struct kvm_vcpu *vcpu)
+ 	vcpu_run_complete_io(vcpu);
+ 
+ 	state = malloc(sizeof(*state) + msr_list->nmsrs * sizeof(state->msrs.entries[0]));
++	TEST_ASSERT(state, "-ENOMEM when allocating kvm state");
+ 
+ 	vcpu_events_get(vcpu, &state->events);
+ 	vcpu_mp_state_get(vcpu, &state->mp_state);
+diff --git a/tools/testing/selftests/net/fib_nexthops.sh b/tools/testing/selftests/net/fib_nexthops.sh
+index a47b26ab48f23..0f5e88c8f4ffe 100755
+--- a/tools/testing/selftests/net/fib_nexthops.sh
++++ b/tools/testing/selftests/net/fib_nexthops.sh
+@@ -2283,7 +2283,7 @@ EOF
+ ################################################################################
+ # main
+ 
+-while getopts :t:pP46hv:w: o
++while getopts :t:pP46hvw: o
+ do
+ 	case $o in
+ 		t) TESTS=$OPTARG;;
+diff --git a/tools/testing/selftests/net/srv6_end_dt4_l3vpn_test.sh b/tools/testing/selftests/net/srv6_end_dt4_l3vpn_test.sh
+index 1003119773e5d..f962823628119 100755
+--- a/tools/testing/selftests/net/srv6_end_dt4_l3vpn_test.sh
++++ b/tools/testing/selftests/net/srv6_end_dt4_l3vpn_test.sh
+@@ -232,10 +232,14 @@ setup_rt_networking()
+ 	local nsname=rt-${rt}
+ 
+ 	ip netns add ${nsname}
++
++	ip netns exec ${nsname} sysctl -wq net.ipv6.conf.all.accept_dad=0
++	ip netns exec ${nsname} sysctl -wq net.ipv6.conf.default.accept_dad=0
++
+ 	ip link set veth-rt-${rt} netns ${nsname}
+ 	ip -netns ${nsname} link set veth-rt-${rt} name veth0
+ 
+-	ip -netns ${nsname} addr add ${IPv6_RT_NETWORK}::${rt}/64 dev veth0
++	ip -netns ${nsname} addr add ${IPv6_RT_NETWORK}::${rt}/64 dev veth0 nodad
+ 	ip -netns ${nsname} link set veth0 up
+ 	ip -netns ${nsname} link set lo up
+ 
+@@ -254,6 +258,12 @@ setup_hs()
+ 
+ 	# set the networking for the host
+ 	ip netns add ${hsname}
++
++	# disable the rp_filter otherwise the kernel gets confused about how
++	# to route decap ipv4 packets.
++	ip netns exec ${rtname} sysctl -wq net.ipv4.conf.all.rp_filter=0
++	ip netns exec ${rtname} sysctl -wq net.ipv4.conf.default.rp_filter=0
++
+ 	ip -netns ${hsname} link add veth0 type veth peer name ${rtveth}
+ 	ip -netns ${hsname} link set ${rtveth} netns ${rtname}
+ 	ip -netns ${hsname} addr add ${IPv4_HS_NETWORK}.${hs}/24 dev veth0
+@@ -272,11 +282,6 @@ setup_hs()
+ 
+ 	ip netns exec ${rtname} sysctl -wq net.ipv4.conf.${rtveth}.proxy_arp=1
+ 
+-	# disable the rp_filter otherwise the kernel gets confused about how
+-	# to route decap ipv4 packets.
+-	ip netns exec ${rtname} sysctl -wq net.ipv4.conf.all.rp_filter=0
+-	ip netns exec ${rtname} sysctl -wq net.ipv4.conf.${rtveth}.rp_filter=0
+-
+ 	ip netns exec ${rtname} sh -c "echo 1 > /proc/sys/net/vrf/strict_mode"
+ }
+ 
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index b1679d08a2160..4ec97f72f8ce8 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -3959,18 +3959,19 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, u32 id)
+ 	}
+ 
+ 	vcpu->vcpu_idx = atomic_read(&kvm->online_vcpus);
+-	r = xa_insert(&kvm->vcpu_array, vcpu->vcpu_idx, vcpu, GFP_KERNEL_ACCOUNT);
+-	BUG_ON(r == -EBUSY);
++	r = xa_reserve(&kvm->vcpu_array, vcpu->vcpu_idx, GFP_KERNEL_ACCOUNT);
+ 	if (r)
+ 		goto unlock_vcpu_destroy;
+ 
+ 	/* Now it's all set up, let userspace reach it */
+ 	kvm_get_kvm(kvm);
+ 	r = create_vcpu_fd(vcpu);
+-	if (r < 0) {
+-		xa_erase(&kvm->vcpu_array, vcpu->vcpu_idx);
+-		kvm_put_kvm_no_destroy(kvm);
+-		goto unlock_vcpu_destroy;
++	if (r < 0)
++		goto kvm_put_xa_release;
++
++	if (KVM_BUG_ON(!!xa_store(&kvm->vcpu_array, vcpu->vcpu_idx, vcpu, 0), kvm)) {
++		r = -EINVAL;
++		goto kvm_put_xa_release;
+ 	}
+ 
+ 	/*
+@@ -3985,6 +3986,9 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, u32 id)
+ 	kvm_create_vcpu_debugfs(vcpu);
+ 	return r;
+ 
++kvm_put_xa_release:
++	kvm_put_kvm_no_destroy(kvm);
++	xa_release(&kvm->vcpu_array, vcpu->vcpu_idx);
+ unlock_vcpu_destroy:
+ 	mutex_unlock(&kvm->lock);
+ 	kvm_dirty_ring_free(&vcpu->dirty_ring);


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [gentoo-commits] proj/linux-patches:6.3 commit in: /
@ 2023-05-28 14:51 Mike Pagano
  0 siblings, 0 replies; 23+ messages in thread
From: Mike Pagano @ 2023-05-28 14:51 UTC (permalink / raw
  To: gentoo-commits

commit:     c2befb541fe938960594507653448fcf82bbe429
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun May 28 14:51:31 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun May 28 14:51:31 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=c2befb54

xfs: fix livelock in delayed allocation at ENOSPC

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                        |  4 ++
 ...fs-livelock-fix-in-delaye-alloc-at-ENOSPC.patch | 56 ++++++++++++++++++++++
 2 files changed, 60 insertions(+)

diff --git a/0000_README b/0000_README
index dfa966c1..447571a4 100644
--- a/0000_README
+++ b/0000_README
@@ -71,6 +71,10 @@ Patch:  1700_sparc-address-warray-bound-warnings.patch
 From:		https://github.com/KSPP/linux/issues/109
 Desc:		Address -Warray-bounds warnings 
 
+Patch:  1900_xfs-livelock-fix-in-delaye-alloc-at-ENOSPC.patch
+From:		https://git.kernel.org/pub/scm/linux/kernel/git/stable/stable-queue.git
+Desc:		xfs: fix livelock in delayed allocation at ENOSPC
+
 Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
 From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
 Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758

diff --git a/1900_xfs-livelock-fix-in-delaye-alloc-at-ENOSPC.patch b/1900_xfs-livelock-fix-in-delaye-alloc-at-ENOSPC.patch
new file mode 100644
index 00000000..e1e5726b
--- /dev/null
+++ b/1900_xfs-livelock-fix-in-delaye-alloc-at-ENOSPC.patch
@@ -0,0 +1,56 @@
+From 9419092fb2630c30e4ffeb9ef61007ef0c61827a Mon Sep 17 00:00:00 2001
+From: Dave Chinner <dchinner@redhat.com>
+Date: Thu, 27 Apr 2023 09:02:11 +1000
+Subject: xfs: fix livelock in delayed allocation at ENOSPC
+
+On a filesystem with a non-zero stripe unit and a large sequential
+write, delayed allocation will set a minimum allocation length of
+the stripe unit. If allocation fails because there are no extents
+long enough for an aligned minlen allocation, it is supposed to
+fall back to unaligned allocation which allows single block extents
+to be allocated.
+
+When the allocator code was rewritting in the 6.3 cycle, this
+fallback was broken - the old code used args->fsbno as the both the
+allocation target and the allocation result, the new code passes the
+target as a separate parameter. The conversion didn't handle the
+aligned->unaligned fallback path correctly - it reset args->fsbno to
+the target fsbno on failure which broke allocation failure detection
+in the high level code and so it never fell back to unaligned
+allocations.
+
+This resulted in a loop in writeback trying to allocate an aligned
+block, getting a false positive success, trying to insert the result
+in the BMBT. This did nothing because the extent already was in the
+BMBT (merge results in an unchanged extent) and so it returned the
+prior extent to the conversion code as the current iomap.
+
+Because the iomap returned didn't cover the offset we tried to map,
+xfs_convert_blocks() then retries the allocation, which fails in the
+same way and now we have a livelock.
+
+Reported-and-tested-by: Brian Foster <bfoster@redhat.com>
+Fixes: 85843327094f ("xfs: factor xfs_bmap_btalloc()")
+Signed-off-by: Dave Chinner <dchinner@redhat.com>
+Reviewed-by: Darrick J. Wong <djwong@kernel.org>
+---
+ fs/xfs/libxfs/xfs_bmap.c | 1 -
+ 1 file changed, 1 deletion(-)
+
+(limited to 'fs/xfs/libxfs/xfs_bmap.c')
+
+diff --git a/fs/xfs/libxfs/xfs_bmap.c b/fs/xfs/libxfs/xfs_bmap.c
+index 1a4e446194dd8..b512de0540d54 100644
+--- a/fs/xfs/libxfs/xfs_bmap.c
++++ b/fs/xfs/libxfs/xfs_bmap.c
+@@ -3540,7 +3540,6 @@ xfs_bmap_btalloc_at_eof(
+ 	 * original non-aligned state so the caller can proceed on allocation
+ 	 * failure as if this function was never called.
+ 	 */
+-	args->fsbno = ap->blkno;
+ 	args->alignment = 1;
+ 	return 0;
+ }
+-- 
+cgit 
+


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [gentoo-commits] proj/linux-patches:6.3 commit in: /
@ 2023-05-30 16:50 Mike Pagano
  0 siblings, 0 replies; 23+ messages in thread
From: Mike Pagano @ 2023-05-30 16:50 UTC (permalink / raw
  To: gentoo-commits

commit:     6b32c5adb2f3c6f9724b3d905802259a1f9b372c
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue May 30 16:50:24 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue May 30 16:50:24 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=6b32c5ad

Linux patch 6.3.5

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README            |    4 +
 1004_linux-6.3.5.patch | 4973 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 4977 insertions(+)

diff --git a/0000_README b/0000_README
index 447571a4..e0952135 100644
--- a/0000_README
+++ b/0000_README
@@ -59,6 +59,10 @@ Patch:  1003_linux-6.3.4.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.3.4
 
+Patch:  1004_linux-6.3.5.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.3.5
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1004_linux-6.3.5.patch b/1004_linux-6.3.5.patch
new file mode 100644
index 00000000..bdea91f3
--- /dev/null
+++ b/1004_linux-6.3.5.patch
@@ -0,0 +1,4973 @@
+diff --git a/Documentation/devicetree/bindings/usb/cdns,usb3.yaml b/Documentation/devicetree/bindings/usb/cdns,usb3.yaml
+index cae46c4982adf..69a93a0722f07 100644
+--- a/Documentation/devicetree/bindings/usb/cdns,usb3.yaml
++++ b/Documentation/devicetree/bindings/usb/cdns,usb3.yaml
+@@ -64,7 +64,7 @@ properties:
+     description:
+       size of memory intended as internal memory for endpoints
+       buffers expressed in KB
+-    $ref: /schemas/types.yaml#/definitions/uint32
++    $ref: /schemas/types.yaml#/definitions/uint16
+ 
+   cdns,phyrst-a-enable:
+     description: Enable resetting of PHY if Rx fail is detected
+diff --git a/Makefile b/Makefile
+index 3c5b606690182..d710ff6a3d566 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 3
+-SUBLEVEL = 4
++SUBLEVEL = 5
+ EXTRAVERSION =
+ NAME = Hurr durr I'ma ninja sloth
+ 
+diff --git a/arch/arm/boot/dts/imx6qdl-mba6.dtsi b/arch/arm/boot/dts/imx6qdl-mba6.dtsi
+index 78555a6188510..7b7e6c2ad190c 100644
+--- a/arch/arm/boot/dts/imx6qdl-mba6.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-mba6.dtsi
+@@ -209,6 +209,7 @@
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&pinctrl_pcie>;
+ 	reset-gpio = <&gpio6 7 GPIO_ACTIVE_LOW>;
++	vpcie-supply = <&reg_pcie>;
+ 	status = "okay";
+ };
+ 
+diff --git a/arch/arm64/boot/dts/freescale/imx8mn-var-som.dtsi b/arch/arm64/boot/dts/freescale/imx8mn-var-som.dtsi
+index 67072e6c77d5f..cbd9d124c80d0 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mn-var-som.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mn-var-som.dtsi
+@@ -98,11 +98,17 @@
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+ 
+-		ethphy: ethernet-phy@4 {
++		ethphy: ethernet-phy@4 { /* AR8033 or ADIN1300 */
+ 			compatible = "ethernet-phy-ieee802.3-c22";
+ 			reg = <4>;
+ 			reset-gpios = <&gpio1 9 GPIO_ACTIVE_LOW>;
+ 			reset-assert-us = <10000>;
++			/*
++			 * Deassert delay:
++			 * ADIN1300 requires 5ms.
++			 * AR8033   requires 1ms.
++			 */
++			reset-deassert-us = <20000>;
+ 		};
+ 	};
+ };
+diff --git a/arch/arm64/boot/dts/freescale/imx8mp.dtsi b/arch/arm64/boot/dts/freescale/imx8mp.dtsi
+index 3f9d67341484b..a237275ee0179 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mp.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mp.dtsi
+@@ -1151,7 +1151,7 @@
+ 
+ 			media_blk_ctrl: blk-ctrl@32ec0000 {
+ 				compatible = "fsl,imx8mp-media-blk-ctrl",
+-					     "syscon";
++					     "simple-bus", "syscon";
+ 				reg = <0x32ec0000 0x10000>;
+ 				#address-cells = <1>;
+ 				#size-cells = <1>;
+diff --git a/arch/m68k/kernel/signal.c b/arch/m68k/kernel/signal.c
+index b9f6908a31bc3..ba468b5f3f0b6 100644
+--- a/arch/m68k/kernel/signal.c
++++ b/arch/m68k/kernel/signal.c
+@@ -858,11 +858,17 @@ static inline int rt_setup_ucontext(struct ucontext __user *uc, struct pt_regs *
+ }
+ 
+ static inline void __user *
+-get_sigframe(struct ksignal *ksig, size_t frame_size)
++get_sigframe(struct ksignal *ksig, struct pt_regs *tregs, size_t frame_size)
+ {
+ 	unsigned long usp = sigsp(rdusp(), ksig);
++	unsigned long gap = 0;
+ 
+-	return (void __user *)((usp - frame_size) & -8UL);
++	if (CPU_IS_020_OR_030 && tregs->format == 0xb) {
++		/* USP is unreliable so use worst-case value */
++		gap = 256;
++	}
++
++	return (void __user *)((usp - gap - frame_size) & -8UL);
+ }
+ 
+ static int setup_frame(struct ksignal *ksig, sigset_t *set,
+@@ -880,7 +886,7 @@ static int setup_frame(struct ksignal *ksig, sigset_t *set,
+ 		return -EFAULT;
+ 	}
+ 
+-	frame = get_sigframe(ksig, sizeof(*frame) + fsize);
++	frame = get_sigframe(ksig, tregs, sizeof(*frame) + fsize);
+ 
+ 	if (fsize)
+ 		err |= copy_to_user (frame + 1, regs + 1, fsize);
+@@ -952,7 +958,7 @@ static int setup_rt_frame(struct ksignal *ksig, sigset_t *set,
+ 		return -EFAULT;
+ 	}
+ 
+-	frame = get_sigframe(ksig, sizeof(*frame));
++	frame = get_sigframe(ksig, tregs, sizeof(*frame));
+ 
+ 	if (fsize)
+ 		err |= copy_to_user (&frame->uc.uc_extra, regs + 1, fsize);
+diff --git a/arch/parisc/Kconfig b/arch/parisc/Kconfig
+index a98940e642432..67c26e81e2150 100644
+--- a/arch/parisc/Kconfig
++++ b/arch/parisc/Kconfig
+@@ -129,6 +129,10 @@ config PM
+ config STACKTRACE_SUPPORT
+ 	def_bool y
+ 
++config LOCKDEP_SUPPORT
++	bool
++	default y
++
+ config ISA_DMA_API
+ 	bool
+ 
+diff --git a/arch/parisc/include/asm/cacheflush.h b/arch/parisc/include/asm/cacheflush.h
+index 0bdee67241320..c8b6928cee1ee 100644
+--- a/arch/parisc/include/asm/cacheflush.h
++++ b/arch/parisc/include/asm/cacheflush.h
+@@ -48,6 +48,10 @@ void flush_dcache_page(struct page *page);
+ 
+ #define flush_dcache_mmap_lock(mapping)		xa_lock_irq(&mapping->i_pages)
+ #define flush_dcache_mmap_unlock(mapping)	xa_unlock_irq(&mapping->i_pages)
++#define flush_dcache_mmap_lock_irqsave(mapping, flags)		\
++		xa_lock_irqsave(&mapping->i_pages, flags)
++#define flush_dcache_mmap_unlock_irqrestore(mapping, flags)	\
++		xa_unlock_irqrestore(&mapping->i_pages, flags)
+ 
+ #define flush_icache_page(vma,page)	do { 		\
+ 	flush_kernel_dcache_page_addr(page_address(page)); \
+diff --git a/arch/parisc/kernel/alternative.c b/arch/parisc/kernel/alternative.c
+index 66f5672c70bd4..25c4d6c3375db 100644
+--- a/arch/parisc/kernel/alternative.c
++++ b/arch/parisc/kernel/alternative.c
+@@ -25,7 +25,7 @@ void __init_or_module apply_alternatives(struct alt_instr *start,
+ {
+ 	struct alt_instr *entry;
+ 	int index = 0, applied = 0;
+-	int num_cpus = num_online_cpus();
++	int num_cpus = num_present_cpus();
+ 	u16 cond_check;
+ 
+ 	cond_check = ALT_COND_ALWAYS |
+diff --git a/arch/parisc/kernel/cache.c b/arch/parisc/kernel/cache.c
+index 1d3b8bc8a6233..ca4a302d4365f 100644
+--- a/arch/parisc/kernel/cache.c
++++ b/arch/parisc/kernel/cache.c
+@@ -399,6 +399,7 @@ void flush_dcache_page(struct page *page)
+ 	unsigned long offset;
+ 	unsigned long addr, old_addr = 0;
+ 	unsigned long count = 0;
++	unsigned long flags;
+ 	pgoff_t pgoff;
+ 
+ 	if (mapping && !mapping_mapped(mapping)) {
+@@ -420,7 +421,7 @@ void flush_dcache_page(struct page *page)
+ 	 * to flush one address here for them all to become coherent
+ 	 * on machines that support equivalent aliasing
+ 	 */
+-	flush_dcache_mmap_lock(mapping);
++	flush_dcache_mmap_lock_irqsave(mapping, flags);
+ 	vma_interval_tree_foreach(mpnt, &mapping->i_mmap, pgoff, pgoff) {
+ 		offset = (pgoff - mpnt->vm_pgoff) << PAGE_SHIFT;
+ 		addr = mpnt->vm_start + offset;
+@@ -460,7 +461,7 @@ void flush_dcache_page(struct page *page)
+ 		}
+ 		WARN_ON(++count == 4096);
+ 	}
+-	flush_dcache_mmap_unlock(mapping);
++	flush_dcache_mmap_unlock_irqrestore(mapping, flags);
+ }
+ EXPORT_SYMBOL(flush_dcache_page);
+ 
+diff --git a/arch/parisc/kernel/process.c b/arch/parisc/kernel/process.c
+index c064719b49b09..ec48850b9273c 100644
+--- a/arch/parisc/kernel/process.c
++++ b/arch/parisc/kernel/process.c
+@@ -122,13 +122,18 @@ void machine_power_off(void)
+ 	/* It seems we have no way to power the system off via
+ 	 * software. The user has to press the button himself. */
+ 
+-	printk(KERN_EMERG "System shut down completed.\n"
+-	       "Please power this system off now.");
++	printk("Power off or press RETURN to reboot.\n");
+ 
+ 	/* prevent soft lockup/stalled CPU messages for endless loop. */
+ 	rcu_sysrq_start();
+ 	lockup_detector_soft_poweroff();
+-	for (;;);
++	while (1) {
++		/* reboot if user presses RETURN key */
++		if (pdc_iodc_getc() == 13) {
++			printk("Rebooting...\n");
++			machine_restart(NULL);
++		}
++	}
+ }
+ 
+ void (*pm_power_off)(void);
+diff --git a/arch/parisc/kernel/traps.c b/arch/parisc/kernel/traps.c
+index f9696fbf646c4..67b51841dc8b4 100644
+--- a/arch/parisc/kernel/traps.c
++++ b/arch/parisc/kernel/traps.c
+@@ -291,19 +291,19 @@ static void handle_break(struct pt_regs *regs)
+ 	}
+ 
+ #ifdef CONFIG_KPROBES
+-	if (unlikely(iir == PARISC_KPROBES_BREAK_INSN)) {
++	if (unlikely(iir == PARISC_KPROBES_BREAK_INSN && !user_mode(regs))) {
+ 		parisc_kprobe_break_handler(regs);
+ 		return;
+ 	}
+-	if (unlikely(iir == PARISC_KPROBES_BREAK_INSN2)) {
++	if (unlikely(iir == PARISC_KPROBES_BREAK_INSN2 && !user_mode(regs))) {
+ 		parisc_kprobe_ss_handler(regs);
+ 		return;
+ 	}
+ #endif
+ 
+ #ifdef CONFIG_KGDB
+-	if (unlikely(iir == PARISC_KGDB_COMPILED_BREAK_INSN ||
+-		iir == PARISC_KGDB_BREAK_INSN)) {
++	if (unlikely((iir == PARISC_KGDB_COMPILED_BREAK_INSN ||
++		iir == PARISC_KGDB_BREAK_INSN)) && !user_mode(regs)) {
+ 		kgdb_handle_exception(9, SIGTRAP, 0, regs);
+ 		return;
+ 	}
+diff --git a/arch/x86/events/intel/uncore_snbep.c b/arch/x86/events/intel/uncore_snbep.c
+index 7d1199554fe36..54abd93828bfc 100644
+--- a/arch/x86/events/intel/uncore_snbep.c
++++ b/arch/x86/events/intel/uncore_snbep.c
+@@ -6138,6 +6138,7 @@ static struct intel_uncore_type spr_uncore_mdf = {
+ };
+ 
+ #define UNCORE_SPR_NUM_UNCORE_TYPES		12
++#define UNCORE_SPR_CHA				0
+ #define UNCORE_SPR_IIO				1
+ #define UNCORE_SPR_IMC				6
+ #define UNCORE_SPR_UPI				8
+@@ -6448,12 +6449,22 @@ static int uncore_type_max_boxes(struct intel_uncore_type **types,
+ 	return max + 1;
+ }
+ 
++#define SPR_MSR_UNC_CBO_CONFIG		0x2FFE
++
+ void spr_uncore_cpu_init(void)
+ {
++	struct intel_uncore_type *type;
++	u64 num_cbo;
++
+ 	uncore_msr_uncores = uncore_get_uncores(UNCORE_ACCESS_MSR,
+ 						UNCORE_SPR_MSR_EXTRA_UNCORES,
+ 						spr_msr_uncores);
+ 
++	type = uncore_find_type_by_id(uncore_msr_uncores, UNCORE_SPR_CHA);
++	if (type) {
++		rdmsrl(SPR_MSR_UNC_CBO_CONFIG, num_cbo);
++		type->num_boxes = num_cbo;
++	}
+ 	spr_uncore_iio_free_running.num_boxes = uncore_type_max_boxes(uncore_msr_uncores, UNCORE_SPR_IIO);
+ }
+ 
+diff --git a/arch/x86/kernel/cpu/topology.c b/arch/x86/kernel/cpu/topology.c
+index 5e868b62a7c4e..0270925fe013b 100644
+--- a/arch/x86/kernel/cpu/topology.c
++++ b/arch/x86/kernel/cpu/topology.c
+@@ -79,7 +79,7 @@ int detect_extended_topology_early(struct cpuinfo_x86 *c)
+ 	 * initial apic id, which also represents 32-bit extended x2apic id.
+ 	 */
+ 	c->initial_apicid = edx;
+-	smp_num_siblings = LEVEL_MAX_SIBLINGS(ebx);
++	smp_num_siblings = max_t(int, smp_num_siblings, LEVEL_MAX_SIBLINGS(ebx));
+ #endif
+ 	return 0;
+ }
+@@ -109,7 +109,8 @@ int detect_extended_topology(struct cpuinfo_x86 *c)
+ 	 */
+ 	cpuid_count(leaf, SMT_LEVEL, &eax, &ebx, &ecx, &edx);
+ 	c->initial_apicid = edx;
+-	core_level_siblings = smp_num_siblings = LEVEL_MAX_SIBLINGS(ebx);
++	core_level_siblings = LEVEL_MAX_SIBLINGS(ebx);
++	smp_num_siblings = max_t(int, smp_num_siblings, LEVEL_MAX_SIBLINGS(ebx));
+ 	core_plus_mask_width = ht_mask_width = BITS_SHIFT_NEXT_LEVEL(eax);
+ 	die_level_siblings = LEVEL_MAX_SIBLINGS(ebx);
+ 	pkg_mask_width = die_plus_mask_width = BITS_SHIFT_NEXT_LEVEL(eax);
+diff --git a/arch/x86/kernel/dumpstack.c b/arch/x86/kernel/dumpstack.c
+index 0bf6779187dda..f18ca44c904b7 100644
+--- a/arch/x86/kernel/dumpstack.c
++++ b/arch/x86/kernel/dumpstack.c
+@@ -195,7 +195,6 @@ static void show_trace_log_lvl(struct task_struct *task, struct pt_regs *regs,
+ 	printk("%sCall Trace:\n", log_lvl);
+ 
+ 	unwind_start(&state, task, regs, stack);
+-	stack = stack ? : get_stack_pointer(task, regs);
+ 	regs = unwind_get_entry_regs(&state, &partial);
+ 
+ 	/*
+@@ -214,9 +213,13 @@ static void show_trace_log_lvl(struct task_struct *task, struct pt_regs *regs,
+ 	 * - hardirq stack
+ 	 * - entry stack
+ 	 */
+-	for ( ; stack; stack = PTR_ALIGN(stack_info.next_sp, sizeof(long))) {
++	for (stack = stack ?: get_stack_pointer(task, regs);
++	     stack;
++	     stack = stack_info.next_sp) {
+ 		const char *stack_name;
+ 
++		stack = PTR_ALIGN(stack, sizeof(long));
++
+ 		if (get_stack_info(stack, task, &stack_info, &visit_mask)) {
+ 			/*
+ 			 * We weren't on a valid stack.  It's possible that
+diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
+index cb258f58fdc87..913287b9340c9 100644
+--- a/arch/x86/mm/init.c
++++ b/arch/x86/mm/init.c
+@@ -9,6 +9,7 @@
+ #include <linux/sched/task.h>
+ 
+ #include <asm/set_memory.h>
++#include <asm/cpu_device_id.h>
+ #include <asm/e820/api.h>
+ #include <asm/init.h>
+ #include <asm/page.h>
+@@ -261,6 +262,24 @@ static void __init probe_page_size_mask(void)
+ 	}
+ }
+ 
++#define INTEL_MATCH(_model) { .vendor  = X86_VENDOR_INTEL,	\
++			      .family  = 6,			\
++			      .model = _model,			\
++			    }
++/*
++ * INVLPG may not properly flush Global entries
++ * on these CPUs when PCIDs are enabled.
++ */
++static const struct x86_cpu_id invlpg_miss_ids[] = {
++	INTEL_MATCH(INTEL_FAM6_ALDERLAKE   ),
++	INTEL_MATCH(INTEL_FAM6_ALDERLAKE_L ),
++	INTEL_MATCH(INTEL_FAM6_ALDERLAKE_N ),
++	INTEL_MATCH(INTEL_FAM6_RAPTORLAKE  ),
++	INTEL_MATCH(INTEL_FAM6_RAPTORLAKE_P),
++	INTEL_MATCH(INTEL_FAM6_RAPTORLAKE_S),
++	{}
++};
++
+ static void setup_pcid(void)
+ {
+ 	if (!IS_ENABLED(CONFIG_X86_64))
+@@ -269,6 +288,12 @@ static void setup_pcid(void)
+ 	if (!boot_cpu_has(X86_FEATURE_PCID))
+ 		return;
+ 
++	if (x86_match_cpu(invlpg_miss_ids)) {
++		pr_info("Incomplete global flushes, disabling PCID");
++		setup_clear_cpu_cap(X86_FEATURE_PCID);
++		return;
++	}
++
+ 	if (boot_cpu_has(X86_FEATURE_PGE)) {
+ 		/*
+ 		 * This can't be cr4_set_bits_and_update_boot() -- the
+diff --git a/arch/x86/pci/xen.c b/arch/x86/pci/xen.c
+index 8babce71915fe..014c508e914d9 100644
+--- a/arch/x86/pci/xen.c
++++ b/arch/x86/pci/xen.c
+@@ -198,7 +198,7 @@ static int xen_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
+ 		i++;
+ 	}
+ 	kfree(v);
+-	return 0;
++	return msi_device_populate_sysfs(&dev->dev);
+ 
+ error:
+ 	if (ret == -ENOSYS)
+@@ -254,7 +254,7 @@ static int xen_hvm_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
+ 		dev_dbg(&dev->dev,
+ 			"xen: msi --> pirq=%d --> irq=%d\n", pirq, irq);
+ 	}
+-	return 0;
++	return msi_device_populate_sysfs(&dev->dev);
+ 
+ error:
+ 	dev_err(&dev->dev, "Failed to create MSI%s! ret=%d!\n",
+@@ -346,7 +346,7 @@ static int xen_initdom_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
+ 		if (ret < 0)
+ 			goto out;
+ 	}
+-	ret = 0;
++	ret = msi_device_populate_sysfs(&dev->dev);
+ out:
+ 	return ret;
+ }
+@@ -394,6 +394,8 @@ static void xen_teardown_msi_irqs(struct pci_dev *dev)
+ 			xen_destroy_irq(msidesc->irq + i);
+ 		msidesc->irq = 0;
+ 	}
++
++	msi_device_destroy_sysfs(&dev->dev);
+ }
+ 
+ static void xen_pv_teardown_msi_irqs(struct pci_dev *dev)
+diff --git a/arch/xtensa/kernel/signal.c b/arch/xtensa/kernel/signal.c
+index 876d5df157ed9..5c01d7e70d90d 100644
+--- a/arch/xtensa/kernel/signal.c
++++ b/arch/xtensa/kernel/signal.c
+@@ -343,7 +343,19 @@ static int setup_frame(struct ksignal *ksig, sigset_t *set,
+ 	struct rt_sigframe *frame;
+ 	int err = 0, sig = ksig->sig;
+ 	unsigned long sp, ra, tp, ps;
++	unsigned long handler = (unsigned long)ksig->ka.sa.sa_handler;
++	unsigned long handler_fdpic_GOT = 0;
+ 	unsigned int base;
++	bool fdpic = IS_ENABLED(CONFIG_BINFMT_ELF_FDPIC) &&
++		(current->personality & FDPIC_FUNCPTRS);
++
++	if (fdpic) {
++		unsigned long __user *fdpic_func_desc =
++			(unsigned long __user *)handler;
++		if (__get_user(handler, &fdpic_func_desc[0]) ||
++		    __get_user(handler_fdpic_GOT, &fdpic_func_desc[1]))
++			return -EFAULT;
++	}
+ 
+ 	sp = regs->areg[1];
+ 
+@@ -373,20 +385,26 @@ static int setup_frame(struct ksignal *ksig, sigset_t *set,
+ 	err |= __copy_to_user(&frame->uc.uc_sigmask, set, sizeof(*set));
+ 
+ 	if (ksig->ka.sa.sa_flags & SA_RESTORER) {
+-		ra = (unsigned long)ksig->ka.sa.sa_restorer;
++		if (fdpic) {
++			unsigned long __user *fdpic_func_desc =
++				(unsigned long __user *)ksig->ka.sa.sa_restorer;
++
++			err |= __get_user(ra, fdpic_func_desc);
++		} else {
++			ra = (unsigned long)ksig->ka.sa.sa_restorer;
++		}
+ 	} else {
+ 
+ 		/* Create sys_rt_sigreturn syscall in stack frame */
+ 
+ 		err |= gen_return_code(frame->retcode);
+-
+-		if (err) {
+-			return -EFAULT;
+-		}
+ 		ra = (unsigned long) frame->retcode;
+ 	}
+ 
+-	/* 
++	if (err)
++		return -EFAULT;
++
++	/*
+ 	 * Create signal handler execution context.
+ 	 * Return context not modified until this point.
+ 	 */
+@@ -394,8 +412,7 @@ static int setup_frame(struct ksignal *ksig, sigset_t *set,
+ 	/* Set up registers for signal handler; preserve the threadptr */
+ 	tp = regs->threadptr;
+ 	ps = regs->ps;
+-	start_thread(regs, (unsigned long) ksig->ka.sa.sa_handler,
+-		     (unsigned long) frame);
++	start_thread(regs, handler, (unsigned long)frame);
+ 
+ 	/* Set up a stack frame for a call4 if userspace uses windowed ABI */
+ 	if (ps & PS_WOE_MASK) {
+@@ -413,6 +430,8 @@ static int setup_frame(struct ksignal *ksig, sigset_t *set,
+ 	regs->areg[base + 4] = (unsigned long) &frame->uc;
+ 	regs->threadptr = tp;
+ 	regs->ps = ps;
++	if (fdpic)
++		regs->areg[base + 11] = handler_fdpic_GOT;
+ 
+ 	pr_debug("SIG rt deliver (%s:%d): signal=%d sp=%p pc=%08lx\n",
+ 		 current->comm, current->pid, sig, frame, regs->pc);
+diff --git a/arch/xtensa/kernel/xtensa_ksyms.c b/arch/xtensa/kernel/xtensa_ksyms.c
+index 2a31b1ab0c9f2..17a7ef86fd0dd 100644
+--- a/arch/xtensa/kernel/xtensa_ksyms.c
++++ b/arch/xtensa/kernel/xtensa_ksyms.c
+@@ -56,6 +56,8 @@ EXPORT_SYMBOL(empty_zero_page);
+  */
+ extern long long __ashrdi3(long long, int);
+ extern long long __ashldi3(long long, int);
++extern long long __bswapdi2(long long);
++extern int __bswapsi2(int);
+ extern long long __lshrdi3(long long, int);
+ extern int __divsi3(int, int);
+ extern int __modsi3(int, int);
+@@ -66,6 +68,8 @@ extern unsigned long long __umulsidi3(unsigned int, unsigned int);
+ 
+ EXPORT_SYMBOL(__ashldi3);
+ EXPORT_SYMBOL(__ashrdi3);
++EXPORT_SYMBOL(__bswapdi2);
++EXPORT_SYMBOL(__bswapsi2);
+ EXPORT_SYMBOL(__lshrdi3);
+ EXPORT_SYMBOL(__divsi3);
+ EXPORT_SYMBOL(__modsi3);
+diff --git a/arch/xtensa/lib/Makefile b/arch/xtensa/lib/Makefile
+index 7ecef0519a27c..c9c2614188f74 100644
+--- a/arch/xtensa/lib/Makefile
++++ b/arch/xtensa/lib/Makefile
+@@ -4,7 +4,7 @@
+ #
+ 
+ lib-y	+= memcopy.o memset.o checksum.o \
+-	   ashldi3.o ashrdi3.o lshrdi3.o \
++	   ashldi3.o ashrdi3.o bswapdi2.o bswapsi2.o lshrdi3.o \
+ 	   divsi3.o udivsi3.o modsi3.o umodsi3.o mulsi3.o umulsidi3.o \
+ 	   usercopy.o strncpy_user.o strnlen_user.o
+ lib-$(CONFIG_PCI) += pci-auto.o
+diff --git a/arch/xtensa/lib/bswapdi2.S b/arch/xtensa/lib/bswapdi2.S
+new file mode 100644
+index 0000000000000..d8e52e05eba66
+--- /dev/null
++++ b/arch/xtensa/lib/bswapdi2.S
+@@ -0,0 +1,21 @@
++/* SPDX-License-Identifier: GPL-2.0-or-later WITH GCC-exception-2.0 */
++#include <linux/linkage.h>
++#include <asm/asmmacro.h>
++#include <asm/core.h>
++
++ENTRY(__bswapdi2)
++
++	abi_entry_default
++	ssai	8
++	srli	a4, a2, 16
++	src	a4, a4, a2
++	src	a4, a4, a4
++	src	a4, a2, a4
++	srli	a2, a3, 16
++	src	a2, a2, a3
++	src	a2, a2, a2
++	src	a2, a3, a2
++	mov	a3, a4
++	abi_ret_default
++
++ENDPROC(__bswapdi2)
+diff --git a/arch/xtensa/lib/bswapsi2.S b/arch/xtensa/lib/bswapsi2.S
+new file mode 100644
+index 0000000000000..9c1de1344f79a
+--- /dev/null
++++ b/arch/xtensa/lib/bswapsi2.S
+@@ -0,0 +1,16 @@
++/* SPDX-License-Identifier: GPL-2.0-or-later WITH GCC-exception-2.0 */
++#include <linux/linkage.h>
++#include <asm/asmmacro.h>
++#include <asm/core.h>
++
++ENTRY(__bswapsi2)
++
++	abi_entry_default
++	ssai	8
++	srli	a3, a2, 16
++	src	a3, a3, a2
++	src	a3, a3, a3
++	src	a2, a2, a3
++	abi_ret_default
++
++ENDPROC(__bswapsi2)
+diff --git a/block/blk-map.c b/block/blk-map.c
+index 9137d16cecdc3..9c03e641d32c9 100644
+--- a/block/blk-map.c
++++ b/block/blk-map.c
+@@ -247,7 +247,7 @@ static struct bio *blk_rq_map_bio_alloc(struct request *rq,
+ {
+ 	struct bio *bio;
+ 
+-	if (rq->cmd_flags & REQ_ALLOC_CACHE) {
++	if (rq->cmd_flags & REQ_ALLOC_CACHE && (nr_vecs <= BIO_INLINE_VECS)) {
+ 		bio = bio_alloc_bioset(NULL, nr_vecs, rq->cmd_flags, gfp_mask,
+ 					&fs_bio_set);
+ 		if (!bio)
+diff --git a/drivers/android/binder.c b/drivers/android/binder.c
+index fb56bfc45096d..8fb7672021ee2 100644
+--- a/drivers/android/binder.c
++++ b/drivers/android/binder.c
+@@ -1934,24 +1934,23 @@ static void binder_deferred_fd_close(int fd)
+ static void binder_transaction_buffer_release(struct binder_proc *proc,
+ 					      struct binder_thread *thread,
+ 					      struct binder_buffer *buffer,
+-					      binder_size_t failed_at,
++					      binder_size_t off_end_offset,
+ 					      bool is_failure)
+ {
+ 	int debug_id = buffer->debug_id;
+-	binder_size_t off_start_offset, buffer_offset, off_end_offset;
++	binder_size_t off_start_offset, buffer_offset;
+ 
+ 	binder_debug(BINDER_DEBUG_TRANSACTION,
+ 		     "%d buffer release %d, size %zd-%zd, failed at %llx\n",
+ 		     proc->pid, buffer->debug_id,
+ 		     buffer->data_size, buffer->offsets_size,
+-		     (unsigned long long)failed_at);
++		     (unsigned long long)off_end_offset);
+ 
+ 	if (buffer->target_node)
+ 		binder_dec_node(buffer->target_node, 1, 0);
+ 
+ 	off_start_offset = ALIGN(buffer->data_size, sizeof(void *));
+-	off_end_offset = is_failure && failed_at ? failed_at :
+-				off_start_offset + buffer->offsets_size;
++
+ 	for (buffer_offset = off_start_offset; buffer_offset < off_end_offset;
+ 	     buffer_offset += sizeof(binder_size_t)) {
+ 		struct binder_object_header *hdr;
+@@ -2111,6 +2110,21 @@ static void binder_transaction_buffer_release(struct binder_proc *proc,
+ 	}
+ }
+ 
++/* Clean up all the objects in the buffer */
++static inline void binder_release_entire_buffer(struct binder_proc *proc,
++						struct binder_thread *thread,
++						struct binder_buffer *buffer,
++						bool is_failure)
++{
++	binder_size_t off_end_offset;
++
++	off_end_offset = ALIGN(buffer->data_size, sizeof(void *));
++	off_end_offset += buffer->offsets_size;
++
++	binder_transaction_buffer_release(proc, thread, buffer,
++					  off_end_offset, is_failure);
++}
++
+ static int binder_translate_binder(struct flat_binder_object *fp,
+ 				   struct binder_transaction *t,
+ 				   struct binder_thread *thread)
+@@ -2806,7 +2820,7 @@ static int binder_proc_transaction(struct binder_transaction *t,
+ 		t_outdated->buffer = NULL;
+ 		buffer->transaction = NULL;
+ 		trace_binder_transaction_update_buffer_release(buffer);
+-		binder_transaction_buffer_release(proc, NULL, buffer, 0, 0);
++		binder_release_entire_buffer(proc, NULL, buffer, false);
+ 		binder_alloc_free_buf(&proc->alloc, buffer);
+ 		kfree(t_outdated);
+ 		binder_stats_deleted(BINDER_STAT_TRANSACTION);
+@@ -3775,7 +3789,7 @@ binder_free_buf(struct binder_proc *proc,
+ 		binder_node_inner_unlock(buf_node);
+ 	}
+ 	trace_binder_transaction_buffer_release(buffer);
+-	binder_transaction_buffer_release(proc, thread, buffer, 0, is_failure);
++	binder_release_entire_buffer(proc, thread, buffer, is_failure);
+ 	binder_alloc_free_buf(&proc->alloc, buffer);
+ }
+ 
+diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
+index 55a3c3c2409f0..662a2a2e2e84a 100644
+--- a/drivers/android/binder_alloc.c
++++ b/drivers/android/binder_alloc.c
+@@ -212,8 +212,8 @@ static int binder_update_page_range(struct binder_alloc *alloc, int allocate,
+ 		mm = alloc->mm;
+ 
+ 	if (mm) {
+-		mmap_read_lock(mm);
+-		vma = vma_lookup(mm, alloc->vma_addr);
++		mmap_write_lock(mm);
++		vma = alloc->vma;
+ 	}
+ 
+ 	if (!vma && need_mm) {
+@@ -270,7 +270,7 @@ static int binder_update_page_range(struct binder_alloc *alloc, int allocate,
+ 		trace_binder_alloc_page_end(alloc, index);
+ 	}
+ 	if (mm) {
+-		mmap_read_unlock(mm);
++		mmap_write_unlock(mm);
+ 		mmput(mm);
+ 	}
+ 	return 0;
+@@ -303,21 +303,24 @@ err_page_ptr_cleared:
+ 	}
+ err_no_vma:
+ 	if (mm) {
+-		mmap_read_unlock(mm);
++		mmap_write_unlock(mm);
+ 		mmput(mm);
+ 	}
+ 	return vma ? -ENOMEM : -ESRCH;
+ }
+ 
++static inline void binder_alloc_set_vma(struct binder_alloc *alloc,
++		struct vm_area_struct *vma)
++{
++	/* pairs with smp_load_acquire in binder_alloc_get_vma() */
++	smp_store_release(&alloc->vma, vma);
++}
++
+ static inline struct vm_area_struct *binder_alloc_get_vma(
+ 		struct binder_alloc *alloc)
+ {
+-	struct vm_area_struct *vma = NULL;
+-
+-	if (alloc->vma_addr)
+-		vma = vma_lookup(alloc->mm, alloc->vma_addr);
+-
+-	return vma;
++	/* pairs with smp_store_release in binder_alloc_set_vma() */
++	return smp_load_acquire(&alloc->vma);
+ }
+ 
+ static bool debug_low_async_space_locked(struct binder_alloc *alloc, int pid)
+@@ -380,15 +383,13 @@ static struct binder_buffer *binder_alloc_new_buf_locked(
+ 	size_t size, data_offsets_size;
+ 	int ret;
+ 
+-	mmap_read_lock(alloc->mm);
++	/* Check binder_alloc is fully initialized */
+ 	if (!binder_alloc_get_vma(alloc)) {
+-		mmap_read_unlock(alloc->mm);
+ 		binder_alloc_debug(BINDER_DEBUG_USER_ERROR,
+ 				   "%d: binder_alloc_buf, no vma\n",
+ 				   alloc->pid);
+ 		return ERR_PTR(-ESRCH);
+ 	}
+-	mmap_read_unlock(alloc->mm);
+ 
+ 	data_offsets_size = ALIGN(data_size, sizeof(void *)) +
+ 		ALIGN(offsets_size, sizeof(void *));
+@@ -778,7 +779,9 @@ int binder_alloc_mmap_handler(struct binder_alloc *alloc,
+ 	buffer->free = 1;
+ 	binder_insert_free_buffer(alloc, buffer);
+ 	alloc->free_async_space = alloc->buffer_size / 2;
+-	alloc->vma_addr = vma->vm_start;
++
++	/* Signal binder_alloc is fully initialized */
++	binder_alloc_set_vma(alloc, vma);
+ 
+ 	return 0;
+ 
+@@ -808,8 +811,7 @@ void binder_alloc_deferred_release(struct binder_alloc *alloc)
+ 
+ 	buffers = 0;
+ 	mutex_lock(&alloc->mutex);
+-	BUG_ON(alloc->vma_addr &&
+-	       vma_lookup(alloc->mm, alloc->vma_addr));
++	BUG_ON(alloc->vma);
+ 
+ 	while ((n = rb_first(&alloc->allocated_buffers))) {
+ 		buffer = rb_entry(n, struct binder_buffer, rb_node);
+@@ -916,25 +918,17 @@ void binder_alloc_print_pages(struct seq_file *m,
+ 	 * Make sure the binder_alloc is fully initialized, otherwise we might
+ 	 * read inconsistent state.
+ 	 */
+-
+-	mmap_read_lock(alloc->mm);
+-	if (binder_alloc_get_vma(alloc) == NULL) {
+-		mmap_read_unlock(alloc->mm);
+-		goto uninitialized;
+-	}
+-
+-	mmap_read_unlock(alloc->mm);
+-	for (i = 0; i < alloc->buffer_size / PAGE_SIZE; i++) {
+-		page = &alloc->pages[i];
+-		if (!page->page_ptr)
+-			free++;
+-		else if (list_empty(&page->lru))
+-			active++;
+-		else
+-			lru++;
++	if (binder_alloc_get_vma(alloc) != NULL) {
++		for (i = 0; i < alloc->buffer_size / PAGE_SIZE; i++) {
++			page = &alloc->pages[i];
++			if (!page->page_ptr)
++				free++;
++			else if (list_empty(&page->lru))
++				active++;
++			else
++				lru++;
++		}
+ 	}
+-
+-uninitialized:
+ 	mutex_unlock(&alloc->mutex);
+ 	seq_printf(m, "  pages: %d:%d:%d\n", active, lru, free);
+ 	seq_printf(m, "  pages high watermark: %zu\n", alloc->pages_high);
+@@ -969,7 +963,7 @@ int binder_alloc_get_allocated_count(struct binder_alloc *alloc)
+  */
+ void binder_alloc_vma_close(struct binder_alloc *alloc)
+ {
+-	alloc->vma_addr = 0;
++	binder_alloc_set_vma(alloc, NULL);
+ }
+ 
+ /**
+diff --git a/drivers/android/binder_alloc.h b/drivers/android/binder_alloc.h
+index 0f811ac4bcffd..138d1d5af9ce3 100644
+--- a/drivers/android/binder_alloc.h
++++ b/drivers/android/binder_alloc.h
+@@ -75,7 +75,7 @@ struct binder_lru_page {
+ /**
+  * struct binder_alloc - per-binder proc state for binder allocator
+  * @mutex:              protects binder_alloc fields
+- * @vma_addr:           vm_area_struct->vm_start passed to mmap_handler
++ * @vma:                vm_area_struct passed to mmap_handler
+  *                      (invariant after mmap)
+  * @mm:                 copy of task->mm (invariant after open)
+  * @buffer:             base of per-proc address space mapped via mmap
+@@ -99,7 +99,7 @@ struct binder_lru_page {
+  */
+ struct binder_alloc {
+ 	struct mutex mutex;
+-	unsigned long vma_addr;
++	struct vm_area_struct *vma;
+ 	struct mm_struct *mm;
+ 	void __user *buffer;
+ 	struct list_head buffers;
+diff --git a/drivers/android/binder_alloc_selftest.c b/drivers/android/binder_alloc_selftest.c
+index 43a881073a428..c2b323bc3b3a5 100644
+--- a/drivers/android/binder_alloc_selftest.c
++++ b/drivers/android/binder_alloc_selftest.c
+@@ -287,7 +287,7 @@ void binder_selftest_alloc(struct binder_alloc *alloc)
+ 	if (!binder_selftest_run)
+ 		return;
+ 	mutex_lock(&binder_selftest_lock);
+-	if (!binder_selftest_run || !alloc->vma_addr)
++	if (!binder_selftest_run || !alloc->vma)
+ 		goto done;
+ 	pr_info("STARTED\n");
+ 	binder_selftest_alloc_offset(alloc, end_offset, 0);
+diff --git a/drivers/char/tpm/tpm-chip.c b/drivers/char/tpm/tpm-chip.c
+index 2a05d8cc0e795..5be91591cb3b2 100644
+--- a/drivers/char/tpm/tpm-chip.c
++++ b/drivers/char/tpm/tpm-chip.c
+@@ -572,6 +572,10 @@ static int tpm_hwrng_read(struct hwrng *rng, void *data, size_t max, bool wait)
+ {
+ 	struct tpm_chip *chip = container_of(rng, struct tpm_chip, hwrng);
+ 
++	/* Give back zero bytes, as TPM chip has not yet fully resumed: */
++	if (chip->flags & TPM_CHIP_FLAG_SUSPENDED)
++		return 0;
++
+ 	return tpm_get_random(chip, data, max);
+ }
+ 
+@@ -605,6 +609,42 @@ static int tpm_get_pcr_allocation(struct tpm_chip *chip)
+ 	return rc;
+ }
+ 
++/*
++ * tpm_chip_bootstrap() - Boostrap TPM chip after power on
++ * @chip: TPM chip to use.
++ *
++ * Initialize TPM chip after power on. This a one-shot function: subsequent
++ * calls will have no effect.
++ */
++int tpm_chip_bootstrap(struct tpm_chip *chip)
++{
++	int rc;
++
++	if (chip->flags & TPM_CHIP_FLAG_BOOTSTRAPPED)
++		return 0;
++
++	rc = tpm_chip_start(chip);
++	if (rc)
++		return rc;
++
++	rc = tpm_auto_startup(chip);
++	if (rc)
++		goto stop;
++
++	rc = tpm_get_pcr_allocation(chip);
++stop:
++	tpm_chip_stop(chip);
++
++	/*
++	 * Unconditionally set, as driver initialization should cease, when the
++	 * boostrapping process fails.
++	 */
++	chip->flags |= TPM_CHIP_FLAG_BOOTSTRAPPED;
++
++	return rc;
++}
++EXPORT_SYMBOL_GPL(tpm_chip_bootstrap);
++
+ /*
+  * tpm_chip_register() - create a character device for the TPM chip
+  * @chip: TPM chip to use.
+@@ -620,17 +660,7 @@ int tpm_chip_register(struct tpm_chip *chip)
+ {
+ 	int rc;
+ 
+-	rc = tpm_chip_start(chip);
+-	if (rc)
+-		return rc;
+-	rc = tpm_auto_startup(chip);
+-	if (rc) {
+-		tpm_chip_stop(chip);
+-		return rc;
+-	}
+-
+-	rc = tpm_get_pcr_allocation(chip);
+-	tpm_chip_stop(chip);
++	rc = tpm_chip_bootstrap(chip);
+ 	if (rc)
+ 		return rc;
+ 
+diff --git a/drivers/char/tpm/tpm-interface.c b/drivers/char/tpm/tpm-interface.c
+index 7e513b7718320..0f941cb32eb17 100644
+--- a/drivers/char/tpm/tpm-interface.c
++++ b/drivers/char/tpm/tpm-interface.c
+@@ -412,6 +412,8 @@ int tpm_pm_suspend(struct device *dev)
+ 	}
+ 
+ suspended:
++	chip->flags |= TPM_CHIP_FLAG_SUSPENDED;
++
+ 	if (rc)
+ 		dev_err(dev, "Ignoring error %d while suspending\n", rc);
+ 	return 0;
+@@ -429,6 +431,14 @@ int tpm_pm_resume(struct device *dev)
+ 	if (chip == NULL)
+ 		return -ENODEV;
+ 
++	chip->flags &= ~TPM_CHIP_FLAG_SUSPENDED;
++
++	/*
++	 * Guarantee that SUSPENDED is written last, so that hwrng does not
++	 * activate before the chip has been fully resumed.
++	 */
++	wmb();
++
+ 	return 0;
+ }
+ EXPORT_SYMBOL_GPL(tpm_pm_resume);
+diff --git a/drivers/char/tpm/tpm.h b/drivers/char/tpm/tpm.h
+index 830014a266090..f6c99b3f00458 100644
+--- a/drivers/char/tpm/tpm.h
++++ b/drivers/char/tpm/tpm.h
+@@ -263,6 +263,7 @@ static inline void tpm_msleep(unsigned int delay_msec)
+ 		     delay_msec * 1000);
+ };
+ 
++int tpm_chip_bootstrap(struct tpm_chip *chip);
+ int tpm_chip_start(struct tpm_chip *chip);
+ void tpm_chip_stop(struct tpm_chip *chip);
+ struct tpm_chip *tpm_find_get_ops(struct tpm_chip *chip);
+diff --git a/drivers/char/tpm/tpm_tis.c b/drivers/char/tpm/tpm_tis.c
+index 4be19d8f3ca95..0d084d6652c41 100644
+--- a/drivers/char/tpm/tpm_tis.c
++++ b/drivers/char/tpm/tpm_tis.c
+@@ -243,7 +243,7 @@ static int tpm_tis_init(struct device *dev, struct tpm_info *tpm_info)
+ 		irq = tpm_info->irq;
+ 
+ 	if (itpm || is_itpm(ACPI_COMPANION(dev)))
+-		phy->priv.flags |= TPM_TIS_ITPM_WORKAROUND;
++		set_bit(TPM_TIS_ITPM_WORKAROUND, &phy->priv.flags);
+ 
+ 	return tpm_tis_core_init(dev, &phy->priv, irq, &tpm_tcg,
+ 				 ACPI_HANDLE(dev));
+diff --git a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c
+index eecfbd7e97867..f02b583005a53 100644
+--- a/drivers/char/tpm/tpm_tis_core.c
++++ b/drivers/char/tpm/tpm_tis_core.c
+@@ -53,41 +53,63 @@ static int wait_for_tpm_stat(struct tpm_chip *chip, u8 mask,
+ 	long rc;
+ 	u8 status;
+ 	bool canceled = false;
++	u8 sts_mask = 0;
++	int ret = 0;
+ 
+ 	/* check current status */
+ 	status = chip->ops->status(chip);
+ 	if ((status & mask) == mask)
+ 		return 0;
+ 
+-	stop = jiffies + timeout;
++	/* check what status changes can be handled by irqs */
++	if (priv->int_mask & TPM_INTF_STS_VALID_INT)
++		sts_mask |= TPM_STS_VALID;
+ 
+-	if (chip->flags & TPM_CHIP_FLAG_IRQ) {
++	if (priv->int_mask & TPM_INTF_DATA_AVAIL_INT)
++		sts_mask |= TPM_STS_DATA_AVAIL;
++
++	if (priv->int_mask & TPM_INTF_CMD_READY_INT)
++		sts_mask |= TPM_STS_COMMAND_READY;
++
++	sts_mask &= mask;
++
++	stop = jiffies + timeout;
++	/* process status changes with irq support */
++	if (sts_mask) {
++		ret = -ETIME;
+ again:
+ 		timeout = stop - jiffies;
+ 		if ((long)timeout <= 0)
+ 			return -ETIME;
+ 		rc = wait_event_interruptible_timeout(*queue,
+-			wait_for_tpm_stat_cond(chip, mask, check_cancel,
++			wait_for_tpm_stat_cond(chip, sts_mask, check_cancel,
+ 					       &canceled),
+ 			timeout);
+ 		if (rc > 0) {
+ 			if (canceled)
+ 				return -ECANCELED;
+-			return 0;
++			ret = 0;
+ 		}
+ 		if (rc == -ERESTARTSYS && freezing(current)) {
+ 			clear_thread_flag(TIF_SIGPENDING);
+ 			goto again;
+ 		}
+-	} else {
+-		do {
+-			usleep_range(priv->timeout_min,
+-				     priv->timeout_max);
+-			status = chip->ops->status(chip);
+-			if ((status & mask) == mask)
+-				return 0;
+-		} while (time_before(jiffies, stop));
+ 	}
++
++	if (ret)
++		return ret;
++
++	mask &= ~sts_mask;
++	if (!mask) /* all done */
++		return 0;
++	/* process status changes without irq support */
++	do {
++		status = chip->ops->status(chip);
++		if ((status & mask) == mask)
++			return 0;
++		usleep_range(priv->timeout_min,
++			     priv->timeout_max);
++	} while (time_before(jiffies, stop));
+ 	return -ETIME;
+ }
+ 
+@@ -376,7 +398,7 @@ static int tpm_tis_send_data(struct tpm_chip *chip, const u8 *buf, size_t len)
+ 	struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);
+ 	int rc, status, burstcnt;
+ 	size_t count = 0;
+-	bool itpm = priv->flags & TPM_TIS_ITPM_WORKAROUND;
++	bool itpm = test_bit(TPM_TIS_ITPM_WORKAROUND, &priv->flags);
+ 
+ 	status = tpm_tis_status(chip);
+ 	if ((status & TPM_STS_COMMAND_READY) == 0) {
+@@ -509,7 +531,8 @@ static int tpm_tis_send(struct tpm_chip *chip, u8 *buf, size_t len)
+ 	int rc, irq;
+ 	struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);
+ 
+-	if (!(chip->flags & TPM_CHIP_FLAG_IRQ) || priv->irq_tested)
++	if (!(chip->flags & TPM_CHIP_FLAG_IRQ) ||
++	     test_bit(TPM_TIS_IRQ_TESTED, &priv->flags))
+ 		return tpm_tis_send_main(chip, buf, len);
+ 
+ 	/* Verify receipt of the expected IRQ */
+@@ -519,11 +542,11 @@ static int tpm_tis_send(struct tpm_chip *chip, u8 *buf, size_t len)
+ 	rc = tpm_tis_send_main(chip, buf, len);
+ 	priv->irq = irq;
+ 	chip->flags |= TPM_CHIP_FLAG_IRQ;
+-	if (!priv->irq_tested)
++	if (!test_bit(TPM_TIS_IRQ_TESTED, &priv->flags))
+ 		tpm_msleep(1);
+-	if (!priv->irq_tested)
++	if (!test_bit(TPM_TIS_IRQ_TESTED, &priv->flags))
+ 		disable_interrupts(chip);
+-	priv->irq_tested = true;
++	set_bit(TPM_TIS_IRQ_TESTED, &priv->flags);
+ 	return rc;
+ }
+ 
+@@ -666,7 +689,7 @@ static int probe_itpm(struct tpm_chip *chip)
+ 	size_t len = sizeof(cmd_getticks);
+ 	u16 vendor;
+ 
+-	if (priv->flags & TPM_TIS_ITPM_WORKAROUND)
++	if (test_bit(TPM_TIS_ITPM_WORKAROUND, &priv->flags))
+ 		return 0;
+ 
+ 	rc = tpm_tis_read16(priv, TPM_DID_VID(0), &vendor);
+@@ -686,13 +709,13 @@ static int probe_itpm(struct tpm_chip *chip)
+ 
+ 	tpm_tis_ready(chip);
+ 
+-	priv->flags |= TPM_TIS_ITPM_WORKAROUND;
++	set_bit(TPM_TIS_ITPM_WORKAROUND, &priv->flags);
+ 
+ 	rc = tpm_tis_send_data(chip, cmd_getticks, len);
+ 	if (rc == 0)
+ 		dev_info(&chip->dev, "Detected an iTPM.\n");
+ 	else {
+-		priv->flags &= ~TPM_TIS_ITPM_WORKAROUND;
++		clear_bit(TPM_TIS_ITPM_WORKAROUND, &priv->flags);
+ 		rc = -EFAULT;
+ 	}
+ 
+@@ -736,7 +759,7 @@ static irqreturn_t tis_int_handler(int dummy, void *dev_id)
+ 	if (interrupt == 0)
+ 		return IRQ_NONE;
+ 
+-	priv->irq_tested = true;
++	set_bit(TPM_TIS_IRQ_TESTED, &priv->flags);
+ 	if (interrupt & TPM_INTF_DATA_AVAIL_INT)
+ 		wake_up_interruptible(&priv->read_queue);
+ 	if (interrupt & TPM_INTF_LOCALITY_CHANGE_INT)
+@@ -819,7 +842,7 @@ static int tpm_tis_probe_irq_single(struct tpm_chip *chip, u32 intmask,
+ 	if (rc < 0)
+ 		goto restore_irqs;
+ 
+-	priv->irq_tested = false;
++	clear_bit(TPM_TIS_IRQ_TESTED, &priv->flags);
+ 
+ 	/* Generate an interrupt by having the core call through to
+ 	 * tpm_tis_send
+@@ -1031,8 +1054,40 @@ int tpm_tis_core_init(struct device *dev, struct tpm_tis_data *priv, int irq,
+ 	if (rc < 0)
+ 		goto out_err;
+ 
+-	intmask |= TPM_INTF_CMD_READY_INT | TPM_INTF_LOCALITY_CHANGE_INT |
+-		   TPM_INTF_DATA_AVAIL_INT | TPM_INTF_STS_VALID_INT;
++	/* Figure out the capabilities */
++	rc = tpm_tis_read32(priv, TPM_INTF_CAPS(priv->locality), &intfcaps);
++	if (rc < 0)
++		goto out_err;
++
++	dev_dbg(dev, "TPM interface capabilities (0x%x):\n",
++		intfcaps);
++	if (intfcaps & TPM_INTF_BURST_COUNT_STATIC)
++		dev_dbg(dev, "\tBurst Count Static\n");
++	if (intfcaps & TPM_INTF_CMD_READY_INT) {
++		intmask |= TPM_INTF_CMD_READY_INT;
++		dev_dbg(dev, "\tCommand Ready Int Support\n");
++	}
++	if (intfcaps & TPM_INTF_INT_EDGE_FALLING)
++		dev_dbg(dev, "\tInterrupt Edge Falling\n");
++	if (intfcaps & TPM_INTF_INT_EDGE_RISING)
++		dev_dbg(dev, "\tInterrupt Edge Rising\n");
++	if (intfcaps & TPM_INTF_INT_LEVEL_LOW)
++		dev_dbg(dev, "\tInterrupt Level Low\n");
++	if (intfcaps & TPM_INTF_INT_LEVEL_HIGH)
++		dev_dbg(dev, "\tInterrupt Level High\n");
++	if (intfcaps & TPM_INTF_LOCALITY_CHANGE_INT) {
++		intmask |= TPM_INTF_LOCALITY_CHANGE_INT;
++		dev_dbg(dev, "\tLocality Change Int Support\n");
++	}
++	if (intfcaps & TPM_INTF_STS_VALID_INT) {
++		intmask |= TPM_INTF_STS_VALID_INT;
++		dev_dbg(dev, "\tSts Valid Int Support\n");
++	}
++	if (intfcaps & TPM_INTF_DATA_AVAIL_INT) {
++		intmask |= TPM_INTF_DATA_AVAIL_INT;
++		dev_dbg(dev, "\tData Avail Int Support\n");
++	}
++
+ 	intmask &= ~TPM_GLOBAL_INT_ENABLE;
+ 
+ 	rc = tpm_tis_request_locality(chip, 0);
+@@ -1066,35 +1121,14 @@ int tpm_tis_core_init(struct device *dev, struct tpm_tis_data *priv, int irq,
+ 		goto out_err;
+ 	}
+ 
+-	/* Figure out the capabilities */
+-	rc = tpm_tis_read32(priv, TPM_INTF_CAPS(priv->locality), &intfcaps);
+-	if (rc < 0)
+-		goto out_err;
+-
+-	dev_dbg(dev, "TPM interface capabilities (0x%x):\n",
+-		intfcaps);
+-	if (intfcaps & TPM_INTF_BURST_COUNT_STATIC)
+-		dev_dbg(dev, "\tBurst Count Static\n");
+-	if (intfcaps & TPM_INTF_CMD_READY_INT)
+-		dev_dbg(dev, "\tCommand Ready Int Support\n");
+-	if (intfcaps & TPM_INTF_INT_EDGE_FALLING)
+-		dev_dbg(dev, "\tInterrupt Edge Falling\n");
+-	if (intfcaps & TPM_INTF_INT_EDGE_RISING)
+-		dev_dbg(dev, "\tInterrupt Edge Rising\n");
+-	if (intfcaps & TPM_INTF_INT_LEVEL_LOW)
+-		dev_dbg(dev, "\tInterrupt Level Low\n");
+-	if (intfcaps & TPM_INTF_INT_LEVEL_HIGH)
+-		dev_dbg(dev, "\tInterrupt Level High\n");
+-	if (intfcaps & TPM_INTF_LOCALITY_CHANGE_INT)
+-		dev_dbg(dev, "\tLocality Change Int Support\n");
+-	if (intfcaps & TPM_INTF_STS_VALID_INT)
+-		dev_dbg(dev, "\tSts Valid Int Support\n");
+-	if (intfcaps & TPM_INTF_DATA_AVAIL_INT)
+-		dev_dbg(dev, "\tData Avail Int Support\n");
+-
+ 	/* INTERRUPT Setup */
+ 	init_waitqueue_head(&priv->read_queue);
+ 	init_waitqueue_head(&priv->int_queue);
++
++	rc = tpm_chip_bootstrap(chip);
++	if (rc)
++		goto out_err;
++
+ 	if (irq != -1) {
+ 		/*
+ 		 * Before doing irq testing issue a command to the TPM in polling mode
+@@ -1122,7 +1156,9 @@ int tpm_tis_core_init(struct device *dev, struct tpm_tis_data *priv, int irq,
+ 		else
+ 			tpm_tis_probe_irq(chip, intmask);
+ 
+-		if (!(chip->flags & TPM_CHIP_FLAG_IRQ)) {
++		if (chip->flags & TPM_CHIP_FLAG_IRQ) {
++			priv->int_mask = intmask;
++		} else {
+ 			dev_err(&chip->dev, FW_BUG
+ 					"TPM interrupt not working, polling instead\n");
+ 
+@@ -1159,31 +1195,20 @@ static void tpm_tis_reenable_interrupts(struct tpm_chip *chip)
+ 	u32 intmask;
+ 	int rc;
+ 
+-	if (chip->ops->clk_enable != NULL)
+-		chip->ops->clk_enable(chip, true);
+-
+-	/* reenable interrupts that device may have lost or
+-	 * BIOS/firmware may have disabled
++	/*
++	 * Re-enable interrupts that device may have lost or BIOS/firmware may
++	 * have disabled.
+ 	 */
+ 	rc = tpm_tis_write8(priv, TPM_INT_VECTOR(priv->locality), priv->irq);
+-	if (rc < 0)
+-		goto out;
++	if (rc < 0) {
++		dev_err(&chip->dev, "Setting IRQ failed.\n");
++		return;
++	}
+ 
+-	rc = tpm_tis_read32(priv, TPM_INT_ENABLE(priv->locality), &intmask);
++	intmask = priv->int_mask | TPM_GLOBAL_INT_ENABLE;
++	rc = tpm_tis_write32(priv, TPM_INT_ENABLE(priv->locality), intmask);
+ 	if (rc < 0)
+-		goto out;
+-
+-	intmask |= TPM_INTF_CMD_READY_INT
+-	    | TPM_INTF_LOCALITY_CHANGE_INT | TPM_INTF_DATA_AVAIL_INT
+-	    | TPM_INTF_STS_VALID_INT | TPM_GLOBAL_INT_ENABLE;
+-
+-	tpm_tis_write32(priv, TPM_INT_ENABLE(priv->locality), intmask);
+-
+-out:
+-	if (chip->ops->clk_enable != NULL)
+-		chip->ops->clk_enable(chip, false);
+-
+-	return;
++		dev_err(&chip->dev, "Enabling interrupts failed.\n");
+ }
+ 
+ int tpm_tis_resume(struct device *dev)
+@@ -1191,27 +1216,27 @@ int tpm_tis_resume(struct device *dev)
+ 	struct tpm_chip *chip = dev_get_drvdata(dev);
+ 	int ret;
+ 
+-	ret = tpm_tis_request_locality(chip, 0);
+-	if (ret < 0)
++	ret = tpm_chip_start(chip);
++	if (ret)
+ 		return ret;
+ 
+ 	if (chip->flags & TPM_CHIP_FLAG_IRQ)
+ 		tpm_tis_reenable_interrupts(chip);
+ 
+-	ret = tpm_pm_resume(dev);
+-	if (ret)
+-		goto out;
+-
+ 	/*
+ 	 * TPM 1.2 requires self-test on resume. This function actually returns
+ 	 * an error code but for unknown reason it isn't handled.
+ 	 */
+ 	if (!(chip->flags & TPM_CHIP_FLAG_TPM2))
+ 		tpm1_do_selftest(chip);
+-out:
+-	tpm_tis_relinquish_locality(chip, 0);
+ 
+-	return ret;
++	tpm_chip_stop(chip);
++
++	ret = tpm_pm_resume(dev);
++	if (ret)
++		return ret;
++
++	return 0;
+ }
+ EXPORT_SYMBOL_GPL(tpm_tis_resume);
+ #endif
+diff --git a/drivers/char/tpm/tpm_tis_core.h b/drivers/char/tpm/tpm_tis_core.h
+index 1d51d5168fb6e..e978f457fd4d4 100644
+--- a/drivers/char/tpm/tpm_tis_core.h
++++ b/drivers/char/tpm/tpm_tis_core.h
+@@ -87,6 +87,7 @@ enum tpm_tis_flags {
+ 	TPM_TIS_ITPM_WORKAROUND		= BIT(0),
+ 	TPM_TIS_INVALID_STATUS		= BIT(1),
+ 	TPM_TIS_DEFAULT_CANCELLATION	= BIT(2),
++	TPM_TIS_IRQ_TESTED		= BIT(3),
+ };
+ 
+ struct tpm_tis_data {
+@@ -95,7 +96,7 @@ struct tpm_tis_data {
+ 	unsigned int locality_count;
+ 	int locality;
+ 	int irq;
+-	bool irq_tested;
++	unsigned int int_mask;
+ 	unsigned long flags;
+ 	void __iomem *ilb_base_addr;
+ 	u16 clkrun_enabled;
+diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c
+index f2addb4571723..114e15d02bdee 100644
+--- a/drivers/cxl/core/mbox.c
++++ b/drivers/cxl/core/mbox.c
+@@ -984,7 +984,7 @@ static int cxl_mem_get_partition_info(struct cxl_dev_state *cxlds)
+  * cxl_dev_state_identify() - Send the IDENTIFY command to the device.
+  * @cxlds: The device data for the operation
+  *
+- * Return: 0 if identify was executed successfully.
++ * Return: 0 if identify was executed successfully or media not ready.
+  *
+  * This will dispatch the identify command to the device and on success populate
+  * structures to be exported to sysfs.
+@@ -996,6 +996,9 @@ int cxl_dev_state_identify(struct cxl_dev_state *cxlds)
+ 	struct cxl_mbox_cmd mbox_cmd;
+ 	int rc;
+ 
++	if (!cxlds->media_ready)
++		return 0;
++
+ 	mbox_cmd = (struct cxl_mbox_cmd) {
+ 		.opcode = CXL_MBOX_OP_IDENTIFY,
+ 		.size_out = sizeof(id),
+@@ -1065,10 +1068,12 @@ int cxl_mem_create_range_info(struct cxl_dev_state *cxlds)
+ 				   cxlds->persistent_only_bytes, "pmem");
+ 	}
+ 
+-	rc = cxl_mem_get_partition_info(cxlds);
+-	if (rc) {
+-		dev_err(dev, "Failed to query partition information\n");
+-		return rc;
++	if (cxlds->media_ready) {
++		rc = cxl_mem_get_partition_info(cxlds);
++		if (rc) {
++			dev_err(dev, "Failed to query partition information\n");
++			return rc;
++		}
+ 	}
+ 
+ 	rc = add_dpa_res(dev, &cxlds->dpa_res, &cxlds->ram_res, 0,
+diff --git a/drivers/cxl/core/pci.c b/drivers/cxl/core/pci.c
+index 523d5b9fd7fcf..2055d0b9d4af1 100644
+--- a/drivers/cxl/core/pci.c
++++ b/drivers/cxl/core/pci.c
+@@ -101,23 +101,57 @@ int devm_cxl_port_enumerate_dports(struct cxl_port *port)
+ }
+ EXPORT_SYMBOL_NS_GPL(devm_cxl_port_enumerate_dports, CXL);
+ 
+-/*
+- * Wait up to @media_ready_timeout for the device to report memory
+- * active.
+- */
+-int cxl_await_media_ready(struct cxl_dev_state *cxlds)
++static int cxl_dvsec_mem_range_valid(struct cxl_dev_state *cxlds, int id)
++{
++	struct pci_dev *pdev = to_pci_dev(cxlds->dev);
++	int d = cxlds->cxl_dvsec;
++	bool valid = false;
++	int rc, i;
++	u32 temp;
++
++	if (id > CXL_DVSEC_RANGE_MAX)
++		return -EINVAL;
++
++	/* Check MEM INFO VALID bit first, give up after 1s */
++	i = 1;
++	do {
++		rc = pci_read_config_dword(pdev,
++					   d + CXL_DVSEC_RANGE_SIZE_LOW(id),
++					   &temp);
++		if (rc)
++			return rc;
++
++		valid = FIELD_GET(CXL_DVSEC_MEM_INFO_VALID, temp);
++		if (valid)
++			break;
++		msleep(1000);
++	} while (i--);
++
++	if (!valid) {
++		dev_err(&pdev->dev,
++			"Timeout awaiting memory range %d valid after 1s.\n",
++			id);
++		return -ETIMEDOUT;
++	}
++
++	return 0;
++}
++
++static int cxl_dvsec_mem_range_active(struct cxl_dev_state *cxlds, int id)
+ {
+ 	struct pci_dev *pdev = to_pci_dev(cxlds->dev);
+ 	int d = cxlds->cxl_dvsec;
+ 	bool active = false;
+-	u64 md_status;
+ 	int rc, i;
++	u32 temp;
+ 
+-	for (i = media_ready_timeout; i; i--) {
+-		u32 temp;
++	if (id > CXL_DVSEC_RANGE_MAX)
++		return -EINVAL;
+ 
++	/* Check MEM ACTIVE bit, up to 60s timeout by default */
++	for (i = media_ready_timeout; i; i--) {
+ 		rc = pci_read_config_dword(
+-			pdev, d + CXL_DVSEC_RANGE_SIZE_LOW(0), &temp);
++			pdev, d + CXL_DVSEC_RANGE_SIZE_LOW(id), &temp);
+ 		if (rc)
+ 			return rc;
+ 
+@@ -134,6 +168,39 @@ int cxl_await_media_ready(struct cxl_dev_state *cxlds)
+ 		return -ETIMEDOUT;
+ 	}
+ 
++	return 0;
++}
++
++/*
++ * Wait up to @media_ready_timeout for the device to report memory
++ * active.
++ */
++int cxl_await_media_ready(struct cxl_dev_state *cxlds)
++{
++	struct pci_dev *pdev = to_pci_dev(cxlds->dev);
++	int d = cxlds->cxl_dvsec;
++	int rc, i, hdm_count;
++	u64 md_status;
++	u16 cap;
++
++	rc = pci_read_config_word(pdev,
++				  d + CXL_DVSEC_CAP_OFFSET, &cap);
++	if (rc)
++		return rc;
++
++	hdm_count = FIELD_GET(CXL_DVSEC_HDM_COUNT_MASK, cap);
++	for (i = 0; i < hdm_count; i++) {
++		rc = cxl_dvsec_mem_range_valid(cxlds, i);
++		if (rc)
++			return rc;
++	}
++
++	for (i = 0; i < hdm_count; i++) {
++		rc = cxl_dvsec_mem_range_active(cxlds, i);
++		if (rc)
++			return rc;
++	}
++
+ 	md_status = readq(cxlds->regs.memdev + CXLMDEV_STATUS_OFFSET);
+ 	if (!CXLMDEV_READY(md_status))
+ 		return -EIO;
+@@ -241,17 +308,36 @@ static void disable_hdm(void *_cxlhdm)
+ 	       hdm + CXL_HDM_DECODER_CTRL_OFFSET);
+ }
+ 
+-static int devm_cxl_enable_hdm(struct device *host, struct cxl_hdm *cxlhdm)
++int devm_cxl_enable_hdm(struct cxl_port *port, struct cxl_hdm *cxlhdm)
+ {
+-	void __iomem *hdm = cxlhdm->regs.hdm_decoder;
++	void __iomem *hdm;
+ 	u32 global_ctrl;
+ 
++	/*
++	 * If the hdm capability was not mapped there is nothing to enable and
++	 * the caller is responsible for what happens next.  For example,
++	 * emulate a passthrough decoder.
++	 */
++	if (IS_ERR(cxlhdm))
++		return 0;
++
++	hdm = cxlhdm->regs.hdm_decoder;
+ 	global_ctrl = readl(hdm + CXL_HDM_DECODER_CTRL_OFFSET);
++
++	/*
++	 * If the HDM decoder capability was enabled on entry, skip
++	 * registering disable_hdm() since this decode capability may be
++	 * owned by platform firmware.
++	 */
++	if (global_ctrl & CXL_HDM_DECODER_ENABLE)
++		return 0;
++
+ 	writel(global_ctrl | CXL_HDM_DECODER_ENABLE,
+ 	       hdm + CXL_HDM_DECODER_CTRL_OFFSET);
+ 
+-	return devm_add_action_or_reset(host, disable_hdm, cxlhdm);
++	return devm_add_action_or_reset(&port->dev, disable_hdm, cxlhdm);
+ }
++EXPORT_SYMBOL_NS_GPL(devm_cxl_enable_hdm, CXL);
+ 
+ int cxl_dvsec_rr_decode(struct device *dev, int d,
+ 			struct cxl_endpoint_dvsec_info *info)
+@@ -425,7 +511,7 @@ int cxl_hdm_decode_init(struct cxl_dev_state *cxlds, struct cxl_hdm *cxlhdm,
+ 	if (info->mem_enabled)
+ 		return 0;
+ 
+-	rc = devm_cxl_enable_hdm(&port->dev, cxlhdm);
++	rc = devm_cxl_enable_hdm(port, cxlhdm);
+ 	if (rc)
+ 		return rc;
+ 
+diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
+index 044a92d9813e2..f93a285389621 100644
+--- a/drivers/cxl/cxl.h
++++ b/drivers/cxl/cxl.h
+@@ -710,6 +710,7 @@ struct cxl_endpoint_dvsec_info {
+ struct cxl_hdm;
+ struct cxl_hdm *devm_cxl_setup_hdm(struct cxl_port *port,
+ 				   struct cxl_endpoint_dvsec_info *info);
++int devm_cxl_enable_hdm(struct cxl_port *port, struct cxl_hdm *cxlhdm);
+ int devm_cxl_enumerate_decoders(struct cxl_hdm *cxlhdm,
+ 				struct cxl_endpoint_dvsec_info *info);
+ int devm_cxl_add_passthrough_decoder(struct cxl_port *port);
+diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
+index 090acebba4fab..2c97acfa84162 100644
+--- a/drivers/cxl/cxlmem.h
++++ b/drivers/cxl/cxlmem.h
+@@ -227,6 +227,7 @@ struct cxl_event_state {
+  * @regs: Parsed register blocks
+  * @cxl_dvsec: Offset to the PCIe device DVSEC
+  * @rcd: operating in RCD mode (CXL 3.0 9.11.8 CXL Devices Attached to an RCH)
++ * @media_ready: Indicate whether the device media is usable
+  * @payload_size: Size of space for payload
+  *                (CXL 2.0 8.2.8.4.3 Mailbox Capabilities Register)
+  * @lsa_size: Size of Label Storage Area
+@@ -264,6 +265,7 @@ struct cxl_dev_state {
+ 	int cxl_dvsec;
+ 
+ 	bool rcd;
++	bool media_ready;
+ 	size_t payload_size;
+ 	size_t lsa_size;
+ 	struct mutex mbox_mutex; /* Protects device mailbox and firmware */
+diff --git a/drivers/cxl/cxlpci.h b/drivers/cxl/cxlpci.h
+index 0465ef963cd6a..7c02e55b80429 100644
+--- a/drivers/cxl/cxlpci.h
++++ b/drivers/cxl/cxlpci.h
+@@ -31,6 +31,8 @@
+ #define   CXL_DVSEC_RANGE_BASE_LOW(i)	(0x24 + (i * 0x10))
+ #define     CXL_DVSEC_MEM_BASE_LOW_MASK	GENMASK(31, 28)
+ 
++#define CXL_DVSEC_RANGE_MAX		2
++
+ /* CXL 2.0 8.1.4: Non-CXL Function Map DVSEC */
+ #define CXL_DVSEC_FUNCTION_MAP					2
+ 
+diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c
+index 39c4b54f07152..71d7eafb6c94c 100644
+--- a/drivers/cxl/mem.c
++++ b/drivers/cxl/mem.c
+@@ -104,6 +104,9 @@ static int cxl_mem_probe(struct device *dev)
+ 	struct dentry *dentry;
+ 	int rc;
+ 
++	if (!cxlds->media_ready)
++		return -EBUSY;
++
+ 	/*
+ 	 * Someone is trying to reattach this device after it lost its port
+ 	 * connection (an endpoint port previously registered by this memdev was
+diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c
+index 60b23624d167f..24a2ad5caeb7f 100644
+--- a/drivers/cxl/pci.c
++++ b/drivers/cxl/pci.c
+@@ -757,6 +757,12 @@ static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 	if (rc)
+ 		dev_dbg(&pdev->dev, "Failed to map RAS capability.\n");
+ 
++	rc = cxl_await_media_ready(cxlds);
++	if (rc == 0)
++		cxlds->media_ready = true;
++	else
++		dev_warn(&pdev->dev, "Media not active (%d)\n", rc);
++
+ 	rc = cxl_pci_setup_mailbox(cxlds);
+ 	if (rc)
+ 		return rc;
+diff --git a/drivers/cxl/port.c b/drivers/cxl/port.c
+index eb57324c4ad4a..c23b6164e1c0f 100644
+--- a/drivers/cxl/port.c
++++ b/drivers/cxl/port.c
+@@ -60,13 +60,17 @@ static int discover_region(struct device *dev, void *root)
+ static int cxl_switch_port_probe(struct cxl_port *port)
+ {
+ 	struct cxl_hdm *cxlhdm;
+-	int rc;
++	int rc, nr_dports;
+ 
+-	rc = devm_cxl_port_enumerate_dports(port);
+-	if (rc < 0)
+-		return rc;
++	nr_dports = devm_cxl_port_enumerate_dports(port);
++	if (nr_dports < 0)
++		return nr_dports;
+ 
+ 	cxlhdm = devm_cxl_setup_hdm(port, NULL);
++	rc = devm_cxl_enable_hdm(port, cxlhdm);
++	if (rc)
++		return rc;
++
+ 	if (!IS_ERR(cxlhdm))
+ 		return devm_cxl_enumerate_decoders(cxlhdm, NULL);
+ 
+@@ -75,7 +79,7 @@ static int cxl_switch_port_probe(struct cxl_port *port)
+ 		return PTR_ERR(cxlhdm);
+ 	}
+ 
+-	if (rc == 1) {
++	if (nr_dports == 1) {
+ 		dev_dbg(&port->dev, "Fallback to passthrough decoder\n");
+ 		return devm_cxl_add_passthrough_decoder(port);
+ 	}
+@@ -113,12 +117,6 @@ static int cxl_endpoint_port_probe(struct cxl_port *port)
+ 	if (rc)
+ 		return rc;
+ 
+-	rc = cxl_await_media_ready(cxlds);
+-	if (rc) {
+-		dev_err(&port->dev, "Media not active (%d)\n", rc);
+-		return rc;
+-	}
+-
+ 	rc = devm_cxl_enumerate_decoders(cxlhdm, &info);
+ 	if (rc)
+ 		return rc;
+diff --git a/drivers/firmware/arm_ffa/bus.c b/drivers/firmware/arm_ffa/bus.c
+index f29d77ecf72db..2b8bfcd010f5f 100644
+--- a/drivers/firmware/arm_ffa/bus.c
++++ b/drivers/firmware/arm_ffa/bus.c
+@@ -15,6 +15,8 @@
+ 
+ #include "common.h"
+ 
++static DEFINE_IDA(ffa_bus_id);
++
+ static int ffa_device_match(struct device *dev, struct device_driver *drv)
+ {
+ 	const struct ffa_device_id *id_table;
+@@ -53,7 +55,8 @@ static void ffa_device_remove(struct device *dev)
+ {
+ 	struct ffa_driver *ffa_drv = to_ffa_driver(dev->driver);
+ 
+-	ffa_drv->remove(to_ffa_dev(dev));
++	if (ffa_drv->remove)
++		ffa_drv->remove(to_ffa_dev(dev));
+ }
+ 
+ static int ffa_device_uevent(const struct device *dev, struct kobj_uevent_env *env)
+@@ -130,6 +133,7 @@ static void ffa_release_device(struct device *dev)
+ {
+ 	struct ffa_device *ffa_dev = to_ffa_dev(dev);
+ 
++	ida_free(&ffa_bus_id, ffa_dev->id);
+ 	kfree(ffa_dev);
+ }
+ 
+@@ -170,18 +174,24 @@ bool ffa_device_is_valid(struct ffa_device *ffa_dev)
+ struct ffa_device *ffa_device_register(const uuid_t *uuid, int vm_id,
+ 				       const struct ffa_ops *ops)
+ {
+-	int ret;
++	int id, ret;
+ 	struct device *dev;
+ 	struct ffa_device *ffa_dev;
+ 
++	id = ida_alloc_min(&ffa_bus_id, 1, GFP_KERNEL);
++	if (id < 0)
++		return NULL;
++
+ 	ffa_dev = kzalloc(sizeof(*ffa_dev), GFP_KERNEL);
+-	if (!ffa_dev)
++	if (!ffa_dev) {
++		ida_free(&ffa_bus_id, id);
+ 		return NULL;
++	}
+ 
+ 	dev = &ffa_dev->dev;
+ 	dev->bus = &ffa_bus_type;
+ 	dev->release = ffa_release_device;
+-	dev_set_name(&ffa_dev->dev, "arm-ffa-%04x", vm_id);
++	dev_set_name(&ffa_dev->dev, "arm-ffa-%d", id);
+ 
+ 	ffa_dev->vm_id = vm_id;
+ 	ffa_dev->ops = ops;
+@@ -217,4 +227,5 @@ void arm_ffa_bus_exit(void)
+ {
+ 	ffa_devices_unregister();
+ 	bus_unregister(&ffa_bus_type);
++	ida_destroy(&ffa_bus_id);
+ }
+diff --git a/drivers/firmware/arm_ffa/driver.c b/drivers/firmware/arm_ffa/driver.c
+index fa85c64d3dede..02774baa90078 100644
+--- a/drivers/firmware/arm_ffa/driver.c
++++ b/drivers/firmware/arm_ffa/driver.c
+@@ -420,12 +420,17 @@ ffa_setup_and_transmit(u32 func_id, void *buffer, u32 max_fragsize,
+ 		ep_mem_access->receiver = args->attrs[idx].receiver;
+ 		ep_mem_access->attrs = args->attrs[idx].attrs;
+ 		ep_mem_access->composite_off = COMPOSITE_OFFSET(args->nattrs);
++		ep_mem_access->flag = 0;
++		ep_mem_access->reserved = 0;
+ 	}
++	mem_region->reserved_0 = 0;
++	mem_region->reserved_1 = 0;
+ 	mem_region->ep_count = args->nattrs;
+ 
+ 	composite = buffer + COMPOSITE_OFFSET(args->nattrs);
+ 	composite->total_pg_cnt = ffa_get_num_pages_sg(args->sg);
+ 	composite->addr_range_cnt = num_entries;
++	composite->reserved = 0;
+ 
+ 	length = COMPOSITE_CONSTITUENTS_OFFSET(args->nattrs, num_entries);
+ 	frag_len = COMPOSITE_CONSTITUENTS_OFFSET(args->nattrs, 0);
+@@ -460,6 +465,7 @@ ffa_setup_and_transmit(u32 func_id, void *buffer, u32 max_fragsize,
+ 
+ 		constituents->address = sg_phys(args->sg);
+ 		constituents->pg_cnt = args->sg->length / FFA_PAGE_SIZE;
++		constituents->reserved = 0;
+ 		constituents++;
+ 		frag_len += sizeof(struct ffa_mem_region_addr_range);
+ 	} while ((args->sg = sg_next(args->sg)));
+diff --git a/drivers/gpio/gpio-mockup.c b/drivers/gpio/gpio-mockup.c
+index e6a7049bef641..b32063ac845a5 100644
+--- a/drivers/gpio/gpio-mockup.c
++++ b/drivers/gpio/gpio-mockup.c
+@@ -369,7 +369,7 @@ static void gpio_mockup_debugfs_setup(struct device *dev,
+ 		priv->offset = i;
+ 		priv->desc = gpiochip_get_desc(gc, i);
+ 
+-		debugfs_create_file(name, 0200, chip->dbg_dir, priv,
++		debugfs_create_file(name, 0600, chip->dbg_dir, priv,
+ 				    &gpio_mockup_debugfs_ops);
+ 	}
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c
+index 7e8b7171068dc..bebd136ed5444 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c
+@@ -1328,12 +1328,9 @@ int amdgpu_mes_self_test(struct amdgpu_device *adev)
+ 	struct amdgpu_mes_ctx_data ctx_data = {0};
+ 	struct amdgpu_ring *added_rings[AMDGPU_MES_CTX_MAX_RINGS] = { NULL };
+ 	int gang_ids[3] = {0};
+-	int queue_types[][2] = { { AMDGPU_RING_TYPE_GFX,
+-				   AMDGPU_MES_CTX_MAX_GFX_RINGS},
+-				 { AMDGPU_RING_TYPE_COMPUTE,
+-				   AMDGPU_MES_CTX_MAX_COMPUTE_RINGS},
+-				 { AMDGPU_RING_TYPE_SDMA,
+-				   AMDGPU_MES_CTX_MAX_SDMA_RINGS } };
++	int queue_types[][2] = { { AMDGPU_RING_TYPE_GFX, 1 },
++				 { AMDGPU_RING_TYPE_COMPUTE, 1 },
++				 { AMDGPU_RING_TYPE_SDMA, 1} };
+ 	int i, r, pasid, k = 0;
+ 
+ 	pasid = amdgpu_pasid_alloc(16);
+diff --git a/drivers/gpu/drm/amd/amdgpu/psp_v10_0.c b/drivers/gpu/drm/amd/amdgpu/psp_v10_0.c
+index e1b7fca096660..5f10883da6a23 100644
+--- a/drivers/gpu/drm/amd/amdgpu/psp_v10_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/psp_v10_0.c
+@@ -57,7 +57,13 @@ static int psp_v10_0_init_microcode(struct psp_context *psp)
+ 	if (err)
+ 		return err;
+ 
+-	return psp_init_ta_microcode(psp, ucode_prefix);
++	err = psp_init_ta_microcode(psp, ucode_prefix);
++	if ((adev->ip_versions[GC_HWIP][0] == IP_VERSION(9, 1, 0)) &&
++		(adev->pdev->revision == 0xa1) &&
++		(psp->securedisplay_context.context.bin_desc.fw_version >= 0x27000008)) {
++		adev->psp.securedisplay_context.context.bin_desc.size_bytes = 0;
++	}
++	return err;
+ }
+ 
+ static int psp_v10_0_ring_create(struct psp_context *psp,
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index f54d670ab3abc..0695c7c3d489d 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -2813,7 +2813,7 @@ static int dm_resume(void *handle)
+ 		 * this is the case when traversing through already created
+ 		 * MST connectors, should be skipped
+ 		 */
+-		if (aconnector->dc_link->type == dc_connection_mst_branch)
++		if (aconnector && aconnector->mst_root)
+ 			continue;
+ 
+ 		mutex_lock(&aconnector->hpd_lock);
+@@ -6717,7 +6717,7 @@ static int dm_encoder_helper_atomic_check(struct drm_encoder *encoder,
+ 	int clock, bpp = 0;
+ 	bool is_y420 = false;
+ 
+-	if (!aconnector->mst_output_port || !aconnector->dc_sink)
++	if (!aconnector->mst_output_port)
+ 		return 0;
+ 
+ 	mst_port = aconnector->mst_output_port;
+diff --git a/drivers/gpu/drm/amd/pm/amdgpu_pm.c b/drivers/gpu/drm/amd/pm/amdgpu_pm.c
+index bf6d63673b5aa..5da7236ca203f 100644
+--- a/drivers/gpu/drm/amd/pm/amdgpu_pm.c
++++ b/drivers/gpu/drm/amd/pm/amdgpu_pm.c
+@@ -869,13 +869,11 @@ static ssize_t amdgpu_get_pp_od_clk_voltage(struct device *dev,
+ 	}
+ 	if (ret == -ENOENT) {
+ 		size = amdgpu_dpm_print_clock_levels(adev, OD_SCLK, buf);
+-		if (size > 0) {
+-			size += amdgpu_dpm_print_clock_levels(adev, OD_MCLK, buf + size);
+-			size += amdgpu_dpm_print_clock_levels(adev, OD_VDDC_CURVE, buf + size);
+-			size += amdgpu_dpm_print_clock_levels(adev, OD_VDDGFX_OFFSET, buf + size);
+-			size += amdgpu_dpm_print_clock_levels(adev, OD_RANGE, buf + size);
+-			size += amdgpu_dpm_print_clock_levels(adev, OD_CCLK, buf + size);
+-		}
++		size += amdgpu_dpm_print_clock_levels(adev, OD_MCLK, buf + size);
++		size += amdgpu_dpm_print_clock_levels(adev, OD_VDDC_CURVE, buf + size);
++		size += amdgpu_dpm_print_clock_levels(adev, OD_VDDGFX_OFFSET, buf + size);
++		size += amdgpu_dpm_print_clock_levels(adev, OD_RANGE, buf + size);
++		size += amdgpu_dpm_print_clock_levels(adev, OD_CCLK, buf + size);
+ 	}
+ 
+ 	if (size == 0)
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
+index ca0fca7da29e0..268c697735f34 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
+@@ -125,6 +125,7 @@ static struct cmn2asic_msg_mapping smu_v13_0_7_message_map[SMU_MSG_MAX_COUNT] =
+ 	MSG_MAP(ArmD3,				PPSMC_MSG_ArmD3,                       0),
+ 	MSG_MAP(AllowGpo,			PPSMC_MSG_SetGpoAllow,           0),
+ 	MSG_MAP(GetPptLimit,			PPSMC_MSG_GetPptLimit,                 0),
++	MSG_MAP(NotifyPowerSource,		PPSMC_MSG_NotifyPowerSource,           0),
+ };
+ 
+ static struct cmn2asic_mapping smu_v13_0_7_clk_map[SMU_CLK_COUNT] = {
+diff --git a/drivers/gpu/drm/drm_managed.c b/drivers/gpu/drm/drm_managed.c
+index 4cf214de50c40..c21c3f6230335 100644
+--- a/drivers/gpu/drm/drm_managed.c
++++ b/drivers/gpu/drm/drm_managed.c
+@@ -264,28 +264,10 @@ void drmm_kfree(struct drm_device *dev, void *data)
+ }
+ EXPORT_SYMBOL(drmm_kfree);
+ 
+-static void drmm_mutex_release(struct drm_device *dev, void *res)
++void __drmm_mutex_release(struct drm_device *dev, void *res)
+ {
+ 	struct mutex *lock = res;
+ 
+ 	mutex_destroy(lock);
+ }
+-
+-/**
+- * drmm_mutex_init - &drm_device-managed mutex_init()
+- * @dev: DRM device
+- * @lock: lock to be initialized
+- *
+- * Returns:
+- * 0 on success, or a negative errno code otherwise.
+- *
+- * This is a &drm_device-managed version of mutex_init(). The initialized
+- * lock is automatically destroyed on the final drm_dev_put().
+- */
+-int drmm_mutex_init(struct drm_device *dev, struct mutex *lock)
+-{
+-	mutex_init(lock);
+-
+-	return drmm_add_action_or_reset(dev, drmm_mutex_release, lock);
+-}
+-EXPORT_SYMBOL(drmm_mutex_init);
++EXPORT_SYMBOL(__drmm_mutex_release);
+diff --git a/drivers/gpu/drm/mgag200/mgag200_mode.c b/drivers/gpu/drm/mgag200/mgag200_mode.c
+index 0a5aaf78172a6..576c4c838a331 100644
+--- a/drivers/gpu/drm/mgag200/mgag200_mode.c
++++ b/drivers/gpu/drm/mgag200/mgag200_mode.c
+@@ -640,6 +640,11 @@ void mgag200_crtc_helper_atomic_enable(struct drm_crtc *crtc, struct drm_atomic_
+ 	if (funcs->pixpllc_atomic_update)
+ 		funcs->pixpllc_atomic_update(crtc, old_state);
+ 
++	if (crtc_state->gamma_lut)
++		mgag200_crtc_set_gamma(mdev, format, crtc_state->gamma_lut->data);
++	else
++		mgag200_crtc_set_gamma_linear(mdev, format);
++
+ 	mgag200_enable_display(mdev);
+ 
+ 	if (funcs->enable_vidrst)
+diff --git a/drivers/gpu/drm/radeon/radeon_irq_kms.c b/drivers/gpu/drm/radeon/radeon_irq_kms.c
+index 3377fbc71f654..c4dda908666cf 100644
+--- a/drivers/gpu/drm/radeon/radeon_irq_kms.c
++++ b/drivers/gpu/drm/radeon/radeon_irq_kms.c
+@@ -99,6 +99,16 @@ static void radeon_hotplug_work_func(struct work_struct *work)
+ 
+ static void radeon_dp_work_func(struct work_struct *work)
+ {
++	struct radeon_device *rdev = container_of(work, struct radeon_device,
++						  dp_work);
++	struct drm_device *dev = rdev->ddev;
++	struct drm_mode_config *mode_config = &dev->mode_config;
++	struct drm_connector *connector;
++
++	mutex_lock(&mode_config->mutex);
++	list_for_each_entry(connector, &mode_config->connector_list, head)
++		radeon_connector_hotplug(connector);
++	mutex_unlock(&mode_config->mutex);
+ }
+ 
+ /**
+diff --git a/drivers/hwtracing/coresight/coresight-tmc-etr.c b/drivers/hwtracing/coresight/coresight-tmc-etr.c
+index 918d461fcf4a6..eaa296ced1678 100644
+--- a/drivers/hwtracing/coresight/coresight-tmc-etr.c
++++ b/drivers/hwtracing/coresight/coresight-tmc-etr.c
+@@ -942,7 +942,7 @@ tmc_etr_buf_insert_barrier_packet(struct etr_buf *etr_buf, u64 offset)
+ 
+ 	len = tmc_etr_buf_get_data(etr_buf, offset,
+ 				   CORESIGHT_BARRIER_PKT_SIZE, &bufp);
+-	if (WARN_ON(len < CORESIGHT_BARRIER_PKT_SIZE))
++	if (WARN_ON(len < 0 || len < CORESIGHT_BARRIER_PKT_SIZE))
+ 		return -EINVAL;
+ 	coresight_insert_barrier_packet(bufp);
+ 	return offset + CORESIGHT_BARRIER_PKT_SIZE;
+diff --git a/drivers/irqchip/irq-mips-gic.c b/drivers/irqchip/irq-mips-gic.c
+index 1a6a7a672ad77..e23935099b830 100644
+--- a/drivers/irqchip/irq-mips-gic.c
++++ b/drivers/irqchip/irq-mips-gic.c
+@@ -50,7 +50,7 @@ void __iomem *mips_gic_base;
+ 
+ static DEFINE_PER_CPU_READ_MOSTLY(unsigned long[GIC_MAX_LONGS], pcpu_masks);
+ 
+-static DEFINE_SPINLOCK(gic_lock);
++static DEFINE_RAW_SPINLOCK(gic_lock);
+ static struct irq_domain *gic_irq_domain;
+ static int gic_shared_intrs;
+ static unsigned int gic_cpu_pin;
+@@ -211,7 +211,7 @@ static int gic_set_type(struct irq_data *d, unsigned int type)
+ 
+ 	irq = GIC_HWIRQ_TO_SHARED(d->hwirq);
+ 
+-	spin_lock_irqsave(&gic_lock, flags);
++	raw_spin_lock_irqsave(&gic_lock, flags);
+ 	switch (type & IRQ_TYPE_SENSE_MASK) {
+ 	case IRQ_TYPE_EDGE_FALLING:
+ 		pol = GIC_POL_FALLING_EDGE;
+@@ -251,7 +251,7 @@ static int gic_set_type(struct irq_data *d, unsigned int type)
+ 	else
+ 		irq_set_chip_handler_name_locked(d, &gic_level_irq_controller,
+ 						 handle_level_irq, NULL);
+-	spin_unlock_irqrestore(&gic_lock, flags);
++	raw_spin_unlock_irqrestore(&gic_lock, flags);
+ 
+ 	return 0;
+ }
+@@ -269,7 +269,7 @@ static int gic_set_affinity(struct irq_data *d, const struct cpumask *cpumask,
+ 		return -EINVAL;
+ 
+ 	/* Assumption : cpumask refers to a single CPU */
+-	spin_lock_irqsave(&gic_lock, flags);
++	raw_spin_lock_irqsave(&gic_lock, flags);
+ 
+ 	/* Re-route this IRQ */
+ 	write_gic_map_vp(irq, BIT(mips_cm_vp_id(cpu)));
+@@ -280,7 +280,7 @@ static int gic_set_affinity(struct irq_data *d, const struct cpumask *cpumask,
+ 		set_bit(irq, per_cpu_ptr(pcpu_masks, cpu));
+ 
+ 	irq_data_update_effective_affinity(d, cpumask_of(cpu));
+-	spin_unlock_irqrestore(&gic_lock, flags);
++	raw_spin_unlock_irqrestore(&gic_lock, flags);
+ 
+ 	return IRQ_SET_MASK_OK;
+ }
+@@ -358,12 +358,12 @@ static void gic_mask_local_irq_all_vpes(struct irq_data *d)
+ 	cd = irq_data_get_irq_chip_data(d);
+ 	cd->mask = false;
+ 
+-	spin_lock_irqsave(&gic_lock, flags);
++	raw_spin_lock_irqsave(&gic_lock, flags);
+ 	for_each_online_cpu(cpu) {
+ 		write_gic_vl_other(mips_cm_vp_id(cpu));
+ 		write_gic_vo_rmask(BIT(intr));
+ 	}
+-	spin_unlock_irqrestore(&gic_lock, flags);
++	raw_spin_unlock_irqrestore(&gic_lock, flags);
+ }
+ 
+ static void gic_unmask_local_irq_all_vpes(struct irq_data *d)
+@@ -376,12 +376,12 @@ static void gic_unmask_local_irq_all_vpes(struct irq_data *d)
+ 	cd = irq_data_get_irq_chip_data(d);
+ 	cd->mask = true;
+ 
+-	spin_lock_irqsave(&gic_lock, flags);
++	raw_spin_lock_irqsave(&gic_lock, flags);
+ 	for_each_online_cpu(cpu) {
+ 		write_gic_vl_other(mips_cm_vp_id(cpu));
+ 		write_gic_vo_smask(BIT(intr));
+ 	}
+-	spin_unlock_irqrestore(&gic_lock, flags);
++	raw_spin_unlock_irqrestore(&gic_lock, flags);
+ }
+ 
+ static void gic_all_vpes_irq_cpu_online(void)
+@@ -394,19 +394,21 @@ static void gic_all_vpes_irq_cpu_online(void)
+ 	unsigned long flags;
+ 	int i;
+ 
+-	spin_lock_irqsave(&gic_lock, flags);
++	raw_spin_lock_irqsave(&gic_lock, flags);
+ 
+ 	for (i = 0; i < ARRAY_SIZE(local_intrs); i++) {
+ 		unsigned int intr = local_intrs[i];
+ 		struct gic_all_vpes_chip_data *cd;
+ 
++		if (!gic_local_irq_is_routable(intr))
++			continue;
+ 		cd = &gic_all_vpes_chip_data[intr];
+ 		write_gic_vl_map(mips_gic_vx_map_reg(intr), cd->map);
+ 		if (cd->mask)
+ 			write_gic_vl_smask(BIT(intr));
+ 	}
+ 
+-	spin_unlock_irqrestore(&gic_lock, flags);
++	raw_spin_unlock_irqrestore(&gic_lock, flags);
+ }
+ 
+ static struct irq_chip gic_all_vpes_local_irq_controller = {
+@@ -436,11 +438,11 @@ static int gic_shared_irq_domain_map(struct irq_domain *d, unsigned int virq,
+ 
+ 	data = irq_get_irq_data(virq);
+ 
+-	spin_lock_irqsave(&gic_lock, flags);
++	raw_spin_lock_irqsave(&gic_lock, flags);
+ 	write_gic_map_pin(intr, GIC_MAP_PIN_MAP_TO_PIN | gic_cpu_pin);
+ 	write_gic_map_vp(intr, BIT(mips_cm_vp_id(cpu)));
+ 	irq_data_update_effective_affinity(data, cpumask_of(cpu));
+-	spin_unlock_irqrestore(&gic_lock, flags);
++	raw_spin_unlock_irqrestore(&gic_lock, flags);
+ 
+ 	return 0;
+ }
+@@ -535,12 +537,12 @@ static int gic_irq_domain_map(struct irq_domain *d, unsigned int virq,
+ 	if (!gic_local_irq_is_routable(intr))
+ 		return -EPERM;
+ 
+-	spin_lock_irqsave(&gic_lock, flags);
++	raw_spin_lock_irqsave(&gic_lock, flags);
+ 	for_each_online_cpu(cpu) {
+ 		write_gic_vl_other(mips_cm_vp_id(cpu));
+ 		write_gic_vo_map(mips_gic_vx_map_reg(intr), map);
+ 	}
+-	spin_unlock_irqrestore(&gic_lock, flags);
++	raw_spin_unlock_irqrestore(&gic_lock, flags);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/media/radio/radio-shark.c b/drivers/media/radio/radio-shark.c
+index 8230da828d0ee..127a3be0e0f07 100644
+--- a/drivers/media/radio/radio-shark.c
++++ b/drivers/media/radio/radio-shark.c
+@@ -316,6 +316,16 @@ static int usb_shark_probe(struct usb_interface *intf,
+ {
+ 	struct shark_device *shark;
+ 	int retval = -ENOMEM;
++	static const u8 ep_addresses[] = {
++		SHARK_IN_EP | USB_DIR_IN,
++		SHARK_OUT_EP | USB_DIR_OUT,
++		0};
++
++	/* Are the expected endpoints present? */
++	if (!usb_check_int_endpoints(intf, ep_addresses)) {
++		dev_err(&intf->dev, "Invalid radioSHARK device\n");
++		return -EINVAL;
++	}
+ 
+ 	shark = kzalloc(sizeof(struct shark_device), GFP_KERNEL);
+ 	if (!shark)
+diff --git a/drivers/media/radio/radio-shark2.c b/drivers/media/radio/radio-shark2.c
+index d150f12382c60..f1c5c0a6a335c 100644
+--- a/drivers/media/radio/radio-shark2.c
++++ b/drivers/media/radio/radio-shark2.c
+@@ -282,6 +282,16 @@ static int usb_shark_probe(struct usb_interface *intf,
+ {
+ 	struct shark_device *shark;
+ 	int retval = -ENOMEM;
++	static const u8 ep_addresses[] = {
++		SHARK_IN_EP | USB_DIR_IN,
++		SHARK_OUT_EP | USB_DIR_OUT,
++		0};
++
++	/* Are the expected endpoints present? */
++	if (!usb_check_int_endpoints(intf, ep_addresses)) {
++		dev_err(&intf->dev, "Invalid radioSHARK2 device\n");
++		return -EINVAL;
++	}
+ 
+ 	shark = kzalloc(sizeof(struct shark_device), GFP_KERNEL);
+ 	if (!shark)
+diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
+index 672ab90c4b2d9..0ff294f074659 100644
+--- a/drivers/mmc/core/block.c
++++ b/drivers/mmc/core/block.c
+@@ -266,6 +266,7 @@ static ssize_t power_ro_lock_store(struct device *dev,
+ 		goto out_put;
+ 	}
+ 	req_to_mmc_queue_req(req)->drv_op = MMC_DRV_OP_BOOT_WP;
++	req_to_mmc_queue_req(req)->drv_op_result = -EIO;
+ 	blk_execute_rq(req, false);
+ 	ret = req_to_mmc_queue_req(req)->drv_op_result;
+ 	blk_mq_free_request(req);
+@@ -653,6 +654,7 @@ static int mmc_blk_ioctl_cmd(struct mmc_blk_data *md,
+ 	idatas[0] = idata;
+ 	req_to_mmc_queue_req(req)->drv_op =
+ 		rpmb ? MMC_DRV_OP_IOCTL_RPMB : MMC_DRV_OP_IOCTL;
++	req_to_mmc_queue_req(req)->drv_op_result = -EIO;
+ 	req_to_mmc_queue_req(req)->drv_op_data = idatas;
+ 	req_to_mmc_queue_req(req)->ioc_count = 1;
+ 	blk_execute_rq(req, false);
+@@ -724,6 +726,7 @@ static int mmc_blk_ioctl_multi_cmd(struct mmc_blk_data *md,
+ 	}
+ 	req_to_mmc_queue_req(req)->drv_op =
+ 		rpmb ? MMC_DRV_OP_IOCTL_RPMB : MMC_DRV_OP_IOCTL;
++	req_to_mmc_queue_req(req)->drv_op_result = -EIO;
+ 	req_to_mmc_queue_req(req)->drv_op_data = idata;
+ 	req_to_mmc_queue_req(req)->ioc_count = n;
+ 	blk_execute_rq(req, false);
+@@ -2808,6 +2811,7 @@ static int mmc_dbg_card_status_get(void *data, u64 *val)
+ 	if (IS_ERR(req))
+ 		return PTR_ERR(req);
+ 	req_to_mmc_queue_req(req)->drv_op = MMC_DRV_OP_GET_CARD_STATUS;
++	req_to_mmc_queue_req(req)->drv_op_result = -EIO;
+ 	blk_execute_rq(req, false);
+ 	ret = req_to_mmc_queue_req(req)->drv_op_result;
+ 	if (ret >= 0) {
+@@ -2846,6 +2850,7 @@ static int mmc_ext_csd_open(struct inode *inode, struct file *filp)
+ 		goto out_free;
+ 	}
+ 	req_to_mmc_queue_req(req)->drv_op = MMC_DRV_OP_GET_EXT_CSD;
++	req_to_mmc_queue_req(req)->drv_op_result = -EIO;
+ 	req_to_mmc_queue_req(req)->drv_op_data = &ext_csd;
+ 	blk_execute_rq(req, false);
+ 	err = req_to_mmc_queue_req(req)->drv_op_result;
+diff --git a/drivers/mmc/host/sdhci-esdhc-imx.c b/drivers/mmc/host/sdhci-esdhc-imx.c
+index 58f042fdd4f43..03fe21a89021d 100644
+--- a/drivers/mmc/host/sdhci-esdhc-imx.c
++++ b/drivers/mmc/host/sdhci-esdhc-imx.c
+@@ -1634,6 +1634,10 @@ sdhci_esdhc_imx_probe_dt(struct platform_device *pdev,
+ 	if (ret)
+ 		return ret;
+ 
++	/* HS400/HS400ES require 8 bit bus */
++	if (!(host->mmc->caps & MMC_CAP_8_BIT_DATA))
++		host->mmc->caps2 &= ~(MMC_CAP2_HS400 | MMC_CAP2_HS400_ES);
++
+ 	if (mmc_gpio_get_cd(host->mmc) >= 0)
+ 		host->quirks &= ~SDHCI_QUIRK_BROKEN_CARD_DETECTION;
+ 
+@@ -1724,10 +1728,6 @@ static int sdhci_esdhc_imx_probe(struct platform_device *pdev)
+ 		host->mmc_host_ops.init_card = usdhc_init_card;
+ 	}
+ 
+-	err = sdhci_esdhc_imx_probe_dt(pdev, host, imx_data);
+-	if (err)
+-		goto disable_ahb_clk;
+-
+ 	if (imx_data->socdata->flags & ESDHC_FLAG_MAN_TUNING)
+ 		sdhci_esdhc_ops.platform_execute_tuning =
+ 					esdhc_executing_tuning;
+@@ -1735,15 +1735,13 @@ static int sdhci_esdhc_imx_probe(struct platform_device *pdev)
+ 	if (imx_data->socdata->flags & ESDHC_FLAG_ERR004536)
+ 		host->quirks |= SDHCI_QUIRK_BROKEN_ADMA;
+ 
+-	if (host->mmc->caps & MMC_CAP_8_BIT_DATA &&
+-	    imx_data->socdata->flags & ESDHC_FLAG_HS400)
++	if (imx_data->socdata->flags & ESDHC_FLAG_HS400)
+ 		host->mmc->caps2 |= MMC_CAP2_HS400;
+ 
+ 	if (imx_data->socdata->flags & ESDHC_FLAG_BROKEN_AUTO_CMD23)
+ 		host->quirks2 |= SDHCI_QUIRK2_ACMD23_BROKEN;
+ 
+-	if (host->mmc->caps & MMC_CAP_8_BIT_DATA &&
+-	    imx_data->socdata->flags & ESDHC_FLAG_HS400_ES) {
++	if (imx_data->socdata->flags & ESDHC_FLAG_HS400_ES) {
+ 		host->mmc->caps2 |= MMC_CAP2_HS400_ES;
+ 		host->mmc_host_ops.hs400_enhanced_strobe =
+ 					esdhc_hs400_enhanced_strobe;
+@@ -1765,6 +1763,10 @@ static int sdhci_esdhc_imx_probe(struct platform_device *pdev)
+ 			goto disable_ahb_clk;
+ 	}
+ 
++	err = sdhci_esdhc_imx_probe_dt(pdev, host, imx_data);
++	if (err)
++		goto disable_ahb_clk;
++
+ 	sdhci_esdhc_imx_hwinit(host);
+ 
+ 	err = sdhci_add_host(host);
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index 7a7d584f378a5..806d33d9f7124 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -3924,7 +3924,11 @@ static int bond_slave_netdev_event(unsigned long event,
+ 		unblock_netpoll_tx();
+ 		break;
+ 	case NETDEV_FEAT_CHANGE:
+-		bond_compute_features(bond);
++		if (!bond->notifier_ctx) {
++			bond->notifier_ctx = true;
++			bond_compute_features(bond);
++			bond->notifier_ctx = false;
++		}
+ 		break;
+ 	case NETDEV_RESEND_IGMP:
+ 		/* Propagate to master device */
+@@ -6283,6 +6287,8 @@ static int bond_init(struct net_device *bond_dev)
+ 	if (!bond->wq)
+ 		return -ENOMEM;
+ 
++	bond->notifier_ctx = false;
++
+ 	spin_lock_init(&bond->stats_lock);
+ 	netdev_lockdep_set_classes(bond_dev);
+ 
+diff --git a/drivers/net/ethernet/3com/3c589_cs.c b/drivers/net/ethernet/3com/3c589_cs.c
+index 82f94b1635bf8..5267e9dcd87ef 100644
+--- a/drivers/net/ethernet/3com/3c589_cs.c
++++ b/drivers/net/ethernet/3com/3c589_cs.c
+@@ -195,6 +195,7 @@ static int tc589_probe(struct pcmcia_device *link)
+ {
+ 	struct el3_private *lp;
+ 	struct net_device *dev;
++	int ret;
+ 
+ 	dev_dbg(&link->dev, "3c589_attach()\n");
+ 
+@@ -218,7 +219,15 @@ static int tc589_probe(struct pcmcia_device *link)
+ 
+ 	dev->ethtool_ops = &netdev_ethtool_ops;
+ 
+-	return tc589_config(link);
++	ret = tc589_config(link);
++	if (ret)
++		goto err_free_netdev;
++
++	return 0;
++
++err_free_netdev:
++	free_netdev(dev);
++	return ret;
+ }
+ 
+ static void tc589_detach(struct pcmcia_device *link)
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c
+index 7045fedfd73a0..7af223b0a37f5 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c
+@@ -652,9 +652,7 @@ static void otx2_sqe_add_ext(struct otx2_nic *pfvf, struct otx2_snd_queue *sq,
+ 				htons(ext->lso_sb - skb_network_offset(skb));
+ 		} else if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6) {
+ 			ext->lso_format = pfvf->hw.lso_tsov6_idx;
+-
+-			ipv6_hdr(skb)->payload_len =
+-				htons(ext->lso_sb - skb_network_offset(skb));
++			ipv6_hdr(skb)->payload_len = htons(tcp_hdrlen(skb));
+ 		} else if (skb_shinfo(skb)->gso_type & SKB_GSO_UDP_L4) {
+ 			__be16 l3_proto = vlan_get_protocol(skb);
+ 			struct udphdr *udph = udp_hdr(skb);
+diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+index c9fb1d7084d57..55d9a2c421a19 100644
+--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c
++++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+@@ -3272,18 +3272,14 @@ static int mtk_open(struct net_device *dev)
+ 			eth->dsa_meta[i] = md_dst;
+ 		}
+ 	} else {
+-		/* Hardware special tag parsing needs to be disabled if at least
+-		 * one MAC does not use DSA.
++		/* Hardware DSA untagging and VLAN RX offloading need to be
++		 * disabled if at least one MAC does not use DSA.
+ 		 */
+ 		u32 val = mtk_r32(eth, MTK_CDMP_IG_CTRL);
+ 
+ 		val &= ~MTK_CDMP_STAG_EN;
+ 		mtk_w32(eth, val, MTK_CDMP_IG_CTRL);
+ 
+-		val = mtk_r32(eth, MTK_CDMQ_IG_CTRL);
+-		val &= ~MTK_CDMQ_STAG_EN;
+-		mtk_w32(eth, val, MTK_CDMQ_IG_CTRL);
+-
+ 		mtk_w32(eth, 0, MTK_CDMP_EG_CTRL);
+ 	}
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+index b00e33ed05e91..4d6a94ab1f414 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+@@ -1920,9 +1920,10 @@ static void mlx5_cmd_err_trace(struct mlx5_core_dev *dev, u16 opcode, u16 op_mod
+ static void cmd_status_log(struct mlx5_core_dev *dev, u16 opcode, u8 status,
+ 			   u32 syndrome, int err)
+ {
++	const char *namep = mlx5_command_str(opcode);
+ 	struct mlx5_cmd_stats *stats;
+ 
+-	if (!err)
++	if (!err || !(strcmp(namep, "unknown command opcode")))
+ 		return;
+ 
+ 	stats = &dev->cmd.stats[opcode];
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/ptp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/ptp.c
+index eb5aeba3addf4..8ba606a470c8d 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/ptp.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/ptp.c
+@@ -175,6 +175,8 @@ static bool mlx5e_ptp_poll_ts_cq(struct mlx5e_cq *cq, int budget)
+ 	/* ensure cq space is freed before enabling more cqes */
+ 	wmb();
+ 
++	mlx5e_txqsq_wake(&ptpsq->txqsq);
++
+ 	return work_done == budget;
+ }
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c
+index 780224fd67a1d..fbb392d54fa51 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c
+@@ -1338,11 +1338,13 @@ static void mlx5e_invalidate_encap(struct mlx5e_priv *priv,
+ 	struct mlx5e_tc_flow *flow;
+ 
+ 	list_for_each_entry(flow, encap_flows, tmp_list) {
+-		struct mlx5_flow_attr *attr = flow->attr;
+ 		struct mlx5_esw_flow_attr *esw_attr;
++		struct mlx5_flow_attr *attr;
+ 
+ 		if (!mlx5e_is_offloaded_flow(flow))
+ 			continue;
++
++		attr = mlx5e_tc_get_encap_attr(flow);
+ 		esw_attr = attr->esw_attr;
+ 
+ 		if (flow_flag_test(flow, SLOW))
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h b/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
+index b9c2f67d37941..23aa18f050555 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
+@@ -182,6 +182,8 @@ static inline u16 mlx5e_txqsq_get_next_pi(struct mlx5e_txqsq *sq, u16 size)
+ 	return pi;
+ }
+ 
++void mlx5e_txqsq_wake(struct mlx5e_txqsq *sq);
++
+ static inline u16 mlx5e_shampo_get_cqe_header_index(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
+ {
+ 	return be16_to_cpu(cqe->shampo.header_entry_index) & (rq->mpwqe.shampo->hd_per_wq - 1);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+index 87a2850b32d09..2b1094e5b0c9d 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+@@ -1692,11 +1692,9 @@ bool mlx5e_tc_is_vf_tunnel(struct net_device *out_dev, struct net_device *route_
+ int mlx5e_tc_query_route_vport(struct net_device *out_dev, struct net_device *route_dev, u16 *vport)
+ {
+ 	struct mlx5e_priv *out_priv, *route_priv;
+-	struct mlx5_devcom *devcom = NULL;
+ 	struct mlx5_core_dev *route_mdev;
+ 	struct mlx5_eswitch *esw;
+ 	u16 vhca_id;
+-	int err;
+ 
+ 	out_priv = netdev_priv(out_dev);
+ 	esw = out_priv->mdev->priv.eswitch;
+@@ -1705,6 +1703,9 @@ int mlx5e_tc_query_route_vport(struct net_device *out_dev, struct net_device *ro
+ 
+ 	vhca_id = MLX5_CAP_GEN(route_mdev, vhca_id);
+ 	if (mlx5_lag_is_active(out_priv->mdev)) {
++		struct mlx5_devcom *devcom;
++		int err;
++
+ 		/* In lag case we may get devices from different eswitch instances.
+ 		 * If we failed to get vport num, it means, mostly, that we on the wrong
+ 		 * eswitch.
+@@ -1713,16 +1714,16 @@ int mlx5e_tc_query_route_vport(struct net_device *out_dev, struct net_device *ro
+ 		if (err != -ENOENT)
+ 			return err;
+ 
++		rcu_read_lock();
+ 		devcom = out_priv->mdev->priv.devcom;
+-		esw = mlx5_devcom_get_peer_data(devcom, MLX5_DEVCOM_ESW_OFFLOADS);
+-		if (!esw)
+-			return -ENODEV;
++		esw = mlx5_devcom_get_peer_data_rcu(devcom, MLX5_DEVCOM_ESW_OFFLOADS);
++		err = esw ? mlx5_eswitch_vhca_id_to_vport(esw, vhca_id, vport) : -ENODEV;
++		rcu_read_unlock();
++
++		return err;
+ 	}
+ 
+-	err = mlx5_eswitch_vhca_id_to_vport(esw, vhca_id, vport);
+-	if (devcom)
+-		mlx5_devcom_release_peer_data(devcom, MLX5_DEVCOM_ESW_OFFLOADS);
+-	return err;
++	return mlx5_eswitch_vhca_id_to_vport(esw, vhca_id, vport);
+ }
+ 
+ static int
+@@ -5448,6 +5449,8 @@ int mlx5e_tc_esw_init(struct mlx5_rep_uplink_priv *uplink_priv)
+ 		goto err_action_counter;
+ 	}
+ 
++	mlx5_esw_offloads_devcom_init(esw);
++
+ 	return 0;
+ 
+ err_action_counter:
+@@ -5476,7 +5479,7 @@ void mlx5e_tc_esw_cleanup(struct mlx5_rep_uplink_priv *uplink_priv)
+ 	priv = netdev_priv(rpriv->netdev);
+ 	esw = priv->mdev->priv.eswitch;
+ 
+-	mlx5e_tc_clean_fdb_peer_flows(esw);
++	mlx5_esw_offloads_devcom_cleanup(esw);
+ 
+ 	mlx5e_tc_tun_cleanup(uplink_priv->encap);
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
+index df5e780e8e6a6..c7eb6b238c2ba 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
+@@ -762,6 +762,17 @@ static void mlx5e_tx_wi_consume_fifo_skbs(struct mlx5e_txqsq *sq, struct mlx5e_t
+ 	}
+ }
+ 
++void mlx5e_txqsq_wake(struct mlx5e_txqsq *sq)
++{
++	if (netif_tx_queue_stopped(sq->txq) &&
++	    mlx5e_wqc_has_room_for(&sq->wq, sq->cc, sq->pc, sq->stop_room) &&
++	    mlx5e_ptpsq_fifo_has_room(sq) &&
++	    !test_bit(MLX5E_SQ_STATE_RECOVERING, &sq->state)) {
++		netif_tx_wake_queue(sq->txq);
++		sq->stats->wake++;
++	}
++}
++
+ bool mlx5e_poll_tx_cq(struct mlx5e_cq *cq, int napi_budget)
+ {
+ 	struct mlx5e_sq_stats *stats;
+@@ -861,13 +872,7 @@ bool mlx5e_poll_tx_cq(struct mlx5e_cq *cq, int napi_budget)
+ 
+ 	netdev_tx_completed_queue(sq->txq, npkts, nbytes);
+ 
+-	if (netif_tx_queue_stopped(sq->txq) &&
+-	    mlx5e_wqc_has_room_for(&sq->wq, sq->cc, sq->pc, sq->stop_room) &&
+-	    mlx5e_ptpsq_fifo_has_room(sq) &&
+-	    !test_bit(MLX5E_SQ_STATE_RECOVERING, &sq->state)) {
+-		netif_tx_wake_queue(sq->txq);
+-		stats->wake++;
+-	}
++	mlx5e_txqsq_wake(sq);
+ 
+ 	return (i == MLX5E_TX_CQ_POLL_BUDGET);
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c
+index 9a458a5d98539..44547b22a536f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c
+@@ -161,20 +161,22 @@ int mlx5e_napi_poll(struct napi_struct *napi, int budget)
+ 		}
+ 	}
+ 
++	/* budget=0 means we may be in IRQ context, do as little as possible */
++	if (unlikely(!budget))
++		goto out;
++
+ 	busy |= mlx5e_poll_xdpsq_cq(&c->xdpsq.cq);
+ 
+ 	if (c->xdp)
+ 		busy |= mlx5e_poll_xdpsq_cq(&c->rq_xdpsq.cq);
+ 
+-	if (likely(budget)) { /* budget=0 means: don't poll rx rings */
+-		if (xsk_open)
+-			work_done = mlx5e_poll_rx_cq(&xskrq->cq, budget);
++	if (xsk_open)
++		work_done = mlx5e_poll_rx_cq(&xskrq->cq, budget);
+ 
+-		if (likely(budget - work_done))
+-			work_done += mlx5e_poll_rx_cq(&rq->cq, budget - work_done);
++	if (likely(budget - work_done))
++		work_done += mlx5e_poll_rx_cq(&rq->cq, budget - work_done);
+ 
+-		busy |= work_done == budget;
+-	}
++	busy |= work_done == budget;
+ 
+ 	mlx5e_poll_ico_cq(&c->icosq.cq);
+ 	if (mlx5e_poll_ico_cq(&c->async_icosq.cq))
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+index 9d5a5756a15a9..c8c12d1672f99 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+@@ -371,6 +371,8 @@ int mlx5_eswitch_enable(struct mlx5_eswitch *esw, int num_vfs);
+ void mlx5_eswitch_disable_sriov(struct mlx5_eswitch *esw, bool clear_vf);
+ void mlx5_eswitch_disable_locked(struct mlx5_eswitch *esw);
+ void mlx5_eswitch_disable(struct mlx5_eswitch *esw);
++void mlx5_esw_offloads_devcom_init(struct mlx5_eswitch *esw);
++void mlx5_esw_offloads_devcom_cleanup(struct mlx5_eswitch *esw);
+ int mlx5_eswitch_set_vport_mac(struct mlx5_eswitch *esw,
+ 			       u16 vport, const u8 *mac);
+ int mlx5_eswitch_set_vport_state(struct mlx5_eswitch *esw,
+@@ -768,6 +770,8 @@ static inline void mlx5_eswitch_cleanup(struct mlx5_eswitch *esw) {}
+ static inline int mlx5_eswitch_enable(struct mlx5_eswitch *esw, int num_vfs) { return 0; }
+ static inline void mlx5_eswitch_disable_sriov(struct mlx5_eswitch *esw, bool clear_vf) {}
+ static inline void mlx5_eswitch_disable(struct mlx5_eswitch *esw) {}
++static inline void mlx5_esw_offloads_devcom_init(struct mlx5_eswitch *esw) {}
++static inline void mlx5_esw_offloads_devcom_cleanup(struct mlx5_eswitch *esw) {}
+ static inline bool mlx5_eswitch_is_funcs_handler(struct mlx5_core_dev *dev) { return false; }
+ static inline
+ int mlx5_eswitch_set_vport_state(struct mlx5_eswitch *esw, u16 vport, int link_state) { return 0; }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+index c99d208722f58..590df9bf39a56 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+@@ -2781,7 +2781,7 @@ err_out:
+ 	return err;
+ }
+ 
+-static void esw_offloads_devcom_init(struct mlx5_eswitch *esw)
++void mlx5_esw_offloads_devcom_init(struct mlx5_eswitch *esw)
+ {
+ 	struct mlx5_devcom *devcom = esw->dev->priv.devcom;
+ 
+@@ -2804,7 +2804,7 @@ static void esw_offloads_devcom_init(struct mlx5_eswitch *esw)
+ 			       ESW_OFFLOADS_DEVCOM_PAIR, esw);
+ }
+ 
+-static void esw_offloads_devcom_cleanup(struct mlx5_eswitch *esw)
++void mlx5_esw_offloads_devcom_cleanup(struct mlx5_eswitch *esw)
+ {
+ 	struct mlx5_devcom *devcom = esw->dev->priv.devcom;
+ 
+@@ -3274,8 +3274,6 @@ int esw_offloads_enable(struct mlx5_eswitch *esw)
+ 	if (err)
+ 		goto err_vports;
+ 
+-	esw_offloads_devcom_init(esw);
+-
+ 	return 0;
+ 
+ err_vports:
+@@ -3316,7 +3314,6 @@ static int esw_offloads_stop(struct mlx5_eswitch *esw,
+ 
+ void esw_offloads_disable(struct mlx5_eswitch *esw)
+ {
+-	esw_offloads_devcom_cleanup(esw);
+ 	mlx5_eswitch_disable_pf_vf_vports(esw);
+ 	esw_offloads_unload_rep(esw, MLX5_VPORT_UPLINK);
+ 	esw_set_passing_vport_metadata(esw, false);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/devcom.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/devcom.c
+index adefde3ea9410..b7d779d08d837 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/devcom.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/devcom.c
+@@ -3,6 +3,7 @@
+ 
+ #include <linux/mlx5/vport.h>
+ #include "lib/devcom.h"
++#include "mlx5_core.h"
+ 
+ static LIST_HEAD(devcom_list);
+ 
+@@ -13,7 +14,7 @@ static LIST_HEAD(devcom_list);
+ 
+ struct mlx5_devcom_component {
+ 	struct {
+-		void *data;
++		void __rcu *data;
+ 	} device[MLX5_DEVCOM_PORTS_SUPPORTED];
+ 
+ 	mlx5_devcom_event_handler_t handler;
+@@ -77,6 +78,7 @@ struct mlx5_devcom *mlx5_devcom_register_device(struct mlx5_core_dev *dev)
+ 	if (MLX5_CAP_GEN(dev, num_lag_ports) != MLX5_DEVCOM_PORTS_SUPPORTED)
+ 		return NULL;
+ 
++	mlx5_dev_list_lock();
+ 	sguid0 = mlx5_query_nic_system_image_guid(dev);
+ 	list_for_each_entry(iter, &devcom_list, list) {
+ 		struct mlx5_core_dev *tmp_dev = NULL;
+@@ -102,8 +104,10 @@ struct mlx5_devcom *mlx5_devcom_register_device(struct mlx5_core_dev *dev)
+ 
+ 	if (!priv) {
+ 		priv = mlx5_devcom_list_alloc();
+-		if (!priv)
+-			return ERR_PTR(-ENOMEM);
++		if (!priv) {
++			devcom = ERR_PTR(-ENOMEM);
++			goto out;
++		}
+ 
+ 		idx = 0;
+ 		new_priv = true;
+@@ -112,13 +116,16 @@ struct mlx5_devcom *mlx5_devcom_register_device(struct mlx5_core_dev *dev)
+ 	priv->devs[idx] = dev;
+ 	devcom = mlx5_devcom_alloc(priv, idx);
+ 	if (!devcom) {
+-		kfree(priv);
+-		return ERR_PTR(-ENOMEM);
++		if (new_priv)
++			kfree(priv);
++		devcom = ERR_PTR(-ENOMEM);
++		goto out;
+ 	}
+ 
+ 	if (new_priv)
+ 		list_add(&priv->list, &devcom_list);
+-
++out:
++	mlx5_dev_list_unlock();
+ 	return devcom;
+ }
+ 
+@@ -131,6 +138,7 @@ void mlx5_devcom_unregister_device(struct mlx5_devcom *devcom)
+ 	if (IS_ERR_OR_NULL(devcom))
+ 		return;
+ 
++	mlx5_dev_list_lock();
+ 	priv = devcom->priv;
+ 	priv->devs[devcom->idx] = NULL;
+ 
+@@ -141,10 +149,12 @@ void mlx5_devcom_unregister_device(struct mlx5_devcom *devcom)
+ 			break;
+ 
+ 	if (i != MLX5_DEVCOM_PORTS_SUPPORTED)
+-		return;
++		goto out;
+ 
+ 	list_del(&priv->list);
+ 	kfree(priv);
++out:
++	mlx5_dev_list_unlock();
+ }
+ 
+ void mlx5_devcom_register_component(struct mlx5_devcom *devcom,
+@@ -162,7 +172,7 @@ void mlx5_devcom_register_component(struct mlx5_devcom *devcom,
+ 	comp = &devcom->priv->components[id];
+ 	down_write(&comp->sem);
+ 	comp->handler = handler;
+-	comp->device[devcom->idx].data = data;
++	rcu_assign_pointer(comp->device[devcom->idx].data, data);
+ 	up_write(&comp->sem);
+ }
+ 
+@@ -176,8 +186,9 @@ void mlx5_devcom_unregister_component(struct mlx5_devcom *devcom,
+ 
+ 	comp = &devcom->priv->components[id];
+ 	down_write(&comp->sem);
+-	comp->device[devcom->idx].data = NULL;
++	RCU_INIT_POINTER(comp->device[devcom->idx].data, NULL);
+ 	up_write(&comp->sem);
++	synchronize_rcu();
+ }
+ 
+ int mlx5_devcom_send_event(struct mlx5_devcom *devcom,
+@@ -193,12 +204,15 @@ int mlx5_devcom_send_event(struct mlx5_devcom *devcom,
+ 
+ 	comp = &devcom->priv->components[id];
+ 	down_write(&comp->sem);
+-	for (i = 0; i < MLX5_DEVCOM_PORTS_SUPPORTED; i++)
+-		if (i != devcom->idx && comp->device[i].data) {
+-			err = comp->handler(event, comp->device[i].data,
+-					    event_data);
++	for (i = 0; i < MLX5_DEVCOM_PORTS_SUPPORTED; i++) {
++		void *data = rcu_dereference_protected(comp->device[i].data,
++						       lockdep_is_held(&comp->sem));
++
++		if (i != devcom->idx && data) {
++			err = comp->handler(event, data, event_data);
+ 			break;
+ 		}
++	}
+ 
+ 	up_write(&comp->sem);
+ 	return err;
+@@ -213,7 +227,7 @@ void mlx5_devcom_set_paired(struct mlx5_devcom *devcom,
+ 	comp = &devcom->priv->components[id];
+ 	WARN_ON(!rwsem_is_locked(&comp->sem));
+ 
+-	comp->paired = paired;
++	WRITE_ONCE(comp->paired, paired);
+ }
+ 
+ bool mlx5_devcom_is_paired(struct mlx5_devcom *devcom,
+@@ -222,7 +236,7 @@ bool mlx5_devcom_is_paired(struct mlx5_devcom *devcom,
+ 	if (IS_ERR_OR_NULL(devcom))
+ 		return false;
+ 
+-	return devcom->priv->components[id].paired;
++	return READ_ONCE(devcom->priv->components[id].paired);
+ }
+ 
+ void *mlx5_devcom_get_peer_data(struct mlx5_devcom *devcom,
+@@ -236,7 +250,7 @@ void *mlx5_devcom_get_peer_data(struct mlx5_devcom *devcom,
+ 
+ 	comp = &devcom->priv->components[id];
+ 	down_read(&comp->sem);
+-	if (!comp->paired) {
++	if (!READ_ONCE(comp->paired)) {
+ 		up_read(&comp->sem);
+ 		return NULL;
+ 	}
+@@ -245,7 +259,29 @@ void *mlx5_devcom_get_peer_data(struct mlx5_devcom *devcom,
+ 		if (i != devcom->idx)
+ 			break;
+ 
+-	return comp->device[i].data;
++	return rcu_dereference_protected(comp->device[i].data, lockdep_is_held(&comp->sem));
++}
++
++void *mlx5_devcom_get_peer_data_rcu(struct mlx5_devcom *devcom, enum mlx5_devcom_components id)
++{
++	struct mlx5_devcom_component *comp;
++	int i;
++
++	if (IS_ERR_OR_NULL(devcom))
++		return NULL;
++
++	for (i = 0; i < MLX5_DEVCOM_PORTS_SUPPORTED; i++)
++		if (i != devcom->idx)
++			break;
++
++	comp = &devcom->priv->components[id];
++	/* This can change concurrently, however 'data' pointer will remain
++	 * valid for the duration of RCU read section.
++	 */
++	if (!READ_ONCE(comp->paired))
++		return NULL;
++
++	return rcu_dereference(comp->device[i].data);
+ }
+ 
+ void mlx5_devcom_release_peer_data(struct mlx5_devcom *devcom,
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/devcom.h b/drivers/net/ethernet/mellanox/mlx5/core/lib/devcom.h
+index 94313c18bb647..9a496f4722dad 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/devcom.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/devcom.h
+@@ -41,6 +41,7 @@ bool mlx5_devcom_is_paired(struct mlx5_devcom *devcom,
+ 
+ void *mlx5_devcom_get_peer_data(struct mlx5_devcom *devcom,
+ 				enum mlx5_devcom_components id);
++void *mlx5_devcom_get_peer_data_rcu(struct mlx5_devcom *devcom, enum mlx5_devcom_components id);
+ void mlx5_devcom_release_peer_data(struct mlx5_devcom *devcom,
+ 				   enum mlx5_devcom_components id);
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+index ad90bf125e94f..62327b52f1acf 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+@@ -1041,7 +1041,7 @@ static int mlx5_init_once(struct mlx5_core_dev *dev)
+ 
+ 	dev->dm = mlx5_dm_create(dev);
+ 	if (IS_ERR(dev->dm))
+-		mlx5_core_warn(dev, "Failed to init device memory%d\n", err);
++		mlx5_core_warn(dev, "Failed to init device memory %ld\n", PTR_ERR(dev->dm));
+ 
+ 	dev->tracer = mlx5_fw_tracer_create(dev);
+ 	dev->hv_vhca = mlx5_hv_vhca_create(dev);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_cmd.c
+index 07b6a6dcb92f0..7fbad87f475df 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_cmd.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_cmd.c
+@@ -117,6 +117,8 @@ int mlx5dr_cmd_query_device(struct mlx5_core_dev *mdev,
+ 	caps->gvmi		= MLX5_CAP_GEN(mdev, vhca_id);
+ 	caps->flex_protocols	= MLX5_CAP_GEN(mdev, flex_parser_protocols);
+ 	caps->sw_format_ver	= MLX5_CAP_GEN(mdev, steering_format_version);
++	caps->roce_caps.fl_rc_qp_when_roce_disabled =
++		MLX5_CAP_GEN(mdev, fl_rc_qp_when_roce_disabled);
+ 
+ 	if (MLX5_CAP_GEN(mdev, roce)) {
+ 		err = dr_cmd_query_nic_vport_roce_en(mdev, 0, &roce_en);
+@@ -124,7 +126,7 @@ int mlx5dr_cmd_query_device(struct mlx5_core_dev *mdev,
+ 			return err;
+ 
+ 		caps->roce_caps.roce_en = roce_en;
+-		caps->roce_caps.fl_rc_qp_when_roce_disabled =
++		caps->roce_caps.fl_rc_qp_when_roce_disabled |=
+ 			MLX5_CAP_ROCE(mdev, fl_rc_qp_when_roce_disabled);
+ 		caps->roce_caps.fl_rc_qp_when_roce_enabled =
+ 			MLX5_CAP_ROCE(mdev, fl_rc_qp_when_roce_enabled);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_ste.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_ste.c
+index 1e15f605df6ef..7d7973090a6b9 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_ste.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_ste.c
+@@ -15,7 +15,8 @@ static u32 dr_ste_crc32_calc(const void *input_data, size_t length)
+ {
+ 	u32 crc = crc32(0, input_data, length);
+ 
+-	return (__force u32)htonl(crc);
++	return (__force u32)((crc >> 24) & 0xff) | ((crc << 8) & 0xff0000) |
++			    ((crc >> 8) & 0xff00) | ((crc << 24) & 0xff000000);
+ }
+ 
+ bool mlx5dr_ste_supp_ttl_cs_recalc(struct mlx5dr_cmd_caps *caps)
+diff --git a/drivers/net/ethernet/microchip/lan966x/lan966x_main.c b/drivers/net/ethernet/microchip/lan966x/lan966x_main.c
+index 685e8cd7658c9..d2e5c9b7ec974 100644
+--- a/drivers/net/ethernet/microchip/lan966x/lan966x_main.c
++++ b/drivers/net/ethernet/microchip/lan966x/lan966x_main.c
+@@ -1013,6 +1013,16 @@ static int lan966x_reset_switch(struct lan966x *lan966x)
+ 
+ 	reset_control_reset(switch_reset);
+ 
++	/* Don't reinitialize the switch core, if it is already initialized. In
++	 * case it is initialized twice, some pointers inside the queue system
++	 * in HW will get corrupted and then after a while the queue system gets
++	 * full and no traffic is passing through the switch. The issue is seen
++	 * when loading and unloading the driver and sending traffic through the
++	 * switch.
++	 */
++	if (lan_rd(lan966x, SYS_RESET_CFG) & SYS_RESET_CFG_CORE_ENA)
++		return 0;
++
+ 	lan_wr(SYS_RESET_CFG_CORE_ENA_SET(0), lan966x, SYS_RESET_CFG);
+ 	lan_wr(SYS_RAM_INIT_RAM_INIT_SET(1), lan966x, SYS_RAM_INIT);
+ 	ret = readx_poll_timeout(lan966x_ram_init, lan966x,
+diff --git a/drivers/net/ethernet/nvidia/forcedeth.c b/drivers/net/ethernet/nvidia/forcedeth.c
+index 0605d1ee490dd..7a549b834e970 100644
+--- a/drivers/net/ethernet/nvidia/forcedeth.c
++++ b/drivers/net/ethernet/nvidia/forcedeth.c
+@@ -6138,6 +6138,7 @@ static int nv_probe(struct pci_dev *pci_dev, const struct pci_device_id *id)
+ 	return 0;
+ 
+ out_error:
++	nv_mgmt_release_sema(dev);
+ 	if (phystate_orig)
+ 		writel(phystate|NVREG_ADAPTCTL_RUNNING, base + NvRegAdapterControl);
+ out_freering:
+diff --git a/drivers/net/phy/mscc/mscc_main.c b/drivers/net/phy/mscc/mscc_main.c
+index 62bf99e45af16..bd81a4b041e52 100644
+--- a/drivers/net/phy/mscc/mscc_main.c
++++ b/drivers/net/phy/mscc/mscc_main.c
+@@ -2656,6 +2656,7 @@ static struct phy_driver vsc85xx_driver[] = {
+ module_phy_driver(vsc85xx_driver);
+ 
+ static struct mdio_device_id __maybe_unused vsc85xx_tbl[] = {
++	{ PHY_ID_VSC8502, 0xfffffff0, },
+ 	{ PHY_ID_VSC8504, 0xfffffff0, },
+ 	{ PHY_ID_VSC8514, 0xfffffff0, },
+ 	{ PHY_ID_VSC8530, 0xfffffff0, },
+diff --git a/drivers/net/team/team.c b/drivers/net/team/team.c
+index d10606f257c43..555b0b1e9a789 100644
+--- a/drivers/net/team/team.c
++++ b/drivers/net/team/team.c
+@@ -1629,6 +1629,7 @@ static int team_init(struct net_device *dev)
+ 
+ 	team->dev = dev;
+ 	team_set_no_mode(team);
++	team->notifier_ctx = false;
+ 
+ 	team->pcpu_stats = netdev_alloc_pcpu_stats(struct team_pcpu_stats);
+ 	if (!team->pcpu_stats)
+@@ -3022,7 +3023,11 @@ static int team_device_event(struct notifier_block *unused,
+ 		team_del_slave(port->team->dev, dev);
+ 		break;
+ 	case NETDEV_FEAT_CHANGE:
+-		team_compute_features(port->team);
++		if (!port->team->notifier_ctx) {
++			port->team->notifier_ctx = true;
++			team_compute_features(port->team);
++			port->team->notifier_ctx = false;
++		}
+ 		break;
+ 	case NETDEV_PRECHANGEMTU:
+ 		/* Forbid to change mtu of underlaying device */
+diff --git a/drivers/net/usb/cdc_ncm.c b/drivers/net/usb/cdc_ncm.c
+index 6ce8f4f0c70e8..db05622f1f703 100644
+--- a/drivers/net/usb/cdc_ncm.c
++++ b/drivers/net/usb/cdc_ncm.c
+@@ -181,9 +181,12 @@ static u32 cdc_ncm_check_tx_max(struct usbnet *dev, u32 new_tx)
+ 	else
+ 		min = ctx->max_datagram_size + ctx->max_ndp_size + sizeof(struct usb_cdc_ncm_nth32);
+ 
+-	max = min_t(u32, CDC_NCM_NTB_MAX_SIZE_TX, le32_to_cpu(ctx->ncm_parm.dwNtbOutMaxSize));
+-	if (max == 0)
++	if (le32_to_cpu(ctx->ncm_parm.dwNtbOutMaxSize) == 0)
+ 		max = CDC_NCM_NTB_MAX_SIZE_TX; /* dwNtbOutMaxSize not set */
++	else
++		max = clamp_t(u32, le32_to_cpu(ctx->ncm_parm.dwNtbOutMaxSize),
++			      USB_CDC_NCM_NTB_MIN_OUT_SIZE,
++			      CDC_NCM_NTB_MAX_SIZE_TX);
+ 
+ 	/* some devices set dwNtbOutMaxSize too low for the above default */
+ 	min = min(min, max);
+@@ -1244,6 +1247,9 @@ cdc_ncm_fill_tx_frame(struct usbnet *dev, struct sk_buff *skb, __le32 sign)
+ 			 * further.
+ 			 */
+ 			if (skb_out == NULL) {
++				/* If even the smallest allocation fails, abort. */
++				if (ctx->tx_curr_size == USB_CDC_NCM_NTB_MIN_OUT_SIZE)
++					goto alloc_failed;
+ 				ctx->tx_low_mem_max_cnt = min(ctx->tx_low_mem_max_cnt + 1,
+ 							      (unsigned)CDC_NCM_LOW_MEM_MAX_CNT);
+ 				ctx->tx_low_mem_val = ctx->tx_low_mem_max_cnt;
+@@ -1262,13 +1268,8 @@ cdc_ncm_fill_tx_frame(struct usbnet *dev, struct sk_buff *skb, __le32 sign)
+ 			skb_out = alloc_skb(ctx->tx_curr_size, GFP_ATOMIC);
+ 
+ 			/* No allocation possible so we will abort */
+-			if (skb_out == NULL) {
+-				if (skb != NULL) {
+-					dev_kfree_skb_any(skb);
+-					dev->net->stats.tx_dropped++;
+-				}
+-				goto exit_no_skb;
+-			}
++			if (!skb_out)
++				goto alloc_failed;
+ 			ctx->tx_low_mem_val--;
+ 		}
+ 		if (ctx->is_ndp16) {
+@@ -1461,6 +1462,11 @@ cdc_ncm_fill_tx_frame(struct usbnet *dev, struct sk_buff *skb, __le32 sign)
+ 
+ 	return skb_out;
+ 
++alloc_failed:
++	if (skb) {
++		dev_kfree_skb_any(skb);
++		dev->net->stats.tx_dropped++;
++	}
+ exit_no_skb:
+ 	/* Start timer, if there is a remaining non-empty skb */
+ 	if (ctx->tx_curr_skb != NULL && n > 0)
+diff --git a/drivers/net/wireless/realtek/rtw89/mac.c b/drivers/net/wireless/realtek/rtw89/mac.c
+index 2e2a2b6eab09d..d0cafe813cdb4 100644
+--- a/drivers/net/wireless/realtek/rtw89/mac.c
++++ b/drivers/net/wireless/realtek/rtw89/mac.c
+@@ -1425,6 +1425,8 @@ const struct rtw89_mac_size_set rtw89_mac_size = {
+ 	.wde_size4 = {RTW89_WDE_PG_64, 0, 4096,},
+ 	/* PCIE 64 */
+ 	.wde_size6 = {RTW89_WDE_PG_64, 512, 0,},
++	/* 8852B PCIE SCC */
++	.wde_size7 = {RTW89_WDE_PG_64, 510, 2,},
+ 	/* DLFW */
+ 	.wde_size9 = {RTW89_WDE_PG_64, 0, 1024,},
+ 	/* 8852C DLFW */
+@@ -1449,6 +1451,8 @@ const struct rtw89_mac_size_set rtw89_mac_size = {
+ 	.wde_qt4 = {0, 0, 0, 0,},
+ 	/* PCIE 64 */
+ 	.wde_qt6 = {448, 48, 0, 16,},
++	/* 8852B PCIE SCC */
++	.wde_qt7 = {446, 48, 0, 16,},
+ 	/* 8852C DLFW */
+ 	.wde_qt17 = {0, 0, 0,  0,},
+ 	/* 8852C PCIE SCC */
+diff --git a/drivers/net/wireless/realtek/rtw89/mac.h b/drivers/net/wireless/realtek/rtw89/mac.h
+index 8064d3953d7f2..85c02c1f36bd7 100644
+--- a/drivers/net/wireless/realtek/rtw89/mac.h
++++ b/drivers/net/wireless/realtek/rtw89/mac.h
+@@ -791,6 +791,7 @@ struct rtw89_mac_size_set {
+ 	const struct rtw89_dle_size wde_size0;
+ 	const struct rtw89_dle_size wde_size4;
+ 	const struct rtw89_dle_size wde_size6;
++	const struct rtw89_dle_size wde_size7;
+ 	const struct rtw89_dle_size wde_size9;
+ 	const struct rtw89_dle_size wde_size18;
+ 	const struct rtw89_dle_size wde_size19;
+@@ -803,6 +804,7 @@ struct rtw89_mac_size_set {
+ 	const struct rtw89_wde_quota wde_qt0;
+ 	const struct rtw89_wde_quota wde_qt4;
+ 	const struct rtw89_wde_quota wde_qt6;
++	const struct rtw89_wde_quota wde_qt7;
+ 	const struct rtw89_wde_quota wde_qt17;
+ 	const struct rtw89_wde_quota wde_qt18;
+ 	const struct rtw89_ple_quota ple_qt4;
+diff --git a/drivers/net/wireless/realtek/rtw89/rtw8852b.c b/drivers/net/wireless/realtek/rtw89/rtw8852b.c
+index 45c374d025cbd..355e515364611 100644
+--- a/drivers/net/wireless/realtek/rtw89/rtw8852b.c
++++ b/drivers/net/wireless/realtek/rtw89/rtw8852b.c
+@@ -13,25 +13,25 @@
+ #include "txrx.h"
+ 
+ static const struct rtw89_hfc_ch_cfg rtw8852b_hfc_chcfg_pcie[] = {
+-	{5, 343, grp_0}, /* ACH 0 */
+-	{5, 343, grp_0}, /* ACH 1 */
+-	{5, 343, grp_0}, /* ACH 2 */
+-	{5, 343, grp_0}, /* ACH 3 */
++	{5, 341, grp_0}, /* ACH 0 */
++	{5, 341, grp_0}, /* ACH 1 */
++	{4, 342, grp_0}, /* ACH 2 */
++	{4, 342, grp_0}, /* ACH 3 */
+ 	{0, 0, grp_0}, /* ACH 4 */
+ 	{0, 0, grp_0}, /* ACH 5 */
+ 	{0, 0, grp_0}, /* ACH 6 */
+ 	{0, 0, grp_0}, /* ACH 7 */
+-	{4, 344, grp_0}, /* B0MGQ */
+-	{4, 344, grp_0}, /* B0HIQ */
++	{4, 342, grp_0}, /* B0MGQ */
++	{4, 342, grp_0}, /* B0HIQ */
+ 	{0, 0, grp_0}, /* B1MGQ */
+ 	{0, 0, grp_0}, /* B1HIQ */
+ 	{40, 0, 0} /* FWCMDQ */
+ };
+ 
+ static const struct rtw89_hfc_pub_cfg rtw8852b_hfc_pubcfg_pcie = {
+-	448, /* Group 0 */
++	446, /* Group 0 */
+ 	0, /* Group 1 */
+-	448, /* Public Max */
++	446, /* Public Max */
+ 	0 /* WP threshold */
+ };
+ 
+@@ -44,9 +44,9 @@ static const struct rtw89_hfc_param_ini rtw8852b_hfc_param_ini_pcie[] = {
+ };
+ 
+ static const struct rtw89_dle_mem rtw8852b_dle_mem_pcie[] = {
+-	[RTW89_QTA_SCC] = {RTW89_QTA_SCC, &rtw89_mac_size.wde_size6,
+-			   &rtw89_mac_size.ple_size6, &rtw89_mac_size.wde_qt6,
+-			   &rtw89_mac_size.wde_qt6, &rtw89_mac_size.ple_qt18,
++	[RTW89_QTA_SCC] = {RTW89_QTA_SCC, &rtw89_mac_size.wde_size7,
++			   &rtw89_mac_size.ple_size6, &rtw89_mac_size.wde_qt7,
++			   &rtw89_mac_size.wde_qt7, &rtw89_mac_size.ple_qt18,
+ 			   &rtw89_mac_size.ple_qt58},
+ 	[RTW89_QTA_DLFW] = {RTW89_QTA_DLFW, &rtw89_mac_size.wde_size9,
+ 			    &rtw89_mac_size.ple_size8, &rtw89_mac_size.wde_qt4,
+diff --git a/drivers/platform/mellanox/mlxbf-pmc.c b/drivers/platform/mellanox/mlxbf-pmc.c
+index c2c9b0d3244cb..be967d797c28e 100644
+--- a/drivers/platform/mellanox/mlxbf-pmc.c
++++ b/drivers/platform/mellanox/mlxbf-pmc.c
+@@ -1348,9 +1348,8 @@ static int mlxbf_pmc_map_counters(struct device *dev)
+ 
+ 	for (i = 0; i < pmc->total_blocks; ++i) {
+ 		if (strstr(pmc->block_name[i], "tile")) {
+-			ret = sscanf(pmc->block_name[i], "tile%d", &tile_num);
+-			if (ret < 0)
+-				return ret;
++			if (sscanf(pmc->block_name[i], "tile%d", &tile_num) != 1)
++				return -EINVAL;
+ 
+ 			if (tile_num >= pmc->tile_count)
+ 				continue;
+diff --git a/drivers/platform/x86/intel/ifs/load.c b/drivers/platform/x86/intel/ifs/load.c
+index c5c24e6fdc436..132ec822ab55d 100644
+--- a/drivers/platform/x86/intel/ifs/load.c
++++ b/drivers/platform/x86/intel/ifs/load.c
+@@ -208,7 +208,7 @@ static int scan_chunks_sanity_check(struct device *dev)
+ 			continue;
+ 		reinit_completion(&ifs_done);
+ 		local_work.dev = dev;
+-		INIT_WORK(&local_work.w, copy_hashes_authenticate_chunks);
++		INIT_WORK_ONSTACK(&local_work.w, copy_hashes_authenticate_chunks);
+ 		schedule_work_on(cpu, &local_work.w);
+ 		wait_for_completion(&ifs_done);
+ 		if (ifsd->loading_error) {
+diff --git a/drivers/platform/x86/intel/speed_select_if/isst_if_common.c b/drivers/platform/x86/intel/speed_select_if/isst_if_common.c
+index 0954a04623edf..aa715d7d62375 100644
+--- a/drivers/platform/x86/intel/speed_select_if/isst_if_common.c
++++ b/drivers/platform/x86/intel/speed_select_if/isst_if_common.c
+@@ -295,14 +295,13 @@ struct isst_if_pkg_info {
+ static struct isst_if_cpu_info *isst_cpu_info;
+ static struct isst_if_pkg_info *isst_pkg_info;
+ 
+-#define ISST_MAX_PCI_DOMAINS	8
+-
+ static struct pci_dev *_isst_if_get_pci_dev(int cpu, int bus_no, int dev, int fn)
+ {
+ 	struct pci_dev *matched_pci_dev = NULL;
+ 	struct pci_dev *pci_dev = NULL;
++	struct pci_dev *_pci_dev = NULL;
+ 	int no_matches = 0, pkg_id;
+-	int i, bus_number;
++	int bus_number;
+ 
+ 	if (bus_no < 0 || bus_no >= ISST_MAX_BUS_NUMBER || cpu < 0 ||
+ 	    cpu >= nr_cpu_ids || cpu >= num_possible_cpus())
+@@ -314,12 +313,11 @@ static struct pci_dev *_isst_if_get_pci_dev(int cpu, int bus_no, int dev, int fn
+ 	if (bus_number < 0)
+ 		return NULL;
+ 
+-	for (i = 0; i < ISST_MAX_PCI_DOMAINS; ++i) {
+-		struct pci_dev *_pci_dev;
++	for_each_pci_dev(_pci_dev) {
+ 		int node;
+ 
+-		_pci_dev = pci_get_domain_bus_and_slot(i, bus_number, PCI_DEVFN(dev, fn));
+-		if (!_pci_dev)
++		if (_pci_dev->bus->number != bus_number ||
++		    _pci_dev->devfn != PCI_DEVFN(dev, fn))
+ 			continue;
+ 
+ 		++no_matches;
+diff --git a/drivers/power/supply/axp288_fuel_gauge.c b/drivers/power/supply/axp288_fuel_gauge.c
+index 05f4131784629..3be6f3b10ea42 100644
+--- a/drivers/power/supply/axp288_fuel_gauge.c
++++ b/drivers/power/supply/axp288_fuel_gauge.c
+@@ -507,7 +507,7 @@ static void fuel_gauge_external_power_changed(struct power_supply *psy)
+ 	mutex_lock(&info->lock);
+ 	info->valid = 0; /* Force updating of the cached registers */
+ 	mutex_unlock(&info->lock);
+-	power_supply_changed(info->bat);
++	power_supply_changed(psy);
+ }
+ 
+ static struct power_supply_desc fuel_gauge_desc = {
+diff --git a/drivers/power/supply/bq24190_charger.c b/drivers/power/supply/bq24190_charger.c
+index de67b985f0a91..dc33f00fcc068 100644
+--- a/drivers/power/supply/bq24190_charger.c
++++ b/drivers/power/supply/bq24190_charger.c
+@@ -1262,6 +1262,7 @@ static void bq24190_input_current_limit_work(struct work_struct *work)
+ 	bq24190_charger_set_property(bdi->charger,
+ 				     POWER_SUPPLY_PROP_INPUT_CURRENT_LIMIT,
+ 				     &val);
++	power_supply_changed(bdi->charger);
+ }
+ 
+ /* Sync the input-current-limit with our parent supply (if we have one) */
+diff --git a/drivers/power/supply/bq25890_charger.c b/drivers/power/supply/bq25890_charger.c
+index bfe08d7bfaf30..75cf2449abd97 100644
+--- a/drivers/power/supply/bq25890_charger.c
++++ b/drivers/power/supply/bq25890_charger.c
+@@ -750,7 +750,7 @@ static void bq25890_charger_external_power_changed(struct power_supply *psy)
+ 	if (bq->chip_version != BQ25892)
+ 		return;
+ 
+-	ret = power_supply_get_property_from_supplier(bq->charger,
++	ret = power_supply_get_property_from_supplier(psy,
+ 						      POWER_SUPPLY_PROP_USB_TYPE,
+ 						      &val);
+ 	if (ret)
+@@ -775,6 +775,7 @@ static void bq25890_charger_external_power_changed(struct power_supply *psy)
+ 	}
+ 
+ 	bq25890_field_write(bq, F_IINLIM, input_current_limit);
++	power_supply_changed(psy);
+ }
+ 
+ static int bq25890_get_chip_state(struct bq25890_device *bq,
+@@ -1106,6 +1107,8 @@ static void bq25890_pump_express_work(struct work_struct *data)
+ 	dev_info(bq->dev, "Hi-voltage charging requested, input voltage is %d mV\n",
+ 		 voltage);
+ 
++	power_supply_changed(bq->charger);
++
+ 	return;
+ error_print:
+ 	bq25890_field_write(bq, F_PUMPX_EN, 0);
+diff --git a/drivers/power/supply/bq27xxx_battery.c b/drivers/power/supply/bq27xxx_battery.c
+index 5ff6f44fd47b2..929e813b9c443 100644
+--- a/drivers/power/supply/bq27xxx_battery.c
++++ b/drivers/power/supply/bq27xxx_battery.c
+@@ -1761,60 +1761,6 @@ static int bq27xxx_battery_read_health(struct bq27xxx_device_info *di)
+ 	return POWER_SUPPLY_HEALTH_GOOD;
+ }
+ 
+-void bq27xxx_battery_update(struct bq27xxx_device_info *di)
+-{
+-	struct bq27xxx_reg_cache cache = {0, };
+-	bool has_singe_flag = di->opts & BQ27XXX_O_ZERO;
+-
+-	cache.flags = bq27xxx_read(di, BQ27XXX_REG_FLAGS, has_singe_flag);
+-	if ((cache.flags & 0xff) == 0xff)
+-		cache.flags = -1; /* read error */
+-	if (cache.flags >= 0) {
+-		cache.temperature = bq27xxx_battery_read_temperature(di);
+-		if (di->regs[BQ27XXX_REG_TTE] != INVALID_REG_ADDR)
+-			cache.time_to_empty = bq27xxx_battery_read_time(di, BQ27XXX_REG_TTE);
+-		if (di->regs[BQ27XXX_REG_TTECP] != INVALID_REG_ADDR)
+-			cache.time_to_empty_avg = bq27xxx_battery_read_time(di, BQ27XXX_REG_TTECP);
+-		if (di->regs[BQ27XXX_REG_TTF] != INVALID_REG_ADDR)
+-			cache.time_to_full = bq27xxx_battery_read_time(di, BQ27XXX_REG_TTF);
+-
+-		cache.charge_full = bq27xxx_battery_read_fcc(di);
+-		cache.capacity = bq27xxx_battery_read_soc(di);
+-		if (di->regs[BQ27XXX_REG_AE] != INVALID_REG_ADDR)
+-			cache.energy = bq27xxx_battery_read_energy(di);
+-		di->cache.flags = cache.flags;
+-		cache.health = bq27xxx_battery_read_health(di);
+-		if (di->regs[BQ27XXX_REG_CYCT] != INVALID_REG_ADDR)
+-			cache.cycle_count = bq27xxx_battery_read_cyct(di);
+-
+-		/* We only have to read charge design full once */
+-		if (di->charge_design_full <= 0)
+-			di->charge_design_full = bq27xxx_battery_read_dcap(di);
+-	}
+-
+-	if ((di->cache.capacity != cache.capacity) ||
+-	    (di->cache.flags != cache.flags))
+-		power_supply_changed(di->bat);
+-
+-	if (memcmp(&di->cache, &cache, sizeof(cache)) != 0)
+-		di->cache = cache;
+-
+-	di->last_update = jiffies;
+-}
+-EXPORT_SYMBOL_GPL(bq27xxx_battery_update);
+-
+-static void bq27xxx_battery_poll(struct work_struct *work)
+-{
+-	struct bq27xxx_device_info *di =
+-			container_of(work, struct bq27xxx_device_info,
+-				     work.work);
+-
+-	bq27xxx_battery_update(di);
+-
+-	if (poll_interval > 0)
+-		schedule_delayed_work(&di->work, poll_interval * HZ);
+-}
+-
+ static bool bq27xxx_battery_is_full(struct bq27xxx_device_info *di, int flags)
+ {
+ 	if (di->opts & BQ27XXX_O_ZERO)
+@@ -1833,7 +1779,8 @@ static bool bq27xxx_battery_is_full(struct bq27xxx_device_info *di, int flags)
+ static int bq27xxx_battery_current_and_status(
+ 	struct bq27xxx_device_info *di,
+ 	union power_supply_propval *val_curr,
+-	union power_supply_propval *val_status)
++	union power_supply_propval *val_status,
++	struct bq27xxx_reg_cache *cache)
+ {
+ 	bool single_flags = (di->opts & BQ27XXX_O_ZERO);
+ 	int curr;
+@@ -1845,10 +1792,14 @@ static int bq27xxx_battery_current_and_status(
+ 		return curr;
+ 	}
+ 
+-	flags = bq27xxx_read(di, BQ27XXX_REG_FLAGS, single_flags);
+-	if (flags < 0) {
+-		dev_err(di->dev, "error reading flags\n");
+-		return flags;
++	if (cache) {
++		flags = cache->flags;
++	} else {
++		flags = bq27xxx_read(di, BQ27XXX_REG_FLAGS, single_flags);
++		if (flags < 0) {
++			dev_err(di->dev, "error reading flags\n");
++			return flags;
++		}
+ 	}
+ 
+ 	if (di->opts & BQ27XXX_O_ZERO) {
+@@ -1883,6 +1834,78 @@ static int bq27xxx_battery_current_and_status(
+ 	return 0;
+ }
+ 
++static void bq27xxx_battery_update_unlocked(struct bq27xxx_device_info *di)
++{
++	union power_supply_propval status = di->last_status;
++	struct bq27xxx_reg_cache cache = {0, };
++	bool has_singe_flag = di->opts & BQ27XXX_O_ZERO;
++
++	cache.flags = bq27xxx_read(di, BQ27XXX_REG_FLAGS, has_singe_flag);
++	if ((cache.flags & 0xff) == 0xff)
++		cache.flags = -1; /* read error */
++	if (cache.flags >= 0) {
++		cache.temperature = bq27xxx_battery_read_temperature(di);
++		if (di->regs[BQ27XXX_REG_TTE] != INVALID_REG_ADDR)
++			cache.time_to_empty = bq27xxx_battery_read_time(di, BQ27XXX_REG_TTE);
++		if (di->regs[BQ27XXX_REG_TTECP] != INVALID_REG_ADDR)
++			cache.time_to_empty_avg = bq27xxx_battery_read_time(di, BQ27XXX_REG_TTECP);
++		if (di->regs[BQ27XXX_REG_TTF] != INVALID_REG_ADDR)
++			cache.time_to_full = bq27xxx_battery_read_time(di, BQ27XXX_REG_TTF);
++
++		cache.charge_full = bq27xxx_battery_read_fcc(di);
++		cache.capacity = bq27xxx_battery_read_soc(di);
++		if (di->regs[BQ27XXX_REG_AE] != INVALID_REG_ADDR)
++			cache.energy = bq27xxx_battery_read_energy(di);
++		di->cache.flags = cache.flags;
++		cache.health = bq27xxx_battery_read_health(di);
++		if (di->regs[BQ27XXX_REG_CYCT] != INVALID_REG_ADDR)
++			cache.cycle_count = bq27xxx_battery_read_cyct(di);
++
++		/*
++		 * On gauges with signed current reporting the current must be
++		 * checked to detect charging <-> discharging status changes.
++		 */
++		if (!(di->opts & BQ27XXX_O_ZERO))
++			bq27xxx_battery_current_and_status(di, NULL, &status, &cache);
++
++		/* We only have to read charge design full once */
++		if (di->charge_design_full <= 0)
++			di->charge_design_full = bq27xxx_battery_read_dcap(di);
++	}
++
++	if ((di->cache.capacity != cache.capacity) ||
++	    (di->cache.flags != cache.flags) ||
++	    (di->last_status.intval != status.intval)) {
++		di->last_status.intval = status.intval;
++		power_supply_changed(di->bat);
++	}
++
++	if (memcmp(&di->cache, &cache, sizeof(cache)) != 0)
++		di->cache = cache;
++
++	di->last_update = jiffies;
++
++	if (!di->removed && poll_interval > 0)
++		mod_delayed_work(system_wq, &di->work, poll_interval * HZ);
++}
++
++void bq27xxx_battery_update(struct bq27xxx_device_info *di)
++{
++	mutex_lock(&di->lock);
++	bq27xxx_battery_update_unlocked(di);
++	mutex_unlock(&di->lock);
++}
++EXPORT_SYMBOL_GPL(bq27xxx_battery_update);
++
++static void bq27xxx_battery_poll(struct work_struct *work)
++{
++	struct bq27xxx_device_info *di =
++			container_of(work, struct bq27xxx_device_info,
++				     work.work);
++
++	bq27xxx_battery_update(di);
++}
++
+ /*
+  * Get the average power in µW
+  * Return < 0 if something fails.
+@@ -1985,10 +2008,8 @@ static int bq27xxx_battery_get_property(struct power_supply *psy,
+ 	struct bq27xxx_device_info *di = power_supply_get_drvdata(psy);
+ 
+ 	mutex_lock(&di->lock);
+-	if (time_is_before_jiffies(di->last_update + 5 * HZ)) {
+-		cancel_delayed_work_sync(&di->work);
+-		bq27xxx_battery_poll(&di->work.work);
+-	}
++	if (time_is_before_jiffies(di->last_update + 5 * HZ))
++		bq27xxx_battery_update_unlocked(di);
+ 	mutex_unlock(&di->lock);
+ 
+ 	if (psp != POWER_SUPPLY_PROP_PRESENT && di->cache.flags < 0)
+@@ -1996,7 +2017,7 @@ static int bq27xxx_battery_get_property(struct power_supply *psy,
+ 
+ 	switch (psp) {
+ 	case POWER_SUPPLY_PROP_STATUS:
+-		ret = bq27xxx_battery_current_and_status(di, NULL, val);
++		ret = bq27xxx_battery_current_and_status(di, NULL, val, NULL);
+ 		break;
+ 	case POWER_SUPPLY_PROP_VOLTAGE_NOW:
+ 		ret = bq27xxx_battery_voltage(di, val);
+@@ -2005,7 +2026,7 @@ static int bq27xxx_battery_get_property(struct power_supply *psy,
+ 		val->intval = di->cache.flags < 0 ? 0 : 1;
+ 		break;
+ 	case POWER_SUPPLY_PROP_CURRENT_NOW:
+-		ret = bq27xxx_battery_current_and_status(di, val, NULL);
++		ret = bq27xxx_battery_current_and_status(di, val, NULL, NULL);
+ 		break;
+ 	case POWER_SUPPLY_PROP_CAPACITY:
+ 		ret = bq27xxx_simple_value(di->cache.capacity, val);
+@@ -2078,8 +2099,8 @@ static void bq27xxx_external_power_changed(struct power_supply *psy)
+ {
+ 	struct bq27xxx_device_info *di = power_supply_get_drvdata(psy);
+ 
+-	cancel_delayed_work_sync(&di->work);
+-	schedule_delayed_work(&di->work, 0);
++	/* After charger plug in/out wait 0.5s for things to stabilize */
++	mod_delayed_work(system_wq, &di->work, HZ / 2);
+ }
+ 
+ int bq27xxx_battery_setup(struct bq27xxx_device_info *di)
+@@ -2127,22 +2148,18 @@ EXPORT_SYMBOL_GPL(bq27xxx_battery_setup);
+ 
+ void bq27xxx_battery_teardown(struct bq27xxx_device_info *di)
+ {
+-	/*
+-	 * power_supply_unregister call bq27xxx_battery_get_property which
+-	 * call bq27xxx_battery_poll.
+-	 * Make sure that bq27xxx_battery_poll will not call
+-	 * schedule_delayed_work again after unregister (which cause OOPS).
+-	 */
+-	poll_interval = 0;
+-
+-	cancel_delayed_work_sync(&di->work);
+-
+-	power_supply_unregister(di->bat);
+-
+ 	mutex_lock(&bq27xxx_list_lock);
+ 	list_del(&di->list);
+ 	mutex_unlock(&bq27xxx_list_lock);
+ 
++	/* Set removed to avoid bq27xxx_battery_update() re-queuing the work */
++	mutex_lock(&di->lock);
++	di->removed = true;
++	mutex_unlock(&di->lock);
++
++	cancel_delayed_work_sync(&di->work);
++
++	power_supply_unregister(di->bat);
+ 	mutex_destroy(&di->lock);
+ }
+ EXPORT_SYMBOL_GPL(bq27xxx_battery_teardown);
+diff --git a/drivers/power/supply/bq27xxx_battery_i2c.c b/drivers/power/supply/bq27xxx_battery_i2c.c
+index f8768997333bc..6d3c748763390 100644
+--- a/drivers/power/supply/bq27xxx_battery_i2c.c
++++ b/drivers/power/supply/bq27xxx_battery_i2c.c
+@@ -179,7 +179,7 @@ static int bq27xxx_battery_i2c_probe(struct i2c_client *client)
+ 	i2c_set_clientdata(client, di);
+ 
+ 	if (client->irq) {
+-		ret = devm_request_threaded_irq(&client->dev, client->irq,
++		ret = request_threaded_irq(client->irq,
+ 				NULL, bq27xxx_battery_irq_handler_thread,
+ 				IRQF_ONESHOT,
+ 				di->name, di);
+@@ -209,6 +209,7 @@ static void bq27xxx_battery_i2c_remove(struct i2c_client *client)
+ {
+ 	struct bq27xxx_device_info *di = i2c_get_clientdata(client);
+ 
++	free_irq(client->irq, di);
+ 	bq27xxx_battery_teardown(di);
+ 
+ 	mutex_lock(&battery_mutex);
+diff --git a/drivers/power/supply/mt6360_charger.c b/drivers/power/supply/mt6360_charger.c
+index 92e48e3a48536..1305cba61edd4 100644
+--- a/drivers/power/supply/mt6360_charger.c
++++ b/drivers/power/supply/mt6360_charger.c
+@@ -796,7 +796,9 @@ static int mt6360_charger_probe(struct platform_device *pdev)
+ 	mci->vinovp = 6500000;
+ 	mutex_init(&mci->chgdet_lock);
+ 	platform_set_drvdata(pdev, mci);
+-	devm_work_autocancel(&pdev->dev, &mci->chrdet_work, mt6360_chrdet_work);
++	ret = devm_work_autocancel(&pdev->dev, &mci->chrdet_work, mt6360_chrdet_work);
++	if (ret)
++		return dev_err_probe(&pdev->dev, ret, "Failed to set delayed work\n");
+ 
+ 	ret = device_property_read_u32(&pdev->dev, "richtek,vinovp-microvolt", &mci->vinovp);
+ 	if (ret)
+diff --git a/drivers/power/supply/power_supply_leds.c b/drivers/power/supply/power_supply_leds.c
+index 702bf83f6e6d2..0674483279d77 100644
+--- a/drivers/power/supply/power_supply_leds.c
++++ b/drivers/power/supply/power_supply_leds.c
+@@ -35,8 +35,9 @@ static void power_supply_update_bat_leds(struct power_supply *psy)
+ 		led_trigger_event(psy->charging_full_trig, LED_FULL);
+ 		led_trigger_event(psy->charging_trig, LED_OFF);
+ 		led_trigger_event(psy->full_trig, LED_FULL);
+-		led_trigger_event(psy->charging_blink_full_solid_trig,
+-			LED_FULL);
++		/* Going from blink to LED on requires a LED_OFF event to stop blink */
++		led_trigger_event(psy->charging_blink_full_solid_trig, LED_OFF);
++		led_trigger_event(psy->charging_blink_full_solid_trig, LED_FULL);
+ 		break;
+ 	case POWER_SUPPLY_STATUS_CHARGING:
+ 		led_trigger_event(psy->charging_full_trig, LED_FULL);
+diff --git a/drivers/power/supply/sbs-charger.c b/drivers/power/supply/sbs-charger.c
+index 75ebcbf0a7883..a14e89ac0369c 100644
+--- a/drivers/power/supply/sbs-charger.c
++++ b/drivers/power/supply/sbs-charger.c
+@@ -24,7 +24,7 @@
+ #define SBS_CHARGER_REG_STATUS			0x13
+ #define SBS_CHARGER_REG_ALARM_WARNING		0x16
+ 
+-#define SBS_CHARGER_STATUS_CHARGE_INHIBITED	BIT(1)
++#define SBS_CHARGER_STATUS_CHARGE_INHIBITED	BIT(0)
+ #define SBS_CHARGER_STATUS_RES_COLD		BIT(9)
+ #define SBS_CHARGER_STATUS_RES_HOT		BIT(10)
+ #define SBS_CHARGER_STATUS_BATTERY_PRESENT	BIT(14)
+diff --git a/drivers/regulator/mt6359-regulator.c b/drivers/regulator/mt6359-regulator.c
+index de3b0462832cd..f94f87c5407ae 100644
+--- a/drivers/regulator/mt6359-regulator.c
++++ b/drivers/regulator/mt6359-regulator.c
+@@ -951,9 +951,12 @@ static int mt6359_regulator_probe(struct platform_device *pdev)
+ 	struct regulator_config config = {};
+ 	struct regulator_dev *rdev;
+ 	struct mt6359_regulator_info *mt6359_info;
+-	int i, hw_ver;
++	int i, hw_ver, ret;
++
++	ret = regmap_read(mt6397->regmap, MT6359P_HWCID, &hw_ver);
++	if (ret)
++		return ret;
+ 
+-	regmap_read(mt6397->regmap, MT6359P_HWCID, &hw_ver);
+ 	if (hw_ver >= MT6359P_CHIP_VER)
+ 		mt6359_info = mt6359p_regulators;
+ 	else
+diff --git a/drivers/regulator/pca9450-regulator.c b/drivers/regulator/pca9450-regulator.c
+index c6351fac9f4d9..aeee2654ad8bb 100644
+--- a/drivers/regulator/pca9450-regulator.c
++++ b/drivers/regulator/pca9450-regulator.c
+@@ -264,7 +264,7 @@ static const struct pca9450_regulator_desc pca9450a_regulators[] = {
+ 			.vsel_reg = PCA9450_REG_BUCK2OUT_DVS0,
+ 			.vsel_mask = BUCK2OUT_DVS0_MASK,
+ 			.enable_reg = PCA9450_REG_BUCK2CTRL,
+-			.enable_mask = BUCK1_ENMODE_MASK,
++			.enable_mask = BUCK2_ENMODE_MASK,
+ 			.ramp_reg = PCA9450_REG_BUCK2CTRL,
+ 			.ramp_mask = BUCK2_RAMP_MASK,
+ 			.ramp_delay_table = pca9450_dvs_buck_ramp_table,
+@@ -502,7 +502,7 @@ static const struct pca9450_regulator_desc pca9450bc_regulators[] = {
+ 			.vsel_reg = PCA9450_REG_BUCK2OUT_DVS0,
+ 			.vsel_mask = BUCK2OUT_DVS0_MASK,
+ 			.enable_reg = PCA9450_REG_BUCK2CTRL,
+-			.enable_mask = BUCK1_ENMODE_MASK,
++			.enable_mask = BUCK2_ENMODE_MASK,
+ 			.ramp_reg = PCA9450_REG_BUCK2CTRL,
+ 			.ramp_mask = BUCK2_RAMP_MASK,
+ 			.ramp_delay_table = pca9450_dvs_buck_ramp_table,
+diff --git a/drivers/tee/optee/smc_abi.c b/drivers/tee/optee/smc_abi.c
+index a1c1fa1a9c28a..e6e0428f8e7be 100644
+--- a/drivers/tee/optee/smc_abi.c
++++ b/drivers/tee/optee/smc_abi.c
+@@ -984,8 +984,10 @@ static u32 get_async_notif_value(optee_invoke_fn *invoke_fn, bool *value_valid,
+ 
+ 	invoke_fn(OPTEE_SMC_GET_ASYNC_NOTIF_VALUE, 0, 0, 0, 0, 0, 0, 0, &res);
+ 
+-	if (res.a0)
++	if (res.a0) {
++		*value_valid = false;
+ 		return 0;
++	}
+ 	*value_valid = (res.a2 & OPTEE_SMC_ASYNC_NOTIF_VALUE_VALID);
+ 	*value_pending = (res.a2 & OPTEE_SMC_ASYNC_NOTIF_VALUE_PENDING);
+ 	return res.a1;
+diff --git a/drivers/thermal/intel/int340x_thermal/int3400_thermal.c b/drivers/thermal/intel/int340x_thermal/int3400_thermal.c
+index d0295123cc3e4..33429cdaf7a3c 100644
+--- a/drivers/thermal/intel/int340x_thermal/int3400_thermal.c
++++ b/drivers/thermal/intel/int340x_thermal/int3400_thermal.c
+@@ -131,7 +131,7 @@ static ssize_t available_uuids_show(struct device *dev,
+ 
+ 	for (i = 0; i < INT3400_THERMAL_MAXIMUM_UUID; i++) {
+ 		if (priv->uuid_bitmap & (1 << i))
+-			length += sysfs_emit_at(buf, length, int3400_thermal_uuids[i]);
++			length += sysfs_emit_at(buf, length, "%s\n", int3400_thermal_uuids[i]);
+ 	}
+ 
+ 	return length;
+@@ -149,7 +149,7 @@ static ssize_t current_uuid_show(struct device *dev,
+ 
+ 	for (i = 0; i <= INT3400_THERMAL_CRITICAL; i++) {
+ 		if (priv->os_uuid_mask & BIT(i))
+-			length += sysfs_emit_at(buf, length, int3400_thermal_uuids[i]);
++			length += sysfs_emit_at(buf, length, "%s\n", int3400_thermal_uuids[i]);
+ 	}
+ 
+ 	if (length)
+diff --git a/drivers/usb/core/usb.c b/drivers/usb/core/usb.c
+index 34742fbbd84d3..901ec732321c5 100644
+--- a/drivers/usb/core/usb.c
++++ b/drivers/usb/core/usb.c
+@@ -206,6 +206,82 @@ int usb_find_common_endpoints_reverse(struct usb_host_interface *alt,
+ }
+ EXPORT_SYMBOL_GPL(usb_find_common_endpoints_reverse);
+ 
++/**
++ * usb_find_endpoint() - Given an endpoint address, search for the endpoint's
++ * usb_host_endpoint structure in an interface's current altsetting.
++ * @intf: the interface whose current altsetting should be searched
++ * @ep_addr: the endpoint address (number and direction) to find
++ *
++ * Search the altsetting's list of endpoints for one with the specified address.
++ *
++ * Return: Pointer to the usb_host_endpoint if found, %NULL otherwise.
++ */
++static const struct usb_host_endpoint *usb_find_endpoint(
++		const struct usb_interface *intf, unsigned int ep_addr)
++{
++	int n;
++	const struct usb_host_endpoint *ep;
++
++	n = intf->cur_altsetting->desc.bNumEndpoints;
++	ep = intf->cur_altsetting->endpoint;
++	for (; n > 0; (--n, ++ep)) {
++		if (ep->desc.bEndpointAddress == ep_addr)
++			return ep;
++	}
++	return NULL;
++}
++
++/**
++ * usb_check_bulk_endpoints - Check whether an interface's current altsetting
++ * contains a set of bulk endpoints with the given addresses.
++ * @intf: the interface whose current altsetting should be searched
++ * @ep_addrs: 0-terminated array of the endpoint addresses (number and
++ * direction) to look for
++ *
++ * Search for endpoints with the specified addresses and check their types.
++ *
++ * Return: %true if all the endpoints are found and are bulk, %false otherwise.
++ */
++bool usb_check_bulk_endpoints(
++		const struct usb_interface *intf, const u8 *ep_addrs)
++{
++	const struct usb_host_endpoint *ep;
++
++	for (; *ep_addrs; ++ep_addrs) {
++		ep = usb_find_endpoint(intf, *ep_addrs);
++		if (!ep || !usb_endpoint_xfer_bulk(&ep->desc))
++			return false;
++	}
++	return true;
++}
++EXPORT_SYMBOL_GPL(usb_check_bulk_endpoints);
++
++/**
++ * usb_check_int_endpoints - Check whether an interface's current altsetting
++ * contains a set of interrupt endpoints with the given addresses.
++ * @intf: the interface whose current altsetting should be searched
++ * @ep_addrs: 0-terminated array of the endpoint addresses (number and
++ * direction) to look for
++ *
++ * Search for endpoints with the specified addresses and check their types.
++ *
++ * Return: %true if all the endpoints are found and are interrupt,
++ * %false otherwise.
++ */
++bool usb_check_int_endpoints(
++		const struct usb_interface *intf, const u8 *ep_addrs)
++{
++	const struct usb_host_endpoint *ep;
++
++	for (; *ep_addrs; ++ep_addrs) {
++		ep = usb_find_endpoint(intf, *ep_addrs);
++		if (!ep || !usb_endpoint_xfer_int(&ep->desc))
++			return false;
++	}
++	return true;
++}
++EXPORT_SYMBOL_GPL(usb_check_int_endpoints);
++
+ /**
+  * usb_find_alt_setting() - Given a configuration, find the alternate setting
+  * for the given interface.
+diff --git a/drivers/usb/dwc3/core.h b/drivers/usb/dwc3/core.h
+index 4743e918dcafa..54ce9907a408c 100644
+--- a/drivers/usb/dwc3/core.h
++++ b/drivers/usb/dwc3/core.h
+@@ -1110,6 +1110,7 @@ struct dwc3_scratchpad_array {
+  *	3	- Reserved
+  * @dis_metastability_quirk: set to disable metastability quirk.
+  * @dis_split_quirk: set to disable split boundary.
++ * @suspended: set to track suspend event due to U3/L2.
+  * @imod_interval: set the interrupt moderation interval in 250ns
+  *			increments or 0 to disable.
+  * @max_cfg_eps: current max number of IN eps used across all USB configs.
+@@ -1327,6 +1328,7 @@ struct dwc3 {
+ 
+ 	unsigned		dis_split_quirk:1;
+ 	unsigned		async_callbacks:1;
++	unsigned		suspended:1;
+ 
+ 	u16			imod_interval;
+ 
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 80ae308448451..1cf352fbe7039 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -3838,6 +3838,8 @@ static void dwc3_gadget_disconnect_interrupt(struct dwc3 *dwc)
+ {
+ 	int			reg;
+ 
++	dwc->suspended = false;
++
+ 	dwc3_gadget_set_link_state(dwc, DWC3_LINK_STATE_RX_DET);
+ 
+ 	reg = dwc3_readl(dwc->regs, DWC3_DCTL);
+@@ -3869,6 +3871,8 @@ static void dwc3_gadget_reset_interrupt(struct dwc3 *dwc)
+ {
+ 	u32			reg;
+ 
++	dwc->suspended = false;
++
+ 	/*
+ 	 * Ideally, dwc3_reset_gadget() would trigger the function
+ 	 * drivers to stop any active transfers through ep disable.
+@@ -4098,6 +4102,8 @@ static void dwc3_gadget_conndone_interrupt(struct dwc3 *dwc)
+ 
+ static void dwc3_gadget_wakeup_interrupt(struct dwc3 *dwc)
+ {
++	dwc->suspended = false;
++
+ 	/*
+ 	 * TODO take core out of low power mode when that's
+ 	 * implemented.
+@@ -4213,8 +4219,10 @@ static void dwc3_gadget_suspend_interrupt(struct dwc3 *dwc,
+ {
+ 	enum dwc3_link_state next = evtinfo & DWC3_LINK_STATE_MASK;
+ 
+-	if (dwc->link_state != next && next == DWC3_LINK_STATE_U3)
++	if (!dwc->suspended && next == DWC3_LINK_STATE_U3) {
++		dwc->suspended = true;
+ 		dwc3_suspend_gadget(dwc);
++	}
+ 
+ 	dwc->link_state = next;
+ }
+diff --git a/drivers/usb/misc/sisusbvga/sisusbvga.c b/drivers/usb/misc/sisusbvga/sisusbvga.c
+index 654a79fd3231e..febf34f9f0499 100644
+--- a/drivers/usb/misc/sisusbvga/sisusbvga.c
++++ b/drivers/usb/misc/sisusbvga/sisusbvga.c
+@@ -2778,6 +2778,20 @@ static int sisusb_probe(struct usb_interface *intf,
+ 	struct usb_device *dev = interface_to_usbdev(intf);
+ 	struct sisusb_usb_data *sisusb;
+ 	int retval = 0, i;
++	static const u8 ep_addresses[] = {
++		SISUSB_EP_GFX_IN | USB_DIR_IN,
++		SISUSB_EP_GFX_OUT | USB_DIR_OUT,
++		SISUSB_EP_GFX_BULK_OUT | USB_DIR_OUT,
++		SISUSB_EP_GFX_LBULK_OUT | USB_DIR_OUT,
++		SISUSB_EP_BRIDGE_IN | USB_DIR_IN,
++		SISUSB_EP_BRIDGE_OUT | USB_DIR_OUT,
++		0};
++
++	/* Are the expected endpoints present? */
++	if (!usb_check_bulk_endpoints(intf, ep_addresses)) {
++		dev_err(&intf->dev, "Invalid USB2VGA device\n");
++		return -EINVAL;
++	}
+ 
+ 	dev_info(&dev->dev, "USB2VGA dongle found at address %d\n",
+ 			dev->devnum);
+diff --git a/drivers/video/fbdev/udlfb.c b/drivers/video/fbdev/udlfb.c
+index 216d49c9d47e5..256d9b61f4eaa 100644
+--- a/drivers/video/fbdev/udlfb.c
++++ b/drivers/video/fbdev/udlfb.c
+@@ -27,6 +27,8 @@
+ #include <video/udlfb.h>
+ #include "edid.h"
+ 
++#define OUT_EP_NUM	1	/* The endpoint number we will use */
++
+ static const struct fb_fix_screeninfo dlfb_fix = {
+ 	.id =           "udlfb",
+ 	.type =         FB_TYPE_PACKED_PIXELS,
+@@ -1652,7 +1654,7 @@ static int dlfb_usb_probe(struct usb_interface *intf,
+ 	struct fb_info *info;
+ 	int retval;
+ 	struct usb_device *usbdev = interface_to_usbdev(intf);
+-	struct usb_endpoint_descriptor *out;
++	static u8 out_ep[] = {OUT_EP_NUM + USB_DIR_OUT, 0};
+ 
+ 	/* usb initialization */
+ 	dlfb = kzalloc(sizeof(*dlfb), GFP_KERNEL);
+@@ -1666,9 +1668,9 @@ static int dlfb_usb_probe(struct usb_interface *intf,
+ 	dlfb->udev = usb_get_dev(usbdev);
+ 	usb_set_intfdata(intf, dlfb);
+ 
+-	retval = usb_find_common_endpoints(intf->cur_altsetting, NULL, &out, NULL, NULL);
+-	if (retval) {
+-		dev_err(&intf->dev, "Device should have at lease 1 bulk endpoint!\n");
++	if (!usb_check_bulk_endpoints(intf, out_ep)) {
++		dev_err(&intf->dev, "Invalid DisplayLink device!\n");
++		retval = -EINVAL;
+ 		goto error;
+ 	}
+ 
+@@ -1927,7 +1929,8 @@ retry:
+ 		}
+ 
+ 		/* urb->transfer_buffer_length set to actual before submit */
+-		usb_fill_bulk_urb(urb, dlfb->udev, usb_sndbulkpipe(dlfb->udev, 1),
++		usb_fill_bulk_urb(urb, dlfb->udev,
++			usb_sndbulkpipe(dlfb->udev, OUT_EP_NUM),
+ 			buf, size, dlfb_urb_completion, unode);
+ 		urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;
+ 
+diff --git a/drivers/watchdog/sp5100_tco.c b/drivers/watchdog/sp5100_tco.c
+index fb426b7d81dac..14f8d8d90920f 100644
+--- a/drivers/watchdog/sp5100_tco.c
++++ b/drivers/watchdog/sp5100_tco.c
+@@ -115,6 +115,10 @@ static int tco_timer_start(struct watchdog_device *wdd)
+ 	val |= SP5100_WDT_START_STOP_BIT;
+ 	writel(val, SP5100_WDT_CONTROL(tco->tcobase));
+ 
++	/* This must be a distinct write. */
++	val |= SP5100_WDT_TRIGGER_BIT;
++	writel(val, SP5100_WDT_CONTROL(tco->tcobase));
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
+index 1f5219e12cc36..7beaf2c41fbbb 100644
+--- a/drivers/xen/pvcalls-back.c
++++ b/drivers/xen/pvcalls-back.c
+@@ -325,8 +325,10 @@ static struct sock_mapping *pvcalls_new_active_socket(
+ 	void *page;
+ 
+ 	map = kzalloc(sizeof(*map), GFP_KERNEL);
+-	if (map == NULL)
++	if (map == NULL) {
++		sock_release(sock);
+ 		return NULL;
++	}
+ 
+ 	map->fedata = fedata;
+ 	map->sock = sock;
+@@ -418,10 +420,8 @@ static int pvcalls_back_connect(struct xenbus_device *dev,
+ 					req->u.connect.ref,
+ 					req->u.connect.evtchn,
+ 					sock);
+-	if (!map) {
++	if (!map)
+ 		ret = -EFAULT;
+-		sock_release(sock);
+-	}
+ 
+ out:
+ 	rsp = RING_GET_RESPONSE(&fedata->ring, fedata->ring.rsp_prod_pvt++);
+@@ -561,7 +561,6 @@ static void __pvcalls_back_accept(struct work_struct *work)
+ 					sock);
+ 	if (!map) {
+ 		ret = -EFAULT;
+-		sock_release(sock);
+ 		goto out_error;
+ 	}
+ 
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index 9892dae178b75..316bef1891425 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -4951,7 +4951,11 @@ static void btrfs_destroy_delalloc_inodes(struct btrfs_root *root)
+ 		 */
+ 		inode = igrab(&btrfs_inode->vfs_inode);
+ 		if (inode) {
++			unsigned int nofs_flag;
++
++			nofs_flag = memalloc_nofs_save();
+ 			invalidate_inode_pages2(inode->i_mapping);
++			memalloc_nofs_restore(nofs_flag);
+ 			iput(inode);
+ 		}
+ 		spin_lock(&root->delalloc_lock);
+@@ -5057,7 +5061,12 @@ static void btrfs_cleanup_bg_io(struct btrfs_block_group *cache)
+ 
+ 	inode = cache->io_ctl.inode;
+ 	if (inode) {
++		unsigned int nofs_flag;
++
++		nofs_flag = memalloc_nofs_save();
+ 		invalidate_inode_pages2(inode->i_mapping);
++		memalloc_nofs_restore(nofs_flag);
++
+ 		BTRFS_I(inode)->generation = 0;
+ 		cache->io_ctl.inode = NULL;
+ 		iput(inode);
+diff --git a/fs/cifs/dfs.c b/fs/cifs/dfs.c
+index a93dbca1411b2..2f93bf8c3325a 100644
+--- a/fs/cifs/dfs.c
++++ b/fs/cifs/dfs.c
+@@ -303,7 +303,7 @@ int dfs_mount_share(struct cifs_mount_ctx *mnt_ctx, bool *isdfs)
+ 	if (!nodfs) {
+ 		rc = dfs_get_referral(mnt_ctx, ctx->UNC + 1, NULL, NULL);
+ 		if (rc) {
+-			if (rc != -ENOENT && rc != -EOPNOTSUPP)
++			if (rc != -ENOENT && rc != -EOPNOTSUPP && rc != -EIO)
+ 				goto out;
+ 			nodfs = true;
+ 		}
+diff --git a/fs/cifs/file.c b/fs/cifs/file.c
+index ba7f2e09d6c8e..df88b8c04d03d 100644
+--- a/fs/cifs/file.c
++++ b/fs/cifs/file.c
+@@ -3353,9 +3353,10 @@ static size_t cifs_limit_bvec_subset(const struct iov_iter *iter, size_t max_siz
+ 	while (n && ix < nbv) {
+ 		len = min3(n, bvecs[ix].bv_len - skip, max_size);
+ 		span += len;
++		max_size -= len;
+ 		nsegs++;
+ 		ix++;
+-		if (span >= max_size || nsegs >= max_segs)
++		if (max_size == 0 || nsegs >= max_segs)
+ 			break;
+ 		skip = 0;
+ 		n -= len;
+diff --git a/fs/cifs/fs_context.c b/fs/cifs/fs_context.c
+index ace11a1a7c8ab..1bda75609b642 100644
+--- a/fs/cifs/fs_context.c
++++ b/fs/cifs/fs_context.c
+@@ -904,6 +904,14 @@ static int smb3_fs_context_parse_param(struct fs_context *fc,
+ 			ctx->sfu_remap = false; /* disable SFU mapping */
+ 		}
+ 		break;
++	case Opt_mapchars:
++		if (result.negated)
++			ctx->sfu_remap = false;
++		else {
++			ctx->sfu_remap = true;
++			ctx->remap = false; /* disable SFM (mapposix) mapping */
++		}
++		break;
+ 	case Opt_user_xattr:
+ 		if (result.negated)
+ 			ctx->no_xattr = 1;
+diff --git a/fs/ocfs2/namei.c b/fs/ocfs2/namei.c
+index 9175dbc472011..17c52225b87d4 100644
+--- a/fs/ocfs2/namei.c
++++ b/fs/ocfs2/namei.c
+@@ -242,6 +242,7 @@ static int ocfs2_mknod(struct mnt_idmap *idmap,
+ 	int want_meta = 0;
+ 	int xattr_credits = 0;
+ 	struct ocfs2_security_xattr_info si = {
++		.name = NULL,
+ 		.enable = 1,
+ 	};
+ 	int did_quota_inode = 0;
+@@ -1805,6 +1806,7 @@ static int ocfs2_symlink(struct mnt_idmap *idmap,
+ 	int want_clusters = 0;
+ 	int xattr_credits = 0;
+ 	struct ocfs2_security_xattr_info si = {
++		.name = NULL,
+ 		.enable = 1,
+ 	};
+ 	int did_quota = 0, did_quota_inode = 0;
+diff --git a/fs/ocfs2/xattr.c b/fs/ocfs2/xattr.c
+index 389308efe854f..469ec45baee2b 100644
+--- a/fs/ocfs2/xattr.c
++++ b/fs/ocfs2/xattr.c
+@@ -7259,9 +7259,21 @@ static int ocfs2_xattr_security_set(const struct xattr_handler *handler,
+ static int ocfs2_initxattrs(struct inode *inode, const struct xattr *xattr_array,
+ 		     void *fs_info)
+ {
++	struct ocfs2_security_xattr_info *si = fs_info;
+ 	const struct xattr *xattr;
+ 	int err = 0;
+ 
++	if (si) {
++		si->value = kmemdup(xattr_array->value, xattr_array->value_len,
++				    GFP_KERNEL);
++		if (!si->value)
++			return -ENOMEM;
++
++		si->name = xattr_array->name;
++		si->value_len = xattr_array->value_len;
++		return 0;
++	}
++
+ 	for (xattr = xattr_array; xattr->name != NULL; xattr++) {
+ 		err = ocfs2_xattr_set(inode, OCFS2_XATTR_INDEX_SECURITY,
+ 				      xattr->name, xattr->value,
+@@ -7277,13 +7289,23 @@ int ocfs2_init_security_get(struct inode *inode,
+ 			    const struct qstr *qstr,
+ 			    struct ocfs2_security_xattr_info *si)
+ {
++	int ret;
++
+ 	/* check whether ocfs2 support feature xattr */
+ 	if (!ocfs2_supports_xattr(OCFS2_SB(dir->i_sb)))
+ 		return -EOPNOTSUPP;
+-	if (si)
+-		return security_old_inode_init_security(inode, dir, qstr,
+-							&si->name, &si->value,
+-							&si->value_len);
++	if (si) {
++		ret = security_inode_init_security(inode, dir, qstr,
++						   &ocfs2_initxattrs, si);
++		/*
++		 * security_inode_init_security() does not return -EOPNOTSUPP,
++		 * we have to check the xattr ourselves.
++		 */
++		if (!ret && !si->name)
++			si->enable = 0;
++
++		return ret;
++	}
+ 
+ 	return security_inode_init_security(inode, dir, qstr,
+ 					    &ocfs2_initxattrs, NULL);
+diff --git a/fs/xfs/libxfs/xfs_bmap.c b/fs/xfs/libxfs/xfs_bmap.c
+index 34de6e6898c48..3ce43068daa13 100644
+--- a/fs/xfs/libxfs/xfs_bmap.c
++++ b/fs/xfs/libxfs/xfs_bmap.c
+@@ -3505,7 +3505,6 @@ xfs_bmap_btalloc_at_eof(
+ 	 * original non-aligned state so the caller can proceed on allocation
+ 	 * failure as if this function was never called.
+ 	 */
+-	args->fsbno = ap->blkno;
+ 	args->alignment = 1;
+ 	return 0;
+ }
+diff --git a/include/drm/drm_managed.h b/include/drm/drm_managed.h
+index 359883942612e..ad08f834af408 100644
+--- a/include/drm/drm_managed.h
++++ b/include/drm/drm_managed.h
+@@ -105,6 +105,22 @@ char *drmm_kstrdup(struct drm_device *dev, const char *s, gfp_t gfp);
+ 
+ void drmm_kfree(struct drm_device *dev, void *data);
+ 
+-int drmm_mutex_init(struct drm_device *dev, struct mutex *lock);
++void __drmm_mutex_release(struct drm_device *dev, void *res);
++
++/**
++ * drmm_mutex_init - &drm_device-managed mutex_init()
++ * @dev: DRM device
++ * @lock: lock to be initialized
++ *
++ * Returns:
++ * 0 on success, or a negative errno code otherwise.
++ *
++ * This is a &drm_device-managed version of mutex_init(). The initialized
++ * lock is automatically destroyed on the final drm_dev_put().
++ */
++#define drmm_mutex_init(dev, lock) ({					     \
++	mutex_init(lock);						     \
++	drmm_add_action_or_reset(dev, __drmm_mutex_release, lock);	     \
++})									     \
+ 
+ #endif
+diff --git a/include/linux/arm_ffa.h b/include/linux/arm_ffa.h
+index c87aeecaa9b2a..583fe3b49a49c 100644
+--- a/include/linux/arm_ffa.h
++++ b/include/linux/arm_ffa.h
+@@ -96,6 +96,7 @@
+ 
+ /* FFA Bus/Device/Driver related */
+ struct ffa_device {
++	u32 id;
+ 	int vm_id;
+ 	bool mode_32bit;
+ 	uuid_t uuid;
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index c85916e9f7db5..9183982abd6e6 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -1059,29 +1059,29 @@ extern int send_sigurg(struct fown_struct *fown);
+  * sb->s_flags.  Note that these mirror the equivalent MS_* flags where
+  * represented in both.
+  */
+-#define SB_RDONLY	 1	/* Mount read-only */
+-#define SB_NOSUID	 2	/* Ignore suid and sgid bits */
+-#define SB_NODEV	 4	/* Disallow access to device special files */
+-#define SB_NOEXEC	 8	/* Disallow program execution */
+-#define SB_SYNCHRONOUS	16	/* Writes are synced at once */
+-#define SB_MANDLOCK	64	/* Allow mandatory locks on an FS */
+-#define SB_DIRSYNC	128	/* Directory modifications are synchronous */
+-#define SB_NOATIME	1024	/* Do not update access times. */
+-#define SB_NODIRATIME	2048	/* Do not update directory access times */
+-#define SB_SILENT	32768
+-#define SB_POSIXACL	(1<<16)	/* VFS does not apply the umask */
+-#define SB_INLINECRYPT	(1<<17)	/* Use blk-crypto for encrypted files */
+-#define SB_KERNMOUNT	(1<<22) /* this is a kern_mount call */
+-#define SB_I_VERSION	(1<<23) /* Update inode I_version field */
+-#define SB_LAZYTIME	(1<<25) /* Update the on-disk [acm]times lazily */
++#define SB_RDONLY       BIT(0)	/* Mount read-only */
++#define SB_NOSUID       BIT(1)	/* Ignore suid and sgid bits */
++#define SB_NODEV        BIT(2)	/* Disallow access to device special files */
++#define SB_NOEXEC       BIT(3)	/* Disallow program execution */
++#define SB_SYNCHRONOUS  BIT(4)	/* Writes are synced at once */
++#define SB_MANDLOCK     BIT(6)	/* Allow mandatory locks on an FS */
++#define SB_DIRSYNC      BIT(7)	/* Directory modifications are synchronous */
++#define SB_NOATIME      BIT(10)	/* Do not update access times. */
++#define SB_NODIRATIME   BIT(11)	/* Do not update directory access times */
++#define SB_SILENT       BIT(15)
++#define SB_POSIXACL     BIT(16)	/* VFS does not apply the umask */
++#define SB_INLINECRYPT  BIT(17)	/* Use blk-crypto for encrypted files */
++#define SB_KERNMOUNT    BIT(22)	/* this is a kern_mount call */
++#define SB_I_VERSION    BIT(23)	/* Update inode I_version field */
++#define SB_LAZYTIME     BIT(25)	/* Update the on-disk [acm]times lazily */
+ 
+ /* These sb flags are internal to the kernel */
+-#define SB_SUBMOUNT     (1<<26)
+-#define SB_FORCE    	(1<<27)
+-#define SB_NOSEC	(1<<28)
+-#define SB_BORN		(1<<29)
+-#define SB_ACTIVE	(1<<30)
+-#define SB_NOUSER	(1<<31)
++#define SB_SUBMOUNT     BIT(26)
++#define SB_FORCE        BIT(27)
++#define SB_NOSEC        BIT(28)
++#define SB_BORN         BIT(29)
++#define SB_ACTIVE       BIT(30)
++#define SB_NOUSER       BIT(31)
+ 
+ /* These flags relate to encoding and casefolding */
+ #define SB_ENC_STRICT_MODE_FL	(1 << 0)
+diff --git a/include/linux/if_team.h b/include/linux/if_team.h
+index fc985e5c739d4..8de6b6e678295 100644
+--- a/include/linux/if_team.h
++++ b/include/linux/if_team.h
+@@ -208,6 +208,7 @@ struct team {
+ 	bool queue_override_enabled;
+ 	struct list_head *qom_lists; /* array of queue override mapping lists */
+ 	bool port_mtu_change_allowed;
++	bool notifier_ctx;
+ 	struct {
+ 		unsigned int count;
+ 		unsigned int interval; /* in ms */
+diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
+index 1636d7f65630a..651f563f2c50e 100644
+--- a/include/linux/mlx5/mlx5_ifc.h
++++ b/include/linux/mlx5/mlx5_ifc.h
+@@ -1679,7 +1679,9 @@ struct mlx5_ifc_cmd_hca_cap_bits {
+ 	u8         rc[0x1];
+ 
+ 	u8         uar_4k[0x1];
+-	u8         reserved_at_241[0x9];
++	u8         reserved_at_241[0x7];
++	u8         fl_rc_qp_when_roce_disabled[0x1];
++	u8         regexp_params[0x1];
+ 	u8         uar_sz[0x6];
+ 	u8         port_selection_cap[0x1];
+ 	u8         reserved_at_248[0x1];
+diff --git a/include/linux/mm.h b/include/linux/mm.h
+index 1f79667824eb6..ced82b9c18e57 100644
+--- a/include/linux/mm.h
++++ b/include/linux/mm.h
+@@ -3425,6 +3425,22 @@ void vmemmap_populate_print_last(void);
+ void vmemmap_free(unsigned long start, unsigned long end,
+ 		struct vmem_altmap *altmap);
+ #endif
++
++#ifdef CONFIG_ARCH_WANT_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
++static inline bool vmemmap_can_optimize(struct vmem_altmap *altmap,
++					   struct dev_pagemap *pgmap)
++{
++	return is_power_of_2(sizeof(struct page)) &&
++		pgmap && (pgmap_vmemmap_nr(pgmap) > 1) && !altmap;
++}
++#else
++static inline bool vmemmap_can_optimize(struct vmem_altmap *altmap,
++					   struct dev_pagemap *pgmap)
++{
++	return false;
++}
++#endif
++
+ void register_page_bootmem_memmap(unsigned long section_nr, struct page *map,
+ 				  unsigned long nr_pages);
+ 
+diff --git a/include/linux/msi.h b/include/linux/msi.h
+index cdb14a1ef2681..a50ea79522f85 100644
+--- a/include/linux/msi.h
++++ b/include/linux/msi.h
+@@ -383,6 +383,13 @@ int arch_setup_msi_irq(struct pci_dev *dev, struct msi_desc *desc);
+ void arch_teardown_msi_irq(unsigned int irq);
+ int arch_setup_msi_irqs(struct pci_dev *dev, int nvec, int type);
+ void arch_teardown_msi_irqs(struct pci_dev *dev);
++#endif /* CONFIG_PCI_MSI_ARCH_FALLBACKS */
++
++/*
++ * Xen uses non-default msi_domain_ops and hence needs a way to populate sysfs
++ * entries of MSI IRQs.
++ */
++#if defined(CONFIG_PCI_XEN) || defined(CONFIG_PCI_MSI_ARCH_FALLBACKS)
+ #ifdef CONFIG_SYSFS
+ int msi_device_populate_sysfs(struct device *dev);
+ void msi_device_destroy_sysfs(struct device *dev);
+@@ -390,7 +397,7 @@ void msi_device_destroy_sysfs(struct device *dev);
+ static inline int msi_device_populate_sysfs(struct device *dev) { return 0; }
+ static inline void msi_device_destroy_sysfs(struct device *dev) { }
+ #endif /* !CONFIG_SYSFS */
+-#endif /* CONFIG_PCI_MSI_ARCH_FALLBACKS */
++#endif /* CONFIG_PCI_XEN || CONFIG_PCI_MSI_ARCH_FALLBACKS */
+ 
+ /*
+  * The restore hook is still available even for fully irq domain based
+diff --git a/include/linux/power/bq27xxx_battery.h b/include/linux/power/bq27xxx_battery.h
+index a1aa68141d0b5..7c8d65414a70a 100644
+--- a/include/linux/power/bq27xxx_battery.h
++++ b/include/linux/power/bq27xxx_battery.h
+@@ -2,6 +2,8 @@
+ #ifndef __LINUX_BQ27X00_BATTERY_H__
+ #define __LINUX_BQ27X00_BATTERY_H__
+ 
++#include <linux/power_supply.h>
++
+ enum bq27xxx_chip {
+ 	BQ27000 = 1, /* bq27000, bq27200 */
+ 	BQ27010, /* bq27010, bq27210 */
+@@ -68,7 +70,9 @@ struct bq27xxx_device_info {
+ 	struct bq27xxx_access_methods bus;
+ 	struct bq27xxx_reg_cache cache;
+ 	int charge_design_full;
++	bool removed;
+ 	unsigned long last_update;
++	union power_supply_propval last_status;
+ 	struct delayed_work work;
+ 	struct power_supply *bat;
+ 	struct list_head list;
+diff --git a/include/linux/tpm.h b/include/linux/tpm.h
+index 4dc97b9f65fb0..6a1e8f1572551 100644
+--- a/include/linux/tpm.h
++++ b/include/linux/tpm.h
+@@ -274,13 +274,15 @@ enum tpm2_cc_attrs {
+ #define TPM_VID_ATML     0x1114
+ 
+ enum tpm_chip_flags {
+-	TPM_CHIP_FLAG_TPM2		= BIT(1),
+-	TPM_CHIP_FLAG_IRQ		= BIT(2),
+-	TPM_CHIP_FLAG_VIRTUAL		= BIT(3),
+-	TPM_CHIP_FLAG_HAVE_TIMEOUTS	= BIT(4),
+-	TPM_CHIP_FLAG_ALWAYS_POWERED	= BIT(5),
++	TPM_CHIP_FLAG_BOOTSTRAPPED		= BIT(0),
++	TPM_CHIP_FLAG_TPM2			= BIT(1),
++	TPM_CHIP_FLAG_IRQ			= BIT(2),
++	TPM_CHIP_FLAG_VIRTUAL			= BIT(3),
++	TPM_CHIP_FLAG_HAVE_TIMEOUTS		= BIT(4),
++	TPM_CHIP_FLAG_ALWAYS_POWERED		= BIT(5),
+ 	TPM_CHIP_FLAG_FIRMWARE_POWER_MANAGED	= BIT(6),
+-	TPM_CHIP_FLAG_FIRMWARE_UPGRADE	= BIT(7),
++	TPM_CHIP_FLAG_FIRMWARE_UPGRADE		= BIT(7),
++	TPM_CHIP_FLAG_SUSPENDED			= BIT(8),
+ };
+ 
+ #define to_tpm_chip(d) container_of(d, struct tpm_chip, dev)
+diff --git a/include/linux/usb.h b/include/linux/usb.h
+index 9642ee02d713b..3af83b5ddc559 100644
+--- a/include/linux/usb.h
++++ b/include/linux/usb.h
+@@ -291,6 +291,11 @@ void usb_put_intf(struct usb_interface *intf);
+ #define USB_MAXINTERFACES	32
+ #define USB_MAXIADS		(USB_MAXINTERFACES/2)
+ 
++bool usb_check_bulk_endpoints(
++		const struct usb_interface *intf, const u8 *ep_addrs);
++bool usb_check_int_endpoints(
++		const struct usb_interface *intf, const u8 *ep_addrs);
++
+ /*
+  * USB Resume Timer: Every Host controller driver should drive the resume
+  * signalling on the bus for the amount of time defined by this macro.
+diff --git a/include/net/bonding.h b/include/net/bonding.h
+index 2d034e07b796c..0913f07e1dfcd 100644
+--- a/include/net/bonding.h
++++ b/include/net/bonding.h
+@@ -221,6 +221,7 @@ struct bonding {
+ 	struct   bond_up_slave __rcu *usable_slaves;
+ 	struct   bond_up_slave __rcu *all_slaves;
+ 	bool     force_primary;
++	bool     notifier_ctx;
+ 	s32      slave_cnt; /* never change this value outside the attach/detach wrappers */
+ 	int     (*recv_probe)(const struct sk_buff *, struct bonding *,
+ 			      struct slave *);
+diff --git a/include/net/ip.h b/include/net/ip.h
+index c3fffaa92d6e0..acec504c469a0 100644
+--- a/include/net/ip.h
++++ b/include/net/ip.h
+@@ -76,6 +76,7 @@ struct ipcm_cookie {
+ 	__be32			addr;
+ 	int			oif;
+ 	struct ip_options_rcu	*opt;
++	__u8			protocol;
+ 	__u8			ttl;
+ 	__s16			tos;
+ 	char			priority;
+@@ -96,6 +97,7 @@ static inline void ipcm_init_sk(struct ipcm_cookie *ipcm,
+ 	ipcm->sockc.tsflags = inet->sk.sk_tsflags;
+ 	ipcm->oif = READ_ONCE(inet->sk.sk_bound_dev_if);
+ 	ipcm->addr = inet->inet_saddr;
++	ipcm->protocol = inet->inet_num;
+ }
+ 
+ #define IPCB(skb) ((struct inet_skb_parm*)((skb)->cb))
+diff --git a/include/net/page_pool.h b/include/net/page_pool.h
+index ddfa0b3286777..41679d348e810 100644
+--- a/include/net/page_pool.h
++++ b/include/net/page_pool.h
+@@ -393,22 +393,4 @@ static inline void page_pool_nid_changed(struct page_pool *pool, int new_nid)
+ 		page_pool_update_nid(pool, new_nid);
+ }
+ 
+-static inline void page_pool_ring_lock(struct page_pool *pool)
+-	__acquires(&pool->ring.producer_lock)
+-{
+-	if (in_softirq())
+-		spin_lock(&pool->ring.producer_lock);
+-	else
+-		spin_lock_bh(&pool->ring.producer_lock);
+-}
+-
+-static inline void page_pool_ring_unlock(struct page_pool *pool)
+-	__releases(&pool->ring.producer_lock)
+-{
+-	if (in_softirq())
+-		spin_unlock(&pool->ring.producer_lock);
+-	else
+-		spin_unlock_bh(&pool->ring.producer_lock);
+-}
+-
+ #endif /* _NET_PAGE_POOL_H */
+diff --git a/include/uapi/linux/in.h b/include/uapi/linux/in.h
+index 4b7f2df66b995..e682ab628dfa6 100644
+--- a/include/uapi/linux/in.h
++++ b/include/uapi/linux/in.h
+@@ -163,6 +163,7 @@ struct in_addr {
+ #define IP_MULTICAST_ALL		49
+ #define IP_UNICAST_IF			50
+ #define IP_LOCAL_PORT_RANGE		51
++#define IP_PROTOCOL			52
+ 
+ #define MCAST_EXCLUDE	0
+ #define MCAST_INCLUDE	1
+diff --git a/include/uapi/sound/skl-tplg-interface.h b/include/uapi/sound/skl-tplg-interface.h
+index f29899b179a62..4bf9c4f9add8a 100644
+--- a/include/uapi/sound/skl-tplg-interface.h
++++ b/include/uapi/sound/skl-tplg-interface.h
+@@ -66,7 +66,8 @@ enum skl_ch_cfg {
+ 	SKL_CH_CFG_DUAL_MONO = 9,
+ 	SKL_CH_CFG_I2S_DUAL_STEREO_0 = 10,
+ 	SKL_CH_CFG_I2S_DUAL_STEREO_1 = 11,
+-	SKL_CH_CFG_4_CHANNEL = 12,
++	SKL_CH_CFG_7_1 = 12,
++	SKL_CH_CFG_4_CHANNEL = SKL_CH_CFG_7_1,
+ 	SKL_CH_CFG_INVALID
+ };
+ 
+diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
+index 90852e0e64f26..5193198773753 100644
+--- a/kernel/bpf/hashtab.c
++++ b/kernel/bpf/hashtab.c
+@@ -1197,7 +1197,7 @@ static long htab_lru_map_update_elem(struct bpf_map *map, void *key, void *value
+ 
+ 	ret = htab_lock_bucket(htab, b, hash, &flags);
+ 	if (ret)
+-		return ret;
++		goto err_lock_bucket;
+ 
+ 	l_old = lookup_elem_raw(head, hash, key, key_size);
+ 
+@@ -1218,6 +1218,7 @@ static long htab_lru_map_update_elem(struct bpf_map *map, void *key, void *value
+ err:
+ 	htab_unlock_bucket(htab, b, hash, flags);
+ 
++err_lock_bucket:
+ 	if (ret)
+ 		htab_lru_push_free(htab, l_new);
+ 	else if (l_old)
+@@ -1320,7 +1321,7 @@ static long __htab_lru_percpu_map_update_elem(struct bpf_map *map, void *key,
+ 
+ 	ret = htab_lock_bucket(htab, b, hash, &flags);
+ 	if (ret)
+-		return ret;
++		goto err_lock_bucket;
+ 
+ 	l_old = lookup_elem_raw(head, hash, key, key_size);
+ 
+@@ -1343,6 +1344,7 @@ static long __htab_lru_percpu_map_update_elem(struct bpf_map *map, void *key,
+ 	ret = 0;
+ err:
+ 	htab_unlock_bucket(htab, b, hash, flags);
++err_lock_bucket:
+ 	if (l_new)
+ 		bpf_lru_push_free(&htab->lru, &l_new->lru_node);
+ 	return ret;
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index f4b267082cbf9..8ed149cc9f648 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -16017,7 +16017,7 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env)
+ 					insn_buf[cnt++] = BPF_ALU64_IMM(BPF_RSH,
+ 									insn->dst_reg,
+ 									shift);
+-				insn_buf[cnt++] = BPF_ALU64_IMM(BPF_AND, insn->dst_reg,
++				insn_buf[cnt++] = BPF_ALU32_IMM(BPF_AND, insn->dst_reg,
+ 								(1ULL << size * 8) - 1);
+ 			}
+ 		}
+diff --git a/kernel/irq/msi.c b/kernel/irq/msi.c
+index 7a97bcb086bf3..b4c31a5c11473 100644
+--- a/kernel/irq/msi.c
++++ b/kernel/irq/msi.c
+@@ -542,7 +542,7 @@ fail:
+ 	return ret;
+ }
+ 
+-#ifdef CONFIG_PCI_MSI_ARCH_FALLBACKS
++#if defined(CONFIG_PCI_MSI_ARCH_FALLBACKS) || defined(CONFIG_PCI_XEN)
+ /**
+  * msi_device_populate_sysfs - Populate msi_irqs sysfs entries for a device
+  * @dev:	The device (PCI, platform etc) which will get sysfs entries
+@@ -574,7 +574,7 @@ void msi_device_destroy_sysfs(struct device *dev)
+ 	msi_for_each_desc(desc, dev, MSI_DESC_ALL)
+ 		msi_sysfs_remove_desc(dev, desc);
+ }
+-#endif /* CONFIG_PCI_MSI_ARCH_FALLBACK */
++#endif /* CONFIG_PCI_MSI_ARCH_FALLBACK || CONFIG_PCI_XEN */
+ #else /* CONFIG_SYSFS */
+ static inline int msi_sysfs_create_group(struct device *dev) { return 0; }
+ static inline int msi_sysfs_populate_desc(struct device *dev, struct msi_desc *desc) { return 0; }
+diff --git a/lib/debugobjects.c b/lib/debugobjects.c
+index 003edc5ebd673..986adca357b4b 100644
+--- a/lib/debugobjects.c
++++ b/lib/debugobjects.c
+@@ -126,7 +126,7 @@ static const char *obj_states[ODEBUG_STATE_MAX] = {
+ 
+ static void fill_pool(void)
+ {
+-	gfp_t gfp = GFP_ATOMIC | __GFP_NORETRY | __GFP_NOWARN;
++	gfp_t gfp = __GFP_HIGH | __GFP_NOWARN;
+ 	struct debug_obj *obj;
+ 	unsigned long flags;
+ 
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index 8e39705c7bdc2..afcfb2a94e6e3 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -6905,10 +6905,12 @@ static void __ref __init_zone_device_page(struct page *page, unsigned long pfn,
+  * of an altmap. See vmemmap_populate_compound_pages().
+  */
+ static inline unsigned long compound_nr_pages(struct vmem_altmap *altmap,
+-					      unsigned long nr_pages)
++					      struct dev_pagemap *pgmap)
+ {
+-	return is_power_of_2(sizeof(struct page)) &&
+-		!altmap ? 2 * (PAGE_SIZE / sizeof(struct page)) : nr_pages;
++	if (!vmemmap_can_optimize(altmap, pgmap))
++		return pgmap_vmemmap_nr(pgmap);
++
++	return 2 * (PAGE_SIZE / sizeof(struct page));
+ }
+ 
+ static void __ref memmap_init_compound(struct page *head,
+@@ -6973,7 +6975,7 @@ void __ref memmap_init_zone_device(struct zone *zone,
+ 			continue;
+ 
+ 		memmap_init_compound(page, pfn, zone_idx, nid, pgmap,
+-				     compound_nr_pages(altmap, pfns_per_compound));
++				     compound_nr_pages(altmap, pgmap));
+ 	}
+ 
+ 	pr_info("%s initialised %lu pages in %ums\n", __func__,
+diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
+index c5398a5960d05..10d73a0dfcec7 100644
+--- a/mm/sparse-vmemmap.c
++++ b/mm/sparse-vmemmap.c
+@@ -458,8 +458,7 @@ struct page * __meminit __populate_section_memmap(unsigned long pfn,
+ 		!IS_ALIGNED(nr_pages, PAGES_PER_SUBSECTION)))
+ 		return NULL;
+ 
+-	if (is_power_of_2(sizeof(struct page)) &&
+-	    pgmap && pgmap_vmemmap_nr(pgmap) > 1 && !altmap)
++	if (vmemmap_can_optimize(altmap, pgmap))
+ 		r = vmemmap_populate_compound_pages(pfn, start, end, nid, pgmap);
+ 	else
+ 		r = vmemmap_populate(start, end, nid, altmap);
+diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
+index 3aed46ab7e6cb..0d451b61573cb 100644
+--- a/mm/zsmalloc.c
++++ b/mm/zsmalloc.c
+@@ -1350,31 +1350,6 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
+ 	obj_to_location(obj, &page, &obj_idx);
+ 	zspage = get_zspage(page);
+ 
+-#ifdef CONFIG_ZPOOL
+-	/*
+-	 * Move the zspage to front of pool's LRU.
+-	 *
+-	 * Note that this is swap-specific, so by definition there are no ongoing
+-	 * accesses to the memory while the page is swapped out that would make
+-	 * it "hot". A new entry is hot, then ages to the tail until it gets either
+-	 * written back or swaps back in.
+-	 *
+-	 * Furthermore, map is also called during writeback. We must not put an
+-	 * isolated page on the LRU mid-reclaim.
+-	 *
+-	 * As a result, only update the LRU when the page is mapped for write
+-	 * when it's first instantiated.
+-	 *
+-	 * This is a deviation from the other backends, which perform this update
+-	 * in the allocation function (zbud_alloc, z3fold_alloc).
+-	 */
+-	if (mm == ZS_MM_WO) {
+-		if (!list_empty(&zspage->lru))
+-			list_del(&zspage->lru);
+-		list_add(&zspage->lru, &pool->lru);
+-	}
+-#endif
+-
+ 	/*
+ 	 * migration cannot move any zpages in this zspage. Here, pool->lock
+ 	 * is too heavy since callers would take some time until they calls
+@@ -1544,9 +1519,8 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
+ 		fix_fullness_group(class, zspage);
+ 		record_obj(handle, obj);
+ 		class_stat_inc(class, OBJ_USED, 1);
+-		spin_unlock(&pool->lock);
+ 
+-		return handle;
++		goto out;
+ 	}
+ 
+ 	spin_unlock(&pool->lock);
+@@ -1570,6 +1544,14 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
+ 
+ 	/* We completely set up zspage so mark them as movable */
+ 	SetZsPageMovable(pool, zspage);
++out:
++#ifdef CONFIG_ZPOOL
++	/* Add/move zspage to beginning of LRU */
++	if (!list_empty(&zspage->lru))
++		list_del(&zspage->lru);
++	list_add(&zspage->lru, &pool->lru);
++#endif
++
+ 	spin_unlock(&pool->lock);
+ 
+ 	return handle;
+diff --git a/net/core/page_pool.c b/net/core/page_pool.c
+index 193c187998650..2396c99bedeaa 100644
+--- a/net/core/page_pool.c
++++ b/net/core/page_pool.c
+@@ -133,6 +133,29 @@ EXPORT_SYMBOL(page_pool_ethtool_stats_get);
+ #define recycle_stat_add(pool, __stat, val)
+ #endif
+ 
++static bool page_pool_producer_lock(struct page_pool *pool)
++	__acquires(&pool->ring.producer_lock)
++{
++	bool in_softirq = in_softirq();
++
++	if (in_softirq)
++		spin_lock(&pool->ring.producer_lock);
++	else
++		spin_lock_bh(&pool->ring.producer_lock);
++
++	return in_softirq;
++}
++
++static void page_pool_producer_unlock(struct page_pool *pool,
++				      bool in_softirq)
++	__releases(&pool->ring.producer_lock)
++{
++	if (in_softirq)
++		spin_unlock(&pool->ring.producer_lock);
++	else
++		spin_unlock_bh(&pool->ring.producer_lock);
++}
++
+ static int page_pool_init(struct page_pool *pool,
+ 			  const struct page_pool_params *params)
+ {
+@@ -615,6 +638,7 @@ void page_pool_put_page_bulk(struct page_pool *pool, void **data,
+ 			     int count)
+ {
+ 	int i, bulk_len = 0;
++	bool in_softirq;
+ 
+ 	for (i = 0; i < count; i++) {
+ 		struct page *page = virt_to_head_page(data[i]);
+@@ -633,7 +657,7 @@ void page_pool_put_page_bulk(struct page_pool *pool, void **data,
+ 		return;
+ 
+ 	/* Bulk producer into ptr_ring page_pool cache */
+-	page_pool_ring_lock(pool);
++	in_softirq = page_pool_producer_lock(pool);
+ 	for (i = 0; i < bulk_len; i++) {
+ 		if (__ptr_ring_produce(&pool->ring, data[i])) {
+ 			/* ring full */
+@@ -642,7 +666,7 @@ void page_pool_put_page_bulk(struct page_pool *pool, void **data,
+ 		}
+ 	}
+ 	recycle_stat_add(pool, ring, i);
+-	page_pool_ring_unlock(pool);
++	page_pool_producer_unlock(pool, in_softirq);
+ 
+ 	/* Hopefully all pages was return into ptr_ring */
+ 	if (likely(i == bulk_len))
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index afec5e2c21ac0..a39a386f2ae0d 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -5171,8 +5171,10 @@ void __skb_tstamp_tx(struct sk_buff *orig_skb,
+ 	} else {
+ 		skb = skb_clone(orig_skb, GFP_ATOMIC);
+ 
+-		if (skb_orphan_frags_rx(skb, GFP_ATOMIC))
++		if (skb_orphan_frags_rx(skb, GFP_ATOMIC)) {
++			kfree_skb(skb);
+ 			return;
++		}
+ 	}
+ 	if (!skb)
+ 		return;
+diff --git a/net/ipv4/ip_sockglue.c b/net/ipv4/ip_sockglue.c
+index b511ff0adc0aa..8e97d8d4cc9d9 100644
+--- a/net/ipv4/ip_sockglue.c
++++ b/net/ipv4/ip_sockglue.c
+@@ -317,7 +317,14 @@ int ip_cmsg_send(struct sock *sk, struct msghdr *msg, struct ipcm_cookie *ipc,
+ 			ipc->tos = val;
+ 			ipc->priority = rt_tos2priority(ipc->tos);
+ 			break;
+-
++		case IP_PROTOCOL:
++			if (cmsg->cmsg_len != CMSG_LEN(sizeof(int)))
++				return -EINVAL;
++			val = *(int *)CMSG_DATA(cmsg);
++			if (val < 1 || val > 255)
++				return -EINVAL;
++			ipc->protocol = val;
++			break;
+ 		default:
+ 			return -EINVAL;
+ 		}
+@@ -1761,6 +1768,9 @@ int do_ip_getsockopt(struct sock *sk, int level, int optname,
+ 	case IP_LOCAL_PORT_RANGE:
+ 		val = inet->local_port_range.hi << 16 | inet->local_port_range.lo;
+ 		break;
++	case IP_PROTOCOL:
++		val = inet_sk(sk)->inet_num;
++		break;
+ 	default:
+ 		sockopt_release_sock(sk);
+ 		return -ENOPROTOOPT;
+diff --git a/net/ipv4/raw.c b/net/ipv4/raw.c
+index 8088a5011e7df..066c696dc5446 100644
+--- a/net/ipv4/raw.c
++++ b/net/ipv4/raw.c
+@@ -532,6 +532,9 @@ static int raw_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ 	}
+ 
+ 	ipcm_init_sk(&ipc, inet);
++	/* Keep backward compat */
++	if (hdrincl)
++		ipc.protocol = IPPROTO_RAW;
+ 
+ 	if (msg->msg_controllen) {
+ 		err = ip_cmsg_send(sk, msg, &ipc, false);
+@@ -599,7 +602,7 @@ static int raw_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ 
+ 	flowi4_init_output(&fl4, ipc.oif, ipc.sockc.mark, tos,
+ 			   RT_SCOPE_UNIVERSE,
+-			   hdrincl ? IPPROTO_RAW : sk->sk_protocol,
++			   hdrincl ? ipc.protocol : sk->sk_protocol,
+ 			   inet_sk_flowi_flags(sk) |
+ 			    (hdrincl ? FLOWI_FLAG_KNOWN_NH : 0),
+ 			   daddr, saddr, 0, 0, sk->sk_uid);
+diff --git a/net/ipv4/udplite.c b/net/ipv4/udplite.c
+index e0c9cc39b81e3..56d94d23b9e0f 100644
+--- a/net/ipv4/udplite.c
++++ b/net/ipv4/udplite.c
+@@ -64,6 +64,8 @@ struct proto 	udplite_prot = {
+ 	.per_cpu_fw_alloc  = &udp_memory_per_cpu_fw_alloc,
+ 
+ 	.sysctl_mem	   = sysctl_udp_mem,
++	.sysctl_wmem_offset = offsetof(struct net, ipv4.sysctl_udp_wmem_min),
++	.sysctl_rmem_offset = offsetof(struct net, ipv4.sysctl_udp_rmem_min),
+ 	.obj_size	   = sizeof(struct udp_sock),
+ 	.h.udp_table	   = &udplite_table,
+ };
+diff --git a/net/ipv6/exthdrs_core.c b/net/ipv6/exthdrs_core.c
+index da46c42846765..49e31e4ae7b7f 100644
+--- a/net/ipv6/exthdrs_core.c
++++ b/net/ipv6/exthdrs_core.c
+@@ -143,6 +143,8 @@ int ipv6_find_tlv(const struct sk_buff *skb, int offset, int type)
+ 			optlen = 1;
+ 			break;
+ 		default:
++			if (len < 2)
++				goto bad;
+ 			optlen = nh[offset + 1] + 2;
+ 			if (optlen > len)
+ 				goto bad;
+diff --git a/net/ipv6/raw.c b/net/ipv6/raw.c
+index 9986e2d15c8be..a60f2c5d0f4c9 100644
+--- a/net/ipv6/raw.c
++++ b/net/ipv6/raw.c
+@@ -793,7 +793,8 @@ static int rawv6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ 
+ 		if (!proto)
+ 			proto = inet->inet_num;
+-		else if (proto != inet->inet_num)
++		else if (proto != inet->inet_num &&
++			 inet->inet_num != IPPROTO_RAW)
+ 			return -EINVAL;
+ 
+ 		if (proto > 255)
+diff --git a/net/ipv6/udplite.c b/net/ipv6/udplite.c
+index 67eaf3ca14cea..3bab0cc136977 100644
+--- a/net/ipv6/udplite.c
++++ b/net/ipv6/udplite.c
+@@ -60,6 +60,8 @@ struct proto udplitev6_prot = {
+ 	.per_cpu_fw_alloc  = &udp_memory_per_cpu_fw_alloc,
+ 
+ 	.sysctl_mem	   = sysctl_udp_mem,
++	.sysctl_wmem_offset = offsetof(struct net, ipv4.sysctl_udp_wmem_min),
++	.sysctl_rmem_offset = offsetof(struct net, ipv4.sysctl_udp_rmem_min),
+ 	.obj_size	   = sizeof(struct udp6_sock),
+ 	.h.udp_table	   = &udplite_table,
+ };
+diff --git a/net/sctp/transport.c b/net/sctp/transport.c
+index 2f66a20065174..2abe45af98e7c 100644
+--- a/net/sctp/transport.c
++++ b/net/sctp/transport.c
+@@ -324,9 +324,12 @@ bool sctp_transport_pl_recv(struct sctp_transport *t)
+ 		t->pl.probe_size += SCTP_PL_BIG_STEP;
+ 	} else if (t->pl.state == SCTP_PL_SEARCH) {
+ 		if (!t->pl.probe_high) {
+-			t->pl.probe_size = min(t->pl.probe_size + SCTP_PL_BIG_STEP,
+-					       SCTP_MAX_PLPMTU);
+-			return false;
++			if (t->pl.probe_size < SCTP_MAX_PLPMTU) {
++				t->pl.probe_size = min(t->pl.probe_size + SCTP_PL_BIG_STEP,
++						       SCTP_MAX_PLPMTU);
++				return false;
++			}
++			t->pl.probe_high = SCTP_MAX_PLPMTU;
+ 		}
+ 		t->pl.probe_size += SCTP_PL_MIN_STEP;
+ 		if (t->pl.probe_size >= t->pl.probe_high) {
+@@ -341,7 +344,7 @@ bool sctp_transport_pl_recv(struct sctp_transport *t)
+ 	} else if (t->pl.state == SCTP_PL_COMPLETE) {
+ 		/* Raise probe_size again after 30 * interval in Search Complete */
+ 		t->pl.state = SCTP_PL_SEARCH; /* Search Complete -> Search */
+-		t->pl.probe_size += SCTP_PL_MIN_STEP;
++		t->pl.probe_size = min(t->pl.probe_size + SCTP_PL_MIN_STEP, SCTP_MAX_PLPMTU);
+ 	}
+ 
+ 	return t->pl.state == SCTP_PL_COMPLETE;
+diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
+index 50c38b624f772..538e9c6ec8c98 100644
+--- a/net/smc/af_smc.c
++++ b/net/smc/af_smc.c
+@@ -2000,8 +2000,10 @@ static int smc_listen_rdma_init(struct smc_sock *new_smc,
+ 		return rc;
+ 
+ 	/* create send buffer and rmb */
+-	if (smc_buf_create(new_smc, false))
++	if (smc_buf_create(new_smc, false)) {
++		smc_conn_abort(new_smc, ini->first_contact_local);
+ 		return SMC_CLC_DECL_MEM;
++	}
+ 
+ 	return 0;
+ }
+@@ -2217,8 +2219,11 @@ static void smc_find_rdma_v2_device_serv(struct smc_sock *new_smc,
+ 	smcr_version = ini->smcr_version;
+ 	ini->smcr_version = SMC_V2;
+ 	rc = smc_listen_rdma_init(new_smc, ini);
+-	if (!rc)
++	if (!rc) {
+ 		rc = smc_listen_rdma_reg(new_smc, ini->first_contact_local);
++		if (rc)
++			smc_conn_abort(new_smc, ini->first_contact_local);
++	}
+ 	if (!rc)
+ 		return;
+ 	ini->smcr_version = smcr_version;
+diff --git a/net/smc/smc_core.c b/net/smc/smc_core.c
+index 454356771cda5..3f465faf2b681 100644
+--- a/net/smc/smc_core.c
++++ b/net/smc/smc_core.c
+@@ -127,6 +127,7 @@ static int smcr_lgr_conn_assign_link(struct smc_connection *conn, bool first)
+ 	int i, j;
+ 
+ 	/* do link balancing */
++	conn->lnk = NULL;	/* reset conn->lnk first */
+ 	for (i = 0; i < SMC_LINKS_PER_LGR_MAX; i++) {
+ 		struct smc_link *lnk = &conn->lgr->lnk[i];
+ 
+diff --git a/net/sunrpc/sched.c b/net/sunrpc/sched.c
+index c8321de341eea..6debf4fd42d4e 100644
+--- a/net/sunrpc/sched.c
++++ b/net/sunrpc/sched.c
+@@ -927,11 +927,10 @@ static void __rpc_execute(struct rpc_task *task)
+ 		 */
+ 		do_action = task->tk_action;
+ 		/* Tasks with an RPC error status should exit */
+-		if (do_action != rpc_exit_task &&
++		if (do_action && do_action != rpc_exit_task &&
+ 		    (status = READ_ONCE(task->tk_rpc_status)) != 0) {
+ 			task->tk_status = status;
+-			if (do_action != NULL)
+-				do_action = rpc_exit_task;
++			do_action = rpc_exit_task;
+ 		}
+ 		/* Callbacks override all actions */
+ 		if (task->tk_callback) {
+diff --git a/sound/hda/hdac_device.c b/sound/hda/hdac_device.c
+index accc9d279ce5a..6c043fbd606f1 100644
+--- a/sound/hda/hdac_device.c
++++ b/sound/hda/hdac_device.c
+@@ -611,7 +611,7 @@ EXPORT_SYMBOL_GPL(snd_hdac_power_up_pm);
+ int snd_hdac_keep_power_up(struct hdac_device *codec)
+ {
+ 	if (!atomic_inc_not_zero(&codec->in_pm)) {
+-		int ret = pm_runtime_get_if_in_use(&codec->dev);
++		int ret = pm_runtime_get_if_active(&codec->dev, true);
+ 		if (!ret)
+ 			return -1;
+ 		if (ret < 0)
+diff --git a/sound/pci/hda/patch_ca0132.c b/sound/pci/hda/patch_ca0132.c
+index 099722ebaed83..748a3c40966e9 100644
+--- a/sound/pci/hda/patch_ca0132.c
++++ b/sound/pci/hda/patch_ca0132.c
+@@ -1306,6 +1306,7 @@ static const struct snd_pci_quirk ca0132_quirks[] = {
+ 	SND_PCI_QUIRK(0x1458, 0xA026, "Gigabyte G1.Sniper Z97", QUIRK_R3DI),
+ 	SND_PCI_QUIRK(0x1458, 0xA036, "Gigabyte GA-Z170X-Gaming 7", QUIRK_R3DI),
+ 	SND_PCI_QUIRK(0x3842, 0x1038, "EVGA X99 Classified", QUIRK_R3DI),
++	SND_PCI_QUIRK(0x3842, 0x104b, "EVGA X299 Dark", QUIRK_R3DI),
+ 	SND_PCI_QUIRK(0x3842, 0x1055, "EVGA Z390 DARK", QUIRK_R3DI),
+ 	SND_PCI_QUIRK(0x1102, 0x0013, "Recon3D", QUIRK_R3D),
+ 	SND_PCI_QUIRK(0x1102, 0x0018, "Recon3D", QUIRK_R3D),
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index c757607177368..379f216158ab4 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -11699,6 +11699,8 @@ static const struct snd_pci_quirk alc662_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x32cb, "Lenovo ThinkCentre M70", ALC897_FIXUP_HEADSET_MIC_PIN),
+ 	SND_PCI_QUIRK(0x17aa, 0x32cf, "Lenovo ThinkCentre M950", ALC897_FIXUP_HEADSET_MIC_PIN),
+ 	SND_PCI_QUIRK(0x17aa, 0x32f7, "Lenovo ThinkCentre M90", ALC897_FIXUP_HEADSET_MIC_PIN),
++	SND_PCI_QUIRK(0x17aa, 0x3321, "Lenovo ThinkCentre M70 Gen4", ALC897_FIXUP_HEADSET_MIC_PIN),
++	SND_PCI_QUIRK(0x17aa, 0x331b, "Lenovo ThinkCentre M90 Gen4", ALC897_FIXUP_HEADSET_MIC_PIN),
+ 	SND_PCI_QUIRK(0x17aa, 0x3742, "Lenovo TianYi510Pro-14IOB", ALC897_FIXUP_HEADSET_MIC_PIN2),
+ 	SND_PCI_QUIRK(0x17aa, 0x38af, "Lenovo Ideapad Y550P", ALC662_FIXUP_IDEAPAD),
+ 	SND_PCI_QUIRK(0x17aa, 0x3a0d, "Lenovo Ideapad Y550", ALC662_FIXUP_IDEAPAD),
+diff --git a/sound/soc/codecs/lpass-tx-macro.c b/sound/soc/codecs/lpass-tx-macro.c
+index 589c490a8c487..88d8fa58ec1eb 100644
+--- a/sound/soc/codecs/lpass-tx-macro.c
++++ b/sound/soc/codecs/lpass-tx-macro.c
+@@ -746,6 +746,8 @@ static int tx_macro_put_dec_enum(struct snd_kcontrol *kcontrol,
+ 	struct tx_macro *tx = snd_soc_component_get_drvdata(component);
+ 
+ 	val = ucontrol->value.enumerated.item[0];
++	if (val >= e->items)
++		return -EINVAL;
+ 
+ 	switch (e->reg) {
+ 	case CDC_TX_INP_MUX_ADC_MUX0_CFG0:
+@@ -772,6 +774,9 @@ static int tx_macro_put_dec_enum(struct snd_kcontrol *kcontrol,
+ 	case CDC_TX_INP_MUX_ADC_MUX7_CFG0:
+ 		mic_sel_reg = CDC_TX7_TX_PATH_CFG0;
+ 		break;
++	default:
++		dev_err(component->dev, "Error in configuration!!\n");
++		return -EINVAL;
+ 	}
+ 
+ 	if (val != 0) {
+diff --git a/sound/soc/codecs/rt5682-i2c.c b/sound/soc/codecs/rt5682-i2c.c
+index 2935c1bb81f3f..5bc46b0417866 100644
+--- a/sound/soc/codecs/rt5682-i2c.c
++++ b/sound/soc/codecs/rt5682-i2c.c
+@@ -267,7 +267,9 @@ static int rt5682_i2c_probe(struct i2c_client *i2c)
+ 		ret = devm_request_threaded_irq(&i2c->dev, i2c->irq, NULL,
+ 			rt5682_irq, IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING
+ 			| IRQF_ONESHOT, "rt5682", rt5682);
+-		if (ret)
++		if (!ret)
++			rt5682->irq = i2c->irq;
++		else
+ 			dev_err(&i2c->dev, "Failed to reguest IRQ: %d\n", ret);
+ 	}
+ 
+diff --git a/sound/soc/codecs/rt5682.c b/sound/soc/codecs/rt5682.c
+index f6c798b65c08a..5d992543b791f 100644
+--- a/sound/soc/codecs/rt5682.c
++++ b/sound/soc/codecs/rt5682.c
+@@ -2959,6 +2959,9 @@ static int rt5682_suspend(struct snd_soc_component *component)
+ 	if (rt5682->is_sdw)
+ 		return 0;
+ 
++	if (rt5682->irq)
++		disable_irq(rt5682->irq);
++
+ 	cancel_delayed_work_sync(&rt5682->jack_detect_work);
+ 	cancel_delayed_work_sync(&rt5682->jd_check_work);
+ 	if (rt5682->hs_jack && (rt5682->jack_type & SND_JACK_HEADSET) == SND_JACK_HEADSET) {
+@@ -3027,6 +3030,9 @@ static int rt5682_resume(struct snd_soc_component *component)
+ 	mod_delayed_work(system_power_efficient_wq,
+ 		&rt5682->jack_detect_work, msecs_to_jiffies(0));
+ 
++	if (rt5682->irq)
++		enable_irq(rt5682->irq);
++
+ 	return 0;
+ }
+ #else
+diff --git a/sound/soc/codecs/rt5682.h b/sound/soc/codecs/rt5682.h
+index d568c6993c337..e8efd8a84a6cc 100644
+--- a/sound/soc/codecs/rt5682.h
++++ b/sound/soc/codecs/rt5682.h
+@@ -1462,6 +1462,7 @@ struct rt5682_priv {
+ 	int pll_out[RT5682_PLLS];
+ 
+ 	int jack_type;
++	int irq;
+ 	int irq_work_delay_time;
+ };
+ 
+diff --git a/sound/soc/intel/avs/apl.c b/sound/soc/intel/avs/apl.c
+index 02683dce277af..1860099c782a7 100644
+--- a/sound/soc/intel/avs/apl.c
++++ b/sound/soc/intel/avs/apl.c
+@@ -169,6 +169,7 @@ static bool apl_lp_streaming(struct avs_dev *adev)
+ {
+ 	struct avs_path *path;
+ 
++	spin_lock(&adev->path_list_lock);
+ 	/* Any gateway without buffer allocated in LP area disqualifies D0IX. */
+ 	list_for_each_entry(path, &adev->path_list, node) {
+ 		struct avs_path_pipeline *ppl;
+@@ -188,11 +189,14 @@ static bool apl_lp_streaming(struct avs_dev *adev)
+ 				if (cfg->copier.dma_type == INVALID_OBJECT_ID)
+ 					continue;
+ 
+-				if (!mod->gtw_attrs.lp_buffer_alloc)
++				if (!mod->gtw_attrs.lp_buffer_alloc) {
++					spin_unlock(&adev->path_list_lock);
+ 					return false;
++				}
+ 			}
+ 		}
+ 	}
++	spin_unlock(&adev->path_list_lock);
+ 
+ 	return true;
+ }
+diff --git a/sound/soc/intel/avs/messages.h b/sound/soc/intel/avs/messages.h
+index d3b60ae7d7432..7f23a304b4a94 100644
+--- a/sound/soc/intel/avs/messages.h
++++ b/sound/soc/intel/avs/messages.h
+@@ -619,7 +619,7 @@ enum avs_channel_config {
+ 	AVS_CHANNEL_CONFIG_DUAL_MONO = 9,
+ 	AVS_CHANNEL_CONFIG_I2S_DUAL_STEREO_0 = 10,
+ 	AVS_CHANNEL_CONFIG_I2S_DUAL_STEREO_1 = 11,
+-	AVS_CHANNEL_CONFIG_4_CHANNEL = 12,
++	AVS_CHANNEL_CONFIG_7_1 = 12,
+ 	AVS_CHANNEL_CONFIG_INVALID
+ };
+ 
+diff --git a/tools/testing/cxl/Kbuild b/tools/testing/cxl/Kbuild
+index fba7bec96acd1..6f9347ade82cd 100644
+--- a/tools/testing/cxl/Kbuild
++++ b/tools/testing/cxl/Kbuild
+@@ -6,6 +6,7 @@ ldflags-y += --wrap=acpi_pci_find_root
+ ldflags-y += --wrap=nvdimm_bus_register
+ ldflags-y += --wrap=devm_cxl_port_enumerate_dports
+ ldflags-y += --wrap=devm_cxl_setup_hdm
++ldflags-y += --wrap=devm_cxl_enable_hdm
+ ldflags-y += --wrap=devm_cxl_add_passthrough_decoder
+ ldflags-y += --wrap=devm_cxl_enumerate_decoders
+ ldflags-y += --wrap=cxl_await_media_ready
+diff --git a/tools/testing/cxl/test/mem.c b/tools/testing/cxl/test/mem.c
+index 9263b04d35f7b..2c153cdeb420d 100644
+--- a/tools/testing/cxl/test/mem.c
++++ b/tools/testing/cxl/test/mem.c
+@@ -1010,6 +1010,7 @@ static int cxl_mock_mem_probe(struct platform_device *pdev)
+ 	if (rc)
+ 		return rc;
+ 
++	cxlds->media_ready = true;
+ 	rc = cxl_dev_state_identify(cxlds);
+ 	if (rc)
+ 		return rc;
+diff --git a/tools/testing/cxl/test/mock.c b/tools/testing/cxl/test/mock.c
+index c4e53f22e4215..652b7dae1feba 100644
+--- a/tools/testing/cxl/test/mock.c
++++ b/tools/testing/cxl/test/mock.c
+@@ -149,6 +149,21 @@ struct cxl_hdm *__wrap_devm_cxl_setup_hdm(struct cxl_port *port,
+ }
+ EXPORT_SYMBOL_NS_GPL(__wrap_devm_cxl_setup_hdm, CXL);
+ 
++int __wrap_devm_cxl_enable_hdm(struct cxl_port *port, struct cxl_hdm *cxlhdm)
++{
++	int index, rc;
++	struct cxl_mock_ops *ops = get_cxl_mock_ops(&index);
++
++	if (ops && ops->is_mock_port(port->uport))
++		rc = 0;
++	else
++		rc = devm_cxl_enable_hdm(port, cxlhdm);
++	put_cxl_mock_ops(index);
++
++	return rc;
++}
++EXPORT_SYMBOL_NS_GPL(__wrap_devm_cxl_enable_hdm, CXL);
++
+ int __wrap_devm_cxl_add_passthrough_decoder(struct cxl_port *port)
+ {
+ 	int rc, index;
+diff --git a/tools/testing/selftests/net/fib_tests.sh b/tools/testing/selftests/net/fib_tests.sh
+index 7da8ec838c63a..35d89dfa6f11b 100755
+--- a/tools/testing/selftests/net/fib_tests.sh
++++ b/tools/testing/selftests/net/fib_tests.sh
+@@ -68,7 +68,7 @@ setup()
+ cleanup()
+ {
+ 	$IP link del dev dummy0 &> /dev/null
+-	ip netns del ns1
++	ip netns del ns1 &> /dev/null
+ 	ip netns del ns2 &> /dev/null
+ }
+ 


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [gentoo-commits] proj/linux-patches:6.3 commit in: /
@ 2023-05-30 16:59 Mike Pagano
  0 siblings, 0 replies; 23+ messages in thread
From: Mike Pagano @ 2023-05-30 16:59 UTC (permalink / raw
  To: gentoo-commits

commit:     22a51a4a4c1294ecd5ba5ee46871da150cf1408e
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue May 30 16:58:47 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue May 30 16:58:47 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=22a51a4a

Remove redundant patch

Removed:
1900_xfs-livelock-fix-in-delaye-alloc-at-ENOSPC.patch

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                        |  4 --
 ...fs-livelock-fix-in-delaye-alloc-at-ENOSPC.patch | 56 ----------------------
 2 files changed, 60 deletions(-)

diff --git a/0000_README b/0000_README
index e0952135..72e833a4 100644
--- a/0000_README
+++ b/0000_README
@@ -75,10 +75,6 @@ Patch:  1700_sparc-address-warray-bound-warnings.patch
 From:		https://github.com/KSPP/linux/issues/109
 Desc:		Address -Warray-bounds warnings 
 
-Patch:  1900_xfs-livelock-fix-in-delaye-alloc-at-ENOSPC.patch
-From:		https://git.kernel.org/pub/scm/linux/kernel/git/stable/stable-queue.git
-Desc:		xfs: fix livelock in delayed allocation at ENOSPC
-
 Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
 From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
 Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758

diff --git a/1900_xfs-livelock-fix-in-delaye-alloc-at-ENOSPC.patch b/1900_xfs-livelock-fix-in-delaye-alloc-at-ENOSPC.patch
deleted file mode 100644
index e1e5726b..00000000
--- a/1900_xfs-livelock-fix-in-delaye-alloc-at-ENOSPC.patch
+++ /dev/null
@@ -1,56 +0,0 @@
-From 9419092fb2630c30e4ffeb9ef61007ef0c61827a Mon Sep 17 00:00:00 2001
-From: Dave Chinner <dchinner@redhat.com>
-Date: Thu, 27 Apr 2023 09:02:11 +1000
-Subject: xfs: fix livelock in delayed allocation at ENOSPC
-
-On a filesystem with a non-zero stripe unit and a large sequential
-write, delayed allocation will set a minimum allocation length of
-the stripe unit. If allocation fails because there are no extents
-long enough for an aligned minlen allocation, it is supposed to
-fall back to unaligned allocation which allows single block extents
-to be allocated.
-
-When the allocator code was rewritting in the 6.3 cycle, this
-fallback was broken - the old code used args->fsbno as the both the
-allocation target and the allocation result, the new code passes the
-target as a separate parameter. The conversion didn't handle the
-aligned->unaligned fallback path correctly - it reset args->fsbno to
-the target fsbno on failure which broke allocation failure detection
-in the high level code and so it never fell back to unaligned
-allocations.
-
-This resulted in a loop in writeback trying to allocate an aligned
-block, getting a false positive success, trying to insert the result
-in the BMBT. This did nothing because the extent already was in the
-BMBT (merge results in an unchanged extent) and so it returned the
-prior extent to the conversion code as the current iomap.
-
-Because the iomap returned didn't cover the offset we tried to map,
-xfs_convert_blocks() then retries the allocation, which fails in the
-same way and now we have a livelock.
-
-Reported-and-tested-by: Brian Foster <bfoster@redhat.com>
-Fixes: 85843327094f ("xfs: factor xfs_bmap_btalloc()")
-Signed-off-by: Dave Chinner <dchinner@redhat.com>
-Reviewed-by: Darrick J. Wong <djwong@kernel.org>
----
- fs/xfs/libxfs/xfs_bmap.c | 1 -
- 1 file changed, 1 deletion(-)
-
-(limited to 'fs/xfs/libxfs/xfs_bmap.c')
-
-diff --git a/fs/xfs/libxfs/xfs_bmap.c b/fs/xfs/libxfs/xfs_bmap.c
-index 1a4e446194dd8..b512de0540d54 100644
---- a/fs/xfs/libxfs/xfs_bmap.c
-+++ b/fs/xfs/libxfs/xfs_bmap.c
-@@ -3540,7 +3540,6 @@ xfs_bmap_btalloc_at_eof(
- 	 * original non-aligned state so the caller can proceed on allocation
- 	 * failure as if this function was never called.
- 	 */
--	args->fsbno = ap->blkno;
- 	args->alignment = 1;
- 	return 0;
- }
--- 
-cgit 
-


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [gentoo-commits] proj/linux-patches:6.3 commit in: /
@ 2023-06-02 15:09 Mike Pagano
  0 siblings, 0 replies; 23+ messages in thread
From: Mike Pagano @ 2023-06-02 15:09 UTC (permalink / raw
  To: gentoo-commits

commit:     b8a52a844724d67119ef377a2a8e97f1dfee9a33
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Jun  2 15:09:34 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Jun  2 15:09:34 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=b8a52a84

io_uring: undeprecate epoll_ctl support

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                       |  4 ++++
 2100_io-uring-undeprecate-epoll-ctl-support.patch | 21 +++++++++++++++++++++
 2 files changed, 25 insertions(+)

diff --git a/0000_README b/0000_README
index 72e833a4..66dc0900 100644
--- a/0000_README
+++ b/0000_README
@@ -79,6 +79,10 @@ Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
 From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
 Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
 
+Patch:  2100_io-uring-undeprecate-epoll-ctl-support.patch
+From:   https://patchwork.kernel.org/project/io-uring/patch/20230506095502.13401-1-info@bnoordhuis.nl/
+Desc:   io_uring: undeprecate epoll_ctl support
+
 Patch:  2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
 From:   https://bugs.gentoo.org/710790
 Desc:   tmp513 requies REGMAP_I2C to build.  Select it by default in Kconfig. See bug #710790. Thanks to Phil Stracchino

diff --git a/2100_io-uring-undeprecate-epoll-ctl-support.patch b/2100_io-uring-undeprecate-epoll-ctl-support.patch
new file mode 100644
index 00000000..4c3d3904
--- /dev/null
+++ b/2100_io-uring-undeprecate-epoll-ctl-support.patch
@@ -0,0 +1,21 @@
+io_uring: undeprecate epoll_ctl support
+
+---
+ io_uring/epoll.c | 4 ----
+ 1 file changed, 4 deletions(-)
+
+diff --git a/io_uring/epoll.c b/io_uring/epoll.c
+index 9aa74d2c80bc..89bff2068a19 100644
+--- a/io_uring/epoll.c
++++ b/io_uring/epoll.c
+@@ -25,10 +25,6 @@ int io_epoll_ctl_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ {
+ 	struct io_epoll *epoll = io_kiocb_to_cmd(req, struct io_epoll);
+ 
+-	pr_warn_once("%s: epoll_ctl support in io_uring is deprecated and will "
+-		     "be removed in a future Linux kernel version.\n",
+-		     current->comm);
+-
+ 	if (sqe->buf_index || sqe->splice_fd_in)
+ 		return -EINVAL;
+ 


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [gentoo-commits] proj/linux-patches:6.3 commit in: /
@ 2023-06-05 11:47 Mike Pagano
  0 siblings, 0 replies; 23+ messages in thread
From: Mike Pagano @ 2023-06-05 11:47 UTC (permalink / raw
  To: gentoo-commits

commit:     c25f4b52eeb6d16c5291a77e8e382fbddce0c5b4
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Jun  5 11:47:44 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Jun  5 11:47:44 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=c25f4b52

Linux patch 6.3.6

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README            |    4 +
 1005_linux-6.3.6.patch | 2266 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2270 insertions(+)

diff --git a/0000_README b/0000_README
index 66dc0900..210d6768 100644
--- a/0000_README
+++ b/0000_README
@@ -63,6 +63,10 @@ Patch:  1004_linux-6.3.5.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.3.5
 
+Patch:  1005_linux-6.3.6.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.3.6
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1005_linux-6.3.6.patch b/1005_linux-6.3.6.patch
new file mode 100644
index 00000000..b9df57e3
--- /dev/null
+++ b/1005_linux-6.3.6.patch
@@ -0,0 +1,2266 @@
+diff --git a/Makefile b/Makefile
+index d710ff6a3d566..1dffadbf1f87f 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 3
+-SUBLEVEL = 5
++SUBLEVEL = 6
+ EXTRAVERSION =
+ NAME = Hurr durr I'ma ninja sloth
+ 
+diff --git a/arch/arm/boot/dts/imx6ull-dhcor-som.dtsi b/arch/arm/boot/dts/imx6ull-dhcor-som.dtsi
+index 5882c7565f649..32a6022625d97 100644
+--- a/arch/arm/boot/dts/imx6ull-dhcor-som.dtsi
++++ b/arch/arm/boot/dts/imx6ull-dhcor-som.dtsi
+@@ -8,6 +8,7 @@
+ #include <dt-bindings/input/input.h>
+ #include <dt-bindings/leds/common.h>
+ #include <dt-bindings/pwm/pwm.h>
++#include <dt-bindings/regulator/dlg,da9063-regulator.h>
+ #include "imx6ull.dtsi"
+ 
+ / {
+@@ -84,16 +85,20 @@
+ 
+ 		regulators {
+ 			vdd_soc_in_1v4: buck1 {
++				regulator-allowed-modes = <DA9063_BUCK_MODE_SLEEP>; /* PFM */
+ 				regulator-always-on;
+ 				regulator-boot-on;
++				regulator-initial-mode = <DA9063_BUCK_MODE_SLEEP>;
+ 				regulator-max-microvolt = <1400000>;
+ 				regulator-min-microvolt = <1400000>;
+ 				regulator-name = "vdd_soc_in_1v4";
+ 			};
+ 
+ 			vcc_3v3: buck2 {
++				regulator-allowed-modes = <DA9063_BUCK_MODE_SYNC>; /* PWM */
+ 				regulator-always-on;
+ 				regulator-boot-on;
++				regulator-initial-mode = <DA9063_BUCK_MODE_SYNC>;
+ 				regulator-max-microvolt = <3300000>;
+ 				regulator-min-microvolt = <3300000>;
+ 				regulator-name = "vcc_3v3";
+@@ -106,8 +111,10 @@
+ 			 * the voltage is set to 1.5V.
+ 			 */
+ 			vcc_ddr_1v35: buck3 {
++				regulator-allowed-modes = <DA9063_BUCK_MODE_SYNC>; /* PWM */
+ 				regulator-always-on;
+ 				regulator-boot-on;
++				regulator-initial-mode = <DA9063_BUCK_MODE_SYNC>;
+ 				regulator-max-microvolt = <1500000>;
+ 				regulator-min-microvolt = <1500000>;
+ 				regulator-name = "vcc_ddr_1v35";
+diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
+index 9eb968e14d31f..a80d7c62bdfe6 100644
+--- a/block/blk-mq-tag.c
++++ b/block/blk-mq-tag.c
+@@ -41,16 +41,20 @@ void __blk_mq_tag_busy(struct blk_mq_hw_ctx *hctx)
+ {
+ 	unsigned int users;
+ 
++	/*
++	 * calling test_bit() prior to test_and_set_bit() is intentional,
++	 * it avoids dirtying the cacheline if the queue is already active.
++	 */
+ 	if (blk_mq_is_shared_tags(hctx->flags)) {
+ 		struct request_queue *q = hctx->queue;
+ 
+-		if (test_bit(QUEUE_FLAG_HCTX_ACTIVE, &q->queue_flags))
++		if (test_bit(QUEUE_FLAG_HCTX_ACTIVE, &q->queue_flags) ||
++		    test_and_set_bit(QUEUE_FLAG_HCTX_ACTIVE, &q->queue_flags))
+ 			return;
+-		set_bit(QUEUE_FLAG_HCTX_ACTIVE, &q->queue_flags);
+ 	} else {
+-		if (test_bit(BLK_MQ_S_TAG_ACTIVE, &hctx->state))
++		if (test_bit(BLK_MQ_S_TAG_ACTIVE, &hctx->state) ||
++		    test_and_set_bit(BLK_MQ_S_TAG_ACTIVE, &hctx->state))
+ 			return;
+-		set_bit(BLK_MQ_S_TAG_ACTIVE, &hctx->state);
+ 	}
+ 
+ 	users = atomic_inc_return(&hctx->tags->active_queues);
+diff --git a/block/blk-wbt.c b/block/blk-wbt.c
+index e49a486845327..9ec2a2f1eda38 100644
+--- a/block/blk-wbt.c
++++ b/block/blk-wbt.c
+@@ -730,14 +730,16 @@ void wbt_enable_default(struct gendisk *disk)
+ {
+ 	struct request_queue *q = disk->queue;
+ 	struct rq_qos *rqos;
+-	bool disable_flag = q->elevator &&
+-		    test_bit(ELEVATOR_FLAG_DISABLE_WBT, &q->elevator->flags);
++	bool enable = IS_ENABLED(CONFIG_BLK_WBT_MQ);
++
++	if (q->elevator &&
++	    test_bit(ELEVATOR_FLAG_DISABLE_WBT, &q->elevator->flags))
++		enable = false;
+ 
+ 	/* Throttling already enabled? */
+ 	rqos = wbt_rq_qos(q);
+ 	if (rqos) {
+-		if (!disable_flag &&
+-		    RQWB(rqos)->enable_state == WBT_STATE_OFF_DEFAULT)
++		if (enable && RQWB(rqos)->enable_state == WBT_STATE_OFF_DEFAULT)
+ 			RQWB(rqos)->enable_state = WBT_STATE_ON_DEFAULT;
+ 		return;
+ 	}
+@@ -746,7 +748,7 @@ void wbt_enable_default(struct gendisk *disk)
+ 	if (!blk_queue_registered(q))
+ 		return;
+ 
+-	if (queue_is_mq(q) && !disable_flag)
++	if (queue_is_mq(q) && enable)
+ 		wbt_init(disk);
+ }
+ EXPORT_SYMBOL_GPL(wbt_enable_default);
+diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
+index 8dd46fad151eb..a7eb25066c274 100644
+--- a/drivers/cpufreq/amd-pstate.c
++++ b/drivers/cpufreq/amd-pstate.c
+@@ -422,9 +422,8 @@ static int amd_pstate_verify(struct cpufreq_policy_data *policy)
+ 	return 0;
+ }
+ 
+-static int amd_pstate_target(struct cpufreq_policy *policy,
+-			     unsigned int target_freq,
+-			     unsigned int relation)
++static int amd_pstate_update_freq(struct cpufreq_policy *policy,
++				  unsigned int target_freq, bool fast_switch)
+ {
+ 	struct cpufreq_freqs freqs;
+ 	struct amd_cpudata *cpudata = policy->driver_data;
+@@ -443,26 +442,50 @@ static int amd_pstate_target(struct cpufreq_policy *policy,
+ 	des_perf = DIV_ROUND_CLOSEST(target_freq * cap_perf,
+ 				     cpudata->max_freq);
+ 
+-	cpufreq_freq_transition_begin(policy, &freqs);
+-	amd_pstate_update(cpudata, min_perf, des_perf,
+-			  max_perf, false);
+-	cpufreq_freq_transition_end(policy, &freqs, false);
++	WARN_ON(fast_switch && !policy->fast_switch_enabled);
++	/*
++	 * If fast_switch is desired, then there aren't any registered
++	 * transition notifiers. See comment for
++	 * cpufreq_enable_fast_switch().
++	 */
++	if (!fast_switch)
++		cpufreq_freq_transition_begin(policy, &freqs);
++
++	amd_pstate_update(cpudata, min_perf, des_perf, max_perf, fast_switch);
++
++	if (!fast_switch)
++		cpufreq_freq_transition_end(policy, &freqs, false);
+ 
+ 	return 0;
+ }
+ 
++static int amd_pstate_target(struct cpufreq_policy *policy,
++			     unsigned int target_freq,
++			     unsigned int relation)
++{
++	return amd_pstate_update_freq(policy, target_freq, false);
++}
++
++static unsigned int amd_pstate_fast_switch(struct cpufreq_policy *policy,
++				  unsigned int target_freq)
++{
++	return amd_pstate_update_freq(policy, target_freq, true);
++}
++
+ static void amd_pstate_adjust_perf(unsigned int cpu,
+ 				   unsigned long _min_perf,
+ 				   unsigned long target_perf,
+ 				   unsigned long capacity)
+ {
+ 	unsigned long max_perf, min_perf, des_perf,
+-		      cap_perf, lowest_nonlinear_perf;
++		      cap_perf, lowest_nonlinear_perf, max_freq;
+ 	struct cpufreq_policy *policy = cpufreq_cpu_get(cpu);
+ 	struct amd_cpudata *cpudata = policy->driver_data;
++	unsigned int target_freq;
+ 
+ 	cap_perf = READ_ONCE(cpudata->highest_perf);
+ 	lowest_nonlinear_perf = READ_ONCE(cpudata->lowest_nonlinear_perf);
++	max_freq = READ_ONCE(cpudata->max_freq);
+ 
+ 	des_perf = cap_perf;
+ 	if (target_perf < capacity)
+@@ -479,6 +502,10 @@ static void amd_pstate_adjust_perf(unsigned int cpu,
+ 	if (max_perf < min_perf)
+ 		max_perf = min_perf;
+ 
++	des_perf = clamp_t(unsigned long, des_perf, min_perf, max_perf);
++	target_freq = div_u64(des_perf * max_freq, max_perf);
++	policy->cur = target_freq;
++
+ 	amd_pstate_update(cpudata, min_perf, des_perf, max_perf, true);
+ 	cpufreq_cpu_put(policy);
+ }
+@@ -692,6 +719,7 @@ static int amd_pstate_cpu_exit(struct cpufreq_policy *policy)
+ 
+ 	freq_qos_remove_request(&cpudata->req[1]);
+ 	freq_qos_remove_request(&cpudata->req[0]);
++	policy->fast_switch_possible = false;
+ 	kfree(cpudata);
+ 
+ 	return 0;
+@@ -996,7 +1024,6 @@ static int amd_pstate_epp_cpu_init(struct cpufreq_policy *policy)
+ 	policy->policy = CPUFREQ_POLICY_POWERSAVE;
+ 
+ 	if (boot_cpu_has(X86_FEATURE_CPPC)) {
+-		policy->fast_switch_possible = true;
+ 		ret = rdmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ, &value);
+ 		if (ret)
+ 			return ret;
+@@ -1019,7 +1046,6 @@ free_cpudata1:
+ static int amd_pstate_epp_cpu_exit(struct cpufreq_policy *policy)
+ {
+ 	pr_debug("CPU %d exiting\n", policy->cpu);
+-	policy->fast_switch_possible = false;
+ 	return 0;
+ }
+ 
+@@ -1226,6 +1252,7 @@ static struct cpufreq_driver amd_pstate_driver = {
+ 	.flags		= CPUFREQ_CONST_LOOPS | CPUFREQ_NEED_UPDATE_LIMITS,
+ 	.verify		= amd_pstate_verify,
+ 	.target		= amd_pstate_target,
++	.fast_switch    = amd_pstate_fast_switch,
+ 	.init		= amd_pstate_cpu_init,
+ 	.exit		= amd_pstate_cpu_exit,
+ 	.suspend	= amd_pstate_cpu_suspend,
+diff --git a/drivers/cxl/core/port.c b/drivers/cxl/core/port.c
+index 4d1f9c5b5029a..27cbf457416d2 100644
+--- a/drivers/cxl/core/port.c
++++ b/drivers/cxl/core/port.c
+@@ -751,11 +751,10 @@ struct cxl_port *devm_cxl_add_port(struct device *host, struct device *uport,
+ 
+ 	parent_port = parent_dport ? parent_dport->port : NULL;
+ 	if (IS_ERR(port)) {
+-		dev_dbg(uport, "Failed to add %s%s%s%s: %ld\n",
+-			dev_name(&port->dev),
+-			parent_port ? " to " : "",
++		dev_dbg(uport, "Failed to add%s%s%s: %ld\n",
++			parent_port ? " port to " : "",
+ 			parent_port ? dev_name(&parent_port->dev) : "",
+-			parent_port ? "" : " (root port)",
++			parent_port ? "" : " root port",
+ 			PTR_ERR(port));
+ 	} else {
+ 		dev_dbg(uport, "%s added%s%s%s\n",
+diff --git a/drivers/firmware/arm_ffa/driver.c b/drivers/firmware/arm_ffa/driver.c
+index 02774baa90078..e234091386671 100644
+--- a/drivers/firmware/arm_ffa/driver.c
++++ b/drivers/firmware/arm_ffa/driver.c
+@@ -193,7 +193,8 @@ __ffa_partition_info_get(u32 uuid0, u32 uuid1, u32 uuid2, u32 uuid3,
+ 	int idx, count, flags = 0, sz, buf_sz;
+ 	ffa_value_t partition_info;
+ 
+-	if (!buffer || !num_partitions) /* Just get the count for now */
++	if (drv_info->version > FFA_VERSION_1_0 &&
++	    (!buffer || !num_partitions)) /* Just get the count for now */
+ 		flags = PARTITION_INFO_GET_RETURN_COUNT_ONLY;
+ 
+ 	mutex_lock(&drv_info->rx_lock);
+diff --git a/drivers/firmware/arm_scmi/raw_mode.c b/drivers/firmware/arm_scmi/raw_mode.c
+index d40df099fd515..6971dcf72fb99 100644
+--- a/drivers/firmware/arm_scmi/raw_mode.c
++++ b/drivers/firmware/arm_scmi/raw_mode.c
+@@ -1066,7 +1066,7 @@ static int scmi_xfer_raw_worker_init(struct scmi_raw_mode_info *raw)
+ 
+ 	raw->wait_wq = alloc_workqueue("scmi-raw-wait-wq-%d",
+ 				       WQ_UNBOUND | WQ_FREEZABLE |
+-				       WQ_HIGHPRI, WQ_SYSFS, raw->id);
++				       WQ_HIGHPRI | WQ_SYSFS, 0, raw->id);
+ 	if (!raw->wait_wq)
+ 		return -ENOMEM;
+ 
+diff --git a/drivers/gpio/Kconfig b/drivers/gpio/Kconfig
+index badbe05823180..14b655411aa0a 100644
+--- a/drivers/gpio/Kconfig
++++ b/drivers/gpio/Kconfig
+@@ -879,7 +879,7 @@ config GPIO_F7188X
+ 	help
+ 	  This option enables support for GPIOs found on Fintek Super-I/O
+ 	  chips F71869, F71869A, F71882FG, F71889F and F81866.
+-	  As well as Nuvoton Super-I/O chip NCT6116D.
++	  As well as Nuvoton Super-I/O chip NCT6126D.
+ 
+ 	  To compile this driver as a module, choose M here: the module will
+ 	  be called f7188x-gpio.
+diff --git a/drivers/gpio/gpio-f7188x.c b/drivers/gpio/gpio-f7188x.c
+index 9effa7769bef5..f54ca5a1775ea 100644
+--- a/drivers/gpio/gpio-f7188x.c
++++ b/drivers/gpio/gpio-f7188x.c
+@@ -48,7 +48,7 @@
+ /*
+  * Nuvoton devices.
+  */
+-#define SIO_NCT6116D_ID		0xD283  /* NCT6116D chipset ID */
++#define SIO_NCT6126D_ID		0xD283  /* NCT6126D chipset ID */
+ 
+ #define SIO_LD_GPIO_NUVOTON	0x07	/* GPIO logical device */
+ 
+@@ -62,7 +62,7 @@ enum chips {
+ 	f81866,
+ 	f81804,
+ 	f81865,
+-	nct6116d,
++	nct6126d,
+ };
+ 
+ static const char * const f7188x_names[] = {
+@@ -74,7 +74,7 @@ static const char * const f7188x_names[] = {
+ 	"f81866",
+ 	"f81804",
+ 	"f81865",
+-	"nct6116d",
++	"nct6126d",
+ };
+ 
+ struct f7188x_sio {
+@@ -187,8 +187,8 @@ static int f7188x_gpio_set_config(struct gpio_chip *chip, unsigned offset,
+ /* Output mode register (0:open drain 1:push-pull). */
+ #define f7188x_gpio_out_mode(base) ((base) + 3)
+ 
+-#define f7188x_gpio_dir_invert(type)	((type) == nct6116d)
+-#define f7188x_gpio_data_single(type)	((type) == nct6116d)
++#define f7188x_gpio_dir_invert(type)	((type) == nct6126d)
++#define f7188x_gpio_data_single(type)	((type) == nct6126d)
+ 
+ static struct f7188x_gpio_bank f71869_gpio_bank[] = {
+ 	F7188X_GPIO_BANK(0, 6, 0xF0, DRVNAME "-0"),
+@@ -274,7 +274,7 @@ static struct f7188x_gpio_bank f81865_gpio_bank[] = {
+ 	F7188X_GPIO_BANK(60, 5, 0x90, DRVNAME "-6"),
+ };
+ 
+-static struct f7188x_gpio_bank nct6116d_gpio_bank[] = {
++static struct f7188x_gpio_bank nct6126d_gpio_bank[] = {
+ 	F7188X_GPIO_BANK(0, 8, 0xE0, DRVNAME "-0"),
+ 	F7188X_GPIO_BANK(10, 8, 0xE4, DRVNAME "-1"),
+ 	F7188X_GPIO_BANK(20, 8, 0xE8, DRVNAME "-2"),
+@@ -282,7 +282,7 @@ static struct f7188x_gpio_bank nct6116d_gpio_bank[] = {
+ 	F7188X_GPIO_BANK(40, 8, 0xF0, DRVNAME "-4"),
+ 	F7188X_GPIO_BANK(50, 8, 0xF4, DRVNAME "-5"),
+ 	F7188X_GPIO_BANK(60, 8, 0xF8, DRVNAME "-6"),
+-	F7188X_GPIO_BANK(70, 1, 0xFC, DRVNAME "-7"),
++	F7188X_GPIO_BANK(70, 8, 0xFC, DRVNAME "-7"),
+ };
+ 
+ static int f7188x_gpio_get_direction(struct gpio_chip *chip, unsigned offset)
+@@ -490,9 +490,9 @@ static int f7188x_gpio_probe(struct platform_device *pdev)
+ 		data->nr_bank = ARRAY_SIZE(f81865_gpio_bank);
+ 		data->bank = f81865_gpio_bank;
+ 		break;
+-	case nct6116d:
+-		data->nr_bank = ARRAY_SIZE(nct6116d_gpio_bank);
+-		data->bank = nct6116d_gpio_bank;
++	case nct6126d:
++		data->nr_bank = ARRAY_SIZE(nct6126d_gpio_bank);
++		data->bank = nct6126d_gpio_bank;
+ 		break;
+ 	default:
+ 		return -ENODEV;
+@@ -559,9 +559,9 @@ static int __init f7188x_find(int addr, struct f7188x_sio *sio)
+ 	case SIO_F81865_ID:
+ 		sio->type = f81865;
+ 		break;
+-	case SIO_NCT6116D_ID:
++	case SIO_NCT6126D_ID:
+ 		sio->device = SIO_LD_GPIO_NUVOTON;
+-		sio->type = nct6116d;
++		sio->type = nct6126d;
+ 		break;
+ 	default:
+ 		pr_info("Unsupported Fintek device 0x%04x\n", devid);
+@@ -569,7 +569,7 @@ static int __init f7188x_find(int addr, struct f7188x_sio *sio)
+ 	}
+ 
+ 	/* double check manufacturer where possible */
+-	if (sio->type != nct6116d) {
++	if (sio->type != nct6126d) {
+ 		manid = superio_inw(addr, SIO_FINTEK_MANID);
+ 		if (manid != SIO_FINTEK_ID) {
+ 			pr_debug("Not a Fintek device at 0x%08x\n", addr);
+@@ -581,7 +581,7 @@ static int __init f7188x_find(int addr, struct f7188x_sio *sio)
+ 	err = 0;
+ 
+ 	pr_info("Found %s at %#x\n", f7188x_names[sio->type], (unsigned int)addr);
+-	if (sio->type != nct6116d)
++	if (sio->type != nct6126d)
+ 		pr_info("   revision %d\n", superio_inb(addr, SIO_FINTEK_DEVREV));
+ 
+ err:
+diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
+index 19bd23044b017..4472214fcd43a 100644
+--- a/drivers/gpio/gpiolib.c
++++ b/drivers/gpio/gpiolib.c
+@@ -193,6 +193,8 @@ static int gpiochip_find_base(int ngpio)
+ 			break;
+ 		/* nope, check the space right after the chip */
+ 		base = gdev->base + gdev->ngpio;
++		if (base < GPIO_DYNAMIC_BASE)
++			base = GPIO_DYNAMIC_BASE;
+ 	}
+ 
+ 	if (gpio_is_valid(base)) {
+diff --git a/drivers/gpu/drm/i915/display/intel_ddi.c b/drivers/gpu/drm/i915/display/intel_ddi.c
+index 254559abedfba..379050d228941 100644
+--- a/drivers/gpu/drm/i915/display/intel_ddi.c
++++ b/drivers/gpu/drm/i915/display/intel_ddi.c
+@@ -2731,9 +2731,6 @@ static void intel_ddi_post_disable(struct intel_atomic_state *state,
+ 				   const struct drm_connector_state *old_conn_state)
+ {
+ 	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
+-	struct intel_digital_port *dig_port = enc_to_dig_port(encoder);
+-	enum phy phy = intel_port_to_phy(dev_priv, encoder->port);
+-	bool is_tc_port = intel_phy_is_tc(dev_priv, phy);
+ 	struct intel_crtc *slave_crtc;
+ 
+ 	if (!intel_crtc_has_type(old_crtc_state, INTEL_OUTPUT_DP_MST)) {
+@@ -2783,6 +2780,17 @@ static void intel_ddi_post_disable(struct intel_atomic_state *state,
+ 	else
+ 		intel_ddi_post_disable_dp(state, encoder, old_crtc_state,
+ 					  old_conn_state);
++}
++
++static void intel_ddi_post_pll_disable(struct intel_atomic_state *state,
++				       struct intel_encoder *encoder,
++				       const struct intel_crtc_state *old_crtc_state,
++				       const struct drm_connector_state *old_conn_state)
++{
++	struct drm_i915_private *i915 = to_i915(encoder->base.dev);
++	struct intel_digital_port *dig_port = enc_to_dig_port(encoder);
++	enum phy phy = intel_port_to_phy(i915, encoder->port);
++	bool is_tc_port = intel_phy_is_tc(i915, phy);
+ 
+ 	main_link_aux_power_domain_put(dig_port, old_crtc_state);
+ 
+@@ -4381,6 +4389,7 @@ void intel_ddi_init(struct drm_i915_private *dev_priv, enum port port)
+ 	encoder->pre_pll_enable = intel_ddi_pre_pll_enable;
+ 	encoder->pre_enable = intel_ddi_pre_enable;
+ 	encoder->disable = intel_disable_ddi;
++	encoder->post_pll_disable = intel_ddi_post_pll_disable;
+ 	encoder->post_disable = intel_ddi_post_disable;
+ 	encoder->update_pipe = intel_ddi_update_pipe;
+ 	encoder->get_hw_state = intel_ddi_get_hw_state;
+diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
+index 2bef50ab0ad19..c84b581c61c6b 100644
+--- a/drivers/gpu/drm/i915/display/intel_display.c
++++ b/drivers/gpu/drm/i915/display/intel_display.c
+@@ -2000,6 +2000,8 @@ static void ilk_crtc_disable(struct intel_atomic_state *state,
+ 
+ 	intel_set_cpu_fifo_underrun_reporting(dev_priv, pipe, true);
+ 	intel_set_pch_fifo_underrun_reporting(dev_priv, pipe, true);
++
++	intel_disable_shared_dpll(old_crtc_state);
+ }
+ 
+ static void hsw_crtc_disable(struct intel_atomic_state *state,
+@@ -2018,7 +2020,19 @@ static void hsw_crtc_disable(struct intel_atomic_state *state,
+ 		intel_encoders_post_disable(state, crtc);
+ 	}
+ 
+-	intel_dmc_disable_pipe(i915, crtc->pipe);
++	intel_disable_shared_dpll(old_crtc_state);
++
++	if (!intel_crtc_is_bigjoiner_slave(old_crtc_state)) {
++		struct intel_crtc *slave_crtc;
++
++		intel_encoders_post_pll_disable(state, crtc);
++
++		intel_dmc_disable_pipe(i915, crtc->pipe);
++
++		for_each_intel_crtc_in_pipe_mask(&i915->drm, slave_crtc,
++						 intel_crtc_bigjoiner_slave_pipes(old_crtc_state))
++			intel_dmc_disable_pipe(i915, slave_crtc->pipe);
++	}
+ }
+ 
+ static void i9xx_pfit_enable(const struct intel_crtc_state *crtc_state)
+@@ -7140,7 +7154,6 @@ static void intel_old_crtc_state_disables(struct intel_atomic_state *state,
+ 	dev_priv->display.funcs.display->crtc_disable(state, crtc);
+ 	crtc->active = false;
+ 	intel_fbc_disable(crtc);
+-	intel_disable_shared_dpll(old_crtc_state);
+ 
+ 	if (!new_crtc_state->hw.active)
+ 		intel_initial_watermarks(state, crtc);
+diff --git a/drivers/gpu/drm/i915/display/intel_dp_mst.c b/drivers/gpu/drm/i915/display/intel_dp_mst.c
+index 7c9b328bc2d73..a93018ce0e312 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp_mst.c
++++ b/drivers/gpu/drm/i915/display/intel_dp_mst.c
+@@ -623,6 +623,20 @@ static void intel_mst_post_disable_dp(struct intel_atomic_state *state,
+ 		    intel_dp->active_mst_links);
+ }
+ 
++static void intel_mst_post_pll_disable_dp(struct intel_atomic_state *state,
++					  struct intel_encoder *encoder,
++					  const struct intel_crtc_state *old_crtc_state,
++					  const struct drm_connector_state *old_conn_state)
++{
++	struct intel_dp_mst_encoder *intel_mst = enc_to_mst(encoder);
++	struct intel_digital_port *dig_port = intel_mst->primary;
++	struct intel_dp *intel_dp = &dig_port->dp;
++
++	if (intel_dp->active_mst_links == 0 &&
++	    dig_port->base.post_pll_disable)
++		dig_port->base.post_pll_disable(state, encoder, old_crtc_state, old_conn_state);
++}
++
+ static void intel_mst_pre_pll_enable_dp(struct intel_atomic_state *state,
+ 					struct intel_encoder *encoder,
+ 					const struct intel_crtc_state *pipe_config,
+@@ -1146,6 +1160,7 @@ intel_dp_create_fake_mst_encoder(struct intel_digital_port *dig_port, enum pipe
+ 	intel_encoder->compute_config_late = intel_dp_mst_compute_config_late;
+ 	intel_encoder->disable = intel_mst_disable_dp;
+ 	intel_encoder->post_disable = intel_mst_post_disable_dp;
++	intel_encoder->post_pll_disable = intel_mst_post_pll_disable_dp;
+ 	intel_encoder->update_pipe = intel_ddi_update_pipe;
+ 	intel_encoder->pre_pll_enable = intel_mst_pre_pll_enable_dp;
+ 	intel_encoder->pre_enable = intel_mst_pre_enable_dp;
+diff --git a/drivers/gpu/drm/i915/display/intel_modeset_setup.c b/drivers/gpu/drm/i915/display/intel_modeset_setup.c
+index 52cdbd4fc2fa0..48b726e408057 100644
+--- a/drivers/gpu/drm/i915/display/intel_modeset_setup.c
++++ b/drivers/gpu/drm/i915/display/intel_modeset_setup.c
+@@ -96,7 +96,6 @@ static void intel_crtc_disable_noatomic(struct intel_crtc *crtc,
+ 
+ 	intel_fbc_disable(crtc);
+ 	intel_update_watermarks(i915);
+-	intel_disable_shared_dpll(crtc_state);
+ 
+ 	intel_display_power_put_all_in_set(i915, &crtc->enabled_power_domains);
+ 
+diff --git a/drivers/hwtracing/coresight/coresight-etm-perf.c b/drivers/hwtracing/coresight/coresight-etm-perf.c
+index 711f451b69469..89e8ed214ea49 100644
+--- a/drivers/hwtracing/coresight/coresight-etm-perf.c
++++ b/drivers/hwtracing/coresight/coresight-etm-perf.c
+@@ -402,6 +402,7 @@ static void *etm_setup_aux(struct perf_event *event, void **pages,
+ 		trace_id = coresight_trace_id_get_cpu_id(cpu);
+ 		if (!IS_VALID_CS_TRACE_ID(trace_id)) {
+ 			cpumask_clear_cpu(cpu, mask);
++			coresight_release_path(path);
+ 			continue;
+ 		}
+ 
+diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
+index 577d94821b3e7..38e5b5abe067c 100644
+--- a/drivers/net/ethernet/freescale/fec_main.c
++++ b/drivers/net/ethernet/freescale/fec_main.c
+@@ -3834,6 +3834,11 @@ static int fec_enet_txq_xmit_frame(struct fec_enet_private *fep,
+ 	index = fec_enet_get_bd_index(last_bdp, &txq->bd);
+ 	txq->tx_skbuff[index] = NULL;
+ 
++	/* Make sure the updates to rest of the descriptor are performed before
++	 * transferring ownership.
++	 */
++	dma_wmb();
++
+ 	/* Send it on its way.  Tell FEC it's ready, interrupt when done,
+ 	 * it's the last BD of the frame, and to put the CRC on the end.
+ 	 */
+@@ -3843,8 +3848,14 @@ static int fec_enet_txq_xmit_frame(struct fec_enet_private *fep,
+ 	/* If this was the last BD in the ring, start at the beginning again. */
+ 	bdp = fec_enet_get_nextdesc(last_bdp, &txq->bd);
+ 
++	/* Make sure the update to bdp are performed before txq->bd.cur. */
++	dma_wmb();
++
+ 	txq->bd.cur = bdp;
+ 
++	/* Trigger transmission start */
++	writel(0, txq->bd.reg_desc_active);
++
+ 	return 0;
+ }
+ 
+@@ -3873,12 +3884,6 @@ static int fec_enet_xdp_xmit(struct net_device *dev,
+ 		sent_frames++;
+ 	}
+ 
+-	/* Make sure the update to bdp and tx_skbuff are performed. */
+-	wmb();
+-
+-	/* Trigger transmission start */
+-	writel(0, txq->bd.reg_desc_active);
+-
+ 	__netif_tx_unlock(nq);
+ 
+ 	return sent_frames;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/Makefile b/drivers/net/ethernet/mellanox/mlx5/core/Makefile
+index 8d4e25cc54ea3..78755dfeaccea 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/Makefile
++++ b/drivers/net/ethernet/mellanox/mlx5/core/Makefile
+@@ -69,7 +69,7 @@ mlx5_core-$(CONFIG_MLX5_TC_SAMPLE)   += en/tc/sample.o
+ #
+ mlx5_core-$(CONFIG_MLX5_ESWITCH)   += eswitch.o eswitch_offloads.o eswitch_offloads_termtbl.o \
+ 				      ecpf.o rdma.o esw/legacy.o \
+-				      esw/debugfs.o esw/devlink_port.o esw/vporttbl.o esw/qos.o
++				      esw/devlink_port.o esw/vporttbl.o esw/qos.o
+ 
+ mlx5_core-$(CONFIG_MLX5_ESWITCH)   += esw/acl/helper.o \
+ 				      esw/acl/egress_lgcy.o esw/acl/egress_ofld.o \
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+index 2b1094e5b0c9d..53acd9a8a4c35 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+@@ -5793,22 +5793,43 @@ bool mlx5e_tc_update_skb_nic(struct mlx5_cqe64 *cqe, struct sk_buff *skb)
+ 				   0, NULL);
+ }
+ 
++static struct mapping_ctx *
++mlx5e_get_priv_obj_mapping(struct mlx5e_priv *priv)
++{
++	struct mlx5e_tc_table *tc;
++	struct mlx5_eswitch *esw;
++	struct mapping_ctx *ctx;
++
++	if (is_mdev_switchdev_mode(priv->mdev)) {
++		esw = priv->mdev->priv.eswitch;
++		ctx = esw->offloads.reg_c0_obj_pool;
++	} else {
++		tc = mlx5e_fs_get_tc(priv->fs);
++		ctx = tc->mapping;
++	}
++
++	return ctx;
++}
++
+ int mlx5e_tc_action_miss_mapping_get(struct mlx5e_priv *priv, struct mlx5_flow_attr *attr,
+ 				     u64 act_miss_cookie, u32 *act_miss_mapping)
+ {
+-	struct mlx5_eswitch *esw = priv->mdev->priv.eswitch;
+ 	struct mlx5_mapped_obj mapped_obj = {};
++	struct mlx5_eswitch *esw;
+ 	struct mapping_ctx *ctx;
+ 	int err;
+ 
+-	ctx = esw->offloads.reg_c0_obj_pool;
+-
++	ctx = mlx5e_get_priv_obj_mapping(priv);
+ 	mapped_obj.type = MLX5_MAPPED_OBJ_ACT_MISS;
+ 	mapped_obj.act_miss_cookie = act_miss_cookie;
+ 	err = mapping_add(ctx, &mapped_obj, act_miss_mapping);
+ 	if (err)
+ 		return err;
+ 
++	if (!is_mdev_switchdev_mode(priv->mdev))
++		return 0;
++
++	esw = priv->mdev->priv.eswitch;
+ 	attr->act_id_restore_rule = esw_add_restore_rule(esw, *act_miss_mapping);
+ 	if (IS_ERR(attr->act_id_restore_rule))
+ 		goto err_rule;
+@@ -5823,10 +5844,9 @@ err_rule:
+ void mlx5e_tc_action_miss_mapping_put(struct mlx5e_priv *priv, struct mlx5_flow_attr *attr,
+ 				      u32 act_miss_mapping)
+ {
+-	struct mlx5_eswitch *esw = priv->mdev->priv.eswitch;
+-	struct mapping_ctx *ctx;
++	struct mapping_ctx *ctx = mlx5e_get_priv_obj_mapping(priv);
+ 
+-	ctx = esw->offloads.reg_c0_obj_pool;
+-	mlx5_del_flow_rules(attr->act_id_restore_rule);
++	if (is_mdev_switchdev_mode(priv->mdev))
++		mlx5_del_flow_rules(attr->act_id_restore_rule);
+ 	mapping_remove(ctx, act_miss_mapping);
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/debugfs.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/debugfs.c
+deleted file mode 100644
+index 3d0bbcca1cb99..0000000000000
+--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/debugfs.c
++++ /dev/null
+@@ -1,198 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
+-/* Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved. */
+-
+-#include <linux/debugfs.h>
+-#include "eswitch.h"
+-
+-enum vnic_diag_counter {
+-	MLX5_VNIC_DIAG_TOTAL_Q_UNDER_PROCESSOR_HANDLE,
+-	MLX5_VNIC_DIAG_SEND_QUEUE_PRIORITY_UPDATE_FLOW,
+-	MLX5_VNIC_DIAG_COMP_EQ_OVERRUN,
+-	MLX5_VNIC_DIAG_ASYNC_EQ_OVERRUN,
+-	MLX5_VNIC_DIAG_CQ_OVERRUN,
+-	MLX5_VNIC_DIAG_INVALID_COMMAND,
+-	MLX5_VNIC_DIAG_QOUTA_EXCEEDED_COMMAND,
+-	MLX5_VNIC_DIAG_RX_STEERING_DISCARD,
+-};
+-
+-static int mlx5_esw_query_vnic_diag(struct mlx5_vport *vport, enum vnic_diag_counter counter,
+-				    u64 *val)
+-{
+-	u32 out[MLX5_ST_SZ_DW(query_vnic_env_out)] = {};
+-	u32 in[MLX5_ST_SZ_DW(query_vnic_env_in)] = {};
+-	struct mlx5_core_dev *dev = vport->dev;
+-	u16 vport_num = vport->vport;
+-	void *vnic_diag_out;
+-	int err;
+-
+-	MLX5_SET(query_vnic_env_in, in, opcode, MLX5_CMD_OP_QUERY_VNIC_ENV);
+-	MLX5_SET(query_vnic_env_in, in, vport_number, vport_num);
+-	if (!mlx5_esw_is_manager_vport(dev->priv.eswitch, vport_num))
+-		MLX5_SET(query_vnic_env_in, in, other_vport, 1);
+-
+-	err = mlx5_cmd_exec(dev, in, sizeof(in), out, sizeof(out));
+-	if (err)
+-		return err;
+-
+-	vnic_diag_out = MLX5_ADDR_OF(query_vnic_env_out, out, vport_env);
+-	switch (counter) {
+-	case MLX5_VNIC_DIAG_TOTAL_Q_UNDER_PROCESSOR_HANDLE:
+-		*val = MLX5_GET(vnic_diagnostic_statistics, vnic_diag_out, total_error_queues);
+-		break;
+-	case MLX5_VNIC_DIAG_SEND_QUEUE_PRIORITY_UPDATE_FLOW:
+-		*val = MLX5_GET(vnic_diagnostic_statistics, vnic_diag_out,
+-				send_queue_priority_update_flow);
+-		break;
+-	case MLX5_VNIC_DIAG_COMP_EQ_OVERRUN:
+-		*val = MLX5_GET(vnic_diagnostic_statistics, vnic_diag_out, comp_eq_overrun);
+-		break;
+-	case MLX5_VNIC_DIAG_ASYNC_EQ_OVERRUN:
+-		*val = MLX5_GET(vnic_diagnostic_statistics, vnic_diag_out, async_eq_overrun);
+-		break;
+-	case MLX5_VNIC_DIAG_CQ_OVERRUN:
+-		*val = MLX5_GET(vnic_diagnostic_statistics, vnic_diag_out, cq_overrun);
+-		break;
+-	case MLX5_VNIC_DIAG_INVALID_COMMAND:
+-		*val = MLX5_GET(vnic_diagnostic_statistics, vnic_diag_out, invalid_command);
+-		break;
+-	case MLX5_VNIC_DIAG_QOUTA_EXCEEDED_COMMAND:
+-		*val = MLX5_GET(vnic_diagnostic_statistics, vnic_diag_out, quota_exceeded_command);
+-		break;
+-	case MLX5_VNIC_DIAG_RX_STEERING_DISCARD:
+-		*val = MLX5_GET64(vnic_diagnostic_statistics, vnic_diag_out,
+-				  nic_receive_steering_discard);
+-		break;
+-	}
+-
+-	return 0;
+-}
+-
+-static int __show_vnic_diag(struct seq_file *file, struct mlx5_vport *vport,
+-			    enum vnic_diag_counter type)
+-{
+-	u64 val = 0;
+-	int ret;
+-
+-	ret = mlx5_esw_query_vnic_diag(vport, type, &val);
+-	if (ret)
+-		return ret;
+-
+-	seq_printf(file, "%llu\n", val);
+-	return 0;
+-}
+-
+-static int total_q_under_processor_handle_show(struct seq_file *file, void *priv)
+-{
+-	return __show_vnic_diag(file, file->private, MLX5_VNIC_DIAG_TOTAL_Q_UNDER_PROCESSOR_HANDLE);
+-}
+-
+-static int send_queue_priority_update_flow_show(struct seq_file *file, void *priv)
+-{
+-	return __show_vnic_diag(file, file->private,
+-				MLX5_VNIC_DIAG_SEND_QUEUE_PRIORITY_UPDATE_FLOW);
+-}
+-
+-static int comp_eq_overrun_show(struct seq_file *file, void *priv)
+-{
+-	return __show_vnic_diag(file, file->private, MLX5_VNIC_DIAG_COMP_EQ_OVERRUN);
+-}
+-
+-static int async_eq_overrun_show(struct seq_file *file, void *priv)
+-{
+-	return __show_vnic_diag(file, file->private, MLX5_VNIC_DIAG_ASYNC_EQ_OVERRUN);
+-}
+-
+-static int cq_overrun_show(struct seq_file *file, void *priv)
+-{
+-	return __show_vnic_diag(file, file->private, MLX5_VNIC_DIAG_CQ_OVERRUN);
+-}
+-
+-static int invalid_command_show(struct seq_file *file, void *priv)
+-{
+-	return __show_vnic_diag(file, file->private, MLX5_VNIC_DIAG_INVALID_COMMAND);
+-}
+-
+-static int quota_exceeded_command_show(struct seq_file *file, void *priv)
+-{
+-	return __show_vnic_diag(file, file->private, MLX5_VNIC_DIAG_QOUTA_EXCEEDED_COMMAND);
+-}
+-
+-static int rx_steering_discard_show(struct seq_file *file, void *priv)
+-{
+-	return __show_vnic_diag(file, file->private, MLX5_VNIC_DIAG_RX_STEERING_DISCARD);
+-}
+-
+-DEFINE_SHOW_ATTRIBUTE(total_q_under_processor_handle);
+-DEFINE_SHOW_ATTRIBUTE(send_queue_priority_update_flow);
+-DEFINE_SHOW_ATTRIBUTE(comp_eq_overrun);
+-DEFINE_SHOW_ATTRIBUTE(async_eq_overrun);
+-DEFINE_SHOW_ATTRIBUTE(cq_overrun);
+-DEFINE_SHOW_ATTRIBUTE(invalid_command);
+-DEFINE_SHOW_ATTRIBUTE(quota_exceeded_command);
+-DEFINE_SHOW_ATTRIBUTE(rx_steering_discard);
+-
+-void mlx5_esw_vport_debugfs_destroy(struct mlx5_eswitch *esw, u16 vport_num)
+-{
+-	struct mlx5_vport *vport = mlx5_eswitch_get_vport(esw, vport_num);
+-
+-	debugfs_remove_recursive(vport->dbgfs);
+-	vport->dbgfs = NULL;
+-}
+-
+-/* vnic diag dir name is "pf", "ecpf" or "{vf/sf}_xxxx" */
+-#define VNIC_DIAG_DIR_NAME_MAX_LEN 8
+-
+-void mlx5_esw_vport_debugfs_create(struct mlx5_eswitch *esw, u16 vport_num, bool is_sf, u16 sf_num)
+-{
+-	struct mlx5_vport *vport = mlx5_eswitch_get_vport(esw, vport_num);
+-	struct dentry *vnic_diag;
+-	char dir_name[VNIC_DIAG_DIR_NAME_MAX_LEN];
+-	int err;
+-
+-	if (!MLX5_CAP_GEN(esw->dev, vport_group_manager))
+-		return;
+-
+-	if (vport_num == MLX5_VPORT_PF) {
+-		strcpy(dir_name, "pf");
+-	} else if (vport_num == MLX5_VPORT_ECPF) {
+-		strcpy(dir_name, "ecpf");
+-	} else {
+-		err = snprintf(dir_name, VNIC_DIAG_DIR_NAME_MAX_LEN, "%s_%d", is_sf ? "sf" : "vf",
+-			       is_sf ? sf_num : vport_num - MLX5_VPORT_FIRST_VF);
+-		if (WARN_ON(err < 0))
+-			return;
+-	}
+-
+-	vport->dbgfs = debugfs_create_dir(dir_name, esw->dbgfs);
+-	vnic_diag = debugfs_create_dir("vnic_diag", vport->dbgfs);
+-
+-	if (MLX5_CAP_GEN(esw->dev, vnic_env_queue_counters)) {
+-		debugfs_create_file("total_q_under_processor_handle", 0444, vnic_diag, vport,
+-				    &total_q_under_processor_handle_fops);
+-		debugfs_create_file("send_queue_priority_update_flow", 0444, vnic_diag, vport,
+-				    &send_queue_priority_update_flow_fops);
+-	}
+-
+-	if (MLX5_CAP_GEN(esw->dev, eq_overrun_count)) {
+-		debugfs_create_file("comp_eq_overrun", 0444, vnic_diag, vport,
+-				    &comp_eq_overrun_fops);
+-		debugfs_create_file("async_eq_overrun", 0444, vnic_diag, vport,
+-				    &async_eq_overrun_fops);
+-	}
+-
+-	if (MLX5_CAP_GEN(esw->dev, vnic_env_cq_overrun))
+-		debugfs_create_file("cq_overrun", 0444, vnic_diag, vport, &cq_overrun_fops);
+-
+-	if (MLX5_CAP_GEN(esw->dev, invalid_command_count))
+-		debugfs_create_file("invalid_command", 0444, vnic_diag, vport,
+-				    &invalid_command_fops);
+-
+-	if (MLX5_CAP_GEN(esw->dev, quota_exceeded_count))
+-		debugfs_create_file("quota_exceeded_command", 0444, vnic_diag, vport,
+-				    &quota_exceeded_command_fops);
+-
+-	if (MLX5_CAP_GEN(esw->dev, nic_receive_steering_discard))
+-		debugfs_create_file("rx_steering_discard", 0444, vnic_diag, vport,
+-				    &rx_steering_discard_fops);
+-
+-}
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+index 19fed514fc173..bb2720a23a501 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+@@ -36,7 +36,6 @@
+ #include <linux/mlx5/vport.h>
+ #include <linux/mlx5/fs.h>
+ #include <linux/mlx5/mpfs.h>
+-#include <linux/debugfs.h>
+ #include "esw/acl/lgcy.h"
+ #include "esw/legacy.h"
+ #include "esw/qos.h"
+@@ -1056,7 +1055,6 @@ int mlx5_eswitch_load_vport(struct mlx5_eswitch *esw, u16 vport_num,
+ 	if (err)
+ 		return err;
+ 
+-	mlx5_esw_vport_debugfs_create(esw, vport_num, false, 0);
+ 	err = esw_offloads_load_rep(esw, vport_num);
+ 	if (err)
+ 		goto err_rep;
+@@ -1064,7 +1062,6 @@ int mlx5_eswitch_load_vport(struct mlx5_eswitch *esw, u16 vport_num,
+ 	return err;
+ 
+ err_rep:
+-	mlx5_esw_vport_debugfs_destroy(esw, vport_num);
+ 	mlx5_esw_vport_disable(esw, vport_num);
+ 	return err;
+ }
+@@ -1072,7 +1069,6 @@ err_rep:
+ void mlx5_eswitch_unload_vport(struct mlx5_eswitch *esw, u16 vport_num)
+ {
+ 	esw_offloads_unload_rep(esw, vport_num);
+-	mlx5_esw_vport_debugfs_destroy(esw, vport_num);
+ 	mlx5_esw_vport_disable(esw, vport_num);
+ }
+ 
+@@ -1672,7 +1668,6 @@ int mlx5_eswitch_init(struct mlx5_core_dev *dev)
+ 	dev->priv.eswitch = esw;
+ 	BLOCKING_INIT_NOTIFIER_HEAD(&esw->n_head);
+ 
+-	esw->dbgfs = debugfs_create_dir("esw", mlx5_debugfs_get_dev_root(esw->dev));
+ 	esw_info(dev,
+ 		 "Total vports %d, per vport: max uc(%d) max mc(%d)\n",
+ 		 esw->total_vports,
+@@ -1696,7 +1691,6 @@ void mlx5_eswitch_cleanup(struct mlx5_eswitch *esw)
+ 
+ 	esw_info(esw->dev, "cleanup\n");
+ 
+-	debugfs_remove_recursive(esw->dbgfs);
+ 	esw->dev->priv.eswitch = NULL;
+ 	destroy_workqueue(esw->work_queue);
+ 	WARN_ON(refcount_read(&esw->qos.refcnt));
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+index c8c12d1672f99..5fd971cee6fdc 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+@@ -195,7 +195,6 @@ struct mlx5_vport {
+ 	enum mlx5_eswitch_vport_event enabled_events;
+ 	int index;
+ 	struct devlink_port *dl_port;
+-	struct dentry *dbgfs;
+ };
+ 
+ struct mlx5_esw_indir_table;
+@@ -342,7 +341,7 @@ struct mlx5_eswitch {
+ 		u32             large_group_num;
+ 	}  params;
+ 	struct blocking_notifier_head n_head;
+-	struct dentry *dbgfs;
++	bool paired[MLX5_MAX_PORTS];
+ };
+ 
+ void esw_offloads_disable(struct mlx5_eswitch *esw);
+@@ -705,9 +704,6 @@ int mlx5_esw_offloads_devlink_port_register(struct mlx5_eswitch *esw, u16 vport_
+ void mlx5_esw_offloads_devlink_port_unregister(struct mlx5_eswitch *esw, u16 vport_num);
+ struct devlink_port *mlx5_esw_offloads_devlink_port(struct mlx5_eswitch *esw, u16 vport_num);
+ 
+-void mlx5_esw_vport_debugfs_create(struct mlx5_eswitch *esw, u16 vport_num, bool is_sf, u16 sf_num);
+-void mlx5_esw_vport_debugfs_destroy(struct mlx5_eswitch *esw, u16 vport_num);
+-
+ int mlx5_esw_devlink_sf_port_register(struct mlx5_eswitch *esw, struct devlink_port *dl_port,
+ 				      u16 vport_num, u32 controller, u32 sfnum);
+ void mlx5_esw_devlink_sf_port_unregister(struct mlx5_eswitch *esw, u16 vport_num);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+index 590df9bf39a56..a60c9f292e10c 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+@@ -2744,6 +2744,9 @@ static int mlx5_esw_offloads_devcom_event(int event,
+ 		    mlx5_eswitch_vport_match_metadata_enabled(peer_esw))
+ 			break;
+ 
++		if (esw->paired[mlx5_get_dev_index(peer_esw->dev)])
++			break;
++
+ 		err = mlx5_esw_offloads_set_ns_peer(esw, peer_esw, true);
+ 		if (err)
+ 			goto err_out;
+@@ -2755,14 +2758,18 @@ static int mlx5_esw_offloads_devcom_event(int event,
+ 		if (err)
+ 			goto err_pair;
+ 
++		esw->paired[mlx5_get_dev_index(peer_esw->dev)] = true;
++		peer_esw->paired[mlx5_get_dev_index(esw->dev)] = true;
+ 		mlx5_devcom_set_paired(devcom, MLX5_DEVCOM_ESW_OFFLOADS, true);
+ 		break;
+ 
+ 	case ESW_OFFLOADS_DEVCOM_UNPAIR:
+-		if (!mlx5_devcom_is_paired(devcom, MLX5_DEVCOM_ESW_OFFLOADS))
++		if (!esw->paired[mlx5_get_dev_index(peer_esw->dev)])
+ 			break;
+ 
+ 		mlx5_devcom_set_paired(devcom, MLX5_DEVCOM_ESW_OFFLOADS, false);
++		esw->paired[mlx5_get_dev_index(peer_esw->dev)] = false;
++		peer_esw->paired[mlx5_get_dev_index(esw->dev)] = false;
+ 		mlx5_esw_offloads_unpair(peer_esw);
+ 		mlx5_esw_offloads_unpair(esw);
+ 		mlx5_esw_offloads_set_ns_peer(esw, peer_esw, false);
+@@ -3777,14 +3784,12 @@ int mlx5_esw_offloads_sf_vport_enable(struct mlx5_eswitch *esw, struct devlink_p
+ 	if (err)
+ 		goto devlink_err;
+ 
+-	mlx5_esw_vport_debugfs_create(esw, vport_num, true, sfnum);
+ 	err = mlx5_esw_offloads_rep_load(esw, vport_num);
+ 	if (err)
+ 		goto rep_err;
+ 	return 0;
+ 
+ rep_err:
+-	mlx5_esw_vport_debugfs_destroy(esw, vport_num);
+ 	mlx5_esw_devlink_sf_port_unregister(esw, vport_num);
+ devlink_err:
+ 	mlx5_esw_vport_disable(esw, vport_num);
+@@ -3794,7 +3799,6 @@ devlink_err:
+ void mlx5_esw_offloads_sf_vport_disable(struct mlx5_eswitch *esw, u16 vport_num)
+ {
+ 	mlx5_esw_offloads_rep_unload(esw, vport_num);
+-	mlx5_esw_vport_debugfs_destroy(esw, vport_num);
+ 	mlx5_esw_devlink_sf_port_unregister(esw, vport_num);
+ 	mlx5_esw_vport_disable(esw, vport_num);
+ }
+diff --git a/drivers/net/phy/mscc/mscc.h b/drivers/net/phy/mscc/mscc.h
+index a50235fdf7d99..055e4ca5b3b5c 100644
+--- a/drivers/net/phy/mscc/mscc.h
++++ b/drivers/net/phy/mscc/mscc.h
+@@ -179,6 +179,7 @@ enum rgmii_clock_delay {
+ #define VSC8502_RGMII_CNTL		  20
+ #define VSC8502_RGMII_RX_DELAY_MASK	  0x0070
+ #define VSC8502_RGMII_TX_DELAY_MASK	  0x0007
++#define VSC8502_RGMII_RX_CLK_DISABLE	  0x0800
+ 
+ #define MSCC_PHY_WOL_LOWER_MAC_ADDR	  21
+ #define MSCC_PHY_WOL_MID_MAC_ADDR	  22
+diff --git a/drivers/net/phy/mscc/mscc_main.c b/drivers/net/phy/mscc/mscc_main.c
+index bd81a4b041e52..adc8cd6f2d95a 100644
+--- a/drivers/net/phy/mscc/mscc_main.c
++++ b/drivers/net/phy/mscc/mscc_main.c
+@@ -519,14 +519,27 @@ out_unlock:
+  *  * 2.0 ns (which causes the data to be sampled at exactly half way between
+  *    clock transitions at 1000 Mbps) if delays should be enabled
+  */
+-static int vsc85xx_rgmii_set_skews(struct phy_device *phydev, u32 rgmii_cntl,
+-				   u16 rgmii_rx_delay_mask,
+-				   u16 rgmii_tx_delay_mask)
++static int vsc85xx_update_rgmii_cntl(struct phy_device *phydev, u32 rgmii_cntl,
++				     u16 rgmii_rx_delay_mask,
++				     u16 rgmii_tx_delay_mask)
+ {
+ 	u16 rgmii_rx_delay_pos = ffs(rgmii_rx_delay_mask) - 1;
+ 	u16 rgmii_tx_delay_pos = ffs(rgmii_tx_delay_mask) - 1;
+ 	u16 reg_val = 0;
+-	int rc;
++	u16 mask = 0;
++	int rc = 0;
++
++	/* For traffic to pass, the VSC8502 family needs the RX_CLK disable bit
++	 * to be unset for all PHY modes, so do that as part of the paged
++	 * register modification.
++	 * For some family members (like VSC8530/31/40/41) this bit is reserved
++	 * and read-only, and the RX clock is enabled by default.
++	 */
++	if (rgmii_cntl == VSC8502_RGMII_CNTL)
++		mask |= VSC8502_RGMII_RX_CLK_DISABLE;
++
++	if (phy_interface_is_rgmii(phydev))
++		mask |= rgmii_rx_delay_mask | rgmii_tx_delay_mask;
+ 
+ 	mutex_lock(&phydev->lock);
+ 
+@@ -537,10 +550,9 @@ static int vsc85xx_rgmii_set_skews(struct phy_device *phydev, u32 rgmii_cntl,
+ 	    phydev->interface == PHY_INTERFACE_MODE_RGMII_ID)
+ 		reg_val |= RGMII_CLK_DELAY_2_0_NS << rgmii_tx_delay_pos;
+ 
+-	rc = phy_modify_paged(phydev, MSCC_PHY_PAGE_EXTENDED_2,
+-			      rgmii_cntl,
+-			      rgmii_rx_delay_mask | rgmii_tx_delay_mask,
+-			      reg_val);
++	if (mask)
++		rc = phy_modify_paged(phydev, MSCC_PHY_PAGE_EXTENDED_2,
++				      rgmii_cntl, mask, reg_val);
+ 
+ 	mutex_unlock(&phydev->lock);
+ 
+@@ -549,19 +561,11 @@ static int vsc85xx_rgmii_set_skews(struct phy_device *phydev, u32 rgmii_cntl,
+ 
+ static int vsc85xx_default_config(struct phy_device *phydev)
+ {
+-	int rc;
+-
+ 	phydev->mdix_ctrl = ETH_TP_MDI_AUTO;
+ 
+-	if (phy_interface_mode_is_rgmii(phydev->interface)) {
+-		rc = vsc85xx_rgmii_set_skews(phydev, VSC8502_RGMII_CNTL,
+-					     VSC8502_RGMII_RX_DELAY_MASK,
+-					     VSC8502_RGMII_TX_DELAY_MASK);
+-		if (rc)
+-			return rc;
+-	}
+-
+-	return 0;
++	return vsc85xx_update_rgmii_cntl(phydev, VSC8502_RGMII_CNTL,
++					 VSC8502_RGMII_RX_DELAY_MASK,
++					 VSC8502_RGMII_TX_DELAY_MASK);
+ }
+ 
+ static int vsc85xx_get_tunable(struct phy_device *phydev,
+@@ -1758,13 +1762,11 @@ static int vsc8584_config_init(struct phy_device *phydev)
+ 	if (ret)
+ 		return ret;
+ 
+-	if (phy_interface_is_rgmii(phydev)) {
+-		ret = vsc85xx_rgmii_set_skews(phydev, VSC8572_RGMII_CNTL,
+-					      VSC8572_RGMII_RX_DELAY_MASK,
+-					      VSC8572_RGMII_TX_DELAY_MASK);
+-		if (ret)
+-			return ret;
+-	}
++	ret = vsc85xx_update_rgmii_cntl(phydev, VSC8572_RGMII_CNTL,
++					VSC8572_RGMII_RX_DELAY_MASK,
++					VSC8572_RGMII_TX_DELAY_MASK);
++	if (ret)
++		return ret;
+ 
+ 	ret = genphy_soft_reset(phydev);
+ 	if (ret)
+diff --git a/drivers/platform/x86/amd/pmf/core.c b/drivers/platform/x86/amd/pmf/core.c
+index 0acc0b6221290..dc9803e1a4b9b 100644
+--- a/drivers/platform/x86/amd/pmf/core.c
++++ b/drivers/platform/x86/amd/pmf/core.c
+@@ -245,24 +245,29 @@ static const struct pci_device_id pmf_pci_ids[] = {
+ 	{ }
+ };
+ 
+-int amd_pmf_init_metrics_table(struct amd_pmf_dev *dev)
++static void amd_pmf_set_dram_addr(struct amd_pmf_dev *dev)
+ {
+ 	u64 phys_addr;
+ 	u32 hi, low;
+ 
+-	INIT_DELAYED_WORK(&dev->work_buffer, amd_pmf_get_metrics);
++	phys_addr = virt_to_phys(dev->buf);
++	hi = phys_addr >> 32;
++	low = phys_addr & GENMASK(31, 0);
++
++	amd_pmf_send_cmd(dev, SET_DRAM_ADDR_HIGH, 0, hi, NULL);
++	amd_pmf_send_cmd(dev, SET_DRAM_ADDR_LOW, 0, low, NULL);
++}
+ 
++int amd_pmf_init_metrics_table(struct amd_pmf_dev *dev)
++{
+ 	/* Get Metrics Table Address */
+ 	dev->buf = kzalloc(sizeof(dev->m_table), GFP_KERNEL);
+ 	if (!dev->buf)
+ 		return -ENOMEM;
+ 
+-	phys_addr = virt_to_phys(dev->buf);
+-	hi = phys_addr >> 32;
+-	low = phys_addr & GENMASK(31, 0);
++	INIT_DELAYED_WORK(&dev->work_buffer, amd_pmf_get_metrics);
+ 
+-	amd_pmf_send_cmd(dev, SET_DRAM_ADDR_HIGH, 0, hi, NULL);
+-	amd_pmf_send_cmd(dev, SET_DRAM_ADDR_LOW, 0, low, NULL);
++	amd_pmf_set_dram_addr(dev);
+ 
+ 	/*
+ 	 * Start collecting the metrics data after a small delay
+@@ -273,6 +278,18 @@ int amd_pmf_init_metrics_table(struct amd_pmf_dev *dev)
+ 	return 0;
+ }
+ 
++static int amd_pmf_resume_handler(struct device *dev)
++{
++	struct amd_pmf_dev *pdev = dev_get_drvdata(dev);
++
++	if (pdev->buf)
++		amd_pmf_set_dram_addr(pdev);
++
++	return 0;
++}
++
++static DEFINE_SIMPLE_DEV_PM_OPS(amd_pmf_pm, NULL, amd_pmf_resume_handler);
++
+ static void amd_pmf_init_features(struct amd_pmf_dev *dev)
+ {
+ 	int ret;
+@@ -414,6 +431,7 @@ static struct platform_driver amd_pmf_driver = {
+ 		.name = "amd-pmf",
+ 		.acpi_match_table = amd_pmf_acpi_ids,
+ 		.dev_groups = amd_pmf_driver_groups,
++		.pm = pm_sleep_ptr(&amd_pmf_pm),
+ 	},
+ 	.probe = amd_pmf_probe,
+ 	.remove = amd_pmf_remove,
+diff --git a/drivers/power/supply/rt9467-charger.c b/drivers/power/supply/rt9467-charger.c
+index 73f744a3155d4..ea33693b69779 100644
+--- a/drivers/power/supply/rt9467-charger.c
++++ b/drivers/power/supply/rt9467-charger.c
+@@ -1023,7 +1023,7 @@ static int rt9467_request_interrupt(struct rt9467_chg_data *data)
+ 	for (i = 0; i < num_chg_irqs; i++) {
+ 		virq = regmap_irq_get_virq(data->irq_chip_data, chg_irqs[i].hwirq);
+ 		if (virq <= 0)
+-			return dev_err_probe(dev, virq, "Failed to get (%s) irq\n",
++			return dev_err_probe(dev, -EINVAL, "Failed to get (%s) irq\n",
+ 					     chg_irqs[i].name);
+ 
+ 		ret = devm_request_threaded_irq(dev, virq, NULL, chg_irqs[i].handler,
+diff --git a/drivers/spi/spi-geni-qcom.c b/drivers/spi/spi-geni-qcom.c
+index babb039bcb431..b106faf21a723 100644
+--- a/drivers/spi/spi-geni-qcom.c
++++ b/drivers/spi/spi-geni-qcom.c
+@@ -294,6 +294,8 @@ static void spi_geni_set_cs(struct spi_device *slv, bool set_flag)
+ 	mas->cs_flag = set_flag;
+ 	/* set xfer_mode to FIFO to complete cs_done in isr */
+ 	mas->cur_xfer_mode = GENI_SE_FIFO;
++	geni_se_select_mode(se, mas->cur_xfer_mode);
++
+ 	reinit_completion(&mas->cs_done);
+ 	if (set_flag)
+ 		geni_se_setup_m_cmd(se, SPI_CS_ASSERT, 0);
+diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
+index 493c31de0edb9..0620dbe5cca0c 100644
+--- a/drivers/vfio/vfio_iommu_type1.c
++++ b/drivers/vfio/vfio_iommu_type1.c
+@@ -860,6 +860,11 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
+ 		if (ret)
+ 			goto pin_unwind;
+ 
++		if (!pfn_valid(phys_pfn)) {
++			ret = -EINVAL;
++			goto pin_unwind;
++		}
++
+ 		ret = vfio_add_to_pfn_list(dma, iova, phys_pfn);
+ 		if (ret) {
+ 			if (put_pfn(phys_pfn, dma->prot) && do_accounting)
+diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
+index dbcaac8b69665..4a882f9ba1f1f 100644
+--- a/include/linux/skbuff.h
++++ b/include/linux/skbuff.h
+@@ -1577,6 +1577,16 @@ static inline void skb_copy_hash(struct sk_buff *to, const struct sk_buff *from)
+ 	to->l4_hash = from->l4_hash;
+ };
+ 
++static inline int skb_cmp_decrypted(const struct sk_buff *skb1,
++				    const struct sk_buff *skb2)
++{
++#ifdef CONFIG_TLS_DEVICE
++	return skb2->decrypted - skb1->decrypted;
++#else
++	return 0;
++#endif
++}
++
+ static inline void skb_copy_decrypted(struct sk_buff *to,
+ 				      const struct sk_buff *from)
+ {
+diff --git a/include/linux/skmsg.h b/include/linux/skmsg.h
+index 84f787416a54d..054d7911bfc9f 100644
+--- a/include/linux/skmsg.h
++++ b/include/linux/skmsg.h
+@@ -71,7 +71,6 @@ struct sk_psock_link {
+ };
+ 
+ struct sk_psock_work_state {
+-	struct sk_buff			*skb;
+ 	u32				len;
+ 	u32				off;
+ };
+@@ -105,7 +104,7 @@ struct sk_psock {
+ 	struct proto			*sk_proto;
+ 	struct mutex			work_mutex;
+ 	struct sk_psock_work_state	work_state;
+-	struct work_struct		work;
++	struct delayed_work		work;
+ 	struct rcu_work			rwork;
+ };
+ 
+diff --git a/include/net/tcp.h b/include/net/tcp.h
+index db9f828e9d1ee..76bf0a11bdc77 100644
+--- a/include/net/tcp.h
++++ b/include/net/tcp.h
+@@ -1467,6 +1467,8 @@ static inline void tcp_adjust_rcv_ssthresh(struct sock *sk)
+ }
+ 
+ void tcp_cleanup_rbuf(struct sock *sk, int copied);
++void __tcp_cleanup_rbuf(struct sock *sk, int copied);
++
+ 
+ /* We provision sk_rcvbuf around 200% of sk_rcvlowat.
+  * If 87.5 % (7/8) of the space has been consumed, we want to override
+@@ -2323,6 +2325,14 @@ int tcp_bpf_update_proto(struct sock *sk, struct sk_psock *psock, bool restore);
+ void tcp_bpf_clone(const struct sock *sk, struct sock *newsk);
+ #endif /* CONFIG_BPF_SYSCALL */
+ 
++#ifdef CONFIG_INET
++void tcp_eat_skb(struct sock *sk, struct sk_buff *skb);
++#else
++static inline void tcp_eat_skb(struct sock *sk, struct sk_buff *skb)
++{
++}
++#endif
++
+ int tcp_bpf_sendmsg_redir(struct sock *sk, bool ingress,
+ 			  struct sk_msg *msg, u32 bytes, int flags);
+ #endif /* CONFIG_NET_SOCK_MSG */
+diff --git a/include/net/tls.h b/include/net/tls.h
+index 154949c7b0c88..c36bf4c50027e 100644
+--- a/include/net/tls.h
++++ b/include/net/tls.h
+@@ -124,6 +124,7 @@ struct tls_strparser {
+ 	u32 mark : 8;
+ 	u32 stopped : 1;
+ 	u32 copy_mode : 1;
++	u32 mixed_decrypted : 1;
+ 	u32 msg_ready : 1;
+ 
+ 	struct strp_msg stm;
+diff --git a/kernel/bpf/offload.c b/kernel/bpf/offload.c
+index 0c85e06f7ea7f..ee146430d9984 100644
+--- a/kernel/bpf/offload.c
++++ b/kernel/bpf/offload.c
+@@ -853,4 +853,4 @@ static int __init bpf_offload_init(void)
+ 	return rhashtable_init(&offdevs, &offdevs_params);
+ }
+ 
+-late_initcall(bpf_offload_init);
++core_initcall(bpf_offload_init);
+diff --git a/net/bluetooth/hci_sock.c b/net/bluetooth/hci_sock.c
+index f597fe0db9f8f..1d249d839819d 100644
+--- a/net/bluetooth/hci_sock.c
++++ b/net/bluetooth/hci_sock.c
+@@ -987,6 +987,34 @@ static int hci_sock_ioctl(struct socket *sock, unsigned int cmd,
+ 
+ 	BT_DBG("cmd %x arg %lx", cmd, arg);
+ 
++	/* Make sure the cmd is valid before doing anything */
++	switch (cmd) {
++	case HCIGETDEVLIST:
++	case HCIGETDEVINFO:
++	case HCIGETCONNLIST:
++	case HCIDEVUP:
++	case HCIDEVDOWN:
++	case HCIDEVRESET:
++	case HCIDEVRESTAT:
++	case HCISETSCAN:
++	case HCISETAUTH:
++	case HCISETENCRYPT:
++	case HCISETPTYPE:
++	case HCISETLINKPOL:
++	case HCISETLINKMODE:
++	case HCISETACLMTU:
++	case HCISETSCOMTU:
++	case HCIINQUIRY:
++	case HCISETRAW:
++	case HCIGETCONNINFO:
++	case HCIGETAUTHINFO:
++	case HCIBLOCKADDR:
++	case HCIUNBLOCKADDR:
++		break;
++	default:
++		return -ENOIOCTLCMD;
++	}
++
+ 	lock_sock(sk);
+ 
+ 	if (hci_pi(sk)->channel != HCI_CHANNEL_RAW) {
+diff --git a/net/core/skmsg.c b/net/core/skmsg.c
+index f81883759d381..a9060e1f0e437 100644
+--- a/net/core/skmsg.c
++++ b/net/core/skmsg.c
+@@ -481,8 +481,6 @@ int sk_msg_recvmsg(struct sock *sk, struct sk_psock *psock, struct msghdr *msg,
+ 		msg_rx = sk_psock_peek_msg(psock);
+ 	}
+ out:
+-	if (psock->work_state.skb && copied > 0)
+-		schedule_work(&psock->work);
+ 	return copied;
+ }
+ EXPORT_SYMBOL_GPL(sk_msg_recvmsg);
+@@ -624,42 +622,33 @@ static int sk_psock_handle_skb(struct sk_psock *psock, struct sk_buff *skb,
+ 
+ static void sk_psock_skb_state(struct sk_psock *psock,
+ 			       struct sk_psock_work_state *state,
+-			       struct sk_buff *skb,
+ 			       int len, int off)
+ {
+ 	spin_lock_bh(&psock->ingress_lock);
+ 	if (sk_psock_test_state(psock, SK_PSOCK_TX_ENABLED)) {
+-		state->skb = skb;
+ 		state->len = len;
+ 		state->off = off;
+-	} else {
+-		sock_drop(psock->sk, skb);
+ 	}
+ 	spin_unlock_bh(&psock->ingress_lock);
+ }
+ 
+ static void sk_psock_backlog(struct work_struct *work)
+ {
+-	struct sk_psock *psock = container_of(work, struct sk_psock, work);
++	struct delayed_work *dwork = to_delayed_work(work);
++	struct sk_psock *psock = container_of(dwork, struct sk_psock, work);
+ 	struct sk_psock_work_state *state = &psock->work_state;
+ 	struct sk_buff *skb = NULL;
++	u32 len = 0, off = 0;
+ 	bool ingress;
+-	u32 len, off;
+ 	int ret;
+ 
+ 	mutex_lock(&psock->work_mutex);
+-	if (unlikely(state->skb)) {
+-		spin_lock_bh(&psock->ingress_lock);
+-		skb = state->skb;
++	if (unlikely(state->len)) {
+ 		len = state->len;
+ 		off = state->off;
+-		state->skb = NULL;
+-		spin_unlock_bh(&psock->ingress_lock);
+ 	}
+-	if (skb)
+-		goto start;
+ 
+-	while ((skb = skb_dequeue(&psock->ingress_skb))) {
++	while ((skb = skb_peek(&psock->ingress_skb))) {
+ 		len = skb->len;
+ 		off = 0;
+ 		if (skb_bpf_strparser(skb)) {
+@@ -668,7 +657,6 @@ static void sk_psock_backlog(struct work_struct *work)
+ 			off = stm->offset;
+ 			len = stm->full_len;
+ 		}
+-start:
+ 		ingress = skb_bpf_ingress(skb);
+ 		skb_bpf_redirect_clear(skb);
+ 		do {
+@@ -678,22 +666,28 @@ start:
+ 							  len, ingress);
+ 			if (ret <= 0) {
+ 				if (ret == -EAGAIN) {
+-					sk_psock_skb_state(psock, state, skb,
+-							   len, off);
++					sk_psock_skb_state(psock, state, len, off);
++
++					/* Delay slightly to prioritize any
++					 * other work that might be here.
++					 */
++					if (sk_psock_test_state(psock, SK_PSOCK_TX_ENABLED))
++						schedule_delayed_work(&psock->work, 1);
+ 					goto end;
+ 				}
+ 				/* Hard errors break pipe and stop xmit. */
+ 				sk_psock_report_error(psock, ret ? -ret : EPIPE);
+ 				sk_psock_clear_state(psock, SK_PSOCK_TX_ENABLED);
+-				sock_drop(psock->sk, skb);
+ 				goto end;
+ 			}
+ 			off += ret;
+ 			len -= ret;
+ 		} while (len);
+ 
+-		if (!ingress)
++		skb = skb_dequeue(&psock->ingress_skb);
++		if (!ingress) {
+ 			kfree_skb(skb);
++		}
+ 	}
+ end:
+ 	mutex_unlock(&psock->work_mutex);
+@@ -734,7 +728,7 @@ struct sk_psock *sk_psock_init(struct sock *sk, int node)
+ 	INIT_LIST_HEAD(&psock->link);
+ 	spin_lock_init(&psock->link_lock);
+ 
+-	INIT_WORK(&psock->work, sk_psock_backlog);
++	INIT_DELAYED_WORK(&psock->work, sk_psock_backlog);
+ 	mutex_init(&psock->work_mutex);
+ 	INIT_LIST_HEAD(&psock->ingress_msg);
+ 	spin_lock_init(&psock->ingress_lock);
+@@ -786,11 +780,6 @@ static void __sk_psock_zap_ingress(struct sk_psock *psock)
+ 		skb_bpf_redirect_clear(skb);
+ 		sock_drop(psock->sk, skb);
+ 	}
+-	kfree_skb(psock->work_state.skb);
+-	/* We null the skb here to ensure that calls to sk_psock_backlog
+-	 * do not pick up the free'd skb.
+-	 */
+-	psock->work_state.skb = NULL;
+ 	__sk_psock_purge_ingress_msg(psock);
+ }
+ 
+@@ -809,7 +798,6 @@ void sk_psock_stop(struct sk_psock *psock)
+ 	spin_lock_bh(&psock->ingress_lock);
+ 	sk_psock_clear_state(psock, SK_PSOCK_TX_ENABLED);
+ 	sk_psock_cork_free(psock);
+-	__sk_psock_zap_ingress(psock);
+ 	spin_unlock_bh(&psock->ingress_lock);
+ }
+ 
+@@ -823,7 +811,8 @@ static void sk_psock_destroy(struct work_struct *work)
+ 
+ 	sk_psock_done_strp(psock);
+ 
+-	cancel_work_sync(&psock->work);
++	cancel_delayed_work_sync(&psock->work);
++	__sk_psock_zap_ingress(psock);
+ 	mutex_destroy(&psock->work_mutex);
+ 
+ 	psock_progs_drop(&psock->progs);
+@@ -938,7 +927,7 @@ static int sk_psock_skb_redirect(struct sk_psock *from, struct sk_buff *skb)
+ 	}
+ 
+ 	skb_queue_tail(&psock_other->ingress_skb, skb);
+-	schedule_work(&psock_other->work);
++	schedule_delayed_work(&psock_other->work, 0);
+ 	spin_unlock_bh(&psock_other->ingress_lock);
+ 	return 0;
+ }
+@@ -990,10 +979,8 @@ static int sk_psock_verdict_apply(struct sk_psock *psock, struct sk_buff *skb,
+ 		err = -EIO;
+ 		sk_other = psock->sk;
+ 		if (sock_flag(sk_other, SOCK_DEAD) ||
+-		    !sk_psock_test_state(psock, SK_PSOCK_TX_ENABLED)) {
+-			skb_bpf_redirect_clear(skb);
++		    !sk_psock_test_state(psock, SK_PSOCK_TX_ENABLED))
+ 			goto out_free;
+-		}
+ 
+ 		skb_bpf_set_ingress(skb);
+ 
+@@ -1018,22 +1005,23 @@ static int sk_psock_verdict_apply(struct sk_psock *psock, struct sk_buff *skb,
+ 			spin_lock_bh(&psock->ingress_lock);
+ 			if (sk_psock_test_state(psock, SK_PSOCK_TX_ENABLED)) {
+ 				skb_queue_tail(&psock->ingress_skb, skb);
+-				schedule_work(&psock->work);
++				schedule_delayed_work(&psock->work, 0);
+ 				err = 0;
+ 			}
+ 			spin_unlock_bh(&psock->ingress_lock);
+-			if (err < 0) {
+-				skb_bpf_redirect_clear(skb);
++			if (err < 0)
+ 				goto out_free;
+-			}
+ 		}
+ 		break;
+ 	case __SK_REDIRECT:
++		tcp_eat_skb(psock->sk, skb);
+ 		err = sk_psock_skb_redirect(psock, skb);
+ 		break;
+ 	case __SK_DROP:
+ 	default:
+ out_free:
++		skb_bpf_redirect_clear(skb);
++		tcp_eat_skb(psock->sk, skb);
+ 		sock_drop(psock->sk, skb);
+ 	}
+ 
+@@ -1049,7 +1037,7 @@ static void sk_psock_write_space(struct sock *sk)
+ 	psock = sk_psock(sk);
+ 	if (likely(psock)) {
+ 		if (sk_psock_test_state(psock, SK_PSOCK_TX_ENABLED))
+-			schedule_work(&psock->work);
++			schedule_delayed_work(&psock->work, 0);
+ 		write_space = psock->saved_write_space;
+ 	}
+ 	rcu_read_unlock();
+@@ -1078,8 +1066,7 @@ static void sk_psock_strp_read(struct strparser *strp, struct sk_buff *skb)
+ 		skb_dst_drop(skb);
+ 		skb_bpf_redirect_clear(skb);
+ 		ret = bpf_prog_run_pin_on_cpu(prog, skb);
+-		if (ret == SK_PASS)
+-			skb_bpf_set_strparser(skb);
++		skb_bpf_set_strparser(skb);
+ 		ret = sk_psock_map_verd(ret, skb_bpf_redirect_fetch(skb));
+ 		skb->sk = NULL;
+ 	}
+@@ -1183,12 +1170,11 @@ static int sk_psock_verdict_recv(struct sock *sk, struct sk_buff *skb)
+ 	int ret = __SK_DROP;
+ 	int len = skb->len;
+ 
+-	skb_get(skb);
+-
+ 	rcu_read_lock();
+ 	psock = sk_psock(sk);
+ 	if (unlikely(!psock)) {
+ 		len = 0;
++		tcp_eat_skb(sk, skb);
+ 		sock_drop(sk, skb);
+ 		goto out;
+ 	}
+@@ -1212,12 +1198,21 @@ out:
+ static void sk_psock_verdict_data_ready(struct sock *sk)
+ {
+ 	struct socket *sock = sk->sk_socket;
++	int copied;
+ 
+ 	trace_sk_data_ready(sk);
+ 
+ 	if (unlikely(!sock || !sock->ops || !sock->ops->read_skb))
+ 		return;
+-	sock->ops->read_skb(sk, sk_psock_verdict_recv);
++	copied = sock->ops->read_skb(sk, sk_psock_verdict_recv);
++	if (copied >= 0) {
++		struct sk_psock *psock;
++
++		rcu_read_lock();
++		psock = sk_psock(sk);
++		psock->saved_data_ready(sk);
++		rcu_read_unlock();
++	}
+ }
+ 
+ void sk_psock_start_verdict(struct sock *sk, struct sk_psock *psock)
+diff --git a/net/core/sock_map.c b/net/core/sock_map.c
+index a055139f410e2..08851511294c0 100644
+--- a/net/core/sock_map.c
++++ b/net/core/sock_map.c
+@@ -1624,9 +1624,10 @@ void sock_map_close(struct sock *sk, long timeout)
+ 		rcu_read_unlock();
+ 		sk_psock_stop(psock);
+ 		release_sock(sk);
+-		cancel_work_sync(&psock->work);
++		cancel_delayed_work_sync(&psock->work);
+ 		sk_psock_put(sk, psock);
+ 	}
++
+ 	/* Make sure we do not recurse. This is a bug.
+ 	 * Leak the socket instead of crashing on a stack overflow.
+ 	 */
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index 6c7c666554ced..ed63ee8f0d7e3 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -1570,7 +1570,7 @@ static int tcp_peek_sndq(struct sock *sk, struct msghdr *msg, int len)
+  * calculation of whether or not we must ACK for the sake of
+  * a window update.
+  */
+-static void __tcp_cleanup_rbuf(struct sock *sk, int copied)
++void __tcp_cleanup_rbuf(struct sock *sk, int copied)
+ {
+ 	struct tcp_sock *tp = tcp_sk(sk);
+ 	bool time_to_ack = false;
+@@ -1772,7 +1772,6 @@ int tcp_read_skb(struct sock *sk, skb_read_actor_t recv_actor)
+ 		WARN_ON_ONCE(!skb_set_owner_sk_safe(skb, sk));
+ 		tcp_flags = TCP_SKB_CB(skb)->tcp_flags;
+ 		used = recv_actor(sk, skb);
+-		consume_skb(skb);
+ 		if (used < 0) {
+ 			if (!copied)
+ 				copied = used;
+@@ -1786,14 +1785,6 @@ int tcp_read_skb(struct sock *sk, skb_read_actor_t recv_actor)
+ 			break;
+ 		}
+ 	}
+-	WRITE_ONCE(tp->copied_seq, seq);
+-
+-	tcp_rcv_space_adjust(sk);
+-
+-	/* Clean up data we have read: This will do ACK frames. */
+-	if (copied > 0)
+-		__tcp_cleanup_rbuf(sk, copied);
+-
+ 	return copied;
+ }
+ EXPORT_SYMBOL(tcp_read_skb);
+diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
+index 2e9547467edbe..5f93918c063c7 100644
+--- a/net/ipv4/tcp_bpf.c
++++ b/net/ipv4/tcp_bpf.c
+@@ -11,6 +11,24 @@
+ #include <net/inet_common.h>
+ #include <net/tls.h>
+ 
++void tcp_eat_skb(struct sock *sk, struct sk_buff *skb)
++{
++	struct tcp_sock *tcp;
++	int copied;
++
++	if (!skb || !skb->len || !sk_is_tcp(sk))
++		return;
++
++	if (skb_bpf_strparser(skb))
++		return;
++
++	tcp = tcp_sk(sk);
++	copied = tcp->copied_seq + skb->len;
++	WRITE_ONCE(tcp->copied_seq, copied);
++	tcp_rcv_space_adjust(sk);
++	__tcp_cleanup_rbuf(sk, skb->len);
++}
++
+ static int bpf_tcp_ingress(struct sock *sk, struct sk_psock *psock,
+ 			   struct sk_msg *msg, u32 apply_bytes, int flags)
+ {
+@@ -174,14 +192,34 @@ static int tcp_msg_wait_data(struct sock *sk, struct sk_psock *psock,
+ 	return ret;
+ }
+ 
++static bool is_next_msg_fin(struct sk_psock *psock)
++{
++	struct scatterlist *sge;
++	struct sk_msg *msg_rx;
++	int i;
++
++	msg_rx = sk_psock_peek_msg(psock);
++	i = msg_rx->sg.start;
++	sge = sk_msg_elem(msg_rx, i);
++	if (!sge->length) {
++		struct sk_buff *skb = msg_rx->skb;
++
++		if (skb && TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN)
++			return true;
++	}
++	return false;
++}
++
+ static int tcp_bpf_recvmsg_parser(struct sock *sk,
+ 				  struct msghdr *msg,
+ 				  size_t len,
+ 				  int flags,
+ 				  int *addr_len)
+ {
++	struct tcp_sock *tcp = tcp_sk(sk);
++	u32 seq = tcp->copied_seq;
+ 	struct sk_psock *psock;
+-	int copied;
++	int copied = 0;
+ 
+ 	if (unlikely(flags & MSG_ERRQUEUE))
+ 		return inet_recv_error(sk, msg, len, addr_len);
+@@ -194,8 +232,43 @@ static int tcp_bpf_recvmsg_parser(struct sock *sk,
+ 		return tcp_recvmsg(sk, msg, len, flags, addr_len);
+ 
+ 	lock_sock(sk);
++
++	/* We may have received data on the sk_receive_queue pre-accept and
++	 * then we can not use read_skb in this context because we haven't
++	 * assigned a sk_socket yet so have no link to the ops. The work-around
++	 * is to check the sk_receive_queue and in these cases read skbs off
++	 * queue again. The read_skb hook is not running at this point because
++	 * of lock_sock so we avoid having multiple runners in read_skb.
++	 */
++	if (unlikely(!skb_queue_empty(&sk->sk_receive_queue))) {
++		tcp_data_ready(sk);
++		/* This handles the ENOMEM errors if we both receive data
++		 * pre accept and are already under memory pressure. At least
++		 * let user know to retry.
++		 */
++		if (unlikely(!skb_queue_empty(&sk->sk_receive_queue))) {
++			copied = -EAGAIN;
++			goto out;
++		}
++	}
++
+ msg_bytes_ready:
+ 	copied = sk_msg_recvmsg(sk, psock, msg, len, flags);
++	/* The typical case for EFAULT is the socket was gracefully
++	 * shutdown with a FIN pkt. So check here the other case is
++	 * some error on copy_page_to_iter which would be unexpected.
++	 * On fin return correct return code to zero.
++	 */
++	if (copied == -EFAULT) {
++		bool is_fin = is_next_msg_fin(psock);
++
++		if (is_fin) {
++			copied = 0;
++			seq++;
++			goto out;
++		}
++	}
++	seq += copied;
+ 	if (!copied) {
+ 		long timeo;
+ 		int data;
+@@ -233,6 +306,10 @@ msg_bytes_ready:
+ 		copied = -EAGAIN;
+ 	}
+ out:
++	WRITE_ONCE(tcp->copied_seq, seq);
++	tcp_rcv_space_adjust(sk);
++	if (copied > 0)
++		__tcp_cleanup_rbuf(sk, copied);
+ 	release_sock(sk);
+ 	sk_psock_put(sk, psock);
+ 	return copied;
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index c605d171eb2d9..8aaae82e78aeb 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -1813,7 +1813,7 @@ EXPORT_SYMBOL(__skb_recv_udp);
+ int udp_read_skb(struct sock *sk, skb_read_actor_t recv_actor)
+ {
+ 	struct sk_buff *skb;
+-	int err, copied;
++	int err;
+ 
+ try_again:
+ 	skb = skb_recv_udp(sk, MSG_DONTWAIT, &err);
+@@ -1832,10 +1832,7 @@ try_again:
+ 	}
+ 
+ 	WARN_ON_ONCE(!skb_set_owner_sk_safe(skb, sk));
+-	copied = recv_actor(sk, skb);
+-	kfree_skb(skb);
+-
+-	return copied;
++	return recv_actor(sk, skb);
+ }
+ EXPORT_SYMBOL(udp_read_skb);
+ 
+diff --git a/net/netfilter/nf_conntrack_netlink.c b/net/netfilter/nf_conntrack_netlink.c
+index 6f3b23a6653cc..d40544cd61a6c 100644
+--- a/net/netfilter/nf_conntrack_netlink.c
++++ b/net/netfilter/nf_conntrack_netlink.c
+@@ -1559,9 +1559,6 @@ static const struct nla_policy ct_nla_policy[CTA_MAX+1] = {
+ 
+ static int ctnetlink_flush_iterate(struct nf_conn *ct, void *data)
+ {
+-	if (test_bit(IPS_OFFLOAD_BIT, &ct->status))
+-		return 0;
+-
+ 	return ctnetlink_filter_match(ct, data);
+ }
+ 
+@@ -1631,11 +1628,6 @@ static int ctnetlink_del_conntrack(struct sk_buff *skb,
+ 
+ 	ct = nf_ct_tuplehash_to_ctrack(h);
+ 
+-	if (test_bit(IPS_OFFLOAD_BIT, &ct->status)) {
+-		nf_ct_put(ct);
+-		return -EBUSY;
+-	}
+-
+ 	if (cda[CTA_ID]) {
+ 		__be32 id = nla_get_be32(cda[CTA_ID]);
+ 
+diff --git a/net/tls/tls.h b/net/tls/tls.h
+index 804c3880d0288..0672acab27731 100644
+--- a/net/tls/tls.h
++++ b/net/tls/tls.h
+@@ -167,6 +167,11 @@ static inline bool tls_strp_msg_ready(struct tls_sw_context_rx *ctx)
+ 	return ctx->strp.msg_ready;
+ }
+ 
++static inline bool tls_strp_msg_mixed_decrypted(struct tls_sw_context_rx *ctx)
++{
++	return ctx->strp.mixed_decrypted;
++}
++
+ #ifdef CONFIG_TLS_DEVICE
+ int tls_device_init(void);
+ void tls_device_cleanup(void);
+diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c
+index a7cc4f9faac28..bf69c9d6d06c0 100644
+--- a/net/tls/tls_device.c
++++ b/net/tls/tls_device.c
+@@ -1007,20 +1007,14 @@ int tls_device_decrypted(struct sock *sk, struct tls_context *tls_ctx)
+ 	struct tls_sw_context_rx *sw_ctx = tls_sw_ctx_rx(tls_ctx);
+ 	struct sk_buff *skb = tls_strp_msg(sw_ctx);
+ 	struct strp_msg *rxm = strp_msg(skb);
+-	int is_decrypted = skb->decrypted;
+-	int is_encrypted = !is_decrypted;
+-	struct sk_buff *skb_iter;
+-	int left;
+-
+-	left = rxm->full_len - skb->len;
+-	/* Check if all the data is decrypted already */
+-	skb_iter = skb_shinfo(skb)->frag_list;
+-	while (skb_iter && left > 0) {
+-		is_decrypted &= skb_iter->decrypted;
+-		is_encrypted &= !skb_iter->decrypted;
+-
+-		left -= skb_iter->len;
+-		skb_iter = skb_iter->next;
++	int is_decrypted, is_encrypted;
++
++	if (!tls_strp_msg_mixed_decrypted(sw_ctx)) {
++		is_decrypted = skb->decrypted;
++		is_encrypted = !is_decrypted;
++	} else {
++		is_decrypted = 0;
++		is_encrypted = 0;
+ 	}
+ 
+ 	trace_tls_device_decrypted(sk, tcp_sk(sk)->copied_seq - rxm->full_len,
+diff --git a/net/tls/tls_strp.c b/net/tls/tls_strp.c
+index 955ac3e0bf4d3..da95abbb7ea32 100644
+--- a/net/tls/tls_strp.c
++++ b/net/tls/tls_strp.c
+@@ -29,34 +29,50 @@ static void tls_strp_anchor_free(struct tls_strparser *strp)
+ 	struct skb_shared_info *shinfo = skb_shinfo(strp->anchor);
+ 
+ 	DEBUG_NET_WARN_ON_ONCE(atomic_read(&shinfo->dataref) != 1);
+-	shinfo->frag_list = NULL;
++	if (!strp->copy_mode)
++		shinfo->frag_list = NULL;
+ 	consume_skb(strp->anchor);
+ 	strp->anchor = NULL;
+ }
+ 
+-/* Create a new skb with the contents of input copied to its page frags */
+-static struct sk_buff *tls_strp_msg_make_copy(struct tls_strparser *strp)
++static struct sk_buff *
++tls_strp_skb_copy(struct tls_strparser *strp, struct sk_buff *in_skb,
++		  int offset, int len)
+ {
+-	struct strp_msg *rxm;
+ 	struct sk_buff *skb;
+-	int i, err, offset;
++	int i, err;
+ 
+-	skb = alloc_skb_with_frags(0, strp->stm.full_len, TLS_PAGE_ORDER,
++	skb = alloc_skb_with_frags(0, len, TLS_PAGE_ORDER,
+ 				   &err, strp->sk->sk_allocation);
+ 	if (!skb)
+ 		return NULL;
+ 
+-	offset = strp->stm.offset;
+ 	for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
+ 		skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
+ 
+-		WARN_ON_ONCE(skb_copy_bits(strp->anchor, offset,
++		WARN_ON_ONCE(skb_copy_bits(in_skb, offset,
+ 					   skb_frag_address(frag),
+ 					   skb_frag_size(frag)));
+ 		offset += skb_frag_size(frag);
+ 	}
+ 
+-	skb_copy_header(skb, strp->anchor);
++	skb->len = len;
++	skb->data_len = len;
++	skb_copy_header(skb, in_skb);
++	return skb;
++}
++
++/* Create a new skb with the contents of input copied to its page frags */
++static struct sk_buff *tls_strp_msg_make_copy(struct tls_strparser *strp)
++{
++	struct strp_msg *rxm;
++	struct sk_buff *skb;
++
++	skb = tls_strp_skb_copy(strp, strp->anchor, strp->stm.offset,
++				strp->stm.full_len);
++	if (!skb)
++		return NULL;
++
+ 	rxm = strp_msg(skb);
+ 	rxm->offset = 0;
+ 	return skb;
+@@ -180,22 +196,22 @@ static void tls_strp_flush_anchor_copy(struct tls_strparser *strp)
+ 	for (i = 0; i < shinfo->nr_frags; i++)
+ 		__skb_frag_unref(&shinfo->frags[i], false);
+ 	shinfo->nr_frags = 0;
++	if (strp->copy_mode) {
++		kfree_skb_list(shinfo->frag_list);
++		shinfo->frag_list = NULL;
++	}
+ 	strp->copy_mode = 0;
++	strp->mixed_decrypted = 0;
+ }
+ 
+-static int tls_strp_copyin(read_descriptor_t *desc, struct sk_buff *in_skb,
+-			   unsigned int offset, size_t in_len)
++static int tls_strp_copyin_frag(struct tls_strparser *strp, struct sk_buff *skb,
++				struct sk_buff *in_skb, unsigned int offset,
++				size_t in_len)
+ {
+-	struct tls_strparser *strp = (struct tls_strparser *)desc->arg.data;
+-	struct sk_buff *skb;
+-	skb_frag_t *frag;
+ 	size_t len, chunk;
++	skb_frag_t *frag;
+ 	int sz;
+ 
+-	if (strp->msg_ready)
+-		return 0;
+-
+-	skb = strp->anchor;
+ 	frag = &skb_shinfo(skb)->frags[skb->len / PAGE_SIZE];
+ 
+ 	len = in_len;
+@@ -208,19 +224,26 @@ static int tls_strp_copyin(read_descriptor_t *desc, struct sk_buff *in_skb,
+ 					   skb_frag_size(frag),
+ 					   chunk));
+ 
+-		sz = tls_rx_msg_size(strp, strp->anchor);
+-		if (sz < 0) {
+-			desc->error = sz;
+-			return 0;
+-		}
+-
+-		/* We may have over-read, sz == 0 is guaranteed under-read */
+-		if (sz > 0)
+-			chunk =	min_t(size_t, chunk, sz - skb->len);
+-
+ 		skb->len += chunk;
+ 		skb->data_len += chunk;
+ 		skb_frag_size_add(frag, chunk);
++
++		sz = tls_rx_msg_size(strp, skb);
++		if (sz < 0)
++			return sz;
++
++		/* We may have over-read, sz == 0 is guaranteed under-read */
++		if (unlikely(sz && sz < skb->len)) {
++			int over = skb->len - sz;
++
++			WARN_ON_ONCE(over > chunk);
++			skb->len -= over;
++			skb->data_len -= over;
++			skb_frag_size_add(frag, -over);
++
++			chunk -= over;
++		}
++
+ 		frag++;
+ 		len -= chunk;
+ 		offset += chunk;
+@@ -247,15 +270,99 @@ static int tls_strp_copyin(read_descriptor_t *desc, struct sk_buff *in_skb,
+ 		offset += chunk;
+ 	}
+ 
+-	if (strp->stm.full_len == skb->len) {
++read_done:
++	return in_len - len;
++}
++
++static int tls_strp_copyin_skb(struct tls_strparser *strp, struct sk_buff *skb,
++			       struct sk_buff *in_skb, unsigned int offset,
++			       size_t in_len)
++{
++	struct sk_buff *nskb, *first, *last;
++	struct skb_shared_info *shinfo;
++	size_t chunk;
++	int sz;
++
++	if (strp->stm.full_len)
++		chunk = strp->stm.full_len - skb->len;
++	else
++		chunk = TLS_MAX_PAYLOAD_SIZE + PAGE_SIZE;
++	chunk = min(chunk, in_len);
++
++	nskb = tls_strp_skb_copy(strp, in_skb, offset, chunk);
++	if (!nskb)
++		return -ENOMEM;
++
++	shinfo = skb_shinfo(skb);
++	if (!shinfo->frag_list) {
++		shinfo->frag_list = nskb;
++		nskb->prev = nskb;
++	} else {
++		first = shinfo->frag_list;
++		last = first->prev;
++		last->next = nskb;
++		first->prev = nskb;
++	}
++
++	skb->len += chunk;
++	skb->data_len += chunk;
++
++	if (!strp->stm.full_len) {
++		sz = tls_rx_msg_size(strp, skb);
++		if (sz < 0)
++			return sz;
++
++		/* We may have over-read, sz == 0 is guaranteed under-read */
++		if (unlikely(sz && sz < skb->len)) {
++			int over = skb->len - sz;
++
++			WARN_ON_ONCE(over > chunk);
++			skb->len -= over;
++			skb->data_len -= over;
++			__pskb_trim(nskb, nskb->len - over);
++
++			chunk -= over;
++		}
++
++		strp->stm.full_len = sz;
++	}
++
++	return chunk;
++}
++
++static int tls_strp_copyin(read_descriptor_t *desc, struct sk_buff *in_skb,
++			   unsigned int offset, size_t in_len)
++{
++	struct tls_strparser *strp = (struct tls_strparser *)desc->arg.data;
++	struct sk_buff *skb;
++	int ret;
++
++	if (strp->msg_ready)
++		return 0;
++
++	skb = strp->anchor;
++	if (!skb->len)
++		skb_copy_decrypted(skb, in_skb);
++	else
++		strp->mixed_decrypted |= !!skb_cmp_decrypted(skb, in_skb);
++
++	if (IS_ENABLED(CONFIG_TLS_DEVICE) && strp->mixed_decrypted)
++		ret = tls_strp_copyin_skb(strp, skb, in_skb, offset, in_len);
++	else
++		ret = tls_strp_copyin_frag(strp, skb, in_skb, offset, in_len);
++	if (ret < 0) {
++		desc->error = ret;
++		ret = 0;
++	}
++
++	if (strp->stm.full_len && strp->stm.full_len == skb->len) {
+ 		desc->count = 0;
+ 
+ 		strp->msg_ready = 1;
+ 		tls_rx_msg_ready(strp);
+ 	}
+ 
+-read_done:
+-	return in_len - len;
++	return ret;
+ }
+ 
+ static int tls_strp_read_copyin(struct tls_strparser *strp)
+@@ -315,15 +422,19 @@ static int tls_strp_read_copy(struct tls_strparser *strp, bool qshort)
+ 	return 0;
+ }
+ 
+-static bool tls_strp_check_no_dup(struct tls_strparser *strp)
++static bool tls_strp_check_queue_ok(struct tls_strparser *strp)
+ {
+ 	unsigned int len = strp->stm.offset + strp->stm.full_len;
+-	struct sk_buff *skb;
++	struct sk_buff *first, *skb;
+ 	u32 seq;
+ 
+-	skb = skb_shinfo(strp->anchor)->frag_list;
+-	seq = TCP_SKB_CB(skb)->seq;
++	first = skb_shinfo(strp->anchor)->frag_list;
++	skb = first;
++	seq = TCP_SKB_CB(first)->seq;
+ 
++	/* Make sure there's no duplicate data in the queue,
++	 * and the decrypted status matches.
++	 */
+ 	while (skb->len < len) {
+ 		seq += skb->len;
+ 		len -= skb->len;
+@@ -331,6 +442,8 @@ static bool tls_strp_check_no_dup(struct tls_strparser *strp)
+ 
+ 		if (TCP_SKB_CB(skb)->seq != seq)
+ 			return false;
++		if (skb_cmp_decrypted(first, skb))
++			return false;
+ 	}
+ 
+ 	return true;
+@@ -411,7 +524,7 @@ static int tls_strp_read_sock(struct tls_strparser *strp)
+ 			return tls_strp_read_copy(strp, true);
+ 	}
+ 
+-	if (!tls_strp_check_no_dup(strp))
++	if (!tls_strp_check_queue_ok(strp))
+ 		return tls_strp_read_copy(strp, false);
+ 
+ 	strp->msg_ready = 1;
+diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
+index 635b8bf6b937c..6e6a7c37d685c 100644
+--- a/net/tls/tls_sw.c
++++ b/net/tls/tls_sw.c
+@@ -2304,10 +2304,14 @@ static void tls_data_ready(struct sock *sk)
+ 	struct tls_context *tls_ctx = tls_get_ctx(sk);
+ 	struct tls_sw_context_rx *ctx = tls_sw_ctx_rx(tls_ctx);
+ 	struct sk_psock *psock;
++	gfp_t alloc_save;
+ 
+ 	trace_sk_data_ready(sk);
+ 
++	alloc_save = sk->sk_allocation;
++	sk->sk_allocation = GFP_ATOMIC;
+ 	tls_strp_data_ready(&ctx->strp);
++	sk->sk_allocation = alloc_save;
+ 
+ 	psock = sk_psock_get(sk);
+ 	if (psock) {
+diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
+index 29c6083a37daf..9383afe3e570b 100644
+--- a/net/unix/af_unix.c
++++ b/net/unix/af_unix.c
+@@ -2553,7 +2553,7 @@ static int unix_read_skb(struct sock *sk, skb_read_actor_t recv_actor)
+ {
+ 	struct unix_sock *u = unix_sk(sk);
+ 	struct sk_buff *skb;
+-	int err, copied;
++	int err;
+ 
+ 	mutex_lock(&u->iolock);
+ 	skb = skb_recv_datagram(sk, MSG_DONTWAIT, &err);
+@@ -2561,10 +2561,7 @@ static int unix_read_skb(struct sock *sk, skb_read_actor_t recv_actor)
+ 	if (!skb)
+ 		return err;
+ 
+-	copied = recv_actor(sk, skb);
+-	kfree_skb(skb);
+-
+-	return copied;
++	return recv_actor(sk, skb);
+ }
+ 
+ /*
+diff --git a/sound/soc/intel/avs/control.c b/sound/soc/intel/avs/control.c
+index a8b14b784f8a5..3dfa2e9816db0 100644
+--- a/sound/soc/intel/avs/control.c
++++ b/sound/soc/intel/avs/control.c
+@@ -21,17 +21,25 @@ static struct avs_dev *avs_get_kcontrol_adev(struct snd_kcontrol *kcontrol)
+ 	return to_avs_dev(w->dapm->component->dev);
+ }
+ 
+-static struct avs_path_module *avs_get_kcontrol_module(struct avs_dev *adev, u32 id)
++static struct avs_path_module *avs_get_volume_module(struct avs_dev *adev, u32 id)
+ {
+ 	struct avs_path *path;
+ 	struct avs_path_pipeline *ppl;
+ 	struct avs_path_module *mod;
+ 
+-	list_for_each_entry(path, &adev->path_list, node)
+-		list_for_each_entry(ppl, &path->ppl_list, node)
+-			list_for_each_entry(mod, &ppl->mod_list, node)
+-				if (mod->template->ctl_id && mod->template->ctl_id == id)
++	spin_lock(&adev->path_list_lock);
++	list_for_each_entry(path, &adev->path_list, node) {
++		list_for_each_entry(ppl, &path->ppl_list, node) {
++			list_for_each_entry(mod, &ppl->mod_list, node) {
++				if (guid_equal(&mod->template->cfg_ext->type, &AVS_PEAKVOL_MOD_UUID)
++				    && mod->template->ctl_id == id) {
++					spin_unlock(&adev->path_list_lock);
+ 					return mod;
++				}
++			}
++		}
++	}
++	spin_unlock(&adev->path_list_lock);
+ 
+ 	return NULL;
+ }
+@@ -49,7 +57,7 @@ int avs_control_volume_get(struct snd_kcontrol *kcontrol, struct snd_ctl_elem_va
+ 	/* prevent access to modules while path is being constructed */
+ 	mutex_lock(&adev->path_mutex);
+ 
+-	active_module = avs_get_kcontrol_module(adev, ctl_data->id);
++	active_module = avs_get_volume_module(adev, ctl_data->id);
+ 	if (active_module) {
+ 		ret = avs_ipc_peakvol_get_volume(adev, active_module->module_id,
+ 						 active_module->instance_id, &dspvols,
+@@ -89,7 +97,7 @@ int avs_control_volume_put(struct snd_kcontrol *kcontrol, struct snd_ctl_elem_va
+ 		changed = 1;
+ 	}
+ 
+-	active_module = avs_get_kcontrol_module(adev, ctl_data->id);
++	active_module = avs_get_volume_module(adev, ctl_data->id);
+ 	if (active_module) {
+ 		dspvol.channel_id = AVS_ALL_CHANNELS_MASK;
+ 		dspvol.target_volume = *volume;
+diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
+index b677dcd0b77af..ad01c9e1ff12b 100644
+--- a/tools/testing/selftests/bpf/Makefile
++++ b/tools/testing/selftests/bpf/Makefile
+@@ -197,7 +197,7 @@ $(OUTPUT)/urandom_read: urandom_read.c urandom_read_aux.c $(OUTPUT)/liburandom_r
+ 
+ $(OUTPUT)/sign-file: ../../../../scripts/sign-file.c
+ 	$(call msg,SIGN-FILE,,$@)
+-	$(Q)$(CC) $(shell $(HOSTPKG_CONFIG)--cflags libcrypto 2> /dev/null) \
++	$(Q)$(CC) $(shell $(HOSTPKG_CONFIG) --cflags libcrypto 2> /dev/null) \
+ 		  $< -o $@ \
+ 		  $(shell $(HOSTPKG_CONFIG) --libs libcrypto 2> /dev/null || echo -lcrypto)
+ 


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [gentoo-commits] proj/linux-patches:6.3 commit in: /
@ 2023-06-09 11:29 Mike Pagano
  0 siblings, 0 replies; 23+ messages in thread
From: Mike Pagano @ 2023-06-09 11:29 UTC (permalink / raw
  To: gentoo-commits

commit:     fd5c3f7d69dde3604864f5404eedccb725ba9c0b
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Jun  9 11:28:52 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Jun  9 11:28:52 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=fd5c3f7d

Linux patch 6.3.7

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README            |     4 +
 1006_linux-6.3.7.patch | 10821 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 10825 insertions(+)

diff --git a/0000_README b/0000_README
index 210d6768..5091043c 100644
--- a/0000_README
+++ b/0000_README
@@ -67,6 +67,10 @@ Patch:  1005_linux-6.3.6.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.3.6
 
+Patch:  1006_linux-6.3.7.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.3.7
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1006_linux-6.3.7.patch b/1006_linux-6.3.7.patch
new file mode 100644
index 00000000..0a42e292
--- /dev/null
+++ b/1006_linux-6.3.7.patch
@@ -0,0 +1,10821 @@
+diff --git a/Documentation/devicetree/bindings/iio/adc/renesas,rcar-gyroadc.yaml b/Documentation/devicetree/bindings/iio/adc/renesas,rcar-gyroadc.yaml
+index c115e2e99bd9a..4a7b1385fdc7e 100644
+--- a/Documentation/devicetree/bindings/iio/adc/renesas,rcar-gyroadc.yaml
++++ b/Documentation/devicetree/bindings/iio/adc/renesas,rcar-gyroadc.yaml
+@@ -86,7 +86,7 @@ patternProperties:
+             of the MAX chips to the GyroADC, while MISO line of each Maxim
+             ADC connects to a shared input pin of the GyroADC.
+         enum:
+-          - adi,7476
++          - adi,ad7476
+           - fujitsu,mb88101a
+           - maxim,max1162
+           - maxim,max11100
+diff --git a/Documentation/devicetree/bindings/serial/8250_omap.yaml b/Documentation/devicetree/bindings/serial/8250_omap.yaml
+index eb3488d8f9ee6..6a7be42da523c 100644
+--- a/Documentation/devicetree/bindings/serial/8250_omap.yaml
++++ b/Documentation/devicetree/bindings/serial/8250_omap.yaml
+@@ -70,6 +70,7 @@ properties:
+   dsr-gpios: true
+   rng-gpios: true
+   dcd-gpios: true
++  rs485-rts-active-high: true
+   rts-gpio: true
+   power-domains: true
+   clock-frequency: true
+diff --git a/Documentation/devicetree/bindings/sound/tas2562.yaml b/Documentation/devicetree/bindings/sound/tas2562.yaml
+index 1085592cefccc..81218c07079a8 100644
+--- a/Documentation/devicetree/bindings/sound/tas2562.yaml
++++ b/Documentation/devicetree/bindings/sound/tas2562.yaml
+@@ -55,7 +55,9 @@ properties:
+     description: TDM TX current sense time slot.
+ 
+   '#sound-dai-cells':
+-    const: 1
++    # The codec has a single DAI, the #sound-dai-cells=<1>; case is left in for backward
++    # compatibility but is deprecated.
++    enum: [0, 1]
+ 
+ required:
+   - compatible
+@@ -72,7 +74,7 @@ examples:
+      codec: codec@4c {
+        compatible = "ti,tas2562";
+        reg = <0x4c>;
+-       #sound-dai-cells = <1>;
++       #sound-dai-cells = <0>;
+        interrupt-parent = <&gpio1>;
+        interrupts = <14>;
+        shutdown-gpios = <&gpio1 15 0>;
+diff --git a/Documentation/devicetree/bindings/sound/tas2770.yaml b/Documentation/devicetree/bindings/sound/tas2770.yaml
+index 982949ba8a4be..cdb493db47f9b 100644
+--- a/Documentation/devicetree/bindings/sound/tas2770.yaml
++++ b/Documentation/devicetree/bindings/sound/tas2770.yaml
+@@ -57,7 +57,9 @@ properties:
+       - 1 # Falling edge
+ 
+   '#sound-dai-cells':
+-    const: 1
++    # The codec has a single DAI, the #sound-dai-cells=<1>; case is left in for backward
++    # compatibility but is deprecated.
++    enum: [0, 1]
+ 
+ required:
+   - compatible
+@@ -74,7 +76,7 @@ examples:
+      codec: codec@41 {
+        compatible = "ti,tas2770";
+        reg = <0x41>;
+-       #sound-dai-cells = <1>;
++       #sound-dai-cells = <0>;
+        interrupt-parent = <&gpio1>;
+        interrupts = <14>;
+        reset-gpio = <&gpio1 15 0>;
+diff --git a/Documentation/devicetree/bindings/sound/tas27xx.yaml b/Documentation/devicetree/bindings/sound/tas27xx.yaml
+index 0957dd435bb4b..2ef05aacc167a 100644
+--- a/Documentation/devicetree/bindings/sound/tas27xx.yaml
++++ b/Documentation/devicetree/bindings/sound/tas27xx.yaml
+@@ -50,7 +50,9 @@ properties:
+     description: TDM TX voltage sense time slot.
+ 
+   '#sound-dai-cells':
+-    const: 1
++    # The codec has a single DAI, the #sound-dai-cells=<1>; case is left in for backward
++    # compatibility but is deprecated.
++    enum: [0, 1]
+ 
+ required:
+   - compatible
+@@ -67,7 +69,7 @@ examples:
+      codec: codec@38 {
+        compatible = "ti,tas2764";
+        reg = <0x38>;
+-       #sound-dai-cells = <1>;
++       #sound-dai-cells = <0>;
+        interrupt-parent = <&gpio1>;
+        interrupts = <14>;
+        reset-gpios = <&gpio1 15 0>;
+diff --git a/Documentation/devicetree/bindings/usb/snps,dwc3.yaml b/Documentation/devicetree/bindings/usb/snps,dwc3.yaml
+index be36956af53b0..22cd1e37d5aa4 100644
+--- a/Documentation/devicetree/bindings/usb/snps,dwc3.yaml
++++ b/Documentation/devicetree/bindings/usb/snps,dwc3.yaml
+@@ -270,7 +270,7 @@ properties:
+     description:
+       High-Speed PHY interface selection between UTMI+ and ULPI when the
+       DWC_USB3_HSPHY_INTERFACE has value 3.
+-    $ref: /schemas/types.yaml#/definitions/uint8
++    $ref: /schemas/types.yaml#/definitions/string
+     enum: [utmi, ulpi]
+ 
+   snps,quirk-frame-length-adjustment:
+diff --git a/Makefile b/Makefile
+index 1dffadbf1f87f..71c958fd52854 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 3
+-SUBLEVEL = 6
++SUBLEVEL = 7
+ EXTRAVERSION =
+ NAME = Hurr durr I'ma ninja sloth
+ 
+diff --git a/arch/arm/boot/dts/stm32f7-pinctrl.dtsi b/arch/arm/boot/dts/stm32f7-pinctrl.dtsi
+index c8e6c52fb248e..9f65403295ca0 100644
+--- a/arch/arm/boot/dts/stm32f7-pinctrl.dtsi
++++ b/arch/arm/boot/dts/stm32f7-pinctrl.dtsi
+@@ -283,6 +283,88 @@
+ 					slew-rate = <2>;
+ 				};
+ 			};
++
++			can1_pins_a: can1-0 {
++				pins1 {
++					pinmux = <STM32_PINMUX('A', 12, AF9)>; /* CAN1_TX */
++				};
++				pins2 {
++					pinmux = <STM32_PINMUX('A', 11, AF9)>; /* CAN1_RX */
++					bias-pull-up;
++				};
++			};
++
++			can1_pins_b: can1-1 {
++				pins1 {
++					pinmux = <STM32_PINMUX('B', 9, AF9)>; /* CAN1_TX */
++				};
++				pins2 {
++					pinmux = <STM32_PINMUX('B', 8, AF9)>; /* CAN1_RX */
++					bias-pull-up;
++				};
++			};
++
++			can1_pins_c: can1-2 {
++				pins1 {
++					pinmux = <STM32_PINMUX('D', 1, AF9)>; /* CAN1_TX */
++				};
++				pins2 {
++					pinmux = <STM32_PINMUX('D', 0, AF9)>; /* CAN1_RX */
++					bias-pull-up;
++
++				};
++			};
++
++			can1_pins_d: can1-3 {
++				pins1 {
++					pinmux = <STM32_PINMUX('H', 13, AF9)>; /* CAN1_TX */
++				};
++				pins2 {
++					pinmux = <STM32_PINMUX('H', 14, AF9)>; /* CAN1_RX */
++					bias-pull-up;
++
++				};
++			};
++
++			can2_pins_a: can2-0 {
++				pins1 {
++					pinmux = <STM32_PINMUX('B', 6, AF9)>; /* CAN2_TX */
++				};
++				pins2 {
++					pinmux = <STM32_PINMUX('B', 5, AF9)>; /* CAN2_RX */
++					bias-pull-up;
++				};
++			};
++
++			can2_pins_b: can2-1 {
++				pins1 {
++					pinmux = <STM32_PINMUX('B', 13, AF9)>; /* CAN2_TX */
++				};
++				pins2 {
++					pinmux = <STM32_PINMUX('B', 12, AF9)>; /* CAN2_RX */
++					bias-pull-up;
++				};
++			};
++
++			can3_pins_a: can3-0 {
++				pins1 {
++					pinmux = <STM32_PINMUX('A', 15, AF11)>; /* CAN3_TX */
++				};
++				pins2 {
++					pinmux = <STM32_PINMUX('A', 8, AF11)>; /* CAN3_RX */
++					bias-pull-up;
++				};
++			};
++
++			can3_pins_b: can3-1 {
++				pins1 {
++					pinmux = <STM32_PINMUX('B', 4, AF11)>;  /* CAN3_TX */
++				};
++				pins2 {
++					pinmux = <STM32_PINMUX('B', 3, AF11)>; /* CAN3_RX */
++					bias-pull-up;
++				};
++			};
+ 		};
+ 	};
+ };
+diff --git a/arch/arm/kernel/unwind.c b/arch/arm/kernel/unwind.c
+index 53be7ea6181b3..9d2192156087b 100644
+--- a/arch/arm/kernel/unwind.c
++++ b/arch/arm/kernel/unwind.c
+@@ -308,6 +308,29 @@ static int unwind_exec_pop_subset_r0_to_r3(struct unwind_ctrl_block *ctrl,
+ 	return URC_OK;
+ }
+ 
++static unsigned long unwind_decode_uleb128(struct unwind_ctrl_block *ctrl)
++{
++	unsigned long bytes = 0;
++	unsigned long insn;
++	unsigned long result = 0;
++
++	/*
++	 * unwind_get_byte() will advance `ctrl` one instruction at a time, so
++	 * loop until we get an instruction byte where bit 7 is not set.
++	 *
++	 * Note: This decodes a maximum of 4 bytes to output 28 bits data where
++	 * max is 0xfffffff: that will cover a vsp increment of 1073742336, hence
++	 * it is sufficient for unwinding the stack.
++	 */
++	do {
++		insn = unwind_get_byte(ctrl);
++		result |= (insn & 0x7f) << (bytes * 7);
++		bytes++;
++	} while (!!(insn & 0x80) && (bytes != sizeof(result)));
++
++	return result;
++}
++
+ /*
+  * Execute the current unwind instruction.
+  */
+@@ -361,7 +384,7 @@ static int unwind_exec_insn(struct unwind_ctrl_block *ctrl)
+ 		if (ret)
+ 			goto error;
+ 	} else if (insn == 0xb2) {
+-		unsigned long uleb128 = unwind_get_byte(ctrl);
++		unsigned long uleb128 = unwind_decode_uleb128(ctrl);
+ 
+ 		ctrl->vrs[SP] += 0x204 + (uleb128 << 2);
+ 	} else {
+diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
+index dc3c072e862f1..93bd0975b15f5 100644
+--- a/arch/arm64/include/asm/kvm_pgtable.h
++++ b/arch/arm64/include/asm/kvm_pgtable.h
+@@ -632,9 +632,9 @@ int kvm_pgtable_stage2_flush(struct kvm_pgtable *pgt, u64 addr, u64 size);
+  *
+  * The walker will walk the page-table entries corresponding to the input
+  * address range specified, visiting entries according to the walker flags.
+- * Invalid entries are treated as leaf entries. Leaf entries are reloaded
+- * after invoking the walker callback, allowing the walker to descend into
+- * a newly installed table.
++ * Invalid entries are treated as leaf entries. The visited page table entry is
++ * reloaded after invoking the walker callback, allowing the walker to descend
++ * into a newly installed table.
+  *
+  * Returning a negative error code from the walker callback function will
+  * terminate the walk immediately with the same error code.
+diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
+index 0119dc91abb5d..d9e1355730ef5 100644
+--- a/arch/arm64/kernel/vdso.c
++++ b/arch/arm64/kernel/vdso.c
+@@ -288,7 +288,7 @@ static int aarch32_alloc_kuser_vdso_page(void)
+ 
+ 	memcpy((void *)(vdso_page + 0x1000 - kuser_sz), __kuser_helper_start,
+ 	       kuser_sz);
+-	aarch32_vectors_page = virt_to_page(vdso_page);
++	aarch32_vectors_page = virt_to_page((void *)vdso_page);
+ 	return 0;
+ }
+ 
+diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
+index 07d37ff88a3f2..33f4d42003296 100644
+--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
++++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
+@@ -351,17 +351,21 @@ static bool kvm_hyp_handle_cp15_32(struct kvm_vcpu *vcpu, u64 *exit_code)
+ 	return false;
+ }
+ 
+-static bool kvm_hyp_handle_iabt_low(struct kvm_vcpu *vcpu, u64 *exit_code)
++static bool kvm_hyp_handle_memory_fault(struct kvm_vcpu *vcpu, u64 *exit_code)
+ {
+ 	if (!__populate_fault_info(vcpu))
+ 		return true;
+ 
+ 	return false;
+ }
++static bool kvm_hyp_handle_iabt_low(struct kvm_vcpu *vcpu, u64 *exit_code)
++	__alias(kvm_hyp_handle_memory_fault);
++static bool kvm_hyp_handle_watchpt_low(struct kvm_vcpu *vcpu, u64 *exit_code)
++	__alias(kvm_hyp_handle_memory_fault);
+ 
+ static bool kvm_hyp_handle_dabt_low(struct kvm_vcpu *vcpu, u64 *exit_code)
+ {
+-	if (!__populate_fault_info(vcpu))
++	if (kvm_hyp_handle_memory_fault(vcpu, exit_code))
+ 		return true;
+ 
+ 	if (static_branch_unlikely(&vgic_v2_cpuif_trap)) {
+diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+index 552653fa18be3..dab14d3ca7bb6 100644
+--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
++++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+@@ -568,7 +568,7 @@ struct pkvm_mem_donation {
+ 
+ struct check_walk_data {
+ 	enum pkvm_page_state	desired;
+-	enum pkvm_page_state	(*get_page_state)(kvm_pte_t pte);
++	enum pkvm_page_state	(*get_page_state)(kvm_pte_t pte, u64 addr);
+ };
+ 
+ static int __check_page_state_visitor(const struct kvm_pgtable_visit_ctx *ctx,
+@@ -576,10 +576,7 @@ static int __check_page_state_visitor(const struct kvm_pgtable_visit_ctx *ctx,
+ {
+ 	struct check_walk_data *d = ctx->arg;
+ 
+-	if (kvm_pte_valid(ctx->old) && !addr_is_allowed_memory(kvm_pte_to_phys(ctx->old)))
+-		return -EINVAL;
+-
+-	return d->get_page_state(ctx->old) == d->desired ? 0 : -EPERM;
++	return d->get_page_state(ctx->old, ctx->addr) == d->desired ? 0 : -EPERM;
+ }
+ 
+ static int check_page_state_range(struct kvm_pgtable *pgt, u64 addr, u64 size,
+@@ -594,8 +591,11 @@ static int check_page_state_range(struct kvm_pgtable *pgt, u64 addr, u64 size,
+ 	return kvm_pgtable_walk(pgt, addr, size, &walker);
+ }
+ 
+-static enum pkvm_page_state host_get_page_state(kvm_pte_t pte)
++static enum pkvm_page_state host_get_page_state(kvm_pte_t pte, u64 addr)
+ {
++	if (!addr_is_allowed_memory(addr))
++		return PKVM_NOPAGE;
++
+ 	if (!kvm_pte_valid(pte) && pte)
+ 		return PKVM_NOPAGE;
+ 
+@@ -702,7 +702,7 @@ static int host_complete_donation(u64 addr, const struct pkvm_mem_transition *tx
+ 	return host_stage2_set_owner_locked(addr, size, host_id);
+ }
+ 
+-static enum pkvm_page_state hyp_get_page_state(kvm_pte_t pte)
++static enum pkvm_page_state hyp_get_page_state(kvm_pte_t pte, u64 addr)
+ {
+ 	if (!kvm_pte_valid(pte))
+ 		return PKVM_NOPAGE;
+diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
+index c2cb46ca4fb66..895fb32000762 100644
+--- a/arch/arm64/kvm/hyp/nvhe/switch.c
++++ b/arch/arm64/kvm/hyp/nvhe/switch.c
+@@ -186,6 +186,7 @@ static const exit_handler_fn hyp_exit_handlers[] = {
+ 	[ESR_ELx_EC_FP_ASIMD]		= kvm_hyp_handle_fpsimd,
+ 	[ESR_ELx_EC_IABT_LOW]		= kvm_hyp_handle_iabt_low,
+ 	[ESR_ELx_EC_DABT_LOW]		= kvm_hyp_handle_dabt_low,
++	[ESR_ELx_EC_WATCHPT_LOW]	= kvm_hyp_handle_watchpt_low,
+ 	[ESR_ELx_EC_PAC]		= kvm_hyp_handle_ptrauth,
+ };
+ 
+@@ -196,6 +197,7 @@ static const exit_handler_fn pvm_exit_handlers[] = {
+ 	[ESR_ELx_EC_FP_ASIMD]		= kvm_hyp_handle_fpsimd,
+ 	[ESR_ELx_EC_IABT_LOW]		= kvm_hyp_handle_iabt_low,
+ 	[ESR_ELx_EC_DABT_LOW]		= kvm_hyp_handle_dabt_low,
++	[ESR_ELx_EC_WATCHPT_LOW]	= kvm_hyp_handle_watchpt_low,
+ 	[ESR_ELx_EC_PAC]		= kvm_hyp_handle_ptrauth,
+ };
+ 
+diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
+index 140f82300db5a..acd233e5586a4 100644
+--- a/arch/arm64/kvm/hyp/pgtable.c
++++ b/arch/arm64/kvm/hyp/pgtable.c
+@@ -209,14 +209,26 @@ static inline int __kvm_pgtable_visit(struct kvm_pgtable_walk_data *data,
+ 		.flags	= flags,
+ 	};
+ 	int ret = 0;
++	bool reload = false;
+ 	kvm_pteref_t childp;
+ 	bool table = kvm_pte_table(ctx.old, level);
+ 
+-	if (table && (ctx.flags & KVM_PGTABLE_WALK_TABLE_PRE))
++	if (table && (ctx.flags & KVM_PGTABLE_WALK_TABLE_PRE)) {
+ 		ret = kvm_pgtable_visitor_cb(data, &ctx, KVM_PGTABLE_WALK_TABLE_PRE);
++		reload = true;
++	}
+ 
+ 	if (!table && (ctx.flags & KVM_PGTABLE_WALK_LEAF)) {
+ 		ret = kvm_pgtable_visitor_cb(data, &ctx, KVM_PGTABLE_WALK_LEAF);
++		reload = true;
++	}
++
++	/*
++	 * Reload the page table after invoking the walker callback for leaf
++	 * entries or after pre-order traversal, to allow the walker to descend
++	 * into a newly installed or replaced table.
++	 */
++	if (reload) {
+ 		ctx.old = READ_ONCE(*ptep);
+ 		table = kvm_pte_table(ctx.old, level);
+ 	}
+@@ -1321,4 +1333,7 @@ void kvm_pgtable_stage2_free_removed(struct kvm_pgtable_mm_ops *mm_ops, void *pg
+ 	};
+ 
+ 	WARN_ON(__kvm_pgtable_walk(&data, mm_ops, ptep, level + 1));
++
++	WARN_ON(mm_ops->page_count(pgtable) != 1);
++	mm_ops->put_page(pgtable);
+ }
+diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
+index cd3f3117bf164..ccb70cf100a4f 100644
+--- a/arch/arm64/kvm/hyp/vhe/switch.c
++++ b/arch/arm64/kvm/hyp/vhe/switch.c
+@@ -110,6 +110,7 @@ static const exit_handler_fn hyp_exit_handlers[] = {
+ 	[ESR_ELx_EC_FP_ASIMD]		= kvm_hyp_handle_fpsimd,
+ 	[ESR_ELx_EC_IABT_LOW]		= kvm_hyp_handle_iabt_low,
+ 	[ESR_ELx_EC_DABT_LOW]		= kvm_hyp_handle_dabt_low,
++	[ESR_ELx_EC_WATCHPT_LOW]	= kvm_hyp_handle_watchpt_low,
+ 	[ESR_ELx_EC_PAC]		= kvm_hyp_handle_ptrauth,
+ };
+ 
+diff --git a/arch/arm64/kvm/vgic/vgic-init.c b/arch/arm64/kvm/vgic/vgic-init.c
+index 9d42c7cb2b588..c199ba2f192ef 100644
+--- a/arch/arm64/kvm/vgic/vgic-init.c
++++ b/arch/arm64/kvm/vgic/vgic-init.c
+@@ -235,9 +235,9 @@ int kvm_vgic_vcpu_init(struct kvm_vcpu *vcpu)
+ 	 * KVM io device for the redistributor that belongs to this VCPU.
+ 	 */
+ 	if (dist->vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3) {
+-		mutex_lock(&vcpu->kvm->arch.config_lock);
++		mutex_lock(&vcpu->kvm->slots_lock);
+ 		ret = vgic_register_redist_iodev(vcpu);
+-		mutex_unlock(&vcpu->kvm->arch.config_lock);
++		mutex_unlock(&vcpu->kvm->slots_lock);
+ 	}
+ 	return ret;
+ }
+@@ -446,11 +446,13 @@ int vgic_lazy_init(struct kvm *kvm)
+ int kvm_vgic_map_resources(struct kvm *kvm)
+ {
+ 	struct vgic_dist *dist = &kvm->arch.vgic;
++	gpa_t dist_base;
+ 	int ret = 0;
+ 
+ 	if (likely(vgic_ready(kvm)))
+ 		return 0;
+ 
++	mutex_lock(&kvm->slots_lock);
+ 	mutex_lock(&kvm->arch.config_lock);
+ 	if (vgic_ready(kvm))
+ 		goto out;
+@@ -463,13 +465,26 @@ int kvm_vgic_map_resources(struct kvm *kvm)
+ 	else
+ 		ret = vgic_v3_map_resources(kvm);
+ 
+-	if (ret)
++	if (ret) {
+ 		__kvm_vgic_destroy(kvm);
+-	else
+-		dist->ready = true;
++		goto out;
++	}
++	dist->ready = true;
++	dist_base = dist->vgic_dist_base;
++	mutex_unlock(&kvm->arch.config_lock);
++
++	ret = vgic_register_dist_iodev(kvm, dist_base,
++				       kvm_vgic_global_state.type);
++	if (ret) {
++		kvm_err("Unable to register VGIC dist MMIO regions\n");
++		kvm_vgic_destroy(kvm);
++	}
++	mutex_unlock(&kvm->slots_lock);
++	return ret;
+ 
+ out:
+ 	mutex_unlock(&kvm->arch.config_lock);
++	mutex_unlock(&kvm->slots_lock);
+ 	return ret;
+ }
+ 
+diff --git a/arch/arm64/kvm/vgic/vgic-its.c b/arch/arm64/kvm/vgic/vgic-its.c
+index 750e51e3779a3..5fe2365a629f2 100644
+--- a/arch/arm64/kvm/vgic/vgic-its.c
++++ b/arch/arm64/kvm/vgic/vgic-its.c
+@@ -1936,6 +1936,7 @@ void vgic_lpi_translation_cache_destroy(struct kvm *kvm)
+ 
+ static int vgic_its_create(struct kvm_device *dev, u32 type)
+ {
++	int ret;
+ 	struct vgic_its *its;
+ 
+ 	if (type != KVM_DEV_TYPE_ARM_VGIC_ITS)
+@@ -1945,9 +1946,12 @@ static int vgic_its_create(struct kvm_device *dev, u32 type)
+ 	if (!its)
+ 		return -ENOMEM;
+ 
++	mutex_lock(&dev->kvm->arch.config_lock);
++
+ 	if (vgic_initialized(dev->kvm)) {
+-		int ret = vgic_v4_init(dev->kvm);
++		ret = vgic_v4_init(dev->kvm);
+ 		if (ret < 0) {
++			mutex_unlock(&dev->kvm->arch.config_lock);
+ 			kfree(its);
+ 			return ret;
+ 		}
+@@ -1960,12 +1964,10 @@ static int vgic_its_create(struct kvm_device *dev, u32 type)
+ 
+ 	/* Yep, even more trickery for lock ordering... */
+ #ifdef CONFIG_LOCKDEP
+-	mutex_lock(&dev->kvm->arch.config_lock);
+ 	mutex_lock(&its->cmd_lock);
+ 	mutex_lock(&its->its_lock);
+ 	mutex_unlock(&its->its_lock);
+ 	mutex_unlock(&its->cmd_lock);
+-	mutex_unlock(&dev->kvm->arch.config_lock);
+ #endif
+ 
+ 	its->vgic_its_base = VGIC_ADDR_UNDEF;
+@@ -1986,7 +1988,11 @@ static int vgic_its_create(struct kvm_device *dev, u32 type)
+ 
+ 	dev->private = its;
+ 
+-	return vgic_its_set_abi(its, NR_ITS_ABIS - 1);
++	ret = vgic_its_set_abi(its, NR_ITS_ABIS - 1);
++
++	mutex_unlock(&dev->kvm->arch.config_lock);
++
++	return ret;
+ }
+ 
+ static void vgic_its_destroy(struct kvm_device *kvm_dev)
+diff --git a/arch/arm64/kvm/vgic/vgic-kvm-device.c b/arch/arm64/kvm/vgic/vgic-kvm-device.c
+index 07e727023deb7..bf4b3d9631ce1 100644
+--- a/arch/arm64/kvm/vgic/vgic-kvm-device.c
++++ b/arch/arm64/kvm/vgic/vgic-kvm-device.c
+@@ -102,7 +102,11 @@ static int kvm_vgic_addr(struct kvm *kvm, struct kvm_device_attr *attr, bool wri
+ 		if (get_user(addr, uaddr))
+ 			return -EFAULT;
+ 
+-	mutex_lock(&kvm->arch.config_lock);
++	/*
++	 * Since we can't hold config_lock while registering the redistributor
++	 * iodevs, take the slots_lock immediately.
++	 */
++	mutex_lock(&kvm->slots_lock);
+ 	switch (attr->attr) {
+ 	case KVM_VGIC_V2_ADDR_TYPE_DIST:
+ 		r = vgic_check_type(kvm, KVM_DEV_TYPE_ARM_VGIC_V2);
+@@ -182,6 +186,7 @@ static int kvm_vgic_addr(struct kvm *kvm, struct kvm_device_attr *attr, bool wri
+ 	if (r)
+ 		goto out;
+ 
++	mutex_lock(&kvm->arch.config_lock);
+ 	if (write) {
+ 		r = vgic_check_iorange(kvm, *addr_ptr, addr, alignment, size);
+ 		if (!r)
+@@ -189,9 +194,10 @@ static int kvm_vgic_addr(struct kvm *kvm, struct kvm_device_attr *attr, bool wri
+ 	} else {
+ 		addr = *addr_ptr;
+ 	}
++	mutex_unlock(&kvm->arch.config_lock);
+ 
+ out:
+-	mutex_unlock(&kvm->arch.config_lock);
++	mutex_unlock(&kvm->slots_lock);
+ 
+ 	if (!r && !write)
+ 		r =  put_user(addr, uaddr);
+diff --git a/arch/arm64/kvm/vgic/vgic-mmio-v3.c b/arch/arm64/kvm/vgic/vgic-mmio-v3.c
+index 472b18ac92a24..188d2187eede9 100644
+--- a/arch/arm64/kvm/vgic/vgic-mmio-v3.c
++++ b/arch/arm64/kvm/vgic/vgic-mmio-v3.c
+@@ -769,10 +769,13 @@ int vgic_register_redist_iodev(struct kvm_vcpu *vcpu)
+ 	struct vgic_io_device *rd_dev = &vcpu->arch.vgic_cpu.rd_iodev;
+ 	struct vgic_redist_region *rdreg;
+ 	gpa_t rd_base;
+-	int ret;
++	int ret = 0;
++
++	lockdep_assert_held(&kvm->slots_lock);
++	mutex_lock(&kvm->arch.config_lock);
+ 
+ 	if (!IS_VGIC_ADDR_UNDEF(vgic_cpu->rd_iodev.base_addr))
+-		return 0;
++		goto out_unlock;
+ 
+ 	/*
+ 	 * We may be creating VCPUs before having set the base address for the
+@@ -782,10 +785,12 @@ int vgic_register_redist_iodev(struct kvm_vcpu *vcpu)
+ 	 */
+ 	rdreg = vgic_v3_rdist_free_slot(&vgic->rd_regions);
+ 	if (!rdreg)
+-		return 0;
++		goto out_unlock;
+ 
+-	if (!vgic_v3_check_base(kvm))
+-		return -EINVAL;
++	if (!vgic_v3_check_base(kvm)) {
++		ret = -EINVAL;
++		goto out_unlock;
++	}
+ 
+ 	vgic_cpu->rdreg = rdreg;
+ 	vgic_cpu->rdreg_index = rdreg->free_index;
+@@ -799,16 +804,20 @@ int vgic_register_redist_iodev(struct kvm_vcpu *vcpu)
+ 	rd_dev->nr_regions = ARRAY_SIZE(vgic_v3_rd_registers);
+ 	rd_dev->redist_vcpu = vcpu;
+ 
+-	mutex_lock(&kvm->slots_lock);
++	mutex_unlock(&kvm->arch.config_lock);
++
+ 	ret = kvm_io_bus_register_dev(kvm, KVM_MMIO_BUS, rd_base,
+ 				      2 * SZ_64K, &rd_dev->dev);
+-	mutex_unlock(&kvm->slots_lock);
+-
+ 	if (ret)
+ 		return ret;
+ 
++	/* Protected by slots_lock */
+ 	rdreg->free_index++;
+ 	return 0;
++
++out_unlock:
++	mutex_unlock(&kvm->arch.config_lock);
++	return ret;
+ }
+ 
+ static void vgic_unregister_redist_iodev(struct kvm_vcpu *vcpu)
+@@ -834,12 +843,10 @@ static int vgic_register_all_redist_iodevs(struct kvm *kvm)
+ 		/* The current c failed, so iterate over the previous ones. */
+ 		int i;
+ 
+-		mutex_lock(&kvm->slots_lock);
+ 		for (i = 0; i < c; i++) {
+ 			vcpu = kvm_get_vcpu(kvm, i);
+ 			vgic_unregister_redist_iodev(vcpu);
+ 		}
+-		mutex_unlock(&kvm->slots_lock);
+ 	}
+ 
+ 	return ret;
+@@ -938,7 +945,9 @@ int vgic_v3_set_redist_base(struct kvm *kvm, u32 index, u64 addr, u32 count)
+ {
+ 	int ret;
+ 
++	mutex_lock(&kvm->arch.config_lock);
+ 	ret = vgic_v3_alloc_redist_region(kvm, index, addr, count);
++	mutex_unlock(&kvm->arch.config_lock);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -950,8 +959,10 @@ int vgic_v3_set_redist_base(struct kvm *kvm, u32 index, u64 addr, u32 count)
+ 	if (ret) {
+ 		struct vgic_redist_region *rdreg;
+ 
++		mutex_lock(&kvm->arch.config_lock);
+ 		rdreg = vgic_v3_rdist_region_from_index(kvm, index);
+ 		vgic_v3_free_redist_region(rdreg);
++		mutex_unlock(&kvm->arch.config_lock);
+ 		return ret;
+ 	}
+ 
+diff --git a/arch/arm64/kvm/vgic/vgic-mmio.c b/arch/arm64/kvm/vgic/vgic-mmio.c
+index 1939c94e0b248..ff558c05e990c 100644
+--- a/arch/arm64/kvm/vgic/vgic-mmio.c
++++ b/arch/arm64/kvm/vgic/vgic-mmio.c
+@@ -1096,7 +1096,6 @@ int vgic_register_dist_iodev(struct kvm *kvm, gpa_t dist_base_address,
+ 			     enum vgic_type type)
+ {
+ 	struct vgic_io_device *io_device = &kvm->arch.vgic.dist_iodev;
+-	int ret = 0;
+ 	unsigned int len;
+ 
+ 	switch (type) {
+@@ -1114,10 +1113,6 @@ int vgic_register_dist_iodev(struct kvm *kvm, gpa_t dist_base_address,
+ 	io_device->iodev_type = IODEV_DIST;
+ 	io_device->redist_vcpu = NULL;
+ 
+-	mutex_lock(&kvm->slots_lock);
+-	ret = kvm_io_bus_register_dev(kvm, KVM_MMIO_BUS, dist_base_address,
+-				      len, &io_device->dev);
+-	mutex_unlock(&kvm->slots_lock);
+-
+-	return ret;
++	return kvm_io_bus_register_dev(kvm, KVM_MMIO_BUS, dist_base_address,
++				       len, &io_device->dev);
+ }
+diff --git a/arch/arm64/kvm/vgic/vgic-v2.c b/arch/arm64/kvm/vgic/vgic-v2.c
+index 645648349c99b..7e9cdb78f7ce8 100644
+--- a/arch/arm64/kvm/vgic/vgic-v2.c
++++ b/arch/arm64/kvm/vgic/vgic-v2.c
+@@ -312,12 +312,6 @@ int vgic_v2_map_resources(struct kvm *kvm)
+ 		return ret;
+ 	}
+ 
+-	ret = vgic_register_dist_iodev(kvm, dist->vgic_dist_base, VGIC_V2);
+-	if (ret) {
+-		kvm_err("Unable to register VGIC MMIO regions\n");
+-		return ret;
+-	}
+-
+ 	if (!static_branch_unlikely(&vgic_v2_cpuif_trap)) {
+ 		ret = kvm_phys_addr_ioremap(kvm, dist->vgic_cpu_base,
+ 					    kvm_vgic_global_state.vcpu_base,
+diff --git a/arch/arm64/kvm/vgic/vgic-v3.c b/arch/arm64/kvm/vgic/vgic-v3.c
+index 469d816f356f3..76af07e66d731 100644
+--- a/arch/arm64/kvm/vgic/vgic-v3.c
++++ b/arch/arm64/kvm/vgic/vgic-v3.c
+@@ -539,7 +539,6 @@ int vgic_v3_map_resources(struct kvm *kvm)
+ {
+ 	struct vgic_dist *dist = &kvm->arch.vgic;
+ 	struct kvm_vcpu *vcpu;
+-	int ret = 0;
+ 	unsigned long c;
+ 
+ 	kvm_for_each_vcpu(c, vcpu, kvm) {
+@@ -569,12 +568,6 @@ int vgic_v3_map_resources(struct kvm *kvm)
+ 		return -EBUSY;
+ 	}
+ 
+-	ret = vgic_register_dist_iodev(kvm, dist->vgic_dist_base, VGIC_V3);
+-	if (ret) {
+-		kvm_err("Unable to register VGICv3 dist MMIO regions\n");
+-		return ret;
+-	}
+-
+ 	if (kvm_vgic_global_state.has_gicv4_1)
+ 		vgic_v4_configure_vsgis(kvm);
+ 
+diff --git a/arch/arm64/kvm/vgic/vgic-v4.c b/arch/arm64/kvm/vgic/vgic-v4.c
+index 3bb0034780605..c1c28fe680ba3 100644
+--- a/arch/arm64/kvm/vgic/vgic-v4.c
++++ b/arch/arm64/kvm/vgic/vgic-v4.c
+@@ -184,13 +184,14 @@ static void vgic_v4_disable_vsgis(struct kvm_vcpu *vcpu)
+ 	}
+ }
+ 
+-/* Must be called with the kvm lock held */
+ void vgic_v4_configure_vsgis(struct kvm *kvm)
+ {
+ 	struct vgic_dist *dist = &kvm->arch.vgic;
+ 	struct kvm_vcpu *vcpu;
+ 	unsigned long i;
+ 
++	lockdep_assert_held(&kvm->arch.config_lock);
++
+ 	kvm_arm_halt_guest(kvm);
+ 
+ 	kvm_for_each_vcpu(i, vcpu, kvm) {
+diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
+index f4cb0f85ccf49..d1136259b7b85 100644
+--- a/arch/arm64/mm/fault.c
++++ b/arch/arm64/mm/fault.c
+@@ -480,8 +480,8 @@ static void do_bad_area(unsigned long far, unsigned long esr,
+ 	}
+ }
+ 
+-#define VM_FAULT_BADMAP		0x010000
+-#define VM_FAULT_BADACCESS	0x020000
++#define VM_FAULT_BADMAP		((__force vm_fault_t)0x010000)
++#define VM_FAULT_BADACCESS	((__force vm_fault_t)0x020000)
+ 
+ static vm_fault_t __do_page_fault(struct mm_struct *mm, unsigned long addr,
+ 				  unsigned int mm_flags, unsigned long vm_flags,
+diff --git a/arch/loongarch/Kconfig b/arch/loongarch/Kconfig
+index 3ddde336e6a56..3e5d6acbf2409 100644
+--- a/arch/loongarch/Kconfig
++++ b/arch/loongarch/Kconfig
+@@ -10,6 +10,7 @@ config LOONGARCH
+ 	select ARCH_ENABLE_MEMORY_HOTPLUG
+ 	select ARCH_ENABLE_MEMORY_HOTREMOVE
+ 	select ARCH_HAS_ACPI_TABLE_UPGRADE	if ACPI
++	select ARCH_HAS_FORTIFY_SOURCE
+ 	select ARCH_HAS_NMI_SAFE_THIS_CPU_OPS
+ 	select ARCH_HAS_PTE_SPECIAL
+ 	select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
+diff --git a/arch/powerpc/platforms/pseries/iommu.c b/arch/powerpc/platforms/pseries/iommu.c
+index a8acd3b402d7c..95e56c16d5144 100644
+--- a/arch/powerpc/platforms/pseries/iommu.c
++++ b/arch/powerpc/platforms/pseries/iommu.c
+@@ -311,13 +311,22 @@ static void tce_free_pSeriesLP(unsigned long liobn, long tcenum, long tceshift,
+ static void tce_freemulti_pSeriesLP(struct iommu_table *tbl, long tcenum, long npages)
+ {
+ 	u64 rc;
++	long rpages = npages;
++	unsigned long limit;
+ 
+ 	if (!firmware_has_feature(FW_FEATURE_STUFF_TCE))
+ 		return tce_free_pSeriesLP(tbl->it_index, tcenum,
+ 					  tbl->it_page_shift, npages);
+ 
+-	rc = plpar_tce_stuff((u64)tbl->it_index,
+-			     (u64)tcenum << tbl->it_page_shift, 0, npages);
++	do {
++		limit = min_t(unsigned long, rpages, 512);
++
++		rc = plpar_tce_stuff((u64)tbl->it_index,
++				     (u64)tcenum << tbl->it_page_shift, 0, limit);
++
++		rpages -= limit;
++		tcenum += limit;
++	} while (rpages > 0 && !rc);
+ 
+ 	if (rc && printk_ratelimit()) {
+ 		printk("tce_freemulti_pSeriesLP: plpar_tce_stuff failed\n");
+diff --git a/arch/powerpc/xmon/xmon.c b/arch/powerpc/xmon/xmon.c
+index e753a6bd48881..1a3b0f6451424 100644
+--- a/arch/powerpc/xmon/xmon.c
++++ b/arch/powerpc/xmon/xmon.c
+@@ -88,7 +88,7 @@ static unsigned long ndump = 64;
+ static unsigned long nidump = 16;
+ static unsigned long ncsum = 4096;
+ static int termch;
+-static char tmpstr[128];
++static char tmpstr[KSYM_NAME_LEN];
+ static int tracing_enabled;
+ 
+ static long bus_error_jmp[JMP_BUF_LEN];
+diff --git a/arch/riscv/include/asm/perf_event.h b/arch/riscv/include/asm/perf_event.h
+index d42c901f9a977..665bbc9b2f840 100644
+--- a/arch/riscv/include/asm/perf_event.h
++++ b/arch/riscv/include/asm/perf_event.h
+@@ -10,4 +10,11 @@
+ 
+ #include <linux/perf_event.h>
+ #define perf_arch_bpf_user_pt_regs(regs) (struct user_regs_struct *)regs
++
++#define perf_arch_fetch_caller_regs(regs, __ip) { \
++	(regs)->epc = (__ip); \
++	(regs)->s0 = (unsigned long) __builtin_frame_address(0); \
++	(regs)->sp = current_stack_pointer; \
++	(regs)->status = SR_PP; \
++}
+ #endif /* _ASM_RISCV_PERF_EVENT_H */
+diff --git a/arch/riscv/kernel/vmlinux.lds.S b/arch/riscv/kernel/vmlinux.lds.S
+index 53a8ad65b255f..db56c38f0e194 100644
+--- a/arch/riscv/kernel/vmlinux.lds.S
++++ b/arch/riscv/kernel/vmlinux.lds.S
+@@ -129,6 +129,8 @@ SECTIONS
+ 		*(.sdata*)
+ 	}
+ 
++	.got : { *(.got*) }
++
+ #ifdef CONFIG_EFI
+ 	.pecoff_edata_padding : { BYTE(0); . = ALIGN(PECOFF_FILE_ALIGNMENT); }
+ 	__pecoff_data_raw_size = ABSOLUTE(. - __pecoff_text_end);
+diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
+index 6ebb75a9a6b9f..dc1793bf01796 100644
+--- a/arch/riscv/mm/init.c
++++ b/arch/riscv/mm/init.c
+@@ -846,9 +846,9 @@ static void __init create_kernel_page_table(pgd_t *pgdir, bool early)
+ static void __init create_fdt_early_page_table(uintptr_t fix_fdt_va,
+ 					       uintptr_t dtb_pa)
+ {
++#ifndef CONFIG_BUILTIN_DTB
+ 	uintptr_t pa = dtb_pa & ~(PMD_SIZE - 1);
+ 
+-#ifndef CONFIG_BUILTIN_DTB
+ 	/* Make sure the fdt fixmap address is always aligned on PMD size */
+ 	BUILD_BUG_ON(FIX_FDT % (PMD_SIZE / PAGE_SIZE));
+ 
+diff --git a/arch/s390/kernel/ipl.c b/arch/s390/kernel/ipl.c
+index 5f0f5c86963a9..e43ee9becbbb9 100644
+--- a/arch/s390/kernel/ipl.c
++++ b/arch/s390/kernel/ipl.c
+@@ -1936,14 +1936,13 @@ static struct shutdown_action __refdata dump_action = {
+ 
+ static void dump_reipl_run(struct shutdown_trigger *trigger)
+ {
+-	unsigned long ipib = (unsigned long) reipl_block_actual;
+ 	struct lowcore *abs_lc;
+ 	unsigned int csum;
+ 
+ 	csum = (__force unsigned int)
+ 	       csum_partial(reipl_block_actual, reipl_block_actual->hdr.len, 0);
+ 	abs_lc = get_abs_lowcore();
+-	abs_lc->ipib = ipib;
++	abs_lc->ipib = __pa(reipl_block_actual);
+ 	abs_lc->ipib_checksum = csum;
+ 	put_abs_lowcore(abs_lc);
+ 	dump_run(trigger);
+diff --git a/arch/s390/kernel/topology.c b/arch/s390/kernel/topology.c
+index c6eecd4a5302d..10b20aeb27d3b 100644
+--- a/arch/s390/kernel/topology.c
++++ b/arch/s390/kernel/topology.c
+@@ -95,7 +95,7 @@ out:
+ static void cpu_thread_map(cpumask_t *dst, unsigned int cpu)
+ {
+ 	static cpumask_t mask;
+-	int i;
++	unsigned int max_cpu;
+ 
+ 	cpumask_clear(&mask);
+ 	if (!cpumask_test_cpu(cpu, &cpu_setup_mask))
+@@ -104,9 +104,10 @@ static void cpu_thread_map(cpumask_t *dst, unsigned int cpu)
+ 	if (topology_mode != TOPOLOGY_MODE_HW)
+ 		goto out;
+ 	cpu -= cpu % (smp_cpu_mtid + 1);
+-	for (i = 0; i <= smp_cpu_mtid; i++) {
+-		if (cpumask_test_cpu(cpu + i, &cpu_setup_mask))
+-			cpumask_set_cpu(cpu + i, &mask);
++	max_cpu = min(cpu + smp_cpu_mtid, nr_cpu_ids - 1);
++	for (; cpu <= max_cpu; cpu++) {
++		if (cpumask_test_cpu(cpu, &cpu_setup_mask))
++			cpumask_set_cpu(cpu, &mask);
+ 	}
+ out:
+ 	cpumask_copy(dst, &mask);
+@@ -123,25 +124,26 @@ static void add_cpus_to_mask(struct topology_core *tl_core,
+ 	unsigned int core;
+ 
+ 	for_each_set_bit(core, &tl_core->mask, TOPOLOGY_CORE_BITS) {
+-		unsigned int rcore;
+-		int lcpu, i;
++		unsigned int max_cpu, rcore;
++		int cpu;
+ 
+ 		rcore = TOPOLOGY_CORE_BITS - 1 - core + tl_core->origin;
+-		lcpu = smp_find_processor_id(rcore << smp_cpu_mt_shift);
+-		if (lcpu < 0)
++		cpu = smp_find_processor_id(rcore << smp_cpu_mt_shift);
++		if (cpu < 0)
+ 			continue;
+-		for (i = 0; i <= smp_cpu_mtid; i++) {
+-			topo = &cpu_topology[lcpu + i];
++		max_cpu = min(cpu + smp_cpu_mtid, nr_cpu_ids - 1);
++		for (; cpu <= max_cpu; cpu++) {
++			topo = &cpu_topology[cpu];
+ 			topo->drawer_id = drawer->id;
+ 			topo->book_id = book->id;
+ 			topo->socket_id = socket->id;
+ 			topo->core_id = rcore;
+-			topo->thread_id = lcpu + i;
++			topo->thread_id = cpu;
+ 			topo->dedicated = tl_core->d;
+-			cpumask_set_cpu(lcpu + i, &drawer->mask);
+-			cpumask_set_cpu(lcpu + i, &book->mask);
+-			cpumask_set_cpu(lcpu + i, &socket->mask);
+-			smp_cpu_set_polarization(lcpu + i, tl_core->pp);
++			cpumask_set_cpu(cpu, &drawer->mask);
++			cpumask_set_cpu(cpu, &book->mask);
++			cpumask_set_cpu(cpu, &socket->mask);
++			smp_cpu_set_polarization(cpu, tl_core->pp);
+ 		}
+ 	}
+ }
+diff --git a/arch/um/drivers/Makefile b/arch/um/drivers/Makefile
+index dee6f66353b33..a461a950f0518 100644
+--- a/arch/um/drivers/Makefile
++++ b/arch/um/drivers/Makefile
+@@ -16,7 +16,8 @@ mconsole-objs := mconsole_kern.o mconsole_user.o
+ hostaudio-objs := hostaudio_kern.o
+ ubd-objs := ubd_kern.o ubd_user.o
+ port-objs := port_kern.o port_user.o
+-harddog-objs := harddog_kern.o harddog_user.o
++harddog-objs := harddog_kern.o
++harddog-builtin-$(CONFIG_UML_WATCHDOG) := harddog_user.o harddog_user_exp.o
+ rtc-objs := rtc_kern.o rtc_user.o
+ 
+ LDFLAGS_pcap.o = $(shell $(CC) $(KBUILD_CFLAGS) -print-file-name=libpcap.a)
+@@ -60,6 +61,7 @@ obj-$(CONFIG_PTY_CHAN) += pty.o
+ obj-$(CONFIG_TTY_CHAN) += tty.o 
+ obj-$(CONFIG_XTERM_CHAN) += xterm.o xterm_kern.o
+ obj-$(CONFIG_UML_WATCHDOG) += harddog.o
++obj-y += $(harddog-builtin-y) $(harddog-builtin-m)
+ obj-$(CONFIG_BLK_DEV_COW_COMMON) += cow_user.o
+ obj-$(CONFIG_UML_RANDOM) += random.o
+ obj-$(CONFIG_VIRTIO_UML) += virtio_uml.o
+diff --git a/arch/um/drivers/harddog.h b/arch/um/drivers/harddog.h
+new file mode 100644
+index 0000000000000..6d9ea60e7133e
+--- /dev/null
++++ b/arch/um/drivers/harddog.h
+@@ -0,0 +1,9 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++#ifndef UM_WATCHDOG_H
++#define UM_WATCHDOG_H
++
++int start_watchdog(int *in_fd_ret, int *out_fd_ret, char *sock);
++void stop_watchdog(int in_fd, int out_fd);
++int ping_watchdog(int fd);
++
++#endif /* UM_WATCHDOG_H */
+diff --git a/arch/um/drivers/harddog_kern.c b/arch/um/drivers/harddog_kern.c
+index e6d4f43deba82..60d1c6cab8a95 100644
+--- a/arch/um/drivers/harddog_kern.c
++++ b/arch/um/drivers/harddog_kern.c
+@@ -47,6 +47,7 @@
+ #include <linux/spinlock.h>
+ #include <linux/uaccess.h>
+ #include "mconsole.h"
++#include "harddog.h"
+ 
+ MODULE_LICENSE("GPL");
+ 
+@@ -60,8 +61,6 @@ static int harddog_out_fd = -1;
+  *	Allow only one person to hold it open
+  */
+ 
+-extern int start_watchdog(int *in_fd_ret, int *out_fd_ret, char *sock);
+-
+ static int harddog_open(struct inode *inode, struct file *file)
+ {
+ 	int err = -EBUSY;
+@@ -92,8 +91,6 @@ err:
+ 	return err;
+ }
+ 
+-extern void stop_watchdog(int in_fd, int out_fd);
+-
+ static int harddog_release(struct inode *inode, struct file *file)
+ {
+ 	/*
+@@ -112,8 +109,6 @@ static int harddog_release(struct inode *inode, struct file *file)
+ 	return 0;
+ }
+ 
+-extern int ping_watchdog(int fd);
+-
+ static ssize_t harddog_write(struct file *file, const char __user *data, size_t len,
+ 			     loff_t *ppos)
+ {
+diff --git a/arch/um/drivers/harddog_user.c b/arch/um/drivers/harddog_user.c
+index 070468d22e394..9ed89304975ed 100644
+--- a/arch/um/drivers/harddog_user.c
++++ b/arch/um/drivers/harddog_user.c
+@@ -7,6 +7,7 @@
+ #include <unistd.h>
+ #include <errno.h>
+ #include <os.h>
++#include "harddog.h"
+ 
+ struct dog_data {
+ 	int stdin_fd;
+diff --git a/arch/um/drivers/harddog_user_exp.c b/arch/um/drivers/harddog_user_exp.c
+new file mode 100644
+index 0000000000000..c74d4b815d143
+--- /dev/null
++++ b/arch/um/drivers/harddog_user_exp.c
+@@ -0,0 +1,9 @@
++// SPDX-License-Identifier: GPL-2.0
++#include <linux/export.h>
++#include "harddog.h"
++
++#if IS_MODULE(CONFIG_UML_WATCHDOG)
++EXPORT_SYMBOL(start_watchdog);
++EXPORT_SYMBOL(stop_watchdog);
++EXPORT_SYMBOL(ping_watchdog);
++#endif
+diff --git a/arch/x86/crypto/aria-aesni-avx-asm_64.S b/arch/x86/crypto/aria-aesni-avx-asm_64.S
+index 9243f6289d34b..ed6c22fb16720 100644
+--- a/arch/x86/crypto/aria-aesni-avx-asm_64.S
++++ b/arch/x86/crypto/aria-aesni-avx-asm_64.S
+@@ -773,8 +773,6 @@
+ 	.octa 0x3F893781E95FE1576CDA64D2BA0CB204
+ 
+ #ifdef CONFIG_AS_GFNI
+-.section	.rodata.cst8, "aM", @progbits, 8
+-.align 8
+ /* AES affine: */
+ #define tf_aff_const BV8(1, 1, 0, 0, 0, 1, 1, 0)
+ .Ltf_aff_bitmatrix:
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index a3fb996a86a10..161b8f71eb5a7 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -4074,7 +4074,7 @@ static struct perf_guest_switch_msr *intel_guest_get_msrs(int *nr, void *data)
+ 	if (x86_pmu.intel_cap.pebs_baseline) {
+ 		arr[(*nr)++] = (struct perf_guest_switch_msr){
+ 			.msr = MSR_PEBS_DATA_CFG,
+-			.host = cpuc->pebs_data_cfg,
++			.host = cpuc->active_pebs_data_cfg,
+ 			.guest = kvm_pmu->pebs_data_cfg,
+ 		};
+ 	}
+diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
+index a2e566e53076e..df88576d6b2a5 100644
+--- a/arch/x86/events/intel/ds.c
++++ b/arch/x86/events/intel/ds.c
+@@ -1229,12 +1229,14 @@ pebs_update_state(bool needed_cb, struct cpu_hw_events *cpuc,
+ 		  struct perf_event *event, bool add)
+ {
+ 	struct pmu *pmu = event->pmu;
++
+ 	/*
+ 	 * Make sure we get updated with the first PEBS
+ 	 * event. It will trigger also during removal, but
+ 	 * that does not hurt:
+ 	 */
+-	bool update = cpuc->n_pebs == 1;
++	if (cpuc->n_pebs == 1)
++		cpuc->pebs_data_cfg = PEBS_UPDATE_DS_SW;
+ 
+ 	if (needed_cb != pebs_needs_sched_cb(cpuc)) {
+ 		if (!needed_cb)
+@@ -1242,7 +1244,7 @@ pebs_update_state(bool needed_cb, struct cpu_hw_events *cpuc,
+ 		else
+ 			perf_sched_cb_dec(pmu);
+ 
+-		update = true;
++		cpuc->pebs_data_cfg |= PEBS_UPDATE_DS_SW;
+ 	}
+ 
+ 	/*
+@@ -1252,24 +1254,13 @@ pebs_update_state(bool needed_cb, struct cpu_hw_events *cpuc,
+ 	if (x86_pmu.intel_cap.pebs_baseline && add) {
+ 		u64 pebs_data_cfg;
+ 
+-		/* Clear pebs_data_cfg and pebs_record_size for first PEBS. */
+-		if (cpuc->n_pebs == 1) {
+-			cpuc->pebs_data_cfg = 0;
+-			cpuc->pebs_record_size = sizeof(struct pebs_basic);
+-		}
+-
+ 		pebs_data_cfg = pebs_update_adaptive_cfg(event);
+-
+-		/* Update pebs_record_size if new event requires more data. */
+-		if (pebs_data_cfg & ~cpuc->pebs_data_cfg) {
+-			cpuc->pebs_data_cfg |= pebs_data_cfg;
+-			adaptive_pebs_record_size_update();
+-			update = true;
+-		}
++		/*
++		 * Be sure to update the thresholds when we change the record.
++		 */
++		if (pebs_data_cfg & ~cpuc->pebs_data_cfg)
++			cpuc->pebs_data_cfg |= pebs_data_cfg | PEBS_UPDATE_DS_SW;
+ 	}
+-
+-	if (update)
+-		pebs_update_threshold(cpuc);
+ }
+ 
+ void intel_pmu_pebs_add(struct perf_event *event)
+@@ -1326,9 +1317,17 @@ static void intel_pmu_pebs_via_pt_enable(struct perf_event *event)
+ 	wrmsrl(base + idx, value);
+ }
+ 
++static inline void intel_pmu_drain_large_pebs(struct cpu_hw_events *cpuc)
++{
++	if (cpuc->n_pebs == cpuc->n_large_pebs &&
++	    cpuc->n_pebs != cpuc->n_pebs_via_pt)
++		intel_pmu_drain_pebs_buffer();
++}
++
+ void intel_pmu_pebs_enable(struct perf_event *event)
+ {
+ 	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
++	u64 pebs_data_cfg = cpuc->pebs_data_cfg & ~PEBS_UPDATE_DS_SW;
+ 	struct hw_perf_event *hwc = &event->hw;
+ 	struct debug_store *ds = cpuc->ds;
+ 	unsigned int idx = hwc->idx;
+@@ -1344,11 +1343,22 @@ void intel_pmu_pebs_enable(struct perf_event *event)
+ 
+ 	if (x86_pmu.intel_cap.pebs_baseline) {
+ 		hwc->config |= ICL_EVENTSEL_ADAPTIVE;
+-		if (cpuc->pebs_data_cfg != cpuc->active_pebs_data_cfg) {
+-			wrmsrl(MSR_PEBS_DATA_CFG, cpuc->pebs_data_cfg);
+-			cpuc->active_pebs_data_cfg = cpuc->pebs_data_cfg;
++		if (pebs_data_cfg != cpuc->active_pebs_data_cfg) {
++			/*
++			 * drain_pebs() assumes uniform record size;
++			 * hence we need to drain when changing said
++			 * size.
++			 */
++			intel_pmu_drain_large_pebs(cpuc);
++			adaptive_pebs_record_size_update();
++			wrmsrl(MSR_PEBS_DATA_CFG, pebs_data_cfg);
++			cpuc->active_pebs_data_cfg = pebs_data_cfg;
+ 		}
+ 	}
++	if (cpuc->pebs_data_cfg & PEBS_UPDATE_DS_SW) {
++		cpuc->pebs_data_cfg = pebs_data_cfg;
++		pebs_update_threshold(cpuc);
++	}
+ 
+ 	if (idx >= INTEL_PMC_IDX_FIXED) {
+ 		if (x86_pmu.intel_cap.pebs_format < 5)
+@@ -1391,9 +1401,7 @@ void intel_pmu_pebs_disable(struct perf_event *event)
+ 	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
+ 	struct hw_perf_event *hwc = &event->hw;
+ 
+-	if (cpuc->n_pebs == cpuc->n_large_pebs &&
+-	    cpuc->n_pebs != cpuc->n_pebs_via_pt)
+-		intel_pmu_drain_pebs_buffer();
++	intel_pmu_drain_large_pebs(cpuc);
+ 
+ 	cpuc->pebs_enabled &= ~(1ULL << hwc->idx);
+ 
+diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
+index 8fc15ed5e60bb..abf09882f58b6 100644
+--- a/arch/x86/include/asm/perf_event.h
++++ b/arch/x86/include/asm/perf_event.h
+@@ -121,6 +121,9 @@
+ #define PEBS_DATACFG_LBRS	BIT_ULL(3)
+ #define PEBS_DATACFG_LBR_SHIFT	24
+ 
++/* Steal the highest bit of pebs_data_cfg for SW usage */
++#define PEBS_UPDATE_DS_SW	BIT_ULL(63)
++
+ /*
+  * Intel "Architectural Performance Monitoring" CPUID
+  * detection/enumeration details:
+diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
+index e542cf285b518..3c300a196bdf9 100644
+--- a/arch/x86/kvm/lapic.c
++++ b/arch/x86/kvm/lapic.c
+@@ -228,6 +228,23 @@ static int kvm_recalculate_phys_map(struct kvm_apic_map *new,
+ 	u32 xapic_id = kvm_xapic_id(apic);
+ 	u32 physical_id;
+ 
++	/*
++	 * For simplicity, KVM always allocates enough space for all possible
++	 * xAPIC IDs.  Yell, but don't kill the VM, as KVM can continue on
++	 * without the optimized map.
++	 */
++	if (WARN_ON_ONCE(xapic_id > new->max_apic_id))
++		return -EINVAL;
++
++	/*
++	 * Bail if a vCPU was added and/or enabled its APIC between allocating
++	 * the map and doing the actual calculations for the map.  Note, KVM
++	 * hardcodes the x2APIC ID to vcpu_id, i.e. there's no TOCTOU bug if
++	 * the compiler decides to reload x2apic_id after this check.
++	 */
++	if (x2apic_id > new->max_apic_id)
++		return -E2BIG;
++
+ 	/*
+ 	 * Deliberately truncate the vCPU ID when detecting a mismatched APIC
+ 	 * ID to avoid false positives if the vCPU ID, i.e. x2APIC ID, is a
+@@ -253,8 +270,7 @@ static int kvm_recalculate_phys_map(struct kvm_apic_map *new,
+ 	 */
+ 	if (vcpu->kvm->arch.x2apic_format) {
+ 		/* See also kvm_apic_match_physical_addr(). */
+-		if ((apic_x2apic_mode(apic) || x2apic_id > 0xff) &&
+-			x2apic_id <= new->max_apic_id)
++		if (apic_x2apic_mode(apic) || x2apic_id > 0xff)
+ 			new->phys_map[x2apic_id] = apic;
+ 
+ 		if (!apic_x2apic_mode(apic) && !new->phys_map[xapic_id])
+diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
+index d3812de54b02c..d5c03f14cdc70 100644
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -7011,7 +7011,10 @@ static void kvm_recover_nx_huge_pages(struct kvm *kvm)
+ 		 */
+ 		slot = NULL;
+ 		if (atomic_read(&kvm->nr_memslots_dirty_logging)) {
+-			slot = gfn_to_memslot(kvm, sp->gfn);
++			struct kvm_memslots *slots;
++
++			slots = kvm_memslots_for_spte_role(kvm, sp->role);
++			slot = __gfn_to_memslot(slots, sp->gfn);
+ 			WARN_ON_ONCE(!slot);
+ 		}
+ 
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 999b2db0737be..38c2f4f1f59a8 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -10682,6 +10682,9 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
+ 			exit_fastpath = EXIT_FASTPATH_EXIT_HANDLED;
+ 			break;
+ 		}
++
++		/* Note, VM-Exits that go down the "slow" path are accounted below. */
++		++vcpu->stat.exits;
+ 	}
+ 
+ 	/*
+diff --git a/block/blk-settings.c b/block/blk-settings.c
+index 896b4654ab00f..4dd59059b788e 100644
+--- a/block/blk-settings.c
++++ b/block/blk-settings.c
+@@ -915,6 +915,7 @@ static bool disk_has_partitions(struct gendisk *disk)
+ void disk_set_zoned(struct gendisk *disk, enum blk_zoned_model model)
+ {
+ 	struct request_queue *q = disk->queue;
++	unsigned int old_model = q->limits.zoned;
+ 
+ 	switch (model) {
+ 	case BLK_ZONED_HM:
+@@ -952,7 +953,7 @@ void disk_set_zoned(struct gendisk *disk, enum blk_zoned_model model)
+ 		 */
+ 		blk_queue_zone_write_granularity(q,
+ 						queue_logical_block_size(q));
+-	} else {
++	} else if (old_model != BLK_ZONED_NONE) {
+ 		disk_clear_zone_settings(disk);
+ 	}
+ }
+diff --git a/block/fops.c b/block/fops.c
+index d2e6be4e3d1c7..58d0aebc7313a 100644
+--- a/block/fops.c
++++ b/block/fops.c
+@@ -678,6 +678,16 @@ static long blkdev_fallocate(struct file *file, int mode, loff_t start,
+ 	return error;
+ }
+ 
++static int blkdev_mmap(struct file *file, struct vm_area_struct *vma)
++{
++	struct inode *bd_inode = bdev_file_inode(file);
++
++	if (bdev_read_only(I_BDEV(bd_inode)))
++		return generic_file_readonly_mmap(file, vma);
++
++	return generic_file_mmap(file, vma);
++}
++
+ const struct file_operations def_blk_fops = {
+ 	.open		= blkdev_open,
+ 	.release	= blkdev_close,
+@@ -685,7 +695,7 @@ const struct file_operations def_blk_fops = {
+ 	.read_iter	= blkdev_read_iter,
+ 	.write_iter	= blkdev_write_iter,
+ 	.iopoll		= iocb_bio_iopoll,
+-	.mmap		= generic_file_mmap,
++	.mmap		= blkdev_mmap,
+ 	.fsync		= blkdev_fsync,
+ 	.unlocked_ioctl	= blkdev_ioctl,
+ #ifdef CONFIG_COMPAT
+diff --git a/crypto/asymmetric_keys/public_key.c b/crypto/asymmetric_keys/public_key.c
+index eca5671ad3f22..50c933f86b218 100644
+--- a/crypto/asymmetric_keys/public_key.c
++++ b/crypto/asymmetric_keys/public_key.c
+@@ -380,9 +380,10 @@ int public_key_verify_signature(const struct public_key *pkey,
+ 	struct crypto_wait cwait;
+ 	struct crypto_akcipher *tfm;
+ 	struct akcipher_request *req;
+-	struct scatterlist src_sg[2];
++	struct scatterlist src_sg;
+ 	char alg_name[CRYPTO_MAX_ALG_NAME];
+-	char *key, *ptr;
++	char *buf, *ptr;
++	size_t buf_len;
+ 	int ret;
+ 
+ 	pr_devel("==>%s()\n", __func__);
+@@ -420,34 +421,37 @@ int public_key_verify_signature(const struct public_key *pkey,
+ 	if (!req)
+ 		goto error_free_tfm;
+ 
+-	key = kmalloc(pkey->keylen + sizeof(u32) * 2 + pkey->paramlen,
+-		      GFP_KERNEL);
+-	if (!key)
++	buf_len = max_t(size_t, pkey->keylen + sizeof(u32) * 2 + pkey->paramlen,
++			sig->s_size + sig->digest_size);
++
++	buf = kmalloc(buf_len, GFP_KERNEL);
++	if (!buf)
+ 		goto error_free_req;
+ 
+-	memcpy(key, pkey->key, pkey->keylen);
+-	ptr = key + pkey->keylen;
++	memcpy(buf, pkey->key, pkey->keylen);
++	ptr = buf + pkey->keylen;
+ 	ptr = pkey_pack_u32(ptr, pkey->algo);
+ 	ptr = pkey_pack_u32(ptr, pkey->paramlen);
+ 	memcpy(ptr, pkey->params, pkey->paramlen);
+ 
+ 	if (pkey->key_is_private)
+-		ret = crypto_akcipher_set_priv_key(tfm, key, pkey->keylen);
++		ret = crypto_akcipher_set_priv_key(tfm, buf, pkey->keylen);
+ 	else
+-		ret = crypto_akcipher_set_pub_key(tfm, key, pkey->keylen);
++		ret = crypto_akcipher_set_pub_key(tfm, buf, pkey->keylen);
+ 	if (ret)
+-		goto error_free_key;
++		goto error_free_buf;
+ 
+ 	if (strcmp(pkey->pkey_algo, "sm2") == 0 && sig->data_size) {
+ 		ret = cert_sig_digest_update(sig, tfm);
+ 		if (ret)
+-			goto error_free_key;
++			goto error_free_buf;
+ 	}
+ 
+-	sg_init_table(src_sg, 2);
+-	sg_set_buf(&src_sg[0], sig->s, sig->s_size);
+-	sg_set_buf(&src_sg[1], sig->digest, sig->digest_size);
+-	akcipher_request_set_crypt(req, src_sg, NULL, sig->s_size,
++	memcpy(buf, sig->s, sig->s_size);
++	memcpy(buf + sig->s_size, sig->digest, sig->digest_size);
++
++	sg_init_one(&src_sg, buf, sig->s_size + sig->digest_size);
++	akcipher_request_set_crypt(req, &src_sg, NULL, sig->s_size,
+ 				   sig->digest_size);
+ 	crypto_init_wait(&cwait);
+ 	akcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG |
+@@ -455,8 +459,8 @@ int public_key_verify_signature(const struct public_key *pkey,
+ 				      crypto_req_done, &cwait);
+ 	ret = crypto_wait_req(crypto_akcipher_verify(req), &cwait);
+ 
+-error_free_key:
+-	kfree(key);
++error_free_buf:
++	kfree(buf);
+ error_free_req:
+ 	akcipher_request_free(req);
+ error_free_tfm:
+diff --git a/drivers/acpi/resource.c b/drivers/acpi/resource.c
+index e8492b3a393ab..0800a9d775580 100644
+--- a/drivers/acpi/resource.c
++++ b/drivers/acpi/resource.c
+@@ -516,6 +516,17 @@ static const struct dmi_system_id maingear_laptop[] = {
+ 	{ }
+ };
+ 
++static const struct dmi_system_id lg_laptop[] = {
++	{
++		.ident = "LG Electronics 17U70P",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "LG Electronics"),
++			DMI_MATCH(DMI_BOARD_NAME, "17U70P"),
++		},
++	},
++	{ }
++};
++
+ struct irq_override_cmp {
+ 	const struct dmi_system_id *system;
+ 	unsigned char irq;
+@@ -532,6 +543,7 @@ static const struct irq_override_cmp override_table[] = {
+ 	{ lenovo_laptop, 10, ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_LOW, 0, true },
+ 	{ tongfang_gm_rg, 1, ACPI_EDGE_SENSITIVE, ACPI_ACTIVE_LOW, 1, true },
+ 	{ maingear_laptop, 1, ACPI_EDGE_SENSITIVE, ACPI_ACTIVE_LOW, 1, true },
++	{ lg_laptop, 1, ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_LOW, 0, false },
+ };
+ 
+ static bool acpi_dev_irq_override(u32 gsi, u8 triggering, u8 polarity,
+diff --git a/drivers/ata/libata-scsi.c b/drivers/ata/libata-scsi.c
+index e093c7a7deebd..a9e768fdae4e3 100644
+--- a/drivers/ata/libata-scsi.c
++++ b/drivers/ata/libata-scsi.c
+@@ -2694,18 +2694,36 @@ static unsigned int atapi_xlat(struct ata_queued_cmd *qc)
+ 	return 0;
+ }
+ 
+-static struct ata_device *ata_find_dev(struct ata_port *ap, int devno)
++static struct ata_device *ata_find_dev(struct ata_port *ap, unsigned int devno)
+ {
+-	if (!sata_pmp_attached(ap)) {
+-		if (likely(devno >= 0 &&
+-			   devno < ata_link_max_devices(&ap->link)))
++	/*
++	 * For the non-PMP case, ata_link_max_devices() returns 1 (SATA case),
++	 * or 2 (IDE master + slave case). However, the former case includes
++	 * libsas hosted devices which are numbered per scsi host, leading
++	 * to devno potentially being larger than 0 but with each struct
++	 * ata_device having its own struct ata_port and struct ata_link.
++	 * To accommodate these, ignore devno and always use device number 0.
++	 */
++	if (likely(!sata_pmp_attached(ap))) {
++		int link_max_devices = ata_link_max_devices(&ap->link);
++
++		if (link_max_devices == 1)
++			return &ap->link.device[0];
++
++		if (devno < link_max_devices)
+ 			return &ap->link.device[devno];
+-	} else {
+-		if (likely(devno >= 0 &&
+-			   devno < ap->nr_pmp_links))
+-			return &ap->pmp_link[devno].device[0];
++
++		return NULL;
+ 	}
+ 
++	/*
++	 * For PMP-attached devices, the device number corresponds to C
++	 * (channel) of SCSI [H:C:I:L], indicating the port pmp link
++	 * for the device.
++	 */
++	if (devno < ap->nr_pmp_links)
++		return &ap->pmp_link[devno].device[0];
++
+ 	return NULL;
+ }
+ 
+diff --git a/drivers/base/cacheinfo.c b/drivers/base/cacheinfo.c
+index ea8f416852bd9..0fc8fbe7b361d 100644
+--- a/drivers/base/cacheinfo.c
++++ b/drivers/base/cacheinfo.c
+@@ -380,6 +380,16 @@ static int cache_shared_cpu_map_setup(unsigned int cpu)
+ 				continue;/* skip if itself or no cacheinfo */
+ 			for (sib_index = 0; sib_index < cache_leaves(i); sib_index++) {
+ 				sib_leaf = per_cpu_cacheinfo_idx(i, sib_index);
++
++				/*
++				 * Comparing cache IDs only makes sense if the leaves
++				 * belong to the same cache level of same type. Skip
++				 * the check if level and type do not match.
++				 */
++				if (sib_leaf->level != this_leaf->level ||
++				    sib_leaf->type != this_leaf->type)
++					continue;
++
+ 				if (cache_leaves_are_shared(this_leaf, sib_leaf)) {
+ 					cpumask_set_cpu(cpu, &sib_leaf->shared_cpu_map);
+ 					cpumask_set_cpu(i, &this_leaf->shared_cpu_map);
+@@ -392,11 +402,14 @@ static int cache_shared_cpu_map_setup(unsigned int cpu)
+ 			coherency_max_size = this_leaf->coherency_line_size;
+ 	}
+ 
++	/* shared_cpu_map is now populated for the cpu */
++	this_cpu_ci->cpu_map_populated = true;
+ 	return 0;
+ }
+ 
+ static void cache_shared_cpu_map_remove(unsigned int cpu)
+ {
++	struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
+ 	struct cacheinfo *this_leaf, *sib_leaf;
+ 	unsigned int sibling, index, sib_index;
+ 
+@@ -411,6 +424,16 @@ static void cache_shared_cpu_map_remove(unsigned int cpu)
+ 
+ 			for (sib_index = 0; sib_index < cache_leaves(sibling); sib_index++) {
+ 				sib_leaf = per_cpu_cacheinfo_idx(sibling, sib_index);
++
++				/*
++				 * Comparing cache IDs only makes sense if the leaves
++				 * belong to the same cache level of same type. Skip
++				 * the check if level and type do not match.
++				 */
++				if (sib_leaf->level != this_leaf->level ||
++				    sib_leaf->type != this_leaf->type)
++					continue;
++
+ 				if (cache_leaves_are_shared(this_leaf, sib_leaf)) {
+ 					cpumask_clear_cpu(cpu, &sib_leaf->shared_cpu_map);
+ 					cpumask_clear_cpu(sibling, &this_leaf->shared_cpu_map);
+@@ -419,6 +442,9 @@ static void cache_shared_cpu_map_remove(unsigned int cpu)
+ 			}
+ 		}
+ 	}
++
++	/* cpu is no longer populated in the shared map */
++	this_cpu_ci->cpu_map_populated = false;
+ }
+ 
+ static void free_cache_attributes(unsigned int cpu)
+diff --git a/drivers/base/regmap/regmap.c b/drivers/base/regmap/regmap.c
+index d2a54eb0efd9b..f9f38ab3f030e 100644
+--- a/drivers/base/regmap/regmap.c
++++ b/drivers/base/regmap/regmap.c
+@@ -2064,6 +2064,8 @@ int _regmap_raw_write(struct regmap *map, unsigned int reg,
+ 	size_t val_count = val_len / val_bytes;
+ 	size_t chunk_count, chunk_bytes;
+ 	size_t chunk_regs = val_count;
++	size_t max_data = map->max_raw_write - map->format.reg_bytes -
++			map->format.pad_bytes;
+ 	int ret, i;
+ 
+ 	if (!val_count)
+@@ -2071,8 +2073,8 @@ int _regmap_raw_write(struct regmap *map, unsigned int reg,
+ 
+ 	if (map->use_single_write)
+ 		chunk_regs = 1;
+-	else if (map->max_raw_write && val_len > map->max_raw_write)
+-		chunk_regs = map->max_raw_write / val_bytes;
++	else if (map->max_raw_write && val_len > max_data)
++		chunk_regs = max_data / val_bytes;
+ 
+ 	chunk_count = val_count / chunk_regs;
+ 	chunk_bytes = chunk_regs * val_bytes;
+diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
+index e1c954094b6c0..dd0adcf745ff5 100644
+--- a/drivers/block/nbd.c
++++ b/drivers/block/nbd.c
+@@ -1666,7 +1666,7 @@ static int nbd_dev_dbg_init(struct nbd_device *nbd)
+ 		return -EIO;
+ 
+ 	dir = debugfs_create_dir(nbd_name(nbd), nbd_dbg_dir);
+-	if (!dir) {
++	if (IS_ERR(dir)) {
+ 		dev_err(nbd_to_dev(nbd), "Failed to create debugfs dir for '%s'\n",
+ 			nbd_name(nbd));
+ 		return -EIO;
+@@ -1692,7 +1692,7 @@ static int nbd_dbg_init(void)
+ 	struct dentry *dbg_dir;
+ 
+ 	dbg_dir = debugfs_create_dir("nbd", NULL);
+-	if (!dbg_dir)
++	if (IS_ERR(dbg_dir))
+ 		return -EIO;
+ 
+ 	nbd_dbg_dir = dbg_dir;
+diff --git a/drivers/block/rnbd/rnbd-proto.h b/drivers/block/rnbd/rnbd-proto.h
+index ea7ac8bca63cf..da1d0542d7e2c 100644
+--- a/drivers/block/rnbd/rnbd-proto.h
++++ b/drivers/block/rnbd/rnbd-proto.h
+@@ -241,7 +241,7 @@ static inline blk_opf_t rnbd_to_bio_flags(u32 rnbd_opf)
+ 		bio_opf = REQ_OP_WRITE;
+ 		break;
+ 	case RNBD_OP_FLUSH:
+-		bio_opf = REQ_OP_FLUSH | REQ_PREFLUSH;
++		bio_opf = REQ_OP_WRITE | REQ_PREFLUSH;
+ 		break;
+ 	case RNBD_OP_DISCARD:
+ 		bio_opf = REQ_OP_DISCARD;
+diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
+index 41c35ab2c25a1..4db5f1bcac44a 100644
+--- a/drivers/block/ublk_drv.c
++++ b/drivers/block/ublk_drv.c
+@@ -1122,6 +1122,11 @@ static inline bool ublk_queue_ready(struct ublk_queue *ubq)
+ 	return ubq->nr_io_ready == ubq->q_depth;
+ }
+ 
++static void ublk_cmd_cancel_cb(struct io_uring_cmd *cmd, unsigned issue_flags)
++{
++	io_uring_cmd_done(cmd, UBLK_IO_RES_ABORT, 0, issue_flags);
++}
++
+ static void ublk_cancel_queue(struct ublk_queue *ubq)
+ {
+ 	int i;
+@@ -1133,8 +1138,8 @@ static void ublk_cancel_queue(struct ublk_queue *ubq)
+ 		struct ublk_io *io = &ubq->ios[i];
+ 
+ 		if (io->flags & UBLK_IO_FLAG_ACTIVE)
+-			io_uring_cmd_done(io->cmd, UBLK_IO_RES_ABORT, 0,
+-						IO_URING_F_UNLOCKED);
++			io_uring_cmd_complete_in_task(io->cmd,
++						      ublk_cmd_cancel_cb);
+ 	}
+ 
+ 	/* all io commands are canceled */
+diff --git a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c
+index f02b583005a53..e05d2b227de37 100644
+--- a/drivers/char/tpm/tpm_tis_core.c
++++ b/drivers/char/tpm/tpm_tis_core.c
+@@ -805,8 +805,11 @@ static int tpm_tis_probe_irq_single(struct tpm_chip *chip, u32 intmask,
+ 	int rc;
+ 	u32 int_status;
+ 
+-	if (devm_request_irq(chip->dev.parent, irq, tis_int_handler, flags,
+-			     dev_name(&chip->dev), chip) != 0) {
++
++	rc = devm_request_threaded_irq(chip->dev.parent, irq, NULL,
++				       tis_int_handler, IRQF_ONESHOT | flags,
++				       dev_name(&chip->dev), chip);
++	if (rc) {
+ 		dev_info(&chip->dev, "Unable to request irq: %d for probe\n",
+ 			 irq);
+ 		return -1;
+diff --git a/drivers/char/tpm/tpm_tis_core.h b/drivers/char/tpm/tpm_tis_core.h
+index e978f457fd4d4..610bfadb6acf1 100644
+--- a/drivers/char/tpm/tpm_tis_core.h
++++ b/drivers/char/tpm/tpm_tis_core.h
+@@ -84,10 +84,10 @@ enum tis_defaults {
+ #define ILB_REMAP_SIZE			0x100
+ 
+ enum tpm_tis_flags {
+-	TPM_TIS_ITPM_WORKAROUND		= BIT(0),
+-	TPM_TIS_INVALID_STATUS		= BIT(1),
+-	TPM_TIS_DEFAULT_CANCELLATION	= BIT(2),
+-	TPM_TIS_IRQ_TESTED		= BIT(3),
++	TPM_TIS_ITPM_WORKAROUND		= 0,
++	TPM_TIS_INVALID_STATUS		= 1,
++	TPM_TIS_DEFAULT_CANCELLATION	= 2,
++	TPM_TIS_IRQ_TESTED		= 3,
+ };
+ 
+ struct tpm_tis_data {
+diff --git a/drivers/dma/at_hdmac.c b/drivers/dma/at_hdmac.c
+index 8858470246e15..ee3a219e3a897 100644
+--- a/drivers/dma/at_hdmac.c
++++ b/drivers/dma/at_hdmac.c
+@@ -132,7 +132,7 @@
+ #define ATC_DST_PIP		BIT(12)		/* Destination Picture-in-Picture enabled */
+ #define ATC_SRC_DSCR_DIS	BIT(16)		/* Src Descriptor fetch disable */
+ #define ATC_DST_DSCR_DIS	BIT(20)		/* Dst Descriptor fetch disable */
+-#define ATC_FC			GENMASK(22, 21)	/* Choose Flow Controller */
++#define ATC_FC			GENMASK(23, 21)	/* Choose Flow Controller */
+ #define ATC_FC_MEM2MEM		0x0		/* Mem-to-Mem (DMA) */
+ #define ATC_FC_MEM2PER		0x1		/* Mem-to-Periph (DMA) */
+ #define ATC_FC_PER2MEM		0x2		/* Periph-to-Mem (DMA) */
+@@ -153,8 +153,6 @@
+ #define ATC_AUTO		BIT(31)		/* Auto multiple buffer tx enable */
+ 
+ /* Bitfields in CFG */
+-#define ATC_PER_MSB(h)	((0x30U & (h)) >> 4)	/* Extract most significant bits of a handshaking identifier */
+-
+ #define ATC_SRC_PER		GENMASK(3, 0)	/* Channel src rq associated with periph handshaking ifc h */
+ #define ATC_DST_PER		GENMASK(7, 4)	/* Channel dst rq associated with periph handshaking ifc h */
+ #define ATC_SRC_REP		BIT(8)		/* Source Replay Mod */
+@@ -181,10 +179,15 @@
+ #define ATC_DPIP_HOLE		GENMASK(15, 0)
+ #define ATC_DPIP_BOUNDARY	GENMASK(25, 16)
+ 
+-#define ATC_SRC_PER_ID(id)	(FIELD_PREP(ATC_SRC_PER_MSB, (id)) |	\
+-				 FIELD_PREP(ATC_SRC_PER, (id)))
+-#define ATC_DST_PER_ID(id)	(FIELD_PREP(ATC_DST_PER_MSB, (id)) |	\
+-				 FIELD_PREP(ATC_DST_PER, (id)))
++#define ATC_PER_MSB		GENMASK(5, 4)	/* Extract MSBs of a handshaking identifier */
++#define ATC_SRC_PER_ID(id)					       \
++	({ typeof(id) _id = (id);				       \
++	   FIELD_PREP(ATC_SRC_PER_MSB, FIELD_GET(ATC_PER_MSB, _id)) |  \
++	   FIELD_PREP(ATC_SRC_PER, _id); })
++#define ATC_DST_PER_ID(id)					       \
++	({ typeof(id) _id = (id);				       \
++	   FIELD_PREP(ATC_DST_PER_MSB, FIELD_GET(ATC_PER_MSB, _id)) |  \
++	   FIELD_PREP(ATC_DST_PER, _id); })
+ 
+ 
+ 
+diff --git a/drivers/dma/at_xdmac.c b/drivers/dma/at_xdmac.c
+index 96f1b69f8a75e..ab13704f27f11 100644
+--- a/drivers/dma/at_xdmac.c
++++ b/drivers/dma/at_xdmac.c
+@@ -1102,6 +1102,8 @@ at_xdmac_prep_interleaved(struct dma_chan *chan,
+ 							NULL,
+ 							src_addr, dst_addr,
+ 							xt, xt->sgl);
++		if (!first)
++			return NULL;
+ 
+ 		/* Length of the block is (BLEN+1) microblocks. */
+ 		for (i = 0; i < xt->numf - 1; i++)
+@@ -1132,8 +1134,9 @@ at_xdmac_prep_interleaved(struct dma_chan *chan,
+ 							       src_addr, dst_addr,
+ 							       xt, chunk);
+ 			if (!desc) {
+-				list_splice_tail_init(&first->descs_list,
+-						      &atchan->free_descs_list);
++				if (first)
++					list_splice_tail_init(&first->descs_list,
++							      &atchan->free_descs_list);
+ 				return NULL;
+ 			}
+ 
+diff --git a/drivers/dma/pl330.c b/drivers/dma/pl330.c
+index 0d9257fbdfb0d..b4731fe6bbc14 100644
+--- a/drivers/dma/pl330.c
++++ b/drivers/dma/pl330.c
+@@ -1050,7 +1050,7 @@ static bool _trigger(struct pl330_thread *thrd)
+ 	return true;
+ }
+ 
+-static bool _start(struct pl330_thread *thrd)
++static bool pl330_start_thread(struct pl330_thread *thrd)
+ {
+ 	switch (_state(thrd)) {
+ 	case PL330_STATE_FAULT_COMPLETING:
+@@ -1702,7 +1702,7 @@ static int pl330_update(struct pl330_dmac *pl330)
+ 			thrd->req_running = -1;
+ 
+ 			/* Get going again ASAP */
+-			_start(thrd);
++			pl330_start_thread(thrd);
+ 
+ 			/* For now, just make a list of callbacks to be done */
+ 			list_add_tail(&descdone->rqd, &pl330->req_done);
+@@ -2089,7 +2089,7 @@ static void pl330_tasklet(struct tasklet_struct *t)
+ 	} else {
+ 		/* Make sure the PL330 Channel thread is active */
+ 		spin_lock(&pch->thread->dmac->lock);
+-		_start(pch->thread);
++		pl330_start_thread(pch->thread);
+ 		spin_unlock(&pch->thread->dmac->lock);
+ 	}
+ 
+@@ -2107,7 +2107,7 @@ static void pl330_tasklet(struct tasklet_struct *t)
+ 			if (power_down) {
+ 				pch->active = true;
+ 				spin_lock(&pch->thread->dmac->lock);
+-				_start(pch->thread);
++				pl330_start_thread(pch->thread);
+ 				spin_unlock(&pch->thread->dmac->lock);
+ 				power_down = false;
+ 			}
+diff --git a/drivers/firmware/qcom_scm.c b/drivers/firmware/qcom_scm.c
+index 5f281cb1ae6ba..75cf3556b99a1 100644
+--- a/drivers/firmware/qcom_scm.c
++++ b/drivers/firmware/qcom_scm.c
+@@ -905,7 +905,7 @@ static int __qcom_scm_assign_mem(struct device *dev, phys_addr_t mem_region,
+  * Return negative errno on failure or 0 on success with @srcvm updated.
+  */
+ int qcom_scm_assign_mem(phys_addr_t mem_addr, size_t mem_sz,
+-			unsigned int *srcvm,
++			u64 *srcvm,
+ 			const struct qcom_scm_vmperm *newvm,
+ 			unsigned int dest_cnt)
+ {
+@@ -922,9 +922,9 @@ int qcom_scm_assign_mem(phys_addr_t mem_addr, size_t mem_sz,
+ 	__le32 *src;
+ 	void *ptr;
+ 	int ret, i, b;
+-	unsigned long srcvm_bits = *srcvm;
++	u64 srcvm_bits = *srcvm;
+ 
+-	src_sz = hweight_long(srcvm_bits) * sizeof(*src);
++	src_sz = hweight64(srcvm_bits) * sizeof(*src);
+ 	mem_to_map_sz = sizeof(*mem_to_map);
+ 	dest_sz = dest_cnt * sizeof(*destvm);
+ 	ptr_sz = ALIGN(src_sz, SZ_64) + ALIGN(mem_to_map_sz, SZ_64) +
+@@ -937,8 +937,10 @@ int qcom_scm_assign_mem(phys_addr_t mem_addr, size_t mem_sz,
+ 	/* Fill source vmid detail */
+ 	src = ptr;
+ 	i = 0;
+-	for_each_set_bit(b, &srcvm_bits, BITS_PER_LONG)
+-		src[i++] = cpu_to_le32(b);
++	for (b = 0; b < BITS_PER_TYPE(u64); b++) {
++		if (srcvm_bits & BIT(b))
++			src[i++] = cpu_to_le32(b);
++	}
+ 
+ 	/* Fill details of mem buff to map */
+ 	mem_to_map = ptr + ALIGN(src_sz, SZ_64);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index aa46726dfdb01..31413a604d0ae 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -2523,8 +2523,6 @@ static int amdgpu_device_ip_init(struct amdgpu_device *adev)
+ 	amdgpu_fru_get_product_info(adev);
+ 
+ init_failed:
+-	if (amdgpu_sriov_vf(adev))
+-		amdgpu_virt_release_full_gpu(adev, true);
+ 
+ 	return r;
+ }
+@@ -3560,6 +3558,7 @@ int amdgpu_device_init(struct amdgpu_device *adev,
+ 	int r, i;
+ 	bool px = false;
+ 	u32 max_MBps;
++	int tmp;
+ 
+ 	adev->shutdown = false;
+ 	adev->flags = flags;
+@@ -3738,6 +3737,12 @@ int amdgpu_device_init(struct amdgpu_device *adev,
+ 		adev->have_atomics_support = ((struct amd_sriov_msg_pf2vf_info *)
+ 			adev->virt.fw_reserve.p_pf2vf)->pcie_atomic_ops_support_flags ==
+ 			(PCI_EXP_DEVCAP2_ATOMIC_COMP32 | PCI_EXP_DEVCAP2_ATOMIC_COMP64);
++	/* APUs w/ gfx9 onwards doesn't reply on PCIe atomics, rather it is a
++	 * internal path natively support atomics, set have_atomics_support to true.
++	 */
++	else if ((adev->flags & AMD_IS_APU) &&
++		(adev->ip_versions[GC_HWIP][0] > IP_VERSION(9, 0, 0)))
++		adev->have_atomics_support = true;
+ 	else
+ 		adev->have_atomics_support =
+ 			!pci_enable_atomic_ops_to_root(adev->pdev,
+@@ -3781,7 +3786,13 @@ int amdgpu_device_init(struct amdgpu_device *adev,
+ 				}
+ 			}
+ 		} else {
++			tmp = amdgpu_reset_method;
++			/* It should do a default reset when loading or reloading the driver,
++			 * regardless of the module parameter reset_method.
++			 */
++			amdgpu_reset_method = AMD_RESET_METHOD_NONE;
+ 			r = amdgpu_asic_reset(adev);
++			amdgpu_reset_method = tmp;
+ 			if (r) {
+ 				dev_err(adev->dev, "asic reset on init failed\n");
+ 				goto failed;
+@@ -3841,18 +3852,6 @@ fence_driver_init:
+ 
+ 	r = amdgpu_device_ip_init(adev);
+ 	if (r) {
+-		/* failed in exclusive mode due to timeout */
+-		if (amdgpu_sriov_vf(adev) &&
+-		    !amdgpu_sriov_runtime(adev) &&
+-		    amdgpu_virt_mmio_blocked(adev) &&
+-		    !amdgpu_virt_wait_reset(adev)) {
+-			dev_err(adev->dev, "VF exclusive mode timeout\n");
+-			/* Don't send request since VF is inactive. */
+-			adev->virt.caps &= ~AMDGPU_SRIOV_CAPS_RUNTIME;
+-			adev->virt.ops = NULL;
+-			r = -EAGAIN;
+-			goto release_ras_con;
+-		}
+ 		dev_err(adev->dev, "amdgpu_device_ip_init failed\n");
+ 		amdgpu_vf_error_put(adev, AMDGIM_ERROR_VF_AMDGPU_INIT_FAIL, 0, 0);
+ 		goto release_ras_con;
+@@ -3924,8 +3923,10 @@ fence_driver_init:
+ 				   msecs_to_jiffies(AMDGPU_RESUME_MS));
+ 	}
+ 
+-	if (amdgpu_sriov_vf(adev))
++	if (amdgpu_sriov_vf(adev)) {
++		amdgpu_virt_release_full_gpu(adev, true);
+ 		flush_delayed_work(&adev->delayed_init_work);
++	}
+ 
+ 	r = sysfs_create_files(&adev->dev->kobj, amdgpu_dev_attributes);
+ 	if (r)
+@@ -3965,6 +3966,20 @@ fence_driver_init:
+ 	return 0;
+ 
+ release_ras_con:
++	if (amdgpu_sriov_vf(adev))
++		amdgpu_virt_release_full_gpu(adev, true);
++
++	/* failed in exclusive mode due to timeout */
++	if (amdgpu_sriov_vf(adev) &&
++		!amdgpu_sriov_runtime(adev) &&
++		amdgpu_virt_mmio_blocked(adev) &&
++		!amdgpu_virt_wait_reset(adev)) {
++		dev_err(adev->dev, "VF exclusive mode timeout\n");
++		/* Don't send request since VF is inactive. */
++		adev->virt.caps &= ~AMDGPU_SRIOV_CAPS_RUNTIME;
++		adev->virt.ops = NULL;
++		r = -EAGAIN;
++	}
+ 	amdgpu_release_ras_context(adev);
+ 
+ failed:
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
+index f52d0ba91a770..a7d250809da99 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
+@@ -582,7 +582,8 @@ void amdgpu_fence_driver_hw_fini(struct amdgpu_device *adev)
+ 		if (r)
+ 			amdgpu_fence_driver_force_completion(ring);
+ 
+-		if (ring->fence_drv.irq_src)
++		if (!drm_dev_is_unplugged(adev_to_drm(adev)) &&
++		    ring->fence_drv.irq_src)
+ 			amdgpu_irq_put(adev, ring->fence_drv.irq_src,
+ 				       ring->fence_drv.irq_type);
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
+index 12a6826caef47..8ad2c71d3cd1f 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
+@@ -534,6 +534,8 @@ void amdgpu_gmc_tmz_set(struct amdgpu_device *adev)
+ 	case IP_VERSION(9, 3, 0):
+ 	/* GC 10.3.7 */
+ 	case IP_VERSION(10, 3, 7):
++	/* GC 11.0.1 */
++	case IP_VERSION(11, 0, 1):
+ 		if (amdgpu_tmz == 0) {
+ 			adev->gmc.tmz_enabled = false;
+ 			dev_info(adev->dev,
+@@ -557,7 +559,6 @@ void amdgpu_gmc_tmz_set(struct amdgpu_device *adev)
+ 	case IP_VERSION(10, 3, 1):
+ 	/* YELLOW_CARP*/
+ 	case IP_VERSION(10, 3, 3):
+-	case IP_VERSION(11, 0, 1):
+ 	case IP_VERSION(11, 0, 4):
+ 		/* Don't enable it by default yet.
+ 		 */
+diff --git a/drivers/gpu/drm/amd/amdgpu/nv.c b/drivers/gpu/drm/amd/amdgpu/nv.c
+index ebe0e2d7dbd1b..aa7f82b3fd6a9 100644
+--- a/drivers/gpu/drm/amd/amdgpu/nv.c
++++ b/drivers/gpu/drm/amd/amdgpu/nv.c
+@@ -98,6 +98,16 @@ static const struct amdgpu_video_codecs nv_video_codecs_decode =
+ };
+ 
+ /* Sienna Cichlid */
++static const struct amdgpu_video_codec_info sc_video_codecs_encode_array[] = {
++	{codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 2160, 0)},
++	{codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_HEVC, 7680, 4352, 0)},
++};
++
++static const struct amdgpu_video_codecs sc_video_codecs_encode = {
++	.codec_count = ARRAY_SIZE(sc_video_codecs_encode_array),
++	.codec_array = sc_video_codecs_encode_array,
++};
++
+ static const struct amdgpu_video_codec_info sc_video_codecs_decode_array_vcn0[] =
+ {
+ 	{codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG2, 4096, 4096, 3)},
+@@ -136,8 +146,8 @@ static const struct amdgpu_video_codecs sc_video_codecs_decode_vcn1 =
+ /* SRIOV Sienna Cichlid, not const since data is controlled by host */
+ static struct amdgpu_video_codec_info sriov_sc_video_codecs_encode_array[] =
+ {
+-	{codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 2304, 0)},
+-	{codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_HEVC, 4096, 2304, 0)},
++	{codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 2160, 0)},
++	{codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_HEVC, 7680, 4352, 0)},
+ };
+ 
+ static struct amdgpu_video_codec_info sriov_sc_video_codecs_decode_array_vcn0[] =
+@@ -237,12 +247,12 @@ static int nv_query_video_codecs(struct amdgpu_device *adev, bool encode,
+ 		} else {
+ 			if (adev->vcn.harvest_config & AMDGPU_VCN_HARVEST_VCN0) {
+ 				if (encode)
+-					*codecs = &nv_video_codecs_encode;
++					*codecs = &sc_video_codecs_encode;
+ 				else
+ 					*codecs = &sc_video_codecs_decode_vcn1;
+ 			} else {
+ 				if (encode)
+-					*codecs = &nv_video_codecs_encode;
++					*codecs = &sc_video_codecs_encode;
+ 				else
+ 					*codecs = &sc_video_codecs_decode_vcn0;
+ 			}
+@@ -251,14 +261,14 @@ static int nv_query_video_codecs(struct amdgpu_device *adev, bool encode,
+ 	case IP_VERSION(3, 0, 16):
+ 	case IP_VERSION(3, 0, 2):
+ 		if (encode)
+-			*codecs = &nv_video_codecs_encode;
++			*codecs = &sc_video_codecs_encode;
+ 		else
+ 			*codecs = &sc_video_codecs_decode_vcn0;
+ 		return 0;
+ 	case IP_VERSION(3, 1, 1):
+ 	case IP_VERSION(3, 1, 2):
+ 		if (encode)
+-			*codecs = &nv_video_codecs_encode;
++			*codecs = &sc_video_codecs_encode;
+ 		else
+ 			*codecs = &yc_video_codecs_decode;
+ 		return 0;
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 0695c7c3d489d..ce46f3a061c44 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -3095,9 +3095,12 @@ void amdgpu_dm_update_connector_after_detect(
+ 						    aconnector->edid);
+ 		}
+ 
+-		aconnector->timing_requested = kzalloc(sizeof(struct dc_crtc_timing), GFP_KERNEL);
+-		if (!aconnector->timing_requested)
+-			dm_error("%s: failed to create aconnector->requested_timing\n", __func__);
++		if (!aconnector->timing_requested) {
++			aconnector->timing_requested =
++				kzalloc(sizeof(struct dc_crtc_timing), GFP_KERNEL);
++			if (!aconnector->timing_requested)
++				dm_error("failed to create aconnector->requested_timing\n");
++		}
+ 
+ 		drm_connector_update_edid_property(connector, aconnector->edid);
+ 		amdgpu_dm_update_freesync_caps(connector, aconnector->edid);
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index d4a1670a54506..f07cba121d010 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -1093,7 +1093,8 @@ static void phantom_pipe_blank(
+ 			otg_active_height,
+ 			0);
+ 
+-	hws->funcs.wait_for_blank_complete(opp);
++	if (tg->funcs->is_tg_enabled(tg))
++		hws->funcs.wait_for_blank_complete(opp);
+ }
+ 
+ static void disable_dangling_plane(struct dc *dc, struct dc_state *context)
+@@ -1156,6 +1157,7 @@ static void disable_dangling_plane(struct dc *dc, struct dc_state *context)
+ 			if (old_stream->mall_stream_config.type == SUBVP_PHANTOM) {
+ 				if (tg->funcs->enable_crtc) {
+ 					int main_pipe_width, main_pipe_height;
++
+ 					main_pipe_width = old_stream->mall_stream_config.paired_stream->dst.width;
+ 					main_pipe_height = old_stream->mall_stream_config.paired_stream->dst.height;
+ 					phantom_pipe_blank(dc, tg, main_pipe_width, main_pipe_height);
+diff --git a/drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c b/drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c
+index d6d9e3b1b2c0e..02e69ccff3bac 100644
+--- a/drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c
++++ b/drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c
+@@ -6925,23 +6925,6 @@ static int si_dpm_enable(struct amdgpu_device *adev)
+ 	return 0;
+ }
+ 
+-static int si_set_temperature_range(struct amdgpu_device *adev)
+-{
+-	int ret;
+-
+-	ret = si_thermal_enable_alert(adev, false);
+-	if (ret)
+-		return ret;
+-	ret = si_thermal_set_temperature_range(adev, R600_TEMP_RANGE_MIN, R600_TEMP_RANGE_MAX);
+-	if (ret)
+-		return ret;
+-	ret = si_thermal_enable_alert(adev, true);
+-	if (ret)
+-		return ret;
+-
+-	return ret;
+-}
+-
+ static void si_dpm_disable(struct amdgpu_device *adev)
+ {
+ 	struct rv7xx_power_info *pi = rv770_get_pi(adev);
+@@ -7626,18 +7609,6 @@ static int si_dpm_process_interrupt(struct amdgpu_device *adev,
+ 
+ static int si_dpm_late_init(void *handle)
+ {
+-	int ret;
+-	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+-
+-	if (!adev->pm.dpm_enabled)
+-		return 0;
+-
+-	ret = si_set_temperature_range(adev);
+-	if (ret)
+-		return ret;
+-#if 0 //TODO ?
+-	si_dpm_powergate_uvd(adev, true);
+-#endif
+ 	return 0;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c
+index cb10c7e312646..1b731a9c92d93 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c
+@@ -580,7 +580,7 @@ static int vangogh_print_legacy_clk_levels(struct smu_context *smu,
+ 	DpmClocks_t *clk_table = smu->smu_table.clocks_table;
+ 	SmuMetrics_legacy_t metrics;
+ 	struct smu_dpm_context *smu_dpm_ctx = &(smu->smu_dpm);
+-	int i, size = 0, ret = 0;
++	int i, idx, size = 0, ret = 0;
+ 	uint32_t cur_value = 0, value = 0, count = 0;
+ 	bool cur_value_match_level = false;
+ 
+@@ -654,7 +654,8 @@ static int vangogh_print_legacy_clk_levels(struct smu_context *smu,
+ 	case SMU_MCLK:
+ 	case SMU_FCLK:
+ 		for (i = 0; i < count; i++) {
+-			ret = vangogh_get_dpm_clk_limited(smu, clk_type, i, &value);
++			idx = (clk_type == SMU_FCLK || clk_type == SMU_MCLK) ? (count - i - 1) : i;
++			ret = vangogh_get_dpm_clk_limited(smu, clk_type, idx, &value);
+ 			if (ret)
+ 				return ret;
+ 			if (!value)
+@@ -681,7 +682,7 @@ static int vangogh_print_clk_levels(struct smu_context *smu,
+ 	DpmClocks_t *clk_table = smu->smu_table.clocks_table;
+ 	SmuMetrics_t metrics;
+ 	struct smu_dpm_context *smu_dpm_ctx = &(smu->smu_dpm);
+-	int i, size = 0, ret = 0;
++	int i, idx, size = 0, ret = 0;
+ 	uint32_t cur_value = 0, value = 0, count = 0;
+ 	bool cur_value_match_level = false;
+ 	uint32_t min, max;
+@@ -763,7 +764,8 @@ static int vangogh_print_clk_levels(struct smu_context *smu,
+ 	case SMU_MCLK:
+ 	case SMU_FCLK:
+ 		for (i = 0; i < count; i++) {
+-			ret = vangogh_get_dpm_clk_limited(smu, clk_type, i, &value);
++			idx = (clk_type == SMU_FCLK || clk_type == SMU_MCLK) ? (count - i - 1) : i;
++			ret = vangogh_get_dpm_clk_limited(smu, clk_type, idx, &value);
+ 			if (ret)
+ 				return ret;
+ 			if (!value)
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu12/renoir_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu12/renoir_ppt.c
+index 5cdc07165480b..8a8ba25c9ad7c 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu12/renoir_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu12/renoir_ppt.c
+@@ -494,7 +494,7 @@ static int renoir_set_fine_grain_gfx_freq_parameters(struct smu_context *smu)
+ static int renoir_print_clk_levels(struct smu_context *smu,
+ 			enum smu_clk_type clk_type, char *buf)
+ {
+-	int i, size = 0, ret = 0;
++	int i, idx, size = 0, ret = 0;
+ 	uint32_t cur_value = 0, value = 0, count = 0, min = 0, max = 0;
+ 	SmuMetrics_t metrics;
+ 	struct smu_dpm_context *smu_dpm_ctx = &(smu->smu_dpm);
+@@ -594,7 +594,8 @@ static int renoir_print_clk_levels(struct smu_context *smu,
+ 	case SMU_VCLK:
+ 	case SMU_DCLK:
+ 		for (i = 0; i < count; i++) {
+-			ret = renoir_get_dpm_clk_limited(smu, clk_type, i, &value);
++			idx = (clk_type == SMU_FCLK || clk_type == SMU_MCLK) ? (count - i - 1) : i;
++			ret = renoir_get_dpm_clk_limited(smu, clk_type, idx, &value);
+ 			if (ret)
+ 				return ret;
+ 			if (!value)
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_4_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_4_ppt.c
+index 8fa9a36c38b64..6d9760eac16d8 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_4_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_4_ppt.c
+@@ -478,7 +478,7 @@ static int smu_v13_0_4_get_dpm_level_count(struct smu_context *smu,
+ static int smu_v13_0_4_print_clk_levels(struct smu_context *smu,
+ 					enum smu_clk_type clk_type, char *buf)
+ {
+-	int i, size = 0, ret = 0;
++	int i, idx, size = 0, ret = 0;
+ 	uint32_t cur_value = 0, value = 0, count = 0;
+ 	uint32_t min, max;
+ 
+@@ -512,7 +512,8 @@ static int smu_v13_0_4_print_clk_levels(struct smu_context *smu,
+ 			break;
+ 
+ 		for (i = 0; i < count; i++) {
+-			ret = smu_v13_0_4_get_dpm_freq_by_index(smu, clk_type, i, &value);
++			idx = (clk_type == SMU_FCLK || clk_type == SMU_MCLK) ? (count - i - 1) : i;
++			ret = smu_v13_0_4_get_dpm_freq_by_index(smu, clk_type, idx, &value);
+ 			if (ret)
+ 				break;
+ 
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_5_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_5_ppt.c
+index 66445964efbd1..0081fa607e02e 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_5_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_5_ppt.c
+@@ -866,7 +866,7 @@ out:
+ static int smu_v13_0_5_print_clk_levels(struct smu_context *smu,
+ 				enum smu_clk_type clk_type, char *buf)
+ {
+-	int i, size = 0, ret = 0;
++	int i, idx, size = 0, ret = 0;
+ 	uint32_t cur_value = 0, value = 0, count = 0;
+ 	uint32_t min = 0, max = 0;
+ 
+@@ -898,7 +898,8 @@ static int smu_v13_0_5_print_clk_levels(struct smu_context *smu,
+ 			goto print_clk_out;
+ 
+ 		for (i = 0; i < count; i++) {
+-			ret = smu_v13_0_5_get_dpm_freq_by_index(smu, clk_type, i, &value);
++			idx = (clk_type == SMU_MCLK) ? (count - i - 1) : i;
++			ret = smu_v13_0_5_get_dpm_freq_by_index(smu, clk_type, idx, &value);
+ 			if (ret)
+ 				goto print_clk_out;
+ 
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/yellow_carp_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/yellow_carp_ppt.c
+index 04e56b0b3033e..798f36cfcebd3 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/yellow_carp_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/yellow_carp_ppt.c
+@@ -1000,7 +1000,7 @@ out:
+ static int yellow_carp_print_clk_levels(struct smu_context *smu,
+ 				enum smu_clk_type clk_type, char *buf)
+ {
+-	int i, size = 0, ret = 0;
++	int i, idx, size = 0, ret = 0;
+ 	uint32_t cur_value = 0, value = 0, count = 0;
+ 	uint32_t min, max;
+ 
+@@ -1033,7 +1033,8 @@ static int yellow_carp_print_clk_levels(struct smu_context *smu,
+ 			goto print_clk_out;
+ 
+ 		for (i = 0; i < count; i++) {
+-			ret = yellow_carp_get_dpm_freq_by_index(smu, clk_type, i, &value);
++			idx = (clk_type == SMU_FCLK || clk_type == SMU_MCLK) ? (count - i - 1) : i;
++			ret = yellow_carp_get_dpm_freq_by_index(smu, clk_type, idx, &value);
+ 			if (ret)
+ 				goto print_clk_out;
+ 
+diff --git a/drivers/gpu/drm/ast/ast_main.c b/drivers/gpu/drm/ast/ast_main.c
+index f83ce77127cb4..a6d0ee4da2b88 100644
+--- a/drivers/gpu/drm/ast/ast_main.c
++++ b/drivers/gpu/drm/ast/ast_main.c
+@@ -425,11 +425,12 @@ struct ast_private *ast_device_create(const struct drm_driver *drv,
+ 		return ERR_PTR(-EIO);
+ 
+ 	/*
+-	 * If we don't have IO space at all, use MMIO now and
+-	 * assume the chip has MMIO enabled by default (rev 0x20
+-	 * and higher).
++	 * After AST2500, MMIO is enabled by default, and it should be adopted
++	 * to be compatible with Arm.
+ 	 */
+-	if (!(pci_resource_flags(pdev, 2) & IORESOURCE_IO)) {
++	if (pdev->revision >= 0x40) {
++		ast->ioregs = ast->regs + AST_IO_MM_OFFSET;
++	} else if (!(pci_resource_flags(pdev, 2) & IORESOURCE_IO)) {
+ 		drm_info(dev, "platform has no IO space, trying MMIO\n");
+ 		ast->ioregs = ast->regs + AST_IO_MM_OFFSET;
+ 	}
+diff --git a/drivers/gpu/drm/msm/msm_iommu.c b/drivers/gpu/drm/msm/msm_iommu.c
+index c2507582ecf34..0d6a69cd6f7a5 100644
+--- a/drivers/gpu/drm/msm/msm_iommu.c
++++ b/drivers/gpu/drm/msm/msm_iommu.c
+@@ -234,7 +234,12 @@ struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent)
+ 	/* Get the pagetable configuration from the domain */
+ 	if (adreno_smmu->cookie)
+ 		ttbr1_cfg = adreno_smmu->get_ttbr1_cfg(adreno_smmu->cookie);
+-	if (!ttbr1_cfg)
++
++	/*
++	 * If you hit this WARN_ONCE() you are probably missing an entry in
++	 * qcom_smmu_impl_of_match[] in arm-smmu-qcom.c
++	 */
++	if (WARN_ONCE(!ttbr1_cfg, "No per-process page tables"))
+ 		return ERR_PTR(-ENODEV);
+ 
+ 	/*
+diff --git a/drivers/hid/hid-google-hammer.c b/drivers/hid/hid-google-hammer.c
+index 7ae5f27df54dd..c6bdb9c4ef3e0 100644
+--- a/drivers/hid/hid-google-hammer.c
++++ b/drivers/hid/hid-google-hammer.c
+@@ -586,6 +586,8 @@ static const struct hid_device_id hammer_devices[] = {
+ 		     USB_VENDOR_ID_GOOGLE, USB_DEVICE_ID_GOOGLE_EEL) },
+ 	{ HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,
+ 		     USB_VENDOR_ID_GOOGLE, USB_DEVICE_ID_GOOGLE_HAMMER) },
++	{ HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,
++		     USB_VENDOR_ID_GOOGLE, USB_DEVICE_ID_GOOGLE_JEWEL) },
+ 	{ HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,
+ 		     USB_VENDOR_ID_GOOGLE, USB_DEVICE_ID_GOOGLE_MAGNEMITE) },
+ 	{ HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index 8f3e0a5d5f834..2e23018c9f23d 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -529,6 +529,7 @@
+ #define USB_DEVICE_ID_GOOGLE_MOONBALL	0x5044
+ #define USB_DEVICE_ID_GOOGLE_DON	0x5050
+ #define USB_DEVICE_ID_GOOGLE_EEL	0x5057
++#define USB_DEVICE_ID_GOOGLE_JEWEL	0x5061
+ 
+ #define USB_VENDOR_ID_GOTOP		0x08f2
+ #define USB_DEVICE_ID_SUPER_Q2		0x007f
+diff --git a/drivers/hid/hid-logitech-hidpp.c b/drivers/hid/hid-logitech-hidpp.c
+index da89e84c9cbeb..14bce1fdec9fe 100644
+--- a/drivers/hid/hid-logitech-hidpp.c
++++ b/drivers/hid/hid-logitech-hidpp.c
+@@ -283,7 +283,7 @@ static int hidpp_send_message_sync(struct hidpp_device *hidpp,
+ 	struct hidpp_report *message,
+ 	struct hidpp_report *response)
+ {
+-	int ret;
++	int ret = -1;
+ 	int max_retries = 3;
+ 
+ 	mutex_lock(&hidpp->send_mutex);
+@@ -297,13 +297,13 @@ static int hidpp_send_message_sync(struct hidpp_device *hidpp,
+ 	 */
+ 	*response = *message;
+ 
+-	for (; max_retries != 0; max_retries--) {
++	for (; max_retries != 0 && ret; max_retries--) {
+ 		ret = __hidpp_send_report(hidpp->hid_dev, message);
+ 
+ 		if (ret) {
+ 			dbg_hid("__hidpp_send_report returned err: %d\n", ret);
+ 			memset(response, 0, sizeof(struct hidpp_report));
+-			goto exit;
++			break;
+ 		}
+ 
+ 		if (!wait_event_timeout(hidpp->wait, hidpp->answer_available,
+@@ -311,13 +311,14 @@ static int hidpp_send_message_sync(struct hidpp_device *hidpp,
+ 			dbg_hid("%s:timeout waiting for response\n", __func__);
+ 			memset(response, 0, sizeof(struct hidpp_report));
+ 			ret = -ETIMEDOUT;
++			break;
+ 		}
+ 
+ 		if (response->report_id == REPORT_ID_HIDPP_SHORT &&
+ 		    response->rap.sub_id == HIDPP_ERROR) {
+ 			ret = response->rap.params[1];
+ 			dbg_hid("%s:got hidpp error %02X\n", __func__, ret);
+-			goto exit;
++			break;
+ 		}
+ 
+ 		if ((response->report_id == REPORT_ID_HIDPP_LONG ||
+@@ -326,13 +327,12 @@ static int hidpp_send_message_sync(struct hidpp_device *hidpp,
+ 			ret = response->fap.params[1];
+ 			if (ret != HIDPP20_ERROR_BUSY) {
+ 				dbg_hid("%s:got hidpp 2.0 error %02X\n", __func__, ret);
+-				goto exit;
++				break;
+ 			}
+ 			dbg_hid("%s:got busy hidpp 2.0 error %02X, retrying\n", __func__, ret);
+ 		}
+ 	}
+ 
+-exit:
+ 	mutex_unlock(&hidpp->send_mutex);
+ 	return ret;
+ 
+diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
+index d2f500242ed40..9c30dd30537af 100644
+--- a/drivers/hid/wacom_wac.c
++++ b/drivers/hid/wacom_wac.c
+@@ -826,7 +826,7 @@ static int wacom_intuos_inout(struct wacom_wac *wacom)
+ 	/* Enter report */
+ 	if ((data[1] & 0xfc) == 0xc0) {
+ 		/* serial number of the tool */
+-		wacom->serial[idx] = ((data[3] & 0x0f) << 28) +
++		wacom->serial[idx] = ((__u64)(data[3] & 0x0f) << 28) +
+ 			(data[4] << 20) + (data[5] << 12) +
+ 			(data[6] << 4) + (data[7] >> 4);
+ 
+diff --git a/drivers/hwmon/k10temp.c b/drivers/hwmon/k10temp.c
+index be8bbb1c3a02d..823d0ca1d6059 100644
+--- a/drivers/hwmon/k10temp.c
++++ b/drivers/hwmon/k10temp.c
+@@ -507,6 +507,7 @@ static const struct pci_device_id k10temp_id_table[] = {
+ 	{ PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_19H_M50H_DF_F3) },
+ 	{ PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_19H_M60H_DF_F3) },
+ 	{ PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_19H_M70H_DF_F3) },
++	{ PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_19H_M78H_DF_F3) },
+ 	{ PCI_VDEVICE(HYGON, PCI_DEVICE_ID_AMD_17H_DF_F3) },
+ 	{}
+ };
+diff --git a/drivers/iio/accel/kionix-kx022a.c b/drivers/iio/accel/kionix-kx022a.c
+index 1c3a72380fb85..0fe84db659021 100644
+--- a/drivers/iio/accel/kionix-kx022a.c
++++ b/drivers/iio/accel/kionix-kx022a.c
+@@ -1049,7 +1049,7 @@ int kx022a_probe_internal(struct device *dev)
+ 		data->ien_reg = KX022A_REG_INC4;
+ 	} else {
+ 		irq = fwnode_irq_get_byname(fwnode, "INT2");
+-		if (irq <= 0)
++		if (irq < 0)
+ 			return dev_err_probe(dev, irq, "No suitable IRQ\n");
+ 
+ 		data->inc_reg = KX022A_REG_INC5;
+diff --git a/drivers/iio/accel/st_accel_core.c b/drivers/iio/accel/st_accel_core.c
+index 6b8562f684d5c..912637fb74ebd 100644
+--- a/drivers/iio/accel/st_accel_core.c
++++ b/drivers/iio/accel/st_accel_core.c
+@@ -1290,12 +1290,12 @@ static int apply_acpi_orientation(struct iio_dev *indio_dev)
+ 
+ 	adev = ACPI_COMPANION(indio_dev->dev.parent);
+ 	if (!adev)
+-		return 0;
++		return -ENXIO;
+ 
+ 	/* Read _ONT data, which should be a package of 6 integers. */
+ 	status = acpi_evaluate_object(adev->handle, "_ONT", NULL, &buffer);
+ 	if (status == AE_NOT_FOUND) {
+-		return 0;
++		return -ENXIO;
+ 	} else if (ACPI_FAILURE(status)) {
+ 		dev_warn(&indio_dev->dev, "failed to execute _ONT: %d\n",
+ 			 status);
+diff --git a/drivers/iio/adc/ad4130.c b/drivers/iio/adc/ad4130.c
+index 38394341fd6e7..5a5dd5e87ffc4 100644
+--- a/drivers/iio/adc/ad4130.c
++++ b/drivers/iio/adc/ad4130.c
+@@ -1817,6 +1817,11 @@ static const struct clk_ops ad4130_int_clk_ops = {
+ 	.unprepare = ad4130_int_clk_unprepare,
+ };
+ 
++static void ad4130_clk_del_provider(void *of_node)
++{
++	of_clk_del_provider(of_node);
++}
++
+ static int ad4130_setup_int_clk(struct ad4130_state *st)
+ {
+ 	struct device *dev = &st->spi->dev;
+@@ -1824,6 +1829,7 @@ static int ad4130_setup_int_clk(struct ad4130_state *st)
+ 	struct clk_init_data init;
+ 	const char *clk_name;
+ 	struct clk *clk;
++	int ret;
+ 
+ 	if (st->int_pin_sel == AD4130_INT_PIN_CLK ||
+ 	    st->mclk_sel != AD4130_MCLK_76_8KHZ)
+@@ -1843,7 +1849,11 @@ static int ad4130_setup_int_clk(struct ad4130_state *st)
+ 	if (IS_ERR(clk))
+ 		return PTR_ERR(clk);
+ 
+-	return of_clk_add_provider(of_node, of_clk_src_simple_get, clk);
++	ret = of_clk_add_provider(of_node, of_clk_src_simple_get, clk);
++	if (ret)
++		return ret;
++
++	return devm_add_action_or_reset(dev, ad4130_clk_del_provider, of_node);
+ }
+ 
+ static int ad4130_setup(struct iio_dev *indio_dev)
+diff --git a/drivers/iio/adc/ad7192.c b/drivers/iio/adc/ad7192.c
+index 55a6ab5910160..99bb604b78c8c 100644
+--- a/drivers/iio/adc/ad7192.c
++++ b/drivers/iio/adc/ad7192.c
+@@ -897,10 +897,6 @@ static const struct iio_info ad7195_info = {
+ 	__AD719x_CHANNEL(_si, _channel1, -1, _address, NULL, IIO_VOLTAGE, \
+ 		BIT(IIO_CHAN_INFO_SCALE), ad7192_calibsys_ext_info)
+ 
+-#define AD719x_SHORTED_CHANNEL(_si, _channel1, _address) \
+-	__AD719x_CHANNEL(_si, _channel1, -1, _address, "shorted", IIO_VOLTAGE, \
+-		BIT(IIO_CHAN_INFO_SCALE), ad7192_calibsys_ext_info)
+-
+ #define AD719x_TEMP_CHANNEL(_si, _address) \
+ 	__AD719x_CHANNEL(_si, 0, -1, _address, NULL, IIO_TEMP, 0, NULL)
+ 
+@@ -908,7 +904,7 @@ static const struct iio_chan_spec ad7192_channels[] = {
+ 	AD719x_DIFF_CHANNEL(0, 1, 2, AD7192_CH_AIN1P_AIN2M),
+ 	AD719x_DIFF_CHANNEL(1, 3, 4, AD7192_CH_AIN3P_AIN4M),
+ 	AD719x_TEMP_CHANNEL(2, AD7192_CH_TEMP),
+-	AD719x_SHORTED_CHANNEL(3, 2, AD7192_CH_AIN2P_AIN2M),
++	AD719x_DIFF_CHANNEL(3, 2, 2, AD7192_CH_AIN2P_AIN2M),
+ 	AD719x_CHANNEL(4, 1, AD7192_CH_AIN1),
+ 	AD719x_CHANNEL(5, 2, AD7192_CH_AIN2),
+ 	AD719x_CHANNEL(6, 3, AD7192_CH_AIN3),
+@@ -922,7 +918,7 @@ static const struct iio_chan_spec ad7193_channels[] = {
+ 	AD719x_DIFF_CHANNEL(2, 5, 6, AD7193_CH_AIN5P_AIN6M),
+ 	AD719x_DIFF_CHANNEL(3, 7, 8, AD7193_CH_AIN7P_AIN8M),
+ 	AD719x_TEMP_CHANNEL(4, AD7193_CH_TEMP),
+-	AD719x_SHORTED_CHANNEL(5, 2, AD7193_CH_AIN2P_AIN2M),
++	AD719x_DIFF_CHANNEL(5, 2, 2, AD7193_CH_AIN2P_AIN2M),
+ 	AD719x_CHANNEL(6, 1, AD7193_CH_AIN1),
+ 	AD719x_CHANNEL(7, 2, AD7193_CH_AIN2),
+ 	AD719x_CHANNEL(8, 3, AD7193_CH_AIN3),
+diff --git a/drivers/iio/adc/ad_sigma_delta.c b/drivers/iio/adc/ad_sigma_delta.c
+index d8570f620785a..7e21928707437 100644
+--- a/drivers/iio/adc/ad_sigma_delta.c
++++ b/drivers/iio/adc/ad_sigma_delta.c
+@@ -584,6 +584,10 @@ static int devm_ad_sd_probe_trigger(struct device *dev, struct iio_dev *indio_de
+ 	init_completion(&sigma_delta->completion);
+ 
+ 	sigma_delta->irq_dis = true;
++
++	/* the IRQ core clears IRQ_DISABLE_UNLAZY flag when freeing an IRQ */
++	irq_set_status_flags(sigma_delta->spi->irq, IRQ_DISABLE_UNLAZY);
++
+ 	ret = devm_request_irq(dev, sigma_delta->spi->irq,
+ 			       ad_sd_data_rdy_trig_poll,
+ 			       sigma_delta->info->irq_flags | IRQF_NO_AUTOEN,
+diff --git a/drivers/iio/adc/imx93_adc.c b/drivers/iio/adc/imx93_adc.c
+index a775d2e405671..dce9ec91e4a77 100644
+--- a/drivers/iio/adc/imx93_adc.c
++++ b/drivers/iio/adc/imx93_adc.c
+@@ -236,8 +236,7 @@ static int imx93_adc_read_raw(struct iio_dev *indio_dev,
+ {
+ 	struct imx93_adc *adc = iio_priv(indio_dev);
+ 	struct device *dev = adc->dev;
+-	long ret;
+-	u32 vref_uv;
++	int ret;
+ 
+ 	switch (mask) {
+ 	case IIO_CHAN_INFO_RAW:
+@@ -253,10 +252,10 @@ static int imx93_adc_read_raw(struct iio_dev *indio_dev,
+ 		return IIO_VAL_INT;
+ 
+ 	case IIO_CHAN_INFO_SCALE:
+-		ret = vref_uv = regulator_get_voltage(adc->vref);
++		ret = regulator_get_voltage(adc->vref);
+ 		if (ret < 0)
+ 			return ret;
+-		*val = vref_uv / 1000;
++		*val = ret / 1000;
+ 		*val2 = 12;
+ 		return IIO_VAL_FRACTIONAL_LOG2;
+ 
+diff --git a/drivers/iio/adc/mt6370-adc.c b/drivers/iio/adc/mt6370-adc.c
+index bc62e5a9d50d1..0bc112135bca1 100644
+--- a/drivers/iio/adc/mt6370-adc.c
++++ b/drivers/iio/adc/mt6370-adc.c
+@@ -19,6 +19,7 @@
+ 
+ #include <dt-bindings/iio/adc/mediatek,mt6370_adc.h>
+ 
++#define MT6370_REG_DEV_INFO		0x100
+ #define MT6370_REG_CHG_CTRL3		0x113
+ #define MT6370_REG_CHG_CTRL7		0x117
+ #define MT6370_REG_CHG_ADC		0x121
+@@ -27,6 +28,7 @@
+ #define MT6370_ADC_START_MASK		BIT(0)
+ #define MT6370_ADC_IN_SEL_MASK		GENMASK(7, 4)
+ #define MT6370_AICR_ICHG_MASK		GENMASK(7, 2)
++#define MT6370_VENID_MASK		GENMASK(7, 4)
+ 
+ #define MT6370_AICR_100_mA		0x0
+ #define MT6370_AICR_150_mA		0x1
+@@ -47,6 +49,10 @@
+ #define ADC_CONV_TIME_MS		35
+ #define ADC_CONV_POLLING_TIME_US	1000
+ 
++#define MT6370_VID_RT5081		0x8
++#define MT6370_VID_RT5081A		0xA
++#define MT6370_VID_MT6370		0xE
++
+ struct mt6370_adc_data {
+ 	struct device *dev;
+ 	struct regmap *regmap;
+@@ -55,6 +61,7 @@ struct mt6370_adc_data {
+ 	 * from being read at the same time.
+ 	 */
+ 	struct mutex adc_lock;
++	unsigned int vid;
+ };
+ 
+ static int mt6370_adc_read_channel(struct mt6370_adc_data *priv, int chan,
+@@ -98,6 +105,30 @@ adc_unlock:
+ 	return ret;
+ }
+ 
++static int mt6370_adc_get_ibus_scale(struct mt6370_adc_data *priv)
++{
++	switch (priv->vid) {
++	case MT6370_VID_RT5081:
++	case MT6370_VID_RT5081A:
++	case MT6370_VID_MT6370:
++		return 3350;
++	default:
++		return 3875;
++	}
++}
++
++static int mt6370_adc_get_ibat_scale(struct mt6370_adc_data *priv)
++{
++	switch (priv->vid) {
++	case MT6370_VID_RT5081:
++	case MT6370_VID_RT5081A:
++	case MT6370_VID_MT6370:
++		return 2680;
++	default:
++		return 3870;
++	}
++}
++
+ static int mt6370_adc_read_scale(struct mt6370_adc_data *priv,
+ 				 int chan, int *val1, int *val2)
+ {
+@@ -123,7 +154,7 @@ static int mt6370_adc_read_scale(struct mt6370_adc_data *priv,
+ 		case MT6370_AICR_250_mA:
+ 		case MT6370_AICR_300_mA:
+ 		case MT6370_AICR_350_mA:
+-			*val1 = 3350;
++			*val1 = mt6370_adc_get_ibus_scale(priv);
+ 			break;
+ 		default:
+ 			*val1 = 5000;
+@@ -150,7 +181,7 @@ static int mt6370_adc_read_scale(struct mt6370_adc_data *priv,
+ 		case MT6370_ICHG_600_mA:
+ 		case MT6370_ICHG_700_mA:
+ 		case MT6370_ICHG_800_mA:
+-			*val1 = 2680;
++			*val1 = mt6370_adc_get_ibat_scale(priv);
+ 			break;
+ 		default:
+ 			*val1 = 5000;
+@@ -251,6 +282,20 @@ static const struct iio_chan_spec mt6370_adc_channels[] = {
+ 	MT6370_ADC_CHAN(TEMP_JC, IIO_TEMP, 12, BIT(IIO_CHAN_INFO_OFFSET)),
+ };
+ 
++static int mt6370_get_vendor_info(struct mt6370_adc_data *priv)
++{
++	unsigned int dev_info;
++	int ret;
++
++	ret = regmap_read(priv->regmap, MT6370_REG_DEV_INFO, &dev_info);
++	if (ret)
++		return ret;
++
++	priv->vid = FIELD_GET(MT6370_VENID_MASK, dev_info);
++
++	return 0;
++}
++
+ static int mt6370_adc_probe(struct platform_device *pdev)
+ {
+ 	struct device *dev = &pdev->dev;
+@@ -272,6 +317,10 @@ static int mt6370_adc_probe(struct platform_device *pdev)
+ 	priv->regmap = regmap;
+ 	mutex_init(&priv->adc_lock);
+ 
++	ret = mt6370_get_vendor_info(priv);
++	if (ret)
++		return dev_err_probe(dev, ret, "Failed to get vid\n");
++
+ 	ret = regmap_write(priv->regmap, MT6370_REG_CHG_ADC, 0);
+ 	if (ret)
+ 		return dev_err_probe(dev, ret, "Failed to reset ADC\n");
+diff --git a/drivers/iio/adc/mxs-lradc-adc.c b/drivers/iio/adc/mxs-lradc-adc.c
+index bca79a93cbe43..a50f39143d3ea 100644
+--- a/drivers/iio/adc/mxs-lradc-adc.c
++++ b/drivers/iio/adc/mxs-lradc-adc.c
+@@ -757,13 +757,13 @@ static int mxs_lradc_adc_probe(struct platform_device *pdev)
+ 
+ 	ret = mxs_lradc_adc_trigger_init(iio);
+ 	if (ret)
+-		goto err_trig;
++		return ret;
+ 
+ 	ret = iio_triggered_buffer_setup(iio, &iio_pollfunc_store_time,
+ 					 &mxs_lradc_adc_trigger_handler,
+ 					 &mxs_lradc_adc_buffer_ops);
+ 	if (ret)
+-		return ret;
++		goto err_trig;
+ 
+ 	adc->vref_mv = mxs_lradc_adc_vref_mv[lradc->soc];
+ 
+@@ -801,9 +801,9 @@ static int mxs_lradc_adc_probe(struct platform_device *pdev)
+ 
+ err_dev:
+ 	mxs_lradc_adc_hw_stop(adc);
+-	mxs_lradc_adc_trigger_remove(iio);
+-err_trig:
+ 	iio_triggered_buffer_cleanup(iio);
++err_trig:
++	mxs_lradc_adc_trigger_remove(iio);
+ 	return ret;
+ }
+ 
+@@ -814,8 +814,8 @@ static int mxs_lradc_adc_remove(struct platform_device *pdev)
+ 
+ 	iio_device_unregister(iio);
+ 	mxs_lradc_adc_hw_stop(adc);
+-	mxs_lradc_adc_trigger_remove(iio);
+ 	iio_triggered_buffer_cleanup(iio);
++	mxs_lradc_adc_trigger_remove(iio);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/iio/adc/stm32-adc.c b/drivers/iio/adc/stm32-adc.c
+index 45d4e79f8e55e..30afdb6d4950e 100644
+--- a/drivers/iio/adc/stm32-adc.c
++++ b/drivers/iio/adc/stm32-adc.c
+@@ -2006,16 +2006,15 @@ static int stm32_adc_get_legacy_chan_count(struct iio_dev *indio_dev, struct stm
+ 	 * to get the *real* number of channels.
+ 	 */
+ 	ret = device_property_count_u32(dev, "st,adc-diff-channels");
+-	if (ret < 0)
+-		return ret;
+-
+-	ret /= (int)(sizeof(struct stm32_adc_diff_channel) / sizeof(u32));
+-	if (ret > adc_info->max_channels) {
+-		dev_err(&indio_dev->dev, "Bad st,adc-diff-channels?\n");
+-		return -EINVAL;
+-	} else if (ret > 0) {
+-		adc->num_diff = ret;
+-		num_channels += ret;
++	if (ret > 0) {
++		ret /= (int)(sizeof(struct stm32_adc_diff_channel) / sizeof(u32));
++		if (ret > adc_info->max_channels) {
++			dev_err(&indio_dev->dev, "Bad st,adc-diff-channels?\n");
++			return -EINVAL;
++		} else if (ret > 0) {
++			adc->num_diff = ret;
++			num_channels += ret;
++		}
+ 	}
+ 
+ 	/* Optional sample time is provided either for each, or all channels */
+@@ -2037,6 +2036,7 @@ static int stm32_adc_legacy_chan_init(struct iio_dev *indio_dev,
+ 	struct stm32_adc_diff_channel diff[STM32_ADC_CH_MAX];
+ 	struct device *dev = &indio_dev->dev;
+ 	u32 num_diff = adc->num_diff;
++	int num_se = nchans - num_diff;
+ 	int size = num_diff * sizeof(*diff) / sizeof(u32);
+ 	int scan_index = 0, ret, i, c;
+ 	u32 smp = 0, smps[STM32_ADC_CH_MAX], chans[STM32_ADC_CH_MAX];
+@@ -2063,29 +2063,32 @@ static int stm32_adc_legacy_chan_init(struct iio_dev *indio_dev,
+ 			scan_index++;
+ 		}
+ 	}
+-
+-	ret = device_property_read_u32_array(dev, "st,adc-channels", chans,
+-					     nchans);
+-	if (ret)
+-		return ret;
+-
+-	for (c = 0; c < nchans; c++) {
+-		if (chans[c] >= adc_info->max_channels) {
+-			dev_err(&indio_dev->dev, "Invalid channel %d\n",
+-				chans[c]);
+-			return -EINVAL;
++	if (num_se > 0) {
++		ret = device_property_read_u32_array(dev, "st,adc-channels", chans, num_se);
++		if (ret) {
++			dev_err(&indio_dev->dev, "Failed to get st,adc-channels %d\n", ret);
++			return ret;
+ 		}
+ 
+-		/* Channel can't be configured both as single-ended & diff */
+-		for (i = 0; i < num_diff; i++) {
+-			if (chans[c] == diff[i].vinp) {
+-				dev_err(&indio_dev->dev, "channel %d misconfigured\n",	chans[c]);
++		for (c = 0; c < num_se; c++) {
++			if (chans[c] >= adc_info->max_channels) {
++				dev_err(&indio_dev->dev, "Invalid channel %d\n",
++					chans[c]);
+ 				return -EINVAL;
+ 			}
++
++			/* Channel can't be configured both as single-ended & diff */
++			for (i = 0; i < num_diff; i++) {
++				if (chans[c] == diff[i].vinp) {
++					dev_err(&indio_dev->dev, "channel %d misconfigured\n",
++						chans[c]);
++					return -EINVAL;
++				}
++			}
++			stm32_adc_chan_init_one(indio_dev, &channels[scan_index],
++						chans[c], 0, scan_index, false);
++			scan_index++;
+ 		}
+-		stm32_adc_chan_init_one(indio_dev, &channels[scan_index],
+-					chans[c], 0, scan_index, false);
+-		scan_index++;
+ 	}
+ 
+ 	if (adc->nsmps > 0) {
+@@ -2306,7 +2309,7 @@ static int stm32_adc_chan_fw_init(struct iio_dev *indio_dev, bool timestamping)
+ 
+ 	if (legacy)
+ 		ret = stm32_adc_legacy_chan_init(indio_dev, adc, channels,
+-						 num_channels);
++						 timestamping ? num_channels - 1 : num_channels);
+ 	else
+ 		ret = stm32_adc_generic_chan_init(indio_dev, adc, channels);
+ 	if (ret < 0)
+diff --git a/drivers/iio/addac/ad74413r.c b/drivers/iio/addac/ad74413r.c
+index f32c8c2fb26d2..48350ba4a5a5d 100644
+--- a/drivers/iio/addac/ad74413r.c
++++ b/drivers/iio/addac/ad74413r.c
+@@ -981,7 +981,7 @@ static int ad74413r_read_raw(struct iio_dev *indio_dev,
+ 
+ 		ret = ad74413r_get_single_adc_result(indio_dev, chan->channel,
+ 						     val);
+-		if (ret)
++		if (ret < 0)
+ 			return ret;
+ 
+ 		ad74413r_adc_to_resistance_result(*val, val);
+diff --git a/drivers/iio/dac/Makefile b/drivers/iio/dac/Makefile
+index 6c74fea21736a..addd97a788389 100644
+--- a/drivers/iio/dac/Makefile
++++ b/drivers/iio/dac/Makefile
+@@ -17,7 +17,7 @@ obj-$(CONFIG_AD5592R_BASE) += ad5592r-base.o
+ obj-$(CONFIG_AD5592R) += ad5592r.o
+ obj-$(CONFIG_AD5593R) += ad5593r.o
+ obj-$(CONFIG_AD5755) += ad5755.o
+-obj-$(CONFIG_AD5755) += ad5758.o
++obj-$(CONFIG_AD5758) += ad5758.o
+ obj-$(CONFIG_AD5761) += ad5761.o
+ obj-$(CONFIG_AD5764) += ad5764.o
+ obj-$(CONFIG_AD5766) += ad5766.o
+diff --git a/drivers/iio/dac/mcp4725.c b/drivers/iio/dac/mcp4725.c
+index 46bf758760f85..3f5661a3718fe 100644
+--- a/drivers/iio/dac/mcp4725.c
++++ b/drivers/iio/dac/mcp4725.c
+@@ -47,12 +47,18 @@ static int mcp4725_suspend(struct device *dev)
+ 	struct mcp4725_data *data = iio_priv(i2c_get_clientdata(
+ 		to_i2c_client(dev)));
+ 	u8 outbuf[2];
++	int ret;
+ 
+ 	outbuf[0] = (data->powerdown_mode + 1) << 4;
+ 	outbuf[1] = 0;
+ 	data->powerdown = true;
+ 
+-	return i2c_master_send(data->client, outbuf, 2);
++	ret = i2c_master_send(data->client, outbuf, 2);
++	if (ret < 0)
++		return ret;
++	else if (ret != 2)
++		return -EIO;
++	return 0;
+ }
+ 
+ static int mcp4725_resume(struct device *dev)
+@@ -60,13 +66,19 @@ static int mcp4725_resume(struct device *dev)
+ 	struct mcp4725_data *data = iio_priv(i2c_get_clientdata(
+ 		to_i2c_client(dev)));
+ 	u8 outbuf[2];
++	int ret;
+ 
+ 	/* restore previous DAC value */
+ 	outbuf[0] = (data->dac_value >> 8) & 0xf;
+ 	outbuf[1] = data->dac_value & 0xff;
+ 	data->powerdown = false;
+ 
+-	return i2c_master_send(data->client, outbuf, 2);
++	ret = i2c_master_send(data->client, outbuf, 2);
++	if (ret < 0)
++		return ret;
++	else if (ret != 2)
++		return -EIO;
++	return 0;
+ }
+ static DEFINE_SIMPLE_DEV_PM_OPS(mcp4725_pm_ops, mcp4725_suspend,
+ 				mcp4725_resume);
+diff --git a/drivers/iio/imu/inv_icm42600/inv_icm42600_buffer.c b/drivers/iio/imu/inv_icm42600/inv_icm42600_buffer.c
+index 99576b2c171f4..32d7f83642303 100644
+--- a/drivers/iio/imu/inv_icm42600/inv_icm42600_buffer.c
++++ b/drivers/iio/imu/inv_icm42600/inv_icm42600_buffer.c
+@@ -275,9 +275,14 @@ static int inv_icm42600_buffer_preenable(struct iio_dev *indio_dev)
+ {
+ 	struct inv_icm42600_state *st = iio_device_get_drvdata(indio_dev);
+ 	struct device *dev = regmap_get_device(st->map);
++	struct inv_icm42600_timestamp *ts = iio_priv(indio_dev);
+ 
+ 	pm_runtime_get_sync(dev);
+ 
++	mutex_lock(&st->lock);
++	inv_icm42600_timestamp_reset(ts);
++	mutex_unlock(&st->lock);
++
+ 	return 0;
+ }
+ 
+@@ -375,7 +380,6 @@ static int inv_icm42600_buffer_postdisable(struct iio_dev *indio_dev)
+ 	struct device *dev = regmap_get_device(st->map);
+ 	unsigned int sensor;
+ 	unsigned int *watermark;
+-	struct inv_icm42600_timestamp *ts;
+ 	struct inv_icm42600_sensor_conf conf = INV_ICM42600_SENSOR_CONF_INIT;
+ 	unsigned int sleep_temp = 0;
+ 	unsigned int sleep_sensor = 0;
+@@ -385,11 +389,9 @@ static int inv_icm42600_buffer_postdisable(struct iio_dev *indio_dev)
+ 	if (indio_dev == st->indio_gyro) {
+ 		sensor = INV_ICM42600_SENSOR_GYRO;
+ 		watermark = &st->fifo.watermark.gyro;
+-		ts = iio_priv(st->indio_gyro);
+ 	} else if (indio_dev == st->indio_accel) {
+ 		sensor = INV_ICM42600_SENSOR_ACCEL;
+ 		watermark = &st->fifo.watermark.accel;
+-		ts = iio_priv(st->indio_accel);
+ 	} else {
+ 		return -EINVAL;
+ 	}
+@@ -417,8 +419,6 @@ static int inv_icm42600_buffer_postdisable(struct iio_dev *indio_dev)
+ 	if (!st->fifo.on)
+ 		ret = inv_icm42600_set_temp_conf(st, false, &sleep_temp);
+ 
+-	inv_icm42600_timestamp_reset(ts);
+-
+ out_unlock:
+ 	mutex_unlock(&st->lock);
+ 
+diff --git a/drivers/iio/light/vcnl4035.c b/drivers/iio/light/vcnl4035.c
+index 84148b9440006..6b511b416c04a 100644
+--- a/drivers/iio/light/vcnl4035.c
++++ b/drivers/iio/light/vcnl4035.c
+@@ -8,6 +8,7 @@
+  * TODO: Proximity
+  */
+ #include <linux/bitops.h>
++#include <linux/bitfield.h>
+ #include <linux/i2c.h>
+ #include <linux/module.h>
+ #include <linux/pm_runtime.h>
+@@ -42,6 +43,7 @@
+ #define VCNL4035_ALS_PERS_MASK		GENMASK(3, 2)
+ #define VCNL4035_INT_ALS_IF_H_MASK	BIT(12)
+ #define VCNL4035_INT_ALS_IF_L_MASK	BIT(13)
++#define VCNL4035_DEV_ID_MASK		GENMASK(7, 0)
+ 
+ /* Default values */
+ #define VCNL4035_MODE_ALS_ENABLE	BIT(0)
+@@ -413,6 +415,7 @@ static int vcnl4035_init(struct vcnl4035_data *data)
+ 		return ret;
+ 	}
+ 
++	id = FIELD_GET(VCNL4035_DEV_ID_MASK, id);
+ 	if (id != VCNL4035_DEV_ID_VAL) {
+ 		dev_err(&data->client->dev, "Wrong id, got %x, expected %x\n",
+ 			id, VCNL4035_DEV_ID_VAL);
+diff --git a/drivers/iio/magnetometer/tmag5273.c b/drivers/iio/magnetometer/tmag5273.c
+index 28bb7efe8df8c..e155a75b3cd29 100644
+--- a/drivers/iio/magnetometer/tmag5273.c
++++ b/drivers/iio/magnetometer/tmag5273.c
+@@ -296,12 +296,13 @@ static int tmag5273_read_raw(struct iio_dev *indio_dev,
+ 			return ret;
+ 
+ 		ret = tmag5273_get_measure(data, &t, &x, &y, &z, &angle, &magnitude);
+-		if (ret)
+-			return ret;
+ 
+ 		pm_runtime_mark_last_busy(data->dev);
+ 		pm_runtime_put_autosuspend(data->dev);
+ 
++		if (ret)
++			return ret;
++
+ 		switch (chan->address) {
+ 		case TEMPERATURE:
+ 			*val = t;
+diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+index 989edc7896338..94222de1d3719 100644
+--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c
++++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+@@ -3241,9 +3241,7 @@ static int bnxt_re_process_raw_qp_pkt_rx(struct bnxt_re_qp *gsi_qp,
+ 	udwr.remote_qkey = gsi_sqp->qplib_qp.qkey;
+ 
+ 	/* post data received  in the send queue */
+-	rc = bnxt_re_post_send_shadow_qp(rdev, gsi_sqp, swr);
+-
+-	return 0;
++	return bnxt_re_post_send_shadow_qp(rdev, gsi_sqp, swr);
+ }
+ 
+ static void bnxt_re_process_res_rawqp1_wc(struct ib_wc *wc,
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.c b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+index 96e581ced50e2..ab2cc1c67f70b 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+@@ -2043,6 +2043,12 @@ int bnxt_qplib_create_cq(struct bnxt_qplib_res *res, struct bnxt_qplib_cq *cq)
+ 	u32 pg_sz_lvl;
+ 	int rc;
+ 
++	if (!cq->dpi) {
++		dev_err(&rcfw->pdev->dev,
++			"FP: CREATE_CQ failed due to NULL DPI\n");
++		return -EINVAL;
++	}
++
+ 	hwq_attr.res = res;
+ 	hwq_attr.depth = cq->max_wqe;
+ 	hwq_attr.stride = sizeof(struct cq_base);
+@@ -2054,11 +2060,6 @@ int bnxt_qplib_create_cq(struct bnxt_qplib_res *res, struct bnxt_qplib_cq *cq)
+ 
+ 	RCFW_CMD_PREP(req, CREATE_CQ, cmd_flags);
+ 
+-	if (!cq->dpi) {
+-		dev_err(&rcfw->pdev->dev,
+-			"FP: CREATE_CQ failed due to NULL DPI\n");
+-		return -EINVAL;
+-	}
+ 	req.dpi = cpu_to_le32(cq->dpi->dpi);
+ 	req.cq_handle = cpu_to_le64(cq->cq_handle);
+ 	req.cq_size = cpu_to_le32(cq->hwq.max_elements);
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_res.c b/drivers/infiniband/hw/bnxt_re/qplib_res.c
+index 126d4f26f75ad..81b0c5e879f9e 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_res.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_res.c
+@@ -215,17 +215,9 @@ int bnxt_qplib_alloc_init_hwq(struct bnxt_qplib_hwq *hwq,
+ 			return -EINVAL;
+ 		hwq_attr->sginfo->npages = npages;
+ 	} else {
+-		unsigned long sginfo_num_pages = ib_umem_num_dma_blocks(
+-			hwq_attr->sginfo->umem, hwq_attr->sginfo->pgsize);
+-
++		npages = ib_umem_num_dma_blocks(hwq_attr->sginfo->umem,
++						hwq_attr->sginfo->pgsize);
+ 		hwq->is_user = true;
+-		npages = sginfo_num_pages;
+-		npages = (npages * PAGE_SIZE) /
+-			  BIT_ULL(hwq_attr->sginfo->pgshft);
+-		if ((sginfo_num_pages * PAGE_SIZE) %
+-		     BIT_ULL(hwq_attr->sginfo->pgshft))
+-			if (!npages)
+-				npages++;
+ 	}
+ 
+ 	if (npages == MAX_PBL_LVL_0_PGS && !hwq_attr->sginfo->nopte) {
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_sp.c b/drivers/infiniband/hw/bnxt_re/qplib_sp.c
+index b802981b71716..bae7d89261439 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_sp.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_sp.c
+@@ -584,16 +584,15 @@ int bnxt_qplib_reg_mr(struct bnxt_qplib_res *res, struct bnxt_qplib_mrw *mr,
+ 		/* Free the hwq if it already exist, must be a rereg */
+ 		if (mr->hwq.max_elements)
+ 			bnxt_qplib_free_hwq(res, &mr->hwq);
+-		/* Use system PAGE_SIZE */
+ 		hwq_attr.res = res;
+ 		hwq_attr.depth = pages;
+-		hwq_attr.stride = buf_pg_size;
++		hwq_attr.stride = sizeof(dma_addr_t);
+ 		hwq_attr.type = HWQ_TYPE_MR;
+ 		hwq_attr.sginfo = &sginfo;
+ 		hwq_attr.sginfo->umem = umem;
+ 		hwq_attr.sginfo->npages = pages;
+-		hwq_attr.sginfo->pgsize = PAGE_SIZE;
+-		hwq_attr.sginfo->pgshft = PAGE_SHIFT;
++		hwq_attr.sginfo->pgsize = buf_pg_size;
++		hwq_attr.sginfo->pgshft = ilog2(buf_pg_size);
+ 		rc = bnxt_qplib_alloc_init_hwq(&mr->hwq, &hwq_attr);
+ 		if (rc) {
+ 			dev_err(&res->pdev->dev,
+diff --git a/drivers/infiniband/hw/efa/efa_verbs.c b/drivers/infiniband/hw/efa/efa_verbs.c
+index 31454643f8c54..f9526a4c75b26 100644
+--- a/drivers/infiniband/hw/efa/efa_verbs.c
++++ b/drivers/infiniband/hw/efa/efa_verbs.c
+@@ -1397,7 +1397,7 @@ static int pbl_continuous_initialize(struct efa_dev *dev,
+  */
+ static int pbl_indirect_initialize(struct efa_dev *dev, struct pbl_context *pbl)
+ {
+-	u32 size_in_pages = DIV_ROUND_UP(pbl->pbl_buf_size_in_bytes, PAGE_SIZE);
++	u32 size_in_pages = DIV_ROUND_UP(pbl->pbl_buf_size_in_bytes, EFA_CHUNK_PAYLOAD_SIZE);
+ 	struct scatterlist *sgl;
+ 	int sg_dma_cnt, err;
+ 
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+index dbf97fe5948ff..9369f93afaedd 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+@@ -4664,11 +4664,9 @@ static int modify_qp_init_to_rtr(struct ib_qp *ibqp,
+ 	mtu = ib_mtu_enum_to_int(ib_mtu);
+ 	if (WARN_ON(mtu <= 0))
+ 		return -EINVAL;
+-#define MAX_LP_MSG_LEN 16384
+-	/* MTU * (2 ^ LP_PKTN_INI) shouldn't be bigger than 16KB */
+-	lp_pktn_ini = ilog2(MAX_LP_MSG_LEN / mtu);
+-	if (WARN_ON(lp_pktn_ini >= 0xF))
+-		return -EINVAL;
++#define MIN_LP_MSG_LEN 1024
++	/* mtu * (2 ^ lp_pktn_ini) should be in the range of 1024 to mtu */
++	lp_pktn_ini = ilog2(max(mtu, MIN_LP_MSG_LEN) / mtu);
+ 
+ 	if (attr_mask & IB_QP_PATH_MTU) {
+ 		hr_reg_write(context, QPC_MTU, ib_mtu);
+@@ -5093,7 +5091,6 @@ static int hns_roce_v2_set_abs_fields(struct ib_qp *ibqp,
+ static bool check_qp_timeout_cfg_range(struct hns_roce_dev *hr_dev, u8 *timeout)
+ {
+ #define QP_ACK_TIMEOUT_MAX_HIP08 20
+-#define QP_ACK_TIMEOUT_OFFSET 10
+ #define QP_ACK_TIMEOUT_MAX 31
+ 
+ 	if (hr_dev->pci_dev->revision == PCI_REVISION_ID_HIP08) {
+@@ -5102,7 +5099,7 @@ static bool check_qp_timeout_cfg_range(struct hns_roce_dev *hr_dev, u8 *timeout)
+ 				   "local ACK timeout shall be 0 to 20.\n");
+ 			return false;
+ 		}
+-		*timeout += QP_ACK_TIMEOUT_OFFSET;
++		*timeout += HNS_ROCE_V2_QP_ACK_TIMEOUT_OFS_HIP08;
+ 	} else if (hr_dev->pci_dev->revision > PCI_REVISION_ID_HIP08) {
+ 		if (*timeout > QP_ACK_TIMEOUT_MAX) {
+ 			ibdev_warn(&hr_dev->ib_dev,
+@@ -5388,6 +5385,18 @@ out:
+ 	return ret;
+ }
+ 
++static u8 get_qp_timeout_attr(struct hns_roce_dev *hr_dev,
++			      struct hns_roce_v2_qp_context *context)
++{
++	u8 timeout;
++
++	timeout = (u8)hr_reg_read(context, QPC_AT);
++	if (hr_dev->pci_dev->revision == PCI_REVISION_ID_HIP08)
++		timeout -= HNS_ROCE_V2_QP_ACK_TIMEOUT_OFS_HIP08;
++
++	return timeout;
++}
++
+ static int hns_roce_v2_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr,
+ 				int qp_attr_mask,
+ 				struct ib_qp_init_attr *qp_init_attr)
+@@ -5465,7 +5474,7 @@ static int hns_roce_v2_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr,
+ 	qp_attr->max_dest_rd_atomic = 1 << hr_reg_read(&context, QPC_RR_MAX);
+ 
+ 	qp_attr->min_rnr_timer = (u8)hr_reg_read(&context, QPC_MIN_RNR_TIME);
+-	qp_attr->timeout = (u8)hr_reg_read(&context, QPC_AT);
++	qp_attr->timeout = get_qp_timeout_attr(hr_dev, &context);
+ 	qp_attr->retry_cnt = hr_reg_read(&context, QPC_RETRY_NUM_INIT);
+ 	qp_attr->rnr_retry = hr_reg_read(&context, QPC_RNR_NUM_INIT);
+ 
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
+index af9d00225cdf5..b5a336e182f83 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
+@@ -72,6 +72,8 @@
+ #define HNS_ROCE_V2_IDX_ENTRY_SZ		4
+ 
+ #define HNS_ROCE_V2_SCCC_SZ			32
++#define HNS_ROCE_V2_QP_ACK_TIMEOUT_OFS_HIP08    10
++
+ #define HNS_ROCE_V3_SCCC_SZ			64
+ #define HNS_ROCE_V3_GMV_ENTRY_SZ		32
+ 
+diff --git a/drivers/infiniband/hw/hns/hns_roce_mr.c b/drivers/infiniband/hw/hns/hns_roce_mr.c
+index 37a5cf62f88b4..14376490ac226 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_mr.c
++++ b/drivers/infiniband/hw/hns/hns_roce_mr.c
+@@ -33,6 +33,7 @@
+ 
+ #include <linux/vmalloc.h>
+ #include <rdma/ib_umem.h>
++#include <linux/math.h>
+ #include "hns_roce_device.h"
+ #include "hns_roce_cmd.h"
+ #include "hns_roce_hem.h"
+@@ -909,6 +910,44 @@ static int mtr_init_buf_cfg(struct hns_roce_dev *hr_dev,
+ 	return page_cnt;
+ }
+ 
++static u64 cal_pages_per_l1ba(unsigned int ba_per_bt, unsigned int hopnum)
++{
++	return int_pow(ba_per_bt, hopnum - 1);
++}
++
++static unsigned int cal_best_bt_pg_sz(struct hns_roce_dev *hr_dev,
++				      struct hns_roce_mtr *mtr,
++				      unsigned int pg_shift)
++{
++	unsigned long cap = hr_dev->caps.page_size_cap;
++	struct hns_roce_buf_region *re;
++	unsigned int pgs_per_l1ba;
++	unsigned int ba_per_bt;
++	unsigned int ba_num;
++	int i;
++
++	for_each_set_bit_from(pg_shift, &cap, sizeof(cap) * BITS_PER_BYTE) {
++		if (!(BIT(pg_shift) & cap))
++			continue;
++
++		ba_per_bt = BIT(pg_shift) / BA_BYTE_LEN;
++		ba_num = 0;
++		for (i = 0; i < mtr->hem_cfg.region_count; i++) {
++			re = &mtr->hem_cfg.region[i];
++			if (re->hopnum == 0)
++				continue;
++
++			pgs_per_l1ba = cal_pages_per_l1ba(ba_per_bt, re->hopnum);
++			ba_num += DIV_ROUND_UP(re->count, pgs_per_l1ba);
++		}
++
++		if (ba_num <= ba_per_bt)
++			return pg_shift;
++	}
++
++	return 0;
++}
++
+ static int mtr_alloc_mtt(struct hns_roce_dev *hr_dev, struct hns_roce_mtr *mtr,
+ 			 unsigned int ba_page_shift)
+ {
+@@ -917,6 +956,10 @@ static int mtr_alloc_mtt(struct hns_roce_dev *hr_dev, struct hns_roce_mtr *mtr,
+ 
+ 	hns_roce_hem_list_init(&mtr->hem_list);
+ 	if (!cfg->is_direct) {
++		ba_page_shift = cal_best_bt_pg_sz(hr_dev, mtr, ba_page_shift);
++		if (!ba_page_shift)
++			return -ERANGE;
++
+ 		ret = hns_roce_hem_list_request(hr_dev, &mtr->hem_list,
+ 						cfg->region, cfg->region_count,
+ 						ba_page_shift);
+diff --git a/drivers/infiniband/hw/irdma/verbs.c b/drivers/infiniband/hw/irdma/verbs.c
+index 1b2e3e800c9a6..d0bb21d3007c2 100644
+--- a/drivers/infiniband/hw/irdma/verbs.c
++++ b/drivers/infiniband/hw/irdma/verbs.c
+@@ -522,11 +522,6 @@ static int irdma_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata)
+ 	if (!iwqp->user_mode)
+ 		cancel_delayed_work_sync(&iwqp->dwork_flush);
+ 
+-	irdma_qp_rem_ref(&iwqp->ibqp);
+-	wait_for_completion(&iwqp->free_qp);
+-	irdma_free_lsmm_rsrc(iwqp);
+-	irdma_cqp_qp_destroy_cmd(&iwdev->rf->sc_dev, &iwqp->sc_qp);
+-
+ 	if (!iwqp->user_mode) {
+ 		if (iwqp->iwscq) {
+ 			irdma_clean_cqes(iwqp, iwqp->iwscq);
+@@ -534,6 +529,12 @@ static int irdma_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata)
+ 				irdma_clean_cqes(iwqp, iwqp->iwrcq);
+ 		}
+ 	}
++
++	irdma_qp_rem_ref(&iwqp->ibqp);
++	wait_for_completion(&iwqp->free_qp);
++	irdma_free_lsmm_rsrc(iwqp);
++	irdma_cqp_qp_destroy_cmd(&iwdev->rf->sc_dev, &iwqp->sc_qp);
++
+ 	irdma_remove_push_mmap_entries(iwqp);
+ 	irdma_free_qp_rsrc(iwqp);
+ 
+@@ -3296,6 +3297,7 @@ static int irdma_post_send(struct ib_qp *ibqp,
+ 			break;
+ 		case IB_WR_LOCAL_INV:
+ 			info.op_type = IRDMA_OP_TYPE_INV_STAG;
++			info.local_fence = info.read_fence;
+ 			info.op.inv_local_stag.target_stag = ib_wr->ex.invalidate_rkey;
+ 			err = irdma_uk_stag_local_invalidate(ukqp, &info, true);
+ 			break;
+diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
+index 889c7efd050bc..18e68fbaec884 100644
+--- a/drivers/iommu/Kconfig
++++ b/drivers/iommu/Kconfig
+@@ -287,6 +287,7 @@ config EXYNOS_IOMMU_DEBUG
+ config IPMMU_VMSA
+ 	bool "Renesas VMSA-compatible IPMMU"
+ 	depends on ARCH_RENESAS || COMPILE_TEST
++	depends on ARM || ARM64 || COMPILE_TEST
+ 	depends on !GENERIC_ATOMIC64	# for IOMMU_IO_PGTABLE_LPAE
+ 	select IOMMU_API
+ 	select IOMMU_IO_PGTABLE_LPAE
+diff --git a/drivers/iommu/amd/amd_iommu.h b/drivers/iommu/amd/amd_iommu.h
+index c160a332ce339..471f40351f4c8 100644
+--- a/drivers/iommu/amd/amd_iommu.h
++++ b/drivers/iommu/amd/amd_iommu.h
+@@ -15,9 +15,7 @@ extern irqreturn_t amd_iommu_int_thread(int irq, void *data);
+ extern irqreturn_t amd_iommu_int_handler(int irq, void *data);
+ extern void amd_iommu_apply_erratum_63(struct amd_iommu *iommu, u16 devid);
+ extern void amd_iommu_restart_event_logging(struct amd_iommu *iommu);
+-extern int amd_iommu_init_devices(void);
+-extern void amd_iommu_uninit_devices(void);
+-extern void amd_iommu_init_notifier(void);
++extern void amd_iommu_restart_ga_log(struct amd_iommu *iommu);
+ extern void amd_iommu_set_rlookup_table(struct amd_iommu *iommu, u16 devid);
+ 
+ #ifdef CONFIG_AMD_IOMMU_DEBUGFS
+diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c
+index 19a46b9f73574..fd487c33b28aa 100644
+--- a/drivers/iommu/amd/init.c
++++ b/drivers/iommu/amd/init.c
+@@ -751,6 +751,30 @@ void amd_iommu_restart_event_logging(struct amd_iommu *iommu)
+ 	iommu_feature_enable(iommu, CONTROL_EVT_LOG_EN);
+ }
+ 
++/*
++ * This function restarts event logging in case the IOMMU experienced
++ * an GA log overflow.
++ */
++void amd_iommu_restart_ga_log(struct amd_iommu *iommu)
++{
++	u32 status;
++
++	status = readl(iommu->mmio_base + MMIO_STATUS_OFFSET);
++	if (status & MMIO_STATUS_GALOG_RUN_MASK)
++		return;
++
++	pr_info_ratelimited("IOMMU GA Log restarting\n");
++
++	iommu_feature_disable(iommu, CONTROL_GALOG_EN);
++	iommu_feature_disable(iommu, CONTROL_GAINT_EN);
++
++	writel(MMIO_STATUS_GALOG_OVERFLOW_MASK,
++	       iommu->mmio_base + MMIO_STATUS_OFFSET);
++
++	iommu_feature_enable(iommu, CONTROL_GAINT_EN);
++	iommu_feature_enable(iommu, CONTROL_GALOG_EN);
++}
++
+ /*
+  * This function resets the command buffer if the IOMMU stopped fetching
+  * commands from it.
+diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
+index 167da5b1a5e31..cd0e3fb78c3ff 100644
+--- a/drivers/iommu/amd/iommu.c
++++ b/drivers/iommu/amd/iommu.c
+@@ -845,6 +845,7 @@ amd_iommu_set_pci_msi_domain(struct device *dev, struct amd_iommu *iommu) { }
+ 	(MMIO_STATUS_EVT_OVERFLOW_INT_MASK | \
+ 	 MMIO_STATUS_EVT_INT_MASK | \
+ 	 MMIO_STATUS_PPR_INT_MASK | \
++	 MMIO_STATUS_GALOG_OVERFLOW_MASK | \
+ 	 MMIO_STATUS_GALOG_INT_MASK)
+ 
+ irqreturn_t amd_iommu_int_thread(int irq, void *data)
+@@ -868,10 +869,16 @@ irqreturn_t amd_iommu_int_thread(int irq, void *data)
+ 		}
+ 
+ #ifdef CONFIG_IRQ_REMAP
+-		if (status & MMIO_STATUS_GALOG_INT_MASK) {
++		if (status & (MMIO_STATUS_GALOG_INT_MASK |
++			      MMIO_STATUS_GALOG_OVERFLOW_MASK)) {
+ 			pr_devel("Processing IOMMU GA Log\n");
+ 			iommu_poll_ga_log(iommu);
+ 		}
++
++		if (status & MMIO_STATUS_GALOG_OVERFLOW_MASK) {
++			pr_info_ratelimited("IOMMU GA Log overflow\n");
++			amd_iommu_restart_ga_log(iommu);
++		}
+ #endif
+ 
+ 		if (status & MMIO_STATUS_EVT_OVERFLOW_INT_MASK) {
+@@ -2058,7 +2065,7 @@ static struct protection_domain *protection_domain_alloc(unsigned int type)
+ {
+ 	struct io_pgtable_ops *pgtbl_ops;
+ 	struct protection_domain *domain;
+-	int pgtable = amd_iommu_pgtable;
++	int pgtable;
+ 	int mode = DEFAULT_PGTABLE_LEVEL;
+ 	int ret;
+ 
+@@ -2075,6 +2082,10 @@ static struct protection_domain *protection_domain_alloc(unsigned int type)
+ 		mode = PAGE_MODE_NONE;
+ 	} else if (type == IOMMU_DOMAIN_UNMANAGED) {
+ 		pgtable = AMD_IOMMU_V1;
++	} else if (type == IOMMU_DOMAIN_DMA || type == IOMMU_DOMAIN_DMA_FQ) {
++		pgtable = amd_iommu_pgtable;
++	} else {
++		return NULL;
+ 	}
+ 
+ 	switch (pgtable) {
+@@ -2107,6 +2118,15 @@ out_err:
+ 	return NULL;
+ }
+ 
++static inline u64 dma_max_address(void)
++{
++	if (amd_iommu_pgtable == AMD_IOMMU_V1)
++		return ~0ULL;
++
++	/* V2 with 4 level page table */
++	return ((1ULL << PM_LEVEL_SHIFT(PAGE_MODE_4_LEVEL)) - 1);
++}
++
+ static struct iommu_domain *amd_iommu_domain_alloc(unsigned type)
+ {
+ 	struct protection_domain *domain;
+@@ -2123,7 +2143,7 @@ static struct iommu_domain *amd_iommu_domain_alloc(unsigned type)
+ 		return NULL;
+ 
+ 	domain->domain.geometry.aperture_start = 0;
+-	domain->domain.geometry.aperture_end   = ~0ULL;
++	domain->domain.geometry.aperture_end   = dma_max_address();
+ 	domain->domain.geometry.force_aperture = true;
+ 
+ 	return &domain->domain;
+@@ -2376,7 +2396,7 @@ static void amd_iommu_iotlb_sync(struct iommu_domain *domain,
+ 	unsigned long flags;
+ 
+ 	spin_lock_irqsave(&dom->lock, flags);
+-	domain_flush_pages(dom, gather->start, gather->end - gather->start, 1);
++	domain_flush_pages(dom, gather->start, gather->end - gather->start + 1, 1);
+ 	amd_iommu_domain_flush_complete(dom);
+ 	spin_unlock_irqrestore(&dom->lock, flags);
+ }
+@@ -3482,8 +3502,7 @@ int amd_iommu_activate_guest_mode(void *data)
+ 	struct irte_ga *entry = (struct irte_ga *) ir_data->entry;
+ 	u64 valid;
+ 
+-	if (!AMD_IOMMU_GUEST_IR_VAPIC(amd_iommu_guest_ir) ||
+-	    !entry || entry->lo.fields_vapic.guest_mode)
++	if (!AMD_IOMMU_GUEST_IR_VAPIC(amd_iommu_guest_ir) || !entry)
+ 		return 0;
+ 
+ 	valid = entry->lo.fields_vapic.valid;
+diff --git a/drivers/iommu/mtk_iommu.c b/drivers/iommu/mtk_iommu.c
+index 6a00ce208dc2b..248b8c2bc4071 100644
+--- a/drivers/iommu/mtk_iommu.c
++++ b/drivers/iommu/mtk_iommu.c
+@@ -738,7 +738,8 @@ static void mtk_iommu_flush_iotlb_all(struct iommu_domain *domain)
+ {
+ 	struct mtk_iommu_domain *dom = to_mtk_domain(domain);
+ 
+-	mtk_iommu_tlb_flush_all(dom->bank->parent_data);
++	if (dom->bank)
++		mtk_iommu_tlb_flush_all(dom->bank->parent_data);
+ }
+ 
+ static void mtk_iommu_iotlb_sync(struct iommu_domain *domain,
+diff --git a/drivers/iommu/rockchip-iommu.c b/drivers/iommu/rockchip-iommu.c
+index f30db22ea5d7a..31cd1e2929e9f 100644
+--- a/drivers/iommu/rockchip-iommu.c
++++ b/drivers/iommu/rockchip-iommu.c
+@@ -1302,20 +1302,22 @@ static int rk_iommu_probe(struct platform_device *pdev)
+ 	for (i = 0; i < iommu->num_irq; i++) {
+ 		int irq = platform_get_irq(pdev, i);
+ 
+-		if (irq < 0)
+-			return irq;
++		if (irq < 0) {
++			err = irq;
++			goto err_pm_disable;
++		}
+ 
+ 		err = devm_request_irq(iommu->dev, irq, rk_iommu_irq,
+ 				       IRQF_SHARED, dev_name(dev), iommu);
+-		if (err) {
+-			pm_runtime_disable(dev);
+-			goto err_remove_sysfs;
+-		}
++		if (err)
++			goto err_pm_disable;
+ 	}
+ 
+ 	dma_set_mask_and_coherent(dev, rk_ops->dma_bit_mask);
+ 
+ 	return 0;
++err_pm_disable:
++	pm_runtime_disable(dev);
+ err_remove_sysfs:
+ 	iommu_device_sysfs_remove(&iommu->iommu);
+ err_put_group:
+diff --git a/drivers/mailbox/mailbox-test.c b/drivers/mailbox/mailbox-test.c
+index 4555d678fadda..abcee58e851c2 100644
+--- a/drivers/mailbox/mailbox-test.c
++++ b/drivers/mailbox/mailbox-test.c
+@@ -12,6 +12,7 @@
+ #include <linux/kernel.h>
+ #include <linux/mailbox_client.h>
+ #include <linux/module.h>
++#include <linux/mutex.h>
+ #include <linux/of.h>
+ #include <linux/platform_device.h>
+ #include <linux/poll.h>
+@@ -38,6 +39,7 @@ struct mbox_test_device {
+ 	char			*signal;
+ 	char			*message;
+ 	spinlock_t		lock;
++	struct mutex		mutex;
+ 	wait_queue_head_t	waitq;
+ 	struct fasync_struct	*async_queue;
+ 	struct dentry		*root_debugfs_dir;
+@@ -95,6 +97,7 @@ static ssize_t mbox_test_message_write(struct file *filp,
+ 				       size_t count, loff_t *ppos)
+ {
+ 	struct mbox_test_device *tdev = filp->private_data;
++	char *message;
+ 	void *data;
+ 	int ret;
+ 
+@@ -110,10 +113,13 @@ static ssize_t mbox_test_message_write(struct file *filp,
+ 		return -EINVAL;
+ 	}
+ 
+-	tdev->message = kzalloc(MBOX_MAX_MSG_LEN, GFP_KERNEL);
+-	if (!tdev->message)
++	message = kzalloc(MBOX_MAX_MSG_LEN, GFP_KERNEL);
++	if (!message)
+ 		return -ENOMEM;
+ 
++	mutex_lock(&tdev->mutex);
++
++	tdev->message = message;
+ 	ret = copy_from_user(tdev->message, userbuf, count);
+ 	if (ret) {
+ 		ret = -EFAULT;
+@@ -144,6 +150,8 @@ out:
+ 	kfree(tdev->message);
+ 	tdev->signal = NULL;
+ 
++	mutex_unlock(&tdev->mutex);
++
+ 	return ret < 0 ? ret : count;
+ }
+ 
+@@ -392,6 +400,7 @@ static int mbox_test_probe(struct platform_device *pdev)
+ 	platform_set_drvdata(pdev, tdev);
+ 
+ 	spin_lock_init(&tdev->lock);
++	mutex_init(&tdev->mutex);
+ 
+ 	if (tdev->rx_channel) {
+ 		tdev->rx_buffer = devm_kzalloc(&pdev->dev,
+diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
+index f787c9e5b10e7..fbef3c9badb65 100644
+--- a/drivers/md/raid5.c
++++ b/drivers/md/raid5.c
+@@ -5516,7 +5516,7 @@ static int raid5_read_one_chunk(struct mddev *mddev, struct bio *raid_bio)
+ 
+ 	sector = raid5_compute_sector(conf, raid_bio->bi_iter.bi_sector, 0,
+ 				      &dd_idx, NULL);
+-	end_sector = bio_end_sector(raid_bio);
++	end_sector = sector + bio_sectors(raid_bio);
+ 
+ 	rcu_read_lock();
+ 	if (r5c_big_stripe_cached(conf, sector))
+diff --git a/drivers/media/dvb-core/dvb_ca_en50221.c b/drivers/media/dvb-core/dvb_ca_en50221.c
+index c2d2792227f86..baf64540dc00a 100644
+--- a/drivers/media/dvb-core/dvb_ca_en50221.c
++++ b/drivers/media/dvb-core/dvb_ca_en50221.c
+@@ -151,6 +151,12 @@ struct dvb_ca_private {
+ 
+ 	/* mutex serializing ioctls */
+ 	struct mutex ioctl_mutex;
++
++	/* A mutex used when a device is disconnected */
++	struct mutex remove_mutex;
++
++	/* Whether the device is disconnected */
++	int exit;
+ };
+ 
+ static void dvb_ca_private_free(struct dvb_ca_private *ca)
+@@ -187,7 +193,7 @@ static void dvb_ca_en50221_thread_wakeup(struct dvb_ca_private *ca);
+ static int dvb_ca_en50221_read_data(struct dvb_ca_private *ca, int slot,
+ 				    u8 *ebuf, int ecount);
+ static int dvb_ca_en50221_write_data(struct dvb_ca_private *ca, int slot,
+-				     u8 *ebuf, int ecount);
++				     u8 *ebuf, int ecount, int size_write_flag);
+ 
+ /**
+  * findstr - Safely find needle in haystack.
+@@ -370,7 +376,7 @@ static int dvb_ca_en50221_link_init(struct dvb_ca_private *ca, int slot)
+ 	ret = dvb_ca_en50221_wait_if_status(ca, slot, STATUSREG_FR, HZ / 10);
+ 	if (ret)
+ 		return ret;
+-	ret = dvb_ca_en50221_write_data(ca, slot, buf, 2);
++	ret = dvb_ca_en50221_write_data(ca, slot, buf, 2, CMDREG_SW);
+ 	if (ret != 2)
+ 		return -EIO;
+ 	ret = ca->pub->write_cam_control(ca->pub, slot, CTRLIF_COMMAND, IRQEN);
+@@ -778,11 +784,13 @@ exit:
+  * @buf: The data in this buffer is treated as a complete link-level packet to
+  *	 be written.
+  * @bytes_write: Size of ebuf.
++ * @size_write_flag: A flag on Command Register which says whether the link size
++ * information will be writen or not.
+  *
+  * return: Number of bytes written, or < 0 on error.
+  */
+ static int dvb_ca_en50221_write_data(struct dvb_ca_private *ca, int slot,
+-				     u8 *buf, int bytes_write)
++				     u8 *buf, int bytes_write, int size_write_flag)
+ {
+ 	struct dvb_ca_slot *sl = &ca->slot_info[slot];
+ 	int status;
+@@ -817,7 +825,7 @@ static int dvb_ca_en50221_write_data(struct dvb_ca_private *ca, int slot,
+ 
+ 	/* OK, set HC bit */
+ 	status = ca->pub->write_cam_control(ca->pub, slot, CTRLIF_COMMAND,
+-					    IRQEN | CMDREG_HC);
++					    IRQEN | CMDREG_HC | size_write_flag);
+ 	if (status)
+ 		goto exit;
+ 
+@@ -1508,7 +1516,7 @@ static ssize_t dvb_ca_en50221_io_write(struct file *file,
+ 
+ 			mutex_lock(&sl->slot_lock);
+ 			status = dvb_ca_en50221_write_data(ca, slot, fragbuf,
+-							   fraglen + 2);
++							   fraglen + 2, 0);
+ 			mutex_unlock(&sl->slot_lock);
+ 			if (status == (fraglen + 2)) {
+ 				written = 1;
+@@ -1709,12 +1717,22 @@ static int dvb_ca_en50221_io_open(struct inode *inode, struct file *file)
+ 
+ 	dprintk("%s\n", __func__);
+ 
+-	if (!try_module_get(ca->pub->owner))
++	mutex_lock(&ca->remove_mutex);
++
++	if (ca->exit) {
++		mutex_unlock(&ca->remove_mutex);
++		return -ENODEV;
++	}
++
++	if (!try_module_get(ca->pub->owner)) {
++		mutex_unlock(&ca->remove_mutex);
+ 		return -EIO;
++	}
+ 
+ 	err = dvb_generic_open(inode, file);
+ 	if (err < 0) {
+ 		module_put(ca->pub->owner);
++		mutex_unlock(&ca->remove_mutex);
+ 		return err;
+ 	}
+ 
+@@ -1739,6 +1757,7 @@ static int dvb_ca_en50221_io_open(struct inode *inode, struct file *file)
+ 
+ 	dvb_ca_private_get(ca);
+ 
++	mutex_unlock(&ca->remove_mutex);
+ 	return 0;
+ }
+ 
+@@ -1758,6 +1777,8 @@ static int dvb_ca_en50221_io_release(struct inode *inode, struct file *file)
+ 
+ 	dprintk("%s\n", __func__);
+ 
++	mutex_lock(&ca->remove_mutex);
++
+ 	/* mark the CA device as closed */
+ 	ca->open = 0;
+ 	dvb_ca_en50221_thread_update_delay(ca);
+@@ -1768,6 +1789,13 @@ static int dvb_ca_en50221_io_release(struct inode *inode, struct file *file)
+ 
+ 	dvb_ca_private_put(ca);
+ 
++	if (dvbdev->users == 1 && ca->exit == 1) {
++		mutex_unlock(&ca->remove_mutex);
++		wake_up(&dvbdev->wait_queue);
++	} else {
++		mutex_unlock(&ca->remove_mutex);
++	}
++
+ 	return err;
+ }
+ 
+@@ -1891,6 +1919,7 @@ int dvb_ca_en50221_init(struct dvb_adapter *dvb_adapter,
+ 	}
+ 
+ 	mutex_init(&ca->ioctl_mutex);
++	mutex_init(&ca->remove_mutex);
+ 
+ 	if (signal_pending(current)) {
+ 		ret = -EINTR;
+@@ -1933,6 +1962,14 @@ void dvb_ca_en50221_release(struct dvb_ca_en50221 *pubca)
+ 
+ 	dprintk("%s\n", __func__);
+ 
++	mutex_lock(&ca->remove_mutex);
++	ca->exit = 1;
++	mutex_unlock(&ca->remove_mutex);
++
++	if (ca->dvbdev->users < 1)
++		wait_event(ca->dvbdev->wait_queue,
++				ca->dvbdev->users == 1);
++
+ 	/* shutdown the thread if there was one */
+ 	kthread_stop(ca->thread);
+ 
+diff --git a/drivers/media/dvb-core/dvb_demux.c b/drivers/media/dvb-core/dvb_demux.c
+index 398c86279b5b0..7c4d86bfdd6c9 100644
+--- a/drivers/media/dvb-core/dvb_demux.c
++++ b/drivers/media/dvb-core/dvb_demux.c
+@@ -115,12 +115,12 @@ static inline int dvb_dmx_swfilter_payload(struct dvb_demux_feed *feed,
+ 
+ 	cc = buf[3] & 0x0f;
+ 	ccok = ((feed->cc + 1) & 0x0f) == cc;
+-	feed->cc = cc;
+ 	if (!ccok) {
+ 		set_buf_flags(feed, DMX_BUFFER_FLAG_DISCONTINUITY_DETECTED);
+ 		dprintk_sect_loss("missed packet: %d instead of %d!\n",
+ 				  cc, (feed->cc + 1) & 0x0f);
+ 	}
++	feed->cc = cc;
+ 
+ 	if (buf[1] & 0x40)	// PUSI ?
+ 		feed->peslen = 0xfffa;
+@@ -300,7 +300,6 @@ static int dvb_dmx_swfilter_section_packet(struct dvb_demux_feed *feed,
+ 
+ 	cc = buf[3] & 0x0f;
+ 	ccok = ((feed->cc + 1) & 0x0f) == cc;
+-	feed->cc = cc;
+ 
+ 	if (buf[3] & 0x20) {
+ 		/* adaption field present, check for discontinuity_indicator */
+@@ -336,6 +335,7 @@ static int dvb_dmx_swfilter_section_packet(struct dvb_demux_feed *feed,
+ 		feed->pusi_seen = false;
+ 		dvb_dmx_swfilter_section_new(feed);
+ 	}
++	feed->cc = cc;
+ 
+ 	if (buf[1] & 0x40) {
+ 		/* PUSI=1 (is set), section boundary is here */
+diff --git a/drivers/media/dvb-core/dvb_frontend.c b/drivers/media/dvb-core/dvb_frontend.c
+index cc0a789f09ae5..bc6950a5740f6 100644
+--- a/drivers/media/dvb-core/dvb_frontend.c
++++ b/drivers/media/dvb-core/dvb_frontend.c
+@@ -293,14 +293,22 @@ static int dvb_frontend_get_event(struct dvb_frontend *fe,
+ 	}
+ 
+ 	if (events->eventw == events->eventr) {
+-		int ret;
++		struct wait_queue_entry wait;
++		int ret = 0;
+ 
+ 		if (flags & O_NONBLOCK)
+ 			return -EWOULDBLOCK;
+ 
+-		ret = wait_event_interruptible(events->wait_queue,
+-					       dvb_frontend_test_event(fepriv, events));
+-
++		init_waitqueue_entry(&wait, current);
++		add_wait_queue(&events->wait_queue, &wait);
++		while (!dvb_frontend_test_event(fepriv, events)) {
++			wait_woken(&wait, TASK_INTERRUPTIBLE, 0);
++			if (signal_pending(current)) {
++				ret = -ERESTARTSYS;
++				break;
++			}
++		}
++		remove_wait_queue(&events->wait_queue, &wait);
+ 		if (ret < 0)
+ 			return ret;
+ 	}
+@@ -809,15 +817,26 @@ static void dvb_frontend_stop(struct dvb_frontend *fe)
+ 
+ 	dev_dbg(fe->dvb->device, "%s:\n", __func__);
+ 
++	mutex_lock(&fe->remove_mutex);
++
+ 	if (fe->exit != DVB_FE_DEVICE_REMOVED)
+ 		fe->exit = DVB_FE_NORMAL_EXIT;
+ 	mb();
+ 
+-	if (!fepriv->thread)
++	if (!fepriv->thread) {
++		mutex_unlock(&fe->remove_mutex);
+ 		return;
++	}
+ 
+ 	kthread_stop(fepriv->thread);
+ 
++	mutex_unlock(&fe->remove_mutex);
++
++	if (fepriv->dvbdev->users < -1) {
++		wait_event(fepriv->dvbdev->wait_queue,
++			   fepriv->dvbdev->users == -1);
++	}
++
+ 	sema_init(&fepriv->sem, 1);
+ 	fepriv->state = FESTATE_IDLE;
+ 
+@@ -2761,9 +2780,13 @@ static int dvb_frontend_open(struct inode *inode, struct file *file)
+ 	struct dvb_adapter *adapter = fe->dvb;
+ 	int ret;
+ 
++	mutex_lock(&fe->remove_mutex);
++
+ 	dev_dbg(fe->dvb->device, "%s:\n", __func__);
+-	if (fe->exit == DVB_FE_DEVICE_REMOVED)
+-		return -ENODEV;
++	if (fe->exit == DVB_FE_DEVICE_REMOVED) {
++		ret = -ENODEV;
++		goto err_remove_mutex;
++	}
+ 
+ 	if (adapter->mfe_shared == 2) {
+ 		mutex_lock(&adapter->mfe_lock);
+@@ -2771,7 +2794,8 @@ static int dvb_frontend_open(struct inode *inode, struct file *file)
+ 			if (adapter->mfe_dvbdev &&
+ 			    !adapter->mfe_dvbdev->writers) {
+ 				mutex_unlock(&adapter->mfe_lock);
+-				return -EBUSY;
++				ret = -EBUSY;
++				goto err_remove_mutex;
+ 			}
+ 			adapter->mfe_dvbdev = dvbdev;
+ 		}
+@@ -2794,8 +2818,10 @@ static int dvb_frontend_open(struct inode *inode, struct file *file)
+ 			while (mferetry-- && (mfedev->users != -1 ||
+ 					      mfepriv->thread)) {
+ 				if (msleep_interruptible(500)) {
+-					if (signal_pending(current))
+-						return -EINTR;
++					if (signal_pending(current)) {
++						ret = -EINTR;
++						goto err_remove_mutex;
++					}
+ 				}
+ 			}
+ 
+@@ -2807,7 +2833,8 @@ static int dvb_frontend_open(struct inode *inode, struct file *file)
+ 				if (mfedev->users != -1 ||
+ 				    mfepriv->thread) {
+ 					mutex_unlock(&adapter->mfe_lock);
+-					return -EBUSY;
++					ret = -EBUSY;
++					goto err_remove_mutex;
+ 				}
+ 				adapter->mfe_dvbdev = dvbdev;
+ 			}
+@@ -2866,6 +2893,8 @@ static int dvb_frontend_open(struct inode *inode, struct file *file)
+ 
+ 	if (adapter->mfe_shared)
+ 		mutex_unlock(&adapter->mfe_lock);
++
++	mutex_unlock(&fe->remove_mutex);
+ 	return ret;
+ 
+ err3:
+@@ -2887,6 +2916,9 @@ err1:
+ err0:
+ 	if (adapter->mfe_shared)
+ 		mutex_unlock(&adapter->mfe_lock);
++
++err_remove_mutex:
++	mutex_unlock(&fe->remove_mutex);
+ 	return ret;
+ }
+ 
+@@ -2897,6 +2929,8 @@ static int dvb_frontend_release(struct inode *inode, struct file *file)
+ 	struct dvb_frontend_private *fepriv = fe->frontend_priv;
+ 	int ret;
+ 
++	mutex_lock(&fe->remove_mutex);
++
+ 	dev_dbg(fe->dvb->device, "%s:\n", __func__);
+ 
+ 	if ((file->f_flags & O_ACCMODE) != O_RDONLY) {
+@@ -2918,10 +2952,18 @@ static int dvb_frontend_release(struct inode *inode, struct file *file)
+ 		}
+ 		mutex_unlock(&fe->dvb->mdev_lock);
+ #endif
+-		if (fe->exit != DVB_FE_NO_EXIT)
+-			wake_up(&dvbdev->wait_queue);
+ 		if (fe->ops.ts_bus_ctrl)
+ 			fe->ops.ts_bus_ctrl(fe, 0);
++
++		if (fe->exit != DVB_FE_NO_EXIT) {
++			mutex_unlock(&fe->remove_mutex);
++			wake_up(&dvbdev->wait_queue);
++		} else {
++			mutex_unlock(&fe->remove_mutex);
++		}
++
++	} else {
++		mutex_unlock(&fe->remove_mutex);
+ 	}
+ 
+ 	dvb_frontend_put(fe);
+@@ -3022,6 +3064,7 @@ int dvb_register_frontend(struct dvb_adapter *dvb,
+ 	fepriv = fe->frontend_priv;
+ 
+ 	kref_init(&fe->refcount);
++	mutex_init(&fe->remove_mutex);
+ 
+ 	/*
+ 	 * After initialization, there need to be two references: one
+diff --git a/drivers/media/dvb-core/dvb_net.c b/drivers/media/dvb-core/dvb_net.c
+index 8a2febf33ce28..8bb8dd34c223e 100644
+--- a/drivers/media/dvb-core/dvb_net.c
++++ b/drivers/media/dvb-core/dvb_net.c
+@@ -1564,15 +1564,43 @@ static long dvb_net_ioctl(struct file *file,
+ 	return dvb_usercopy(file, cmd, arg, dvb_net_do_ioctl);
+ }
+ 
++static int locked_dvb_net_open(struct inode *inode, struct file *file)
++{
++	struct dvb_device *dvbdev = file->private_data;
++	struct dvb_net *dvbnet = dvbdev->priv;
++	int ret;
++
++	if (mutex_lock_interruptible(&dvbnet->remove_mutex))
++		return -ERESTARTSYS;
++
++	if (dvbnet->exit) {
++		mutex_unlock(&dvbnet->remove_mutex);
++		return -ENODEV;
++	}
++
++	ret = dvb_generic_open(inode, file);
++
++	mutex_unlock(&dvbnet->remove_mutex);
++
++	return ret;
++}
++
+ static int dvb_net_close(struct inode *inode, struct file *file)
+ {
+ 	struct dvb_device *dvbdev = file->private_data;
+ 	struct dvb_net *dvbnet = dvbdev->priv;
+ 
++	mutex_lock(&dvbnet->remove_mutex);
++
+ 	dvb_generic_release(inode, file);
+ 
+-	if(dvbdev->users == 1 && dvbnet->exit == 1)
++	if (dvbdev->users == 1 && dvbnet->exit == 1) {
++		mutex_unlock(&dvbnet->remove_mutex);
+ 		wake_up(&dvbdev->wait_queue);
++	} else {
++		mutex_unlock(&dvbnet->remove_mutex);
++	}
++
+ 	return 0;
+ }
+ 
+@@ -1580,7 +1608,7 @@ static int dvb_net_close(struct inode *inode, struct file *file)
+ static const struct file_operations dvb_net_fops = {
+ 	.owner = THIS_MODULE,
+ 	.unlocked_ioctl = dvb_net_ioctl,
+-	.open =	dvb_generic_open,
++	.open =	locked_dvb_net_open,
+ 	.release = dvb_net_close,
+ 	.llseek = noop_llseek,
+ };
+@@ -1599,10 +1627,13 @@ void dvb_net_release (struct dvb_net *dvbnet)
+ {
+ 	int i;
+ 
++	mutex_lock(&dvbnet->remove_mutex);
+ 	dvbnet->exit = 1;
++	mutex_unlock(&dvbnet->remove_mutex);
++
+ 	if (dvbnet->dvbdev->users < 1)
+ 		wait_event(dvbnet->dvbdev->wait_queue,
+-				dvbnet->dvbdev->users==1);
++				dvbnet->dvbdev->users == 1);
+ 
+ 	dvb_unregister_device(dvbnet->dvbdev);
+ 
+@@ -1621,6 +1652,7 @@ int dvb_net_init (struct dvb_adapter *adap, struct dvb_net *dvbnet,
+ 	int i;
+ 
+ 	mutex_init(&dvbnet->ioctl_mutex);
++	mutex_init(&dvbnet->remove_mutex);
+ 	dvbnet->demux = dmx;
+ 
+ 	for (i=0; i<DVB_NET_DEVICES_MAX; i++)
+diff --git a/drivers/media/dvb-core/dvbdev.c b/drivers/media/dvb-core/dvbdev.c
+index 0ed087caf7f3b..73d296f27ff92 100644
+--- a/drivers/media/dvb-core/dvbdev.c
++++ b/drivers/media/dvb-core/dvbdev.c
+@@ -27,6 +27,7 @@
+ #include <media/tuner.h>
+ 
+ static DEFINE_MUTEX(dvbdev_mutex);
++static LIST_HEAD(dvbdevfops_list);
+ static int dvbdev_debug;
+ 
+ module_param(dvbdev_debug, int, 0644);
+@@ -453,14 +454,15 @@ int dvb_register_device(struct dvb_adapter *adap, struct dvb_device **pdvbdev,
+ 			enum dvb_device_type type, int demux_sink_pads)
+ {
+ 	struct dvb_device *dvbdev;
+-	struct file_operations *dvbdevfops;
++	struct file_operations *dvbdevfops = NULL;
++	struct dvbdevfops_node *node = NULL, *new_node = NULL;
+ 	struct device *clsdev;
+ 	int minor;
+ 	int id, ret;
+ 
+ 	mutex_lock(&dvbdev_register_lock);
+ 
+-	if ((id = dvbdev_get_free_id (adap, type)) < 0){
++	if ((id = dvbdev_get_free_id (adap, type)) < 0) {
+ 		mutex_unlock(&dvbdev_register_lock);
+ 		*pdvbdev = NULL;
+ 		pr_err("%s: couldn't find free device id\n", __func__);
+@@ -468,18 +470,45 @@ int dvb_register_device(struct dvb_adapter *adap, struct dvb_device **pdvbdev,
+ 	}
+ 
+ 	*pdvbdev = dvbdev = kzalloc(sizeof(*dvbdev), GFP_KERNEL);
+-
+ 	if (!dvbdev){
+ 		mutex_unlock(&dvbdev_register_lock);
+ 		return -ENOMEM;
+ 	}
+ 
+-	dvbdevfops = kmemdup(template->fops, sizeof(*dvbdevfops), GFP_KERNEL);
++	/*
++	 * When a device of the same type is probe()d more than once,
++	 * the first allocated fops are used. This prevents memory leaks
++	 * that can occur when the same device is probe()d repeatedly.
++	 */
++	list_for_each_entry(node, &dvbdevfops_list, list_head) {
++		if (node->fops->owner == adap->module &&
++				node->type == type &&
++				node->template == template) {
++			dvbdevfops = node->fops;
++			break;
++		}
++	}
+ 
+-	if (!dvbdevfops){
+-		kfree (dvbdev);
+-		mutex_unlock(&dvbdev_register_lock);
+-		return -ENOMEM;
++	if (dvbdevfops == NULL) {
++		dvbdevfops = kmemdup(template->fops, sizeof(*dvbdevfops), GFP_KERNEL);
++		if (!dvbdevfops) {
++			kfree(dvbdev);
++			mutex_unlock(&dvbdev_register_lock);
++			return -ENOMEM;
++		}
++
++		new_node = kzalloc(sizeof(struct dvbdevfops_node), GFP_KERNEL);
++		if (!new_node) {
++			kfree(dvbdevfops);
++			kfree(dvbdev);
++			mutex_unlock(&dvbdev_register_lock);
++			return -ENOMEM;
++		}
++
++		new_node->fops = dvbdevfops;
++		new_node->type = type;
++		new_node->template = template;
++		list_add_tail (&new_node->list_head, &dvbdevfops_list);
+ 	}
+ 
+ 	memcpy(dvbdev, template, sizeof(struct dvb_device));
+@@ -490,20 +519,20 @@ int dvb_register_device(struct dvb_adapter *adap, struct dvb_device **pdvbdev,
+ 	dvbdev->priv = priv;
+ 	dvbdev->fops = dvbdevfops;
+ 	init_waitqueue_head (&dvbdev->wait_queue);
+-
+ 	dvbdevfops->owner = adap->module;
+-
+ 	list_add_tail (&dvbdev->list_head, &adap->device_list);
+-
+ 	down_write(&minor_rwsem);
+ #ifdef CONFIG_DVB_DYNAMIC_MINORS
+ 	for (minor = 0; minor < MAX_DVB_MINORS; minor++)
+ 		if (dvb_minors[minor] == NULL)
+ 			break;
+-
+ 	if (minor == MAX_DVB_MINORS) {
++		if (new_node) {
++			list_del (&new_node->list_head);
++			kfree(dvbdevfops);
++			kfree(new_node);
++		}
+ 		list_del (&dvbdev->list_head);
+-		kfree(dvbdevfops);
+ 		kfree(dvbdev);
+ 		up_write(&minor_rwsem);
+ 		mutex_unlock(&dvbdev_register_lock);
+@@ -512,41 +541,47 @@ int dvb_register_device(struct dvb_adapter *adap, struct dvb_device **pdvbdev,
+ #else
+ 	minor = nums2minor(adap->num, type, id);
+ #endif
+-
+ 	dvbdev->minor = minor;
+ 	dvb_minors[minor] = dvb_device_get(dvbdev);
+ 	up_write(&minor_rwsem);
+-
+ 	ret = dvb_register_media_device(dvbdev, type, minor, demux_sink_pads);
+ 	if (ret) {
+ 		pr_err("%s: dvb_register_media_device failed to create the mediagraph\n",
+ 		      __func__);
+-
++		if (new_node) {
++			list_del (&new_node->list_head);
++			kfree(dvbdevfops);
++			kfree(new_node);
++		}
+ 		dvb_media_device_free(dvbdev);
+ 		list_del (&dvbdev->list_head);
+-		kfree(dvbdevfops);
+ 		kfree(dvbdev);
+ 		mutex_unlock(&dvbdev_register_lock);
+ 		return ret;
+ 	}
+ 
+-	mutex_unlock(&dvbdev_register_lock);
+-
+ 	clsdev = device_create(dvb_class, adap->device,
+ 			       MKDEV(DVB_MAJOR, minor),
+ 			       dvbdev, "dvb%d.%s%d", adap->num, dnames[type], id);
+ 	if (IS_ERR(clsdev)) {
+ 		pr_err("%s: failed to create device dvb%d.%s%d (%ld)\n",
+ 		       __func__, adap->num, dnames[type], id, PTR_ERR(clsdev));
++		if (new_node) {
++			list_del (&new_node->list_head);
++			kfree(dvbdevfops);
++			kfree(new_node);
++		}
+ 		dvb_media_device_free(dvbdev);
+ 		list_del (&dvbdev->list_head);
+-		kfree(dvbdevfops);
+ 		kfree(dvbdev);
++		mutex_unlock(&dvbdev_register_lock);
+ 		return PTR_ERR(clsdev);
+ 	}
++
+ 	dprintk("DVB: register adapter%d/%s%d @ minor: %i (0x%02x)\n",
+ 		adap->num, dnames[type], id, minor, minor);
+ 
++	mutex_unlock(&dvbdev_register_lock);
+ 	return 0;
+ }
+ EXPORT_SYMBOL(dvb_register_device);
+@@ -575,7 +610,6 @@ static void dvb_free_device(struct kref *ref)
+ {
+ 	struct dvb_device *dvbdev = container_of(ref, struct dvb_device, ref);
+ 
+-	kfree (dvbdev->fops);
+ 	kfree (dvbdev);
+ }
+ 
+@@ -1081,9 +1115,17 @@ error:
+ 
+ static void __exit exit_dvbdev(void)
+ {
++	struct dvbdevfops_node *node, *next;
++
+ 	class_destroy(dvb_class);
+ 	cdev_del(&dvb_device_cdev);
+ 	unregister_chrdev_region(MKDEV(DVB_MAJOR, 0), MAX_DVB_MINORS);
++
++	list_for_each_entry_safe(node, next, &dvbdevfops_list, list_head) {
++		list_del (&node->list_head);
++		kfree(node->fops);
++		kfree(node);
++	}
+ }
+ 
+ subsys_initcall(init_dvbdev);
+diff --git a/drivers/media/dvb-frontends/mn88443x.c b/drivers/media/dvb-frontends/mn88443x.c
+index 1f1753f2ab1a3..0782f8377eb2f 100644
+--- a/drivers/media/dvb-frontends/mn88443x.c
++++ b/drivers/media/dvb-frontends/mn88443x.c
+@@ -798,7 +798,7 @@ MODULE_DEVICE_TABLE(i2c, mn88443x_i2c_id);
+ static struct i2c_driver mn88443x_driver = {
+ 	.driver = {
+ 		.name = "mn88443x",
+-		.of_match_table = of_match_ptr(mn88443x_of_match),
++		.of_match_table = mn88443x_of_match,
+ 	},
+ 	.probe_new = mn88443x_probe,
+ 	.remove   = mn88443x_remove,
+diff --git a/drivers/media/pci/netup_unidvb/netup_unidvb_core.c b/drivers/media/pci/netup_unidvb/netup_unidvb_core.c
+index aaa1d2dedebdd..d85bfbb77a250 100644
+--- a/drivers/media/pci/netup_unidvb/netup_unidvb_core.c
++++ b/drivers/media/pci/netup_unidvb/netup_unidvb_core.c
+@@ -887,12 +887,7 @@ static int netup_unidvb_initdev(struct pci_dev *pci_dev,
+ 		ndev->lmmio0, (u32)pci_resource_len(pci_dev, 0),
+ 		ndev->lmmio1, (u32)pci_resource_len(pci_dev, 1),
+ 		pci_dev->irq);
+-	if (request_irq(pci_dev->irq, netup_unidvb_isr, IRQF_SHARED,
+-			"netup_unidvb", pci_dev) < 0) {
+-		dev_err(&pci_dev->dev,
+-			"%s(): can't get IRQ %d\n", __func__, pci_dev->irq);
+-		goto irq_request_err;
+-	}
++
+ 	ndev->dma_size = 2 * 188 *
+ 		NETUP_DMA_BLOCKS_COUNT * NETUP_DMA_PACKETS_COUNT;
+ 	ndev->dma_virt = dma_alloc_coherent(&pci_dev->dev,
+@@ -933,6 +928,14 @@ static int netup_unidvb_initdev(struct pci_dev *pci_dev,
+ 		dev_err(&pci_dev->dev, "netup_unidvb: DMA setup failed\n");
+ 		goto dma_setup_err;
+ 	}
++
++	if (request_irq(pci_dev->irq, netup_unidvb_isr, IRQF_SHARED,
++			"netup_unidvb", pci_dev) < 0) {
++		dev_err(&pci_dev->dev,
++			"%s(): can't get IRQ %d\n", __func__, pci_dev->irq);
++		goto dma_setup_err;
++	}
++
+ 	dev_info(&pci_dev->dev,
+ 		"netup_unidvb: device has been initialized\n");
+ 	return 0;
+@@ -951,8 +954,6 @@ spi_setup_err:
+ 	dma_free_coherent(&pci_dev->dev, ndev->dma_size,
+ 			ndev->dma_virt, ndev->dma_phys);
+ dma_alloc_err:
+-	free_irq(pci_dev->irq, pci_dev);
+-irq_request_err:
+ 	iounmap(ndev->lmmio1);
+ pci_bar1_error:
+ 	iounmap(ndev->lmmio0);
+diff --git a/drivers/media/platform/mediatek/vcodec/mtk_vcodec_dec_stateful.c b/drivers/media/platform/mediatek/vcodec/mtk_vcodec_dec_stateful.c
+index 29991551cf614..0fbd030026c72 100644
+--- a/drivers/media/platform/mediatek/vcodec/mtk_vcodec_dec_stateful.c
++++ b/drivers/media/platform/mediatek/vcodec/mtk_vcodec_dec_stateful.c
+@@ -584,6 +584,9 @@ static void mtk_init_vdec_params(struct mtk_vcodec_ctx *ctx)
+ 
+ 	if (!(ctx->dev->dec_capability & VCODEC_CAPABILITY_4K_DISABLED)) {
+ 		for (i = 0; i < num_supported_formats; i++) {
++			if (mtk_video_formats[i].type != MTK_FMT_DEC)
++				continue;
++
+ 			mtk_video_formats[i].frmsize.max_width =
+ 				VCODEC_DEC_4K_CODED_WIDTH;
+ 			mtk_video_formats[i].frmsize.max_height =
+diff --git a/drivers/media/platform/renesas/rcar-vin/rcar-dma.c b/drivers/media/platform/renesas/rcar-vin/rcar-dma.c
+index 98bfd445a649b..2a77353f10b59 100644
+--- a/drivers/media/platform/renesas/rcar-vin/rcar-dma.c
++++ b/drivers/media/platform/renesas/rcar-vin/rcar-dma.c
+@@ -728,11 +728,9 @@ static int rvin_setup(struct rvin_dev *vin)
+ 	case V4L2_FIELD_SEQ_TB:
+ 	case V4L2_FIELD_SEQ_BT:
+ 	case V4L2_FIELD_NONE:
+-		vnmc = VNMC_IM_ODD_EVEN;
+-		progressive = true;
+-		break;
+ 	case V4L2_FIELD_ALTERNATE:
+ 		vnmc = VNMC_IM_ODD_EVEN;
++		progressive = true;
+ 		break;
+ 	default:
+ 		vnmc = VNMC_IM_ODD;
+@@ -1312,12 +1310,23 @@ static int rvin_mc_validate_format(struct rvin_dev *vin, struct v4l2_subdev *sd,
+ 	}
+ 
+ 	if (rvin_scaler_needed(vin)) {
++		/* Gen3 can't scale NV12 */
++		if (vin->info->model == RCAR_GEN3 &&
++		    vin->format.pixelformat == V4L2_PIX_FMT_NV12)
++			return -EPIPE;
++
+ 		if (!vin->scaler)
+ 			return -EPIPE;
+ 	} else {
+-		if (fmt.format.width != vin->format.width ||
+-		    fmt.format.height != vin->format.height)
+-			return -EPIPE;
++		if (vin->format.pixelformat == V4L2_PIX_FMT_NV12) {
++			if (ALIGN(fmt.format.width, 32) != vin->format.width ||
++			    ALIGN(fmt.format.height, 32) != vin->format.height)
++				return -EPIPE;
++		} else {
++			if (fmt.format.width != vin->format.width ||
++			    fmt.format.height != vin->format.height)
++				return -EPIPE;
++		}
+ 	}
+ 
+ 	if (fmt.format.code != vin->mbus_code)
+diff --git a/drivers/media/usb/dvb-usb-v2/ce6230.c b/drivers/media/usb/dvb-usb-v2/ce6230.c
+index 44540de1a2066..d3b5cb4a24daf 100644
+--- a/drivers/media/usb/dvb-usb-v2/ce6230.c
++++ b/drivers/media/usb/dvb-usb-v2/ce6230.c
+@@ -101,6 +101,10 @@ static int ce6230_i2c_master_xfer(struct i2c_adapter *adap,
+ 		if (num > i + 1 && (msg[i+1].flags & I2C_M_RD)) {
+ 			if (msg[i].addr ==
+ 				ce6230_zl10353_config.demod_address) {
++				if (msg[i].len < 1) {
++					i = -EOPNOTSUPP;
++					break;
++				}
+ 				req.cmd = DEMOD_READ;
+ 				req.value = msg[i].addr >> 1;
+ 				req.index = msg[i].buf[0];
+@@ -117,6 +121,10 @@ static int ce6230_i2c_master_xfer(struct i2c_adapter *adap,
+ 		} else {
+ 			if (msg[i].addr ==
+ 				ce6230_zl10353_config.demod_address) {
++				if (msg[i].len < 1) {
++					i = -EOPNOTSUPP;
++					break;
++				}
+ 				req.cmd = DEMOD_WRITE;
+ 				req.value = msg[i].addr >> 1;
+ 				req.index = msg[i].buf[0];
+diff --git a/drivers/media/usb/dvb-usb-v2/ec168.c b/drivers/media/usb/dvb-usb-v2/ec168.c
+index 7ed0ab9e429b1..0e4773fc025c9 100644
+--- a/drivers/media/usb/dvb-usb-v2/ec168.c
++++ b/drivers/media/usb/dvb-usb-v2/ec168.c
+@@ -115,6 +115,10 @@ static int ec168_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msg[],
+ 	while (i < num) {
+ 		if (num > i + 1 && (msg[i+1].flags & I2C_M_RD)) {
+ 			if (msg[i].addr == ec168_ec100_config.demod_address) {
++				if (msg[i].len < 1) {
++					i = -EOPNOTSUPP;
++					break;
++				}
+ 				req.cmd = READ_DEMOD;
+ 				req.value = 0;
+ 				req.index = 0xff00 + msg[i].buf[0]; /* reg */
+@@ -131,6 +135,10 @@ static int ec168_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msg[],
+ 			}
+ 		} else {
+ 			if (msg[i].addr == ec168_ec100_config.demod_address) {
++				if (msg[i].len < 1) {
++					i = -EOPNOTSUPP;
++					break;
++				}
+ 				req.cmd = WRITE_DEMOD;
+ 				req.value = msg[i].buf[1]; /* val */
+ 				req.index = 0xff00 + msg[i].buf[0]; /* reg */
+@@ -139,6 +147,10 @@ static int ec168_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msg[],
+ 				ret = ec168_ctrl_msg(d, &req);
+ 				i += 1;
+ 			} else {
++				if (msg[i].len < 1) {
++					i = -EOPNOTSUPP;
++					break;
++				}
+ 				req.cmd = WRITE_I2C;
+ 				req.value = msg[i].buf[0]; /* val */
+ 				req.index = 0x0100 + msg[i].addr; /* I2C addr */
+diff --git a/drivers/media/usb/dvb-usb-v2/rtl28xxu.c b/drivers/media/usb/dvb-usb-v2/rtl28xxu.c
+index 795a012d40200..f7884bb56fccf 100644
+--- a/drivers/media/usb/dvb-usb-v2/rtl28xxu.c
++++ b/drivers/media/usb/dvb-usb-v2/rtl28xxu.c
+@@ -176,6 +176,10 @@ static int rtl28xxu_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msg[],
+ 			ret = -EOPNOTSUPP;
+ 			goto err_mutex_unlock;
+ 		} else if (msg[0].addr == 0x10) {
++			if (msg[0].len < 1 || msg[1].len < 1) {
++				ret = -EOPNOTSUPP;
++				goto err_mutex_unlock;
++			}
+ 			/* method 1 - integrated demod */
+ 			if (msg[0].buf[0] == 0x00) {
+ 				/* return demod page from driver cache */
+@@ -189,6 +193,10 @@ static int rtl28xxu_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msg[],
+ 				ret = rtl28xxu_ctrl_msg(d, &req);
+ 			}
+ 		} else if (msg[0].len < 2) {
++			if (msg[0].len < 1) {
++				ret = -EOPNOTSUPP;
++				goto err_mutex_unlock;
++			}
+ 			/* method 2 - old I2C */
+ 			req.value = (msg[0].buf[0] << 8) | (msg[0].addr << 1);
+ 			req.index = CMD_I2C_RD;
+@@ -217,8 +225,16 @@ static int rtl28xxu_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msg[],
+ 			ret = -EOPNOTSUPP;
+ 			goto err_mutex_unlock;
+ 		} else if (msg[0].addr == 0x10) {
++			if (msg[0].len < 1) {
++				ret = -EOPNOTSUPP;
++				goto err_mutex_unlock;
++			}
+ 			/* method 1 - integrated demod */
+ 			if (msg[0].buf[0] == 0x00) {
++				if (msg[0].len < 2) {
++					ret = -EOPNOTSUPP;
++					goto err_mutex_unlock;
++				}
+ 				/* save demod page for later demod access */
+ 				dev->page = msg[0].buf[1];
+ 				ret = 0;
+@@ -231,6 +247,10 @@ static int rtl28xxu_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msg[],
+ 				ret = rtl28xxu_ctrl_msg(d, &req);
+ 			}
+ 		} else if ((msg[0].len < 23) && (!dev->new_i2c_write)) {
++			if (msg[0].len < 1) {
++				ret = -EOPNOTSUPP;
++				goto err_mutex_unlock;
++			}
+ 			/* method 2 - old I2C */
+ 			req.value = (msg[0].buf[0] << 8) | (msg[0].addr << 1);
+ 			req.index = CMD_I2C_WR;
+diff --git a/drivers/media/usb/dvb-usb/az6027.c b/drivers/media/usb/dvb-usb/az6027.c
+index 7d78ee09be5e1..a31c6f82f4e90 100644
+--- a/drivers/media/usb/dvb-usb/az6027.c
++++ b/drivers/media/usb/dvb-usb/az6027.c
+@@ -988,6 +988,10 @@ static int az6027_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msg[], int n
+ 			/* write/read request */
+ 			if (i + 1 < num && (msg[i + 1].flags & I2C_M_RD)) {
+ 				req = 0xB9;
++				if (msg[i].len < 1) {
++					i = -EOPNOTSUPP;
++					break;
++				}
+ 				index = (((msg[i].buf[0] << 8) & 0xff00) | (msg[i].buf[1] & 0x00ff));
+ 				value = msg[i].addr + (msg[i].len << 8);
+ 				length = msg[i + 1].len + 6;
+@@ -1001,6 +1005,10 @@ static int az6027_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msg[], int n
+ 
+ 				/* demod 16bit addr */
+ 				req = 0xBD;
++				if (msg[i].len < 1) {
++					i = -EOPNOTSUPP;
++					break;
++				}
+ 				index = (((msg[i].buf[0] << 8) & 0xff00) | (msg[i].buf[1] & 0x00ff));
+ 				value = msg[i].addr + (2 << 8);
+ 				length = msg[i].len - 2;
+@@ -1026,6 +1034,10 @@ static int az6027_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msg[], int n
+ 			} else {
+ 
+ 				req = 0xBD;
++				if (msg[i].len < 1) {
++					i = -EOPNOTSUPP;
++					break;
++				}
+ 				index = msg[i].buf[0] & 0x00FF;
+ 				value = msg[i].addr + (1 << 8);
+ 				length = msg[i].len - 1;
+diff --git a/drivers/media/usb/dvb-usb/digitv.c b/drivers/media/usb/dvb-usb/digitv.c
+index 2756815a780bc..32134be169148 100644
+--- a/drivers/media/usb/dvb-usb/digitv.c
++++ b/drivers/media/usb/dvb-usb/digitv.c
+@@ -63,6 +63,10 @@ static int digitv_i2c_xfer(struct i2c_adapter *adap,struct i2c_msg msg[],int num
+ 		warn("more than 2 i2c messages at a time is not handled yet. TODO.");
+ 
+ 	for (i = 0; i < num; i++) {
++		if (msg[i].len < 1) {
++			i = -EOPNOTSUPP;
++			break;
++		}
+ 		/* write/read request */
+ 		if (i+1 < num && (msg[i+1].flags & I2C_M_RD)) {
+ 			if (digitv_ctrl_msg(d, USB_READ_COFDM, msg[i].buf[0], NULL, 0,
+diff --git a/drivers/media/usb/dvb-usb/dw2102.c b/drivers/media/usb/dvb-usb/dw2102.c
+index 0ca764282c767..8747960e61461 100644
+--- a/drivers/media/usb/dvb-usb/dw2102.c
++++ b/drivers/media/usb/dvb-usb/dw2102.c
+@@ -946,7 +946,7 @@ static int su3000_read_mac_address(struct dvb_usb_device *d, u8 mac[6])
+ 	for (i = 0; i < 6; i++) {
+ 		obuf[1] = 0xf0 + i;
+ 		if (i2c_transfer(&d->i2c_adap, msg, 2) != 2)
+-			break;
++			return -1;
+ 		else
+ 			mac[i] = ibuf[0];
+ 	}
+diff --git a/drivers/media/usb/ttusb-dec/ttusb_dec.c b/drivers/media/usb/ttusb-dec/ttusb_dec.c
+index 38822cedd93a9..c4474d4c44e28 100644
+--- a/drivers/media/usb/ttusb-dec/ttusb_dec.c
++++ b/drivers/media/usb/ttusb-dec/ttusb_dec.c
+@@ -1544,8 +1544,7 @@ static void ttusb_dec_exit_dvb(struct ttusb_dec *dec)
+ 	dvb_dmx_release(&dec->demux);
+ 	if (dec->fe) {
+ 		dvb_unregister_frontend(dec->fe);
+-		if (dec->fe->ops.release)
+-			dec->fe->ops.release(dec->fe);
++		dvb_frontend_detach(dec->fe);
+ 	}
+ 	dvb_unregister_adapter(&dec->adapter);
+ }
+diff --git a/drivers/media/usb/uvc/uvc_driver.c b/drivers/media/usb/uvc/uvc_driver.c
+index 7aefa76a42b31..d631ce4f9f7bb 100644
+--- a/drivers/media/usb/uvc/uvc_driver.c
++++ b/drivers/media/usb/uvc/uvc_driver.c
+@@ -251,14 +251,17 @@ static int uvc_parse_format(struct uvc_device *dev,
+ 		/* Find the format descriptor from its GUID. */
+ 		fmtdesc = uvc_format_by_guid(&buffer[5]);
+ 
+-		if (fmtdesc != NULL) {
+-			format->fcc = fmtdesc->fcc;
+-		} else {
++		if (!fmtdesc) {
++			/*
++			 * Unknown video formats are not fatal errors, the
++			 * caller will skip this descriptor.
++			 */
+ 			dev_info(&streaming->intf->dev,
+ 				 "Unknown video format %pUl\n", &buffer[5]);
+-			format->fcc = 0;
++			return 0;
+ 		}
+ 
++		format->fcc = fmtdesc->fcc;
+ 		format->bpp = buffer[21];
+ 
+ 		/*
+@@ -675,7 +678,7 @@ static int uvc_parse_streaming(struct uvc_device *dev,
+ 	interval = (u32 *)&frame[nframes];
+ 
+ 	streaming->format = format;
+-	streaming->nformats = nformats;
++	streaming->nformats = 0;
+ 
+ 	/* Parse the format descriptors. */
+ 	while (buflen > 2 && buffer[1] == USB_DT_CS_INTERFACE) {
+@@ -689,7 +692,10 @@ static int uvc_parse_streaming(struct uvc_device *dev,
+ 				&interval, buffer, buflen);
+ 			if (ret < 0)
+ 				goto error;
++			if (!ret)
++				break;
+ 
++			streaming->nformats++;
+ 			frame += format->nframes;
+ 			format++;
+ 
+diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c
+index a701132638cf2..30d4d0476248f 100644
+--- a/drivers/misc/fastrpc.c
++++ b/drivers/misc/fastrpc.c
+@@ -262,7 +262,7 @@ struct fastrpc_channel_ctx {
+ 	int domain_id;
+ 	int sesscount;
+ 	int vmcount;
+-	u32 perms;
++	u64 perms;
+ 	struct qcom_scm_vmperm vmperms[FASTRPC_MAX_VMIDS];
+ 	struct rpmsg_device *rpdev;
+ 	struct fastrpc_session_ctx session[FASTRPC_MAX_SESSIONS];
+@@ -316,12 +316,14 @@ static void fastrpc_free_map(struct kref *ref)
+ 	if (map->table) {
+ 		if (map->attr & FASTRPC_ATTR_SECUREMAP) {
+ 			struct qcom_scm_vmperm perm;
++			int vmid = map->fl->cctx->vmperms[0].vmid;
++			u64 src_perms = BIT(QCOM_SCM_VMID_HLOS) | BIT(vmid);
+ 			int err = 0;
+ 
+ 			perm.vmid = QCOM_SCM_VMID_HLOS;
+ 			perm.perm = QCOM_SCM_PERM_RWX;
+ 			err = qcom_scm_assign_mem(map->phys, map->size,
+-				&map->fl->cctx->perms, &perm, 1);
++				&src_perms, &perm, 1);
+ 			if (err) {
+ 				dev_err(map->fl->sctx->dev, "Failed to assign memory phys 0x%llx size 0x%llx err %d",
+ 						map->phys, map->size, err);
+@@ -787,8 +789,12 @@ static int fastrpc_map_create(struct fastrpc_user *fl, int fd,
+ 		goto map_err;
+ 	}
+ 
+-	map->phys = sg_dma_address(map->table->sgl);
+-	map->phys += ((u64)fl->sctx->sid << 32);
++	if (attr & FASTRPC_ATTR_SECUREMAP) {
++		map->phys = sg_phys(map->table->sgl);
++	} else {
++		map->phys = sg_dma_address(map->table->sgl);
++		map->phys += ((u64)fl->sctx->sid << 32);
++	}
+ 	map->size = len;
+ 	map->va = sg_virt(map->table->sgl);
+ 	map->len = len;
+@@ -798,9 +804,15 @@ static int fastrpc_map_create(struct fastrpc_user *fl, int fd,
+ 		 * If subsystem VMIDs are defined in DTSI, then do
+ 		 * hyp_assign from HLOS to those VM(s)
+ 		 */
++		u64 src_perms = BIT(QCOM_SCM_VMID_HLOS);
++		struct qcom_scm_vmperm dst_perms[2] = {0};
++
++		dst_perms[0].vmid = QCOM_SCM_VMID_HLOS;
++		dst_perms[0].perm = QCOM_SCM_PERM_RW;
++		dst_perms[1].vmid = fl->cctx->vmperms[0].vmid;
++		dst_perms[1].perm = QCOM_SCM_PERM_RWX;
+ 		map->attr = attr;
+-		err = qcom_scm_assign_mem(map->phys, (u64)map->size, &fl->cctx->perms,
+-				fl->cctx->vmperms, fl->cctx->vmcount);
++		err = qcom_scm_assign_mem(map->phys, (u64)map->size, &src_perms, dst_perms, 2);
+ 		if (err) {
+ 			dev_err(sess->dev, "Failed to assign memory with phys 0x%llx size 0x%llx err %d",
+ 					map->phys, map->size, err);
+@@ -1892,7 +1904,7 @@ static int fastrpc_req_mmap(struct fastrpc_user *fl, char __user *argp)
+ 	req.vaddrout = rsp_msg.vaddr;
+ 
+ 	/* Add memory to static PD pool, protection thru hypervisor */
+-	if (req.flags != ADSP_MMAP_REMOTE_HEAP_ADDR && fl->cctx->vmcount) {
++	if (req.flags == ADSP_MMAP_REMOTE_HEAP_ADDR && fl->cctx->vmcount) {
+ 		struct qcom_scm_vmperm perm;
+ 
+ 		perm.vmid = QCOM_SCM_VMID_HLOS;
+@@ -2337,8 +2349,10 @@ static void fastrpc_notify_users(struct fastrpc_user *user)
+ 	struct fastrpc_invoke_ctx *ctx;
+ 
+ 	spin_lock(&user->lock);
+-	list_for_each_entry(ctx, &user->pending, node)
++	list_for_each_entry(ctx, &user->pending, node) {
++		ctx->retval = -EPIPE;
+ 		complete(&ctx->work);
++	}
+ 	spin_unlock(&user->lock);
+ }
+ 
+@@ -2349,7 +2363,9 @@ static void fastrpc_rpmsg_remove(struct rpmsg_device *rpdev)
+ 	struct fastrpc_user *user;
+ 	unsigned long flags;
+ 
++	/* No invocations past this point */
+ 	spin_lock_irqsave(&cctx->lock, flags);
++	cctx->rpdev = NULL;
+ 	list_for_each_entry(user, &cctx->users, user)
+ 		fastrpc_notify_users(user);
+ 	spin_unlock_irqrestore(&cctx->lock, flags);
+@@ -2368,7 +2384,6 @@ static void fastrpc_rpmsg_remove(struct rpmsg_device *rpdev)
+ 
+ 	of_platform_depopulate(&rpdev->dev);
+ 
+-	cctx->rpdev = NULL;
+ 	fastrpc_channel_ctx_put(cctx);
+ }
+ 
+diff --git a/drivers/mmc/core/pwrseq_sd8787.c b/drivers/mmc/core/pwrseq_sd8787.c
+index 2e120ad83020f..0c5f5e371e1f8 100644
+--- a/drivers/mmc/core/pwrseq_sd8787.c
++++ b/drivers/mmc/core/pwrseq_sd8787.c
+@@ -28,7 +28,6 @@ struct mmc_pwrseq_sd8787 {
+ 	struct mmc_pwrseq pwrseq;
+ 	struct gpio_desc *reset_gpio;
+ 	struct gpio_desc *pwrdn_gpio;
+-	u32 reset_pwrdwn_delay_ms;
+ };
+ 
+ #define to_pwrseq_sd8787(p) container_of(p, struct mmc_pwrseq_sd8787, pwrseq)
+@@ -39,7 +38,7 @@ static void mmc_pwrseq_sd8787_pre_power_on(struct mmc_host *host)
+ 
+ 	gpiod_set_value_cansleep(pwrseq->reset_gpio, 1);
+ 
+-	msleep(pwrseq->reset_pwrdwn_delay_ms);
++	msleep(300);
+ 	gpiod_set_value_cansleep(pwrseq->pwrdn_gpio, 1);
+ }
+ 
+@@ -51,17 +50,37 @@ static void mmc_pwrseq_sd8787_power_off(struct mmc_host *host)
+ 	gpiod_set_value_cansleep(pwrseq->reset_gpio, 0);
+ }
+ 
++static void mmc_pwrseq_wilc1000_pre_power_on(struct mmc_host *host)
++{
++	struct mmc_pwrseq_sd8787 *pwrseq = to_pwrseq_sd8787(host->pwrseq);
++
++	/* The pwrdn_gpio is really CHIP_EN, reset_gpio is RESETN */
++	gpiod_set_value_cansleep(pwrseq->pwrdn_gpio, 1);
++	msleep(5);
++	gpiod_set_value_cansleep(pwrseq->reset_gpio, 1);
++}
++
++static void mmc_pwrseq_wilc1000_power_off(struct mmc_host *host)
++{
++	struct mmc_pwrseq_sd8787 *pwrseq = to_pwrseq_sd8787(host->pwrseq);
++
++	gpiod_set_value_cansleep(pwrseq->reset_gpio, 0);
++	gpiod_set_value_cansleep(pwrseq->pwrdn_gpio, 0);
++}
++
+ static const struct mmc_pwrseq_ops mmc_pwrseq_sd8787_ops = {
+ 	.pre_power_on = mmc_pwrseq_sd8787_pre_power_on,
+ 	.power_off = mmc_pwrseq_sd8787_power_off,
+ };
+ 
+-static const u32 sd8787_delay_ms = 300;
+-static const u32 wilc1000_delay_ms = 5;
++static const struct mmc_pwrseq_ops mmc_pwrseq_wilc1000_ops = {
++	.pre_power_on = mmc_pwrseq_wilc1000_pre_power_on,
++	.power_off = mmc_pwrseq_wilc1000_power_off,
++};
+ 
+ static const struct of_device_id mmc_pwrseq_sd8787_of_match[] = {
+-	{ .compatible = "mmc-pwrseq-sd8787", .data = &sd8787_delay_ms },
+-	{ .compatible = "mmc-pwrseq-wilc1000", .data = &wilc1000_delay_ms },
++	{ .compatible = "mmc-pwrseq-sd8787", .data = &mmc_pwrseq_sd8787_ops },
++	{ .compatible = "mmc-pwrseq-wilc1000", .data = &mmc_pwrseq_wilc1000_ops },
+ 	{/* sentinel */},
+ };
+ MODULE_DEVICE_TABLE(of, mmc_pwrseq_sd8787_of_match);
+@@ -77,7 +96,6 @@ static int mmc_pwrseq_sd8787_probe(struct platform_device *pdev)
+ 		return -ENOMEM;
+ 
+ 	match = of_match_node(mmc_pwrseq_sd8787_of_match, pdev->dev.of_node);
+-	pwrseq->reset_pwrdwn_delay_ms = *(u32 *)match->data;
+ 
+ 	pwrseq->pwrdn_gpio = devm_gpiod_get(dev, "powerdown", GPIOD_OUT_LOW);
+ 	if (IS_ERR(pwrseq->pwrdn_gpio))
+@@ -88,7 +106,7 @@ static int mmc_pwrseq_sd8787_probe(struct platform_device *pdev)
+ 		return PTR_ERR(pwrseq->reset_gpio);
+ 
+ 	pwrseq->pwrseq.dev = dev;
+-	pwrseq->pwrseq.ops = &mmc_pwrseq_sd8787_ops;
++	pwrseq->pwrseq.ops = match->data;
+ 	pwrseq->pwrseq.owner = THIS_MODULE;
+ 	platform_set_drvdata(pdev, pwrseq);
+ 
+diff --git a/drivers/mmc/host/vub300.c b/drivers/mmc/host/vub300.c
+index 72f65f32abbc7..7dc0e91dabfc7 100644
+--- a/drivers/mmc/host/vub300.c
++++ b/drivers/mmc/host/vub300.c
+@@ -1715,6 +1715,9 @@ static void construct_request_response(struct vub300_mmc_host *vub300,
+ 	int bytes = 3 & less_cmd;
+ 	int words = less_cmd >> 2;
+ 	u8 *r = vub300->resp.response.command_response;
++
++	if (!resp_len)
++		return;
+ 	if (bytes == 3) {
+ 		cmd->resp[words] = (r[1 + (words << 2)] << 24)
+ 			| (r[2 + (words << 2)] << 16)
+diff --git a/drivers/mtd/mtdchar.c b/drivers/mtd/mtdchar.c
+index 01f1c6792df9c..8dc4f5c493fcb 100644
+--- a/drivers/mtd/mtdchar.c
++++ b/drivers/mtd/mtdchar.c
+@@ -590,8 +590,8 @@ static void adjust_oob_length(struct mtd_info *mtd, uint64_t start,
+ 			    (end_page - start_page + 1) * oob_per_page);
+ }
+ 
+-static int mtdchar_write_ioctl(struct mtd_info *mtd,
+-		struct mtd_write_req __user *argp)
++static noinline_for_stack int
++mtdchar_write_ioctl(struct mtd_info *mtd, struct mtd_write_req __user *argp)
+ {
+ 	struct mtd_info *master = mtd_get_master(mtd);
+ 	struct mtd_write_req req;
+@@ -688,8 +688,8 @@ static int mtdchar_write_ioctl(struct mtd_info *mtd,
+ 	return ret;
+ }
+ 
+-static int mtdchar_read_ioctl(struct mtd_info *mtd,
+-		struct mtd_read_req __user *argp)
++static noinline_for_stack int
++mtdchar_read_ioctl(struct mtd_info *mtd, struct mtd_read_req __user *argp)
+ {
+ 	struct mtd_info *master = mtd_get_master(mtd);
+ 	struct mtd_read_req req;
+diff --git a/drivers/mtd/nand/raw/ingenic/ingenic_ecc.h b/drivers/mtd/nand/raw/ingenic/ingenic_ecc.h
+index 2cda439b5e11b..017868f59f222 100644
+--- a/drivers/mtd/nand/raw/ingenic/ingenic_ecc.h
++++ b/drivers/mtd/nand/raw/ingenic/ingenic_ecc.h
+@@ -36,25 +36,25 @@ int ingenic_ecc_correct(struct ingenic_ecc *ecc,
+ void ingenic_ecc_release(struct ingenic_ecc *ecc);
+ struct ingenic_ecc *of_ingenic_ecc_get(struct device_node *np);
+ #else /* CONFIG_MTD_NAND_INGENIC_ECC */
+-int ingenic_ecc_calculate(struct ingenic_ecc *ecc,
++static inline int ingenic_ecc_calculate(struct ingenic_ecc *ecc,
+ 			  struct ingenic_ecc_params *params,
+ 			  const u8 *buf, u8 *ecc_code)
+ {
+ 	return -ENODEV;
+ }
+ 
+-int ingenic_ecc_correct(struct ingenic_ecc *ecc,
++static inline int ingenic_ecc_correct(struct ingenic_ecc *ecc,
+ 			struct ingenic_ecc_params *params, u8 *buf,
+ 			u8 *ecc_code)
+ {
+ 	return -ENODEV;
+ }
+ 
+-void ingenic_ecc_release(struct ingenic_ecc *ecc)
++static inline void ingenic_ecc_release(struct ingenic_ecc *ecc)
+ {
+ }
+ 
+-struct ingenic_ecc *of_ingenic_ecc_get(struct device_node *np)
++static inline struct ingenic_ecc *of_ingenic_ecc_get(struct device_node *np)
+ {
+ 	return ERR_PTR(-ENODEV);
+ }
+diff --git a/drivers/mtd/nand/raw/marvell_nand.c b/drivers/mtd/nand/raw/marvell_nand.c
+index 3034916d2e252..2d34c2cb0f7ae 100644
+--- a/drivers/mtd/nand/raw/marvell_nand.c
++++ b/drivers/mtd/nand/raw/marvell_nand.c
+@@ -2457,6 +2457,12 @@ static int marvell_nfc_setup_interface(struct nand_chip *chip, int chipnr,
+ 			NDTR1_WAIT_MODE;
+ 	}
+ 
++	/*
++	 * Reset nfc->selected_chip so the next command will cause the timing
++	 * registers to be updated in marvell_nfc_select_target().
++	 */
++	nfc->selected_chip = NULL;
++
+ 	return 0;
+ }
+ 
+@@ -2894,10 +2900,6 @@ static int marvell_nfc_init(struct marvell_nfc *nfc)
+ 		regmap_update_bits(sysctrl_base, GENCONF_CLK_GATING_CTRL,
+ 				   GENCONF_CLK_GATING_CTRL_ND_GATE,
+ 				   GENCONF_CLK_GATING_CTRL_ND_GATE);
+-
+-		regmap_update_bits(sysctrl_base, GENCONF_ND_CLK_CTRL,
+-				   GENCONF_ND_CLK_CTRL_EN,
+-				   GENCONF_ND_CLK_CTRL_EN);
+ 	}
+ 
+ 	/* Configure the DMA if appropriate */
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
+index 902f407213404..39770a5b74e2e 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.c
++++ b/drivers/net/dsa/mv88e6xxx/chip.c
+@@ -7158,7 +7158,7 @@ static int mv88e6xxx_probe(struct mdio_device *mdiodev)
+ 		goto out;
+ 	}
+ 	if (chip->reset)
+-		usleep_range(1000, 2000);
++		usleep_range(10000, 20000);
+ 
+ 	/* Detect if the device is configured in single chip addressing mode,
+ 	 * otherwise continue with address specific smi init/detection.
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c b/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
+index 33a9574e9e043..32d2c6fac6526 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
++++ b/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
+@@ -1329,7 +1329,7 @@ static enum xgbe_mode xgbe_phy_status_aneg(struct xgbe_prv_data *pdata)
+ 	return pdata->phy_if.phy_impl.an_outcome(pdata);
+ }
+ 
+-static void xgbe_phy_status_result(struct xgbe_prv_data *pdata)
++static bool xgbe_phy_status_result(struct xgbe_prv_data *pdata)
+ {
+ 	struct ethtool_link_ksettings *lks = &pdata->phy.lks;
+ 	enum xgbe_mode mode;
+@@ -1367,8 +1367,13 @@ static void xgbe_phy_status_result(struct xgbe_prv_data *pdata)
+ 
+ 	pdata->phy.duplex = DUPLEX_FULL;
+ 
+-	if (xgbe_set_mode(pdata, mode) && pdata->an_again)
++	if (!xgbe_set_mode(pdata, mode))
++		return false;
++
++	if (pdata->an_again)
+ 		xgbe_phy_reconfig_aneg(pdata);
++
++	return true;
+ }
+ 
+ static void xgbe_phy_status(struct xgbe_prv_data *pdata)
+@@ -1398,7 +1403,8 @@ static void xgbe_phy_status(struct xgbe_prv_data *pdata)
+ 			return;
+ 		}
+ 
+-		xgbe_phy_status_result(pdata);
++		if (xgbe_phy_status_result(pdata))
++			return;
+ 
+ 		if (test_bit(XGBE_LINK_INIT, &pdata->dev_state))
+ 			clear_bit(XGBE_LINK_INIT, &pdata->dev_state);
+diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
+index 059bd911c51d8..52d0a126eb616 100644
+--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
++++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
+@@ -1152,11 +1152,11 @@ int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget)
+ 	unsigned int total_rx_bytes = 0, total_rx_pkts = 0;
+ 	unsigned int offset = rx_ring->rx_offset;
+ 	struct xdp_buff *xdp = &rx_ring->xdp;
++	u32 cached_ntc = rx_ring->first_desc;
+ 	struct ice_tx_ring *xdp_ring = NULL;
+ 	struct bpf_prog *xdp_prog = NULL;
+ 	u32 ntc = rx_ring->next_to_clean;
+ 	u32 cnt = rx_ring->count;
+-	u32 cached_ntc = ntc;
+ 	u32 xdp_xmit = 0;
+ 	u32 cached_ntu;
+ 	bool failure;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c b/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
+index f40497823e65f..7c0f2adbea000 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
+@@ -490,7 +490,7 @@ static void poll_trace(struct mlx5_fw_tracer *tracer,
+ 				(u64)timestamp_low;
+ 		break;
+ 	default:
+-		if (tracer_event->event_id >= tracer->str_db.first_string_trace ||
++		if (tracer_event->event_id >= tracer->str_db.first_string_trace &&
+ 		    tracer_event->event_id <= tracer->str_db.first_string_trace +
+ 					      tracer->str_db.num_string_trace) {
+ 			tracer_event->type = TRACER_EVENT_TYPE_STRING;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
+index 4a19ef4a98110..5ee90a394fff9 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
+@@ -327,6 +327,7 @@ struct mlx5e_params {
+ 	unsigned int sw_mtu;
+ 	int hard_mtu;
+ 	bool ptp_rx;
++	__be32 terminate_lkey_be;
+ };
+ 
+ static inline u8 mlx5e_get_dcb_num_tc(struct mlx5e_params *params)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c b/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c
+index 7ac1ad9c46de0..7e8e96cc5cd08 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c
+@@ -51,7 +51,7 @@ int mlx5e_port_query_buffer(struct mlx5e_priv *priv,
+ 	if (err)
+ 		goto out;
+ 
+-	for (i = 0; i < MLX5E_MAX_BUFFER; i++) {
++	for (i = 0; i < MLX5E_MAX_NETWORK_BUFFER; i++) {
+ 		buffer = MLX5_ADDR_OF(pbmc_reg, out, buffer[i]);
+ 		port_buffer->buffer[i].lossy =
+ 			MLX5_GET(bufferx_reg, buffer, lossy);
+@@ -73,14 +73,24 @@ int mlx5e_port_query_buffer(struct mlx5e_priv *priv,
+ 			  port_buffer->buffer[i].lossy);
+ 	}
+ 
+-	port_buffer->headroom_size = total_used;
++	port_buffer->internal_buffers_size = 0;
++	for (i = MLX5E_MAX_NETWORK_BUFFER; i < MLX5E_TOTAL_BUFFERS; i++) {
++		buffer = MLX5_ADDR_OF(pbmc_reg, out, buffer[i]);
++		port_buffer->internal_buffers_size +=
++			MLX5_GET(bufferx_reg, buffer, size) * port_buff_cell_sz;
++	}
++
+ 	port_buffer->port_buffer_size =
+ 		MLX5_GET(pbmc_reg, out, port_buffer_size) * port_buff_cell_sz;
+-	port_buffer->spare_buffer_size =
+-		port_buffer->port_buffer_size - total_used;
+-
+-	mlx5e_dbg(HW, priv, "total buffer size=%d, spare buffer size=%d\n",
+-		  port_buffer->port_buffer_size,
++	port_buffer->headroom_size = total_used;
++	port_buffer->spare_buffer_size = port_buffer->port_buffer_size -
++					 port_buffer->internal_buffers_size -
++					 port_buffer->headroom_size;
++
++	mlx5e_dbg(HW, priv,
++		  "total buffer size=%u, headroom buffer size=%u, internal buffers size=%u, spare buffer size=%u\n",
++		  port_buffer->port_buffer_size, port_buffer->headroom_size,
++		  port_buffer->internal_buffers_size,
+ 		  port_buffer->spare_buffer_size);
+ out:
+ 	kfree(out);
+@@ -206,11 +216,11 @@ static int port_update_pool_cfg(struct mlx5_core_dev *mdev,
+ 	if (!MLX5_CAP_GEN(mdev, sbcam_reg))
+ 		return 0;
+ 
+-	for (i = 0; i < MLX5E_MAX_BUFFER; i++)
++	for (i = 0; i < MLX5E_MAX_NETWORK_BUFFER; i++)
+ 		lossless_buff_count += ((port_buffer->buffer[i].size) &&
+ 				       (!(port_buffer->buffer[i].lossy)));
+ 
+-	for (i = 0; i < MLX5E_MAX_BUFFER; i++) {
++	for (i = 0; i < MLX5E_MAX_NETWORK_BUFFER; i++) {
+ 		p = select_sbcm_params(&port_buffer->buffer[i], lossless_buff_count);
+ 		err = mlx5e_port_set_sbcm(mdev, 0, i,
+ 					  MLX5_INGRESS_DIR,
+@@ -293,7 +303,7 @@ static int port_set_buffer(struct mlx5e_priv *priv,
+ 	if (err)
+ 		goto out;
+ 
+-	for (i = 0; i < MLX5E_MAX_BUFFER; i++) {
++	for (i = 0; i < MLX5E_MAX_NETWORK_BUFFER; i++) {
+ 		void *buffer = MLX5_ADDR_OF(pbmc_reg, in, buffer[i]);
+ 		u64 size = port_buffer->buffer[i].size;
+ 		u64 xoff = port_buffer->buffer[i].xoff;
+@@ -351,7 +361,7 @@ static int update_xoff_threshold(struct mlx5e_port_buffer *port_buffer,
+ {
+ 	int i;
+ 
+-	for (i = 0; i < MLX5E_MAX_BUFFER; i++) {
++	for (i = 0; i < MLX5E_MAX_NETWORK_BUFFER; i++) {
+ 		if (port_buffer->buffer[i].lossy) {
+ 			port_buffer->buffer[i].xoff = 0;
+ 			port_buffer->buffer[i].xon  = 0;
+@@ -408,7 +418,7 @@ static int update_buffer_lossy(struct mlx5_core_dev *mdev,
+ 	int err;
+ 	int i;
+ 
+-	for (i = 0; i < MLX5E_MAX_BUFFER; i++) {
++	for (i = 0; i < MLX5E_MAX_NETWORK_BUFFER; i++) {
+ 		prio_count = 0;
+ 		lossy_count = 0;
+ 
+@@ -432,11 +442,11 @@ static int update_buffer_lossy(struct mlx5_core_dev *mdev,
+ 	}
+ 
+ 	if (changed) {
+-		err = port_update_pool_cfg(mdev, port_buffer);
++		err = update_xoff_threshold(port_buffer, xoff, max_mtu, port_buff_cell_sz);
+ 		if (err)
+ 			return err;
+ 
+-		err = update_xoff_threshold(port_buffer, xoff, max_mtu, port_buff_cell_sz);
++		err = port_update_pool_cfg(mdev, port_buffer);
+ 		if (err)
+ 			return err;
+ 
+@@ -515,7 +525,7 @@ int mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv,
+ 
+ 	if (change & MLX5E_PORT_BUFFER_PRIO2BUFFER) {
+ 		update_prio2buffer = true;
+-		for (i = 0; i < MLX5E_MAX_BUFFER; i++)
++		for (i = 0; i < MLX5E_MAX_NETWORK_BUFFER; i++)
+ 			mlx5e_dbg(HW, priv, "%s: requested to map prio[%d] to buffer %d\n",
+ 				  __func__, i, prio2buffer[i]);
+ 
+@@ -530,7 +540,7 @@ int mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv,
+ 	}
+ 
+ 	if (change & MLX5E_PORT_BUFFER_SIZE) {
+-		for (i = 0; i < MLX5E_MAX_BUFFER; i++) {
++		for (i = 0; i < MLX5E_MAX_NETWORK_BUFFER; i++) {
+ 			mlx5e_dbg(HW, priv, "%s: buffer[%d]=%d\n", __func__, i, buffer_size[i]);
+ 			if (!port_buffer.buffer[i].lossy && !buffer_size[i]) {
+ 				mlx5e_dbg(HW, priv, "%s: lossless buffer[%d] size cannot be zero\n",
+@@ -544,7 +554,9 @@ int mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv,
+ 
+ 		mlx5e_dbg(HW, priv, "%s: total buffer requested=%d\n", __func__, total_used);
+ 
+-		if (total_used > port_buffer.port_buffer_size)
++		if (total_used > port_buffer.headroom_size &&
++		    (total_used - port_buffer.headroom_size) >
++			    port_buffer.spare_buffer_size)
+ 			return -EINVAL;
+ 
+ 		update_buffer = true;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.h b/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.h
+index a6ef118de758f..f4a19ffbb641c 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.h
+@@ -35,7 +35,8 @@
+ #include "en.h"
+ #include "port.h"
+ 
+-#define MLX5E_MAX_BUFFER 8
++#define MLX5E_MAX_NETWORK_BUFFER 8
++#define MLX5E_TOTAL_BUFFERS 10
+ #define MLX5E_DEFAULT_CABLE_LEN 7 /* 7 meters */
+ 
+ #define MLX5_BUFFER_SUPPORTED(mdev) (MLX5_CAP_GEN(mdev, pcam_reg) && \
+@@ -60,8 +61,9 @@ struct mlx5e_bufferx_reg {
+ struct mlx5e_port_buffer {
+ 	u32                       port_buffer_size;
+ 	u32                       spare_buffer_size;
+-	u32                       headroom_size;
+-	struct mlx5e_bufferx_reg  buffer[MLX5E_MAX_BUFFER];
++	u32                       headroom_size;	  /* Buffers 0-7 */
++	u32                       internal_buffers_size;  /* Buffers 8-9 */
++	struct mlx5e_bufferx_reg  buffer[MLX5E_MAX_NETWORK_BUFFER];
+ };
+ 
+ int mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv,
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/act.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/act.c
+index eba0c86989263..0380a04c3691c 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/act.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/act.c
+@@ -82,29 +82,9 @@ mlx5e_tc_act_init_parse_state(struct mlx5e_tc_act_parse_state *parse_state,
+ 	parse_state->flow_action = flow_action;
+ }
+ 
+-void
+-mlx5e_tc_act_reorder_flow_actions(struct flow_action *flow_action,
+-				  struct mlx5e_tc_flow_action *flow_action_reorder)
+-{
+-	struct flow_action_entry *act;
+-	int i, j = 0;
+-
+-	flow_action_for_each(i, act, flow_action) {
+-		/* Add CT action to be first. */
+-		if (act->id == FLOW_ACTION_CT)
+-			flow_action_reorder->entries[j++] = act;
+-	}
+-
+-	flow_action_for_each(i, act, flow_action) {
+-		if (act->id == FLOW_ACTION_CT)
+-			continue;
+-		flow_action_reorder->entries[j++] = act;
+-	}
+-}
+-
+ int
+ mlx5e_tc_act_post_parse(struct mlx5e_tc_act_parse_state *parse_state,
+-			struct flow_action *flow_action,
++			struct flow_action *flow_action, int from, int to,
+ 			struct mlx5_flow_attr *attr,
+ 			enum mlx5_flow_namespace_type ns_type)
+ {
+@@ -116,6 +96,11 @@ mlx5e_tc_act_post_parse(struct mlx5e_tc_act_parse_state *parse_state,
+ 	priv = parse_state->flow->priv;
+ 
+ 	flow_action_for_each(i, act, flow_action) {
++		if (i < from)
++			continue;
++		else if (i > to)
++			break;
++
+ 		tc_act = mlx5e_tc_act_get(act->id, ns_type);
+ 		if (!tc_act || !tc_act->post_parse)
+ 			continue;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/act.h b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/act.h
+index 8346557eeaf63..84c78d5f5bed8 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/act.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/act.h
+@@ -56,6 +56,8 @@ struct mlx5e_tc_act {
+ 				   const struct flow_action_entry *act,
+ 				   struct mlx5_flow_attr *attr);
+ 
++	bool (*is_missable)(const struct flow_action_entry *act);
++
+ 	int (*offload_action)(struct mlx5e_priv *priv,
+ 			      struct flow_offload_action *fl_act,
+ 			      struct flow_action_entry *act);
+@@ -110,13 +112,9 @@ mlx5e_tc_act_init_parse_state(struct mlx5e_tc_act_parse_state *parse_state,
+ 			      struct flow_action *flow_action,
+ 			      struct netlink_ext_ack *extack);
+ 
+-void
+-mlx5e_tc_act_reorder_flow_actions(struct flow_action *flow_action,
+-				  struct mlx5e_tc_flow_action *flow_action_reorder);
+-
+ int
+ mlx5e_tc_act_post_parse(struct mlx5e_tc_act_parse_state *parse_state,
+-			struct flow_action *flow_action,
++			struct flow_action *flow_action, int from, int to,
+ 			struct mlx5_flow_attr *attr,
+ 			enum mlx5_flow_namespace_type ns_type);
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/ct.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/ct.c
+index a829c94289c10..fce1c0fd24535 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/ct.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/ct.c
+@@ -95,10 +95,17 @@ tc_act_is_multi_table_act_ct(struct mlx5e_priv *priv,
+ 	return true;
+ }
+ 
++static bool
++tc_act_is_missable_ct(const struct flow_action_entry *act)
++{
++	return !(act->ct.action & TCA_CT_ACT_CLEAR);
++}
++
+ struct mlx5e_tc_act mlx5e_tc_act_ct = {
+ 	.can_offload = tc_act_can_offload_ct,
+ 	.parse_action = tc_act_parse_ct,
+-	.is_multi_table_act = tc_act_is_multi_table_act_ct,
+ 	.post_parse = tc_act_post_parse_ct,
++	.is_multi_table_act = tc_act_is_multi_table_act_ct,
++	.is_missable = tc_act_is_missable_ct,
+ };
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c
+index fbb392d54fa51..bbab164eab546 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c
+@@ -492,6 +492,19 @@ void mlx5e_encap_put(struct mlx5e_priv *priv, struct mlx5e_encap_entry *e)
+ 	mlx5e_encap_dealloc(priv, e);
+ }
+ 
++static void mlx5e_encap_put_locked(struct mlx5e_priv *priv, struct mlx5e_encap_entry *e)
++{
++	struct mlx5_eswitch *esw = priv->mdev->priv.eswitch;
++
++	lockdep_assert_held(&esw->offloads.encap_tbl_lock);
++
++	if (!refcount_dec_and_test(&e->refcnt))
++		return;
++	list_del(&e->route_list);
++	hash_del_rcu(&e->encap_hlist);
++	mlx5e_encap_dealloc(priv, e);
++}
++
+ static void mlx5e_decap_put(struct mlx5e_priv *priv, struct mlx5e_decap_entry *d)
+ {
+ 	struct mlx5_eswitch *esw = priv->mdev->priv.eswitch;
+@@ -785,6 +798,8 @@ int mlx5e_attach_encap(struct mlx5e_priv *priv,
+ 	uintptr_t hash_key;
+ 	int err = 0;
+ 
++	lockdep_assert_held(&esw->offloads.encap_tbl_lock);
++
+ 	parse_attr = attr->parse_attr;
+ 	tun_info = parse_attr->tun_info[out_index];
+ 	mpls_info = &parse_attr->mpls_info[out_index];
+@@ -798,7 +813,6 @@ int mlx5e_attach_encap(struct mlx5e_priv *priv,
+ 
+ 	hash_key = hash_encap_info(&key);
+ 
+-	mutex_lock(&esw->offloads.encap_tbl_lock);
+ 	e = mlx5e_encap_get(priv, &key, hash_key);
+ 
+ 	/* must verify if encap is valid or not */
+@@ -809,15 +823,6 @@ int mlx5e_attach_encap(struct mlx5e_priv *priv,
+ 			goto out_err;
+ 		}
+ 
+-		mutex_unlock(&esw->offloads.encap_tbl_lock);
+-		wait_for_completion(&e->res_ready);
+-
+-		/* Protect against concurrent neigh update. */
+-		mutex_lock(&esw->offloads.encap_tbl_lock);
+-		if (e->compl_result < 0) {
+-			err = -EREMOTEIO;
+-			goto out_err;
+-		}
+ 		goto attach_flow;
+ 	}
+ 
+@@ -846,15 +851,12 @@ int mlx5e_attach_encap(struct mlx5e_priv *priv,
+ 	INIT_LIST_HEAD(&e->flows);
+ 	hash_add_rcu(esw->offloads.encap_tbl, &e->encap_hlist, hash_key);
+ 	tbl_time_before = mlx5e_route_tbl_get_last_update(priv);
+-	mutex_unlock(&esw->offloads.encap_tbl_lock);
+ 
+ 	if (family == AF_INET)
+ 		err = mlx5e_tc_tun_create_header_ipv4(priv, mirred_dev, e);
+ 	else if (family == AF_INET6)
+ 		err = mlx5e_tc_tun_create_header_ipv6(priv, mirred_dev, e);
+ 
+-	/* Protect against concurrent neigh update. */
+-	mutex_lock(&esw->offloads.encap_tbl_lock);
+ 	complete_all(&e->res_ready);
+ 	if (err) {
+ 		e->compl_result = err;
+@@ -889,18 +891,15 @@ attach_flow:
+ 	} else {
+ 		flow_flag_set(flow, SLOW);
+ 	}
+-	mutex_unlock(&esw->offloads.encap_tbl_lock);
+ 
+ 	return err;
+ 
+ out_err:
+-	mutex_unlock(&esw->offloads.encap_tbl_lock);
+ 	if (e)
+-		mlx5e_encap_put(priv, e);
++		mlx5e_encap_put_locked(priv, e);
+ 	return err;
+ 
+ out_err_init:
+-	mutex_unlock(&esw->offloads.encap_tbl_lock);
+ 	kfree(tun_info);
+ 	kfree(e);
+ 	return err;
+@@ -985,6 +984,93 @@ out_err:
+ 	return err;
+ }
+ 
++int mlx5e_tc_tun_encap_dests_set(struct mlx5e_priv *priv,
++				 struct mlx5e_tc_flow *flow,
++				 struct mlx5_flow_attr *attr,
++				 struct netlink_ext_ack *extack,
++				 bool *vf_tun)
++{
++	struct mlx5e_tc_flow_parse_attr *parse_attr;
++	struct mlx5_esw_flow_attr *esw_attr;
++	struct net_device *encap_dev = NULL;
++	struct mlx5e_rep_priv *rpriv;
++	struct mlx5e_priv *out_priv;
++	struct mlx5_eswitch *esw;
++	int out_index;
++	int err = 0;
++
++	if (!mlx5e_is_eswitch_flow(flow))
++		return 0;
++
++	parse_attr = attr->parse_attr;
++	esw_attr = attr->esw_attr;
++	*vf_tun = false;
++
++	esw = priv->mdev->priv.eswitch;
++	mutex_lock(&esw->offloads.encap_tbl_lock);
++	for (out_index = 0; out_index < MLX5_MAX_FLOW_FWD_VPORTS; out_index++) {
++		struct net_device *out_dev;
++		int mirred_ifindex;
++
++		if (!(esw_attr->dests[out_index].flags & MLX5_ESW_DEST_ENCAP))
++			continue;
++
++		mirred_ifindex = parse_attr->mirred_ifindex[out_index];
++		out_dev = dev_get_by_index(dev_net(priv->netdev), mirred_ifindex);
++		if (!out_dev) {
++			NL_SET_ERR_MSG_MOD(extack, "Requested mirred device not found");
++			err = -ENODEV;
++			goto out;
++		}
++		err = mlx5e_attach_encap(priv, flow, attr, out_dev, out_index,
++					 extack, &encap_dev);
++		dev_put(out_dev);
++		if (err)
++			goto out;
++
++		if (esw_attr->dests[out_index].flags &
++		    MLX5_ESW_DEST_CHAIN_WITH_SRC_PORT_CHANGE &&
++		    !esw_attr->dest_int_port)
++			*vf_tun = true;
++
++		out_priv = netdev_priv(encap_dev);
++		rpriv = out_priv->ppriv;
++		esw_attr->dests[out_index].rep = rpriv->rep;
++		esw_attr->dests[out_index].mdev = out_priv->mdev;
++	}
++
++	if (*vf_tun && esw_attr->out_count > 1) {
++		NL_SET_ERR_MSG_MOD(extack, "VF tunnel encap with mirroring is not supported");
++		err = -EOPNOTSUPP;
++		goto out;
++	}
++
++out:
++	mutex_unlock(&esw->offloads.encap_tbl_lock);
++	return err;
++}
++
++void mlx5e_tc_tun_encap_dests_unset(struct mlx5e_priv *priv,
++				    struct mlx5e_tc_flow *flow,
++				    struct mlx5_flow_attr *attr)
++{
++	struct mlx5_esw_flow_attr *esw_attr;
++	int out_index;
++
++	if (!mlx5e_is_eswitch_flow(flow))
++		return;
++
++	esw_attr = attr->esw_attr;
++
++	for (out_index = 0; out_index < MLX5_MAX_FLOW_FWD_VPORTS; out_index++) {
++		if (!(esw_attr->dests[out_index].flags & MLX5_ESW_DEST_ENCAP))
++			continue;
++
++		mlx5e_detach_encap(flow->priv, flow, attr, out_index);
++		kfree(attr->parse_attr->tun_info[out_index]);
++	}
++}
++
+ static int cmp_route_info(struct mlx5e_route_key *a,
+ 			  struct mlx5e_route_key *b)
+ {
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.h b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.h
+index 8ad273dde40ee..5d7d67687cbcd 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.h
+@@ -30,6 +30,15 @@ int mlx5e_attach_decap_route(struct mlx5e_priv *priv,
+ void mlx5e_detach_decap_route(struct mlx5e_priv *priv,
+ 			      struct mlx5e_tc_flow *flow);
+ 
++int mlx5e_tc_tun_encap_dests_set(struct mlx5e_priv *priv,
++				 struct mlx5e_tc_flow *flow,
++				 struct mlx5_flow_attr *attr,
++				 struct netlink_ext_ack *extack,
++				 bool *vf_tun);
++void mlx5e_tc_tun_encap_dests_unset(struct mlx5e_priv *priv,
++				    struct mlx5e_tc_flow *flow,
++				    struct mlx5_flow_attr *attr);
++
+ struct ip_tunnel_info *mlx5e_dup_tun_info(const struct ip_tunnel_info *tun_info);
+ 
+ int mlx5e_tc_set_attr_rx_tun(struct mlx5e_tc_flow *flow,
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_common.c b/drivers/net/ethernet/mellanox/mlx5/core/en_common.c
+index 993af4c12d909..21cd232c2c20c 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_common.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_common.c
+@@ -149,10 +149,8 @@ int mlx5e_refresh_tirs(struct mlx5e_priv *priv, bool enable_uc_lb,
+ 
+ 	inlen = MLX5_ST_SZ_BYTES(modify_tir_in);
+ 	in = kvzalloc(inlen, GFP_KERNEL);
+-	if (!in) {
+-		err = -ENOMEM;
+-		goto out;
+-	}
++	if (!in)
++		return -ENOMEM;
+ 
+ 	if (enable_uc_lb)
+ 		lb_flags = MLX5_TIRC_SELF_LB_BLOCK_BLOCK_UNICAST;
+@@ -170,14 +168,13 @@ int mlx5e_refresh_tirs(struct mlx5e_priv *priv, bool enable_uc_lb,
+ 		tirn = tir->tirn;
+ 		err = mlx5_core_modify_tir(mdev, tirn, in);
+ 		if (err)
+-			goto out;
++			break;
+ 	}
++	mutex_unlock(&mdev->mlx5e_res.hw_objs.td.list_lock);
+ 
+-out:
+ 	kvfree(in);
+ 	if (err)
+ 		netdev_err(priv->netdev, "refresh tir(0x%x) failed, %d\n", tirn, err);
+-	mutex_unlock(&mdev->mlx5e_res.hw_objs.td.list_lock);
+ 
+ 	return err;
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c b/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c
+index 89de92d064836..ebee52a8361aa 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c
+@@ -926,9 +926,10 @@ static int mlx5e_dcbnl_getbuffer(struct net_device *dev,
+ 	if (err)
+ 		return err;
+ 
+-	for (i = 0; i < MLX5E_MAX_BUFFER; i++)
++	for (i = 0; i < MLX5E_MAX_NETWORK_BUFFER; i++)
+ 		dcb_buffer->buffer_size[i] = port_buffer.buffer[i].size;
+-	dcb_buffer->total_size = port_buffer.port_buffer_size;
++	dcb_buffer->total_size = port_buffer.port_buffer_size -
++				 port_buffer.internal_buffers_size;
+ 
+ 	return 0;
+ }
+@@ -970,7 +971,7 @@ static int mlx5e_dcbnl_setbuffer(struct net_device *dev,
+ 	if (err)
+ 		return err;
+ 
+-	for (i = 0; i < MLX5E_MAX_BUFFER; i++) {
++	for (i = 0; i < MLX5E_MAX_NETWORK_BUFFER; i++) {
+ 		if (port_buffer.buffer[i].size != dcb_buffer->buffer_size[i]) {
+ 			changed |= MLX5E_PORT_BUFFER_SIZE;
+ 			buffer_size = dcb_buffer->buffer_size;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index 579c2d217fdc6..7c72bed7f81aa 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -668,26 +668,6 @@ static void mlx5e_rq_free_shampo(struct mlx5e_rq *rq)
+ 	mlx5e_rq_shampo_hd_free(rq);
+ }
+ 
+-static __be32 mlx5e_get_terminate_scatter_list_mkey(struct mlx5_core_dev *dev)
+-{
+-	u32 out[MLX5_ST_SZ_DW(query_special_contexts_out)] = {};
+-	u32 in[MLX5_ST_SZ_DW(query_special_contexts_in)] = {};
+-	int res;
+-
+-	if (!MLX5_CAP_GEN(dev, terminate_scatter_list_mkey))
+-		return MLX5_TERMINATE_SCATTER_LIST_LKEY;
+-
+-	MLX5_SET(query_special_contexts_in, in, opcode,
+-		 MLX5_CMD_OP_QUERY_SPECIAL_CONTEXTS);
+-	res = mlx5_cmd_exec_inout(dev, query_special_contexts, in, out);
+-	if (res)
+-		return MLX5_TERMINATE_SCATTER_LIST_LKEY;
+-
+-	res = MLX5_GET(query_special_contexts_out, out,
+-		       terminate_scatter_list_mkey);
+-	return cpu_to_be32(res);
+-}
+-
+ static int mlx5e_alloc_rq(struct mlx5e_params *params,
+ 			  struct mlx5e_xsk_param *xsk,
+ 			  struct mlx5e_rq_param *rqp,
+@@ -852,7 +832,7 @@ static int mlx5e_alloc_rq(struct mlx5e_params *params,
+ 			/* check if num_frags is not a pow of two */
+ 			if (rq->wqe.info.num_frags < (1 << rq->wqe.info.log_num_frags)) {
+ 				wqe->data[f].byte_count = 0;
+-				wqe->data[f].lkey = mlx5e_get_terminate_scatter_list_mkey(mdev);
++				wqe->data[f].lkey = params->terminate_lkey_be;
+ 				wqe->data[f].addr = 0;
+ 			}
+ 		}
+@@ -4973,6 +4953,8 @@ void mlx5e_build_nic_params(struct mlx5e_priv *priv, struct mlx5e_xsk *xsk, u16
+ 	/* RQ */
+ 	mlx5e_build_rq_params(mdev, params);
+ 
++	params->terminate_lkey_be = mlx5_core_get_terminate_scatter_list_mkey(mdev);
++
+ 	params->packet_merge.timeout = mlx5e_choose_lro_timeout(mdev, MLX5E_DEFAULT_LRO_TIMEOUT);
+ 
+ 	/* CQ moderation params */
+@@ -5244,12 +5226,16 @@ static int mlx5e_nic_init(struct mlx5_core_dev *mdev,
+ 
+ 	mlx5e_timestamp_init(priv);
+ 
++	priv->dfs_root = debugfs_create_dir("nic",
++					    mlx5_debugfs_get_dev_root(mdev));
++
+ 	fs = mlx5e_fs_init(priv->profile, mdev,
+ 			   !test_bit(MLX5E_STATE_DESTROYING, &priv->state),
+ 			   priv->dfs_root);
+ 	if (!fs) {
+ 		err = -ENOMEM;
+ 		mlx5_core_err(mdev, "FS initialization failed, %d\n", err);
++		debugfs_remove_recursive(priv->dfs_root);
+ 		return err;
+ 	}
+ 	priv->fs = fs;
+@@ -5270,6 +5256,7 @@ static void mlx5e_nic_cleanup(struct mlx5e_priv *priv)
+ 	mlx5e_health_destroy_reporters(priv);
+ 	mlx5e_ktls_cleanup(priv);
+ 	mlx5e_fs_cleanup(priv->fs);
++	debugfs_remove_recursive(priv->dfs_root);
+ 	priv->fs = NULL;
+ }
+ 
+@@ -5816,8 +5803,8 @@ void mlx5e_detach_netdev(struct mlx5e_priv *priv)
+ }
+ 
+ static int
+-mlx5e_netdev_attach_profile(struct net_device *netdev, struct mlx5_core_dev *mdev,
+-			    const struct mlx5e_profile *new_profile, void *new_ppriv)
++mlx5e_netdev_init_profile(struct net_device *netdev, struct mlx5_core_dev *mdev,
++			  const struct mlx5e_profile *new_profile, void *new_ppriv)
+ {
+ 	struct mlx5e_priv *priv = netdev_priv(netdev);
+ 	int err;
+@@ -5833,6 +5820,25 @@ mlx5e_netdev_attach_profile(struct net_device *netdev, struct mlx5_core_dev *mde
+ 	err = new_profile->init(priv->mdev, priv->netdev);
+ 	if (err)
+ 		goto priv_cleanup;
++
++	return 0;
++
++priv_cleanup:
++	mlx5e_priv_cleanup(priv);
++	return err;
++}
++
++static int
++mlx5e_netdev_attach_profile(struct net_device *netdev, struct mlx5_core_dev *mdev,
++			    const struct mlx5e_profile *new_profile, void *new_ppriv)
++{
++	struct mlx5e_priv *priv = netdev_priv(netdev);
++	int err;
++
++	err = mlx5e_netdev_init_profile(netdev, mdev, new_profile, new_ppriv);
++	if (err)
++		return err;
++
+ 	err = mlx5e_attach_netdev(priv);
+ 	if (err)
+ 		goto profile_cleanup;
+@@ -5840,7 +5846,6 @@ mlx5e_netdev_attach_profile(struct net_device *netdev, struct mlx5_core_dev *mde
+ 
+ profile_cleanup:
+ 	new_profile->cleanup(priv);
+-priv_cleanup:
+ 	mlx5e_priv_cleanup(priv);
+ 	return err;
+ }
+@@ -5859,6 +5864,12 @@ int mlx5e_netdev_change_profile(struct mlx5e_priv *priv,
+ 	priv->profile->cleanup(priv);
+ 	mlx5e_priv_cleanup(priv);
+ 
++	if (mdev->state == MLX5_DEVICE_STATE_INTERNAL_ERROR) {
++		mlx5e_netdev_init_profile(netdev, mdev, new_profile, new_ppriv);
++		set_bit(MLX5E_STATE_DESTROYING, &priv->state);
++		return -EIO;
++	}
++
+ 	err = mlx5e_netdev_attach_profile(netdev, mdev, new_profile, new_ppriv);
+ 	if (err) { /* roll back to original profile */
+ 		netdev_warn(netdev, "%s: new profile init failed, %d\n", __func__, err);
+@@ -5920,8 +5931,11 @@ static int mlx5e_suspend(struct auxiliary_device *adev, pm_message_t state)
+ 	struct net_device *netdev = priv->netdev;
+ 	struct mlx5_core_dev *mdev = priv->mdev;
+ 
+-	if (!netif_device_present(netdev))
++	if (!netif_device_present(netdev)) {
++		if (test_bit(MLX5E_STATE_DESTROYING, &priv->state))
++			mlx5e_destroy_mdev_resources(mdev);
+ 		return -ENODEV;
++	}
+ 
+ 	mlx5e_detach_netdev(priv);
+ 	mlx5e_destroy_mdev_resources(mdev);
+@@ -5967,9 +5981,6 @@ static int mlx5e_probe(struct auxiliary_device *adev,
+ 	priv->profile = profile;
+ 	priv->ppriv = NULL;
+ 
+-	priv->dfs_root = debugfs_create_dir("nic",
+-					    mlx5_debugfs_get_dev_root(priv->mdev));
+-
+ 	err = profile->init(mdev, netdev);
+ 	if (err) {
+ 		mlx5_core_err(mdev, "mlx5e_nic_profile init failed, %d\n", err);
+@@ -5998,7 +6009,6 @@ err_resume:
+ err_profile_cleanup:
+ 	profile->cleanup(priv);
+ err_destroy_netdev:
+-	debugfs_remove_recursive(priv->dfs_root);
+ 	mlx5e_destroy_netdev(priv);
+ err_devlink_port_unregister:
+ 	mlx5e_devlink_port_unregister(mlx5e_dev);
+@@ -6018,7 +6028,6 @@ static void mlx5e_remove(struct auxiliary_device *adev)
+ 	unregister_netdev(priv->netdev);
+ 	mlx5e_suspend(adev, state);
+ 	priv->profile->cleanup(priv);
+-	debugfs_remove_recursive(priv->dfs_root);
+ 	mlx5e_destroy_netdev(priv);
+ 	mlx5e_devlink_port_unregister(mlx5e_dev);
+ 	mlx5e_destroy_devlink(mlx5e_dev);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+index 6e18d91c3d766..992f3f9c11925 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+@@ -30,6 +30,7 @@
+  * SOFTWARE.
+  */
+ 
++#include <linux/debugfs.h>
+ #include <linux/mlx5/fs.h>
+ #include <net/switchdev.h>
+ #include <net/pkt_cls.h>
+@@ -811,11 +812,15 @@ static int mlx5e_init_ul_rep(struct mlx5_core_dev *mdev,
+ {
+ 	struct mlx5e_priv *priv = netdev_priv(netdev);
+ 
++	priv->dfs_root = debugfs_create_dir("nic",
++					    mlx5_debugfs_get_dev_root(mdev));
++
+ 	priv->fs = mlx5e_fs_init(priv->profile, mdev,
+ 				 !test_bit(MLX5E_STATE_DESTROYING, &priv->state),
+ 				 priv->dfs_root);
+ 	if (!priv->fs) {
+ 		netdev_err(priv->netdev, "FS allocation failed\n");
++		debugfs_remove_recursive(priv->dfs_root);
+ 		return -ENOMEM;
+ 	}
+ 
+@@ -828,6 +833,7 @@ static int mlx5e_init_ul_rep(struct mlx5_core_dev *mdev,
+ static void mlx5e_cleanup_rep(struct mlx5e_priv *priv)
+ {
+ 	mlx5e_fs_cleanup(priv->fs);
++	debugfs_remove_recursive(priv->dfs_root);
+ 	priv->fs = NULL;
+ }
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+index 53acd9a8a4c35..82b96196e97b7 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+@@ -183,7 +183,8 @@ static struct lock_class_key tc_ht_wq_key;
+ 
+ static void mlx5e_put_flow_tunnel_id(struct mlx5e_tc_flow *flow);
+ static void free_flow_post_acts(struct mlx5e_tc_flow *flow);
+-static void mlx5_free_flow_attr(struct mlx5e_tc_flow *flow, struct mlx5_flow_attr *attr);
++static void mlx5_free_flow_attr_actions(struct mlx5e_tc_flow *flow,
++					struct mlx5_flow_attr *attr);
+ 
+ void
+ mlx5e_tc_match_to_reg_match(struct mlx5_flow_spec *spec,
+@@ -1726,98 +1727,6 @@ int mlx5e_tc_query_route_vport(struct net_device *out_dev, struct net_device *ro
+ 	return mlx5_eswitch_vhca_id_to_vport(esw, vhca_id, vport);
+ }
+ 
+-static int
+-set_encap_dests(struct mlx5e_priv *priv,
+-		struct mlx5e_tc_flow *flow,
+-		struct mlx5_flow_attr *attr,
+-		struct netlink_ext_ack *extack,
+-		bool *vf_tun)
+-{
+-	struct mlx5e_tc_flow_parse_attr *parse_attr;
+-	struct mlx5_esw_flow_attr *esw_attr;
+-	struct net_device *encap_dev = NULL;
+-	struct mlx5e_rep_priv *rpriv;
+-	struct mlx5e_priv *out_priv;
+-	int out_index;
+-	int err = 0;
+-
+-	if (!mlx5e_is_eswitch_flow(flow))
+-		return 0;
+-
+-	parse_attr = attr->parse_attr;
+-	esw_attr = attr->esw_attr;
+-	*vf_tun = false;
+-
+-	for (out_index = 0; out_index < MLX5_MAX_FLOW_FWD_VPORTS; out_index++) {
+-		struct net_device *out_dev;
+-		int mirred_ifindex;
+-
+-		if (!(esw_attr->dests[out_index].flags & MLX5_ESW_DEST_ENCAP))
+-			continue;
+-
+-		mirred_ifindex = parse_attr->mirred_ifindex[out_index];
+-		out_dev = dev_get_by_index(dev_net(priv->netdev), mirred_ifindex);
+-		if (!out_dev) {
+-			NL_SET_ERR_MSG_MOD(extack, "Requested mirred device not found");
+-			err = -ENODEV;
+-			goto out;
+-		}
+-		err = mlx5e_attach_encap(priv, flow, attr, out_dev, out_index,
+-					 extack, &encap_dev);
+-		dev_put(out_dev);
+-		if (err)
+-			goto out;
+-
+-		if (esw_attr->dests[out_index].flags &
+-		    MLX5_ESW_DEST_CHAIN_WITH_SRC_PORT_CHANGE &&
+-		    !esw_attr->dest_int_port)
+-			*vf_tun = true;
+-
+-		out_priv = netdev_priv(encap_dev);
+-		rpriv = out_priv->ppriv;
+-		esw_attr->dests[out_index].rep = rpriv->rep;
+-		esw_attr->dests[out_index].mdev = out_priv->mdev;
+-	}
+-
+-	if (*vf_tun && esw_attr->out_count > 1) {
+-		NL_SET_ERR_MSG_MOD(extack, "VF tunnel encap with mirroring is not supported");
+-		err = -EOPNOTSUPP;
+-		goto out;
+-	}
+-
+-out:
+-	return err;
+-}
+-
+-static void
+-clean_encap_dests(struct mlx5e_priv *priv,
+-		  struct mlx5e_tc_flow *flow,
+-		  struct mlx5_flow_attr *attr,
+-		  bool *vf_tun)
+-{
+-	struct mlx5_esw_flow_attr *esw_attr;
+-	int out_index;
+-
+-	if (!mlx5e_is_eswitch_flow(flow))
+-		return;
+-
+-	esw_attr = attr->esw_attr;
+-	*vf_tun = false;
+-
+-	for (out_index = 0; out_index < MLX5_MAX_FLOW_FWD_VPORTS; out_index++) {
+-		if (!(esw_attr->dests[out_index].flags & MLX5_ESW_DEST_ENCAP))
+-			continue;
+-
+-		if (esw_attr->dests[out_index].flags &
+-		    MLX5_ESW_DEST_CHAIN_WITH_SRC_PORT_CHANGE &&
+-		    !esw_attr->dest_int_port)
+-			*vf_tun = true;
+-
+-		mlx5e_detach_encap(priv, flow, attr, out_index);
+-		kfree(attr->parse_attr->tun_info[out_index]);
+-	}
+-}
+-
+ static int
+ verify_attr_actions(u32 actions, struct netlink_ext_ack *extack)
+ {
+@@ -1854,7 +1763,7 @@ post_process_attr(struct mlx5e_tc_flow *flow,
+ 	if (err)
+ 		goto err_out;
+ 
+-	err = set_encap_dests(flow->priv, flow, attr, extack, &vf_tun);
++	err = mlx5e_tc_tun_encap_dests_set(flow->priv, flow, attr, extack, &vf_tun);
+ 	if (err)
+ 		goto err_out;
+ 
+@@ -2035,7 +1944,7 @@ static void free_branch_attr(struct mlx5e_tc_flow *flow, struct mlx5_flow_attr *
+ 	if (!attr)
+ 		return;
+ 
+-	mlx5_free_flow_attr(flow, attr);
++	mlx5_free_flow_attr_actions(flow, attr);
+ 	kvfree(attr->parse_attr);
+ 	kfree(attr);
+ }
+@@ -2046,7 +1955,6 @@ static void mlx5e_tc_del_fdb_flow(struct mlx5e_priv *priv,
+ 	struct mlx5_eswitch *esw = priv->mdev->priv.eswitch;
+ 	struct mlx5_flow_attr *attr = flow->attr;
+ 	struct mlx5_esw_flow_attr *esw_attr;
+-	bool vf_tun;
+ 
+ 	esw_attr = attr->esw_attr;
+ 	mlx5e_put_flow_tunnel_id(flow);
+@@ -2068,18 +1976,8 @@ static void mlx5e_tc_del_fdb_flow(struct mlx5e_priv *priv,
+ 	if (flow->decap_route)
+ 		mlx5e_detach_decap_route(priv, flow);
+ 
+-	clean_encap_dests(priv, flow, attr, &vf_tun);
+-
+ 	mlx5_tc_ct_match_del(get_ct_priv(priv), &flow->attr->ct_attr);
+ 
+-	if (attr->action & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR) {
+-		mlx5e_mod_hdr_dealloc(&attr->parse_attr->mod_hdr_acts);
+-		mlx5e_tc_detach_mod_hdr(priv, flow, attr);
+-	}
+-
+-	if (attr->action & MLX5_FLOW_CONTEXT_ACTION_COUNT)
+-		mlx5_fc_destroy(esw_attr->counter_dev, attr->counter);
+-
+ 	if (esw_attr->int_port)
+ 		mlx5e_tc_int_port_put(mlx5e_get_int_port_priv(priv), esw_attr->int_port);
+ 
+@@ -2092,8 +1990,7 @@ static void mlx5e_tc_del_fdb_flow(struct mlx5e_priv *priv,
+ 	mlx5e_tc_act_stats_del_flow(get_act_stats_handle(priv), flow);
+ 
+ 	free_flow_post_acts(flow);
+-	free_branch_attr(flow, attr->branch_true);
+-	free_branch_attr(flow, attr->branch_false);
++	mlx5_free_flow_attr_actions(flow, attr);
+ 
+ 	kvfree(attr->esw_attr->rx_tun_attr);
+ 	kvfree(attr->parse_attr);
+@@ -3812,9 +3709,7 @@ free_flow_post_acts(struct mlx5e_tc_flow *flow)
+ 		if (list_is_last(&attr->list, &flow->attrs))
+ 			break;
+ 
+-		mlx5_free_flow_attr(flow, attr);
+-		free_branch_attr(flow, attr->branch_true);
+-		free_branch_attr(flow, attr->branch_false);
++		mlx5_free_flow_attr_actions(flow, attr);
+ 
+ 		list_del(&attr->list);
+ 		kvfree(attr->parse_attr);
+@@ -4072,78 +3967,83 @@ parse_tc_actions(struct mlx5e_tc_act_parse_state *parse_state,
+ 		 struct flow_action *flow_action)
+ {
+ 	struct netlink_ext_ack *extack = parse_state->extack;
+-	struct mlx5e_tc_flow_action flow_action_reorder;
+ 	struct mlx5e_tc_flow *flow = parse_state->flow;
+ 	struct mlx5e_tc_jump_state jump_state = {};
+ 	struct mlx5_flow_attr *attr = flow->attr;
+ 	enum mlx5_flow_namespace_type ns_type;
+ 	struct mlx5e_priv *priv = flow->priv;
+-	struct flow_action_entry *act, **_act;
++	struct mlx5_flow_attr *prev_attr;
++	struct flow_action_entry *act;
+ 	struct mlx5e_tc_act *tc_act;
+-	int err, i;
+-
+-	flow_action_reorder.num_entries = flow_action->num_entries;
+-	flow_action_reorder.entries = kcalloc(flow_action->num_entries,
+-					      sizeof(flow_action), GFP_KERNEL);
+-	if (!flow_action_reorder.entries)
+-		return -ENOMEM;
+-
+-	mlx5e_tc_act_reorder_flow_actions(flow_action, &flow_action_reorder);
++	int err, i, i_split = 0;
++	bool is_missable;
+ 
+ 	ns_type = mlx5e_get_flow_namespace(flow);
+ 	list_add(&attr->list, &flow->attrs);
+ 
+-	flow_action_for_each(i, _act, &flow_action_reorder) {
++	flow_action_for_each(i, act, flow_action) {
+ 		jump_state.jump_target = false;
+-		act = *_act;
++		is_missable = false;
++		prev_attr = attr;
++
+ 		tc_act = mlx5e_tc_act_get(act->id, ns_type);
+ 		if (!tc_act) {
+ 			NL_SET_ERR_MSG_MOD(extack, "Not implemented offload action");
+ 			err = -EOPNOTSUPP;
+-			goto out_free;
++			goto out_free_post_acts;
+ 		}
+ 
+ 		if (!tc_act->can_offload(parse_state, act, i, attr)) {
+ 			err = -EOPNOTSUPP;
+-			goto out_free;
++			goto out_free_post_acts;
+ 		}
+ 
+ 		err = tc_act->parse_action(parse_state, act, priv, attr);
+ 		if (err)
+-			goto out_free;
++			goto out_free_post_acts;
+ 
+ 		dec_jump_count(act, tc_act, attr, priv, &jump_state);
+ 
+ 		err = parse_branch_ctrl(act, tc_act, flow, attr, &jump_state, extack);
+ 		if (err)
+-			goto out_free;
++			goto out_free_post_acts;
+ 
+ 		parse_state->actions |= attr->action;
+-		if (!tc_act->stats_action)
+-			attr->tc_act_cookies[attr->tc_act_cookies_count++] = act->cookie;
+ 
+ 		/* Split attr for multi table act if not the last act. */
+ 		if (jump_state.jump_target ||
+ 		    (tc_act->is_multi_table_act &&
+ 		    tc_act->is_multi_table_act(priv, act, attr) &&
+-		    i < flow_action_reorder.num_entries - 1)) {
+-			err = mlx5e_tc_act_post_parse(parse_state, flow_action, attr, ns_type);
++		    i < flow_action->num_entries - 1)) {
++			is_missable = tc_act->is_missable ? tc_act->is_missable(act) : false;
++
++			err = mlx5e_tc_act_post_parse(parse_state, flow_action, i_split, i, attr,
++						      ns_type);
+ 			if (err)
+-				goto out_free;
++				goto out_free_post_acts;
+ 
+ 			attr = mlx5e_clone_flow_attr_for_post_act(flow->attr, ns_type);
+ 			if (!attr) {
+ 				err = -ENOMEM;
+-				goto out_free;
++				goto out_free_post_acts;
+ 			}
+ 
++			i_split = i + 1;
+ 			list_add(&attr->list, &flow->attrs);
+ 		}
+-	}
+ 
+-	kfree(flow_action_reorder.entries);
++		if (is_missable) {
++			/* Add counter to prev, and assign act to new (next) attr */
++			prev_attr->action |= MLX5_FLOW_CONTEXT_ACTION_COUNT;
++			flow_flag_set(flow, USE_ACT_STATS);
++
++			attr->tc_act_cookies[attr->tc_act_cookies_count++] = act->cookie;
++		} else if (!tc_act->stats_action) {
++			prev_attr->tc_act_cookies[prev_attr->tc_act_cookies_count++] = act->cookie;
++		}
++	}
+ 
+-	err = mlx5e_tc_act_post_parse(parse_state, flow_action, attr, ns_type);
++	err = mlx5e_tc_act_post_parse(parse_state, flow_action, i_split, i, attr, ns_type);
+ 	if (err)
+ 		goto out_free_post_acts;
+ 
+@@ -4153,8 +4053,6 @@ parse_tc_actions(struct mlx5e_tc_act_parse_state *parse_state,
+ 
+ 	return 0;
+ 
+-out_free:
+-	kfree(flow_action_reorder.entries);
+ out_free_post_acts:
+ 	free_flow_post_acts(flow);
+ 
+@@ -4449,10 +4347,9 @@ mlx5_alloc_flow_attr(enum mlx5_flow_namespace_type type)
+ }
+ 
+ static void
+-mlx5_free_flow_attr(struct mlx5e_tc_flow *flow, struct mlx5_flow_attr *attr)
++mlx5_free_flow_attr_actions(struct mlx5e_tc_flow *flow, struct mlx5_flow_attr *attr)
+ {
+ 	struct mlx5_core_dev *counter_dev = get_flow_counter_dev(flow);
+-	bool vf_tun;
+ 
+ 	if (!attr)
+ 		return;
+@@ -4460,7 +4357,7 @@ mlx5_free_flow_attr(struct mlx5e_tc_flow *flow, struct mlx5_flow_attr *attr)
+ 	if (attr->post_act_handle)
+ 		mlx5e_tc_post_act_del(get_post_action(flow->priv), attr->post_act_handle);
+ 
+-	clean_encap_dests(flow->priv, flow, attr, &vf_tun);
++	mlx5e_tc_tun_encap_dests_unset(flow->priv, flow, attr);
+ 
+ 	if (attr->action & MLX5_FLOW_CONTEXT_ACTION_COUNT)
+ 		mlx5_fc_destroy(counter_dev, attr->counter);
+@@ -4469,6 +4366,9 @@ mlx5_free_flow_attr(struct mlx5e_tc_flow *flow, struct mlx5_flow_attr *attr)
+ 		mlx5e_mod_hdr_dealloc(&attr->parse_attr->mod_hdr_acts);
+ 		mlx5e_tc_detach_mod_hdr(flow->priv, flow, attr);
+ 	}
++
++	free_branch_attr(flow, attr->branch_true);
++	free_branch_attr(flow, attr->branch_false);
+ }
+ 
+ static int
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+index 62327b52f1acf..9058fa8c5b657 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+@@ -913,7 +913,6 @@ static int mlx5_pci_init(struct mlx5_core_dev *dev, struct pci_dev *pdev,
+ 	}
+ 
+ 	mlx5_pci_vsc_init(dev);
+-	dev->caps.embedded_cpu = mlx5_read_embedded_cpu(dev);
+ 	return 0;
+ 
+ err_clr_master:
+@@ -1147,6 +1146,7 @@ static int mlx5_function_setup(struct mlx5_core_dev *dev, bool boot, u64 timeout
+ 		goto err_cmd_cleanup;
+ 	}
+ 
++	dev->caps.embedded_cpu = mlx5_read_embedded_cpu(dev);
+ 	mlx5_cmd_set_state(dev, MLX5_CMDIF_STATE_UP);
+ 
+ 	mlx5_start_health_poll(dev);
+@@ -1790,14 +1790,15 @@ static void remove_one(struct pci_dev *pdev)
+ 	struct devlink *devlink = priv_to_devlink(dev);
+ 
+ 	set_bit(MLX5_BREAK_FW_WAIT, &dev->intf_state);
+-	/* mlx5_drain_fw_reset() is using devlink APIs. Hence, we must drain
+-	 * fw_reset before unregistering the devlink.
++	/* mlx5_drain_fw_reset() and mlx5_drain_health_wq() are using
++	 * devlink notify APIs.
++	 * Hence, we must drain them before unregistering the devlink.
+ 	 */
+ 	mlx5_drain_fw_reset(dev);
++	mlx5_drain_health_wq(dev);
+ 	devlink_unregister(devlink);
+ 	mlx5_sriov_disable(pdev);
+ 	mlx5_crdump_disable(dev);
+-	mlx5_drain_health_wq(dev);
+ 	mlx5_uninit_one(dev);
+ 	mlx5_pci_close(dev);
+ 	mlx5_mdev_uninit(dev);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/mr.c b/drivers/net/ethernet/mellanox/mlx5/core/mr.c
+index 9d735c343a3b8..678f0be813752 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/mr.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/mr.c
+@@ -32,6 +32,7 @@
+ 
+ #include <linux/kernel.h>
+ #include <linux/mlx5/driver.h>
++#include <linux/mlx5/qp.h>
+ #include "mlx5_core.h"
+ 
+ int mlx5_core_create_mkey(struct mlx5_core_dev *dev, u32 *mkey, u32 *in,
+@@ -122,3 +123,23 @@ int mlx5_core_destroy_psv(struct mlx5_core_dev *dev, int psv_num)
+ 	return mlx5_cmd_exec_in(dev, destroy_psv, in);
+ }
+ EXPORT_SYMBOL(mlx5_core_destroy_psv);
++
++__be32 mlx5_core_get_terminate_scatter_list_mkey(struct mlx5_core_dev *dev)
++{
++	u32 out[MLX5_ST_SZ_DW(query_special_contexts_out)] = {};
++	u32 in[MLX5_ST_SZ_DW(query_special_contexts_in)] = {};
++	u32 mkey;
++
++	if (!MLX5_CAP_GEN(dev, terminate_scatter_list_mkey))
++		return MLX5_TERMINATE_SCATTER_LIST_LKEY;
++
++	MLX5_SET(query_special_contexts_in, in, opcode,
++		 MLX5_CMD_OP_QUERY_SPECIAL_CONTEXTS);
++	if (mlx5_cmd_exec_inout(dev, query_special_contexts, in, out))
++		return MLX5_TERMINATE_SCATTER_LIST_LKEY;
++
++	mkey = MLX5_GET(query_special_contexts_out, out,
++			terminate_scatter_list_mkey);
++	return cpu_to_be32(mkey);
++}
++EXPORT_SYMBOL(mlx5_core_get_terminate_scatter_list_mkey);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/sf/dev/driver.c b/drivers/net/ethernet/mellanox/mlx5/core/sf/dev/driver.c
+index a7377619ba6f2..2424cdf9cca99 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/sf/dev/driver.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/sf/dev/driver.c
+@@ -63,6 +63,7 @@ static void mlx5_sf_dev_remove(struct auxiliary_device *adev)
+ 	struct mlx5_sf_dev *sf_dev = container_of(adev, struct mlx5_sf_dev, adev);
+ 	struct devlink *devlink = priv_to_devlink(sf_dev->mdev);
+ 
++	mlx5_drain_health_wq(sf_dev->mdev);
+ 	devlink_unregister(devlink);
+ 	mlx5_uninit_one(sf_dev->mdev);
+ 	iounmap(sf_dev->mdev->iseg);
+diff --git a/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_rx.c b/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_rx.c
+index afa3b92a6905f..0d5a41a2ae010 100644
+--- a/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_rx.c
++++ b/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_rx.c
+@@ -245,12 +245,6 @@ static bool mlxbf_gige_rx_packet(struct mlxbf_gige *priv, int *rx_pkts)
+ 
+ 		skb = priv->rx_skb[rx_pi_rem];
+ 
+-		skb_put(skb, datalen);
+-
+-		skb->ip_summed = CHECKSUM_NONE; /* device did not checksum packet */
+-
+-		skb->protocol = eth_type_trans(skb, netdev);
+-
+ 		/* Alloc another RX SKB for this same index */
+ 		rx_skb = mlxbf_gige_alloc_skb(priv, MLXBF_GIGE_DEFAULT_BUF_SZ,
+ 					      &rx_buf_dma, DMA_FROM_DEVICE);
+@@ -259,6 +253,13 @@ static bool mlxbf_gige_rx_packet(struct mlxbf_gige *priv, int *rx_pkts)
+ 		priv->rx_skb[rx_pi_rem] = rx_skb;
+ 		dma_unmap_single(priv->dev, *rx_wqe_addr,
+ 				 MLXBF_GIGE_DEFAULT_BUF_SZ, DMA_FROM_DEVICE);
++
++		skb_put(skb, datalen);
++
++		skb->ip_summed = CHECKSUM_NONE; /* device did not checksum packet */
++
++		skb->protocol = eth_type_trans(skb, netdev);
++
+ 		*rx_wqe_addr = rx_buf_dma;
+ 	} else if (rx_cqe & MLXBF_GIGE_RX_CQE_PKT_STATUS_MAC_ERR) {
+ 		priv->stats.rx_mac_errors++;
+diff --git a/drivers/net/ethernet/renesas/rswitch.c b/drivers/net/ethernet/renesas/rswitch.c
+index c4f93d24c6a42..7855d9ef81eb1 100644
+--- a/drivers/net/ethernet/renesas/rswitch.c
++++ b/drivers/net/ethernet/renesas/rswitch.c
+@@ -1487,7 +1487,7 @@ static netdev_tx_t rswitch_start_xmit(struct sk_buff *skb, struct net_device *nd
+ 
+ 	if (rswitch_get_num_cur_queues(gq) >= gq->ring_size - 1) {
+ 		netif_stop_subqueue(ndev, 0);
+-		return ret;
++		return NETDEV_TX_BUSY;
+ 	}
+ 
+ 	if (skb_put_padto(skb, ETH_ZLEN))
+diff --git a/drivers/net/ethernet/sfc/tc.c b/drivers/net/ethernet/sfc/tc.c
+index deeaab9ee761d..217f3876af722 100644
+--- a/drivers/net/ethernet/sfc/tc.c
++++ b/drivers/net/ethernet/sfc/tc.c
+@@ -379,9 +379,9 @@ static int efx_tc_flower_replace(struct efx_nic *efx,
+ 	if (old) {
+ 		netif_dbg(efx, drv, efx->net_dev,
+ 			  "Already offloaded rule (cookie %lx)\n", tc->cookie);
+-		rc = -EEXIST;
+ 		NL_SET_ERR_MSG_MOD(extack, "Rule already offloaded");
+-		goto release;
++		kfree(rule);
++		return -EEXIST;
+ 	}
+ 
+ 	/* Parse actions */
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index f9cd063f1fe30..71f8f78ce0090 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -7176,8 +7176,7 @@ int stmmac_dvr_probe(struct device *device,
+ 	ndev->hw_features = NETIF_F_SG | NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM |
+ 			    NETIF_F_RXCSUM;
+ 	ndev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT |
+-			     NETDEV_XDP_ACT_XSK_ZEROCOPY |
+-			     NETDEV_XDP_ACT_NDO_XMIT;
++			     NETDEV_XDP_ACT_XSK_ZEROCOPY;
+ 
+ 	ret = stmmac_tc_init(priv, priv);
+ 	if (!ret) {
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_xdp.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_xdp.c
+index 9d4d8c3dad0a3..aa6f16d3df649 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_xdp.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_xdp.c
+@@ -117,6 +117,9 @@ int stmmac_xdp_set_prog(struct stmmac_priv *priv, struct bpf_prog *prog,
+ 		return -EOPNOTSUPP;
+ 	}
+ 
++	if (!prog)
++		xdp_features_clear_redirect_target(dev);
++
+ 	need_update = !!priv->xdp_prog != !!prog;
+ 	if (if_running && need_update)
+ 		stmmac_xdp_release(dev);
+@@ -131,5 +134,8 @@ int stmmac_xdp_set_prog(struct stmmac_priv *priv, struct bpf_prog *prog,
+ 	if (if_running && need_update)
+ 		stmmac_xdp_open(dev);
+ 
++	if (prog)
++		xdp_features_set_redirect_target(dev, false);
++
+ 	return 0;
+ }
+diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c
+index 2ee80ed140b72..afa1d56d9095c 100644
+--- a/drivers/net/ipa/ipa_endpoint.c
++++ b/drivers/net/ipa/ipa_endpoint.c
+@@ -119,7 +119,7 @@ enum ipa_status_field_id {
+ };
+ 
+ /* Size in bytes of an IPA packet status structure */
+-#define IPA_STATUS_SIZE			sizeof(__le32[4])
++#define IPA_STATUS_SIZE			sizeof(__le32[8])
+ 
+ /* IPA status structure decoder; looks up field values for a structure */
+ static u32 ipa_status_extract(struct ipa *ipa, const void *data,
+diff --git a/drivers/net/phy/mxl-gpy.c b/drivers/net/phy/mxl-gpy.c
+index e5972b4ef6e8f..4041ebd7ad9b3 100644
+--- a/drivers/net/phy/mxl-gpy.c
++++ b/drivers/net/phy/mxl-gpy.c
+@@ -267,13 +267,6 @@ static int gpy_config_init(struct phy_device *phydev)
+ 	return ret < 0 ? ret : 0;
+ }
+ 
+-static bool gpy_has_broken_mdint(struct phy_device *phydev)
+-{
+-	/* At least these PHYs are known to have broken interrupt handling */
+-	return phydev->drv->phy_id == PHY_ID_GPY215B ||
+-	       phydev->drv->phy_id == PHY_ID_GPY215C;
+-}
+-
+ static int gpy_probe(struct phy_device *phydev)
+ {
+ 	struct device *dev = &phydev->mdio.dev;
+@@ -293,8 +286,7 @@ static int gpy_probe(struct phy_device *phydev)
+ 	phydev->priv = priv;
+ 	mutex_init(&priv->mbox_lock);
+ 
+-	if (gpy_has_broken_mdint(phydev) &&
+-	    !device_property_present(dev, "maxlinear,use-broken-interrupts"))
++	if (!device_property_present(dev, "maxlinear,use-broken-interrupts"))
+ 		phydev->dev_flags |= PHY_F_NO_IRQ;
+ 
+ 	fw_version = phy_read(phydev, PHY_FWV);
+@@ -652,11 +644,9 @@ static irqreturn_t gpy_handle_interrupt(struct phy_device *phydev)
+ 	 * frame. Therefore, polling is the best we can do and won't do any more
+ 	 * harm.
+ 	 * It was observed that this bug happens on link state and link speed
+-	 * changes on a GPY215B and GYP215C independent of the firmware version
+-	 * (which doesn't mean that this list is exhaustive).
++	 * changes independent of the firmware version.
+ 	 */
+-	if (gpy_has_broken_mdint(phydev) &&
+-	    (reg & (PHY_IMASK_LSTC | PHY_IMASK_LSPC))) {
++	if (reg & (PHY_IMASK_LSTC | PHY_IMASK_LSPC)) {
+ 		reg = gpy_mbox_read(phydev, REG_GPIO0_OUT);
+ 		if (reg < 0) {
+ 			phy_error(phydev);
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index 571e37e67f9ce..f1865d047971b 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -1325,7 +1325,7 @@ static const struct usb_device_id products[] = {
+ 	{QMI_FIXED_INTF(0x2001, 0x7e3d, 4)},	/* D-Link DWM-222 A2 */
+ 	{QMI_FIXED_INTF(0x2020, 0x2031, 4)},	/* Olicard 600 */
+ 	{QMI_FIXED_INTF(0x2020, 0x2033, 4)},	/* BroadMobi BM806U */
+-	{QMI_FIXED_INTF(0x2020, 0x2060, 4)},	/* BroadMobi BM818 */
++	{QMI_QUIRK_SET_DTR(0x2020, 0x2060, 4)},	/* BroadMobi BM818 */
+ 	{QMI_FIXED_INTF(0x0f3d, 0x68a2, 8)},    /* Sierra Wireless MC7700 */
+ 	{QMI_FIXED_INTF(0x114f, 0x68a2, 8)},    /* Sierra Wireless MC7750 */
+ 	{QMI_FIXED_INTF(0x1199, 0x68a2, 8)},	/* Sierra Wireless MC7710 in QMI mode */
+diff --git a/drivers/net/wireless/ath/ath10k/qmi.c b/drivers/net/wireless/ath/ath10k/qmi.c
+index 90f457b8e1feb..038c5903c0dc1 100644
+--- a/drivers/net/wireless/ath/ath10k/qmi.c
++++ b/drivers/net/wireless/ath/ath10k/qmi.c
+@@ -33,7 +33,7 @@ static int ath10k_qmi_map_msa_permission(struct ath10k_qmi *qmi,
+ {
+ 	struct qcom_scm_vmperm dst_perms[3];
+ 	struct ath10k *ar = qmi->ar;
+-	unsigned int src_perms;
++	u64 src_perms;
+ 	u32 perm_count;
+ 	int ret;
+ 
+@@ -65,7 +65,7 @@ static int ath10k_qmi_unmap_msa_permission(struct ath10k_qmi *qmi,
+ {
+ 	struct qcom_scm_vmperm dst_perms;
+ 	struct ath10k *ar = qmi->ar;
+-	unsigned int src_perms;
++	u64 src_perms;
+ 	int ret;
+ 
+ 	src_perms = BIT(QCOM_SCM_VMID_MSS_MSA) | BIT(QCOM_SCM_VMID_WLAN);
+diff --git a/drivers/net/wireless/broadcom/b43/b43.h b/drivers/net/wireless/broadcom/b43/b43.h
+index 9fc7c088a539e..67b4bac048e58 100644
+--- a/drivers/net/wireless/broadcom/b43/b43.h
++++ b/drivers/net/wireless/broadcom/b43/b43.h
+@@ -651,7 +651,7 @@ struct b43_iv {
+ 	union {
+ 		__be16 d16;
+ 		__be32 d32;
+-	} data __packed;
++	} __packed data;
+ } __packed;
+ 
+ 
+diff --git a/drivers/net/wireless/broadcom/b43legacy/b43legacy.h b/drivers/net/wireless/broadcom/b43legacy/b43legacy.h
+index 6b0cec467938f..f49365d14619f 100644
+--- a/drivers/net/wireless/broadcom/b43legacy/b43legacy.h
++++ b/drivers/net/wireless/broadcom/b43legacy/b43legacy.h
+@@ -379,7 +379,7 @@ struct b43legacy_iv {
+ 	union {
+ 		__be16 d16;
+ 		__be32 d32;
+-	} data __packed;
++	} __packed data;
+ } __packed;
+ 
+ #define B43legacy_PHYMODE(phytype)	(1 << (phytype))
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/rs.c b/drivers/net/wireless/intel/iwlwifi/mvm/rs.c
+index 0b50b816684a0..2be6801d48aca 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/rs.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/rs.c
+@@ -2692,6 +2692,8 @@ static void rs_drv_get_rate(void *mvm_r, struct ieee80211_sta *sta,
+ 		return;
+ 
+ 	lq_sta = mvm_sta;
++
++	spin_lock(&lq_sta->pers.lock);
+ 	iwl_mvm_hwrate_to_tx_rate_v1(lq_sta->last_rate_n_flags,
+ 				     info->band, &info->control.rates[0]);
+ 	info->control.rates[0].count = 1;
+@@ -2706,6 +2708,7 @@ static void rs_drv_get_rate(void *mvm_r, struct ieee80211_sta *sta,
+ 		iwl_mvm_hwrate_to_tx_rate_v1(last_ucode_rate, info->band,
+ 					     &txrc->reported_rate);
+ 	}
++	spin_unlock(&lq_sta->pers.lock);
+ }
+ 
+ static void *rs_drv_alloc_sta(void *mvm_rate, struct ieee80211_sta *sta,
+diff --git a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu.h b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu.h
+index c8cee4a247551..4088aaa1c618d 100644
+--- a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu.h
++++ b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu.h
+@@ -1518,6 +1518,7 @@ struct rtl8xxxu_priv {
+ 	u32 rege9c;
+ 	u32 regeb4;
+ 	u32 regebc;
++	u32 regrcr;
+ 	int next_mbox;
+ 	int nr_out_eps;
+ 
+diff --git a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
+index 54ca6f2ced3f3..74ff5130971e2 100644
+--- a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
++++ b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
+@@ -4049,6 +4049,7 @@ static int rtl8xxxu_init_device(struct ieee80211_hw *hw)
+ 		RCR_ACCEPT_MGMT_FRAME | RCR_HTC_LOC_CTRL |
+ 		RCR_APPEND_PHYSTAT | RCR_APPEND_ICV | RCR_APPEND_MIC;
+ 	rtl8xxxu_write32(priv, REG_RCR, val32);
++	priv->regrcr = val32;
+ 
+ 	if (priv->rtl_chip == RTL8188F) {
+ 		/* Accept all data frames */
+@@ -6269,7 +6270,7 @@ static void rtl8xxxu_configure_filter(struct ieee80211_hw *hw,
+ 				      unsigned int *total_flags, u64 multicast)
+ {
+ 	struct rtl8xxxu_priv *priv = hw->priv;
+-	u32 rcr = rtl8xxxu_read32(priv, REG_RCR);
++	u32 rcr = priv->regrcr;
+ 
+ 	dev_dbg(&priv->udev->dev, "%s: changed_flags %08x, total_flags %08x\n",
+ 		__func__, changed_flags, *total_flags);
+@@ -6315,6 +6316,7 @@ static void rtl8xxxu_configure_filter(struct ieee80211_hw *hw,
+ 	 */
+ 
+ 	rtl8xxxu_write32(priv, REG_RCR, rcr);
++	priv->regrcr = rcr;
+ 
+ 	*total_flags &= (FIF_ALLMULTI | FIF_FCSFAIL | FIF_BCN_PRBRESP_PROMISC |
+ 			 FIF_CONTROL | FIF_OTHER_BSS | FIF_PSPOLL |
+diff --git a/drivers/net/wwan/t7xx/t7xx_pci.c b/drivers/net/wwan/t7xx/t7xx_pci.c
+index 226fc1703e90f..91256e005b846 100644
+--- a/drivers/net/wwan/t7xx/t7xx_pci.c
++++ b/drivers/net/wwan/t7xx/t7xx_pci.c
+@@ -45,6 +45,7 @@
+ #define T7XX_PCI_IREG_BASE		0
+ #define T7XX_PCI_EREG_BASE		2
+ 
++#define T7XX_INIT_TIMEOUT		20
+ #define PM_SLEEP_DIS_TIMEOUT_MS		20
+ #define PM_ACK_TIMEOUT_MS		1500
+ #define PM_AUTOSUSPEND_MS		20000
+@@ -96,6 +97,7 @@ static int t7xx_pci_pm_init(struct t7xx_pci_dev *t7xx_dev)
+ 	spin_lock_init(&t7xx_dev->md_pm_lock);
+ 	init_completion(&t7xx_dev->sleep_lock_acquire);
+ 	init_completion(&t7xx_dev->pm_sr_ack);
++	init_completion(&t7xx_dev->init_done);
+ 	atomic_set(&t7xx_dev->md_pm_state, MTK_PM_INIT);
+ 
+ 	device_init_wakeup(&pdev->dev, true);
+@@ -124,6 +126,7 @@ void t7xx_pci_pm_init_late(struct t7xx_pci_dev *t7xx_dev)
+ 	pm_runtime_mark_last_busy(&t7xx_dev->pdev->dev);
+ 	pm_runtime_allow(&t7xx_dev->pdev->dev);
+ 	pm_runtime_put_noidle(&t7xx_dev->pdev->dev);
++	complete_all(&t7xx_dev->init_done);
+ }
+ 
+ static int t7xx_pci_pm_reinit(struct t7xx_pci_dev *t7xx_dev)
+@@ -529,6 +532,20 @@ static void t7xx_pci_shutdown(struct pci_dev *pdev)
+ 	__t7xx_pci_pm_suspend(pdev);
+ }
+ 
++static int t7xx_pci_pm_prepare(struct device *dev)
++{
++	struct pci_dev *pdev = to_pci_dev(dev);
++	struct t7xx_pci_dev *t7xx_dev;
++
++	t7xx_dev = pci_get_drvdata(pdev);
++	if (!wait_for_completion_timeout(&t7xx_dev->init_done, T7XX_INIT_TIMEOUT * HZ)) {
++		dev_warn(dev, "Not ready for system sleep.\n");
++		return -ETIMEDOUT;
++	}
++
++	return 0;
++}
++
+ static int t7xx_pci_pm_suspend(struct device *dev)
+ {
+ 	return __t7xx_pci_pm_suspend(to_pci_dev(dev));
+@@ -555,6 +572,7 @@ static int t7xx_pci_pm_runtime_resume(struct device *dev)
+ }
+ 
+ static const struct dev_pm_ops t7xx_pci_pm_ops = {
++	.prepare = t7xx_pci_pm_prepare,
+ 	.suspend = t7xx_pci_pm_suspend,
+ 	.resume = t7xx_pci_pm_resume,
+ 	.resume_noirq = t7xx_pci_pm_resume_noirq,
+diff --git a/drivers/net/wwan/t7xx/t7xx_pci.h b/drivers/net/wwan/t7xx/t7xx_pci.h
+index 112efa534eace..f08f1ab744691 100644
+--- a/drivers/net/wwan/t7xx/t7xx_pci.h
++++ b/drivers/net/wwan/t7xx/t7xx_pci.h
+@@ -69,6 +69,7 @@ struct t7xx_pci_dev {
+ 	struct t7xx_modem	*md;
+ 	struct t7xx_ccmni_ctrl	*ccmni_ctlb;
+ 	bool			rgu_pci_irq_en;
++	struct completion	init_done;
+ 
+ 	/* Low Power Items */
+ 	struct list_head	md_pm_entities;
+diff --git a/drivers/nvme/host/constants.c b/drivers/nvme/host/constants.c
+index bc523ca022548..5e4f8848dce08 100644
+--- a/drivers/nvme/host/constants.c
++++ b/drivers/nvme/host/constants.c
+@@ -21,7 +21,7 @@ static const char * const nvme_ops[] = {
+ 	[nvme_cmd_resv_release] = "Reservation Release",
+ 	[nvme_cmd_zone_mgmt_send] = "Zone Management Send",
+ 	[nvme_cmd_zone_mgmt_recv] = "Zone Management Receive",
+-	[nvme_cmd_zone_append] = "Zone Management Append",
++	[nvme_cmd_zone_append] = "Zone Append",
+ };
+ 
+ static const char * const nvme_admin_ops[] = {
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index bdf1601219fc4..c015393beeee8 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -3585,6 +3585,9 @@ static ssize_t nvme_sysfs_delete(struct device *dev,
+ {
+ 	struct nvme_ctrl *ctrl = dev_get_drvdata(dev);
+ 
++	if (!test_bit(NVME_CTRL_STARTED_ONCE, &ctrl->flags))
++		return -EBUSY;
++
+ 	if (device_remove_file_self(dev, attr))
+ 		nvme_delete_ctrl_sync(ctrl);
+ 	return count;
+@@ -5045,7 +5048,7 @@ void nvme_start_ctrl(struct nvme_ctrl *ctrl)
+ 	 * that were missed. We identify persistent discovery controllers by
+ 	 * checking that they started once before, hence are reconnecting back.
+ 	 */
+-	if (test_and_set_bit(NVME_CTRL_STARTED_ONCE, &ctrl->flags) &&
++	if (test_bit(NVME_CTRL_STARTED_ONCE, &ctrl->flags) &&
+ 	    nvme_discovery_ctrl(ctrl))
+ 		nvme_change_uevent(ctrl, "NVME_EVENT=rediscover");
+ 
+@@ -5056,6 +5059,7 @@ void nvme_start_ctrl(struct nvme_ctrl *ctrl)
+ 	}
+ 
+ 	nvme_change_uevent(ctrl, "NVME_EVENT=connected");
++	set_bit(NVME_CTRL_STARTED_ONCE, &ctrl->flags);
+ }
+ EXPORT_SYMBOL_GPL(nvme_start_ctrl);
+ 
+diff --git a/drivers/nvme/host/hwmon.c b/drivers/nvme/host/hwmon.c
+index 9e6e56c20ec99..316f3e4ca7cc6 100644
+--- a/drivers/nvme/host/hwmon.c
++++ b/drivers/nvme/host/hwmon.c
+@@ -163,7 +163,9 @@ static umode_t nvme_hwmon_is_visible(const void *_data,
+ 	case hwmon_temp_max:
+ 	case hwmon_temp_min:
+ 		if ((!channel && data->ctrl->wctemp) ||
+-		    (channel && data->log->temp_sensor[channel - 1])) {
++		    (channel && data->log->temp_sensor[channel - 1] &&
++		     !(data->ctrl->quirks &
++		       NVME_QUIRK_NO_SECONDARY_TEMP_THRESH))) {
+ 			if (data->ctrl->quirks &
+ 			    NVME_QUIRK_NO_TEMP_THRESH_CHANGE)
+ 				return 0444;
+diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
+index 9171452e2f6d4..2bc159a318ff0 100644
+--- a/drivers/nvme/host/multipath.c
++++ b/drivers/nvme/host/multipath.c
+@@ -884,7 +884,6 @@ void nvme_mpath_remove_disk(struct nvme_ns_head *head)
+ {
+ 	if (!head->disk)
+ 		return;
+-	blk_mark_disk_dead(head->disk);
+ 	/* make sure all pending bios are cleaned up */
+ 	kblockd_schedule_work(&head->requeue_work);
+ 	flush_work(&head->requeue_work);
+diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
+index bf46f122e9e1e..a2d4f59e0535a 100644
+--- a/drivers/nvme/host/nvme.h
++++ b/drivers/nvme/host/nvme.h
+@@ -149,6 +149,11 @@ enum nvme_quirks {
+ 	 * Reports garbage in the namespace identifiers (eui64, nguid, uuid).
+ 	 */
+ 	NVME_QUIRK_BOGUS_NID			= (1 << 18),
++
++	/*
++	 * No temperature thresholds for channels other than 0 (Composite).
++	 */
++	NVME_QUIRK_NO_SECONDARY_TEMP_THRESH	= (1 << 19),
+ };
+ 
+ /*
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index cd7873de31215..60f51155a6d20 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -2960,7 +2960,7 @@ static struct nvme_dev *nvme_pci_alloc_dev(struct pci_dev *pdev,
+ 	 * over a single page.
+ 	 */
+ 	dev->ctrl.max_hw_sectors = min_t(u32,
+-		NVME_MAX_KB_SZ << 1, dma_max_mapping_size(&pdev->dev) >> 9);
++		NVME_MAX_KB_SZ << 1, dma_opt_mapping_size(&pdev->dev) >> 9);
+ 	dev->ctrl.max_segments = NVME_MAX_SEGS;
+ 
+ 	/*
+@@ -3406,6 +3406,8 @@ static const struct pci_device_id nvme_id_table[] = {
+ 		.driver_data = NVME_QUIRK_NO_DEEPEST_PS, },
+ 	{ PCI_DEVICE(0x2646, 0x2263),   /* KINGSTON A2000 NVMe SSD  */
+ 		.driver_data = NVME_QUIRK_NO_DEEPEST_PS, },
++	{ PCI_DEVICE(0x2646, 0x5013),   /* Kingston KC3000, Kingston FURY Renegade */
++		.driver_data = NVME_QUIRK_NO_SECONDARY_TEMP_THRESH, },
+ 	{ PCI_DEVICE(0x2646, 0x5018),   /* KINGSTON OM8SFP4xxxxP OS21012 NVMe SSD */
+ 		.driver_data = NVME_QUIRK_DISABLE_WRITE_ZEROES, },
+ 	{ PCI_DEVICE(0x2646, 0x5016),   /* KINGSTON OM3PGP4xxxxP OS21011 NVMe SSD */
+@@ -3445,6 +3447,10 @@ static const struct pci_device_id nvme_id_table[] = {
+ 				NVME_QUIRK_IGNORE_DEV_SUBNQN, },
+ 	{ PCI_DEVICE(0x10ec, 0x5763), /* TEAMGROUP T-FORCE CARDEA ZERO Z330 SSD */
+ 		.driver_data = NVME_QUIRK_BOGUS_NID, },
++	{ PCI_DEVICE(0x1e4b, 0x1602), /* HS-SSD-FUTURE 2048G  */
++		.driver_data = NVME_QUIRK_BOGUS_NID, },
++	{ PCI_DEVICE(0x10ec, 0x5765), /* TEAMGROUP MP33 2TB SSD */
++		.driver_data = NVME_QUIRK_BOGUS_NID, },
+ 	{ PCI_DEVICE(PCI_VENDOR_ID_AMAZON, 0x0061),
+ 		.driver_data = NVME_QUIRK_DMA_ADDRESS_BITS_48, },
+ 	{ PCI_DEVICE(PCI_VENDOR_ID_AMAZON, 0x0065),
+diff --git a/drivers/phy/amlogic/phy-meson-g12a-mipi-dphy-analog.c b/drivers/phy/amlogic/phy-meson-g12a-mipi-dphy-analog.c
+index c14089fa7db49..cabdddbbabfd7 100644
+--- a/drivers/phy/amlogic/phy-meson-g12a-mipi-dphy-analog.c
++++ b/drivers/phy/amlogic/phy-meson-g12a-mipi-dphy-analog.c
+@@ -70,7 +70,7 @@ static int phy_g12a_mipi_dphy_analog_power_on(struct phy *phy)
+ 		     HHI_MIPI_CNTL1_BANDGAP);
+ 
+ 	regmap_write(priv->regmap, HHI_MIPI_CNTL2,
+-		     FIELD_PREP(HHI_MIPI_CNTL2_DIF_TX_CTL0, 0x459) |
++		     FIELD_PREP(HHI_MIPI_CNTL2_DIF_TX_CTL0, 0x45a) |
+ 		     FIELD_PREP(HHI_MIPI_CNTL2_DIF_TX_CTL1, 0x2680));
+ 
+ 	reg = DSI_LANE_CLK;
+diff --git a/drivers/phy/qualcomm/phy-qcom-qmp-combo.c b/drivers/phy/qualcomm/phy-qcom-qmp-combo.c
+index c1483e157af4a..ae412d64b4265 100644
+--- a/drivers/phy/qualcomm/phy-qcom-qmp-combo.c
++++ b/drivers/phy/qualcomm/phy-qcom-qmp-combo.c
+@@ -2487,7 +2487,7 @@ static int qmp_combo_com_init(struct qmp_combo *qmp)
+ 	ret = regulator_bulk_enable(cfg->num_vregs, qmp->vregs);
+ 	if (ret) {
+ 		dev_err(qmp->dev, "failed to enable regulators, err=%d\n", ret);
+-		goto err_unlock;
++		goto err_decrement_count;
+ 	}
+ 
+ 	ret = reset_control_bulk_assert(cfg->num_resets, qmp->resets);
+@@ -2537,7 +2537,8 @@ err_assert_reset:
+ 	reset_control_bulk_assert(cfg->num_resets, qmp->resets);
+ err_disable_regulators:
+ 	regulator_bulk_disable(cfg->num_vregs, qmp->vregs);
+-err_unlock:
++err_decrement_count:
++	qmp->init_count--;
+ 	mutex_unlock(&qmp->phy_mutex);
+ 
+ 	return ret;
+diff --git a/drivers/phy/qualcomm/phy-qcom-qmp-pcie-msm8996.c b/drivers/phy/qualcomm/phy-qcom-qmp-pcie-msm8996.c
+index 09824be088c96..0c603bc06e099 100644
+--- a/drivers/phy/qualcomm/phy-qcom-qmp-pcie-msm8996.c
++++ b/drivers/phy/qualcomm/phy-qcom-qmp-pcie-msm8996.c
+@@ -379,7 +379,7 @@ static int qmp_pcie_msm8996_com_init(struct qmp_phy *qphy)
+ 	ret = regulator_bulk_enable(cfg->num_vregs, qmp->vregs);
+ 	if (ret) {
+ 		dev_err(qmp->dev, "failed to enable regulators, err=%d\n", ret);
+-		goto err_unlock;
++		goto err_decrement_count;
+ 	}
+ 
+ 	ret = reset_control_bulk_assert(cfg->num_resets, qmp->resets);
+@@ -409,7 +409,8 @@ err_assert_reset:
+ 	reset_control_bulk_assert(cfg->num_resets, qmp->resets);
+ err_disable_regulators:
+ 	regulator_bulk_disable(cfg->num_vregs, qmp->vregs);
+-err_unlock:
++err_decrement_count:
++	qmp->init_count--;
+ 	mutex_unlock(&qmp->phy_mutex);
+ 
+ 	return ret;
+diff --git a/drivers/platform/mellanox/mlxbf-tmfifo.c b/drivers/platform/mellanox/mlxbf-tmfifo.c
+index 91a077c35b8b8..a79318e90a139 100644
+--- a/drivers/platform/mellanox/mlxbf-tmfifo.c
++++ b/drivers/platform/mellanox/mlxbf-tmfifo.c
+@@ -784,7 +784,7 @@ static void mlxbf_tmfifo_rxtx(struct mlxbf_tmfifo_vring *vring, bool is_rx)
+ 	fifo = vring->fifo;
+ 
+ 	/* Return if vdev is not ready. */
+-	if (!fifo->vdev[devid])
++	if (!fifo || !fifo->vdev[devid])
+ 		return;
+ 
+ 	/* Return if another vring is running. */
+@@ -980,9 +980,13 @@ static int mlxbf_tmfifo_virtio_find_vqs(struct virtio_device *vdev,
+ 
+ 		vq->num_max = vring->num;
+ 
++		vq->priv = vring;
++
++		/* Make vq update visible before using it. */
++		virtio_mb(false);
++
+ 		vqs[i] = vq;
+ 		vring->vq = vq;
+-		vq->priv = vring;
+ 	}
+ 
+ 	return 0;
+@@ -1302,6 +1306,9 @@ static int mlxbf_tmfifo_probe(struct platform_device *pdev)
+ 
+ 	mod_timer(&fifo->timer, jiffies + MLXBF_TMFIFO_TIMER_INTERVAL);
+ 
++	/* Make all updates visible before setting the 'is_ready' flag. */
++	virtio_mb(false);
++
+ 	fifo->is_ready = true;
+ 	return 0;
+ 
+diff --git a/drivers/platform/x86/intel_scu_pcidrv.c b/drivers/platform/x86/intel_scu_pcidrv.c
+index 80abc708e4f2f..d904fad499aa5 100644
+--- a/drivers/platform/x86/intel_scu_pcidrv.c
++++ b/drivers/platform/x86/intel_scu_pcidrv.c
+@@ -34,6 +34,7 @@ static int intel_scu_pci_probe(struct pci_dev *pdev,
+ 
+ static const struct pci_device_id pci_ids[] = {
+ 	{ PCI_VDEVICE(INTEL, 0x080e) },
++	{ PCI_VDEVICE(INTEL, 0x082a) },
+ 	{ PCI_VDEVICE(INTEL, 0x08ea) },
+ 	{ PCI_VDEVICE(INTEL, 0x0a94) },
+ 	{ PCI_VDEVICE(INTEL, 0x11a0) },
+diff --git a/drivers/remoteproc/qcom_q6v5_mss.c b/drivers/remoteproc/qcom_q6v5_mss.c
+index ab053084f7a22..1ba711bc01000 100644
+--- a/drivers/remoteproc/qcom_q6v5_mss.c
++++ b/drivers/remoteproc/qcom_q6v5_mss.c
+@@ -235,8 +235,8 @@ struct q6v5 {
+ 	bool has_qaccept_regs;
+ 	bool has_ext_cntl_regs;
+ 	bool has_vq6;
+-	int mpss_perm;
+-	int mba_perm;
++	u64 mpss_perm;
++	u64 mba_perm;
+ 	const char *hexagon_mdt_image;
+ 	int version;
+ };
+@@ -414,7 +414,7 @@ static void q6v5_pds_disable(struct q6v5 *qproc, struct device **pds,
+ 	}
+ }
+ 
+-static int q6v5_xfer_mem_ownership(struct q6v5 *qproc, int *current_perm,
++static int q6v5_xfer_mem_ownership(struct q6v5 *qproc, u64 *current_perm,
+ 				   bool local, bool remote, phys_addr_t addr,
+ 				   size_t size)
+ {
+@@ -967,7 +967,7 @@ static int q6v5_mpss_init_image(struct q6v5 *qproc, const struct firmware *fw,
+ 	unsigned long dma_attrs = DMA_ATTR_FORCE_CONTIGUOUS;
+ 	dma_addr_t phys;
+ 	void *metadata;
+-	int mdata_perm;
++	u64 mdata_perm;
+ 	int xferop_ret;
+ 	size_t size;
+ 	void *ptr;
+diff --git a/drivers/remoteproc/qcom_q6v5_pas.c b/drivers/remoteproc/qcom_q6v5_pas.c
+index 0871108fb4dc5..c99a205426851 100644
+--- a/drivers/remoteproc/qcom_q6v5_pas.c
++++ b/drivers/remoteproc/qcom_q6v5_pas.c
+@@ -94,7 +94,7 @@ struct qcom_adsp {
+ 	size_t region_assign_size;
+ 
+ 	int region_assign_idx;
+-	int region_assign_perms;
++	u64 region_assign_perms;
+ 
+ 	struct qcom_rproc_glink glink_subdev;
+ 	struct qcom_rproc_subdev smd_subdev;
+diff --git a/drivers/s390/crypto/pkey_api.c b/drivers/s390/crypto/pkey_api.c
+index 5a05d1cdfec20..a8def50c149bd 100644
+--- a/drivers/s390/crypto/pkey_api.c
++++ b/drivers/s390/crypto/pkey_api.c
+@@ -1293,6 +1293,7 @@ static long pkey_unlocked_ioctl(struct file *filp, unsigned int cmd,
+ 			return PTR_ERR(kkey);
+ 		rc = pkey_keyblob2pkey(kkey, ktp.keylen, &ktp.protkey);
+ 		DEBUG_DBG("%s pkey_keyblob2pkey()=%d\n", __func__, rc);
++		memzero_explicit(kkey, ktp.keylen);
+ 		kfree(kkey);
+ 		if (rc)
+ 			break;
+@@ -1426,6 +1427,7 @@ static long pkey_unlocked_ioctl(struct file *filp, unsigned int cmd,
+ 					kkey, ktp.keylen, &ktp.protkey);
+ 		DEBUG_DBG("%s pkey_keyblob2pkey2()=%d\n", __func__, rc);
+ 		kfree(apqns);
++		memzero_explicit(kkey, ktp.keylen);
+ 		kfree(kkey);
+ 		if (rc)
+ 			break;
+@@ -1552,6 +1554,7 @@ static long pkey_unlocked_ioctl(struct file *filp, unsigned int cmd,
+ 					protkey, &protkeylen);
+ 		DEBUG_DBG("%s pkey_keyblob2pkey3()=%d\n", __func__, rc);
+ 		kfree(apqns);
++		memzero_explicit(kkey, ktp.keylen);
+ 		kfree(kkey);
+ 		if (rc) {
+ 			kfree(protkey);
+diff --git a/drivers/scsi/qla2xxx/qla_def.h b/drivers/scsi/qla2xxx/qla_def.h
+index ec0e987b71fa5..807ae5ede44c6 100644
+--- a/drivers/scsi/qla2xxx/qla_def.h
++++ b/drivers/scsi/qla2xxx/qla_def.h
+@@ -3797,6 +3797,7 @@ struct qla_qpair {
+ 	uint64_t retry_term_jiff;
+ 	struct qla_tgt_counters tgt_counters;
+ 	uint16_t cpuid;
++	bool cpu_mapped;
+ 	struct qla_fw_resources fwres ____cacheline_aligned;
+ 	struct  qla_buf_pool buf_pool;
+ 	u32	cmd_cnt;
+diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
+index ec0423ec66817..1a955c3ff3d6c 100644
+--- a/drivers/scsi/qla2xxx/qla_init.c
++++ b/drivers/scsi/qla2xxx/qla_init.c
+@@ -9426,6 +9426,9 @@ struct qla_qpair *qla2xxx_create_qpair(struct scsi_qla_host *vha, int qos,
+ 		qpair->rsp->req = qpair->req;
+ 		qpair->rsp->qpair = qpair;
+ 
++		if (!qpair->cpu_mapped)
++			qla_cpu_update(qpair, raw_smp_processor_id());
++
+ 		if (IS_T10_PI_CAPABLE(ha) && ql2xenabledif) {
+ 			if (ha->fw_attributes & BIT_4)
+ 				qpair->difdix_supported = 1;
+diff --git a/drivers/scsi/qla2xxx/qla_inline.h b/drivers/scsi/qla2xxx/qla_inline.h
+index cce6e425c1214..7b42558a8839a 100644
+--- a/drivers/scsi/qla2xxx/qla_inline.h
++++ b/drivers/scsi/qla2xxx/qla_inline.h
+@@ -539,11 +539,14 @@ qla_mapq_init_qp_cpu_map(struct qla_hw_data *ha,
+ 	if (!ha->qp_cpu_map)
+ 		return;
+ 	mask = pci_irq_get_affinity(ha->pdev, msix->vector_base0);
++	if (!mask)
++		return;
+ 	qpair->cpuid = cpumask_first(mask);
+ 	for_each_cpu(cpu, mask) {
+ 		ha->qp_cpu_map[cpu] = qpair;
+ 	}
+ 	msix->cpuid = qpair->cpuid;
++	qpair->cpu_mapped = true;
+ }
+ 
+ static inline void
+diff --git a/drivers/scsi/qla2xxx/qla_isr.c b/drivers/scsi/qla2xxx/qla_isr.c
+index 71feda2cdb630..245e3a5d81fd3 100644
+--- a/drivers/scsi/qla2xxx/qla_isr.c
++++ b/drivers/scsi/qla2xxx/qla_isr.c
+@@ -3770,6 +3770,9 @@ void qla24xx_process_response_queue(struct scsi_qla_host *vha,
+ 
+ 	if (rsp->qpair->cpuid != smp_processor_id() || !rsp->qpair->rcv_intr) {
+ 		rsp->qpair->rcv_intr = 1;
++
++		if (!rsp->qpair->cpu_mapped)
++			qla_cpu_update(rsp->qpair, raw_smp_processor_id());
+ 	}
+ 
+ #define __update_rsp_in(_is_shadow_hba, _rsp, _rsp_in)			\
+diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
+index 03964b26f3f27..0226c9279cef6 100644
+--- a/drivers/scsi/scsi_lib.c
++++ b/drivers/scsi/scsi_lib.c
+@@ -1485,6 +1485,7 @@ static int scsi_dispatch_cmd(struct scsi_cmnd *cmd)
+ 		 */
+ 		SCSI_LOG_MLQUEUE(3, scmd_printk(KERN_INFO, cmd,
+ 			"queuecommand : device blocked\n"));
++		atomic_dec(&cmd->device->iorequest_cnt);
+ 		return SCSI_MLQUEUE_DEVICE_BUSY;
+ 	}
+ 
+@@ -1517,6 +1518,7 @@ static int scsi_dispatch_cmd(struct scsi_cmnd *cmd)
+ 	trace_scsi_dispatch_cmd_start(cmd);
+ 	rtn = host->hostt->queuecommand(host, cmd);
+ 	if (rtn) {
++		atomic_dec(&cmd->device->iorequest_cnt);
+ 		trace_scsi_dispatch_cmd_error(cmd, rtn);
+ 		if (rtn != SCSI_MLQUEUE_DEVICE_BUSY &&
+ 		    rtn != SCSI_MLQUEUE_TARGET_BUSY)
+diff --git a/drivers/scsi/stex.c b/drivers/scsi/stex.c
+index 8def242675ef3..6b07f367918ef 100644
+--- a/drivers/scsi/stex.c
++++ b/drivers/scsi/stex.c
+@@ -109,7 +109,9 @@ enum {
+ 	TASK_ATTRIBUTE_HEADOFQUEUE		= 0x1,
+ 	TASK_ATTRIBUTE_ORDERED			= 0x2,
+ 	TASK_ATTRIBUTE_ACA			= 0x4,
++};
+ 
++enum {
+ 	SS_STS_NORMAL				= 0x80000000,
+ 	SS_STS_DONE				= 0x40000000,
+ 	SS_STS_HANDSHAKE			= 0x20000000,
+@@ -121,7 +123,9 @@ enum {
+ 	SS_I2H_REQUEST_RESET			= 0x2000,
+ 
+ 	SS_MU_OPERATIONAL			= 0x80000000,
++};
+ 
++enum {
+ 	STEX_CDB_LENGTH				= 16,
+ 	STATUS_VAR_LEN				= 128,
+ 
+diff --git a/drivers/soc/qcom/rmtfs_mem.c b/drivers/soc/qcom/rmtfs_mem.c
+index 538fa182169a4..0d31377f178d5 100644
+--- a/drivers/soc/qcom/rmtfs_mem.c
++++ b/drivers/soc/qcom/rmtfs_mem.c
+@@ -31,7 +31,7 @@ struct qcom_rmtfs_mem {
+ 
+ 	unsigned int client_id;
+ 
+-	unsigned int perms;
++	u64 perms;
+ };
+ 
+ static ssize_t qcom_rmtfs_mem_show(struct device *dev,
+diff --git a/drivers/tty/serial/8250/8250_tegra.c b/drivers/tty/serial/8250/8250_tegra.c
+index e7cddeec9d8e0..c424e2ae0e8fe 100644
+--- a/drivers/tty/serial/8250/8250_tegra.c
++++ b/drivers/tty/serial/8250/8250_tegra.c
+@@ -112,13 +112,15 @@ static int tegra_uart_probe(struct platform_device *pdev)
+ 
+ 	ret = serial8250_register_8250_port(&port8250);
+ 	if (ret < 0)
+-		goto err_clkdisable;
++		goto err_ctrl_assert;
+ 
+ 	platform_set_drvdata(pdev, uart);
+ 	uart->line = ret;
+ 
+ 	return 0;
+ 
++err_ctrl_assert:
++	reset_control_assert(uart->rst);
+ err_clkdisable:
+ 	clk_disable_unprepare(uart->clk);
+ 
+diff --git a/drivers/tty/serial/Kconfig b/drivers/tty/serial/Kconfig
+index 0072892ca7fc9..b00e7c52aa16f 100644
+--- a/drivers/tty/serial/Kconfig
++++ b/drivers/tty/serial/Kconfig
+@@ -769,7 +769,7 @@ config SERIAL_PMACZILOG_CONSOLE
+ 
+ config SERIAL_CPM
+ 	tristate "CPM SCC/SMC serial port support"
+-	depends on CPM2 || CPM1 || (PPC32 && COMPILE_TEST)
++	depends on CPM2 || CPM1
+ 	select SERIAL_CORE
+ 	help
+ 	  This driver supports the SCC and SMC serial ports on Motorola 
+diff --git a/drivers/tty/serial/cpm_uart/cpm_uart.h b/drivers/tty/serial/cpm_uart/cpm_uart.h
+index 0577618e78c04..46c03ed71c31b 100644
+--- a/drivers/tty/serial/cpm_uart/cpm_uart.h
++++ b/drivers/tty/serial/cpm_uart/cpm_uart.h
+@@ -19,8 +19,6 @@ struct gpio_desc;
+ #include "cpm_uart_cpm2.h"
+ #elif defined(CONFIG_CPM1)
+ #include "cpm_uart_cpm1.h"
+-#elif defined(CONFIG_COMPILE_TEST)
+-#include "cpm_uart_cpm2.h"
+ #endif
+ 
+ #define SERIAL_CPM_MAJOR	204
+diff --git a/drivers/tty/serial/fsl_lpuart.c b/drivers/tty/serial/fsl_lpuart.c
+index aef9be3e73c15..2087a5e6f4357 100644
+--- a/drivers/tty/serial/fsl_lpuart.c
++++ b/drivers/tty/serial/fsl_lpuart.c
+@@ -1495,34 +1495,36 @@ static void lpuart_break_ctl(struct uart_port *port, int break_state)
+ 
+ static void lpuart32_break_ctl(struct uart_port *port, int break_state)
+ {
+-	unsigned long temp, modem;
+-	struct tty_struct *tty;
+-	unsigned int cflag = 0;
+-
+-	tty = tty_port_tty_get(&port->state->port);
+-	if (tty) {
+-		cflag = tty->termios.c_cflag;
+-		tty_kref_put(tty);
+-	}
++	unsigned long temp;
+ 
+-	temp = lpuart32_read(port, UARTCTRL) & ~UARTCTRL_SBK;
+-	modem = lpuart32_read(port, UARTMODIR);
++	temp = lpuart32_read(port, UARTCTRL);
+ 
++	/*
++	 * LPUART IP now has two known bugs, one is CTS has higher priority than the
++	 * break signal, which causes the break signal sending through UARTCTRL_SBK
++	 * may impacted by the CTS input if the HW flow control is enabled. It
++	 * exists on all platforms we support in this driver.
++	 * Another bug is i.MX8QM LPUART may have an additional break character
++	 * being sent after SBK was cleared.
++	 * To avoid above two bugs, we use Transmit Data Inversion function to send
++	 * the break signal instead of UARTCTRL_SBK.
++	 */
+ 	if (break_state != 0) {
+-		temp |= UARTCTRL_SBK;
+ 		/*
+-		 * LPUART CTS has higher priority than SBK, need to disable CTS before
+-		 * asserting SBK to avoid any interference if flow control is enabled.
++		 * Disable the transmitter to prevent any data from being sent out
++		 * during break, then invert the TX line to send break.
+ 		 */
+-		if (cflag & CRTSCTS && modem & UARTMODIR_TXCTSE)
+-			lpuart32_write(port, modem & ~UARTMODIR_TXCTSE, UARTMODIR);
++		temp &= ~UARTCTRL_TE;
++		lpuart32_write(port, temp, UARTCTRL);
++		temp |= UARTCTRL_TXINV;
++		lpuart32_write(port, temp, UARTCTRL);
+ 	} else {
+-		/* Re-enable the CTS when break off. */
+-		if (cflag & CRTSCTS && !(modem & UARTMODIR_TXCTSE))
+-			lpuart32_write(port, modem | UARTMODIR_TXCTSE, UARTMODIR);
++		/* Disable the TXINV to turn off break and re-enable transmitter. */
++		temp &= ~UARTCTRL_TXINV;
++		lpuart32_write(port, temp, UARTCTRL);
++		temp |= UARTCTRL_TE;
++		lpuart32_write(port, temp, UARTCTRL);
+ 	}
+-
+-	lpuart32_write(port, temp, UARTCTRL);
+ }
+ 
+ static void lpuart_setup_watermark(struct lpuart_port *sport)
+diff --git a/drivers/ufs/core/ufs-mcq.c b/drivers/ufs/core/ufs-mcq.c
+index 202ff71e1b582..51b3c6ae781df 100644
+--- a/drivers/ufs/core/ufs-mcq.c
++++ b/drivers/ufs/core/ufs-mcq.c
+@@ -150,7 +150,8 @@ static int ufshcd_mcq_config_nr_queues(struct ufs_hba *hba)
+ 	u32 hba_maxq, rem, tot_queues;
+ 	struct Scsi_Host *host = hba->host;
+ 
+-	hba_maxq = FIELD_GET(MAX_QUEUE_SUP, hba->mcq_capabilities);
++	/* maxq is 0 based value */
++	hba_maxq = FIELD_GET(MAX_QUEUE_SUP, hba->mcq_capabilities) + 1;
+ 
+ 	tot_queues = UFS_MCQ_NUM_DEV_CMD_QUEUES + read_queues + poll_queues +
+ 			rw_queues;
+@@ -265,7 +266,7 @@ static int ufshcd_mcq_get_tag(struct ufs_hba *hba,
+ 	addr = (le64_to_cpu(cqe->command_desc_base_addr) & CQE_UCD_BA) -
+ 		hba->ucdl_dma_addr;
+ 
+-	return div_u64(addr, sizeof(struct utp_transfer_cmd_desc));
++	return div_u64(addr, ufshcd_get_ucd_size(hba));
+ }
+ 
+ static void ufshcd_mcq_process_cqe(struct ufs_hba *hba,
+diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
+index 8ac2945e849f4..aec74987cb4e0 100644
+--- a/drivers/ufs/core/ufshcd.c
++++ b/drivers/ufs/core/ufshcd.c
+@@ -2821,10 +2821,10 @@ static void ufshcd_map_queues(struct Scsi_Host *shost)
+ static void ufshcd_init_lrb(struct ufs_hba *hba, struct ufshcd_lrb *lrb, int i)
+ {
+ 	struct utp_transfer_cmd_desc *cmd_descp = (void *)hba->ucdl_base_addr +
+-		i * sizeof_utp_transfer_cmd_desc(hba);
++		i * ufshcd_get_ucd_size(hba);
+ 	struct utp_transfer_req_desc *utrdlp = hba->utrdl_base_addr;
+ 	dma_addr_t cmd_desc_element_addr = hba->ucdl_dma_addr +
+-		i * sizeof_utp_transfer_cmd_desc(hba);
++		i * ufshcd_get_ucd_size(hba);
+ 	u16 response_offset = offsetof(struct utp_transfer_cmd_desc,
+ 				       response_upiu);
+ 	u16 prdt_offset = offsetof(struct utp_transfer_cmd_desc, prd_table);
+@@ -3733,7 +3733,7 @@ static int ufshcd_memory_alloc(struct ufs_hba *hba)
+ 	size_t utmrdl_size, utrdl_size, ucdl_size;
+ 
+ 	/* Allocate memory for UTP command descriptors */
+-	ucdl_size = sizeof_utp_transfer_cmd_desc(hba) * hba->nutrs;
++	ucdl_size = ufshcd_get_ucd_size(hba) * hba->nutrs;
+ 	hba->ucdl_base_addr = dmam_alloc_coherent(hba->dev,
+ 						  ucdl_size,
+ 						  &hba->ucdl_dma_addr,
+@@ -3833,7 +3833,7 @@ static void ufshcd_host_memory_configure(struct ufs_hba *hba)
+ 	prdt_offset =
+ 		offsetof(struct utp_transfer_cmd_desc, prd_table);
+ 
+-	cmd_desc_size = sizeof_utp_transfer_cmd_desc(hba);
++	cmd_desc_size = ufshcd_get_ucd_size(hba);
+ 	cmd_desc_dma_addr = hba->ucdl_dma_addr;
+ 
+ 	for (i = 0; i < hba->nutrs; i++) {
+@@ -8422,7 +8422,7 @@ static void ufshcd_release_sdb_queue(struct ufs_hba *hba, int nutrs)
+ {
+ 	size_t ucdl_size, utrdl_size;
+ 
+-	ucdl_size = sizeof(struct utp_transfer_cmd_desc) * nutrs;
++	ucdl_size = ufshcd_get_ucd_size(hba) * nutrs;
+ 	dmam_free_coherent(hba->dev, ucdl_size, hba->ucdl_base_addr,
+ 			   hba->ucdl_dma_addr);
+ 
+diff --git a/drivers/usb/cdns3/cdns3-gadget.c b/drivers/usb/cdns3/cdns3-gadget.c
+index ccfaebca6faa7..1dcadef933e3a 100644
+--- a/drivers/usb/cdns3/cdns3-gadget.c
++++ b/drivers/usb/cdns3/cdns3-gadget.c
+@@ -2097,6 +2097,19 @@ int cdns3_ep_config(struct cdns3_endpoint *priv_ep, bool enable)
+ 	else
+ 		priv_ep->trb_burst_size = 16;
+ 
++	/*
++	 * In versions preceding DEV_VER_V2, for example, iMX8QM, there exit the bugs
++	 * in the DMA. These bugs occur when the trb_burst_size exceeds 16 and the
++	 * address is not aligned to 128 Bytes (which is a product of the 64-bit AXI
++	 * and AXI maximum burst length of 16 or 0xF+1, dma_axi_ctrl0[3:0]). This
++	 * results in data corruption when it crosses the 4K border. The corruption
++	 * specifically occurs from the position (4K - (address & 0x7F)) to 4K.
++	 *
++	 * So force trb_burst_size to 16 at such platform.
++	 */
++	if (priv_dev->dev_ver < DEV_VER_V2)
++		priv_ep->trb_burst_size = 16;
++
+ 	mult = min_t(u8, mult, EP_CFG_MULT_MAX);
+ 	buffering = min_t(u8, buffering, EP_CFG_BUFFERING_MAX);
+ 	maxburst = min_t(u8, maxburst, EP_CFG_MAXBURST_MAX);
+diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c
+index 56cdfb2e42113..20dadaffbe363 100644
+--- a/drivers/usb/gadget/function/f_fs.c
++++ b/drivers/usb/gadget/function/f_fs.c
+@@ -3620,6 +3620,7 @@ static void ffs_func_unbind(struct usb_configuration *c,
+ 	/* Drain any pending AIO completions */
+ 	drain_workqueue(ffs->io_completion_wq);
+ 
++	ffs_event_add(ffs, FUNCTIONFS_UNBIND);
+ 	if (!--opts->refcnt)
+ 		functionfs_unbind(ffs);
+ 
+@@ -3644,7 +3645,6 @@ static void ffs_func_unbind(struct usb_configuration *c,
+ 	func->function.ssp_descriptors = NULL;
+ 	func->interfaces_nums = NULL;
+ 
+-	ffs_event_add(ffs, FUNCTIONFS_UNBIND);
+ }
+ 
+ static struct usb_function *ffs_alloc(struct usb_function_instance *fi)
+diff --git a/drivers/video/fbdev/core/bitblit.c b/drivers/video/fbdev/core/bitblit.c
+index f98e8f298bc19..8587c9da06700 100644
+--- a/drivers/video/fbdev/core/bitblit.c
++++ b/drivers/video/fbdev/core/bitblit.c
+@@ -247,6 +247,9 @@ static void bit_cursor(struct vc_data *vc, struct fb_info *info, int mode,
+ 
+ 	cursor.set = 0;
+ 
++	if (!vc->vc_font.data)
++		return;
++
+  	c = scr_readw((u16 *) vc->vc_pos);
+ 	attribute = get_attribute(info, c);
+ 	src = vc->vc_font.data + ((c & charmask) * (w * vc->vc_font.height));
+diff --git a/drivers/video/fbdev/core/modedb.c b/drivers/video/fbdev/core/modedb.c
+index 6473e0dfe1464..e78ec7f728463 100644
+--- a/drivers/video/fbdev/core/modedb.c
++++ b/drivers/video/fbdev/core/modedb.c
+@@ -257,6 +257,11 @@ static const struct fb_videomode modedb[] = {
+ 	{ NULL, 72, 480, 300, 33386, 40, 24, 11, 19, 80, 3, 0,
+ 		FB_VMODE_DOUBLE },
+ 
++	/* 1920x1080 @ 60 Hz, 67.3 kHz hsync */
++	{ NULL, 60, 1920, 1080, 6734, 148, 88, 36, 4, 44, 5, 0,
++		FB_SYNC_HOR_HIGH_ACT | FB_SYNC_VERT_HIGH_ACT,
++		FB_VMODE_NONINTERLACED },
++
+ 	/* 1920x1200 @ 60 Hz, 74.5 Khz hsync */
+ 	{ NULL, 60, 1920, 1200, 5177, 128, 336, 1, 38, 208, 3,
+ 		FB_SYNC_HOR_HIGH_ACT | FB_SYNC_VERT_HIGH_ACT,
+diff --git a/drivers/video/fbdev/imsttfb.c b/drivers/video/fbdev/imsttfb.c
+index bea45647184e1..975dd682fae4b 100644
+--- a/drivers/video/fbdev/imsttfb.c
++++ b/drivers/video/fbdev/imsttfb.c
+@@ -1347,7 +1347,7 @@ static const struct fb_ops imsttfb_ops = {
+ 	.fb_ioctl 	= imsttfb_ioctl,
+ };
+ 
+-static void init_imstt(struct fb_info *info)
++static int init_imstt(struct fb_info *info)
+ {
+ 	struct imstt_par *par = info->par;
+ 	__u32 i, tmp, *ip, *end;
+@@ -1420,7 +1420,7 @@ static void init_imstt(struct fb_info *info)
+ 	    || !(compute_imstt_regvals(par, info->var.xres, info->var.yres))) {
+ 		printk("imsttfb: %ux%ux%u not supported\n", info->var.xres, info->var.yres, info->var.bits_per_pixel);
+ 		framebuffer_release(info);
+-		return;
++		return -ENODEV;
+ 	}
+ 
+ 	sprintf(info->fix.id, "IMS TT (%s)", par->ramdac == IBM ? "IBM" : "TVP");
+@@ -1456,12 +1456,13 @@ static void init_imstt(struct fb_info *info)
+ 
+ 	if (register_framebuffer(info) < 0) {
+ 		framebuffer_release(info);
+-		return;
++		return -ENODEV;
+ 	}
+ 
+ 	tmp = (read_reg_le32(par->dc_regs, SSTATUS) & 0x0f00) >> 8;
+ 	fb_info(info, "%s frame buffer; %uMB vram; chip version %u\n",
+ 		info->fix.id, info->fix.smem_len >> 20, tmp);
++	return 0;
+ }
+ 
+ static int imsttfb_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+@@ -1529,10 +1530,10 @@ static int imsttfb_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	if (!par->cmap_regs)
+ 		goto error;
+ 	info->pseudo_palette = par->palette;
+-	init_imstt(info);
+-
+-	pci_set_drvdata(pdev, info);
+-	return 0;
++	ret = init_imstt(info);
++	if (!ret)
++		pci_set_drvdata(pdev, info);
++	return ret;
+ 
+ error:
+ 	if (par->dc_regs)
+diff --git a/drivers/video/fbdev/stifb.c b/drivers/video/fbdev/stifb.c
+index ef8a4c5fc6875..63f51783352dc 100644
+--- a/drivers/video/fbdev/stifb.c
++++ b/drivers/video/fbdev/stifb.c
+@@ -1413,6 +1413,7 @@ out_err1:
+ 	iounmap(info->screen_base);
+ out_err0:
+ 	kfree(fb);
++	sti->info = NULL;
+ 	return -ENXIO;
+ }
+ 
+diff --git a/drivers/watchdog/menz69_wdt.c b/drivers/watchdog/menz69_wdt.c
+index 8973f98bc6a56..bca0938f3429f 100644
+--- a/drivers/watchdog/menz69_wdt.c
++++ b/drivers/watchdog/menz69_wdt.c
+@@ -98,14 +98,6 @@ static const struct watchdog_ops men_z069_ops = {
+ 	.set_timeout = men_z069_wdt_set_timeout,
+ };
+ 
+-static struct watchdog_device men_z069_wdt = {
+-	.info = &men_z069_info,
+-	.ops = &men_z069_ops,
+-	.timeout = MEN_Z069_DEFAULT_TIMEOUT,
+-	.min_timeout = 1,
+-	.max_timeout = MEN_Z069_WDT_COUNTER_MAX / MEN_Z069_TIMER_FREQ,
+-};
+-
+ static int men_z069_probe(struct mcb_device *dev,
+ 			  const struct mcb_device_id *id)
+ {
+@@ -125,15 +117,19 @@ static int men_z069_probe(struct mcb_device *dev,
+ 		goto release_mem;
+ 
+ 	drv->mem = mem;
++	drv->wdt.info = &men_z069_info;
++	drv->wdt.ops = &men_z069_ops;
++	drv->wdt.timeout = MEN_Z069_DEFAULT_TIMEOUT;
++	drv->wdt.min_timeout = 1;
++	drv->wdt.max_timeout = MEN_Z069_WDT_COUNTER_MAX / MEN_Z069_TIMER_FREQ;
+ 
+-	drv->wdt = men_z069_wdt;
+ 	watchdog_init_timeout(&drv->wdt, 0, &dev->dev);
+ 	watchdog_set_nowayout(&drv->wdt, nowayout);
+ 	watchdog_set_drvdata(&drv->wdt, drv);
+ 	drv->wdt.parent = &dev->dev;
+ 	mcb_set_drvdata(dev, drv);
+ 
+-	return watchdog_register_device(&men_z069_wdt);
++	return watchdog_register_device(&drv->wdt);
+ 
+ release_mem:
+ 	mcb_release_mem(mem);
+diff --git a/fs/btrfs/bio.c b/fs/btrfs/bio.c
+index 726592868e9c5..ada899613486a 100644
+--- a/fs/btrfs/bio.c
++++ b/fs/btrfs/bio.c
+@@ -307,7 +307,7 @@ static void btrfs_end_bio_work(struct work_struct *work)
+ 
+ 	/* Metadata reads are checked and repaired by the submitter. */
+ 	if (bbio->bio.bi_opf & REQ_META)
+-		bbio->end_io(bbio);
++		btrfs_orig_bbio_end_io(bbio);
+ 	else
+ 		btrfs_check_read_bio(bbio, bbio->bio.bi_private);
+ }
+diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
+index 26bb10b6ca85d..986827370d8e1 100644
+--- a/fs/btrfs/ctree.c
++++ b/fs/btrfs/ctree.c
+@@ -3222,6 +3222,7 @@ static int push_leaf_right(struct btrfs_trans_handle *trans, struct btrfs_root
+ 
+ 	if (check_sibling_keys(left, right)) {
+ 		ret = -EUCLEAN;
++		btrfs_abort_transaction(trans, ret);
+ 		btrfs_tree_unlock(right);
+ 		free_extent_buffer(right);
+ 		return ret;
+@@ -3444,6 +3445,7 @@ static int push_leaf_left(struct btrfs_trans_handle *trans, struct btrfs_root
+ 
+ 	if (check_sibling_keys(left, right)) {
+ 		ret = -EUCLEAN;
++		btrfs_abort_transaction(trans, ret);
+ 		goto out;
+ 	}
+ 	return __push_leaf_left(trans, path, min_data_size, empty, left,
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index 316bef1891425..53eedc74cfcca 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -96,7 +96,7 @@ static void csum_tree_block(struct extent_buffer *buf, u8 *result)
+ 	crypto_shash_update(shash, kaddr + BTRFS_CSUM_SIZE,
+ 			    first_page_part - BTRFS_CSUM_SIZE);
+ 
+-	for (i = 1; i < num_pages; i++) {
++	for (i = 1; i < num_pages && INLINE_EXTENT_BUFFER_PAGES > 1; i++) {
+ 		kaddr = page_address(buf->pages[i]);
+ 		crypto_shash_update(shash, kaddr, PAGE_SIZE);
+ 	}
+diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
+index 54e3c2ab21d22..1989c8deea55a 100644
+--- a/fs/ceph/mds_client.c
++++ b/fs/ceph/mds_client.c
+@@ -3938,7 +3938,7 @@ static int reconnect_caps_cb(struct inode *inode, int mds, void *arg)
+ 	struct dentry *dentry;
+ 	struct ceph_cap *cap;
+ 	char *path;
+-	int pathlen = 0, err = 0;
++	int pathlen = 0, err;
+ 	u64 pathbase;
+ 	u64 snap_follows;
+ 
+@@ -3961,6 +3961,7 @@ static int reconnect_caps_cb(struct inode *inode, int mds, void *arg)
+ 	cap = __get_cap_for_mds(ci, mds);
+ 	if (!cap) {
+ 		spin_unlock(&ci->i_ceph_lock);
++		err = 0;
+ 		goto out_err;
+ 	}
+ 	dout(" adding %p ino %llx.%llx cap %p %lld %s\n",
+diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
+index 68228bd60e836..80449095aa2a3 100644
+--- a/fs/ext4/ext4.h
++++ b/fs/ext4/ext4.h
+@@ -1007,11 +1007,13 @@ do {									       \
+  *			  where the second inode has larger inode number
+  *			  than the first
+  *  I_DATA_SEM_QUOTA  - Used for quota inodes only
++ *  I_DATA_SEM_EA     - Used for ea_inodes only
+  */
+ enum {
+ 	I_DATA_SEM_NORMAL = 0,
+ 	I_DATA_SEM_OTHER,
+ 	I_DATA_SEM_QUOTA,
++	I_DATA_SEM_EA
+ };
+ 
+ 
+@@ -2992,7 +2994,8 @@ typedef enum {
+ 	EXT4_IGET_NORMAL =	0,
+ 	EXT4_IGET_SPECIAL =	0x0001, /* OK to iget a system inode */
+ 	EXT4_IGET_HANDLE = 	0x0002,	/* Inode # is from a handle */
+-	EXT4_IGET_BAD =		0x0004  /* Allow to iget a bad inode */
++	EXT4_IGET_BAD =		0x0004, /* Allow to iget a bad inode */
++	EXT4_IGET_EA_INODE =	0x0008	/* Inode should contain an EA value */
+ } ext4_iget_flags;
+ 
+ extern struct inode *__ext4_iget(struct super_block *sb, unsigned long ino,
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 145ea24d589b8..211fa8395ec68 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -4835,6 +4835,24 @@ static inline void ext4_inode_set_iversion_queried(struct inode *inode, u64 val)
+ 		inode_set_iversion_queried(inode, val);
+ }
+ 
++static const char *check_igot_inode(struct inode *inode, ext4_iget_flags flags)
++
++{
++	if (flags & EXT4_IGET_EA_INODE) {
++		if (!(EXT4_I(inode)->i_flags & EXT4_EA_INODE_FL))
++			return "missing EA_INODE flag";
++		if (ext4_test_inode_state(inode, EXT4_STATE_XATTR) ||
++		    EXT4_I(inode)->i_file_acl)
++			return "ea_inode with extended attributes";
++	} else {
++		if ((EXT4_I(inode)->i_flags & EXT4_EA_INODE_FL))
++			return "unexpected EA_INODE flag";
++	}
++	if (is_bad_inode(inode) && !(flags & EXT4_IGET_BAD))
++		return "unexpected bad inode w/o EXT4_IGET_BAD";
++	return NULL;
++}
++
+ struct inode *__ext4_iget(struct super_block *sb, unsigned long ino,
+ 			  ext4_iget_flags flags, const char *function,
+ 			  unsigned int line)
+@@ -4844,6 +4862,7 @@ struct inode *__ext4_iget(struct super_block *sb, unsigned long ino,
+ 	struct ext4_inode_info *ei;
+ 	struct ext4_super_block *es = EXT4_SB(sb)->s_es;
+ 	struct inode *inode;
++	const char *err_str;
+ 	journal_t *journal = EXT4_SB(sb)->s_journal;
+ 	long ret;
+ 	loff_t size;
+@@ -4871,8 +4890,14 @@ struct inode *__ext4_iget(struct super_block *sb, unsigned long ino,
+ 	inode = iget_locked(sb, ino);
+ 	if (!inode)
+ 		return ERR_PTR(-ENOMEM);
+-	if (!(inode->i_state & I_NEW))
++	if (!(inode->i_state & I_NEW)) {
++		if ((err_str = check_igot_inode(inode, flags)) != NULL) {
++			ext4_error_inode(inode, function, line, 0, err_str);
++			iput(inode);
++			return ERR_PTR(-EFSCORRUPTED);
++		}
+ 		return inode;
++	}
+ 
+ 	ei = EXT4_I(inode);
+ 	iloc.bh = NULL;
+@@ -5138,10 +5163,9 @@ struct inode *__ext4_iget(struct super_block *sb, unsigned long ino,
+ 	if (IS_CASEFOLDED(inode) && !ext4_has_feature_casefold(inode->i_sb))
+ 		ext4_error_inode(inode, function, line, 0,
+ 				 "casefold flag without casefold feature");
+-	if (is_bad_inode(inode) && !(flags & EXT4_IGET_BAD)) {
+-		ext4_error_inode(inode, function, line, 0,
+-				 "bad inode without EXT4_IGET_BAD flag");
+-		ret = -EUCLEAN;
++	if ((err_str = check_igot_inode(inode, flags)) != NULL) {
++		ext4_error_inode(inode, function, line, 0, err_str);
++		ret = -EFSCORRUPTED;
+ 		goto bad_inode;
+ 	}
+ 
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index d34afa8e0c158..1f222c396932e 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -6554,18 +6554,6 @@ static int __ext4_remount(struct fs_context *fc, struct super_block *sb)
+ 		}
+ 	}
+ 
+-	/*
+-	 * Reinitialize lazy itable initialization thread based on
+-	 * current settings
+-	 */
+-	if (sb_rdonly(sb) || !test_opt(sb, INIT_INODE_TABLE))
+-		ext4_unregister_li_request(sb);
+-	else {
+-		ext4_group_t first_not_zeroed;
+-		first_not_zeroed = ext4_has_uninit_itable(sb);
+-		ext4_register_li_request(sb, first_not_zeroed);
+-	}
+-
+ 	/*
+ 	 * Handle creation of system zone data early because it can fail.
+ 	 * Releasing of existing data is done when we are sure remount will
+@@ -6603,6 +6591,18 @@ static int __ext4_remount(struct fs_context *fc, struct super_block *sb)
+ 	if (enable_rw)
+ 		sb->s_flags &= ~SB_RDONLY;
+ 
++	/*
++	 * Reinitialize lazy itable initialization thread based on
++	 * current settings
++	 */
++	if (sb_rdonly(sb) || !test_opt(sb, INIT_INODE_TABLE))
++		ext4_unregister_li_request(sb);
++	else {
++		ext4_group_t first_not_zeroed;
++		first_not_zeroed = ext4_has_uninit_itable(sb);
++		ext4_register_li_request(sb, first_not_zeroed);
++	}
++
+ 	if (!ext4_has_feature_mmp(sb) || sb_rdonly(sb))
+ 		ext4_stop_mmpd(sbi);
+ 
+diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c
+index e33a323faf3c3..6fa38707163b1 100644
+--- a/fs/ext4/xattr.c
++++ b/fs/ext4/xattr.c
+@@ -125,7 +125,11 @@ ext4_expand_inode_array(struct ext4_xattr_inode_array **ea_inode_array,
+ #ifdef CONFIG_LOCKDEP
+ void ext4_xattr_inode_set_class(struct inode *ea_inode)
+ {
++	struct ext4_inode_info *ei = EXT4_I(ea_inode);
++
+ 	lockdep_set_subclass(&ea_inode->i_rwsem, 1);
++	(void) ei;	/* shut up clang warning if !CONFIG_LOCKDEP */
++	lockdep_set_subclass(&ei->i_data_sem, I_DATA_SEM_EA);
+ }
+ #endif
+ 
+@@ -433,7 +437,7 @@ static int ext4_xattr_inode_iget(struct inode *parent, unsigned long ea_ino,
+ 		return -EFSCORRUPTED;
+ 	}
+ 
+-	inode = ext4_iget(parent->i_sb, ea_ino, EXT4_IGET_NORMAL);
++	inode = ext4_iget(parent->i_sb, ea_ino, EXT4_IGET_EA_INODE);
+ 	if (IS_ERR(inode)) {
+ 		err = PTR_ERR(inode);
+ 		ext4_error(parent->i_sb,
+@@ -441,23 +445,6 @@ static int ext4_xattr_inode_iget(struct inode *parent, unsigned long ea_ino,
+ 			   err);
+ 		return err;
+ 	}
+-
+-	if (is_bad_inode(inode)) {
+-		ext4_error(parent->i_sb,
+-			   "error while reading EA inode %lu is_bad_inode",
+-			   ea_ino);
+-		err = -EIO;
+-		goto error;
+-	}
+-
+-	if (!(EXT4_I(inode)->i_flags & EXT4_EA_INODE_FL)) {
+-		ext4_error(parent->i_sb,
+-			   "EA inode %lu does not have EXT4_EA_INODE_FL flag",
+-			    ea_ino);
+-		err = -EINVAL;
+-		goto error;
+-	}
+-
+ 	ext4_xattr_inode_set_class(inode);
+ 
+ 	/*
+@@ -478,9 +465,6 @@ static int ext4_xattr_inode_iget(struct inode *parent, unsigned long ea_ino,
+ 
+ 	*ea_inode = inode;
+ 	return 0;
+-error:
+-	iput(inode);
+-	return err;
+ }
+ 
+ /* Remove entry from mbcache when EA inode is getting evicted */
+@@ -1557,11 +1541,11 @@ ext4_xattr_inode_cache_find(struct inode *inode, const void *value,
+ 
+ 	while (ce) {
+ 		ea_inode = ext4_iget(inode->i_sb, ce->e_value,
+-				     EXT4_IGET_NORMAL);
+-		if (!IS_ERR(ea_inode) &&
+-		    !is_bad_inode(ea_inode) &&
+-		    (EXT4_I(ea_inode)->i_flags & EXT4_EA_INODE_FL) &&
+-		    i_size_read(ea_inode) == value_len &&
++				     EXT4_IGET_EA_INODE);
++		if (IS_ERR(ea_inode))
++			goto next_entry;
++		ext4_xattr_inode_set_class(ea_inode);
++		if (i_size_read(ea_inode) == value_len &&
+ 		    !ext4_xattr_inode_read(ea_inode, ea_data, value_len) &&
+ 		    !ext4_xattr_inode_verify_hashes(ea_inode, NULL, ea_data,
+ 						    value_len) &&
+@@ -1571,9 +1555,8 @@ ext4_xattr_inode_cache_find(struct inode *inode, const void *value,
+ 			kvfree(ea_data);
+ 			return ea_inode;
+ 		}
+-
+-		if (!IS_ERR(ea_inode))
+-			iput(ea_inode);
++		iput(ea_inode);
++	next_entry:
+ 		ce = mb_cache_entry_find_next(ea_inode_cache, ce);
+ 	}
+ 	kvfree(ea_data);
+diff --git a/fs/gfs2/super.c b/fs/gfs2/super.c
+index a83fa62106f0e..7891f331082aa 100644
+--- a/fs/gfs2/super.c
++++ b/fs/gfs2/super.c
+@@ -1410,6 +1410,14 @@ static void gfs2_evict_inode(struct inode *inode)
+ 	if (inode->i_nlink || sb_rdonly(sb) || !ip->i_no_addr)
+ 		goto out;
+ 
++	/*
++	 * In case of an incomplete mount, gfs2_evict_inode() may be called for
++	 * system files without having an active journal to write to.  In that
++	 * case, skip the filesystem evict.
++	 */
++	if (!sdp->sd_jdesc)
++		goto out;
++
+ 	gfs2_holder_mark_uninitialized(&gh);
+ 	ret = evict_should_delete(inode, &gh);
+ 	if (ret == SHOULD_DEFER_EVICTION)
+diff --git a/fs/ksmbd/oplock.c b/fs/ksmbd/oplock.c
+index 6d1ccb9998939..db181bdad73a2 100644
+--- a/fs/ksmbd/oplock.c
++++ b/fs/ksmbd/oplock.c
+@@ -157,13 +157,42 @@ static struct oplock_info *opinfo_get_list(struct ksmbd_inode *ci)
+ 	rcu_read_lock();
+ 	opinfo = list_first_or_null_rcu(&ci->m_op_list, struct oplock_info,
+ 					op_entry);
+-	if (opinfo && !atomic_inc_not_zero(&opinfo->refcount))
+-		opinfo = NULL;
++	if (opinfo) {
++		if (!atomic_inc_not_zero(&opinfo->refcount))
++			opinfo = NULL;
++		else {
++			atomic_inc(&opinfo->conn->r_count);
++			if (ksmbd_conn_releasing(opinfo->conn)) {
++				atomic_dec(&opinfo->conn->r_count);
++				atomic_dec(&opinfo->refcount);
++				opinfo = NULL;
++			}
++		}
++	}
++
+ 	rcu_read_unlock();
+ 
+ 	return opinfo;
+ }
+ 
++static void opinfo_conn_put(struct oplock_info *opinfo)
++{
++	struct ksmbd_conn *conn;
++
++	if (!opinfo)
++		return;
++
++	conn = opinfo->conn;
++	/*
++	 * Checking waitqueue to dropping pending requests on
++	 * disconnection. waitqueue_active is safe because it
++	 * uses atomic operation for condition.
++	 */
++	if (!atomic_dec_return(&conn->r_count) && waitqueue_active(&conn->r_count_q))
++		wake_up(&conn->r_count_q);
++	opinfo_put(opinfo);
++}
++
+ void opinfo_put(struct oplock_info *opinfo)
+ {
+ 	if (!atomic_dec_and_test(&opinfo->refcount))
+@@ -666,13 +695,6 @@ static void __smb2_oplock_break_noti(struct work_struct *wk)
+ 
+ out:
+ 	ksmbd_free_work_struct(work);
+-	/*
+-	 * Checking waitqueue to dropping pending requests on
+-	 * disconnection. waitqueue_active is safe because it
+-	 * uses atomic operation for condition.
+-	 */
+-	if (!atomic_dec_return(&conn->r_count) && waitqueue_active(&conn->r_count_q))
+-		wake_up(&conn->r_count_q);
+ }
+ 
+ /**
+@@ -706,7 +728,6 @@ static int smb2_oplock_break_noti(struct oplock_info *opinfo)
+ 	work->conn = conn;
+ 	work->sess = opinfo->sess;
+ 
+-	atomic_inc(&conn->r_count);
+ 	if (opinfo->op_state == OPLOCK_ACK_WAIT) {
+ 		INIT_WORK(&work->work, __smb2_oplock_break_noti);
+ 		ksmbd_queue_work(work);
+@@ -776,13 +797,6 @@ static void __smb2_lease_break_noti(struct work_struct *wk)
+ 
+ out:
+ 	ksmbd_free_work_struct(work);
+-	/*
+-	 * Checking waitqueue to dropping pending requests on
+-	 * disconnection. waitqueue_active is safe because it
+-	 * uses atomic operation for condition.
+-	 */
+-	if (!atomic_dec_return(&conn->r_count) && waitqueue_active(&conn->r_count_q))
+-		wake_up(&conn->r_count_q);
+ }
+ 
+ /**
+@@ -822,7 +836,6 @@ static int smb2_lease_break_noti(struct oplock_info *opinfo)
+ 	work->conn = conn;
+ 	work->sess = opinfo->sess;
+ 
+-	atomic_inc(&conn->r_count);
+ 	if (opinfo->op_state == OPLOCK_ACK_WAIT) {
+ 		list_for_each_safe(tmp, t, &opinfo->interim_list) {
+ 			struct ksmbd_work *in_work;
+@@ -1144,8 +1157,10 @@ int smb_grant_oplock(struct ksmbd_work *work, int req_op_level, u64 pid,
+ 	}
+ 	prev_opinfo = opinfo_get_list(ci);
+ 	if (!prev_opinfo ||
+-	    (prev_opinfo->level == SMB2_OPLOCK_LEVEL_NONE && lctx))
++	    (prev_opinfo->level == SMB2_OPLOCK_LEVEL_NONE && lctx)) {
++		opinfo_conn_put(prev_opinfo);
+ 		goto set_lev;
++	}
+ 	prev_op_has_lease = prev_opinfo->is_lease;
+ 	if (prev_op_has_lease)
+ 		prev_op_state = prev_opinfo->o_lease->state;
+@@ -1153,19 +1168,19 @@ int smb_grant_oplock(struct ksmbd_work *work, int req_op_level, u64 pid,
+ 	if (share_ret < 0 &&
+ 	    prev_opinfo->level == SMB2_OPLOCK_LEVEL_EXCLUSIVE) {
+ 		err = share_ret;
+-		opinfo_put(prev_opinfo);
++		opinfo_conn_put(prev_opinfo);
+ 		goto err_out;
+ 	}
+ 
+ 	if (prev_opinfo->level != SMB2_OPLOCK_LEVEL_BATCH &&
+ 	    prev_opinfo->level != SMB2_OPLOCK_LEVEL_EXCLUSIVE) {
+-		opinfo_put(prev_opinfo);
++		opinfo_conn_put(prev_opinfo);
+ 		goto op_break_not_needed;
+ 	}
+ 
+ 	list_add(&work->interim_entry, &prev_opinfo->interim_list);
+ 	err = oplock_break(prev_opinfo, SMB2_OPLOCK_LEVEL_II);
+-	opinfo_put(prev_opinfo);
++	opinfo_conn_put(prev_opinfo);
+ 	if (err == -ENOENT)
+ 		goto set_lev;
+ 	/* Check all oplock was freed by close */
+@@ -1228,14 +1243,14 @@ static void smb_break_all_write_oplock(struct ksmbd_work *work,
+ 		return;
+ 	if (brk_opinfo->level != SMB2_OPLOCK_LEVEL_BATCH &&
+ 	    brk_opinfo->level != SMB2_OPLOCK_LEVEL_EXCLUSIVE) {
+-		opinfo_put(brk_opinfo);
++		opinfo_conn_put(brk_opinfo);
+ 		return;
+ 	}
+ 
+ 	brk_opinfo->open_trunc = is_trunc;
+ 	list_add(&work->interim_entry, &brk_opinfo->interim_list);
+ 	oplock_break(brk_opinfo, SMB2_OPLOCK_LEVEL_II);
+-	opinfo_put(brk_opinfo);
++	opinfo_conn_put(brk_opinfo);
+ }
+ 
+ /**
+@@ -1263,6 +1278,13 @@ void smb_break_all_levII_oplock(struct ksmbd_work *work, struct ksmbd_file *fp,
+ 	list_for_each_entry_rcu(brk_op, &ci->m_op_list, op_entry) {
+ 		if (!atomic_inc_not_zero(&brk_op->refcount))
+ 			continue;
++
++		atomic_inc(&brk_op->conn->r_count);
++		if (ksmbd_conn_releasing(brk_op->conn)) {
++			atomic_dec(&brk_op->conn->r_count);
++			continue;
++		}
++
+ 		rcu_read_unlock();
+ 		if (brk_op->is_lease && (brk_op->o_lease->state &
+ 		    (~(SMB2_LEASE_READ_CACHING_LE |
+@@ -1292,7 +1314,7 @@ void smb_break_all_levII_oplock(struct ksmbd_work *work, struct ksmbd_file *fp,
+ 		brk_op->open_trunc = is_trunc;
+ 		oplock_break(brk_op, SMB2_OPLOCK_LEVEL_NONE);
+ next:
+-		opinfo_put(brk_op);
++		opinfo_conn_put(brk_op);
+ 		rcu_read_lock();
+ 	}
+ 	rcu_read_unlock();
+diff --git a/fs/ksmbd/smb2pdu.c b/fs/ksmbd/smb2pdu.c
+index cca593e340967..46f815098a753 100644
+--- a/fs/ksmbd/smb2pdu.c
++++ b/fs/ksmbd/smb2pdu.c
+@@ -326,13 +326,9 @@ int smb2_set_rsp_credits(struct ksmbd_work *work)
+ 	if (hdr->Command == SMB2_NEGOTIATE)
+ 		aux_max = 1;
+ 	else
+-		aux_max = conn->vals->max_credits - credit_charge;
++		aux_max = conn->vals->max_credits - conn->total_credits;
+ 	credits_granted = min_t(unsigned short, credits_requested, aux_max);
+ 
+-	if (conn->vals->max_credits - conn->total_credits < credits_granted)
+-		credits_granted = conn->vals->max_credits -
+-			conn->total_credits;
+-
+ 	conn->total_credits += credits_granted;
+ 	work->credits_granted += credits_granted;
+ 
+@@ -877,13 +873,14 @@ static void assemble_neg_contexts(struct ksmbd_conn *conn,
+ 
+ static __le32 decode_preauth_ctxt(struct ksmbd_conn *conn,
+ 				  struct smb2_preauth_neg_context *pneg_ctxt,
+-				  int len_of_ctxts)
++				  int ctxt_len)
+ {
+ 	/*
+ 	 * sizeof(smb2_preauth_neg_context) assumes SMB311_SALT_SIZE Salt,
+ 	 * which may not be present. Only check for used HashAlgorithms[1].
+ 	 */
+-	if (len_of_ctxts < MIN_PREAUTH_CTXT_DATA_LEN)
++	if (ctxt_len <
++	    sizeof(struct smb2_neg_context) + MIN_PREAUTH_CTXT_DATA_LEN)
+ 		return STATUS_INVALID_PARAMETER;
+ 
+ 	if (pneg_ctxt->HashAlgorithms != SMB2_PREAUTH_INTEGRITY_SHA512)
+@@ -895,15 +892,23 @@ static __le32 decode_preauth_ctxt(struct ksmbd_conn *conn,
+ 
+ static void decode_encrypt_ctxt(struct ksmbd_conn *conn,
+ 				struct smb2_encryption_neg_context *pneg_ctxt,
+-				int len_of_ctxts)
++				int ctxt_len)
+ {
+-	int cph_cnt = le16_to_cpu(pneg_ctxt->CipherCount);
+-	int i, cphs_size = cph_cnt * sizeof(__le16);
++	int cph_cnt;
++	int i, cphs_size;
++
++	if (sizeof(struct smb2_encryption_neg_context) > ctxt_len) {
++		pr_err("Invalid SMB2_ENCRYPTION_CAPABILITIES context size\n");
++		return;
++	}
+ 
+ 	conn->cipher_type = 0;
+ 
++	cph_cnt = le16_to_cpu(pneg_ctxt->CipherCount);
++	cphs_size = cph_cnt * sizeof(__le16);
++
+ 	if (sizeof(struct smb2_encryption_neg_context) + cphs_size >
+-	    len_of_ctxts) {
++	    ctxt_len) {
+ 		pr_err("Invalid cipher count(%d)\n", cph_cnt);
+ 		return;
+ 	}
+@@ -951,15 +956,22 @@ static void decode_compress_ctxt(struct ksmbd_conn *conn,
+ 
+ static void decode_sign_cap_ctxt(struct ksmbd_conn *conn,
+ 				 struct smb2_signing_capabilities *pneg_ctxt,
+-				 int len_of_ctxts)
++				 int ctxt_len)
+ {
+-	int sign_algo_cnt = le16_to_cpu(pneg_ctxt->SigningAlgorithmCount);
+-	int i, sign_alos_size = sign_algo_cnt * sizeof(__le16);
++	int sign_algo_cnt;
++	int i, sign_alos_size;
++
++	if (sizeof(struct smb2_signing_capabilities) > ctxt_len) {
++		pr_err("Invalid SMB2_SIGNING_CAPABILITIES context length\n");
++		return;
++	}
+ 
+ 	conn->signing_negotiated = false;
++	sign_algo_cnt = le16_to_cpu(pneg_ctxt->SigningAlgorithmCount);
++	sign_alos_size = sign_algo_cnt * sizeof(__le16);
+ 
+ 	if (sizeof(struct smb2_signing_capabilities) + sign_alos_size >
+-	    len_of_ctxts) {
++	    ctxt_len) {
+ 		pr_err("Invalid signing algorithm count(%d)\n", sign_algo_cnt);
+ 		return;
+ 	}
+@@ -997,18 +1009,16 @@ static __le32 deassemble_neg_contexts(struct ksmbd_conn *conn,
+ 	len_of_ctxts = len_of_smb - offset;
+ 
+ 	while (i++ < neg_ctxt_cnt) {
+-		int clen;
+-
+-		/* check that offset is not beyond end of SMB */
+-		if (len_of_ctxts == 0)
+-			break;
++		int clen, ctxt_len;
+ 
+ 		if (len_of_ctxts < sizeof(struct smb2_neg_context))
+ 			break;
+ 
+ 		pctx = (struct smb2_neg_context *)((char *)pctx + offset);
+ 		clen = le16_to_cpu(pctx->DataLength);
+-		if (clen + sizeof(struct smb2_neg_context) > len_of_ctxts)
++		ctxt_len = clen + sizeof(struct smb2_neg_context);
++
++		if (ctxt_len > len_of_ctxts)
+ 			break;
+ 
+ 		if (pctx->ContextType == SMB2_PREAUTH_INTEGRITY_CAPABILITIES) {
+@@ -1019,7 +1029,7 @@ static __le32 deassemble_neg_contexts(struct ksmbd_conn *conn,
+ 
+ 			status = decode_preauth_ctxt(conn,
+ 						     (struct smb2_preauth_neg_context *)pctx,
+-						     len_of_ctxts);
++						     ctxt_len);
+ 			if (status != STATUS_SUCCESS)
+ 				break;
+ 		} else if (pctx->ContextType == SMB2_ENCRYPTION_CAPABILITIES) {
+@@ -1030,7 +1040,7 @@ static __le32 deassemble_neg_contexts(struct ksmbd_conn *conn,
+ 
+ 			decode_encrypt_ctxt(conn,
+ 					    (struct smb2_encryption_neg_context *)pctx,
+-					    len_of_ctxts);
++					    ctxt_len);
+ 		} else if (pctx->ContextType == SMB2_COMPRESSION_CAPABILITIES) {
+ 			ksmbd_debug(SMB,
+ 				    "deassemble SMB2_COMPRESSION_CAPABILITIES context\n");
+@@ -1049,9 +1059,10 @@ static __le32 deassemble_neg_contexts(struct ksmbd_conn *conn,
+ 		} else if (pctx->ContextType == SMB2_SIGNING_CAPABILITIES) {
+ 			ksmbd_debug(SMB,
+ 				    "deassemble SMB2_SIGNING_CAPABILITIES context\n");
++
+ 			decode_sign_cap_ctxt(conn,
+ 					     (struct smb2_signing_capabilities *)pctx,
+-					     len_of_ctxts);
++					     ctxt_len);
+ 		}
+ 
+ 		/* offsets must be 8 byte aligned */
+@@ -1085,16 +1096,16 @@ int smb2_handle_negotiate(struct ksmbd_work *work)
+ 		return rc;
+ 	}
+ 
+-	if (req->DialectCount == 0) {
+-		pr_err("malformed packet\n");
++	smb2_buf_len = get_rfc1002_len(work->request_buf);
++	smb2_neg_size = offsetof(struct smb2_negotiate_req, Dialects);
++	if (smb2_neg_size > smb2_buf_len) {
+ 		rsp->hdr.Status = STATUS_INVALID_PARAMETER;
+ 		rc = -EINVAL;
+ 		goto err_out;
+ 	}
+ 
+-	smb2_buf_len = get_rfc1002_len(work->request_buf);
+-	smb2_neg_size = offsetof(struct smb2_negotiate_req, Dialects);
+-	if (smb2_neg_size > smb2_buf_len) {
++	if (req->DialectCount == 0) {
++		pr_err("malformed packet\n");
+ 		rsp->hdr.Status = STATUS_INVALID_PARAMETER;
+ 		rc = -EINVAL;
+ 		goto err_out;
+@@ -4384,21 +4395,6 @@ static int get_file_basic_info(struct smb2_query_info_rsp *rsp,
+ 	return 0;
+ }
+ 
+-static unsigned long long get_allocation_size(struct inode *inode,
+-					      struct kstat *stat)
+-{
+-	unsigned long long alloc_size = 0;
+-
+-	if (!S_ISDIR(stat->mode)) {
+-		if ((inode->i_blocks << 9) <= stat->size)
+-			alloc_size = stat->size;
+-		else
+-			alloc_size = inode->i_blocks << 9;
+-	}
+-
+-	return alloc_size;
+-}
+-
+ static void get_file_standard_info(struct smb2_query_info_rsp *rsp,
+ 				   struct ksmbd_file *fp, void *rsp_org)
+ {
+@@ -4413,7 +4409,7 @@ static void get_file_standard_info(struct smb2_query_info_rsp *rsp,
+ 	sinfo = (struct smb2_file_standard_info *)rsp->Buffer;
+ 	delete_pending = ksmbd_inode_pending_delete(fp);
+ 
+-	sinfo->AllocationSize = cpu_to_le64(get_allocation_size(inode, &stat));
++	sinfo->AllocationSize = cpu_to_le64(inode->i_blocks << 9);
+ 	sinfo->EndOfFile = S_ISDIR(stat.mode) ? 0 : cpu_to_le64(stat.size);
+ 	sinfo->NumberOfLinks = cpu_to_le32(get_nlink(&stat) - delete_pending);
+ 	sinfo->DeletePending = delete_pending;
+@@ -4478,7 +4474,7 @@ static int get_file_all_info(struct ksmbd_work *work,
+ 	file_info->Attributes = fp->f_ci->m_fattr;
+ 	file_info->Pad1 = 0;
+ 	file_info->AllocationSize =
+-		cpu_to_le64(get_allocation_size(inode, &stat));
++		cpu_to_le64(inode->i_blocks << 9);
+ 	file_info->EndOfFile = S_ISDIR(stat.mode) ? 0 : cpu_to_le64(stat.size);
+ 	file_info->NumberOfLinks =
+ 			cpu_to_le32(get_nlink(&stat) - delete_pending);
+@@ -4667,7 +4663,7 @@ static int get_file_network_open_info(struct smb2_query_info_rsp *rsp,
+ 	file_info->ChangeTime = cpu_to_le64(time);
+ 	file_info->Attributes = fp->f_ci->m_fattr;
+ 	file_info->AllocationSize =
+-		cpu_to_le64(get_allocation_size(inode, &stat));
++		cpu_to_le64(inode->i_blocks << 9);
+ 	file_info->EndOfFile = S_ISDIR(stat.mode) ? 0 : cpu_to_le64(stat.size);
+ 	file_info->Reserved = cpu_to_le32(0);
+ 	rsp->OutputBufferLength =
+diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
+index 7b8f17ee52243..ba07757f3cd0a 100644
+--- a/fs/nfsd/nfsctl.c
++++ b/fs/nfsd/nfsctl.c
+@@ -702,16 +702,11 @@ static ssize_t __write_ports_addfd(char *buf, struct net *net, const struct cred
+ 	if (err != 0 || fd < 0)
+ 		return -EINVAL;
+ 
+-	if (svc_alien_sock(net, fd)) {
+-		printk(KERN_ERR "%s: socket net is different to NFSd's one\n", __func__);
+-		return -EINVAL;
+-	}
+-
+ 	err = nfsd_create_serv(net);
+ 	if (err != 0)
+ 		return err;
+ 
+-	err = svc_addsock(nn->nfsd_serv, fd, buf, SIMPLE_TRANSACTION_LIMIT, cred);
++	err = svc_addsock(nn->nfsd_serv, net, fd, buf, SIMPLE_TRANSACTION_LIMIT, cred);
+ 
+ 	if (err >= 0 &&
+ 	    !nn->nfsd_serv->sv_nrthreads && !xchg(&nn->keep_active, 1))
+diff --git a/fs/nfsd/vfs.c b/fs/nfsd/vfs.c
+index 5783209f17fc5..e4884dde048ce 100644
+--- a/fs/nfsd/vfs.c
++++ b/fs/nfsd/vfs.c
+@@ -536,7 +536,15 @@ nfsd_setattr(struct svc_rqst *rqstp, struct svc_fh *fhp,
+ 
+ 	inode_lock(inode);
+ 	for (retries = 1;;) {
+-		host_err = __nfsd_setattr(dentry, iap);
++		struct iattr attrs;
++
++		/*
++		 * notify_change() can alter its iattr argument, making
++		 * @iap unsuitable for submission multiple times. Make a
++		 * copy for every loop iteration.
++		 */
++		attrs = *iap;
++		host_err = __nfsd_setattr(dentry, &attrs);
+ 		if (host_err != -EAGAIN || !retries--)
+ 			break;
+ 		if (!nfsd_wait_for_delegreturn(rqstp, inode))
+diff --git a/fs/xfs/xfs_buf_item_recover.c b/fs/xfs/xfs_buf_item_recover.c
+index ffa94102094d9..43167f543afc3 100644
+--- a/fs/xfs/xfs_buf_item_recover.c
++++ b/fs/xfs/xfs_buf_item_recover.c
+@@ -943,6 +943,16 @@ xlog_recover_buf_commit_pass2(
+ 	if (lsn && lsn != -1 && XFS_LSN_CMP(lsn, current_lsn) >= 0) {
+ 		trace_xfs_log_recover_buf_skip(log, buf_f);
+ 		xlog_recover_validate_buf_type(mp, bp, buf_f, NULLCOMMITLSN);
++
++		/*
++		 * We're skipping replay of this buffer log item due to the log
++		 * item LSN being behind the ondisk buffer.  Verify the buffer
++		 * contents since we aren't going to run the write verifier.
++		 */
++		if (bp->b_ops) {
++			bp->b_ops->verify_read(bp);
++			error = bp->b_error;
++		}
+ 		goto out_release;
+ 	}
+ 
+diff --git a/include/linux/firmware/qcom/qcom_scm.h b/include/linux/firmware/qcom/qcom_scm.h
+index 1e449a5d7f5c1..250ea4efb7cb6 100644
+--- a/include/linux/firmware/qcom/qcom_scm.h
++++ b/include/linux/firmware/qcom/qcom_scm.h
+@@ -94,7 +94,7 @@ extern int qcom_scm_mem_protect_video_var(u32 cp_start, u32 cp_size,
+ 					  u32 cp_nonpixel_start,
+ 					  u32 cp_nonpixel_size);
+ extern int qcom_scm_assign_mem(phys_addr_t mem_addr, size_t mem_sz,
+-			       unsigned int *src,
++			       u64 *src,
+ 			       const struct qcom_scm_vmperm *newvm,
+ 			       unsigned int dest_cnt);
+ 
+diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h
+index 7e225e41d55b8..68a3183d5d589 100644
+--- a/include/linux/mlx5/driver.h
++++ b/include/linux/mlx5/driver.h
+@@ -1088,6 +1088,7 @@ void mlx5_cmdif_debugfs_cleanup(struct mlx5_core_dev *dev);
+ int mlx5_core_create_psv(struct mlx5_core_dev *dev, u32 pdn,
+ 			 int npsvs, u32 *sig_index);
+ int mlx5_core_destroy_psv(struct mlx5_core_dev *dev, int psv_num);
++__be32 mlx5_core_get_terminate_scatter_list_mkey(struct mlx5_core_dev *dev);
+ void mlx5_core_put_rsc(struct mlx5_core_rsc_common *common);
+ int mlx5_query_odp_caps(struct mlx5_core_dev *dev,
+ 			struct mlx5_odp_caps *odp_caps);
+diff --git a/include/linux/pe.h b/include/linux/pe.h
+index 6ffabf1e6d039..16754fb2f954a 100644
+--- a/include/linux/pe.h
++++ b/include/linux/pe.h
+@@ -11,25 +11,26 @@
+ #include <linux/types.h>
+ 
+ /*
+- * Linux EFI stub v1.0 adds the following functionality:
+- * - Loading initrd from the LINUX_EFI_INITRD_MEDIA_GUID device path,
+- * - Loading/starting the kernel from firmware that targets a different
+- *   machine type, via the entrypoint exposed in the .compat PE/COFF section.
++ * Starting from version v3.0, the major version field should be interpreted as
++ * a bit mask of features supported by the kernel's EFI stub:
++ * - 0x1: initrd loading from the LINUX_EFI_INITRD_MEDIA_GUID device path,
++ * - 0x2: initrd loading using the initrd= command line option, where the file
++ *        may be specified using device path notation, and is not required to
++ *        reside on the same volume as the loaded kernel image.
+  *
+  * The recommended way of loading and starting v1.0 or later kernels is to use
+  * the LoadImage() and StartImage() EFI boot services, and expose the initrd
+  * via the LINUX_EFI_INITRD_MEDIA_GUID device path.
+  *
+- * Versions older than v1.0 support initrd loading via the image load options
+- * (using initrd=, limited to the volume from which the kernel itself was
+- * loaded), or via arch specific means (bootparams, DT, etc).
++ * Versions older than v1.0 may support initrd loading via the image load
++ * options (using initrd=, limited to the volume from which the kernel itself
++ * was loaded), or only via arch specific means (bootparams, DT, etc).
+  *
+- * On x86, LoadImage() and StartImage() can be omitted if the EFI handover
+- * protocol is implemented, which can be inferred from the version,
+- * handover_offset and xloadflags fields in the bootparams structure.
++ * The minor version field must remain 0x0.
++ * (https://lore.kernel.org/all/efd6f2d4-547c-1378-1faa-53c044dbd297@gmail.com/)
+  */
+-#define LINUX_EFISTUB_MAJOR_VERSION		0x1
+-#define LINUX_EFISTUB_MINOR_VERSION		0x1
++#define LINUX_EFISTUB_MAJOR_VERSION		0x3
++#define LINUX_EFISTUB_MINOR_VERSION		0x0
+ 
+ /*
+  * LINUX_PE_MAGIC appears at offset 0x38 into the MS-DOS header of EFI bootable
+diff --git a/include/linux/sunrpc/svcsock.h b/include/linux/sunrpc/svcsock.h
+index bcc555c7ae9c6..13aff355d5a13 100644
+--- a/include/linux/sunrpc/svcsock.h
++++ b/include/linux/sunrpc/svcsock.h
+@@ -59,10 +59,9 @@ int		svc_recv(struct svc_rqst *, long);
+ int		svc_send(struct svc_rqst *);
+ void		svc_drop(struct svc_rqst *);
+ void		svc_sock_update_bufs(struct svc_serv *serv);
+-bool		svc_alien_sock(struct net *net, int fd);
+-int		svc_addsock(struct svc_serv *serv, const int fd,
+-					char *name_return, const size_t len,
+-					const struct cred *cred);
++int		svc_addsock(struct svc_serv *serv, struct net *net,
++			    const int fd, char *name_return, const size_t len,
++			    const struct cred *cred);
+ void		svc_init_xprt_sock(void);
+ void		svc_cleanup_xprt_sock(void);
+ struct svc_xprt *svc_sock_create(struct svc_serv *serv, int prot);
+diff --git a/include/media/dvb_frontend.h b/include/media/dvb_frontend.h
+index e7c44870f20de..367d5381217b5 100644
+--- a/include/media/dvb_frontend.h
++++ b/include/media/dvb_frontend.h
+@@ -686,7 +686,10 @@ struct dtv_frontend_properties {
+  * @id:			Frontend ID
+  * @exit:		Used to inform the DVB core that the frontend
+  *			thread should exit (usually, means that the hardware
+- *			got disconnected.
++ *			got disconnected).
++ * @remove_mutex:	mutex that avoids a race condition between a callback
++ *			called when the hardware is disconnected and the
++ *			file_operations of dvb_frontend.
+  */
+ 
+ struct dvb_frontend {
+@@ -704,6 +707,7 @@ struct dvb_frontend {
+ 	int (*callback)(void *adapter_priv, int component, int cmd, int arg);
+ 	int id;
+ 	unsigned int exit;
++	struct mutex remove_mutex;
+ };
+ 
+ /**
+diff --git a/include/media/dvb_net.h b/include/media/dvb_net.h
+index 5e31d37f25fac..cc01dffcc9f35 100644
+--- a/include/media/dvb_net.h
++++ b/include/media/dvb_net.h
+@@ -41,6 +41,9 @@
+  * @exit:		flag to indicate when the device is being removed.
+  * @demux:		pointer to &struct dmx_demux.
+  * @ioctl_mutex:	protect access to this struct.
++ * @remove_mutex:	mutex that avoids a race condition between a callback
++ *			called when the hardware is disconnected and the
++ *			file_operations of dvb_net.
+  *
+  * Currently, the core supports up to %DVB_NET_DEVICES_MAX (10) network
+  * devices.
+@@ -53,6 +56,7 @@ struct dvb_net {
+ 	unsigned int exit:1;
+ 	struct dmx_demux *demux;
+ 	struct mutex ioctl_mutex;
++	struct mutex remove_mutex;
+ };
+ 
+ /**
+diff --git a/include/media/dvbdev.h b/include/media/dvbdev.h
+index 29d25c8a6f13f..8958e5e2fc5b7 100644
+--- a/include/media/dvbdev.h
++++ b/include/media/dvbdev.h
+@@ -193,6 +193,21 @@ struct dvb_device {
+ 	void *priv;
+ };
+ 
++/**
++ * struct dvbdevfops_node - fops nodes registered in dvbdevfops_list
++ *
++ * @fops:		Dynamically allocated fops for ->owner registration
++ * @type:		type of dvb_device
++ * @template:		dvb_device used for registration
++ * @list_head:		list_head for dvbdevfops_list
++ */
++struct dvbdevfops_node {
++	struct file_operations *fops;
++	enum dvb_device_type type;
++	const struct dvb_device *template;
++	struct list_head list_head;
++};
++
+ /**
+  * dvb_device_get - Increase dvb_device reference
+  *
+diff --git a/include/net/sock.h b/include/net/sock.h
+index 9cd0354221507..45e46a1c4afc6 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -336,6 +336,7 @@ struct sk_filter;
+   *	@sk_cgrp_data: cgroup data for this cgroup
+   *	@sk_memcg: this socket's memory cgroup association
+   *	@sk_write_pending: a write to stream socket waits to start
++  *	@sk_wait_pending: number of threads blocked on this socket
+   *	@sk_state_change: callback to indicate change in the state of the sock
+   *	@sk_data_ready: callback to indicate there is data to be processed
+   *	@sk_write_space: callback to indicate there is bf sending space available
+@@ -428,6 +429,7 @@ struct sock {
+ 	unsigned int		sk_napi_id;
+ #endif
+ 	int			sk_rcvbuf;
++	int			sk_wait_pending;
+ 
+ 	struct sk_filter __rcu	*sk_filter;
+ 	union {
+@@ -1174,6 +1176,7 @@ static inline void sock_rps_reset_rxhash(struct sock *sk)
+ 
+ #define sk_wait_event(__sk, __timeo, __condition, __wait)		\
+ 	({	int __rc;						\
++		__sk->sk_wait_pending++;				\
+ 		release_sock(__sk);					\
+ 		__rc = __condition;					\
+ 		if (!__rc) {						\
+@@ -1183,6 +1186,7 @@ static inline void sock_rps_reset_rxhash(struct sock *sk)
+ 		}							\
+ 		sched_annotate_sleep();					\
+ 		lock_sock(__sk);					\
++		__sk->sk_wait_pending--;				\
+ 		__rc = __condition;					\
+ 		__rc;							\
+ 	})
+diff --git a/include/net/tcp.h b/include/net/tcp.h
+index 76bf0a11bdc77..99c74fc300839 100644
+--- a/include/net/tcp.h
++++ b/include/net/tcp.h
+@@ -632,6 +632,7 @@ void tcp_reset(struct sock *sk, struct sk_buff *skb);
+ void tcp_skb_mark_lost_uncond_verify(struct tcp_sock *tp, struct sk_buff *skb);
+ void tcp_fin(struct sock *sk);
+ void tcp_check_space(struct sock *sk);
++void tcp_sack_compress_send_ack(struct sock *sk);
+ 
+ /* tcp_timer.c */
+ void tcp_init_xmit_timers(struct sock *);
+diff --git a/include/ufs/ufshcd.h b/include/ufs/ufshcd.h
+index 431c3afb2ce0f..db70944c681aa 100644
+--- a/include/ufs/ufshcd.h
++++ b/include/ufs/ufshcd.h
+@@ -1138,7 +1138,7 @@ static inline size_t ufshcd_sg_entry_size(const struct ufs_hba *hba)
+ 	({ (void)(hba); BUILD_BUG_ON(sg_entry_size != sizeof(struct ufshcd_sg_entry)); })
+ #endif
+ 
+-static inline size_t sizeof_utp_transfer_cmd_desc(const struct ufs_hba *hba)
++static inline size_t ufshcd_get_ucd_size(const struct ufs_hba *hba)
+ {
+ 	return sizeof(struct utp_transfer_cmd_desc) + SG_ALL * ufshcd_sg_entry_size(hba);
+ }
+diff --git a/io_uring/epoll.c b/io_uring/epoll.c
+index 9aa74d2c80bc4..89bff2068a190 100644
+--- a/io_uring/epoll.c
++++ b/io_uring/epoll.c
+@@ -25,10 +25,6 @@ int io_epoll_ctl_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ {
+ 	struct io_epoll *epoll = io_kiocb_to_cmd(req, struct io_epoll);
+ 
+-	pr_warn_once("%s: epoll_ctl support in io_uring is deprecated and will "
+-		     "be removed in a future Linux kernel version.\n",
+-		     current->comm);
+-
+ 	if (sqe->buf_index || sqe->splice_fd_in)
+ 		return -EINVAL;
+ 
+diff --git a/kernel/module/decompress.c b/kernel/module/decompress.c
+index 7ddc87bee2741..1c076b7aa660e 100644
+--- a/kernel/module/decompress.c
++++ b/kernel/module/decompress.c
+@@ -257,7 +257,7 @@ static ssize_t module_zstd_decompress(struct load_info *info,
+ 	do {
+ 		struct page *page = module_get_next_page(info);
+ 
+-		if (!IS_ERR(page)) {
++		if (IS_ERR(page)) {
+ 			retval = PTR_ERR(page);
+ 			goto out;
+ 		}
+diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
+index 486cca3c2b754..543cb7dc84ad4 100644
+--- a/kernel/trace/trace_events_hist.c
++++ b/kernel/trace/trace_events_hist.c
+@@ -4238,13 +4238,19 @@ static int __create_val_field(struct hist_trigger_data *hist_data,
+ 		goto out;
+ 	}
+ 
+-	/* Some types cannot be a value */
+-	if (hist_field->flags & (HIST_FIELD_FL_GRAPH | HIST_FIELD_FL_PERCENT |
+-				 HIST_FIELD_FL_BUCKET | HIST_FIELD_FL_LOG2 |
+-				 HIST_FIELD_FL_SYM | HIST_FIELD_FL_SYM_OFFSET |
+-				 HIST_FIELD_FL_SYSCALL | HIST_FIELD_FL_STACKTRACE)) {
+-		hist_err(file->tr, HIST_ERR_BAD_FIELD_MODIFIER, errpos(field_str));
+-		ret = -EINVAL;
++	/* values and variables should not have some modifiers */
++	if (hist_field->flags & HIST_FIELD_FL_VAR) {
++		/* Variable */
++		if (hist_field->flags & (HIST_FIELD_FL_GRAPH | HIST_FIELD_FL_PERCENT |
++					 HIST_FIELD_FL_BUCKET | HIST_FIELD_FL_LOG2))
++			goto err;
++	} else {
++		/* Value */
++		if (hist_field->flags & (HIST_FIELD_FL_GRAPH | HIST_FIELD_FL_PERCENT |
++					 HIST_FIELD_FL_BUCKET | HIST_FIELD_FL_LOG2 |
++					 HIST_FIELD_FL_SYM | HIST_FIELD_FL_SYM_OFFSET |
++					 HIST_FIELD_FL_SYSCALL | HIST_FIELD_FL_STACKTRACE))
++			goto err;
+ 	}
+ 
+ 	hist_data->fields[val_idx] = hist_field;
+@@ -4256,6 +4262,9 @@ static int __create_val_field(struct hist_trigger_data *hist_data,
+ 		ret = -EINVAL;
+  out:
+ 	return ret;
++ err:
++	hist_err(file->tr, HIST_ERR_BAD_FIELD_MODIFIER, errpos(field_str));
++	return -EINVAL;
+ }
+ 
+ static int create_val_field(struct hist_trigger_data *hist_data,
+diff --git a/kernel/trace/trace_osnoise.c b/kernel/trace/trace_osnoise.c
+index 4496975f2029f..77360d4069f2b 100644
+--- a/kernel/trace/trace_osnoise.c
++++ b/kernel/trace/trace_osnoise.c
+@@ -1652,6 +1652,8 @@ static enum hrtimer_restart timerlat_irq(struct hrtimer *timer)
+ 			osnoise_stop_tracing();
+ 			notify_new_max_latency(diff);
+ 
++			wake_up_process(tlat->kthread);
++
+ 			return HRTIMER_NORESTART;
+ 		}
+ 	}
+diff --git a/kernel/trace/trace_probe.h b/kernel/trace/trace_probe.h
+index ef8ed3b65d055..6a4ecfb1da438 100644
+--- a/kernel/trace/trace_probe.h
++++ b/kernel/trace/trace_probe.h
+@@ -308,7 +308,7 @@ trace_probe_primary_from_call(struct trace_event_call *call)
+ {
+ 	struct trace_probe_event *tpe = trace_probe_event_from_call(call);
+ 
+-	return list_first_entry(&tpe->probes, struct trace_probe, list);
++	return list_first_entry_or_null(&tpe->probes, struct trace_probe, list);
+ }
+ 
+ static inline struct list_head *trace_probe_probe_list(struct trace_probe *tp)
+diff --git a/lib/test_firmware.c b/lib/test_firmware.c
+index 05ed84c2fc4ce..1d7d480b8eeb3 100644
+--- a/lib/test_firmware.c
++++ b/lib/test_firmware.c
+@@ -45,6 +45,7 @@ struct test_batched_req {
+ 	bool sent;
+ 	const struct firmware *fw;
+ 	const char *name;
++	const char *fw_buf;
+ 	struct completion completion;
+ 	struct task_struct *task;
+ 	struct device *dev;
+@@ -175,8 +176,14 @@ static void __test_release_all_firmware(void)
+ 
+ 	for (i = 0; i < test_fw_config->num_requests; i++) {
+ 		req = &test_fw_config->reqs[i];
+-		if (req->fw)
++		if (req->fw) {
++			if (req->fw_buf) {
++				kfree_const(req->fw_buf);
++				req->fw_buf = NULL;
++			}
+ 			release_firmware(req->fw);
++			req->fw = NULL;
++		}
+ 	}
+ 
+ 	vfree(test_fw_config->reqs);
+@@ -353,16 +360,26 @@ static ssize_t config_test_show_str(char *dst,
+ 	return len;
+ }
+ 
+-static int test_dev_config_update_bool(const char *buf, size_t size,
++static inline int __test_dev_config_update_bool(const char *buf, size_t size,
+ 				       bool *cfg)
+ {
+ 	int ret;
+ 
+-	mutex_lock(&test_fw_mutex);
+ 	if (kstrtobool(buf, cfg) < 0)
+ 		ret = -EINVAL;
+ 	else
+ 		ret = size;
++
++	return ret;
++}
++
++static int test_dev_config_update_bool(const char *buf, size_t size,
++				       bool *cfg)
++{
++	int ret;
++
++	mutex_lock(&test_fw_mutex);
++	ret = __test_dev_config_update_bool(buf, size, cfg);
+ 	mutex_unlock(&test_fw_mutex);
+ 
+ 	return ret;
+@@ -373,7 +390,8 @@ static ssize_t test_dev_config_show_bool(char *buf, bool val)
+ 	return snprintf(buf, PAGE_SIZE, "%d\n", val);
+ }
+ 
+-static int test_dev_config_update_size_t(const char *buf,
++static int __test_dev_config_update_size_t(
++					 const char *buf,
+ 					 size_t size,
+ 					 size_t *cfg)
+ {
+@@ -384,9 +402,7 @@ static int test_dev_config_update_size_t(const char *buf,
+ 	if (ret)
+ 		return ret;
+ 
+-	mutex_lock(&test_fw_mutex);
+ 	*(size_t *)cfg = new;
+-	mutex_unlock(&test_fw_mutex);
+ 
+ 	/* Always return full write size even if we didn't consume all */
+ 	return size;
+@@ -402,7 +418,7 @@ static ssize_t test_dev_config_show_int(char *buf, int val)
+ 	return snprintf(buf, PAGE_SIZE, "%d\n", val);
+ }
+ 
+-static int test_dev_config_update_u8(const char *buf, size_t size, u8 *cfg)
++static int __test_dev_config_update_u8(const char *buf, size_t size, u8 *cfg)
+ {
+ 	u8 val;
+ 	int ret;
+@@ -411,14 +427,23 @@ static int test_dev_config_update_u8(const char *buf, size_t size, u8 *cfg)
+ 	if (ret)
+ 		return ret;
+ 
+-	mutex_lock(&test_fw_mutex);
+ 	*(u8 *)cfg = val;
+-	mutex_unlock(&test_fw_mutex);
+ 
+ 	/* Always return full write size even if we didn't consume all */
+ 	return size;
+ }
+ 
++static int test_dev_config_update_u8(const char *buf, size_t size, u8 *cfg)
++{
++	int ret;
++
++	mutex_lock(&test_fw_mutex);
++	ret = __test_dev_config_update_u8(buf, size, cfg);
++	mutex_unlock(&test_fw_mutex);
++
++	return ret;
++}
++
+ static ssize_t test_dev_config_show_u8(char *buf, u8 val)
+ {
+ 	return snprintf(buf, PAGE_SIZE, "%u\n", val);
+@@ -471,10 +496,10 @@ static ssize_t config_num_requests_store(struct device *dev,
+ 		mutex_unlock(&test_fw_mutex);
+ 		goto out;
+ 	}
+-	mutex_unlock(&test_fw_mutex);
+ 
+-	rc = test_dev_config_update_u8(buf, count,
+-				       &test_fw_config->num_requests);
++	rc = __test_dev_config_update_u8(buf, count,
++					 &test_fw_config->num_requests);
++	mutex_unlock(&test_fw_mutex);
+ 
+ out:
+ 	return rc;
+@@ -518,10 +543,10 @@ static ssize_t config_buf_size_store(struct device *dev,
+ 		mutex_unlock(&test_fw_mutex);
+ 		goto out;
+ 	}
+-	mutex_unlock(&test_fw_mutex);
+ 
+-	rc = test_dev_config_update_size_t(buf, count,
+-					   &test_fw_config->buf_size);
++	rc = __test_dev_config_update_size_t(buf, count,
++					     &test_fw_config->buf_size);
++	mutex_unlock(&test_fw_mutex);
+ 
+ out:
+ 	return rc;
+@@ -548,10 +573,10 @@ static ssize_t config_file_offset_store(struct device *dev,
+ 		mutex_unlock(&test_fw_mutex);
+ 		goto out;
+ 	}
+-	mutex_unlock(&test_fw_mutex);
+ 
+-	rc = test_dev_config_update_size_t(buf, count,
+-					   &test_fw_config->file_offset);
++	rc = __test_dev_config_update_size_t(buf, count,
++					     &test_fw_config->file_offset);
++	mutex_unlock(&test_fw_mutex);
+ 
+ out:
+ 	return rc;
+@@ -652,6 +677,8 @@ static ssize_t trigger_request_store(struct device *dev,
+ 
+ 	mutex_lock(&test_fw_mutex);
+ 	release_firmware(test_firmware);
++	if (test_fw_config->reqs)
++		__test_release_all_firmware();
+ 	test_firmware = NULL;
+ 	rc = request_firmware(&test_firmware, name, dev);
+ 	if (rc) {
+@@ -752,6 +779,8 @@ static ssize_t trigger_async_request_store(struct device *dev,
+ 	mutex_lock(&test_fw_mutex);
+ 	release_firmware(test_firmware);
+ 	test_firmware = NULL;
++	if (test_fw_config->reqs)
++		__test_release_all_firmware();
+ 	rc = request_firmware_nowait(THIS_MODULE, 1, name, dev, GFP_KERNEL,
+ 				     NULL, trigger_async_request_cb);
+ 	if (rc) {
+@@ -794,6 +823,8 @@ static ssize_t trigger_custom_fallback_store(struct device *dev,
+ 
+ 	mutex_lock(&test_fw_mutex);
+ 	release_firmware(test_firmware);
++	if (test_fw_config->reqs)
++		__test_release_all_firmware();
+ 	test_firmware = NULL;
+ 	rc = request_firmware_nowait(THIS_MODULE, FW_ACTION_NOUEVENT, name,
+ 				     dev, GFP_KERNEL, NULL,
+@@ -856,6 +887,8 @@ static int test_fw_run_batch_request(void *data)
+ 						 test_fw_config->buf_size);
+ 		if (!req->fw)
+ 			kfree(test_buf);
++		else
++			req->fw_buf = test_buf;
+ 	} else {
+ 		req->rc = test_fw_config->req_firmware(&req->fw,
+ 						       req->name,
+@@ -895,6 +928,11 @@ static ssize_t trigger_batched_requests_store(struct device *dev,
+ 
+ 	mutex_lock(&test_fw_mutex);
+ 
++	if (test_fw_config->reqs) {
++		rc = -EBUSY;
++		goto out_bail;
++	}
++
+ 	test_fw_config->reqs =
+ 		vzalloc(array3_size(sizeof(struct test_batched_req),
+ 				    test_fw_config->num_requests, 2));
+@@ -911,6 +949,7 @@ static ssize_t trigger_batched_requests_store(struct device *dev,
+ 		req->fw = NULL;
+ 		req->idx = i;
+ 		req->name = test_fw_config->name;
++		req->fw_buf = NULL;
+ 		req->dev = dev;
+ 		init_completion(&req->completion);
+ 		req->task = kthread_run(test_fw_run_batch_request, req,
+@@ -993,6 +1032,11 @@ ssize_t trigger_batched_requests_async_store(struct device *dev,
+ 
+ 	mutex_lock(&test_fw_mutex);
+ 
++	if (test_fw_config->reqs) {
++		rc = -EBUSY;
++		goto out_bail;
++	}
++
+ 	test_fw_config->reqs =
+ 		vzalloc(array3_size(sizeof(struct test_batched_req),
+ 				    test_fw_config->num_requests, 2));
+@@ -1010,6 +1054,7 @@ ssize_t trigger_batched_requests_async_store(struct device *dev,
+ 	for (i = 0; i < test_fw_config->num_requests; i++) {
+ 		req = &test_fw_config->reqs[i];
+ 		req->name = test_fw_config->name;
++		req->fw_buf = NULL;
+ 		req->fw = NULL;
+ 		req->idx = i;
+ 		init_completion(&req->completion);
+diff --git a/net/atm/resources.c b/net/atm/resources.c
+index 2b2d33eeaf200..995d29e7fb138 100644
+--- a/net/atm/resources.c
++++ b/net/atm/resources.c
+@@ -400,6 +400,7 @@ done:
+ 	return error;
+ }
+ 
++#ifdef CONFIG_PROC_FS
+ void *atm_dev_seq_start(struct seq_file *seq, loff_t *pos)
+ {
+ 	mutex_lock(&atm_dev_mutex);
+@@ -415,3 +416,4 @@ void *atm_dev_seq_next(struct seq_file *seq, void *v, loff_t *pos)
+ {
+ 	return seq_list_next(v, &atm_devs, pos);
+ }
++#endif
+diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
+index 6e44e92ebdf5d..f235cc6832767 100644
+--- a/net/core/rtnetlink.c
++++ b/net/core/rtnetlink.c
+@@ -2382,6 +2382,37 @@ static int validate_linkmsg(struct net_device *dev, struct nlattr *tb[],
+ 		if (tb[IFLA_BROADCAST] &&
+ 		    nla_len(tb[IFLA_BROADCAST]) < dev->addr_len)
+ 			return -EINVAL;
++
++		if (tb[IFLA_GSO_MAX_SIZE] &&
++		    nla_get_u32(tb[IFLA_GSO_MAX_SIZE]) > dev->tso_max_size) {
++			NL_SET_ERR_MSG(extack, "too big gso_max_size");
++			return -EINVAL;
++		}
++
++		if (tb[IFLA_GSO_MAX_SEGS] &&
++		    (nla_get_u32(tb[IFLA_GSO_MAX_SEGS]) > GSO_MAX_SEGS ||
++		     nla_get_u32(tb[IFLA_GSO_MAX_SEGS]) > dev->tso_max_segs)) {
++			NL_SET_ERR_MSG(extack, "too big gso_max_segs");
++			return -EINVAL;
++		}
++
++		if (tb[IFLA_GRO_MAX_SIZE] &&
++		    nla_get_u32(tb[IFLA_GRO_MAX_SIZE]) > GRO_MAX_SIZE) {
++			NL_SET_ERR_MSG(extack, "too big gro_max_size");
++			return -EINVAL;
++		}
++
++		if (tb[IFLA_GSO_IPV4_MAX_SIZE] &&
++		    nla_get_u32(tb[IFLA_GSO_IPV4_MAX_SIZE]) > dev->tso_max_size) {
++			NL_SET_ERR_MSG(extack, "too big gso_ipv4_max_size");
++			return -EINVAL;
++		}
++
++		if (tb[IFLA_GRO_IPV4_MAX_SIZE] &&
++		    nla_get_u32(tb[IFLA_GRO_IPV4_MAX_SIZE]) > GRO_MAX_SIZE) {
++			NL_SET_ERR_MSG(extack, "too big gro_ipv4_max_size");
++			return -EINVAL;
++		}
+ 	}
+ 
+ 	if (tb[IFLA_AF_SPEC]) {
+@@ -2855,11 +2886,6 @@ static int do_setlink(const struct sk_buff *skb,
+ 	if (tb[IFLA_GSO_MAX_SIZE]) {
+ 		u32 max_size = nla_get_u32(tb[IFLA_GSO_MAX_SIZE]);
+ 
+-		if (max_size > dev->tso_max_size) {
+-			err = -EINVAL;
+-			goto errout;
+-		}
+-
+ 		if (dev->gso_max_size ^ max_size) {
+ 			netif_set_gso_max_size(dev, max_size);
+ 			status |= DO_SETLINK_MODIFIED;
+@@ -2869,11 +2895,6 @@ static int do_setlink(const struct sk_buff *skb,
+ 	if (tb[IFLA_GSO_MAX_SEGS]) {
+ 		u32 max_segs = nla_get_u32(tb[IFLA_GSO_MAX_SEGS]);
+ 
+-		if (max_segs > GSO_MAX_SEGS || max_segs > dev->tso_max_segs) {
+-			err = -EINVAL;
+-			goto errout;
+-		}
+-
+ 		if (dev->gso_max_segs ^ max_segs) {
+ 			netif_set_gso_max_segs(dev, max_segs);
+ 			status |= DO_SETLINK_MODIFIED;
+@@ -2892,11 +2913,6 @@ static int do_setlink(const struct sk_buff *skb,
+ 	if (tb[IFLA_GSO_IPV4_MAX_SIZE]) {
+ 		u32 max_size = nla_get_u32(tb[IFLA_GSO_IPV4_MAX_SIZE]);
+ 
+-		if (max_size > dev->tso_max_size) {
+-			err = -EINVAL;
+-			goto errout;
+-		}
+-
+ 		if (dev->gso_ipv4_max_size ^ max_size) {
+ 			netif_set_gso_ipv4_max_size(dev, max_size);
+ 			status |= DO_SETLINK_MODIFIED;
+@@ -3282,6 +3298,7 @@ struct net_device *rtnl_create_link(struct net *net, const char *ifname,
+ 	struct net_device *dev;
+ 	unsigned int num_tx_queues = 1;
+ 	unsigned int num_rx_queues = 1;
++	int err;
+ 
+ 	if (tb[IFLA_NUM_TX_QUEUES])
+ 		num_tx_queues = nla_get_u32(tb[IFLA_NUM_TX_QUEUES]);
+@@ -3317,13 +3334,18 @@ struct net_device *rtnl_create_link(struct net *net, const char *ifname,
+ 	if (!dev)
+ 		return ERR_PTR(-ENOMEM);
+ 
++	err = validate_linkmsg(dev, tb, extack);
++	if (err < 0) {
++		free_netdev(dev);
++		return ERR_PTR(err);
++	}
++
+ 	dev_net_set(dev, net);
+ 	dev->rtnl_link_ops = ops;
+ 	dev->rtnl_link_state = RTNL_LINK_INITIALIZING;
+ 
+ 	if (tb[IFLA_MTU]) {
+ 		u32 mtu = nla_get_u32(tb[IFLA_MTU]);
+-		int err;
+ 
+ 		err = dev_validate_mtu(dev, mtu, extack);
+ 		if (err) {
+diff --git a/net/core/sock.c b/net/core/sock.c
+index c258887953905..3fd71f343c9f2 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -2386,7 +2386,6 @@ void sk_setup_caps(struct sock *sk, struct dst_entry *dst)
+ {
+ 	u32 max_segs = 1;
+ 
+-	sk_dst_set(sk, dst);
+ 	sk->sk_route_caps = dst->dev->features;
+ 	if (sk_is_tcp(sk))
+ 		sk->sk_route_caps |= NETIF_F_GSO;
+@@ -2405,6 +2404,7 @@ void sk_setup_caps(struct sock *sk, struct dst_entry *dst)
+ 		}
+ 	}
+ 	sk->sk_gso_max_segs = max_segs;
++	sk_dst_set(sk, dst);
+ }
+ EXPORT_SYMBOL_GPL(sk_setup_caps);
+ 
+diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c
+index 70fd769f1174b..daeec363b0976 100644
+--- a/net/ipv4/af_inet.c
++++ b/net/ipv4/af_inet.c
+@@ -586,6 +586,7 @@ static long inet_wait_for_connect(struct sock *sk, long timeo, int writebias)
+ 
+ 	add_wait_queue(sk_sleep(sk), &wait);
+ 	sk->sk_write_pending += writebias;
++	sk->sk_wait_pending++;
+ 
+ 	/* Basic assumption: if someone sets sk->sk_err, he _must_
+ 	 * change state of the socket from TCP_SYN_*.
+@@ -601,6 +602,7 @@ static long inet_wait_for_connect(struct sock *sk, long timeo, int writebias)
+ 	}
+ 	remove_wait_queue(sk_sleep(sk), &wait);
+ 	sk->sk_write_pending -= writebias;
++	sk->sk_wait_pending--;
+ 	return timeo;
+ }
+ 
+diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
+index 65ad4251f6fd8..1386787eaf1a5 100644
+--- a/net/ipv4/inet_connection_sock.c
++++ b/net/ipv4/inet_connection_sock.c
+@@ -1142,6 +1142,7 @@ struct sock *inet_csk_clone_lock(const struct sock *sk,
+ 	if (newsk) {
+ 		struct inet_connection_sock *newicsk = inet_csk(newsk);
+ 
++		newsk->sk_wait_pending = 0;
+ 		inet_sk_set_state(newsk, TCP_SYN_RECV);
+ 		newicsk->icsk_bind_hash = NULL;
+ 		newicsk->icsk_bind2_hash = NULL;
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index ed63ee8f0d7e3..6bb8eb8031051 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -3080,6 +3080,12 @@ int tcp_disconnect(struct sock *sk, int flags)
+ 	int old_state = sk->sk_state;
+ 	u32 seq;
+ 
++	/* Deny disconnect if other threads are blocked in sk_wait_event()
++	 * or inet_wait_for_connect().
++	 */
++	if (sk->sk_wait_pending)
++		return -EBUSY;
++
+ 	if (old_state != TCP_CLOSE)
+ 		tcp_set_state(sk, TCP_CLOSE);
+ 
+@@ -4071,7 +4077,8 @@ int do_tcp_getsockopt(struct sock *sk, int level,
+ 	switch (optname) {
+ 	case TCP_MAXSEG:
+ 		val = tp->mss_cache;
+-		if (!val && ((1 << sk->sk_state) & (TCPF_CLOSE | TCPF_LISTEN)))
++		if (tp->rx_opt.user_mss &&
++		    ((1 << sk->sk_state) & (TCPF_CLOSE | TCPF_LISTEN)))
+ 			val = tp->rx_opt.user_mss;
+ 		if (tp->repair)
+ 			val = tp->rx_opt.mss_clamp;
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index 10776c54ff784..dee174c40e874 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -4530,7 +4530,7 @@ static void tcp_sack_maybe_coalesce(struct tcp_sock *tp)
+ 	}
+ }
+ 
+-static void tcp_sack_compress_send_ack(struct sock *sk)
++void tcp_sack_compress_send_ack(struct sock *sk)
+ {
+ 	struct tcp_sock *tp = tcp_sk(sk);
+ 
+diff --git a/net/ipv4/tcp_timer.c b/net/ipv4/tcp_timer.c
+index cb79127f45c34..0b5d0a2867a8c 100644
+--- a/net/ipv4/tcp_timer.c
++++ b/net/ipv4/tcp_timer.c
+@@ -290,9 +290,19 @@ static int tcp_write_timeout(struct sock *sk)
+ void tcp_delack_timer_handler(struct sock *sk)
+ {
+ 	struct inet_connection_sock *icsk = inet_csk(sk);
++	struct tcp_sock *tp = tcp_sk(sk);
+ 
+-	if (((1 << sk->sk_state) & (TCPF_CLOSE | TCPF_LISTEN)) ||
+-	    !(icsk->icsk_ack.pending & ICSK_ACK_TIMER))
++	if ((1 << sk->sk_state) & (TCPF_CLOSE | TCPF_LISTEN))
++		return;
++
++	/* Handling the sack compression case */
++	if (tp->compressed_ack) {
++		tcp_mstamp_refresh(tp);
++		tcp_sack_compress_send_ack(sk);
++		return;
++	}
++
++	if (!(icsk->icsk_ack.pending & ICSK_ACK_TIMER))
+ 		return;
+ 
+ 	if (time_after(icsk->icsk_ack.timeout, jiffies)) {
+@@ -312,7 +322,7 @@ void tcp_delack_timer_handler(struct sock *sk)
+ 			inet_csk_exit_pingpong_mode(sk);
+ 			icsk->icsk_ack.ato      = TCP_ATO_MIN;
+ 		}
+-		tcp_mstamp_refresh(tcp_sk(sk));
++		tcp_mstamp_refresh(tp);
+ 		tcp_send_ack(sk);
+ 		__NET_INC_STATS(sock_net(sk), LINUX_MIB_DELAYEDACKS);
+ 	}
+diff --git a/net/mac80211/chan.c b/net/mac80211/chan.c
+index dbc34fbe7c8f4..77c90ed8f5d7d 100644
+--- a/net/mac80211/chan.c
++++ b/net/mac80211/chan.c
+@@ -258,7 +258,8 @@ ieee80211_get_max_required_bw(struct ieee80211_sub_if_data *sdata,
+ 
+ static enum nl80211_chan_width
+ ieee80211_get_chanctx_vif_max_required_bw(struct ieee80211_sub_if_data *sdata,
+-					  struct ieee80211_chanctx_conf *conf)
++					  struct ieee80211_chanctx *ctx,
++					  struct ieee80211_link_data *rsvd_for)
+ {
+ 	enum nl80211_chan_width max_bw = NL80211_CHAN_WIDTH_20_NOHT;
+ 	struct ieee80211_vif *vif = &sdata->vif;
+@@ -267,13 +268,14 @@ ieee80211_get_chanctx_vif_max_required_bw(struct ieee80211_sub_if_data *sdata,
+ 	rcu_read_lock();
+ 	for (link_id = 0; link_id < ARRAY_SIZE(sdata->link); link_id++) {
+ 		enum nl80211_chan_width width = NL80211_CHAN_WIDTH_20_NOHT;
+-		struct ieee80211_bss_conf *link_conf =
+-			rcu_dereference(sdata->vif.link_conf[link_id]);
++		struct ieee80211_link_data *link =
++			rcu_dereference(sdata->link[link_id]);
+ 
+-		if (!link_conf)
++		if (!link)
+ 			continue;
+ 
+-		if (rcu_access_pointer(link_conf->chanctx_conf) != conf)
++		if (link != rsvd_for &&
++		    rcu_access_pointer(link->conf->chanctx_conf) != &ctx->conf)
+ 			continue;
+ 
+ 		switch (vif->type) {
+@@ -287,7 +289,7 @@ ieee80211_get_chanctx_vif_max_required_bw(struct ieee80211_sub_if_data *sdata,
+ 			 * point, so take the width from the chandef, but
+ 			 * account also for TDLS peers
+ 			 */
+-			width = max(link_conf->chandef.width,
++			width = max(link->conf->chandef.width,
+ 				    ieee80211_get_max_required_bw(sdata, link_id));
+ 			break;
+ 		case NL80211_IFTYPE_P2P_DEVICE:
+@@ -296,7 +298,7 @@ ieee80211_get_chanctx_vif_max_required_bw(struct ieee80211_sub_if_data *sdata,
+ 		case NL80211_IFTYPE_ADHOC:
+ 		case NL80211_IFTYPE_MESH_POINT:
+ 		case NL80211_IFTYPE_OCB:
+-			width = link_conf->chandef.width;
++			width = link->conf->chandef.width;
+ 			break;
+ 		case NL80211_IFTYPE_WDS:
+ 		case NL80211_IFTYPE_UNSPECIFIED:
+@@ -316,7 +318,8 @@ ieee80211_get_chanctx_vif_max_required_bw(struct ieee80211_sub_if_data *sdata,
+ 
+ static enum nl80211_chan_width
+ ieee80211_get_chanctx_max_required_bw(struct ieee80211_local *local,
+-				      struct ieee80211_chanctx_conf *conf)
++				      struct ieee80211_chanctx *ctx,
++				      struct ieee80211_link_data *rsvd_for)
+ {
+ 	struct ieee80211_sub_if_data *sdata;
+ 	enum nl80211_chan_width max_bw = NL80211_CHAN_WIDTH_20_NOHT;
+@@ -328,7 +331,8 @@ ieee80211_get_chanctx_max_required_bw(struct ieee80211_local *local,
+ 		if (!ieee80211_sdata_running(sdata))
+ 			continue;
+ 
+-		width = ieee80211_get_chanctx_vif_max_required_bw(sdata, conf);
++		width = ieee80211_get_chanctx_vif_max_required_bw(sdata, ctx,
++								  rsvd_for);
+ 
+ 		max_bw = max(max_bw, width);
+ 	}
+@@ -336,8 +340,8 @@ ieee80211_get_chanctx_max_required_bw(struct ieee80211_local *local,
+ 	/* use the configured bandwidth in case of monitor interface */
+ 	sdata = rcu_dereference(local->monitor_sdata);
+ 	if (sdata &&
+-	    rcu_access_pointer(sdata->vif.bss_conf.chanctx_conf) == conf)
+-		max_bw = max(max_bw, conf->def.width);
++	    rcu_access_pointer(sdata->vif.bss_conf.chanctx_conf) == &ctx->conf)
++		max_bw = max(max_bw, ctx->conf.def.width);
+ 
+ 	rcu_read_unlock();
+ 
+@@ -349,8 +353,10 @@ ieee80211_get_chanctx_max_required_bw(struct ieee80211_local *local,
+  * the max of min required widths of all the interfaces bound to this
+  * channel context.
+  */
+-static u32 _ieee80211_recalc_chanctx_min_def(struct ieee80211_local *local,
+-					     struct ieee80211_chanctx *ctx)
++static u32
++_ieee80211_recalc_chanctx_min_def(struct ieee80211_local *local,
++				  struct ieee80211_chanctx *ctx,
++				  struct ieee80211_link_data *rsvd_for)
+ {
+ 	enum nl80211_chan_width max_bw;
+ 	struct cfg80211_chan_def min_def;
+@@ -370,7 +376,7 @@ static u32 _ieee80211_recalc_chanctx_min_def(struct ieee80211_local *local,
+ 		return 0;
+ 	}
+ 
+-	max_bw = ieee80211_get_chanctx_max_required_bw(local, &ctx->conf);
++	max_bw = ieee80211_get_chanctx_max_required_bw(local, ctx, rsvd_for);
+ 
+ 	/* downgrade chandef up to max_bw */
+ 	min_def = ctx->conf.def;
+@@ -448,9 +454,10 @@ static void ieee80211_chan_bw_change(struct ieee80211_local *local,
+  * channel context.
+  */
+ void ieee80211_recalc_chanctx_min_def(struct ieee80211_local *local,
+-				      struct ieee80211_chanctx *ctx)
++				      struct ieee80211_chanctx *ctx,
++				      struct ieee80211_link_data *rsvd_for)
+ {
+-	u32 changed = _ieee80211_recalc_chanctx_min_def(local, ctx);
++	u32 changed = _ieee80211_recalc_chanctx_min_def(local, ctx, rsvd_for);
+ 
+ 	if (!changed)
+ 		return;
+@@ -464,10 +471,11 @@ void ieee80211_recalc_chanctx_min_def(struct ieee80211_local *local,
+ 	ieee80211_chan_bw_change(local, ctx, false);
+ }
+ 
+-static void ieee80211_change_chanctx(struct ieee80211_local *local,
+-				     struct ieee80211_chanctx *ctx,
+-				     struct ieee80211_chanctx *old_ctx,
+-				     const struct cfg80211_chan_def *chandef)
++static void _ieee80211_change_chanctx(struct ieee80211_local *local,
++				      struct ieee80211_chanctx *ctx,
++				      struct ieee80211_chanctx *old_ctx,
++				      const struct cfg80211_chan_def *chandef,
++				      struct ieee80211_link_data *rsvd_for)
+ {
+ 	u32 changed;
+ 
+@@ -492,7 +500,7 @@ static void ieee80211_change_chanctx(struct ieee80211_local *local,
+ 	ieee80211_chan_bw_change(local, old_ctx, true);
+ 
+ 	if (cfg80211_chandef_identical(&ctx->conf.def, chandef)) {
+-		ieee80211_recalc_chanctx_min_def(local, ctx);
++		ieee80211_recalc_chanctx_min_def(local, ctx, rsvd_for);
+ 		return;
+ 	}
+ 
+@@ -502,7 +510,7 @@ static void ieee80211_change_chanctx(struct ieee80211_local *local,
+ 
+ 	/* check if min chanctx also changed */
+ 	changed = IEEE80211_CHANCTX_CHANGE_WIDTH |
+-		  _ieee80211_recalc_chanctx_min_def(local, ctx);
++		  _ieee80211_recalc_chanctx_min_def(local, ctx, rsvd_for);
+ 	drv_change_chanctx(local, ctx, changed);
+ 
+ 	if (!local->use_chanctx) {
+@@ -514,6 +522,14 @@ static void ieee80211_change_chanctx(struct ieee80211_local *local,
+ 	ieee80211_chan_bw_change(local, old_ctx, false);
+ }
+ 
++static void ieee80211_change_chanctx(struct ieee80211_local *local,
++				     struct ieee80211_chanctx *ctx,
++				     struct ieee80211_chanctx *old_ctx,
++				     const struct cfg80211_chan_def *chandef)
++{
++	_ieee80211_change_chanctx(local, ctx, old_ctx, chandef, NULL);
++}
++
+ static struct ieee80211_chanctx *
+ ieee80211_find_chanctx(struct ieee80211_local *local,
+ 		       const struct cfg80211_chan_def *chandef,
+@@ -638,7 +654,7 @@ ieee80211_alloc_chanctx(struct ieee80211_local *local,
+ 	ctx->conf.rx_chains_dynamic = 1;
+ 	ctx->mode = mode;
+ 	ctx->conf.radar_enabled = false;
+-	ieee80211_recalc_chanctx_min_def(local, ctx);
++	_ieee80211_recalc_chanctx_min_def(local, ctx, NULL);
+ 
+ 	return ctx;
+ }
+@@ -855,6 +871,9 @@ static int ieee80211_assign_link_chanctx(struct ieee80211_link_data *link,
+ 	}
+ 
+ 	if (new_ctx) {
++		/* recalc considering the link we'll use it for now */
++		ieee80211_recalc_chanctx_min_def(local, new_ctx, link);
++
+ 		ret = drv_assign_vif_chanctx(local, sdata, link->conf, new_ctx);
+ 		if (ret)
+ 			goto out;
+@@ -873,12 +892,12 @@ out:
+ 		ieee80211_recalc_chanctx_chantype(local, curr_ctx);
+ 		ieee80211_recalc_smps_chanctx(local, curr_ctx);
+ 		ieee80211_recalc_radar_chanctx(local, curr_ctx);
+-		ieee80211_recalc_chanctx_min_def(local, curr_ctx);
++		ieee80211_recalc_chanctx_min_def(local, curr_ctx, NULL);
+ 	}
+ 
+ 	if (new_ctx && ieee80211_chanctx_num_assigned(local, new_ctx) > 0) {
+ 		ieee80211_recalc_txpower(sdata, false);
+-		ieee80211_recalc_chanctx_min_def(local, new_ctx);
++		ieee80211_recalc_chanctx_min_def(local, new_ctx, NULL);
+ 	}
+ 
+ 	if (sdata->vif.type != NL80211_IFTYPE_P2P_DEVICE &&
+@@ -1270,7 +1289,7 @@ ieee80211_link_use_reserved_reassign(struct ieee80211_link_data *link)
+ 
+ 	ieee80211_link_update_chandef(link, &link->reserved_chandef);
+ 
+-	ieee80211_change_chanctx(local, new_ctx, old_ctx, chandef);
++	_ieee80211_change_chanctx(local, new_ctx, old_ctx, chandef, link);
+ 
+ 	vif_chsw[0].vif = &sdata->vif;
+ 	vif_chsw[0].old_ctx = &old_ctx->conf;
+@@ -1300,7 +1319,7 @@ ieee80211_link_use_reserved_reassign(struct ieee80211_link_data *link)
+ 	if (ieee80211_chanctx_refcount(local, old_ctx) == 0)
+ 		ieee80211_free_chanctx(local, old_ctx);
+ 
+-	ieee80211_recalc_chanctx_min_def(local, new_ctx);
++	ieee80211_recalc_chanctx_min_def(local, new_ctx, NULL);
+ 	ieee80211_recalc_smps_chanctx(local, new_ctx);
+ 	ieee80211_recalc_radar_chanctx(local, new_ctx);
+ 
+@@ -1665,7 +1684,7 @@ static int ieee80211_vif_use_reserved_switch(struct ieee80211_local *local)
+ 		ieee80211_recalc_chanctx_chantype(local, ctx);
+ 		ieee80211_recalc_smps_chanctx(local, ctx);
+ 		ieee80211_recalc_radar_chanctx(local, ctx);
+-		ieee80211_recalc_chanctx_min_def(local, ctx);
++		ieee80211_recalc_chanctx_min_def(local, ctx, NULL);
+ 
+ 		list_for_each_entry_safe(link, link_tmp, &ctx->reserved_links,
+ 					 reserved_chanctx_list) {
+diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h
+index e082582e0aa28..eba7ae63fac45 100644
+--- a/net/mac80211/ieee80211_i.h
++++ b/net/mac80211/ieee80211_i.h
+@@ -2494,7 +2494,8 @@ int ieee80211_chanctx_refcount(struct ieee80211_local *local,
+ void ieee80211_recalc_smps_chanctx(struct ieee80211_local *local,
+ 				   struct ieee80211_chanctx *chanctx);
+ void ieee80211_recalc_chanctx_min_def(struct ieee80211_local *local,
+-				      struct ieee80211_chanctx *ctx);
++				      struct ieee80211_chanctx *ctx,
++				      struct ieee80211_link_data *rsvd_for);
+ bool ieee80211_is_radar_required(struct ieee80211_local *local);
+ 
+ void ieee80211_dfs_cac_timer(unsigned long data);
+diff --git a/net/mac80211/util.c b/net/mac80211/util.c
+index 8c397650b96f6..d7b382866b260 100644
+--- a/net/mac80211/util.c
++++ b/net/mac80211/util.c
+@@ -3007,7 +3007,7 @@ void ieee80211_recalc_min_chandef(struct ieee80211_sub_if_data *sdata,
+ 
+ 		chanctx = container_of(chanctx_conf, struct ieee80211_chanctx,
+ 				       conf);
+-		ieee80211_recalc_chanctx_min_def(local, chanctx);
++		ieee80211_recalc_chanctx_min_def(local, chanctx, NULL);
+ 	}
+  unlock:
+ 	mutex_unlock(&local->chanctx_mtx);
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index b998e9df53cef..0e7c194948cfe 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -102,8 +102,8 @@ static int __mptcp_socket_create(struct mptcp_sock *msk)
+ 	if (err)
+ 		return err;
+ 
+-	msk->first = ssock->sk;
+-	msk->subflow = ssock;
++	WRITE_ONCE(msk->first, ssock->sk);
++	WRITE_ONCE(msk->subflow, ssock);
+ 	subflow = mptcp_subflow_ctx(ssock->sk);
+ 	list_add(&subflow->node, &msk->conn_list);
+ 	sock_hold(ssock->sk);
+@@ -590,7 +590,7 @@ static bool mptcp_check_data_fin(struct sock *sk)
+ 		WRITE_ONCE(msk->ack_seq, msk->ack_seq + 1);
+ 		WRITE_ONCE(msk->rcv_data_fin, 0);
+ 
+-		sk->sk_shutdown |= RCV_SHUTDOWN;
++		WRITE_ONCE(sk->sk_shutdown, sk->sk_shutdown | RCV_SHUTDOWN);
+ 		smp_mb__before_atomic(); /* SHUTDOWN must be visible first */
+ 
+ 		switch (sk->sk_state) {
+@@ -812,6 +812,13 @@ void mptcp_data_ready(struct sock *sk, struct sock *ssk)
+ 	mptcp_data_unlock(sk);
+ }
+ 
++static void mptcp_subflow_joined(struct mptcp_sock *msk, struct sock *ssk)
++{
++	mptcp_subflow_ctx(ssk)->map_seq = READ_ONCE(msk->ack_seq);
++	WRITE_ONCE(msk->allow_infinite_fallback, false);
++	mptcp_event(MPTCP_EVENT_SUB_ESTABLISHED, msk, ssk, GFP_ATOMIC);
++}
++
+ static bool __mptcp_finish_join(struct mptcp_sock *msk, struct sock *ssk)
+ {
+ 	struct sock *sk = (struct sock *)msk;
+@@ -826,6 +833,7 @@ static bool __mptcp_finish_join(struct mptcp_sock *msk, struct sock *ssk)
+ 		mptcp_sock_graft(ssk, sk->sk_socket);
+ 
+ 	mptcp_sockopt_sync_locked(msk, ssk);
++	mptcp_subflow_joined(msk, ssk);
+ 	return true;
+ }
+ 
+@@ -897,7 +905,7 @@ static void mptcp_check_for_eof(struct mptcp_sock *msk)
+ 		/* hopefully temporary hack: propagate shutdown status
+ 		 * to msk, when all subflows agree on it
+ 		 */
+-		sk->sk_shutdown |= RCV_SHUTDOWN;
++		WRITE_ONCE(sk->sk_shutdown, sk->sk_shutdown | RCV_SHUTDOWN);
+ 
+ 		smp_mb__before_atomic(); /* SHUTDOWN must be visible first */
+ 		sk->sk_data_ready(sk);
+@@ -1671,7 +1679,6 @@ static int mptcp_sendmsg_fastopen(struct sock *sk, struct sock *ssk, struct msgh
+ 
+ 	lock_sock(ssk);
+ 	msg->msg_flags |= MSG_DONTWAIT;
+-	msk->connect_flags = O_NONBLOCK;
+ 	msk->fastopening = 1;
+ 	ret = tcp_sendmsg_fastopen(ssk, msg, copied_syn, len, NULL);
+ 	msk->fastopening = 0;
+@@ -2254,7 +2261,7 @@ static void mptcp_dispose_initial_subflow(struct mptcp_sock *msk)
+ {
+ 	if (msk->subflow) {
+ 		iput(SOCK_INODE(msk->subflow));
+-		msk->subflow = NULL;
++		WRITE_ONCE(msk->subflow, NULL);
+ 	}
+ }
+ 
+@@ -2391,7 +2398,7 @@ out_release:
+ 	sock_put(ssk);
+ 
+ 	if (ssk == msk->first)
+-		msk->first = NULL;
++		WRITE_ONCE(msk->first, NULL);
+ 
+ out:
+ 	if (ssk == msk->last_snd)
+@@ -2498,7 +2505,7 @@ static void mptcp_check_fastclose(struct mptcp_sock *msk)
+ 	}
+ 
+ 	inet_sk_state_store(sk, TCP_CLOSE);
+-	sk->sk_shutdown = SHUTDOWN_MASK;
++	WRITE_ONCE(sk->sk_shutdown, SHUTDOWN_MASK);
+ 	smp_mb__before_atomic(); /* SHUTDOWN must be visible first */
+ 	set_bit(MPTCP_WORK_CLOSE_SUBFLOW, &msk->flags);
+ 
+@@ -2692,7 +2699,7 @@ static int __mptcp_init_sock(struct sock *sk)
+ 	WRITE_ONCE(msk->rmem_released, 0);
+ 	msk->timer_ival = TCP_RTO_MIN;
+ 
+-	msk->first = NULL;
++	WRITE_ONCE(msk->first, NULL);
+ 	inet_csk(sk)->icsk_sync_mss = mptcp_sync_mss;
+ 	WRITE_ONCE(msk->csum_enabled, mptcp_is_checksum_enabled(sock_net(sk)));
+ 	WRITE_ONCE(msk->allow_infinite_fallback, true);
+@@ -2934,7 +2941,7 @@ bool __mptcp_close(struct sock *sk, long timeout)
+ 	bool do_cancel_work = false;
+ 	int subflows_alive = 0;
+ 
+-	sk->sk_shutdown = SHUTDOWN_MASK;
++	WRITE_ONCE(sk->sk_shutdown, SHUTDOWN_MASK);
+ 
+ 	if ((1 << sk->sk_state) & (TCPF_LISTEN | TCPF_CLOSE)) {
+ 		mptcp_listen_inuse_dec(sk);
+@@ -3011,7 +3018,7 @@ static void mptcp_close(struct sock *sk, long timeout)
+ 	sock_put(sk);
+ }
+ 
+-void mptcp_copy_inaddrs(struct sock *msk, const struct sock *ssk)
++static void mptcp_copy_inaddrs(struct sock *msk, const struct sock *ssk)
+ {
+ #if IS_ENABLED(CONFIG_MPTCP_IPV6)
+ 	const struct ipv6_pinfo *ssk6 = inet6_sk(ssk);
+@@ -3074,7 +3081,7 @@ static int mptcp_disconnect(struct sock *sk, int flags)
+ 	mptcp_pm_data_reset(msk);
+ 	mptcp_ca_reset(sk);
+ 
+-	sk->sk_shutdown = 0;
++	WRITE_ONCE(sk->sk_shutdown, 0);
+ 	sk_error_report(sk);
+ 	return 0;
+ }
+@@ -3088,9 +3095,10 @@ static struct ipv6_pinfo *mptcp_inet6_sk(const struct sock *sk)
+ }
+ #endif
+ 
+-struct sock *mptcp_sk_clone(const struct sock *sk,
+-			    const struct mptcp_options_received *mp_opt,
+-			    struct request_sock *req)
++struct sock *mptcp_sk_clone_init(const struct sock *sk,
++				 const struct mptcp_options_received *mp_opt,
++				 struct sock *ssk,
++				 struct request_sock *req)
+ {
+ 	struct mptcp_subflow_request_sock *subflow_req = mptcp_subflow_rsk(req);
+ 	struct sock *nsk = sk_clone_lock(sk, GFP_ATOMIC);
+@@ -3109,7 +3117,7 @@ struct sock *mptcp_sk_clone(const struct sock *sk,
+ 	msk = mptcp_sk(nsk);
+ 	msk->local_key = subflow_req->local_key;
+ 	msk->token = subflow_req->token;
+-	msk->subflow = NULL;
++	WRITE_ONCE(msk->subflow, NULL);
+ 	msk->in_accept_queue = 1;
+ 	WRITE_ONCE(msk->fully_established, false);
+ 	if (mp_opt->suboptions & OPTION_MPTCP_CSUMREQD)
+@@ -3122,10 +3130,30 @@ struct sock *mptcp_sk_clone(const struct sock *sk,
+ 	msk->setsockopt_seq = mptcp_sk(sk)->setsockopt_seq;
+ 
+ 	sock_reset_flag(nsk, SOCK_RCU_FREE);
+-	/* will be fully established after successful MPC subflow creation */
+-	inet_sk_state_store(nsk, TCP_SYN_RECV);
+-
+ 	security_inet_csk_clone(nsk, req);
++
++	/* this can't race with mptcp_close(), as the msk is
++	 * not yet exposted to user-space
++	 */
++	inet_sk_state_store(nsk, TCP_ESTABLISHED);
++
++	/* The msk maintain a ref to each subflow in the connections list */
++	WRITE_ONCE(msk->first, ssk);
++	list_add(&mptcp_subflow_ctx(ssk)->node, &msk->conn_list);
++	sock_hold(ssk);
++
++	/* new mpc subflow takes ownership of the newly
++	 * created mptcp socket
++	 */
++	mptcp_token_accept(subflow_req, msk);
++
++	/* set msk addresses early to ensure mptcp_pm_get_local_id()
++	 * uses the correct data
++	 */
++	mptcp_copy_inaddrs(nsk, ssk);
++	mptcp_propagate_sndbuf(nsk, ssk);
++
++	mptcp_rcv_space_init(msk, ssk);
+ 	bh_unlock_sock(nsk);
+ 
+ 	/* note: the newly allocated socket refcount is 2 now */
+@@ -3157,7 +3185,7 @@ static struct sock *mptcp_accept(struct sock *sk, int flags, int *err,
+ 	struct socket *listener;
+ 	struct sock *newsk;
+ 
+-	listener = __mptcp_nmpc_socket(msk);
++	listener = READ_ONCE(msk->subflow);
+ 	if (WARN_ON_ONCE(!listener)) {
+ 		*err = -EINVAL;
+ 		return NULL;
+@@ -3377,7 +3405,7 @@ static int mptcp_get_port(struct sock *sk, unsigned short snum)
+ 	struct mptcp_sock *msk = mptcp_sk(sk);
+ 	struct socket *ssock;
+ 
+-	ssock = __mptcp_nmpc_socket(msk);
++	ssock = msk->subflow;
+ 	pr_debug("msk=%p, subflow=%p", msk, ssock);
+ 	if (WARN_ON_ONCE(!ssock))
+ 		return -EINVAL;
+@@ -3437,14 +3465,16 @@ bool mptcp_finish_join(struct sock *ssk)
+ 		return false;
+ 	}
+ 
+-	if (!list_empty(&subflow->node))
+-		goto out;
++	/* active subflow, already present inside the conn_list */
++	if (!list_empty(&subflow->node)) {
++		mptcp_subflow_joined(msk, ssk);
++		return true;
++	}
+ 
+ 	if (!mptcp_pm_allow_new_subflow(msk))
+ 		goto err_prohibited;
+ 
+-	/* active connections are already on conn_list.
+-	 * If we can't acquire msk socket lock here, let the release callback
++	/* If we can't acquire msk socket lock here, let the release callback
+ 	 * handle it
+ 	 */
+ 	mptcp_data_lock(parent);
+@@ -3467,11 +3497,6 @@ err_prohibited:
+ 		return false;
+ 	}
+ 
+-	subflow->map_seq = READ_ONCE(msk->ack_seq);
+-	WRITE_ONCE(msk->allow_infinite_fallback, false);
+-
+-out:
+-	mptcp_event(MPTCP_EVENT_SUB_ESTABLISHED, msk, ssk, GFP_ATOMIC);
+ 	return true;
+ }
+ 
+@@ -3589,9 +3614,9 @@ static int mptcp_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len)
+ 	 * acquired the subflow socket lock, too.
+ 	 */
+ 	if (msk->fastopening)
+-		err = __inet_stream_connect(ssock, uaddr, addr_len, msk->connect_flags, 1);
++		err = __inet_stream_connect(ssock, uaddr, addr_len, O_NONBLOCK, 1);
+ 	else
+-		err = inet_stream_connect(ssock, uaddr, addr_len, msk->connect_flags);
++		err = inet_stream_connect(ssock, uaddr, addr_len, O_NONBLOCK);
+ 	inet_sk(sk)->defer_connect = inet_sk(ssock->sk)->defer_connect;
+ 
+ 	/* on successful connect, the msk state will be moved to established by
+@@ -3604,12 +3629,10 @@ static int mptcp_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len)
+ 
+ 	mptcp_copy_inaddrs(sk, ssock->sk);
+ 
+-	/* unblocking connect, mptcp-level inet_stream_connect will error out
+-	 * without changing the socket state, update it here.
++	/* silence EINPROGRESS and let the caller inet_stream_connect
++	 * handle the connection in progress
+ 	 */
+-	if (err == -EINPROGRESS)
+-		sk->sk_socket->state = ssock->state;
+-	return err;
++	return 0;
+ }
+ 
+ static struct proto mptcp_prot = {
+@@ -3668,18 +3691,6 @@ unlock:
+ 	return err;
+ }
+ 
+-static int mptcp_stream_connect(struct socket *sock, struct sockaddr *uaddr,
+-				int addr_len, int flags)
+-{
+-	int ret;
+-
+-	lock_sock(sock->sk);
+-	mptcp_sk(sock->sk)->connect_flags = flags;
+-	ret = __inet_stream_connect(sock, uaddr, addr_len, flags, 0);
+-	release_sock(sock->sk);
+-	return ret;
+-}
+-
+ static int mptcp_listen(struct socket *sock, int backlog)
+ {
+ 	struct mptcp_sock *msk = mptcp_sk(sock->sk);
+@@ -3723,7 +3734,10 @@ static int mptcp_stream_accept(struct socket *sock, struct socket *newsock,
+ 
+ 	pr_debug("msk=%p", msk);
+ 
+-	ssock = __mptcp_nmpc_socket(msk);
++	/* Buggy applications can call accept on socket states other then LISTEN
++	 * but no need to allocate the first subflow just to error out.
++	 */
++	ssock = READ_ONCE(msk->subflow);
+ 	if (!ssock)
+ 		return -EINVAL;
+ 
+@@ -3769,9 +3783,6 @@ static __poll_t mptcp_check_writeable(struct mptcp_sock *msk)
+ {
+ 	struct sock *sk = (struct sock *)msk;
+ 
+-	if (unlikely(sk->sk_shutdown & SEND_SHUTDOWN))
+-		return EPOLLOUT | EPOLLWRNORM;
+-
+ 	if (sk_stream_is_writeable(sk))
+ 		return EPOLLOUT | EPOLLWRNORM;
+ 
+@@ -3789,6 +3800,7 @@ static __poll_t mptcp_poll(struct file *file, struct socket *sock,
+ 	struct sock *sk = sock->sk;
+ 	struct mptcp_sock *msk;
+ 	__poll_t mask = 0;
++	u8 shutdown;
+ 	int state;
+ 
+ 	msk = mptcp_sk(sk);
+@@ -3797,23 +3809,30 @@ static __poll_t mptcp_poll(struct file *file, struct socket *sock,
+ 	state = inet_sk_state_load(sk);
+ 	pr_debug("msk=%p state=%d flags=%lx", msk, state, msk->flags);
+ 	if (state == TCP_LISTEN) {
+-		if (WARN_ON_ONCE(!msk->subflow || !msk->subflow->sk))
++		struct socket *ssock = READ_ONCE(msk->subflow);
++
++		if (WARN_ON_ONCE(!ssock || !ssock->sk))
+ 			return 0;
+ 
+-		return inet_csk_listen_poll(msk->subflow->sk);
++		return inet_csk_listen_poll(ssock->sk);
+ 	}
+ 
++	shutdown = READ_ONCE(sk->sk_shutdown);
++	if (shutdown == SHUTDOWN_MASK || state == TCP_CLOSE)
++		mask |= EPOLLHUP;
++	if (shutdown & RCV_SHUTDOWN)
++		mask |= EPOLLIN | EPOLLRDNORM | EPOLLRDHUP;
++
+ 	if (state != TCP_SYN_SENT && state != TCP_SYN_RECV) {
+ 		mask |= mptcp_check_readable(msk);
+-		mask |= mptcp_check_writeable(msk);
++		if (shutdown & SEND_SHUTDOWN)
++			mask |= EPOLLOUT | EPOLLWRNORM;
++		else
++			mask |= mptcp_check_writeable(msk);
+ 	} else if (state == TCP_SYN_SENT && inet_sk(sk)->defer_connect) {
+ 		/* cf tcp_poll() note about TFO */
+ 		mask |= EPOLLOUT | EPOLLWRNORM;
+ 	}
+-	if (sk->sk_shutdown == SHUTDOWN_MASK || state == TCP_CLOSE)
+-		mask |= EPOLLHUP;
+-	if (sk->sk_shutdown & RCV_SHUTDOWN)
+-		mask |= EPOLLIN | EPOLLRDNORM | EPOLLRDHUP;
+ 
+ 	/* This barrier is coupled with smp_wmb() in __mptcp_error_report() */
+ 	smp_rmb();
+@@ -3828,7 +3847,7 @@ static const struct proto_ops mptcp_stream_ops = {
+ 	.owner		   = THIS_MODULE,
+ 	.release	   = inet_release,
+ 	.bind		   = mptcp_bind,
+-	.connect	   = mptcp_stream_connect,
++	.connect	   = inet_stream_connect,
+ 	.socketpair	   = sock_no_socketpair,
+ 	.accept		   = mptcp_stream_accept,
+ 	.getname	   = inet_getname,
+@@ -3923,7 +3942,7 @@ static const struct proto_ops mptcp_v6_stream_ops = {
+ 	.owner		   = THIS_MODULE,
+ 	.release	   = inet6_release,
+ 	.bind		   = mptcp_bind,
+-	.connect	   = mptcp_stream_connect,
++	.connect	   = inet_stream_connect,
+ 	.socketpair	   = sock_no_socketpair,
+ 	.accept		   = mptcp_stream_accept,
+ 	.getname	   = inet6_getname,
+diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
+index d6469b6ab38e3..9c4860cf18a97 100644
+--- a/net/mptcp/protocol.h
++++ b/net/mptcp/protocol.h
+@@ -297,7 +297,6 @@ struct mptcp_sock {
+ 			nodelay:1,
+ 			fastopening:1,
+ 			in_accept_queue:1;
+-	int		connect_flags;
+ 	struct work_struct work;
+ 	struct sk_buff  *ooo_last_skb;
+ 	struct rb_root  out_of_order_queue;
+@@ -306,7 +305,11 @@ struct mptcp_sock {
+ 	struct list_head rtx_queue;
+ 	struct mptcp_data_frag *first_pending;
+ 	struct list_head join_list;
+-	struct socket	*subflow; /* outgoing connect/listener/!mp_capable */
++	struct socket	*subflow; /* outgoing connect/listener/!mp_capable
++				   * The mptcp ops can safely dereference, using suitable
++				   * ONCE annotation, the subflow outside the socket
++				   * lock as such sock is freed after close().
++				   */
+ 	struct sock	*first;
+ 	struct mptcp_pm_data	pm;
+ 	struct {
+@@ -616,7 +619,6 @@ int mptcp_is_checksum_enabled(const struct net *net);
+ int mptcp_allow_join_id0(const struct net *net);
+ unsigned int mptcp_stale_loss_cnt(const struct net *net);
+ int mptcp_get_pm_type(const struct net *net);
+-void mptcp_copy_inaddrs(struct sock *msk, const struct sock *ssk);
+ void mptcp_subflow_fully_established(struct mptcp_subflow_context *subflow,
+ 				     const struct mptcp_options_received *mp_opt);
+ bool __mptcp_retransmit_pending_data(struct sock *sk);
+@@ -686,9 +688,10 @@ void __init mptcp_proto_init(void);
+ int __init mptcp_proto_v6_init(void);
+ #endif
+ 
+-struct sock *mptcp_sk_clone(const struct sock *sk,
+-			    const struct mptcp_options_received *mp_opt,
+-			    struct request_sock *req);
++struct sock *mptcp_sk_clone_init(const struct sock *sk,
++				 const struct mptcp_options_received *mp_opt,
++				 struct sock *ssk,
++				 struct request_sock *req);
+ void mptcp_get_options(const struct sk_buff *skb,
+ 		       struct mptcp_options_received *mp_opt);
+ 
+diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
+index 281c1cc8dc8dc..bb0301398d3b4 100644
+--- a/net/mptcp/subflow.c
++++ b/net/mptcp/subflow.c
+@@ -695,14 +695,6 @@ static bool subflow_hmac_valid(const struct request_sock *req,
+ 	return !crypto_memneq(hmac, mp_opt->hmac, MPTCPOPT_HMAC_LEN);
+ }
+ 
+-static void mptcp_force_close(struct sock *sk)
+-{
+-	/* the msk is not yet exposed to user-space, and refcount is 2 */
+-	inet_sk_state_store(sk, TCP_CLOSE);
+-	sk_common_release(sk);
+-	sock_put(sk);
+-}
+-
+ static void subflow_ulp_fallback(struct sock *sk,
+ 				 struct mptcp_subflow_context *old_ctx)
+ {
+@@ -757,7 +749,6 @@ static struct sock *subflow_syn_recv_sock(const struct sock *sk,
+ 	struct mptcp_subflow_request_sock *subflow_req;
+ 	struct mptcp_options_received mp_opt;
+ 	bool fallback, fallback_is_fatal;
+-	struct sock *new_msk = NULL;
+ 	struct mptcp_sock *owner;
+ 	struct sock *child;
+ 
+@@ -786,14 +777,9 @@ static struct sock *subflow_syn_recv_sock(const struct sock *sk,
+ 		 * options.
+ 		 */
+ 		mptcp_get_options(skb, &mp_opt);
+-		if (!(mp_opt.suboptions & OPTIONS_MPTCP_MPC)) {
++		if (!(mp_opt.suboptions & OPTIONS_MPTCP_MPC))
+ 			fallback = true;
+-			goto create_child;
+-		}
+ 
+-		new_msk = mptcp_sk_clone(listener->conn, &mp_opt, req);
+-		if (!new_msk)
+-			fallback = true;
+ 	} else if (subflow_req->mp_join) {
+ 		mptcp_get_options(skb, &mp_opt);
+ 		if (!(mp_opt.suboptions & OPTIONS_MPTCP_MPJ) ||
+@@ -822,47 +808,19 @@ create_child:
+ 				subflow_add_reset_reason(skb, MPTCP_RST_EMPTCP);
+ 				goto dispose_child;
+ 			}
+-
+-			if (new_msk)
+-				mptcp_copy_inaddrs(new_msk, child);
+-			mptcp_subflow_drop_ctx(child);
+-			goto out;
++			goto fallback;
+ 		}
+ 
+ 		/* ssk inherits options of listener sk */
+ 		ctx->setsockopt_seq = listener->setsockopt_seq;
+ 
+ 		if (ctx->mp_capable) {
+-			owner = mptcp_sk(new_msk);
++			ctx->conn = mptcp_sk_clone_init(listener->conn, &mp_opt, child, req);
++			if (!ctx->conn)
++				goto fallback;
+ 
+-			/* this can't race with mptcp_close(), as the msk is
+-			 * not yet exposted to user-space
+-			 */
+-			inet_sk_state_store((void *)new_msk, TCP_ESTABLISHED);
+-
+-			/* record the newly created socket as the first msk
+-			 * subflow, but don't link it yet into conn_list
+-			 */
+-			WRITE_ONCE(owner->first, child);
+-
+-			/* new mpc subflow takes ownership of the newly
+-			 * created mptcp socket
+-			 */
+-			mptcp_sk(new_msk)->setsockopt_seq = ctx->setsockopt_seq;
++			owner = mptcp_sk(ctx->conn);
+ 			mptcp_pm_new_connection(owner, child, 1);
+-			mptcp_token_accept(subflow_req, owner);
+-			ctx->conn = new_msk;
+-			new_msk = NULL;
+-
+-			/* set msk addresses early to ensure mptcp_pm_get_local_id()
+-			 * uses the correct data
+-			 */
+-			mptcp_copy_inaddrs(ctx->conn, child);
+-			mptcp_propagate_sndbuf(ctx->conn, child);
+-
+-			mptcp_rcv_space_init(owner, child);
+-			list_add(&ctx->node, &owner->conn_list);
+-			sock_hold(child);
+ 
+ 			/* with OoO packets we can reach here without ingress
+ 			 * mpc option
+@@ -902,11 +860,6 @@ create_child:
+ 		}
+ 	}
+ 
+-out:
+-	/* dispose of the left over mptcp master, if any */
+-	if (unlikely(new_msk))
+-		mptcp_force_close(new_msk);
+-
+ 	/* check for expected invariant - should never trigger, just help
+ 	 * catching eariler subtle bugs
+ 	 */
+@@ -924,6 +877,10 @@ dispose_child:
+ 
+ 	/* The last child reference will be released by the caller */
+ 	return child;
++
++fallback:
++	mptcp_subflow_drop_ctx(child);
++	return child;
+ }
+ 
+ static struct inet_connection_sock_af_ops subflow_specific __ro_after_init;
+diff --git a/net/netfilter/nf_conntrack_netlink.c b/net/netfilter/nf_conntrack_netlink.c
+index d40544cd61a6c..69c8c8c7e9b8e 100644
+--- a/net/netfilter/nf_conntrack_netlink.c
++++ b/net/netfilter/nf_conntrack_netlink.c
+@@ -2976,7 +2976,9 @@ nla_put_failure:
+ 	return -1;
+ }
+ 
++#if IS_ENABLED(CONFIG_NF_NAT)
+ static const union nf_inet_addr any_addr;
++#endif
+ 
+ static __be32 nf_expect_get_id(const struct nf_conntrack_expect *exp)
+ {
+@@ -3460,10 +3462,12 @@ ctnetlink_change_expect(struct nf_conntrack_expect *x,
+ 	return 0;
+ }
+ 
++#if IS_ENABLED(CONFIG_NF_NAT)
+ static const struct nla_policy exp_nat_nla_policy[CTA_EXPECT_NAT_MAX+1] = {
+ 	[CTA_EXPECT_NAT_DIR]	= { .type = NLA_U32 },
+ 	[CTA_EXPECT_NAT_TUPLE]	= { .type = NLA_NESTED },
+ };
++#endif
+ 
+ static int
+ ctnetlink_parse_expect_nat(const struct nlattr *attr,
+diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
+index 45d47b39de225..717e27a4b66a0 100644
+--- a/net/netlink/af_netlink.c
++++ b/net/netlink/af_netlink.c
+@@ -1779,7 +1779,7 @@ static int netlink_getsockopt(struct socket *sock, int level, int optname,
+ 				break;
+ 			}
+ 		}
+-		if (put_user(ALIGN(nlk->ngroups / 8, sizeof(u32)), optlen))
++		if (put_user(ALIGN(BITS_TO_BYTES(nlk->ngroups), sizeof(u32)), optlen))
+ 			err = -EFAULT;
+ 		netlink_unlock_table();
+ 		return err;
+diff --git a/net/netrom/nr_subr.c b/net/netrom/nr_subr.c
+index 3f99b432ea707..e2d2af924cff4 100644
+--- a/net/netrom/nr_subr.c
++++ b/net/netrom/nr_subr.c
+@@ -123,7 +123,7 @@ void nr_write_internal(struct sock *sk, int frametype)
+ 	unsigned char  *dptr;
+ 	int len, timeout;
+ 
+-	len = NR_NETWORK_LEN + NR_TRANSPORT_LEN;
++	len = NR_TRANSPORT_LEN;
+ 
+ 	switch (frametype & 0x0F) {
+ 	case NR_CONNREQ:
+@@ -141,7 +141,8 @@ void nr_write_internal(struct sock *sk, int frametype)
+ 		return;
+ 	}
+ 
+-	if ((skb = alloc_skb(len, GFP_ATOMIC)) == NULL)
++	skb = alloc_skb(NR_NETWORK_LEN + len, GFP_ATOMIC);
++	if (!skb)
+ 		return;
+ 
+ 	/*
+@@ -149,7 +150,7 @@ void nr_write_internal(struct sock *sk, int frametype)
+ 	 */
+ 	skb_reserve(skb, NR_NETWORK_LEN);
+ 
+-	dptr = skb_put(skb, skb_tailroom(skb));
++	dptr = skb_put(skb, len);
+ 
+ 	switch (frametype & 0x0F) {
+ 	case NR_CONNREQ:
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index db9c2fa71c50c..b79d2fa788061 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -3193,6 +3193,9 @@ static int packet_do_bind(struct sock *sk, const char *name, int ifindex,
+ 
+ 	lock_sock(sk);
+ 	spin_lock(&po->bind_lock);
++	if (!proto)
++		proto = po->num;
++
+ 	rcu_read_lock();
+ 
+ 	if (po->fanout) {
+@@ -3291,7 +3294,7 @@ static int packet_bind_spkt(struct socket *sock, struct sockaddr *uaddr,
+ 	memcpy(name, uaddr->sa_data, sizeof(uaddr->sa_data_min));
+ 	name[sizeof(uaddr->sa_data_min)] = 0;
+ 
+-	return packet_do_bind(sk, name, 0, pkt_sk(sk)->num);
++	return packet_do_bind(sk, name, 0, 0);
+ }
+ 
+ static int packet_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len)
+@@ -3308,8 +3311,7 @@ static int packet_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len
+ 	if (sll->sll_family != AF_PACKET)
+ 		return -EINVAL;
+ 
+-	return packet_do_bind(sk, NULL, sll->sll_ifindex,
+-			      sll->sll_protocol ? : pkt_sk(sk)->num);
++	return packet_do_bind(sk, NULL, sll->sll_ifindex, sll->sll_protocol);
+ }
+ 
+ static struct proto packet_proto = {
+diff --git a/net/packet/diag.c b/net/packet/diag.c
+index d704c7bf51b20..a68a84574c739 100644
+--- a/net/packet/diag.c
++++ b/net/packet/diag.c
+@@ -143,7 +143,7 @@ static int sk_diag_fill(struct sock *sk, struct sk_buff *skb,
+ 	rp = nlmsg_data(nlh);
+ 	rp->pdiag_family = AF_PACKET;
+ 	rp->pdiag_type = sk->sk_type;
+-	rp->pdiag_num = ntohs(po->num);
++	rp->pdiag_num = ntohs(READ_ONCE(po->num));
+ 	rp->pdiag_ino = sk_ino;
+ 	sock_diag_save_cookie(sk, rp->pdiag_cookie);
+ 
+diff --git a/net/rxrpc/af_rxrpc.c b/net/rxrpc/af_rxrpc.c
+index a6f0d29f35ef9..f5d1fc1266a5a 100644
+--- a/net/rxrpc/af_rxrpc.c
++++ b/net/rxrpc/af_rxrpc.c
+@@ -967,6 +967,7 @@ static int __init af_rxrpc_init(void)
+ 	BUILD_BUG_ON(sizeof(struct rxrpc_skb_priv) > sizeof_field(struct sk_buff, cb));
+ 
+ 	ret = -ENOMEM;
++	rxrpc_gen_version_string();
+ 	rxrpc_call_jar = kmem_cache_create(
+ 		"rxrpc_call_jar", sizeof(struct rxrpc_call), 0,
+ 		SLAB_HWCACHE_ALIGN, NULL);
+diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h
+index 5d44dc08f66d0..e8e14c6f904d9 100644
+--- a/net/rxrpc/ar-internal.h
++++ b/net/rxrpc/ar-internal.h
+@@ -1068,6 +1068,7 @@ int rxrpc_get_server_data_key(struct rxrpc_connection *, const void *, time64_t,
+ /*
+  * local_event.c
+  */
++void rxrpc_gen_version_string(void);
+ void rxrpc_send_version_request(struct rxrpc_local *local,
+ 				struct rxrpc_host_header *hdr,
+ 				struct sk_buff *skb);
+diff --git a/net/rxrpc/local_event.c b/net/rxrpc/local_event.c
+index 5e69ea6b233da..993c69f97488c 100644
+--- a/net/rxrpc/local_event.c
++++ b/net/rxrpc/local_event.c
+@@ -16,7 +16,16 @@
+ #include <generated/utsrelease.h>
+ #include "ar-internal.h"
+ 
+-static const char rxrpc_version_string[65] = "linux-" UTS_RELEASE " AF_RXRPC";
++static char rxrpc_version_string[65]; // "linux-" UTS_RELEASE " AF_RXRPC";
++
++/*
++ * Generate the VERSION packet string.
++ */
++void rxrpc_gen_version_string(void)
++{
++	snprintf(rxrpc_version_string, sizeof(rxrpc_version_string),
++		 "linux-%.49s AF_RXRPC", UTS_RELEASE);
++}
+ 
+ /*
+  * Reply to a version request
+diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
+index a1c4ee2e0be22..fd5dc47cb2134 100644
+--- a/net/sched/cls_flower.c
++++ b/net/sched/cls_flower.c
+@@ -1153,6 +1153,9 @@ static int fl_set_geneve_opt(const struct nlattr *nla, struct fl_flow_key *key,
+ 	if (option_len > sizeof(struct geneve_opt))
+ 		data_len = option_len - sizeof(struct geneve_opt);
+ 
++	if (key->enc_opts.len > FLOW_DIS_TUN_OPTS_MAX - 4)
++		return -ERANGE;
++
+ 	opt = (struct geneve_opt *)&key->enc_opts.data[key->enc_opts.len];
+ 	memset(opt, 0xff, option_len);
+ 	opt->length = data_len / 4;
+diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
+index aba789c30a2eb..7045b67b5533e 100644
+--- a/net/sched/sch_api.c
++++ b/net/sched/sch_api.c
+@@ -1250,7 +1250,12 @@ static struct Qdisc *qdisc_create(struct net_device *dev,
+ 	sch->parent = parent;
+ 
+ 	if (handle == TC_H_INGRESS) {
+-		sch->flags |= TCQ_F_INGRESS;
++		if (!(sch->flags & TCQ_F_INGRESS)) {
++			NL_SET_ERR_MSG(extack,
++				       "Specified parent ID is reserved for ingress and clsact Qdiscs");
++			err = -EINVAL;
++			goto err_out3;
++		}
+ 		handle = TC_H_MAKE(TC_H_INGRESS, 0);
+ 	} else {
+ 		if (handle == 0) {
+@@ -1589,11 +1594,20 @@ replay:
+ 					NL_SET_ERR_MSG(extack, "Invalid qdisc name");
+ 					return -EINVAL;
+ 				}
++				if (q->flags & TCQ_F_INGRESS) {
++					NL_SET_ERR_MSG(extack,
++						       "Cannot regraft ingress or clsact Qdiscs");
++					return -EINVAL;
++				}
+ 				if (q == p ||
+ 				    (p && check_loop(q, p, 0))) {
+ 					NL_SET_ERR_MSG(extack, "Qdisc parent/child loop detected");
+ 					return -ELOOP;
+ 				}
++				if (clid == TC_H_INGRESS) {
++					NL_SET_ERR_MSG(extack, "Ingress cannot graft directly");
++					return -EINVAL;
++				}
+ 				qdisc_refcount_inc(q);
+ 				goto graft;
+ 			} else {
+diff --git a/net/sched/sch_ingress.c b/net/sched/sch_ingress.c
+index 84838128b9c5b..e43a454993723 100644
+--- a/net/sched/sch_ingress.c
++++ b/net/sched/sch_ingress.c
+@@ -80,6 +80,9 @@ static int ingress_init(struct Qdisc *sch, struct nlattr *opt,
+ 	struct net_device *dev = qdisc_dev(sch);
+ 	int err;
+ 
++	if (sch->parent != TC_H_INGRESS)
++		return -EOPNOTSUPP;
++
+ 	net_inc_ingress_queue();
+ 
+ 	mini_qdisc_pair_init(&q->miniqp, sch, &dev->miniq_ingress);
+@@ -101,6 +104,9 @@ static void ingress_destroy(struct Qdisc *sch)
+ {
+ 	struct ingress_sched_data *q = qdisc_priv(sch);
+ 
++	if (sch->parent != TC_H_INGRESS)
++		return;
++
+ 	tcf_block_put_ext(q->block, sch, &q->block_info);
+ 	net_dec_ingress_queue();
+ }
+@@ -134,7 +140,7 @@ static struct Qdisc_ops ingress_qdisc_ops __read_mostly = {
+ 	.cl_ops			=	&ingress_class_ops,
+ 	.id			=	"ingress",
+ 	.priv_size		=	sizeof(struct ingress_sched_data),
+-	.static_flags		=	TCQ_F_CPUSTATS,
++	.static_flags		=	TCQ_F_INGRESS | TCQ_F_CPUSTATS,
+ 	.init			=	ingress_init,
+ 	.destroy		=	ingress_destroy,
+ 	.dump			=	ingress_dump,
+@@ -219,6 +225,9 @@ static int clsact_init(struct Qdisc *sch, struct nlattr *opt,
+ 	struct net_device *dev = qdisc_dev(sch);
+ 	int err;
+ 
++	if (sch->parent != TC_H_CLSACT)
++		return -EOPNOTSUPP;
++
+ 	net_inc_ingress_queue();
+ 	net_inc_egress_queue();
+ 
+@@ -248,6 +257,9 @@ static void clsact_destroy(struct Qdisc *sch)
+ {
+ 	struct clsact_sched_data *q = qdisc_priv(sch);
+ 
++	if (sch->parent != TC_H_CLSACT)
++		return;
++
+ 	tcf_block_put_ext(q->egress_block, sch, &q->egress_block_info);
+ 	tcf_block_put_ext(q->ingress_block, sch, &q->ingress_block_info);
+ 
+@@ -269,7 +281,7 @@ static struct Qdisc_ops clsact_qdisc_ops __read_mostly = {
+ 	.cl_ops			=	&clsact_class_ops,
+ 	.id			=	"clsact",
+ 	.priv_size		=	sizeof(struct clsact_sched_data),
+-	.static_flags		=	TCQ_F_CPUSTATS,
++	.static_flags		=	TCQ_F_INGRESS | TCQ_F_CPUSTATS,
+ 	.init			=	clsact_init,
+ 	.destroy		=	clsact_destroy,
+ 	.dump			=	ingress_dump,
+diff --git a/net/smc/smc_llc.c b/net/smc/smc_llc.c
+index a0840b8c935b8..7a8d9163d186e 100644
+--- a/net/smc/smc_llc.c
++++ b/net/smc/smc_llc.c
+@@ -578,7 +578,10 @@ static struct smc_buf_desc *smc_llc_get_next_rmb(struct smc_link_group *lgr,
+ {
+ 	struct smc_buf_desc *buf_next;
+ 
+-	if (!buf_pos || list_is_last(&buf_pos->list, &lgr->rmbs[*buf_lst])) {
++	if (!buf_pos)
++		return _smc_llc_get_next_rmb(lgr, buf_lst);
++
++	if (list_is_last(&buf_pos->list, &lgr->rmbs[*buf_lst])) {
+ 		(*buf_lst)++;
+ 		return _smc_llc_get_next_rmb(lgr, buf_lst);
+ 	}
+@@ -614,6 +617,8 @@ static int smc_llc_fill_ext_v2(struct smc_llc_msg_add_link_v2_ext *ext,
+ 		goto out;
+ 	buf_pos = smc_llc_get_first_rmb(lgr, &buf_lst);
+ 	for (i = 0; i < ext->num_rkeys; i++) {
++		while (buf_pos && !(buf_pos)->used)
++			buf_pos = smc_llc_get_next_rmb(lgr, &buf_lst, buf_pos);
+ 		if (!buf_pos)
+ 			break;
+ 		rmb = buf_pos;
+@@ -623,8 +628,6 @@ static int smc_llc_fill_ext_v2(struct smc_llc_msg_add_link_v2_ext *ext,
+ 			cpu_to_be64((uintptr_t)rmb->cpu_addr) :
+ 			cpu_to_be64((u64)sg_dma_address(rmb->sgt[lnk_idx].sgl));
+ 		buf_pos = smc_llc_get_next_rmb(lgr, &buf_lst, buf_pos);
+-		while (buf_pos && !(buf_pos)->used)
+-			buf_pos = smc_llc_get_next_rmb(lgr, &buf_lst, buf_pos);
+ 	}
+ 	len += i * sizeof(ext->rt[0]);
+ out:
+diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
+index bf2d2cdca1185..e2a94589dd5df 100644
+--- a/net/sunrpc/svcsock.c
++++ b/net/sunrpc/svcsock.c
+@@ -1340,25 +1340,10 @@ static struct svc_sock *svc_setup_socket(struct svc_serv *serv,
+ 	return svsk;
+ }
+ 
+-bool svc_alien_sock(struct net *net, int fd)
+-{
+-	int err;
+-	struct socket *sock = sockfd_lookup(fd, &err);
+-	bool ret = false;
+-
+-	if (!sock)
+-		goto out;
+-	if (sock_net(sock->sk) != net)
+-		ret = true;
+-	sockfd_put(sock);
+-out:
+-	return ret;
+-}
+-EXPORT_SYMBOL_GPL(svc_alien_sock);
+-
+ /**
+  * svc_addsock - add a listener socket to an RPC service
+  * @serv: pointer to RPC service to which to add a new listener
++ * @net: caller's network namespace
+  * @fd: file descriptor of the new listener
+  * @name_return: pointer to buffer to fill in with name of listener
+  * @len: size of the buffer
+@@ -1368,8 +1353,8 @@ EXPORT_SYMBOL_GPL(svc_alien_sock);
+  * Name is terminated with '\n'.  On error, returns a negative errno
+  * value.
+  */
+-int svc_addsock(struct svc_serv *serv, const int fd, char *name_return,
+-		const size_t len, const struct cred *cred)
++int svc_addsock(struct svc_serv *serv, struct net *net, const int fd,
++		char *name_return, const size_t len, const struct cred *cred)
+ {
+ 	int err = 0;
+ 	struct socket *so = sockfd_lookup(fd, &err);
+@@ -1380,6 +1365,9 @@ int svc_addsock(struct svc_serv *serv, const int fd, char *name_return,
+ 
+ 	if (!so)
+ 		return err;
++	err = -EINVAL;
++	if (sock_net(so->sk) != net)
++		goto out;
+ 	err = -EAFNOSUPPORT;
+ 	if ((so->sk->sk_family != PF_INET) && (so->sk->sk_family != PF_INET6))
+ 		goto out;
+diff --git a/net/tls/tls_strp.c b/net/tls/tls_strp.c
+index da95abbb7ea32..f37f4a0fcd3c2 100644
+--- a/net/tls/tls_strp.c
++++ b/net/tls/tls_strp.c
+@@ -20,7 +20,9 @@ static void tls_strp_abort_strp(struct tls_strparser *strp, int err)
+ 	strp->stopped = 1;
+ 
+ 	/* Report an error on the lower socket */
+-	strp->sk->sk_err = -err;
++	WRITE_ONCE(strp->sk->sk_err, -err);
++	/* Paired with smp_rmb() in tcp_poll() */
++	smp_wmb();
+ 	sk_error_report(strp->sk);
+ }
+ 
+diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
+index 6e6a7c37d685c..1a53c8f481e9a 100644
+--- a/net/tls/tls_sw.c
++++ b/net/tls/tls_sw.c
+@@ -70,7 +70,9 @@ noinline void tls_err_abort(struct sock *sk, int err)
+ {
+ 	WARN_ON_ONCE(err >= 0);
+ 	/* sk->sk_err should contain a positive error code. */
+-	sk->sk_err = -err;
++	WRITE_ONCE(sk->sk_err, -err);
++	/* Paired with smp_rmb() in tcp_poll() */
++	smp_wmb();
+ 	sk_error_report(sk);
+ }
+ 
+diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c
+index 21a3a1cd3d6de..6d15788b51231 100644
+--- a/net/xfrm/xfrm_policy.c
++++ b/net/xfrm/xfrm_policy.c
+@@ -3312,7 +3312,7 @@ xfrm_secpath_reject(int idx, struct sk_buff *skb, const struct flowi *fl)
+ 
+ static inline int
+ xfrm_state_ok(const struct xfrm_tmpl *tmpl, const struct xfrm_state *x,
+-	      unsigned short family)
++	      unsigned short family, u32 if_id)
+ {
+ 	if (xfrm_state_kern(x))
+ 		return tmpl->optional && !xfrm_state_addr_cmp(tmpl, x, tmpl->encap_family);
+@@ -3323,7 +3323,8 @@ xfrm_state_ok(const struct xfrm_tmpl *tmpl, const struct xfrm_state *x,
+ 		(tmpl->allalgs || (tmpl->aalgos & (1<<x->props.aalgo)) ||
+ 		 !(xfrm_id_proto_match(tmpl->id.proto, IPSEC_PROTO_ANY))) &&
+ 		!(x->props.mode != XFRM_MODE_TRANSPORT &&
+-		  xfrm_state_addr_cmp(tmpl, x, family));
++		  xfrm_state_addr_cmp(tmpl, x, family)) &&
++		(if_id == 0 || if_id == x->if_id);
+ }
+ 
+ /*
+@@ -3335,7 +3336,7 @@ xfrm_state_ok(const struct xfrm_tmpl *tmpl, const struct xfrm_state *x,
+  */
+ static inline int
+ xfrm_policy_ok(const struct xfrm_tmpl *tmpl, const struct sec_path *sp, int start,
+-	       unsigned short family)
++	       unsigned short family, u32 if_id)
+ {
+ 	int idx = start;
+ 
+@@ -3345,7 +3346,7 @@ xfrm_policy_ok(const struct xfrm_tmpl *tmpl, const struct sec_path *sp, int star
+ 	} else
+ 		start = -1;
+ 	for (; idx < sp->len; idx++) {
+-		if (xfrm_state_ok(tmpl, sp->xvec[idx], family))
++		if (xfrm_state_ok(tmpl, sp->xvec[idx], family, if_id))
+ 			return ++idx;
+ 		if (sp->xvec[idx]->props.mode != XFRM_MODE_TRANSPORT) {
+ 			if (start == -1)
+@@ -3724,7 +3725,7 @@ int __xfrm_policy_check(struct sock *sk, int dir, struct sk_buff *skb,
+ 		 * are implied between each two transformations.
+ 		 */
+ 		for (i = xfrm_nr-1, k = 0; i >= 0; i--) {
+-			k = xfrm_policy_ok(tpp[i], sp, k, family);
++			k = xfrm_policy_ok(tpp[i], sp, k, family, if_id);
+ 			if (k < 0) {
+ 				if (k < -1)
+ 					/* "-2 - errored_index" returned */
+diff --git a/security/selinux/Makefile b/security/selinux/Makefile
+index 0aecf9334ec31..8b21520bd4b9f 100644
+--- a/security/selinux/Makefile
++++ b/security/selinux/Makefile
+@@ -26,5 +26,9 @@ quiet_cmd_flask = GEN     $(obj)/flask.h $(obj)/av_permissions.h
+       cmd_flask = $< $(obj)/flask.h $(obj)/av_permissions.h
+ 
+ targets += flask.h av_permissions.h
+-$(obj)/flask.h $(obj)/av_permissions.h &: scripts/selinux/genheaders/genheaders FORCE
++# once make >= 4.3 is required, we can use grouped targets in the rule below,
++# which basically involves adding both headers and a '&' before the colon, see
++# the example below:
++#   $(obj)/flask.h $(obj)/av_permissions.h &: scripts/selinux/...
++$(obj)/flask.h: scripts/selinux/genheaders/genheaders FORCE
+ 	$(call if_changed,flask)
+diff --git a/sound/core/oss/pcm_plugin.h b/sound/core/oss/pcm_plugin.h
+index 46e273bd4a786..50a6b50f5db4c 100644
+--- a/sound/core/oss/pcm_plugin.h
++++ b/sound/core/oss/pcm_plugin.h
+@@ -141,6 +141,14 @@ int snd_pcm_area_copy(const struct snd_pcm_channel_area *src_channel,
+ 
+ void *snd_pcm_plug_buf_alloc(struct snd_pcm_substream *plug, snd_pcm_uframes_t size);
+ void snd_pcm_plug_buf_unlock(struct snd_pcm_substream *plug, void *ptr);
++#else
++
++static inline snd_pcm_sframes_t snd_pcm_plug_client_size(struct snd_pcm_substream *handle, snd_pcm_uframes_t drv_size) { return drv_size; }
++static inline snd_pcm_sframes_t snd_pcm_plug_slave_size(struct snd_pcm_substream *handle, snd_pcm_uframes_t clt_size) { return clt_size; }
++static inline int snd_pcm_plug_slave_format(int format, const struct snd_mask *format_mask) { return format; }
++
++#endif
++
+ snd_pcm_sframes_t snd_pcm_oss_write3(struct snd_pcm_substream *substream,
+ 				     const char *ptr, snd_pcm_uframes_t size,
+ 				     int in_kernel);
+@@ -151,14 +159,6 @@ snd_pcm_sframes_t snd_pcm_oss_writev3(struct snd_pcm_substream *substream,
+ snd_pcm_sframes_t snd_pcm_oss_readv3(struct snd_pcm_substream *substream,
+ 				     void **bufs, snd_pcm_uframes_t frames);
+ 
+-#else
+-
+-static inline snd_pcm_sframes_t snd_pcm_plug_client_size(struct snd_pcm_substream *handle, snd_pcm_uframes_t drv_size) { return drv_size; }
+-static inline snd_pcm_sframes_t snd_pcm_plug_slave_size(struct snd_pcm_substream *handle, snd_pcm_uframes_t clt_size) { return clt_size; }
+-static inline int snd_pcm_plug_slave_format(int format, const struct snd_mask *format_mask) { return format; }
+-
+-#endif
+-
+ #ifdef PLUGIN_DEBUG
+ #define pdprintf(fmt, args...) printk(KERN_DEBUG "plugin: " fmt, ##args)
+ #else
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 881b2f3a1551f..3226691ac923c 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -227,6 +227,7 @@ enum {
+ 	AZX_DRIVER_ATI,
+ 	AZX_DRIVER_ATIHDMI,
+ 	AZX_DRIVER_ATIHDMI_NS,
++	AZX_DRIVER_GFHDMI,
+ 	AZX_DRIVER_VIA,
+ 	AZX_DRIVER_SIS,
+ 	AZX_DRIVER_ULI,
+@@ -349,6 +350,7 @@ static const char * const driver_short_names[] = {
+ 	[AZX_DRIVER_ATI] = "HDA ATI SB",
+ 	[AZX_DRIVER_ATIHDMI] = "HDA ATI HDMI",
+ 	[AZX_DRIVER_ATIHDMI_NS] = "HDA ATI HDMI",
++	[AZX_DRIVER_GFHDMI] = "HDA GF HDMI",
+ 	[AZX_DRIVER_VIA] = "HDA VIA VT82xx",
+ 	[AZX_DRIVER_SIS] = "HDA SIS966",
+ 	[AZX_DRIVER_ULI] = "HDA ULI M5461",
+@@ -1743,6 +1745,12 @@ static int default_bdl_pos_adj(struct azx *chip)
+ 	}
+ 
+ 	switch (chip->driver_type) {
++	/*
++	 * increase the bdl size for Glenfly Gpus for hardware
++	 * limitation on hdac interrupt interval
++	 */
++	case AZX_DRIVER_GFHDMI:
++		return 128;
+ 	case AZX_DRIVER_ICH:
+ 	case AZX_DRIVER_PCH:
+ 		return 1;
+@@ -1858,6 +1866,12 @@ static int azx_first_init(struct azx *chip)
+ 		pci_write_config_dword(pci, PCI_BASE_ADDRESS_1, 0);
+ 	}
+ #endif
++	/*
++	 * Fix response write request not synced to memory when handle
++	 * hdac interrupt on Glenfly Gpus
++	 */
++	if (chip->driver_type == AZX_DRIVER_GFHDMI)
++		bus->polling_mode = 1;
+ 
+ 	err = pcim_iomap_regions(pci, 1 << 0, "ICH HD audio");
+ 	if (err < 0)
+@@ -1959,6 +1973,7 @@ static int azx_first_init(struct azx *chip)
+ 			chip->playback_streams = ATIHDMI_NUM_PLAYBACK;
+ 			chip->capture_streams = ATIHDMI_NUM_CAPTURE;
+ 			break;
++		case AZX_DRIVER_GFHDMI:
+ 		case AZX_DRIVER_GENERIC:
+ 		default:
+ 			chip->playback_streams = ICH6_NUM_PLAYBACK;
+@@ -2727,6 +2742,12 @@ static const struct pci_device_id azx_ids[] = {
+ 	{ PCI_DEVICE(0x1002, 0xab38),
+ 	  .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS |
+ 	  AZX_DCAPS_PM_RUNTIME },
++	/* GLENFLY */
++	{ PCI_DEVICE(0x6766, PCI_ANY_ID),
++	  .class = PCI_CLASS_MULTIMEDIA_HD_AUDIO << 8,
++	  .class_mask = 0xffffff,
++	  .driver_data = AZX_DRIVER_GFHDMI | AZX_DCAPS_POSFIX_LPIB |
++	  AZX_DCAPS_NO_MSI | AZX_DCAPS_NO_64BIT },
+ 	/* VIA VT8251/VT8237A */
+ 	{ PCI_DEVICE(0x1106, 0x3288), .driver_data = AZX_DRIVER_VIA },
+ 	/* VIA GFX VT7122/VX900 */
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index be2c6cff77011..7b5e09070ab9b 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -4489,6 +4489,22 @@ static int patch_via_hdmi(struct hda_codec *codec)
+ 	return patch_simple_hdmi(codec, VIAHDMI_CVT_NID, VIAHDMI_PIN_NID);
+ }
+ 
++static int patch_gf_hdmi(struct hda_codec *codec)
++{
++	int err;
++
++	err = patch_generic_hdmi(codec);
++	if (err)
++		return err;
++
++	/*
++	 * Glenfly GPUs have two codecs, stream switches from one codec to
++	 * another, need to do actual clean-ups in codec_cleanup_stream
++	 */
++	codec->no_sticky_stream = 1;
++	return 0;
++}
++
+ /*
+  * patch entries
+  */
+@@ -4584,6 +4600,12 @@ HDA_CODEC_ENTRY(0x10de00a6, "GPU a6 HDMI/DP",	patch_nvhdmi),
+ HDA_CODEC_ENTRY(0x10de00a7, "GPU a7 HDMI/DP",	patch_nvhdmi),
+ HDA_CODEC_ENTRY(0x10de8001, "MCP73 HDMI",	patch_nvhdmi_2ch),
+ HDA_CODEC_ENTRY(0x10de8067, "MCP67/68 HDMI",	patch_nvhdmi_2ch),
++HDA_CODEC_ENTRY(0x67663d82, "Arise 82 HDMI/DP",	patch_gf_hdmi),
++HDA_CODEC_ENTRY(0x67663d83, "Arise 83 HDMI/DP",	patch_gf_hdmi),
++HDA_CODEC_ENTRY(0x67663d84, "Arise 84 HDMI/DP",	patch_gf_hdmi),
++HDA_CODEC_ENTRY(0x67663d85, "Arise 85 HDMI/DP",	patch_gf_hdmi),
++HDA_CODEC_ENTRY(0x67663d86, "Arise 86 HDMI/DP",	patch_gf_hdmi),
++HDA_CODEC_ENTRY(0x67663d87, "Arise 87 HDMI/DP",	patch_gf_hdmi),
+ HDA_CODEC_ENTRY(0x11069f80, "VX900 HDMI/DP",	patch_via_hdmi),
+ HDA_CODEC_ENTRY(0x11069f81, "VX900 HDMI/DP",	patch_via_hdmi),
+ HDA_CODEC_ENTRY(0x11069f84, "VX11 HDMI/DP",	patch_generic_hdmi),
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 379f216158ab4..7b5f194513c7b 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -7063,6 +7063,8 @@ enum {
+ 	ALC225_FIXUP_DELL1_MIC_NO_PRESENCE,
+ 	ALC295_FIXUP_DISABLE_DAC3,
+ 	ALC285_FIXUP_SPEAKER2_TO_DAC1,
++	ALC285_FIXUP_ASUS_SPEAKER2_TO_DAC1,
++	ALC285_FIXUP_ASUS_HEADSET_MIC,
+ 	ALC280_FIXUP_HP_HEADSET_MIC,
+ 	ALC221_FIXUP_HP_FRONT_MIC,
+ 	ALC292_FIXUP_TPT460,
+@@ -8033,6 +8035,22 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.chained = true,
+ 		.chain_id = ALC269_FIXUP_THINKPAD_ACPI
+ 	},
++	[ALC285_FIXUP_ASUS_SPEAKER2_TO_DAC1] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc285_fixup_speaker2_to_dac1,
++		.chained = true,
++		.chain_id = ALC245_FIXUP_CS35L41_SPI_2
++	},
++	[ALC285_FIXUP_ASUS_HEADSET_MIC] = {
++		.type = HDA_FIXUP_PINS,
++		.v.pins = (const struct hda_pintbl[]) {
++			{ 0x19, 0x03a11050 },
++			{ 0x1b, 0x03a11c30 },
++			{ }
++		},
++		.chained = true,
++		.chain_id = ALC285_FIXUP_ASUS_SPEAKER2_TO_DAC1
++	},
+ 	[ALC256_FIXUP_DELL_INSPIRON_7559_SUBWOOFER] = {
+ 		.type = HDA_FIXUP_PINS,
+ 		.v.pins = (const struct hda_pintbl[]) {
+@@ -9507,6 +9525,8 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1043, 0x1313, "Asus K42JZ", ALC269VB_FIXUP_ASUS_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1043, 0x13b0, "ASUS Z550SA", ALC256_FIXUP_ASUS_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x1427, "Asus Zenbook UX31E", ALC269VB_FIXUP_ASUS_ZENBOOK),
++	SND_PCI_QUIRK(0x1043, 0x1473, "ASUS GU604V", ALC285_FIXUP_ASUS_HEADSET_MIC),
++	SND_PCI_QUIRK(0x1043, 0x1483, "ASUS GU603V", ALC285_FIXUP_ASUS_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x1517, "Asus Zenbook UX31A", ALC269VB_FIXUP_ASUS_ZENBOOK_UX31A),
+ 	SND_PCI_QUIRK(0x1043, 0x1662, "ASUS GV301QH", ALC294_FIXUP_ASUS_DUAL_SPK),
+ 	SND_PCI_QUIRK(0x1043, 0x1683, "ASUS UM3402YAR", ALC287_FIXUP_CS35L41_I2C_2),
+diff --git a/sound/soc/amd/yc/acp6x-mach.c b/sound/soc/amd/yc/acp6x-mach.c
+index b9958e5553674..84b401b685f7f 100644
+--- a/sound/soc/amd/yc/acp6x-mach.c
++++ b/sound/soc/amd/yc/acp6x-mach.c
+@@ -297,6 +297,13 @@ static const struct dmi_system_id yc_acp_quirk_table[] = {
+ 			DMI_MATCH(DMI_BOARD_NAME, "8A22"),
+ 		}
+ 	},
++	{
++		.driver_data = &acp6x_card,
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "System76"),
++			DMI_MATCH(DMI_PRODUCT_VERSION, "pang12"),
++		}
++	},
+ 	{}
+ };
+ 
+diff --git a/sound/soc/codecs/ssm2602.c b/sound/soc/codecs/ssm2602.c
+index cbbe83b85adaf..cf7927222be15 100644
+--- a/sound/soc/codecs/ssm2602.c
++++ b/sound/soc/codecs/ssm2602.c
+@@ -53,6 +53,18 @@ static const struct reg_default ssm2602_reg[SSM2602_CACHEREGNUM] = {
+ 	{ .reg = 0x09, .def = 0x0000 }
+ };
+ 
++/*
++ * ssm2602 register patch
++ * Workaround for playback distortions after power up: activates digital
++ * core, and then powers on output, DAC, and whole chip at the same time
++ */
++
++static const struct reg_sequence ssm2602_patch[] = {
++	{ SSM2602_ACTIVE, 0x01 },
++	{ SSM2602_PWR,    0x07 },
++	{ SSM2602_RESET,  0x00 },
++};
++
+ 
+ /*Appending several "None"s just for OSS mixer use*/
+ static const char *ssm2602_input_select[] = {
+@@ -589,6 +601,9 @@ static int ssm260x_component_probe(struct snd_soc_component *component)
+ 		return ret;
+ 	}
+ 
++	regmap_register_patch(ssm2602->regmap, ssm2602_patch,
++			      ARRAY_SIZE(ssm2602_patch));
++
+ 	/* set the update bits */
+ 	regmap_update_bits(ssm2602->regmap, SSM2602_LINVOL,
+ 			    LINVOL_LRIN_BOTH, LINVOL_LRIN_BOTH);
+diff --git a/sound/soc/dwc/dwc-i2s.c b/sound/soc/dwc/dwc-i2s.c
+index 7f7dd07c63b2f..3496301582b22 100644
+--- a/sound/soc/dwc/dwc-i2s.c
++++ b/sound/soc/dwc/dwc-i2s.c
+@@ -132,13 +132,13 @@ static irqreturn_t i2s_irq_handler(int irq, void *dev_id)
+ 
+ 		/* Error Handling: TX */
+ 		if (isr[i] & ISR_TXFO) {
+-			dev_err(dev->dev, "TX overrun (ch_id=%d)\n", i);
++			dev_err_ratelimited(dev->dev, "TX overrun (ch_id=%d)\n", i);
+ 			irq_valid = true;
+ 		}
+ 
+ 		/* Error Handling: TX */
+ 		if (isr[i] & ISR_RXFO) {
+-			dev_err(dev->dev, "RX overrun (ch_id=%d)\n", i);
++			dev_err_ratelimited(dev->dev, "RX overrun (ch_id=%d)\n", i);
+ 			irq_valid = true;
+ 		}
+ 	}
+diff --git a/sound/soc/intel/common/soc-acpi-intel-cht-match.c b/sound/soc/intel/common/soc-acpi-intel-cht-match.c
+index 6beb00858c33f..cdcbf04b8832f 100644
+--- a/sound/soc/intel/common/soc-acpi-intel-cht-match.c
++++ b/sound/soc/intel/common/soc-acpi-intel-cht-match.c
+@@ -50,6 +50,31 @@ static struct snd_soc_acpi_mach *cht_quirk(void *arg)
+ 		return mach;
+ }
+ 
++/*
++ * Some tablets with Android factory OS have buggy DSDTs with an ESSX8316 device
++ * in the ACPI tables. While they are not using an ESS8316 codec. These DSDTs
++ * also have an ACPI device for the correct codec, ignore the ESSX8316.
++ */
++static const struct dmi_system_id cht_ess8316_not_present_table[] = {
++	{
++		/* Nextbook Ares 8A */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Insyde"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "CherryTrail"),
++			DMI_MATCH(DMI_BIOS_VERSION, "M882"),
++		},
++	},
++	{ }
++};
++
++static struct snd_soc_acpi_mach *cht_ess8316_quirk(void *arg)
++{
++	if (dmi_check_system(cht_ess8316_not_present_table))
++		return NULL;
++
++	return arg;
++}
++
+ static const struct snd_soc_acpi_codecs rt5640_comp_ids = {
+ 	.num_codecs = 2,
+ 	.codecs = { "10EC5640", "10EC3276" },
+@@ -113,6 +138,7 @@ struct snd_soc_acpi_mach  snd_soc_acpi_intel_cherrytrail_machines[] = {
+ 		.drv_name = "bytcht_es8316",
+ 		.fw_filename = "intel/fw_sst_22a8.bin",
+ 		.board = "bytcht_es8316",
++		.machine_quirk = cht_ess8316_quirk,
+ 		.sof_tplg_filename = "sof-cht-es8316.tplg",
+ 	},
+ 	/* some CHT-T platforms rely on RT5640, use Baytrail machine driver */
+diff --git a/sound/soc/jz4740/jz4740-i2s.c b/sound/soc/jz4740/jz4740-i2s.c
+index 6d9cfe0a50411..d0f6c945d9aee 100644
+--- a/sound/soc/jz4740/jz4740-i2s.c
++++ b/sound/soc/jz4740/jz4740-i2s.c
+@@ -218,18 +218,48 @@ static int jz4740_i2s_set_fmt(struct snd_soc_dai *dai, unsigned int fmt)
+ 	return 0;
+ }
+ 
++static int jz4740_i2s_get_i2sdiv(unsigned long mclk, unsigned long rate,
++				 unsigned long i2sdiv_max)
++{
++	unsigned long div, rate1, rate2, err1, err2;
++
++	div = mclk / (64 * rate);
++	if (div == 0)
++		div = 1;
++
++	rate1 = mclk / (64 * div);
++	rate2 = mclk / (64 * (div + 1));
++
++	err1 = abs(rate1 - rate);
++	err2 = abs(rate2 - rate);
++
++	/*
++	 * Choose the divider that produces the smallest error in the
++	 * output rate and reject dividers with a 5% or higher error.
++	 * In the event that both dividers are outside the acceptable
++	 * error margin, reject the rate to prevent distorted audio.
++	 * (The number 5% is arbitrary.)
++	 */
++	if (div <= i2sdiv_max && err1 <= err2 && err1 < rate/20)
++		return div;
++	if (div < i2sdiv_max && err2 < rate/20)
++		return div + 1;
++
++	return -EINVAL;
++}
++
+ static int jz4740_i2s_hw_params(struct snd_pcm_substream *substream,
+ 	struct snd_pcm_hw_params *params, struct snd_soc_dai *dai)
+ {
+ 	struct jz4740_i2s *i2s = snd_soc_dai_get_drvdata(dai);
+ 	struct regmap_field *div_field;
++	unsigned long i2sdiv_max;
+ 	unsigned int sample_size;
+-	uint32_t ctrl;
+-	int div;
++	uint32_t ctrl, conf;
++	int div = 1;
+ 
+ 	regmap_read(i2s->regmap, JZ_REG_AIC_CTRL, &ctrl);
+-
+-	div = clk_get_rate(i2s->clk_i2s) / (64 * params_rate(params));
++	regmap_read(i2s->regmap, JZ_REG_AIC_CONF, &conf);
+ 
+ 	switch (params_format(params)) {
+ 	case SNDRV_PCM_FORMAT_S8:
+@@ -258,11 +288,27 @@ static int jz4740_i2s_hw_params(struct snd_pcm_substream *substream,
+ 			ctrl &= ~JZ_AIC_CTRL_MONO_TO_STEREO;
+ 
+ 		div_field = i2s->field_i2sdiv_playback;
++		i2sdiv_max = GENMASK(i2s->soc_info->field_i2sdiv_playback.msb,
++				     i2s->soc_info->field_i2sdiv_playback.lsb);
+ 	} else {
+ 		ctrl &= ~JZ_AIC_CTRL_INPUT_SAMPLE_SIZE;
+ 		ctrl |= FIELD_PREP(JZ_AIC_CTRL_INPUT_SAMPLE_SIZE, sample_size);
+ 
+ 		div_field = i2s->field_i2sdiv_capture;
++		i2sdiv_max = GENMASK(i2s->soc_info->field_i2sdiv_capture.msb,
++				     i2s->soc_info->field_i2sdiv_capture.lsb);
++	}
++
++	/*
++	 * Only calculate I2SDIV if we're supplying the bit or frame clock.
++	 * If the codec is supplying both clocks then the divider output is
++	 * unused, and we don't want it to limit the allowed sample rates.
++	 */
++	if (conf & (JZ_AIC_CONF_BIT_CLK_MASTER | JZ_AIC_CONF_SYNC_CLK_MASTER)) {
++		div = jz4740_i2s_get_i2sdiv(clk_get_rate(i2s->clk_i2s),
++					    params_rate(params), i2sdiv_max);
++		if (div < 0)
++			return div;
+ 	}
+ 
+ 	regmap_write(i2s->regmap, JZ_REG_AIC_CTRL, ctrl);
+diff --git a/sound/soc/sof/amd/acp-ipc.c b/sound/soc/sof/amd/acp-ipc.c
+index 4e0c48a361599..749e856dc6011 100644
+--- a/sound/soc/sof/amd/acp-ipc.c
++++ b/sound/soc/sof/amd/acp-ipc.c
+@@ -209,7 +209,12 @@ int acp_sof_ipc_msg_data(struct snd_sof_dev *sdev, struct snd_sof_pcm_stream *sp
+ 		acp_mailbox_read(sdev, offset, p, sz);
+ 	} else {
+ 		struct snd_pcm_substream *substream = sps->substream;
+-		struct acp_dsp_stream *stream = substream->runtime->private_data;
++		struct acp_dsp_stream *stream;
++
++		if (!substream || !substream->runtime)
++			return -ESTRPIPE;
++
++		stream = substream->runtime->private_data;
+ 
+ 		if (!stream)
+ 			return -ESTRPIPE;
+diff --git a/sound/soc/sof/debug.c b/sound/soc/sof/debug.c
+index ade0507328af4..5042312b1b98d 100644
+--- a/sound/soc/sof/debug.c
++++ b/sound/soc/sof/debug.c
+@@ -437,8 +437,8 @@ void snd_sof_handle_fw_exception(struct snd_sof_dev *sdev, const char *msg)
+ 		/* should we prevent DSP entering D3 ? */
+ 		if (!sdev->ipc_dump_printed)
+ 			dev_info(sdev->dev,
+-				 "preventing DSP entering D3 state to preserve context\n");
+-		pm_runtime_get_noresume(sdev->dev);
++				 "Attempting to prevent DSP from entering D3 state to preserve context\n");
++		pm_runtime_get_if_in_use(sdev->dev);
+ 	}
+ 
+ 	/* dump vital information to the logs */
+diff --git a/sound/soc/sof/pcm.c b/sound/soc/sof/pcm.c
+index 445acb5c3a21b..2570f33db9f3e 100644
+--- a/sound/soc/sof/pcm.c
++++ b/sound/soc/sof/pcm.c
+@@ -616,16 +616,17 @@ static int sof_pcm_probe(struct snd_soc_component *component)
+ 				       "%s/%s",
+ 				       plat_data->tplg_filename_prefix,
+ 				       plat_data->tplg_filename);
+-	if (!tplg_filename)
+-		return -ENOMEM;
++	if (!tplg_filename) {
++		ret = -ENOMEM;
++		goto pm_error;
++	}
+ 
+ 	ret = snd_sof_load_topology(component, tplg_filename);
+-	if (ret < 0) {
++	if (ret < 0)
+ 		dev_err(component->dev, "error: failed to load DSP topology %d\n",
+ 			ret);
+-		return ret;
+-	}
+ 
++pm_error:
+ 	pm_runtime_mark_last_busy(component->dev);
+ 	pm_runtime_put_autosuspend(component->dev);
+ 
+diff --git a/sound/soc/sof/pm.c b/sound/soc/sof/pm.c
+index 85412aeb1ca16..40f392efd8246 100644
+--- a/sound/soc/sof/pm.c
++++ b/sound/soc/sof/pm.c
+@@ -159,7 +159,7 @@ static int sof_resume(struct device *dev, bool runtime_resume)
+ 		ret = tplg_ops->set_up_all_pipelines(sdev, false);
+ 		if (ret < 0) {
+ 			dev_err(sdev->dev, "Failed to restore pipeline after resume %d\n", ret);
+-			return ret;
++			goto setup_fail;
+ 		}
+ 	}
+ 
+@@ -173,6 +173,18 @@ static int sof_resume(struct device *dev, bool runtime_resume)
+ 			dev_err(sdev->dev, "ctx_restore IPC error during resume: %d\n", ret);
+ 	}
+ 
++setup_fail:
++#if IS_ENABLED(CONFIG_SND_SOC_SOF_DEBUG_ENABLE_DEBUGFS_CACHE)
++	if (ret < 0) {
++		/*
++		 * Debugfs cannot be read in runtime suspend, so cache
++		 * the contents upon failure. This allows to capture
++		 * possible DSP coredump information.
++		 */
++		sof_cache_debugfs(sdev);
++	}
++#endif
++
+ 	return ret;
+ }
+ 
+diff --git a/sound/soc/sof/sof-client-probes.c b/sound/soc/sof/sof-client-probes.c
+index fff126808bc04..8d9e9d5f40e45 100644
+--- a/sound/soc/sof/sof-client-probes.c
++++ b/sound/soc/sof/sof-client-probes.c
+@@ -218,12 +218,7 @@ static ssize_t sof_probes_dfs_points_read(struct file *file, char __user *to,
+ 
+ 	ret = ipc->points_info(cdev, &desc, &num_desc);
+ 	if (ret < 0)
+-		goto exit;
+-
+-	pm_runtime_mark_last_busy(dev);
+-	err = pm_runtime_put_autosuspend(dev);
+-	if (err < 0)
+-		dev_err_ratelimited(dev, "debugfs read failed to idle %d\n", err);
++		goto pm_error;
+ 
+ 	for (i = 0; i < num_desc; i++) {
+ 		offset = strlen(buf);
+@@ -241,6 +236,13 @@ static ssize_t sof_probes_dfs_points_read(struct file *file, char __user *to,
+ 	ret = simple_read_from_buffer(to, count, ppos, buf, strlen(buf));
+ 
+ 	kfree(desc);
++
++pm_error:
++	pm_runtime_mark_last_busy(dev);
++	err = pm_runtime_put_autosuspend(dev);
++	if (err < 0)
++		dev_err_ratelimited(dev, "debugfs read failed to idle %d\n", err);
++
+ exit:
+ 	kfree(buf);
+ 	return ret;
+diff --git a/tools/perf/builtin-ftrace.c b/tools/perf/builtin-ftrace.c
+index fb1b66ef2e167..ce482ef58e6f2 100644
+--- a/tools/perf/builtin-ftrace.c
++++ b/tools/perf/builtin-ftrace.c
+@@ -1175,7 +1175,7 @@ int cmd_ftrace(int argc, const char **argv)
+ 	OPT_BOOLEAN('b', "use-bpf", &ftrace.target.use_bpf,
+ 		    "Use BPF to measure function latency"),
+ #endif
+-	OPT_BOOLEAN('n', "--use-nsec", &ftrace.use_nsec,
++	OPT_BOOLEAN('n', "use-nsec", &ftrace.use_nsec,
+ 		    "Use nano-second histogram"),
+ 	OPT_PARENT(common_options),
+ 	};
+diff --git a/tools/power/cpupower/lib/powercap.c b/tools/power/cpupower/lib/powercap.c
+index 0ce29ee4c2e46..a7a59c6bacda8 100644
+--- a/tools/power/cpupower/lib/powercap.c
++++ b/tools/power/cpupower/lib/powercap.c
+@@ -40,25 +40,34 @@ static int sysfs_get_enabled(char *path, int *mode)
+ {
+ 	int fd;
+ 	char yes_no;
++	int ret = 0;
+ 
+ 	*mode = 0;
+ 
+ 	fd = open(path, O_RDONLY);
+-	if (fd == -1)
+-		return -1;
++	if (fd == -1) {
++		ret = -1;
++		goto out;
++	}
+ 
+ 	if (read(fd, &yes_no, 1) != 1) {
+-		close(fd);
+-		return -1;
++		ret = -1;
++		goto out_close;
+ 	}
+ 
+ 	if (yes_no == '1') {
+ 		*mode = 1;
+-		return 0;
++		goto out_close;
+ 	} else if (yes_no == '0') {
+-		return 0;
++		goto out_close;
++	} else {
++		ret = -1;
++		goto out_close;
+ 	}
+-	return -1;
++out_close:
++	close(fd);
++out:
++	return ret;
+ }
+ 
+ int powercap_get_enabled(int *mode)
+diff --git a/tools/testing/selftests/ftrace/test.d/filter/event-filter-function.tc b/tools/testing/selftests/ftrace/test.d/filter/event-filter-function.tc
+index e2ff3bf4df80f..2de7c61d1ae30 100644
+--- a/tools/testing/selftests/ftrace/test.d/filter/event-filter-function.tc
++++ b/tools/testing/selftests/ftrace/test.d/filter/event-filter-function.tc
+@@ -9,18 +9,33 @@ fail() { #msg
+     exit_fail
+ }
+ 
+-echo "Test event filter function name"
++sample_events() {
++    echo > trace
++    echo 1 > events/kmem/kmem_cache_free/enable
++    echo 1 > tracing_on
++    ls > /dev/null
++    echo 0 > tracing_on
++    echo 0 > events/kmem/kmem_cache_free/enable
++}
++
+ echo 0 > tracing_on
+ echo 0 > events/enable
++
++echo "Get the most frequently calling function"
++sample_events
++
++target_func=`cut -d: -f3 trace | sed 's/call_site=\([^+]*\)+0x.*/\1/' | sort | uniq -c | sort | tail -n 1 | sed 's/^[ 0-9]*//'`
++if [ -z "$target_func" ]; then
++    exit_fail
++fi
+ echo > trace
+-echo 'call_site.function == exit_mmap' > events/kmem/kmem_cache_free/filter
+-echo 1 > events/kmem/kmem_cache_free/enable
+-echo 1 > tracing_on
+-ls > /dev/null
+-echo 0 > events/kmem/kmem_cache_free/enable
+ 
+-hitcnt=`grep kmem_cache_free trace| grep exit_mmap | wc -l`
+-misscnt=`grep kmem_cache_free trace| grep -v exit_mmap | wc -l`
++echo "Test event filter function name"
++echo "call_site.function == $target_func" > events/kmem/kmem_cache_free/filter
++sample_events
++
++hitcnt=`grep kmem_cache_free trace| grep $target_func | wc -l`
++misscnt=`grep kmem_cache_free trace| grep -v $target_func | wc -l`
+ 
+ if [ $hitcnt -eq 0 ]; then
+ 	exit_fail
+@@ -30,20 +45,14 @@ if [ $misscnt -gt 0 ]; then
+ 	exit_fail
+ fi
+ 
+-address=`grep ' exit_mmap$' /proc/kallsyms | cut -d' ' -f1`
++address=`grep " ${target_func}\$" /proc/kallsyms | cut -d' ' -f1`
+ 
+ echo "Test event filter function address"
+-echo 0 > tracing_on
+-echo 0 > events/enable
+-echo > trace
+ echo "call_site.function == 0x$address" > events/kmem/kmem_cache_free/filter
+-echo 1 > events/kmem/kmem_cache_free/enable
+-echo 1 > tracing_on
+-sleep 1
+-echo 0 > events/kmem/kmem_cache_free/enable
++sample_events
+ 
+-hitcnt=`grep kmem_cache_free trace| grep exit_mmap | wc -l`
+-misscnt=`grep kmem_cache_free trace| grep -v exit_mmap | wc -l`
++hitcnt=`grep kmem_cache_free trace| grep $target_func | wc -l`
++misscnt=`grep kmem_cache_free trace| grep -v $target_func | wc -l`
+ 
+ if [ $hitcnt -eq 0 ]; then
+ 	exit_fail
+diff --git a/tools/testing/selftests/net/mptcp/Makefile b/tools/testing/selftests/net/mptcp/Makefile
+index 43a7236261261..7b936a9268594 100644
+--- a/tools/testing/selftests/net/mptcp/Makefile
++++ b/tools/testing/selftests/net/mptcp/Makefile
+@@ -9,7 +9,7 @@ TEST_PROGS := mptcp_connect.sh pm_netlink.sh mptcp_join.sh diag.sh \
+ 
+ TEST_GEN_FILES = mptcp_connect pm_nl_ctl mptcp_sockopt mptcp_inq
+ 
+-TEST_FILES := settings
++TEST_FILES := mptcp_lib.sh settings
+ 
+ EXTRA_CLEAN := *.pcap
+ 
+diff --git a/tools/testing/selftests/net/mptcp/diag.sh b/tools/testing/selftests/net/mptcp/diag.sh
+index ef628b16fe9b4..4eacdb1ab9628 100755
+--- a/tools/testing/selftests/net/mptcp/diag.sh
++++ b/tools/testing/selftests/net/mptcp/diag.sh
+@@ -1,6 +1,8 @@
+ #!/bin/bash
+ # SPDX-License-Identifier: GPL-2.0
+ 
++. "$(dirname "${0}")/mptcp_lib.sh"
++
+ sec=$(date +%s)
+ rndh=$(printf %x $sec)-$(mktemp -u XXXXXX)
+ ns="ns1-$rndh"
+@@ -31,6 +33,8 @@ cleanup()
+ 	ip netns del $ns
+ }
+ 
++mptcp_lib_check_mptcp
++
+ ip -Version > /dev/null 2>&1
+ if [ $? -ne 0 ];then
+ 	echo "SKIP: Could not run test without ip tool"
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_connect.sh b/tools/testing/selftests/net/mptcp/mptcp_connect.sh
+index a43d3e2f59bbe..c1f7bac199423 100755
+--- a/tools/testing/selftests/net/mptcp/mptcp_connect.sh
++++ b/tools/testing/selftests/net/mptcp/mptcp_connect.sh
+@@ -1,6 +1,8 @@
+ #!/bin/bash
+ # SPDX-License-Identifier: GPL-2.0
+ 
++. "$(dirname "${0}")/mptcp_lib.sh"
++
+ time_start=$(date +%s)
+ 
+ optstring="S:R:d:e:l:r:h4cm:f:tC"
+@@ -141,6 +143,8 @@ cleanup()
+ 	done
+ }
+ 
++mptcp_lib_check_mptcp
++
+ ip -Version > /dev/null 2>&1
+ if [ $? -ne 0 ];then
+ 	echo "SKIP: Could not run test without ip tool"
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_join.sh b/tools/testing/selftests/net/mptcp/mptcp_join.sh
+index 42e3bd1a05f56..7c20811ab64bb 100755
+--- a/tools/testing/selftests/net/mptcp/mptcp_join.sh
++++ b/tools/testing/selftests/net/mptcp/mptcp_join.sh
+@@ -6,6 +6,8 @@
+ # address all other issues detected by shellcheck.
+ #shellcheck disable=SC2086
+ 
++. "$(dirname "${0}")/mptcp_lib.sh"
++
+ ret=0
+ sin=""
+ sinfail=""
+@@ -13,6 +15,7 @@ sout=""
+ cin=""
+ cinfail=""
+ cinsent=""
++tmpfile=""
+ cout=""
+ capout=""
+ ns1=""
+@@ -132,6 +135,8 @@ cleanup_partial()
+ 
+ check_tools()
+ {
++	mptcp_lib_check_mptcp
++
+ 	if ! ip -Version &> /dev/null; then
+ 		echo "SKIP: Could not run test without ip tool"
+ 		exit $ksft_skip
+@@ -171,6 +176,7 @@ cleanup()
+ {
+ 	rm -f "$cin" "$cout" "$sinfail"
+ 	rm -f "$sin" "$sout" "$cinsent" "$cinfail"
++	rm -f "$tmpfile"
+ 	rm -rf $evts_ns1 $evts_ns2
+ 	cleanup_partial
+ }
+@@ -378,9 +384,16 @@ check_transfer()
+ 			fail_test
+ 			return 1
+ 		fi
+-		bytes="--bytes=${bytes}"
++
++		# note: BusyBox's "cmp" command doesn't support --bytes
++		tmpfile=$(mktemp)
++		head --bytes="$bytes" "$in" > "$tmpfile"
++		mv "$tmpfile" "$in"
++		head --bytes="$bytes" "$out" > "$tmpfile"
++		mv "$tmpfile" "$out"
++		tmpfile=""
+ 	fi
+-	cmp -l "$in" "$out" ${bytes} | while read -r i a b; do
++	cmp -l "$in" "$out" | while read -r i a b; do
+ 		local sum=$((0${a} + 0${b}))
+ 		if [ $check_invert -eq 0 ] || [ $sum -ne $((0xff)) ]; then
+ 			echo "[ FAIL ] $what does not match (in, out):"
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_lib.sh b/tools/testing/selftests/net/mptcp/mptcp_lib.sh
+new file mode 100644
+index 0000000000000..3286536b79d55
+--- /dev/null
++++ b/tools/testing/selftests/net/mptcp/mptcp_lib.sh
+@@ -0,0 +1,40 @@
++#! /bin/bash
++# SPDX-License-Identifier: GPL-2.0
++
++readonly KSFT_FAIL=1
++readonly KSFT_SKIP=4
++
++# SELFTESTS_MPTCP_LIB_EXPECT_ALL_FEATURES env var can be set when validating all
++# features using the last version of the kernel and the selftests to make sure
++# a test is not being skipped by mistake.
++mptcp_lib_expect_all_features() {
++	[ "${SELFTESTS_MPTCP_LIB_EXPECT_ALL_FEATURES:-}" = "1" ]
++}
++
++# $1: msg
++mptcp_lib_fail_if_expected_feature() {
++	if mptcp_lib_expect_all_features; then
++		echo "ERROR: missing feature: ${*}"
++		exit ${KSFT_FAIL}
++	fi
++
++	return 1
++}
++
++# $1: file
++mptcp_lib_has_file() {
++	local f="${1}"
++
++	if [ -f "${f}" ]; then
++		return 0
++	fi
++
++	mptcp_lib_fail_if_expected_feature "${f} file not found"
++}
++
++mptcp_lib_check_mptcp() {
++	if ! mptcp_lib_has_file "/proc/sys/net/mptcp/enabled"; then
++		echo "SKIP: MPTCP support is not available"
++		exit ${KSFT_SKIP}
++	fi
++}
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_sockopt.sh b/tools/testing/selftests/net/mptcp/mptcp_sockopt.sh
+index 1b70c0a304cef..ff5adbb9c7f2b 100755
+--- a/tools/testing/selftests/net/mptcp/mptcp_sockopt.sh
++++ b/tools/testing/selftests/net/mptcp/mptcp_sockopt.sh
+@@ -1,6 +1,8 @@
+ #!/bin/bash
+ # SPDX-License-Identifier: GPL-2.0
+ 
++. "$(dirname "${0}")/mptcp_lib.sh"
++
+ ret=0
+ sin=""
+ sout=""
+@@ -84,6 +86,8 @@ cleanup()
+ 	rm -f "$sin" "$sout"
+ }
+ 
++mptcp_lib_check_mptcp
++
+ ip -Version > /dev/null 2>&1
+ if [ $? -ne 0 ];then
+ 	echo "SKIP: Could not run test without ip tool"
+diff --git a/tools/testing/selftests/net/mptcp/pm_netlink.sh b/tools/testing/selftests/net/mptcp/pm_netlink.sh
+index 89839d1ff9d83..32f7533e0919a 100755
+--- a/tools/testing/selftests/net/mptcp/pm_netlink.sh
++++ b/tools/testing/selftests/net/mptcp/pm_netlink.sh
+@@ -1,6 +1,8 @@
+ #!/bin/bash
+ # SPDX-License-Identifier: GPL-2.0
+ 
++. "$(dirname "${0}")/mptcp_lib.sh"
++
+ ksft_skip=4
+ ret=0
+ 
+@@ -34,6 +36,8 @@ cleanup()
+ 	ip netns del $ns1
+ }
+ 
++mptcp_lib_check_mptcp
++
+ ip -Version > /dev/null 2>&1
+ if [ $? -ne 0 ];then
+ 	echo "SKIP: Could not run test without ip tool"
+diff --git a/tools/testing/selftests/net/mptcp/simult_flows.sh b/tools/testing/selftests/net/mptcp/simult_flows.sh
+index 9f22f7e5027df..36a3c9d92e205 100755
+--- a/tools/testing/selftests/net/mptcp/simult_flows.sh
++++ b/tools/testing/selftests/net/mptcp/simult_flows.sh
+@@ -1,6 +1,8 @@
+ #!/bin/bash
+ # SPDX-License-Identifier: GPL-2.0
+ 
++. "$(dirname "${0}")/mptcp_lib.sh"
++
+ sec=$(date +%s)
+ rndh=$(printf %x $sec)-$(mktemp -u XXXXXX)
+ ns1="ns1-$rndh"
+@@ -34,6 +36,8 @@ cleanup()
+ 	done
+ }
+ 
++mptcp_lib_check_mptcp
++
+ ip -Version > /dev/null 2>&1
+ if [ $? -ne 0 ];then
+ 	echo "SKIP: Could not run test without ip tool"
+diff --git a/tools/testing/selftests/net/mptcp/userspace_pm.sh b/tools/testing/selftests/net/mptcp/userspace_pm.sh
+index b1eb7bce599dc..8092399d911f1 100755
+--- a/tools/testing/selftests/net/mptcp/userspace_pm.sh
++++ b/tools/testing/selftests/net/mptcp/userspace_pm.sh
+@@ -1,6 +1,10 @@
+ #!/bin/bash
+ # SPDX-License-Identifier: GPL-2.0
+ 
++. "$(dirname "${0}")/mptcp_lib.sh"
++
++mptcp_lib_check_mptcp
++
+ ip -Version > /dev/null 2>&1
+ if [ $? -ne 0 ];then
+ 	echo "SKIP: Cannot not run test without ip tool"


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [gentoo-commits] proj/linux-patches:6.3 commit in: /
@ 2023-06-09 12:04 Mike Pagano
  0 siblings, 0 replies; 23+ messages in thread
From: Mike Pagano @ 2023-06-09 12:04 UTC (permalink / raw
  To: gentoo-commits

commit:     e27c6f924be2191404e2c08b591d72c9a7c6bea0
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Jun  9 12:04:08 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Jun  9 12:04:08 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=e27c6f92

Remove redundant patch

Removed:
2100_io-uring-undeprecate-epoll-ctl-support.patch

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                       |  4 ----
 2100_io-uring-undeprecate-epoll-ctl-support.patch | 21 ---------------------
 2 files changed, 25 deletions(-)

diff --git a/0000_README b/0000_README
index 5091043c..ac375662 100644
--- a/0000_README
+++ b/0000_README
@@ -87,10 +87,6 @@ Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
 From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
 Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
 
-Patch:  2100_io-uring-undeprecate-epoll-ctl-support.patch
-From:   https://patchwork.kernel.org/project/io-uring/patch/20230506095502.13401-1-info@bnoordhuis.nl/
-Desc:   io_uring: undeprecate epoll_ctl support
-
 Patch:  2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
 From:   https://bugs.gentoo.org/710790
 Desc:   tmp513 requies REGMAP_I2C to build.  Select it by default in Kconfig. See bug #710790. Thanks to Phil Stracchino

diff --git a/2100_io-uring-undeprecate-epoll-ctl-support.patch b/2100_io-uring-undeprecate-epoll-ctl-support.patch
deleted file mode 100644
index 4c3d3904..00000000
--- a/2100_io-uring-undeprecate-epoll-ctl-support.patch
+++ /dev/null
@@ -1,21 +0,0 @@
-io_uring: undeprecate epoll_ctl support
-
----
- io_uring/epoll.c | 4 ----
- 1 file changed, 4 deletions(-)
-
-diff --git a/io_uring/epoll.c b/io_uring/epoll.c
-index 9aa74d2c80bc..89bff2068a19 100644
---- a/io_uring/epoll.c
-+++ b/io_uring/epoll.c
-@@ -25,10 +25,6 @@ int io_epoll_ctl_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
- {
- 	struct io_epoll *epoll = io_kiocb_to_cmd(req, struct io_epoll);
- 
--	pr_warn_once("%s: epoll_ctl support in io_uring is deprecated and will "
--		     "be removed in a future Linux kernel version.\n",
--		     current->comm);
--
- 	if (sqe->buf_index || sqe->splice_fd_in)
- 		return -EINVAL;
- 


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [gentoo-commits] proj/linux-patches:6.3 commit in: /
@ 2023-06-14 10:16 Mike Pagano
  0 siblings, 0 replies; 23+ messages in thread
From: Mike Pagano @ 2023-06-14 10:16 UTC (permalink / raw
  To: gentoo-commits

commit:     e53984ce4fd0952e980a7418817445b3725c8096
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Jun 14 10:16:42 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Jun 14 10:16:42 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=e53984ce

Linux patch 6.3.8

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README            |    4 +
 1007_linux-6.3.8.patch | 7325 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 7329 insertions(+)

diff --git a/0000_README b/0000_README
index ac375662..5d0c85ce 100644
--- a/0000_README
+++ b/0000_README
@@ -71,6 +71,10 @@ Patch:  1006_linux-6.3.7.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.3.7
 
+Patch:  1007_linux-6.3.8.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.3.8
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1007_linux-6.3.8.patch b/1007_linux-6.3.8.patch
new file mode 100644
index 00000000..85b81339
--- /dev/null
+++ b/1007_linux-6.3.8.patch
@@ -0,0 +1,7325 @@
+diff --git a/Documentation/mm/page_table_check.rst b/Documentation/mm/page_table_check.rst
+index cfd8f4117cf3e..c12838ce6b8de 100644
+--- a/Documentation/mm/page_table_check.rst
++++ b/Documentation/mm/page_table_check.rst
+@@ -52,3 +52,22 @@ Build kernel with:
+ 
+ Optionally, build kernel with PAGE_TABLE_CHECK_ENFORCED in order to have page
+ table support without extra kernel parameter.
++
++Implementation notes
++====================
++
++We specifically decided not to use VMA information in order to avoid relying on
++MM states (except for limited "struct page" info). The page table check is a
++separate from Linux-MM state machine that verifies that the user accessible
++pages are not falsely shared.
++
++PAGE_TABLE_CHECK depends on EXCLUSIVE_SYSTEM_RAM. The reason is that without
++EXCLUSIVE_SYSTEM_RAM, users are allowed to map arbitrary physical memory
++regions into the userspace via /dev/mem. At the same time, pages may change
++their properties (e.g., from anonymous pages to named pages) while they are
++still being mapped in the userspace, leading to "corruption" detected by the
++page table check.
++
++Even with EXCLUSIVE_SYSTEM_RAM, I/O pages may be still allowed to be mapped via
++/dev/mem. However, these pages are always considered as named pages, so they
++won't break the logic used in the page table check.
+diff --git a/Documentation/networking/ip-sysctl.rst b/Documentation/networking/ip-sysctl.rst
+index 58a78a3166978..97ae2b5a6101c 100644
+--- a/Documentation/networking/ip-sysctl.rst
++++ b/Documentation/networking/ip-sysctl.rst
+@@ -1352,8 +1352,8 @@ ping_group_range - 2 INTEGERS
+ 	Restrict ICMP_PROTO datagram sockets to users in the group range.
+ 	The default is "1 0", meaning, that nobody (not even root) may
+ 	create ping sockets.  Setting it to "100 100" would grant permissions
+-	to the single group. "0 4294967295" would enable it for the world, "100
+-	4294967295" would enable it for the users, but not daemons.
++	to the single group. "0 4294967294" would enable it for the world, "100
++	4294967294" would enable it for the users, but not daemons.
+ 
+ tcp_early_demux - BOOLEAN
+ 	Enable early demux for established TCP sockets.
+diff --git a/Makefile b/Makefile
+index 71c958fd52854..b4267d7a57b35 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 3
+-SUBLEVEL = 7
++SUBLEVEL = 8
+ EXTRAVERSION =
+ NAME = Hurr durr I'ma ninja sloth
+ 
+diff --git a/arch/arm/boot/dts/at91-sama7g5ek.dts b/arch/arm/boot/dts/at91-sama7g5ek.dts
+index aa5cc0e98bbab..217e9b96c61e5 100644
+--- a/arch/arm/boot/dts/at91-sama7g5ek.dts
++++ b/arch/arm/boot/dts/at91-sama7g5ek.dts
+@@ -792,7 +792,7 @@
+ };
+ 
+ &shdwc {
+-	atmel,shdwc-debouncer = <976>;
++	debounce-delay-us = <976>;
+ 	status = "okay";
+ 
+ 	input@0 {
+diff --git a/arch/arm/mach-at91/pm.c b/arch/arm/mach-at91/pm.c
+index 60dc56d8acfb9..437dd0352fd44 100644
+--- a/arch/arm/mach-at91/pm.c
++++ b/arch/arm/mach-at91/pm.c
+@@ -334,16 +334,14 @@ static bool at91_pm_eth_quirk_is_valid(struct at91_pm_quirk_eth *eth)
+ 		pdev = of_find_device_by_node(eth->np);
+ 		if (!pdev)
+ 			return false;
++		/* put_device(eth->dev) is called at the end of suspend. */
+ 		eth->dev = &pdev->dev;
+ 	}
+ 
+ 	/* No quirks if device isn't a wakeup source. */
+-	if (!device_may_wakeup(eth->dev)) {
+-		put_device(eth->dev);
++	if (!device_may_wakeup(eth->dev))
+ 		return false;
+-	}
+ 
+-	/* put_device(eth->dev) is called at the end of suspend. */
+ 	return true;
+ }
+ 
+@@ -439,14 +437,14 @@ clk_unconfigure:
+ 				pr_err("AT91: PM: failed to enable %s clocks\n",
+ 				       j == AT91_PM_G_ETH ? "geth" : "eth");
+ 			}
+-		} else {
+-			/*
+-			 * Release the reference to eth->dev taken in
+-			 * at91_pm_eth_quirk_is_valid().
+-			 */
+-			put_device(eth->dev);
+-			eth->dev = NULL;
+ 		}
++
++		/*
++		 * Release the reference to eth->dev taken in
++		 * at91_pm_eth_quirk_is_valid().
++		 */
++		put_device(eth->dev);
++		eth->dev = NULL;
+ 	}
+ 
+ 	return ret;
+diff --git a/arch/arm64/boot/dts/freescale/imx8-ss-dma.dtsi b/arch/arm64/boot/dts/freescale/imx8-ss-dma.dtsi
+index a943a1e2797f4..21345ae14eb25 100644
+--- a/arch/arm64/boot/dts/freescale/imx8-ss-dma.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8-ss-dma.dtsi
+@@ -90,6 +90,8 @@ dma_subsys: bus@5a000000 {
+ 		clocks = <&uart0_lpcg IMX_LPCG_CLK_4>,
+ 			 <&uart0_lpcg IMX_LPCG_CLK_0>;
+ 		clock-names = "ipg", "baud";
++		assigned-clocks = <&clk IMX_SC_R_UART_0 IMX_SC_PM_CLK_PER>;
++		assigned-clock-rates = <80000000>;
+ 		power-domains = <&pd IMX_SC_R_UART_0>;
+ 		status = "disabled";
+ 	};
+@@ -100,6 +102,8 @@ dma_subsys: bus@5a000000 {
+ 		clocks = <&uart1_lpcg IMX_LPCG_CLK_4>,
+ 			 <&uart1_lpcg IMX_LPCG_CLK_0>;
+ 		clock-names = "ipg", "baud";
++		assigned-clocks = <&clk IMX_SC_R_UART_1 IMX_SC_PM_CLK_PER>;
++		assigned-clock-rates = <80000000>;
+ 		power-domains = <&pd IMX_SC_R_UART_1>;
+ 		status = "disabled";
+ 	};
+@@ -110,6 +114,8 @@ dma_subsys: bus@5a000000 {
+ 		clocks = <&uart2_lpcg IMX_LPCG_CLK_4>,
+ 			 <&uart2_lpcg IMX_LPCG_CLK_0>;
+ 		clock-names = "ipg", "baud";
++		assigned-clocks = <&clk IMX_SC_R_UART_2 IMX_SC_PM_CLK_PER>;
++		assigned-clock-rates = <80000000>;
+ 		power-domains = <&pd IMX_SC_R_UART_2>;
+ 		status = "disabled";
+ 	};
+@@ -120,6 +126,8 @@ dma_subsys: bus@5a000000 {
+ 		clocks = <&uart3_lpcg IMX_LPCG_CLK_4>,
+ 			 <&uart3_lpcg IMX_LPCG_CLK_0>;
+ 		clock-names = "ipg", "baud";
++		assigned-clocks = <&clk IMX_SC_R_UART_3 IMX_SC_PM_CLK_PER>;
++		assigned-clock-rates = <80000000>;
+ 		power-domains = <&pd IMX_SC_R_UART_3>;
+ 		status = "disabled";
+ 	};
+diff --git a/arch/arm64/boot/dts/freescale/imx8mn-beacon-baseboard.dtsi b/arch/arm64/boot/dts/freescale/imx8mn-beacon-baseboard.dtsi
+index 9e82069c941fa..5a1f7c30afe57 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mn-beacon-baseboard.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mn-beacon-baseboard.dtsi
+@@ -81,7 +81,7 @@
+ &ecspi2 {
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&pinctrl_espi2>;
+-	cs-gpios = <&gpio5 9 GPIO_ACTIVE_LOW>;
++	cs-gpios = <&gpio5 13 GPIO_ACTIVE_LOW>;
+ 	status = "okay";
+ 
+ 	eeprom@0 {
+@@ -202,7 +202,7 @@
+ 			MX8MN_IOMUXC_ECSPI2_SCLK_ECSPI2_SCLK		0x82
+ 			MX8MN_IOMUXC_ECSPI2_MOSI_ECSPI2_MOSI		0x82
+ 			MX8MN_IOMUXC_ECSPI2_MISO_ECSPI2_MISO		0x82
+-			MX8MN_IOMUXC_ECSPI1_SS0_GPIO5_IO9		0x41
++			MX8MN_IOMUXC_ECSPI2_SS0_GPIO5_IO13		0x41
+ 		>;
+ 	};
+ 
+diff --git a/arch/arm64/boot/dts/freescale/imx8qm-mek.dts b/arch/arm64/boot/dts/freescale/imx8qm-mek.dts
+index ce9d3f0b98fc0..607cd6b4e9721 100644
+--- a/arch/arm64/boot/dts/freescale/imx8qm-mek.dts
++++ b/arch/arm64/boot/dts/freescale/imx8qm-mek.dts
+@@ -82,8 +82,8 @@
+ 	pinctrl-0 = <&pinctrl_usdhc2>;
+ 	bus-width = <4>;
+ 	vmmc-supply = <&reg_usdhc2_vmmc>;
+-	cd-gpios = <&lsio_gpio4 22 GPIO_ACTIVE_LOW>;
+-	wp-gpios = <&lsio_gpio4 21 GPIO_ACTIVE_HIGH>;
++	cd-gpios = <&lsio_gpio5 22 GPIO_ACTIVE_LOW>;
++	wp-gpios = <&lsio_gpio5 21 GPIO_ACTIVE_HIGH>;
+ 	status = "okay";
+ };
+ 
+diff --git a/arch/arm64/boot/dts/qcom/sc7180-lite.dtsi b/arch/arm64/boot/dts/qcom/sc7180-lite.dtsi
+index d8ed1d7b4ec76..4b306a59d9bec 100644
+--- a/arch/arm64/boot/dts/qcom/sc7180-lite.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7180-lite.dtsi
+@@ -16,3 +16,11 @@
+ &cpu6_opp12 {
+ 	opp-peak-kBps = <8532000 23347200>;
+ };
++
++&cpu6_opp13 {
++	opp-peak-kBps = <8532000 23347200>;
++};
++
++&cpu6_opp14 {
++	opp-peak-kBps = <8532000 23347200>;
++};
+diff --git a/arch/arm64/boot/dts/qcom/sc8280xp.dtsi b/arch/arm64/boot/dts/qcom/sc8280xp.dtsi
+index 03b679b75201d..f081ca449699a 100644
+--- a/arch/arm64/boot/dts/qcom/sc8280xp.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc8280xp.dtsi
+@@ -3957,6 +3957,7 @@
+ 			qcom,tcs-config = <ACTIVE_TCS  2>, <SLEEP_TCS   3>,
+ 					  <WAKE_TCS    3>, <CONTROL_TCS 1>;
+ 			label = "apps_rsc";
++			power-domains = <&CLUSTER_PD>;
+ 
+ 			apps_bcm_voter: bcm-voter {
+ 				compatible = "qcom,bcm-voter";
+diff --git a/arch/arm64/boot/dts/qcom/sm6375-sony-xperia-murray-pdx225.dts b/arch/arm64/boot/dts/qcom/sm6375-sony-xperia-murray-pdx225.dts
+index b691c3834b6b6..71970dd3fc1ad 100644
+--- a/arch/arm64/boot/dts/qcom/sm6375-sony-xperia-murray-pdx225.dts
++++ b/arch/arm64/boot/dts/qcom/sm6375-sony-xperia-murray-pdx225.dts
+@@ -151,12 +151,12 @@
+ };
+ 
+ &remoteproc_adsp {
+-	firmware-name = "qcom/Sony/murray/adsp.mbn";
++	firmware-name = "qcom/sm6375/Sony/murray/adsp.mbn";
+ 	status = "okay";
+ };
+ 
+ &remoteproc_cdsp {
+-	firmware-name = "qcom/Sony/murray/cdsp.mbn";
++	firmware-name = "qcom/sm6375/Sony/murray/cdsp.mbn";
+ 	status = "okay";
+ };
+ 
+diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
+index eb7f29a412f87..b462ed7d41fe1 100644
+--- a/arch/riscv/Kconfig
++++ b/arch/riscv/Kconfig
+@@ -25,6 +25,7 @@ config RISCV
+ 	select ARCH_HAS_GIGANTIC_PAGE
+ 	select ARCH_HAS_KCOV
+ 	select ARCH_HAS_MMIOWB
++	select ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE
+ 	select ARCH_HAS_PMEM_API
+ 	select ARCH_HAS_PTE_SPECIAL
+ 	select ARCH_HAS_SET_DIRECT_MAP if MMU
+diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
+index f641837ccf31d..05eda3281ba90 100644
+--- a/arch/riscv/include/asm/pgtable.h
++++ b/arch/riscv/include/asm/pgtable.h
+@@ -165,8 +165,7 @@ extern struct pt_alloc_ops pt_ops __initdata;
+ 					 _PAGE_EXEC | _PAGE_WRITE)
+ 
+ #define PAGE_COPY		PAGE_READ
+-#define PAGE_COPY_EXEC		PAGE_EXEC
+-#define PAGE_COPY_READ_EXEC	PAGE_READ_EXEC
++#define PAGE_COPY_EXEC		PAGE_READ_EXEC
+ #define PAGE_SHARED		PAGE_WRITE
+ #define PAGE_SHARED_EXEC	PAGE_WRITE_EXEC
+ 
+diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
+index dc1793bf01796..309d685d70267 100644
+--- a/arch/riscv/mm/init.c
++++ b/arch/riscv/mm/init.c
+@@ -286,7 +286,7 @@ static const pgprot_t protection_map[16] = {
+ 	[VM_EXEC]					= PAGE_EXEC,
+ 	[VM_EXEC | VM_READ]				= PAGE_READ_EXEC,
+ 	[VM_EXEC | VM_WRITE]				= PAGE_COPY_EXEC,
+-	[VM_EXEC | VM_WRITE | VM_READ]			= PAGE_COPY_READ_EXEC,
++	[VM_EXEC | VM_WRITE | VM_READ]			= PAGE_COPY_EXEC,
+ 	[VM_SHARED]					= PAGE_NONE,
+ 	[VM_SHARED | VM_READ]				= PAGE_READ,
+ 	[VM_SHARED | VM_WRITE]				= PAGE_SHARED,
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index ae08c4936743d..f2e2ffd135baf 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -717,6 +717,10 @@ static void __blk_mq_free_request(struct request *rq)
+ 	blk_crypto_free_request(rq);
+ 	blk_pm_mark_last_busy(rq);
+ 	rq->mq_hctx = NULL;
++
++	if (rq->rq_flags & RQF_MQ_INFLIGHT)
++		__blk_mq_dec_active_requests(hctx);
++
+ 	if (rq->tag != BLK_MQ_NO_TAG)
+ 		blk_mq_put_tag(hctx->tags, ctx, rq->tag);
+ 	if (sched_tag != BLK_MQ_NO_TAG)
+@@ -728,15 +732,11 @@ static void __blk_mq_free_request(struct request *rq)
+ void blk_mq_free_request(struct request *rq)
+ {
+ 	struct request_queue *q = rq->q;
+-	struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
+ 
+ 	if ((rq->rq_flags & RQF_ELVPRIV) &&
+ 	    q->elevator->type->ops.finish_request)
+ 		q->elevator->type->ops.finish_request(rq);
+ 
+-	if (rq->rq_flags & RQF_MQ_INFLIGHT)
+-		__blk_mq_dec_active_requests(hctx);
+-
+ 	if (unlikely(laptop_mode && !blk_rq_is_passthrough(rq)))
+ 		laptop_io_completion(q->disk->bdi);
+ 
+diff --git a/drivers/accel/ivpu/Kconfig b/drivers/accel/ivpu/Kconfig
+index 9bdf168bf1d0e..1a4c4ed9d1136 100644
+--- a/drivers/accel/ivpu/Kconfig
++++ b/drivers/accel/ivpu/Kconfig
+@@ -7,6 +7,7 @@ config DRM_ACCEL_IVPU
+ 	depends on PCI && PCI_MSI
+ 	select FW_LOADER
+ 	select SHMEM
++	select GENERIC_ALLOCATOR
+ 	help
+ 	  Choose this option if you have a system that has an 14th generation Intel CPU
+ 	  or newer. VPU stands for Versatile Processing Unit and it's a CPU-integrated
+diff --git a/drivers/accel/ivpu/ivpu_hw_mtl.c b/drivers/accel/ivpu/ivpu_hw_mtl.c
+index 382ec127be8ea..fef35422c6f0d 100644
+--- a/drivers/accel/ivpu/ivpu_hw_mtl.c
++++ b/drivers/accel/ivpu/ivpu_hw_mtl.c
+@@ -197,6 +197,11 @@ static void ivpu_pll_init_frequency_ratios(struct ivpu_device *vdev)
+ 	hw->pll.pn_ratio = clamp_t(u8, fuse_pn_ratio, hw->pll.min_ratio, hw->pll.max_ratio);
+ }
+ 
++static int ivpu_hw_mtl_wait_for_vpuip_bar(struct ivpu_device *vdev)
++{
++	return REGV_POLL_FLD(MTL_VPU_HOST_SS_CPR_RST_CLR, AON, 0, 100);
++}
++
+ static int ivpu_pll_drive(struct ivpu_device *vdev, bool enable)
+ {
+ 	struct ivpu_hw_info *hw = vdev->hw;
+@@ -239,6 +244,12 @@ static int ivpu_pll_drive(struct ivpu_device *vdev, bool enable)
+ 			ivpu_err(vdev, "Timed out waiting for PLL ready status\n");
+ 			return ret;
+ 		}
++
++		ret = ivpu_hw_mtl_wait_for_vpuip_bar(vdev);
++		if (ret) {
++			ivpu_err(vdev, "Timed out waiting for VPUIP bar\n");
++			return ret;
++		}
+ 	}
+ 
+ 	return 0;
+@@ -256,7 +267,7 @@ static int ivpu_pll_disable(struct ivpu_device *vdev)
+ 
+ static void ivpu_boot_host_ss_rst_clr_assert(struct ivpu_device *vdev)
+ {
+-	u32 val = REGV_RD32(MTL_VPU_HOST_SS_CPR_RST_CLR);
++	u32 val = 0;
+ 
+ 	val = REG_SET_FLD(MTL_VPU_HOST_SS_CPR_RST_CLR, TOP_NOC, val);
+ 	val = REG_SET_FLD(MTL_VPU_HOST_SS_CPR_RST_CLR, DSS_MAS, val);
+@@ -754,9 +765,8 @@ static int ivpu_hw_mtl_power_down(struct ivpu_device *vdev)
+ {
+ 	int ret = 0;
+ 
+-	if (ivpu_hw_mtl_reset(vdev)) {
++	if (!ivpu_hw_mtl_is_idle(vdev) && ivpu_hw_mtl_reset(vdev)) {
+ 		ivpu_err(vdev, "Failed to reset the VPU\n");
+-		ret = -EIO;
+ 	}
+ 
+ 	if (ivpu_pll_disable(vdev)) {
+@@ -764,8 +774,10 @@ static int ivpu_hw_mtl_power_down(struct ivpu_device *vdev)
+ 		ret = -EIO;
+ 	}
+ 
+-	if (ivpu_hw_mtl_d0i3_enable(vdev))
+-		ivpu_warn(vdev, "Failed to enable D0I3\n");
++	if (ivpu_hw_mtl_d0i3_enable(vdev)) {
++		ivpu_err(vdev, "Failed to enter D0I3\n");
++		ret = -EIO;
++	}
+ 
+ 	return ret;
+ }
+diff --git a/drivers/accel/ivpu/ivpu_hw_mtl_reg.h b/drivers/accel/ivpu/ivpu_hw_mtl_reg.h
+index d83ccfd9a871b..593b8ff074170 100644
+--- a/drivers/accel/ivpu/ivpu_hw_mtl_reg.h
++++ b/drivers/accel/ivpu/ivpu_hw_mtl_reg.h
+@@ -91,6 +91,7 @@
+ #define MTL_VPU_HOST_SS_CPR_RST_SET_MSS_MAS_MASK			BIT_MASK(11)
+ 
+ #define MTL_VPU_HOST_SS_CPR_RST_CLR					0x00000098u
++#define MTL_VPU_HOST_SS_CPR_RST_CLR_AON_MASK				BIT_MASK(0)
+ #define MTL_VPU_HOST_SS_CPR_RST_CLR_TOP_NOC_MASK			BIT_MASK(1)
+ #define MTL_VPU_HOST_SS_CPR_RST_CLR_DSS_MAS_MASK			BIT_MASK(10)
+ #define MTL_VPU_HOST_SS_CPR_RST_CLR_MSS_MAS_MASK			BIT_MASK(11)
+diff --git a/drivers/accel/ivpu/ivpu_ipc.c b/drivers/accel/ivpu/ivpu_ipc.c
+index 3adcfa80fc0e5..fa0af59e39ab6 100644
+--- a/drivers/accel/ivpu/ivpu_ipc.c
++++ b/drivers/accel/ivpu/ivpu_ipc.c
+@@ -183,9 +183,7 @@ ivpu_ipc_send(struct ivpu_device *vdev, struct ivpu_ipc_consumer *cons, struct v
+ 	struct ivpu_ipc_info *ipc = vdev->ipc;
+ 	int ret;
+ 
+-	ret = mutex_lock_interruptible(&ipc->lock);
+-	if (ret)
+-		return ret;
++	mutex_lock(&ipc->lock);
+ 
+ 	if (!ipc->on) {
+ 		ret = -EAGAIN;
+diff --git a/drivers/accel/ivpu/ivpu_job.c b/drivers/accel/ivpu/ivpu_job.c
+index 3c6f1e16cf2ff..d45be0615b476 100644
+--- a/drivers/accel/ivpu/ivpu_job.c
++++ b/drivers/accel/ivpu/ivpu_job.c
+@@ -431,6 +431,7 @@ ivpu_job_prepare_bos_for_submit(struct drm_file *file, struct ivpu_job *job, u32
+ 	struct ivpu_file_priv *file_priv = file->driver_priv;
+ 	struct ivpu_device *vdev = file_priv->vdev;
+ 	struct ww_acquire_ctx acquire_ctx;
++	enum dma_resv_usage usage;
+ 	struct ivpu_bo *bo;
+ 	int ret;
+ 	u32 i;
+@@ -461,22 +462,28 @@ ivpu_job_prepare_bos_for_submit(struct drm_file *file, struct ivpu_job *job, u32
+ 
+ 	job->cmd_buf_vpu_addr = bo->vpu_addr + commands_offset;
+ 
+-	ret = drm_gem_lock_reservations((struct drm_gem_object **)job->bos, 1, &acquire_ctx);
++	ret = drm_gem_lock_reservations((struct drm_gem_object **)job->bos, buf_count,
++					&acquire_ctx);
+ 	if (ret) {
+ 		ivpu_warn(vdev, "Failed to lock reservations: %d\n", ret);
+ 		return ret;
+ 	}
+ 
+-	ret = dma_resv_reserve_fences(bo->base.resv, 1);
+-	if (ret) {
+-		ivpu_warn(vdev, "Failed to reserve fences: %d\n", ret);
+-		goto unlock_reservations;
++	for (i = 0; i < buf_count; i++) {
++		ret = dma_resv_reserve_fences(job->bos[i]->base.resv, 1);
++		if (ret) {
++			ivpu_warn(vdev, "Failed to reserve fences: %d\n", ret);
++			goto unlock_reservations;
++		}
+ 	}
+ 
+-	dma_resv_add_fence(bo->base.resv, job->done_fence, DMA_RESV_USAGE_WRITE);
++	for (i = 0; i < buf_count; i++) {
++		usage = (i == CMD_BUF_IDX) ? DMA_RESV_USAGE_WRITE : DMA_RESV_USAGE_BOOKKEEP;
++		dma_resv_add_fence(job->bos[i]->base.resv, job->done_fence, usage);
++	}
+ 
+ unlock_reservations:
+-	drm_gem_unlock_reservations((struct drm_gem_object **)job->bos, 1, &acquire_ctx);
++	drm_gem_unlock_reservations((struct drm_gem_object **)job->bos, buf_count, &acquire_ctx);
+ 
+ 	wmb(); /* Flush write combining buffers */
+ 
+diff --git a/drivers/accel/ivpu/ivpu_mmu.c b/drivers/accel/ivpu/ivpu_mmu.c
+index 694e978aba663..b8b259b3aa635 100644
+--- a/drivers/accel/ivpu/ivpu_mmu.c
++++ b/drivers/accel/ivpu/ivpu_mmu.c
+@@ -587,16 +587,11 @@ static int ivpu_mmu_strtab_init(struct ivpu_device *vdev)
+ int ivpu_mmu_invalidate_tlb(struct ivpu_device *vdev, u16 ssid)
+ {
+ 	struct ivpu_mmu_info *mmu = vdev->mmu;
+-	int ret;
+-
+-	ret = mutex_lock_interruptible(&mmu->lock);
+-	if (ret)
+-		return ret;
++	int ret = 0;
+ 
+-	if (!mmu->on) {
+-		ret = 0;
++	mutex_lock(&mmu->lock);
++	if (!mmu->on)
+ 		goto unlock;
+-	}
+ 
+ 	ret = ivpu_mmu_cmdq_write_tlbi_nh_asid(vdev, ssid);
+ 	if (ret)
+@@ -614,7 +609,7 @@ static int ivpu_mmu_cd_add(struct ivpu_device *vdev, u32 ssid, u64 cd_dma)
+ 	struct ivpu_mmu_cdtab *cdtab = &mmu->cdtab;
+ 	u64 *entry;
+ 	u64 cd[4];
+-	int ret;
++	int ret = 0;
+ 
+ 	if (ssid > IVPU_MMU_CDTAB_ENT_COUNT)
+ 		return -EINVAL;
+@@ -655,14 +650,9 @@ static int ivpu_mmu_cd_add(struct ivpu_device *vdev, u32 ssid, u64 cd_dma)
+ 	ivpu_dbg(vdev, MMU, "CDTAB %s entry (SSID=%u, dma=%pad): 0x%llx, 0x%llx, 0x%llx, 0x%llx\n",
+ 		 cd_dma ? "write" : "clear", ssid, &cd_dma, cd[0], cd[1], cd[2], cd[3]);
+ 
+-	ret = mutex_lock_interruptible(&mmu->lock);
+-	if (ret)
+-		return ret;
+-
+-	if (!mmu->on) {
+-		ret = 0;
++	mutex_lock(&mmu->lock);
++	if (!mmu->on)
+ 		goto unlock;
+-	}
+ 
+ 	ret = ivpu_mmu_cmdq_write_cfgi_all(vdev);
+ 	if (ret)
+diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
+index 5cb008b9700a0..f218a2b114b9b 100644
+--- a/drivers/block/rbd.c
++++ b/drivers/block/rbd.c
+@@ -1334,14 +1334,30 @@ static bool rbd_obj_is_tail(struct rbd_obj_request *obj_req)
+ /*
+  * Must be called after rbd_obj_calc_img_extents().
+  */
+-static bool rbd_obj_copyup_enabled(struct rbd_obj_request *obj_req)
++static void rbd_obj_set_copyup_enabled(struct rbd_obj_request *obj_req)
+ {
+-	if (!obj_req->num_img_extents ||
+-	    (rbd_obj_is_entire(obj_req) &&
+-	     !obj_req->img_request->snapc->num_snaps))
+-		return false;
++	rbd_assert(obj_req->img_request->snapc);
+ 
+-	return true;
++	if (obj_req->img_request->op_type == OBJ_OP_DISCARD) {
++		dout("%s %p objno %llu discard\n", __func__, obj_req,
++		     obj_req->ex.oe_objno);
++		return;
++	}
++
++	if (!obj_req->num_img_extents) {
++		dout("%s %p objno %llu not overlapping\n", __func__, obj_req,
++		     obj_req->ex.oe_objno);
++		return;
++	}
++
++	if (rbd_obj_is_entire(obj_req) &&
++	    !obj_req->img_request->snapc->num_snaps) {
++		dout("%s %p objno %llu entire\n", __func__, obj_req,
++		     obj_req->ex.oe_objno);
++		return;
++	}
++
++	obj_req->flags |= RBD_OBJ_FLAG_COPYUP_ENABLED;
+ }
+ 
+ static u64 rbd_obj_img_extents_bytes(struct rbd_obj_request *obj_req)
+@@ -1442,6 +1458,7 @@ __rbd_obj_add_osd_request(struct rbd_obj_request *obj_req,
+ static struct ceph_osd_request *
+ rbd_obj_add_osd_request(struct rbd_obj_request *obj_req, int num_ops)
+ {
++	rbd_assert(obj_req->img_request->snapc);
+ 	return __rbd_obj_add_osd_request(obj_req, obj_req->img_request->snapc,
+ 					 num_ops);
+ }
+@@ -1578,15 +1595,18 @@ static void rbd_img_request_init(struct rbd_img_request *img_request,
+ 	mutex_init(&img_request->state_mutex);
+ }
+ 
++/*
++ * Only snap_id is captured here, for reads.  For writes, snapshot
++ * context is captured in rbd_img_object_requests() after exclusive
++ * lock is ensured to be held.
++ */
+ static void rbd_img_capture_header(struct rbd_img_request *img_req)
+ {
+ 	struct rbd_device *rbd_dev = img_req->rbd_dev;
+ 
+ 	lockdep_assert_held(&rbd_dev->header_rwsem);
+ 
+-	if (rbd_img_is_write(img_req))
+-		img_req->snapc = ceph_get_snap_context(rbd_dev->header.snapc);
+-	else
++	if (!rbd_img_is_write(img_req))
+ 		img_req->snap_id = rbd_dev->spec->snap_id;
+ 
+ 	if (rbd_dev_parent_get(rbd_dev))
+@@ -2233,9 +2253,6 @@ static int rbd_obj_init_write(struct rbd_obj_request *obj_req)
+ 	if (ret)
+ 		return ret;
+ 
+-	if (rbd_obj_copyup_enabled(obj_req))
+-		obj_req->flags |= RBD_OBJ_FLAG_COPYUP_ENABLED;
+-
+ 	obj_req->write_state = RBD_OBJ_WRITE_START;
+ 	return 0;
+ }
+@@ -2341,8 +2358,6 @@ static int rbd_obj_init_zeroout(struct rbd_obj_request *obj_req)
+ 	if (ret)
+ 		return ret;
+ 
+-	if (rbd_obj_copyup_enabled(obj_req))
+-		obj_req->flags |= RBD_OBJ_FLAG_COPYUP_ENABLED;
+ 	if (!obj_req->num_img_extents) {
+ 		obj_req->flags |= RBD_OBJ_FLAG_NOOP_FOR_NONEXISTENT;
+ 		if (rbd_obj_is_entire(obj_req))
+@@ -3286,6 +3301,7 @@ again:
+ 	case RBD_OBJ_WRITE_START:
+ 		rbd_assert(!*result);
+ 
++		rbd_obj_set_copyup_enabled(obj_req);
+ 		if (rbd_obj_write_is_noop(obj_req))
+ 			return true;
+ 
+@@ -3472,9 +3488,19 @@ static int rbd_img_exclusive_lock(struct rbd_img_request *img_req)
+ 
+ static void rbd_img_object_requests(struct rbd_img_request *img_req)
+ {
++	struct rbd_device *rbd_dev = img_req->rbd_dev;
+ 	struct rbd_obj_request *obj_req;
+ 
+ 	rbd_assert(!img_req->pending.result && !img_req->pending.num_pending);
++	rbd_assert(!need_exclusive_lock(img_req) ||
++		   __rbd_is_lock_owner(rbd_dev));
++
++	if (rbd_img_is_write(img_req)) {
++		rbd_assert(!img_req->snapc);
++		down_read(&rbd_dev->header_rwsem);
++		img_req->snapc = ceph_get_snap_context(rbd_dev->header.snapc);
++		up_read(&rbd_dev->header_rwsem);
++	}
+ 
+ 	for_each_obj_request(img_req, obj_req) {
+ 		int result = 0;
+@@ -3492,7 +3518,6 @@ static void rbd_img_object_requests(struct rbd_img_request *img_req)
+ 
+ static bool rbd_img_advance(struct rbd_img_request *img_req, int *result)
+ {
+-	struct rbd_device *rbd_dev = img_req->rbd_dev;
+ 	int ret;
+ 
+ again:
+@@ -3513,9 +3538,6 @@ again:
+ 		if (*result)
+ 			return true;
+ 
+-		rbd_assert(!need_exclusive_lock(img_req) ||
+-			   __rbd_is_lock_owner(rbd_dev));
+-
+ 		rbd_img_object_requests(img_req);
+ 		if (!img_req->pending.num_pending) {
+ 			*result = img_req->pending.result;
+@@ -3977,6 +3999,10 @@ static int rbd_post_acquire_action(struct rbd_device *rbd_dev)
+ {
+ 	int ret;
+ 
++	ret = rbd_dev_refresh(rbd_dev);
++	if (ret)
++		return ret;
++
+ 	if (rbd_dev->header.features & RBD_FEATURE_OBJECT_MAP) {
+ 		ret = rbd_object_map_open(rbd_dev);
+ 		if (ret)
+diff --git a/drivers/bluetooth/hci_qca.c b/drivers/bluetooth/hci_qca.c
+index 3df8c3606e933..bd3b65a13e741 100644
+--- a/drivers/bluetooth/hci_qca.c
++++ b/drivers/bluetooth/hci_qca.c
+@@ -78,7 +78,8 @@ enum qca_flags {
+ 	QCA_HW_ERROR_EVENT,
+ 	QCA_SSR_TRIGGERED,
+ 	QCA_BT_OFF,
+-	QCA_ROM_FW
++	QCA_ROM_FW,
++	QCA_DEBUGFS_CREATED,
+ };
+ 
+ enum qca_capabilities {
+@@ -635,6 +636,9 @@ static void qca_debugfs_init(struct hci_dev *hdev)
+ 	if (!hdev->debugfs)
+ 		return;
+ 
++	if (test_and_set_bit(QCA_DEBUGFS_CREATED, &qca->flags))
++		return;
++
+ 	ibs_dir = debugfs_create_dir("ibs", hdev->debugfs);
+ 
+ 	/* read only */
+diff --git a/drivers/firmware/arm_ffa/driver.c b/drivers/firmware/arm_ffa/driver.c
+index e234091386671..2109cd178ff70 100644
+--- a/drivers/firmware/arm_ffa/driver.c
++++ b/drivers/firmware/arm_ffa/driver.c
+@@ -424,6 +424,7 @@ ffa_setup_and_transmit(u32 func_id, void *buffer, u32 max_fragsize,
+ 		ep_mem_access->flag = 0;
+ 		ep_mem_access->reserved = 0;
+ 	}
++	mem_region->handle = 0;
+ 	mem_region->reserved_0 = 0;
+ 	mem_region->reserved_1 = 0;
+ 	mem_region->ep_count = args->nattrs;
+diff --git a/drivers/gpio/gpio-sim.c b/drivers/gpio/gpio-sim.c
+index e5dfd636c63c1..09aa0b64859b4 100644
+--- a/drivers/gpio/gpio-sim.c
++++ b/drivers/gpio/gpio-sim.c
+@@ -721,8 +721,10 @@ static char **gpio_sim_make_line_names(struct gpio_sim_bank *bank,
+ 	if (!line_names)
+ 		return ERR_PTR(-ENOMEM);
+ 
+-	list_for_each_entry(line, &bank->line_list, siblings)
+-		line_names[line->offset] = line->name;
++	list_for_each_entry(line, &bank->line_list, siblings) {
++		if (line->name && (line->offset <= max_offset))
++			line_names[line->offset] = line->name;
++	}
+ 
+ 	return line_names;
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
+index aeeec211861c4..e1b01554e3231 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
+@@ -1092,16 +1092,20 @@ bool amdgpu_acpi_is_s0ix_active(struct amdgpu_device *adev)
+ 	 * S0ix even though the system is suspending to idle, so return false
+ 	 * in that case.
+ 	 */
+-	if (!(acpi_gbl_FADT.flags & ACPI_FADT_LOW_POWER_S0))
++	if (!(acpi_gbl_FADT.flags & ACPI_FADT_LOW_POWER_S0)) {
+ 		dev_warn_once(adev->dev,
+ 			      "Power consumption will be higher as BIOS has not been configured for suspend-to-idle.\n"
+ 			      "To use suspend-to-idle change the sleep mode in BIOS setup.\n");
++		return false;
++	}
+ 
+ #if !IS_ENABLED(CONFIG_AMD_PMC)
+ 	dev_warn_once(adev->dev,
+ 		      "Power consumption will be higher as the kernel has not been compiled with CONFIG_AMD_PMC.\n");
+-#endif /* CONFIG_AMD_PMC */
++	return false;
++#else
+ 	return true;
++#endif /* CONFIG_AMD_PMC */
+ }
+ 
+ #endif /* CONFIG_SUSPEND */
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+index 6c7d672412b21..5e9a0c1bb3079 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+@@ -79,9 +79,10 @@ static void amdgpu_bo_user_destroy(struct ttm_buffer_object *tbo)
+ static void amdgpu_bo_vm_destroy(struct ttm_buffer_object *tbo)
+ {
+ 	struct amdgpu_device *adev = amdgpu_ttm_adev(tbo->bdev);
+-	struct amdgpu_bo *bo = ttm_to_amdgpu_bo(tbo);
++	struct amdgpu_bo *shadow_bo = ttm_to_amdgpu_bo(tbo), *bo;
+ 	struct amdgpu_bo_vm *vmbo;
+ 
++	bo = shadow_bo->parent;
+ 	vmbo = to_amdgpu_bo_vm(bo);
+ 	/* in case amdgpu_device_recover_vram got NULL of bo->parent */
+ 	if (!list_empty(&vmbo->shadow_list)) {
+@@ -694,11 +695,6 @@ int amdgpu_bo_create_vm(struct amdgpu_device *adev,
+ 		return r;
+ 
+ 	*vmbo_ptr = to_amdgpu_bo_vm(bo_ptr);
+-	INIT_LIST_HEAD(&(*vmbo_ptr)->shadow_list);
+-	/* Set destroy callback to amdgpu_bo_vm_destroy after vmbo->shadow_list
+-	 * is initialized.
+-	 */
+-	bo_ptr->tbo.destroy = &amdgpu_bo_vm_destroy;
+ 	return r;
+ }
+ 
+@@ -715,6 +711,8 @@ void amdgpu_bo_add_to_shadow_list(struct amdgpu_bo_vm *vmbo)
+ 
+ 	mutex_lock(&adev->shadow_list_lock);
+ 	list_add_tail(&vmbo->shadow_list, &adev->shadow_list);
++	vmbo->shadow->parent = amdgpu_bo_ref(&vmbo->bo);
++	vmbo->shadow->tbo.destroy = &amdgpu_bo_vm_destroy;
+ 	mutex_unlock(&adev->shadow_list_lock);
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_pt.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_pt.c
+index 01e42bdd8e4e8..4642cff0e1a4f 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_pt.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_pt.c
+@@ -564,7 +564,6 @@ int amdgpu_vm_pt_create(struct amdgpu_device *adev, struct amdgpu_vm *vm,
+ 		return r;
+ 	}
+ 
+-	(*vmbo)->shadow->parent = amdgpu_bo_ref(bo);
+ 	amdgpu_bo_add_to_shadow_list(*vmbo);
+ 
+ 	return 0;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
+index 43d6a9d6a5384..afacfb9b5bf6c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
+@@ -800,7 +800,7 @@ static void amdgpu_vram_mgr_debug(struct ttm_resource_manager *man,
+ {
+ 	struct amdgpu_vram_mgr *mgr = to_vram_mgr(man);
+ 	struct drm_buddy *mm = &mgr->mm;
+-	struct drm_buddy_block *block;
++	struct amdgpu_vram_reservation *rsv;
+ 
+ 	drm_printf(printer, "  vis usage:%llu\n",
+ 		   amdgpu_vram_mgr_vis_usage(mgr));
+@@ -812,8 +812,9 @@ static void amdgpu_vram_mgr_debug(struct ttm_resource_manager *man,
+ 	drm_buddy_print(mm, printer);
+ 
+ 	drm_printf(printer, "reserved:\n");
+-	list_for_each_entry(block, &mgr->reserved_pages, link)
+-		drm_buddy_block_print(mm, block, printer);
++	list_for_each_entry(rsv, &mgr->reserved_pages, blocks)
++		drm_printf(printer, "%#018llx-%#018llx: %llu\n",
++			rsv->start, rsv->start + rsv->size, rsv->size);
+ 	mutex_unlock(&mgr->lock);
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/vi.c b/drivers/gpu/drm/amd/amdgpu/vi.c
+index ceab8783575ca..4924d853f9e45 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vi.c
++++ b/drivers/gpu/drm/amd/amdgpu/vi.c
+@@ -542,8 +542,15 @@ static u32 vi_get_xclk(struct amdgpu_device *adev)
+ 	u32 reference_clock = adev->clock.spll.reference_freq;
+ 	u32 tmp;
+ 
+-	if (adev->flags & AMD_IS_APU)
+-		return reference_clock;
++	if (adev->flags & AMD_IS_APU) {
++		switch (adev->asic_type) {
++		case CHIP_STONEY:
++			/* vbios says 48Mhz, but the actual freq is 100Mhz */
++			return 10000;
++		default:
++			return reference_clock;
++		}
++	}
+ 
+ 	tmp = RREG32_SMC(ixCG_CLKPIN_CNTL_2);
+ 	if (REG_GET_FIELD(tmp, CG_CLKPIN_CNTL_2, MUX_TCLK_TO_XCLK))
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index f07cba121d010..eab53d6317c9f 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -1962,6 +1962,9 @@ static enum dc_status dc_commit_state_no_check(struct dc *dc, struct dc_state *c
+ 	return result;
+ }
+ 
++static bool commit_minimal_transition_state(struct dc *dc,
++		struct dc_state *transition_base_context);
++
+ /**
+  * dc_commit_streams - Commit current stream state
+  *
+@@ -1983,6 +1986,8 @@ enum dc_status dc_commit_streams(struct dc *dc,
+ 	struct dc_state *context;
+ 	enum dc_status res = DC_OK;
+ 	struct dc_validation_set set[MAX_STREAMS] = {0};
++	struct pipe_ctx *pipe;
++	bool handle_exit_odm2to1 = false;
+ 
+ 	if (dc->ctx->dce_environment == DCE_ENV_VIRTUAL_HW)
+ 		return res;
+@@ -2007,6 +2012,22 @@ enum dc_status dc_commit_streams(struct dc *dc,
+ 		}
+ 	}
+ 
++	/* Check for case where we are going from odm 2:1 to max
++	 *  pipe scenario.  For these cases, we will call
++	 *  commit_minimal_transition_state() to exit out of odm 2:1
++	 *  first before processing new streams
++	 */
++	if (stream_count == dc->res_pool->pipe_count) {
++		for (i = 0; i < dc->res_pool->pipe_count; i++) {
++			pipe = &dc->current_state->res_ctx.pipe_ctx[i];
++			if (pipe->next_odm_pipe)
++				handle_exit_odm2to1 = true;
++		}
++	}
++
++	if (handle_exit_odm2to1)
++		res = commit_minimal_transition_state(dc, dc->current_state);
++
+ 	context = dc_create_state(dc);
+ 	if (!context)
+ 		goto context_alloc_fail;
+@@ -3915,6 +3936,7 @@ static bool commit_minimal_transition_state(struct dc *dc,
+ 	unsigned int i, j;
+ 	unsigned int pipe_in_use = 0;
+ 	bool subvp_in_use = false;
++	bool odm_in_use = false;
+ 
+ 	if (!transition_context)
+ 		return false;
+@@ -3943,6 +3965,18 @@ static bool commit_minimal_transition_state(struct dc *dc,
+ 		}
+ 	}
+ 
++	/* If ODM is enabled and we are adding or removing planes from any ODM
++	 * pipe, we must use the minimal transition.
++	 */
++	for (i = 0; i < dc->res_pool->pipe_count; i++) {
++		struct pipe_ctx *pipe = &dc->current_state->res_ctx.pipe_ctx[i];
++
++		if (pipe->stream && pipe->next_odm_pipe) {
++			odm_in_use = true;
++			break;
++		}
++	}
++
+ 	/* When the OS add a new surface if we have been used all of pipes with odm combine
+ 	 * and mpc split feature, it need use commit_minimal_transition_state to transition safely.
+ 	 * After OS exit MPO, it will back to use odm and mpc split with all of pipes, we need
+@@ -3951,7 +3985,7 @@ static bool commit_minimal_transition_state(struct dc *dc,
+ 	 * Reduce the scenarios to use dc_commit_state_no_check in the stage of flip. Especially
+ 	 * enter/exit MPO when DCN still have enough resources.
+ 	 */
+-	if (pipe_in_use != dc->res_pool->pipe_count && !subvp_in_use) {
++	if (pipe_in_use != dc->res_pool->pipe_count && !subvp_in_use && !odm_in_use) {
+ 		dc_release_state(transition_context);
+ 		return true;
+ 	}
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+index 0ae6dcc403a4b..986de684b078e 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+@@ -1444,6 +1444,26 @@ static int acquire_first_split_pipe(
+ 			split_pipe->plane_res.mpcc_inst = pool->dpps[i]->inst;
+ 			split_pipe->pipe_idx = i;
+ 
++			split_pipe->stream = stream;
++			return i;
++		} else if (split_pipe->prev_odm_pipe &&
++				split_pipe->prev_odm_pipe->plane_state == split_pipe->plane_state) {
++			split_pipe->prev_odm_pipe->next_odm_pipe = split_pipe->next_odm_pipe;
++			if (split_pipe->next_odm_pipe)
++				split_pipe->next_odm_pipe->prev_odm_pipe = split_pipe->prev_odm_pipe;
++
++			if (split_pipe->prev_odm_pipe->plane_state)
++				resource_build_scaling_params(split_pipe->prev_odm_pipe);
++
++			memset(split_pipe, 0, sizeof(*split_pipe));
++			split_pipe->stream_res.tg = pool->timing_generators[i];
++			split_pipe->plane_res.hubp = pool->hubps[i];
++			split_pipe->plane_res.ipp = pool->ipps[i];
++			split_pipe->plane_res.dpp = pool->dpps[i];
++			split_pipe->stream_res.opp = pool->opps[i];
++			split_pipe->plane_res.mpcc_inst = pool->dpps[i]->inst;
++			split_pipe->pipe_idx = i;
++
+ 			split_pipe->stream = stream;
+ 			return i;
+ 		}
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c b/drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c
+index e47828e3b6d5d..7c06a339ab93c 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c
+@@ -138,7 +138,7 @@ struct _vcs_dpi_soc_bounding_box_st dcn3_2_soc = {
+ 	.urgent_out_of_order_return_per_channel_pixel_only_bytes = 4096,
+ 	.urgent_out_of_order_return_per_channel_pixel_and_vm_bytes = 4096,
+ 	.urgent_out_of_order_return_per_channel_vm_only_bytes = 4096,
+-	.pct_ideal_sdp_bw_after_urgent = 100.0,
++	.pct_ideal_sdp_bw_after_urgent = 90.0,
+ 	.pct_ideal_fabric_bw_after_urgent = 67.0,
+ 	.pct_ideal_dram_sdp_bw_after_urgent_pixel_only = 20.0,
+ 	.pct_ideal_dram_sdp_bw_after_urgent_pixel_and_vm = 60.0, // N/A, for now keep as is until DML implemented
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
+index 75f18681e984c..85d53597eb07a 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
+@@ -2067,33 +2067,94 @@ static int sienna_cichlid_display_disable_memory_clock_switch(struct smu_context
+ 	return ret;
+ }
+ 
++static void sienna_cichlid_get_override_pcie_settings(struct smu_context *smu,
++						      uint32_t *gen_speed_override,
++						      uint32_t *lane_width_override)
++{
++	struct amdgpu_device *adev = smu->adev;
++
++	*gen_speed_override = 0xff;
++	*lane_width_override = 0xff;
++
++	switch (adev->pdev->device) {
++	case 0x73A0:
++	case 0x73A1:
++	case 0x73A2:
++	case 0x73A3:
++	case 0x73AB:
++	case 0x73AE:
++		/* Bit 7:0: PCIE lane width, 1 to 7 corresponds is x1 to x32 */
++		*lane_width_override = 6;
++		break;
++	case 0x73E0:
++	case 0x73E1:
++	case 0x73E3:
++		*lane_width_override = 4;
++		break;
++	case 0x7420:
++	case 0x7421:
++	case 0x7422:
++	case 0x7423:
++	case 0x7424:
++		*lane_width_override = 3;
++		break;
++	default:
++		break;
++	}
++}
++
++#define MAX(a, b)	((a) > (b) ? (a) : (b))
++
+ static int sienna_cichlid_update_pcie_parameters(struct smu_context *smu,
+ 					 uint32_t pcie_gen_cap,
+ 					 uint32_t pcie_width_cap)
+ {
+ 	struct smu_11_0_dpm_context *dpm_context = smu->smu_dpm.dpm_context;
+-
+-	uint32_t smu_pcie_arg;
++	struct smu_11_0_pcie_table *pcie_table = &dpm_context->dpm_tables.pcie_table;
++	uint32_t gen_speed_override, lane_width_override;
+ 	uint8_t *table_member1, *table_member2;
++	uint32_t min_gen_speed, max_gen_speed;
++	uint32_t min_lane_width, max_lane_width;
++	uint32_t smu_pcie_arg;
+ 	int ret, i;
+ 
+ 	GET_PPTABLE_MEMBER(PcieGenSpeed, &table_member1);
+ 	GET_PPTABLE_MEMBER(PcieLaneCount, &table_member2);
+ 
+-	/* lclk dpm table setup */
+-	for (i = 0; i < MAX_PCIE_CONF; i++) {
+-		dpm_context->dpm_tables.pcie_table.pcie_gen[i] = table_member1[i];
+-		dpm_context->dpm_tables.pcie_table.pcie_lane[i] = table_member2[i];
++	sienna_cichlid_get_override_pcie_settings(smu,
++						  &gen_speed_override,
++						  &lane_width_override);
++
++	/* PCIE gen speed override */
++	if (gen_speed_override != 0xff) {
++		min_gen_speed = MIN(pcie_gen_cap, gen_speed_override);
++		max_gen_speed = MIN(pcie_gen_cap, gen_speed_override);
++	} else {
++		min_gen_speed = MAX(0, table_member1[0]);
++		max_gen_speed = MIN(pcie_gen_cap, table_member1[1]);
++		min_gen_speed = min_gen_speed > max_gen_speed ?
++				max_gen_speed : min_gen_speed;
+ 	}
++	pcie_table->pcie_gen[0] = min_gen_speed;
++	pcie_table->pcie_gen[1] = max_gen_speed;
++
++	/* PCIE lane width override */
++	if (lane_width_override != 0xff) {
++		min_lane_width = MIN(pcie_width_cap, lane_width_override);
++		max_lane_width = MIN(pcie_width_cap, lane_width_override);
++	} else {
++		min_lane_width = MAX(1, table_member2[0]);
++		max_lane_width = MIN(pcie_width_cap, table_member2[1]);
++		min_lane_width = min_lane_width > max_lane_width ?
++				 max_lane_width : min_lane_width;
++	}
++	pcie_table->pcie_lane[0] = min_lane_width;
++	pcie_table->pcie_lane[1] = max_lane_width;
+ 
+ 	for (i = 0; i < NUM_LINK_LEVELS; i++) {
+-		smu_pcie_arg = (i << 16) |
+-			((table_member1[i] <= pcie_gen_cap) ?
+-			 (table_member1[i] << 8) :
+-			 (pcie_gen_cap << 8)) |
+-			((table_member2[i] <= pcie_width_cap) ?
+-			 table_member2[i] :
+-			 pcie_width_cap);
++		smu_pcie_arg = (i << 16 |
++				pcie_table->pcie_gen[i] << 8 |
++				pcie_table->pcie_lane[i]);
+ 
+ 		ret = smu_cmn_send_smc_msg_with_param(smu,
+ 				SMU_MSG_OverridePcieParameters,
+@@ -2101,11 +2162,6 @@ static int sienna_cichlid_update_pcie_parameters(struct smu_context *smu,
+ 				NULL);
+ 		if (ret)
+ 			return ret;
+-
+-		if (table_member1[i] > pcie_gen_cap)
+-			dpm_context->dpm_tables.pcie_table.pcie_gen[i] = pcie_gen_cap;
+-		if (table_member2[i] > pcie_width_cap)
+-			dpm_context->dpm_tables.pcie_table.pcie_lane[i] = pcie_width_cap;
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
+index a52ed0580fd7e..f827f95755525 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
+@@ -566,11 +566,11 @@ int smu_v13_0_init_power(struct smu_context *smu)
+ 	if (smu_power->power_context || smu_power->power_context_size != 0)
+ 		return -EINVAL;
+ 
+-	smu_power->power_context = kzalloc(sizeof(struct smu_13_0_dpm_context),
++	smu_power->power_context = kzalloc(sizeof(struct smu_13_0_power_context),
+ 					   GFP_KERNEL);
+ 	if (!smu_power->power_context)
+ 		return -ENOMEM;
+-	smu_power->power_context_size = sizeof(struct smu_13_0_dpm_context);
++	smu_power->power_context_size = sizeof(struct smu_13_0_power_context);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/i915/display/intel_dp_aux.c b/drivers/gpu/drm/i915/display/intel_dp_aux.c
+index 30c98810e28bb..36d6ece8b4616 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp_aux.c
++++ b/drivers/gpu/drm/i915/display/intel_dp_aux.c
+@@ -117,6 +117,32 @@ static u32 skl_get_aux_clock_divider(struct intel_dp *intel_dp, int index)
+ 	return index ? 0 : 1;
+ }
+ 
++static int intel_dp_aux_sync_len(void)
++{
++	int precharge = 16; /* 10-16 */
++	int preamble = 16;
++
++	return precharge + preamble;
++}
++
++static int intel_dp_aux_fw_sync_len(void)
++{
++	int precharge = 10; /* 10-16 */
++	int preamble = 8;
++
++	return precharge + preamble;
++}
++
++static int g4x_dp_aux_precharge_len(void)
++{
++	int precharge_min = 10;
++	int preamble = 16;
++
++	/* HW wants the length of the extra precharge in 2us units */
++	return (intel_dp_aux_sync_len() -
++		precharge_min - preamble) / 2;
++}
++
+ static u32 g4x_get_aux_send_ctl(struct intel_dp *intel_dp,
+ 				int send_bytes,
+ 				u32 aux_clock_divider)
+@@ -139,7 +165,7 @@ static u32 g4x_get_aux_send_ctl(struct intel_dp *intel_dp,
+ 	       timeout |
+ 	       DP_AUX_CH_CTL_RECEIVE_ERROR |
+ 	       (send_bytes << DP_AUX_CH_CTL_MESSAGE_SIZE_SHIFT) |
+-	       (3 << DP_AUX_CH_CTL_PRECHARGE_2US_SHIFT) |
++	       (g4x_dp_aux_precharge_len() << DP_AUX_CH_CTL_PRECHARGE_2US_SHIFT) |
+ 	       (aux_clock_divider << DP_AUX_CH_CTL_BIT_CLOCK_2X_SHIFT);
+ }
+ 
+@@ -163,8 +189,8 @@ static u32 skl_get_aux_send_ctl(struct intel_dp *intel_dp,
+ 	      DP_AUX_CH_CTL_TIME_OUT_MAX |
+ 	      DP_AUX_CH_CTL_RECEIVE_ERROR |
+ 	      (send_bytes << DP_AUX_CH_CTL_MESSAGE_SIZE_SHIFT) |
+-	      DP_AUX_CH_CTL_FW_SYNC_PULSE_SKL(24) |
+-	      DP_AUX_CH_CTL_SYNC_PULSE_SKL(32);
++	      DP_AUX_CH_CTL_FW_SYNC_PULSE_SKL(intel_dp_aux_fw_sync_len()) |
++	      DP_AUX_CH_CTL_SYNC_PULSE_SKL(intel_dp_aux_sync_len());
+ 
+ 	if (intel_tc_port_in_tbt_alt_mode(dig_port))
+ 		ret |= DP_AUX_CH_CTL_TBT_IO;
+diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
+index a81fa6a20f5aa..7b516b1a4915b 100644
+--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
++++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
+@@ -346,8 +346,10 @@ static int live_parallel_switch(void *arg)
+ 				continue;
+ 
+ 			ce = intel_context_create(data[m].ce[0]->engine);
+-			if (IS_ERR(ce))
++			if (IS_ERR(ce)) {
++				err = PTR_ERR(ce);
+ 				goto out;
++			}
+ 
+ 			err = intel_context_pin(ce);
+ 			if (err) {
+@@ -367,8 +369,10 @@ static int live_parallel_switch(void *arg)
+ 
+ 		worker = kthread_create_worker(0, "igt/parallel:%s",
+ 					       data[n].ce[0]->engine->name);
+-		if (IS_ERR(worker))
++		if (IS_ERR(worker)) {
++			err = PTR_ERR(worker);
+ 			goto out;
++		}
+ 
+ 		data[n].worker = worker;
+ 	}
+@@ -397,8 +401,10 @@ static int live_parallel_switch(void *arg)
+ 			}
+ 		}
+ 
+-		if (igt_live_test_end(&t))
+-			err = -EIO;
++		if (igt_live_test_end(&t)) {
++			err = err ?: -EIO;
++			break;
++		}
+ 	}
+ 
+ out:
+diff --git a/drivers/gpu/drm/i915/gt/selftest_execlists.c b/drivers/gpu/drm/i915/gt/selftest_execlists.c
+index 736b89a8ecf54..4202df5b8c122 100644
+--- a/drivers/gpu/drm/i915/gt/selftest_execlists.c
++++ b/drivers/gpu/drm/i915/gt/selftest_execlists.c
+@@ -1530,8 +1530,8 @@ static int live_busywait_preempt(void *arg)
+ 	struct drm_i915_gem_object *obj;
+ 	struct i915_vma *vma;
+ 	enum intel_engine_id id;
+-	int err = -ENOMEM;
+ 	u32 *map;
++	int err;
+ 
+ 	/*
+ 	 * Verify that even without HAS_LOGICAL_RING_PREEMPTION, we can
+@@ -1539,13 +1539,17 @@ static int live_busywait_preempt(void *arg)
+ 	 */
+ 
+ 	ctx_hi = kernel_context(gt->i915, NULL);
+-	if (!ctx_hi)
+-		return -ENOMEM;
++	if (IS_ERR(ctx_hi))
++		return PTR_ERR(ctx_hi);
++
+ 	ctx_hi->sched.priority = I915_CONTEXT_MAX_USER_PRIORITY;
+ 
+ 	ctx_lo = kernel_context(gt->i915, NULL);
+-	if (!ctx_lo)
++	if (IS_ERR(ctx_lo)) {
++		err = PTR_ERR(ctx_lo);
+ 		goto err_ctx_hi;
++	}
++
+ 	ctx_lo->sched.priority = I915_CONTEXT_MIN_USER_PRIORITY;
+ 
+ 	obj = i915_gem_object_create_internal(gt->i915, PAGE_SIZE);
+diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c
+index ff003403fbbc7..ffd91a5ee2990 100644
+--- a/drivers/gpu/drm/lima/lima_sched.c
++++ b/drivers/gpu/drm/lima/lima_sched.c
+@@ -165,7 +165,7 @@ int lima_sched_context_init(struct lima_sched_pipe *pipe,
+ void lima_sched_context_fini(struct lima_sched_pipe *pipe,
+ 			     struct lima_sched_context *context)
+ {
+-	drm_sched_entity_fini(&context->base);
++	drm_sched_entity_destroy(&context->base);
+ }
+ 
+ struct dma_fence *lima_sched_context_queue_task(struct lima_sched_task *task)
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+index 7f5bc73b20402..611311b65b168 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+@@ -1514,8 +1514,6 @@ int a6xx_gmu_init(struct a6xx_gpu *a6xx_gpu, struct device_node *node)
+ 	if (!pdev)
+ 		return -ENODEV;
+ 
+-	mutex_init(&gmu->lock);
+-
+ 	gmu->dev = &pdev->dev;
+ 
+ 	of_dma_configure(gmu->dev, node, true);
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+index 6faea5049f765..2942d2548ce69 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+@@ -1998,6 +1998,8 @@ struct msm_gpu *a6xx_gpu_init(struct drm_device *dev)
+ 	adreno_gpu = &a6xx_gpu->base;
+ 	gpu = &adreno_gpu->base;
+ 
++	mutex_init(&a6xx_gpu->gmu.lock);
++
+ 	adreno_gpu->registers = NULL;
+ 
+ 	/*
+diff --git a/drivers/i2c/busses/i2c-mv64xxx.c b/drivers/i2c/busses/i2c-mv64xxx.c
+index 047dfef7a6577..878c076ebdc6b 100644
+--- a/drivers/i2c/busses/i2c-mv64xxx.c
++++ b/drivers/i2c/busses/i2c-mv64xxx.c
+@@ -520,6 +520,17 @@ mv64xxx_i2c_intr(int irq, void *dev_id)
+ 
+ 	while (readl(drv_data->reg_base + drv_data->reg_offsets.control) &
+ 						MV64XXX_I2C_REG_CONTROL_IFLG) {
++		/*
++		 * It seems that sometime the controller updates the status
++		 * register only after it asserts IFLG in control register.
++		 * This may result in weird bugs when in atomic mode. A delay
++		 * of 100 ns before reading the status register solves this
++		 * issue. This bug does not seem to appear when using
++		 * interrupts.
++		 */
++		if (drv_data->atomic)
++			ndelay(100);
++
+ 		status = readl(drv_data->reg_base + drv_data->reg_offsets.status);
+ 		mv64xxx_i2c_fsm(drv_data, status);
+ 		mv64xxx_i2c_do_action(drv_data);
+diff --git a/drivers/i2c/busses/i2c-sprd.c b/drivers/i2c/busses/i2c-sprd.c
+index 4fe15cd78907e..ffc54fbf814dd 100644
+--- a/drivers/i2c/busses/i2c-sprd.c
++++ b/drivers/i2c/busses/i2c-sprd.c
+@@ -576,12 +576,14 @@ static int sprd_i2c_remove(struct platform_device *pdev)
+ 	struct sprd_i2c *i2c_dev = platform_get_drvdata(pdev);
+ 	int ret;
+ 
+-	ret = pm_runtime_resume_and_get(i2c_dev->dev);
++	ret = pm_runtime_get_sync(i2c_dev->dev);
+ 	if (ret < 0)
+-		return ret;
++		dev_err(&pdev->dev, "Failed to resume device (%pe)\n", ERR_PTR(ret));
+ 
+ 	i2c_del_adapter(&i2c_dev->adap);
+-	clk_disable_unprepare(i2c_dev->clk);
++
++	if (ret >= 0)
++		clk_disable_unprepare(i2c_dev->clk);
+ 
+ 	pm_runtime_put_noidle(i2c_dev->dev);
+ 	pm_runtime_disable(i2c_dev->dev);
+diff --git a/drivers/input/input.c b/drivers/input/input.c
+index 37e876d45eb9c..641eb86f276e6 100644
+--- a/drivers/input/input.c
++++ b/drivers/input/input.c
+@@ -703,7 +703,7 @@ void input_close_device(struct input_handle *handle)
+ 
+ 	__input_release_device(handle);
+ 
+-	if (!dev->inhibited && !--dev->users) {
++	if (!--dev->users && !dev->inhibited) {
+ 		if (dev->poller)
+ 			input_dev_poller_stop(dev->poller);
+ 		if (dev->close)
+diff --git a/drivers/input/joystick/xpad.c b/drivers/input/joystick/xpad.c
+index f617b2c60819c..8f65f41a1fd75 100644
+--- a/drivers/input/joystick/xpad.c
++++ b/drivers/input/joystick/xpad.c
+@@ -282,7 +282,6 @@ static const struct xpad_device {
+ 	{ 0x1430, 0xf801, "RedOctane Controller", 0, XTYPE_XBOX360 },
+ 	{ 0x146b, 0x0601, "BigBen Interactive XBOX 360 Controller", 0, XTYPE_XBOX360 },
+ 	{ 0x146b, 0x0604, "Bigben Interactive DAIJA Arcade Stick", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOX360 },
+-	{ 0x1532, 0x0037, "Razer Sabertooth", 0, XTYPE_XBOX360 },
+ 	{ 0x1532, 0x0a00, "Razer Atrox Arcade Stick", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOXONE },
+ 	{ 0x1532, 0x0a03, "Razer Wildcat", 0, XTYPE_XBOXONE },
+ 	{ 0x15e4, 0x3f00, "Power A Mini Pro Elite", 0, XTYPE_XBOX360 },
+diff --git a/drivers/input/mouse/elantech.c b/drivers/input/mouse/elantech.c
+index ece97f8c6a3e3..2118b2075f437 100644
+--- a/drivers/input/mouse/elantech.c
++++ b/drivers/input/mouse/elantech.c
+@@ -674,10 +674,11 @@ static void process_packet_head_v4(struct psmouse *psmouse)
+ 	struct input_dev *dev = psmouse->dev;
+ 	struct elantech_data *etd = psmouse->private;
+ 	unsigned char *packet = psmouse->packet;
+-	int id = ((packet[3] & 0xe0) >> 5) - 1;
++	int id;
+ 	int pres, traces;
+ 
+-	if (id < 0)
++	id = ((packet[3] & 0xe0) >> 5) - 1;
++	if (id < 0 || id >= ETP_MAX_FINGERS)
+ 		return;
+ 
+ 	etd->mt[id].x = ((packet[1] & 0x0f) << 8) | packet[2];
+@@ -707,7 +708,7 @@ static void process_packet_motion_v4(struct psmouse *psmouse)
+ 	int id, sid;
+ 
+ 	id = ((packet[0] & 0xe0) >> 5) - 1;
+-	if (id < 0)
++	if (id < 0 || id >= ETP_MAX_FINGERS)
+ 		return;
+ 
+ 	sid = ((packet[3] & 0xe0) >> 5) - 1;
+@@ -728,7 +729,7 @@ static void process_packet_motion_v4(struct psmouse *psmouse)
+ 	input_report_abs(dev, ABS_MT_POSITION_X, etd->mt[id].x);
+ 	input_report_abs(dev, ABS_MT_POSITION_Y, etd->mt[id].y);
+ 
+-	if (sid >= 0) {
++	if (sid >= 0 && sid < ETP_MAX_FINGERS) {
+ 		etd->mt[sid].x += delta_x2 * weight;
+ 		etd->mt[sid].y -= delta_y2 * weight;
+ 		input_mt_slot(dev, sid);
+diff --git a/drivers/input/touchscreen/cyttsp5.c b/drivers/input/touchscreen/cyttsp5.c
+index 30102cb80fac8..3c9d07218f48d 100644
+--- a/drivers/input/touchscreen/cyttsp5.c
++++ b/drivers/input/touchscreen/cyttsp5.c
+@@ -560,7 +560,7 @@ static int cyttsp5_hid_output_get_sysinfo(struct cyttsp5 *ts)
+ static int cyttsp5_hid_output_bl_launch_app(struct cyttsp5 *ts)
+ {
+ 	int rc;
+-	u8 cmd[HID_OUTPUT_BL_LAUNCH_APP];
++	u8 cmd[HID_OUTPUT_BL_LAUNCH_APP_SIZE];
+ 	u16 crc;
+ 
+ 	put_unaligned_le16(HID_OUTPUT_BL_LAUNCH_APP_SIZE, cmd);
+diff --git a/drivers/misc/eeprom/Kconfig b/drivers/misc/eeprom/Kconfig
+index f0a7531f354c1..2d240bfa819f8 100644
+--- a/drivers/misc/eeprom/Kconfig
++++ b/drivers/misc/eeprom/Kconfig
+@@ -6,6 +6,7 @@ config EEPROM_AT24
+ 	depends on I2C && SYSFS
+ 	select NVMEM
+ 	select NVMEM_SYSFS
++	select REGMAP
+ 	select REGMAP_I2C
+ 	help
+ 	  Enable this driver to get read/write support to most I2C EEPROMs
+diff --git a/drivers/net/dsa/lan9303-core.c b/drivers/net/dsa/lan9303-core.c
+index cbe8318753471..c0215a8770f49 100644
+--- a/drivers/net/dsa/lan9303-core.c
++++ b/drivers/net/dsa/lan9303-core.c
+@@ -1188,8 +1188,6 @@ static int lan9303_port_fdb_add(struct dsa_switch *ds, int port,
+ 	struct lan9303 *chip = ds->priv;
+ 
+ 	dev_dbg(chip->dev, "%s(%d, %pM, %d)\n", __func__, port, addr, vid);
+-	if (vid)
+-		return -EOPNOTSUPP;
+ 
+ 	return lan9303_alr_add_port(chip, addr, port, false);
+ }
+@@ -1201,8 +1199,6 @@ static int lan9303_port_fdb_del(struct dsa_switch *ds, int port,
+ 	struct lan9303 *chip = ds->priv;
+ 
+ 	dev_dbg(chip->dev, "%s(%d, %pM, %d)\n", __func__, port, addr, vid);
+-	if (vid)
+-		return -EOPNOTSUPP;
+ 	lan9303_alr_del_port(chip, addr, port);
+ 
+ 	return 0;
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index 651b79ce5d80c..9784e86d4d96a 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -2392,6 +2392,9 @@ static int bnxt_async_event_process(struct bnxt *bp,
+ 				struct bnxt_ptp_cfg *ptp = bp->ptp_cfg;
+ 				u64 ns;
+ 
++				if (!ptp)
++					goto async_event_process_exit;
++
+ 				spin_lock_bh(&ptp->ptp_lock);
+ 				bnxt_ptp_update_current_time(bp);
+ 				ns = (((u64)BNXT_EVENT_PHC_RTC_UPDATE(data1) <<
+@@ -4789,6 +4792,9 @@ int bnxt_hwrm_func_drv_rgtr(struct bnxt *bp, unsigned long *bmap, int bmap_size,
+ 		if (event_id == ASYNC_EVENT_CMPL_EVENT_ID_ERROR_RECOVERY &&
+ 		    !(bp->fw_cap & BNXT_FW_CAP_ERROR_RECOVERY))
+ 			continue;
++		if (event_id == ASYNC_EVENT_CMPL_EVENT_ID_PHC_UPDATE &&
++		    !bp->ptp_cfg)
++			continue;
+ 		__set_bit(bnxt_async_events_arr[i], async_events_bmap);
+ 	}
+ 	if (bmap && bmap_size) {
+@@ -5376,6 +5382,7 @@ static void bnxt_hwrm_update_rss_hash_cfg(struct bnxt *bp)
+ 	if (hwrm_req_init(bp, req, HWRM_VNIC_RSS_QCFG))
+ 		return;
+ 
++	req->vnic_id = cpu_to_le16(vnic->fw_vnic_id);
+ 	/* all contexts configured to same hash_type, zero always exists */
+ 	req->rss_ctx_idx = cpu_to_le16(vnic->fw_rss_cos_lb_ctx[0]);
+ 	resp = hwrm_req_hold(bp, req);
+@@ -8838,6 +8845,9 @@ static int bnxt_init_chip(struct bnxt *bp, bool irq_re_init)
+ 		goto err_out;
+ 	}
+ 
++	if (BNXT_VF(bp))
++		bnxt_hwrm_func_qcfg(bp);
++
+ 	rc = bnxt_setup_vnic(bp, 0);
+ 	if (rc)
+ 		goto err_out;
+@@ -11624,6 +11634,7 @@ static void bnxt_tx_timeout(struct net_device *dev, unsigned int txqueue)
+ static void bnxt_fw_health_check(struct bnxt *bp)
+ {
+ 	struct bnxt_fw_health *fw_health = bp->fw_health;
++	struct pci_dev *pdev = bp->pdev;
+ 	u32 val;
+ 
+ 	if (!fw_health->enabled || test_bit(BNXT_STATE_IN_FW_RESET, &bp->state))
+@@ -11637,7 +11648,7 @@ static void bnxt_fw_health_check(struct bnxt *bp)
+ 	}
+ 
+ 	val = bnxt_fw_health_readl(bp, BNXT_FW_HEARTBEAT_REG);
+-	if (val == fw_health->last_fw_heartbeat) {
++	if (val == fw_health->last_fw_heartbeat && pci_device_is_present(pdev)) {
+ 		fw_health->arrests++;
+ 		goto fw_reset;
+ 	}
+@@ -11645,7 +11656,7 @@ static void bnxt_fw_health_check(struct bnxt *bp)
+ 	fw_health->last_fw_heartbeat = val;
+ 
+ 	val = bnxt_fw_health_readl(bp, BNXT_FW_RESET_CNT_REG);
+-	if (val != fw_health->last_fw_reset_cnt) {
++	if (val != fw_health->last_fw_reset_cnt && pci_device_is_present(pdev)) {
+ 		fw_health->discoveries++;
+ 		goto fw_reset;
+ 	}
+@@ -13051,26 +13062,37 @@ static void bnxt_cfg_ntp_filters(struct bnxt *bp)
+ 
+ #endif /* CONFIG_RFS_ACCEL */
+ 
+-static int bnxt_udp_tunnel_sync(struct net_device *netdev, unsigned int table)
++static int bnxt_udp_tunnel_set_port(struct net_device *netdev, unsigned int table,
++				    unsigned int entry, struct udp_tunnel_info *ti)
+ {
+ 	struct bnxt *bp = netdev_priv(netdev);
+-	struct udp_tunnel_info ti;
+ 	unsigned int cmd;
+ 
+-	udp_tunnel_nic_get_port(netdev, table, 0, &ti);
+-	if (ti.type == UDP_TUNNEL_TYPE_VXLAN)
++	if (ti->type == UDP_TUNNEL_TYPE_VXLAN)
+ 		cmd = TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_VXLAN;
+ 	else
+ 		cmd = TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_GENEVE;
+ 
+-	if (ti.port)
+-		return bnxt_hwrm_tunnel_dst_port_alloc(bp, ti.port, cmd);
++	return bnxt_hwrm_tunnel_dst_port_alloc(bp, ti->port, cmd);
++}
++
++static int bnxt_udp_tunnel_unset_port(struct net_device *netdev, unsigned int table,
++				      unsigned int entry, struct udp_tunnel_info *ti)
++{
++	struct bnxt *bp = netdev_priv(netdev);
++	unsigned int cmd;
++
++	if (ti->type == UDP_TUNNEL_TYPE_VXLAN)
++		cmd = TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_VXLAN;
++	else
++		cmd = TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_GENEVE;
+ 
+ 	return bnxt_hwrm_tunnel_dst_port_free(bp, cmd);
+ }
+ 
+ static const struct udp_tunnel_nic_info bnxt_udp_tunnels = {
+-	.sync_table	= bnxt_udp_tunnel_sync,
++	.set_port	= bnxt_udp_tunnel_set_port,
++	.unset_port	= bnxt_udp_tunnel_unset_port,
+ 	.flags		= UDP_TUNNEL_NIC_INFO_MAY_SLEEP |
+ 			  UDP_TUNNEL_NIC_INFO_OPEN_ONLY,
+ 	.tables		= {
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+index 2dd8ee4a6f75b..8fd5071d8b099 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+@@ -3831,7 +3831,7 @@ static int bnxt_reset(struct net_device *dev, u32 *flags)
+ 		}
+ 	}
+ 
+-	if (req & BNXT_FW_RESET_AP) {
++	if (!BNXT_CHIP_P4_PLUS(bp) && (req & BNXT_FW_RESET_AP)) {
+ 		/* This feature is not supported in older firmware versions */
+ 		if (bp->hwrm_spec_code >= 0x10803) {
+ 			if (!bnxt_firmware_reset_ap(dev)) {
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c
+index a3a3978a4d1c2..af7b4466f9520 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c
+@@ -946,6 +946,7 @@ int bnxt_ptp_init(struct bnxt *bp, bool phc_cfg)
+ 		bnxt_ptp_timecounter_init(bp, true);
+ 		bnxt_ptp_adjfine_rtc(bp, 0);
+ 	}
++	bnxt_hwrm_func_drv_rgtr(bp, NULL, 0, true);
+ 
+ 	ptp->ptp_info = bnxt_ptp_caps;
+ 	if ((bp->fw_cap & BNXT_FW_CAP_PTP_PPS)) {
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+index eca0c92c0c84d..2b5761ad2f92f 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+@@ -1272,7 +1272,8 @@ static void bcmgenet_get_ethtool_stats(struct net_device *dev,
+ 	}
+ }
+ 
+-static void bcmgenet_eee_enable_set(struct net_device *dev, bool enable)
++void bcmgenet_eee_enable_set(struct net_device *dev, bool enable,
++			     bool tx_lpi_enabled)
+ {
+ 	struct bcmgenet_priv *priv = netdev_priv(dev);
+ 	u32 off = priv->hw_params->tbuf_offset + TBUF_ENERGY_CTRL;
+@@ -1292,7 +1293,7 @@ static void bcmgenet_eee_enable_set(struct net_device *dev, bool enable)
+ 
+ 	/* Enable EEE and switch to a 27Mhz clock automatically */
+ 	reg = bcmgenet_readl(priv->base + off);
+-	if (enable)
++	if (tx_lpi_enabled)
+ 		reg |= TBUF_EEE_EN | TBUF_PM_EN;
+ 	else
+ 		reg &= ~(TBUF_EEE_EN | TBUF_PM_EN);
+@@ -1313,6 +1314,7 @@ static void bcmgenet_eee_enable_set(struct net_device *dev, bool enable)
+ 
+ 	priv->eee.eee_enabled = enable;
+ 	priv->eee.eee_active = enable;
++	priv->eee.tx_lpi_enabled = tx_lpi_enabled;
+ }
+ 
+ static int bcmgenet_get_eee(struct net_device *dev, struct ethtool_eee *e)
+@@ -1328,6 +1330,7 @@ static int bcmgenet_get_eee(struct net_device *dev, struct ethtool_eee *e)
+ 
+ 	e->eee_enabled = p->eee_enabled;
+ 	e->eee_active = p->eee_active;
++	e->tx_lpi_enabled = p->tx_lpi_enabled;
+ 	e->tx_lpi_timer = bcmgenet_umac_readl(priv, UMAC_EEE_LPI_TIMER);
+ 
+ 	return phy_ethtool_get_eee(dev->phydev, e);
+@@ -1337,7 +1340,6 @@ static int bcmgenet_set_eee(struct net_device *dev, struct ethtool_eee *e)
+ {
+ 	struct bcmgenet_priv *priv = netdev_priv(dev);
+ 	struct ethtool_eee *p = &priv->eee;
+-	int ret = 0;
+ 
+ 	if (GENET_IS_V1(priv))
+ 		return -EOPNOTSUPP;
+@@ -1348,16 +1350,11 @@ static int bcmgenet_set_eee(struct net_device *dev, struct ethtool_eee *e)
+ 	p->eee_enabled = e->eee_enabled;
+ 
+ 	if (!p->eee_enabled) {
+-		bcmgenet_eee_enable_set(dev, false);
++		bcmgenet_eee_enable_set(dev, false, false);
+ 	} else {
+-		ret = phy_init_eee(dev->phydev, false);
+-		if (ret) {
+-			netif_err(priv, hw, dev, "EEE initialization failed\n");
+-			return ret;
+-		}
+-
++		p->eee_active = phy_init_eee(dev->phydev, false) >= 0;
+ 		bcmgenet_umac_writel(priv, e->tx_lpi_timer, UMAC_EEE_LPI_TIMER);
+-		bcmgenet_eee_enable_set(dev, true);
++		bcmgenet_eee_enable_set(dev, p->eee_active, e->tx_lpi_enabled);
+ 	}
+ 
+ 	return phy_ethtool_set_eee(dev->phydev, e);
+@@ -4279,9 +4276,6 @@ static int bcmgenet_resume(struct device *d)
+ 	if (!device_may_wakeup(d))
+ 		phy_resume(dev->phydev);
+ 
+-	if (priv->eee.eee_enabled)
+-		bcmgenet_eee_enable_set(dev, true);
+-
+ 	bcmgenet_netif_start(dev);
+ 
+ 	netif_device_attach(dev);
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.h b/drivers/net/ethernet/broadcom/genet/bcmgenet.h
+index 946f6e283c4e6..1985c0ec4da2a 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.h
++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.h
+@@ -703,4 +703,7 @@ int bcmgenet_wol_power_down_cfg(struct bcmgenet_priv *priv,
+ void bcmgenet_wol_power_up_cfg(struct bcmgenet_priv *priv,
+ 			       enum bcmgenet_power_mode mode);
+ 
++void bcmgenet_eee_enable_set(struct net_device *dev, bool enable,
++			     bool tx_lpi_enabled);
++
+ #endif /* __BCMGENET_H__ */
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmmii.c b/drivers/net/ethernet/broadcom/genet/bcmmii.c
+index be042905ada2a..c15ed0acdb777 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmmii.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmmii.c
+@@ -87,6 +87,11 @@ static void bcmgenet_mac_config(struct net_device *dev)
+ 		reg |= CMD_TX_EN | CMD_RX_EN;
+ 	}
+ 	bcmgenet_umac_writel(priv, reg, UMAC_CMD);
++
++	priv->eee.eee_active = phy_init_eee(phydev, 0) >= 0;
++	bcmgenet_eee_enable_set(dev,
++				priv->eee.eee_enabled && priv->eee.eee_active,
++				priv->eee.tx_lpi_enabled);
+ }
+ 
+ /* setup netdev link state when PHY link status change and
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc.c b/drivers/net/ethernet/freescale/enetc/enetc.c
+index 2fc712b24d126..24024745ecef6 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc.c
++++ b/drivers/net/ethernet/freescale/enetc/enetc.c
+@@ -1222,7 +1222,13 @@ static int enetc_clean_rx_ring(struct enetc_bdr *rx_ring,
+ 		if (!skb)
+ 			break;
+ 
+-		rx_byte_cnt += skb->len;
++		/* When set, the outer VLAN header is extracted and reported
++		 * in the receive buffer descriptor. So rx_byte_cnt should
++		 * add the length of the extracted VLAN header.
++		 */
++		if (bd_status & ENETC_RXBD_FLAG_VLAN)
++			rx_byte_cnt += VLAN_HLEN;
++		rx_byte_cnt += skb->len + ETH_HLEN;
+ 		rx_frm_cnt++;
+ 
+ 		napi_gro_receive(napi, skb);
+@@ -1558,6 +1564,14 @@ static int enetc_clean_rx_ring_xdp(struct enetc_bdr *rx_ring,
+ 		enetc_build_xdp_buff(rx_ring, bd_status, &rxbd, &i,
+ 				     &cleaned_cnt, &xdp_buff);
+ 
++		/* When set, the outer VLAN header is extracted and reported
++		 * in the receive buffer descriptor. So rx_byte_cnt should
++		 * add the length of the extracted VLAN header.
++		 */
++		if (bd_status & ENETC_RXBD_FLAG_VLAN)
++			rx_byte_cnt += VLAN_HLEN;
++		rx_byte_cnt += xdp_get_buff_len(&xdp_buff);
++
+ 		xdp_act = bpf_prog_run_xdp(prog, &xdp_buff);
+ 
+ 		switch (xdp_act) {
+diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c
+index c2fda4fa4188c..b534d7726d3e8 100644
+--- a/drivers/net/ethernet/intel/ice/ice_common.c
++++ b/drivers/net/ethernet/intel/ice/ice_common.c
+@@ -5169,7 +5169,7 @@ ice_aq_read_i2c(struct ice_hw *hw, struct ice_aqc_link_topo_addr topo_addr,
+  */
+ int
+ ice_aq_write_i2c(struct ice_hw *hw, struct ice_aqc_link_topo_addr topo_addr,
+-		 u16 bus_addr, __le16 addr, u8 params, u8 *data,
++		 u16 bus_addr, __le16 addr, u8 params, const u8 *data,
+ 		 struct ice_sq_cd *cd)
+ {
+ 	struct ice_aq_desc desc = { 0 };
+diff --git a/drivers/net/ethernet/intel/ice/ice_common.h b/drivers/net/ethernet/intel/ice/ice_common.h
+index 8ba5f935a092b..81961a7d65985 100644
+--- a/drivers/net/ethernet/intel/ice/ice_common.h
++++ b/drivers/net/ethernet/intel/ice/ice_common.h
+@@ -229,7 +229,7 @@ ice_aq_read_i2c(struct ice_hw *hw, struct ice_aqc_link_topo_addr topo_addr,
+ 		struct ice_sq_cd *cd);
+ int
+ ice_aq_write_i2c(struct ice_hw *hw, struct ice_aqc_link_topo_addr topo_addr,
+-		 u16 bus_addr, __le16 addr, u8 params, u8 *data,
++		 u16 bus_addr, __le16 addr, u8 params, const u8 *data,
+ 		 struct ice_sq_cd *cd);
+ bool ice_fw_supports_report_dflt_cfg(struct ice_hw *hw);
+ #endif /* _ICE_COMMON_H_ */
+diff --git a/drivers/net/ethernet/intel/ice/ice_gnss.c b/drivers/net/ethernet/intel/ice/ice_gnss.c
+index 8dec748bb53a4..12086aafb42fb 100644
+--- a/drivers/net/ethernet/intel/ice/ice_gnss.c
++++ b/drivers/net/ethernet/intel/ice/ice_gnss.c
+@@ -16,8 +16,8 @@
+  * * number of bytes written - success
+  * * negative - error code
+  */
+-static unsigned int
+-ice_gnss_do_write(struct ice_pf *pf, unsigned char *buf, unsigned int size)
++static int
++ice_gnss_do_write(struct ice_pf *pf, const unsigned char *buf, unsigned int size)
+ {
+ 	struct ice_aqc_link_topo_addr link_topo;
+ 	struct ice_hw *hw = &pf->hw;
+@@ -72,39 +72,7 @@ err_out:
+ 	dev_err(ice_pf_to_dev(pf), "GNSS failed to write, offset=%u, size=%u, err=%d\n",
+ 		offset, size, err);
+ 
+-	return offset;
+-}
+-
+-/**
+- * ice_gnss_write_pending - Write all pending data to internal GNSS
+- * @work: GNSS write work structure
+- */
+-static void ice_gnss_write_pending(struct kthread_work *work)
+-{
+-	struct gnss_serial *gnss = container_of(work, struct gnss_serial,
+-						write_work);
+-	struct ice_pf *pf = gnss->back;
+-
+-	if (!pf)
+-		return;
+-
+-	if (!test_bit(ICE_FLAG_GNSS, pf->flags))
+-		return;
+-
+-	if (!list_empty(&gnss->queue)) {
+-		struct gnss_write_buf *write_buf = NULL;
+-		unsigned int bytes;
+-
+-		write_buf = list_first_entry(&gnss->queue,
+-					     struct gnss_write_buf, queue);
+-
+-		bytes = ice_gnss_do_write(pf, write_buf->buf, write_buf->size);
+-		dev_dbg(ice_pf_to_dev(pf), "%u bytes written to GNSS\n", bytes);
+-
+-		list_del(&write_buf->queue);
+-		kfree(write_buf->buf);
+-		kfree(write_buf);
+-	}
++	return err;
+ }
+ 
+ /**
+@@ -224,8 +192,6 @@ static struct gnss_serial *ice_gnss_struct_init(struct ice_pf *pf)
+ 	pf->gnss_serial = gnss;
+ 
+ 	kthread_init_delayed_work(&gnss->read_work, ice_gnss_read);
+-	INIT_LIST_HEAD(&gnss->queue);
+-	kthread_init_work(&gnss->write_work, ice_gnss_write_pending);
+ 	kworker = kthread_create_worker(0, "ice-gnss-%s", dev_name(dev));
+ 	if (IS_ERR(kworker)) {
+ 		kfree(gnss);
+@@ -285,7 +251,6 @@ static void ice_gnss_close(struct gnss_device *gdev)
+ 	if (!gnss)
+ 		return;
+ 
+-	kthread_cancel_work_sync(&gnss->write_work);
+ 	kthread_cancel_delayed_work_sync(&gnss->read_work);
+ }
+ 
+@@ -304,10 +269,7 @@ ice_gnss_write(struct gnss_device *gdev, const unsigned char *buf,
+ 	       size_t count)
+ {
+ 	struct ice_pf *pf = gnss_get_drvdata(gdev);
+-	struct gnss_write_buf *write_buf;
+ 	struct gnss_serial *gnss;
+-	unsigned char *cmd_buf;
+-	int err = count;
+ 
+ 	/* We cannot write a single byte using our I2C implementation. */
+ 	if (count <= 1 || count > ICE_GNSS_TTY_WRITE_BUF)
+@@ -323,24 +285,7 @@ ice_gnss_write(struct gnss_device *gdev, const unsigned char *buf,
+ 	if (!gnss)
+ 		return -ENODEV;
+ 
+-	cmd_buf = kcalloc(count, sizeof(*buf), GFP_KERNEL);
+-	if (!cmd_buf)
+-		return -ENOMEM;
+-
+-	memcpy(cmd_buf, buf, count);
+-	write_buf = kzalloc(sizeof(*write_buf), GFP_KERNEL);
+-	if (!write_buf) {
+-		kfree(cmd_buf);
+-		return -ENOMEM;
+-	}
+-
+-	write_buf->buf = cmd_buf;
+-	write_buf->size = count;
+-	INIT_LIST_HEAD(&write_buf->queue);
+-	list_add_tail(&write_buf->queue, &gnss->queue);
+-	kthread_queue_work(gnss->kworker, &gnss->write_work);
+-
+-	return err;
++	return ice_gnss_do_write(pf, buf, count);
+ }
+ 
+ static const struct gnss_operations ice_gnss_ops = {
+@@ -436,7 +381,6 @@ void ice_gnss_exit(struct ice_pf *pf)
+ 	if (pf->gnss_serial) {
+ 		struct gnss_serial *gnss = pf->gnss_serial;
+ 
+-		kthread_cancel_work_sync(&gnss->write_work);
+ 		kthread_cancel_delayed_work_sync(&gnss->read_work);
+ 		kthread_destroy_worker(gnss->kworker);
+ 		gnss->kworker = NULL;
+diff --git a/drivers/net/ethernet/intel/ice/ice_gnss.h b/drivers/net/ethernet/intel/ice/ice_gnss.h
+index 4d49e5b0b4b81..d95ca3928b2ea 100644
+--- a/drivers/net/ethernet/intel/ice/ice_gnss.h
++++ b/drivers/net/ethernet/intel/ice/ice_gnss.h
+@@ -23,26 +23,16 @@
+ #define ICE_MAX_UBX_READ_TRIES		255
+ #define ICE_MAX_UBX_ACK_READ_TRIES	4095
+ 
+-struct gnss_write_buf {
+-	struct list_head queue;
+-	unsigned int size;
+-	unsigned char *buf;
+-};
+-
+ /**
+  * struct gnss_serial - data used to initialize GNSS TTY port
+  * @back: back pointer to PF
+  * @kworker: kwork thread for handling periodic work
+  * @read_work: read_work function for handling GNSS reads
+- * @write_work: write_work function for handling GNSS writes
+- * @queue: write buffers queue
+  */
+ struct gnss_serial {
+ 	struct ice_pf *back;
+ 	struct kthread_worker *kworker;
+ 	struct kthread_delayed_work read_work;
+-	struct kthread_work write_work;
+-	struct list_head queue;
+ };
+ 
+ #if IS_ENABLED(CONFIG_GNSS)
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_l2.c b/drivers/net/ethernet/qlogic/qed/qed_l2.c
+index 2edd6bf64a3cc..7776d3bdd459a 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_l2.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_l2.c
+@@ -1903,7 +1903,7 @@ void qed_get_vport_stats(struct qed_dev *cdev, struct qed_eth_stats *stats)
+ {
+ 	u32 i;
+ 
+-	if (!cdev) {
++	if (!cdev || cdev->recov_in_prog) {
+ 		memset(stats, 0, sizeof(*stats));
+ 		return;
+ 	}
+diff --git a/drivers/net/ethernet/qlogic/qede/qede.h b/drivers/net/ethernet/qlogic/qede/qede.h
+index f90dcfe9ee688..8a63f99d499c4 100644
+--- a/drivers/net/ethernet/qlogic/qede/qede.h
++++ b/drivers/net/ethernet/qlogic/qede/qede.h
+@@ -271,6 +271,10 @@ struct qede_dev {
+ #define QEDE_ERR_WARN			3
+ 
+ 	struct qede_dump_info		dump_info;
++	struct delayed_work		periodic_task;
++	unsigned long			stats_coal_ticks;
++	u32				stats_coal_usecs;
++	spinlock_t			stats_lock; /* lock for vport stats access */
+ };
+ 
+ enum QEDE_STATE {
+diff --git a/drivers/net/ethernet/qlogic/qede/qede_ethtool.c b/drivers/net/ethernet/qlogic/qede/qede_ethtool.c
+index 8034d812d5a00..d0a3395b2bc1f 100644
+--- a/drivers/net/ethernet/qlogic/qede/qede_ethtool.c
++++ b/drivers/net/ethernet/qlogic/qede/qede_ethtool.c
+@@ -430,6 +430,8 @@ static void qede_get_ethtool_stats(struct net_device *dev,
+ 		}
+ 	}
+ 
++	spin_lock(&edev->stats_lock);
++
+ 	for (i = 0; i < QEDE_NUM_STATS; i++) {
+ 		if (qede_is_irrelevant_stat(edev, i))
+ 			continue;
+@@ -439,6 +441,8 @@ static void qede_get_ethtool_stats(struct net_device *dev,
+ 		buf++;
+ 	}
+ 
++	spin_unlock(&edev->stats_lock);
++
+ 	__qede_unlock(edev);
+ }
+ 
+@@ -830,6 +834,7 @@ out:
+ 
+ 	coal->rx_coalesce_usecs = rx_coal;
+ 	coal->tx_coalesce_usecs = tx_coal;
++	coal->stats_block_coalesce_usecs = edev->stats_coal_usecs;
+ 
+ 	return rc;
+ }
+@@ -843,6 +848,19 @@ int qede_set_coalesce(struct net_device *dev, struct ethtool_coalesce *coal,
+ 	int i, rc = 0;
+ 	u16 rxc, txc;
+ 
++	if (edev->stats_coal_usecs != coal->stats_block_coalesce_usecs) {
++		edev->stats_coal_usecs = coal->stats_block_coalesce_usecs;
++		if (edev->stats_coal_usecs) {
++			edev->stats_coal_ticks = usecs_to_jiffies(edev->stats_coal_usecs);
++			schedule_delayed_work(&edev->periodic_task, 0);
++
++			DP_INFO(edev, "Configured stats coal ticks=%lu jiffies\n",
++				edev->stats_coal_ticks);
++		} else {
++			cancel_delayed_work_sync(&edev->periodic_task);
++		}
++	}
++
+ 	if (!netif_running(dev)) {
+ 		DP_INFO(edev, "Interface is down\n");
+ 		return -EINVAL;
+@@ -2253,7 +2271,8 @@ out:
+ }
+ 
+ static const struct ethtool_ops qede_ethtool_ops = {
+-	.supported_coalesce_params	= ETHTOOL_COALESCE_USECS,
++	.supported_coalesce_params	= ETHTOOL_COALESCE_USECS |
++					  ETHTOOL_COALESCE_STATS_BLOCK_USECS,
+ 	.get_link_ksettings		= qede_get_link_ksettings,
+ 	.set_link_ksettings		= qede_set_link_ksettings,
+ 	.get_drvinfo			= qede_get_drvinfo,
+@@ -2304,7 +2323,8 @@ static const struct ethtool_ops qede_ethtool_ops = {
+ };
+ 
+ static const struct ethtool_ops qede_vf_ethtool_ops = {
+-	.supported_coalesce_params	= ETHTOOL_COALESCE_USECS,
++	.supported_coalesce_params	= ETHTOOL_COALESCE_USECS |
++					  ETHTOOL_COALESCE_STATS_BLOCK_USECS,
+ 	.get_link_ksettings		= qede_get_link_ksettings,
+ 	.get_drvinfo			= qede_get_drvinfo,
+ 	.get_msglevel			= qede_get_msglevel,
+diff --git a/drivers/net/ethernet/qlogic/qede/qede_main.c b/drivers/net/ethernet/qlogic/qede/qede_main.c
+index 261f982ca40da..36a75e84a084a 100644
+--- a/drivers/net/ethernet/qlogic/qede/qede_main.c
++++ b/drivers/net/ethernet/qlogic/qede/qede_main.c
+@@ -308,6 +308,8 @@ void qede_fill_by_demand_stats(struct qede_dev *edev)
+ 
+ 	edev->ops->get_vport_stats(edev->cdev, &stats);
+ 
++	spin_lock(&edev->stats_lock);
++
+ 	p_common->no_buff_discards = stats.common.no_buff_discards;
+ 	p_common->packet_too_big_discard = stats.common.packet_too_big_discard;
+ 	p_common->ttl0_discard = stats.common.ttl0_discard;
+@@ -405,6 +407,8 @@ void qede_fill_by_demand_stats(struct qede_dev *edev)
+ 		p_ah->tx_1519_to_max_byte_packets =
+ 		    stats.ah.tx_1519_to_max_byte_packets;
+ 	}
++
++	spin_unlock(&edev->stats_lock);
+ }
+ 
+ static void qede_get_stats64(struct net_device *dev,
+@@ -413,9 +417,10 @@ static void qede_get_stats64(struct net_device *dev,
+ 	struct qede_dev *edev = netdev_priv(dev);
+ 	struct qede_stats_common *p_common;
+ 
+-	qede_fill_by_demand_stats(edev);
+ 	p_common = &edev->stats.common;
+ 
++	spin_lock(&edev->stats_lock);
++
+ 	stats->rx_packets = p_common->rx_ucast_pkts + p_common->rx_mcast_pkts +
+ 			    p_common->rx_bcast_pkts;
+ 	stats->tx_packets = p_common->tx_ucast_pkts + p_common->tx_mcast_pkts +
+@@ -435,6 +440,8 @@ static void qede_get_stats64(struct net_device *dev,
+ 		stats->collisions = edev->stats.bb.tx_total_collisions;
+ 	stats->rx_crc_errors = p_common->rx_crc_errors;
+ 	stats->rx_frame_errors = p_common->rx_align_errors;
++
++	spin_unlock(&edev->stats_lock);
+ }
+ 
+ #ifdef CONFIG_QED_SRIOV
+@@ -1064,6 +1071,23 @@ static void qede_unlock(struct qede_dev *edev)
+ 	rtnl_unlock();
+ }
+ 
++static void qede_periodic_task(struct work_struct *work)
++{
++	struct qede_dev *edev = container_of(work, struct qede_dev,
++					     periodic_task.work);
++
++	qede_fill_by_demand_stats(edev);
++	schedule_delayed_work(&edev->periodic_task, edev->stats_coal_ticks);
++}
++
++static void qede_init_periodic_task(struct qede_dev *edev)
++{
++	INIT_DELAYED_WORK(&edev->periodic_task, qede_periodic_task);
++	spin_lock_init(&edev->stats_lock);
++	edev->stats_coal_usecs = USEC_PER_SEC;
++	edev->stats_coal_ticks = usecs_to_jiffies(USEC_PER_SEC);
++}
++
+ static void qede_sp_task(struct work_struct *work)
+ {
+ 	struct qede_dev *edev = container_of(work, struct qede_dev,
+@@ -1083,6 +1107,7 @@ static void qede_sp_task(struct work_struct *work)
+ 	 */
+ 
+ 	if (test_and_clear_bit(QEDE_SP_RECOVERY, &edev->sp_flags)) {
++		cancel_delayed_work_sync(&edev->periodic_task);
+ #ifdef CONFIG_QED_SRIOV
+ 		/* SRIOV must be disabled outside the lock to avoid a deadlock.
+ 		 * The recovery of the active VFs is currently not supported.
+@@ -1273,6 +1298,7 @@ static int __qede_probe(struct pci_dev *pdev, u32 dp_module, u8 dp_level,
+ 		 */
+ 		INIT_DELAYED_WORK(&edev->sp_task, qede_sp_task);
+ 		mutex_init(&edev->qede_lock);
++		qede_init_periodic_task(edev);
+ 
+ 		rc = register_netdev(edev->ndev);
+ 		if (rc) {
+@@ -1297,6 +1323,11 @@ static int __qede_probe(struct pci_dev *pdev, u32 dp_module, u8 dp_level,
+ 	edev->rx_copybreak = QEDE_RX_HDR_SIZE;
+ 
+ 	qede_log_probe(edev);
++
++	/* retain user config (for example - after recovery) */
++	if (edev->stats_coal_usecs)
++		schedule_delayed_work(&edev->periodic_task, 0);
++
+ 	return 0;
+ 
+ err4:
+@@ -1365,6 +1396,7 @@ static void __qede_remove(struct pci_dev *pdev, enum qede_remove_mode mode)
+ 		unregister_netdev(ndev);
+ 
+ 		cancel_delayed_work_sync(&edev->sp_task);
++		cancel_delayed_work_sync(&edev->periodic_task);
+ 
+ 		edev->ops->common->set_power_state(cdev, PCI_D0);
+ 
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index 13ac7f1c7ae8c..c40fc1e304f2d 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -204,6 +204,8 @@ struct control_buf {
+ 	__virtio16 vid;
+ 	__virtio64 offloads;
+ 	struct virtio_net_ctrl_rss rss;
++	struct virtio_net_ctrl_coal_tx coal_tx;
++	struct virtio_net_ctrl_coal_rx coal_rx;
+ };
+ 
+ struct virtnet_info {
+@@ -2933,12 +2935,10 @@ static int virtnet_send_notf_coal_cmds(struct virtnet_info *vi,
+ 				       struct ethtool_coalesce *ec)
+ {
+ 	struct scatterlist sgs_tx, sgs_rx;
+-	struct virtio_net_ctrl_coal_tx coal_tx;
+-	struct virtio_net_ctrl_coal_rx coal_rx;
+ 
+-	coal_tx.tx_usecs = cpu_to_le32(ec->tx_coalesce_usecs);
+-	coal_tx.tx_max_packets = cpu_to_le32(ec->tx_max_coalesced_frames);
+-	sg_init_one(&sgs_tx, &coal_tx, sizeof(coal_tx));
++	vi->ctrl->coal_tx.tx_usecs = cpu_to_le32(ec->tx_coalesce_usecs);
++	vi->ctrl->coal_tx.tx_max_packets = cpu_to_le32(ec->tx_max_coalesced_frames);
++	sg_init_one(&sgs_tx, &vi->ctrl->coal_tx, sizeof(vi->ctrl->coal_tx));
+ 
+ 	if (!virtnet_send_command(vi, VIRTIO_NET_CTRL_NOTF_COAL,
+ 				  VIRTIO_NET_CTRL_NOTF_COAL_TX_SET,
+@@ -2949,9 +2949,9 @@ static int virtnet_send_notf_coal_cmds(struct virtnet_info *vi,
+ 	vi->tx_usecs = ec->tx_coalesce_usecs;
+ 	vi->tx_max_packets = ec->tx_max_coalesced_frames;
+ 
+-	coal_rx.rx_usecs = cpu_to_le32(ec->rx_coalesce_usecs);
+-	coal_rx.rx_max_packets = cpu_to_le32(ec->rx_max_coalesced_frames);
+-	sg_init_one(&sgs_rx, &coal_rx, sizeof(coal_rx));
++	vi->ctrl->coal_rx.rx_usecs = cpu_to_le32(ec->rx_coalesce_usecs);
++	vi->ctrl->coal_rx.rx_max_packets = cpu_to_le32(ec->rx_max_coalesced_frames);
++	sg_init_one(&sgs_rx, &vi->ctrl->coal_rx, sizeof(vi->ctrl->coal_rx));
+ 
+ 	if (!virtnet_send_command(vi, VIRTIO_NET_CTRL_NOTF_COAL,
+ 				  VIRTIO_NET_CTRL_NOTF_COAL_RX_SET,
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/d3.c b/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
+index d75fec8d0afd4..cbba850094187 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
+@@ -2729,17 +2729,13 @@ static bool iwl_mvm_wait_d3_notif(struct iwl_notif_wait_data *notif_wait,
+ 		if (wowlan_info_ver < 2) {
+ 			struct iwl_wowlan_info_notif_v1 *notif_v1 = (void *)pkt->data;
+ 
+-			notif = kmemdup(notif_v1,
+-					offsetofend(struct iwl_wowlan_info_notif,
+-						    received_beacons),
+-					GFP_ATOMIC);
+-
++			notif = kmemdup(notif_v1, sizeof(*notif), GFP_ATOMIC);
+ 			if (!notif)
+ 				return false;
+ 
+ 			notif->tid_tear_down = notif_v1->tid_tear_down;
+ 			notif->station_id = notif_v1->station_id;
+-
++			memset_after(notif, 0, station_id);
+ 		} else {
+ 			notif = (void *)pkt->data;
+ 		}
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/mac.c b/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
+index eafa0f204c1f8..12f7bcec53ae1 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
+@@ -919,7 +919,10 @@ void mt7615_mac_sta_poll(struct mt7615_dev *dev)
+ 
+ 		msta = list_first_entry(&sta_poll_list, struct mt7615_sta,
+ 					poll_list);
++
++		spin_lock_bh(&dev->sta_poll_lock);
+ 		list_del_init(&msta->poll_list);
++		spin_unlock_bh(&dev->sta_poll_lock);
+ 
+ 		addr = mt7615_mac_wtbl_addr(dev, msta->wcid.idx) + 19 * 4;
+ 
+diff --git a/drivers/net/wireless/realtek/rtw88/mac80211.c b/drivers/net/wireless/realtek/rtw88/mac80211.c
+index e29ca5dcf105b..0bf6882d18f14 100644
+--- a/drivers/net/wireless/realtek/rtw88/mac80211.c
++++ b/drivers/net/wireless/realtek/rtw88/mac80211.c
+@@ -88,15 +88,6 @@ static int rtw_ops_config(struct ieee80211_hw *hw, u32 changed)
+ 		}
+ 	}
+ 
+-	if (changed & IEEE80211_CONF_CHANGE_PS) {
+-		if (hw->conf.flags & IEEE80211_CONF_PS) {
+-			rtwdev->ps_enabled = true;
+-		} else {
+-			rtwdev->ps_enabled = false;
+-			rtw_leave_lps(rtwdev);
+-		}
+-	}
+-
+ 	if (changed & IEEE80211_CONF_CHANGE_CHANNEL)
+ 		rtw_set_channel(rtwdev);
+ 
+@@ -206,6 +197,7 @@ static int rtw_ops_add_interface(struct ieee80211_hw *hw,
+ 	rtwvif->bcn_ctrl = bcn_ctrl;
+ 	config |= PORT_SET_BCN_CTRL;
+ 	rtw_vif_port_config(rtwdev, rtwvif, config);
++	rtw_recalc_lps(rtwdev, vif);
+ 
+ 	mutex_unlock(&rtwdev->mutex);
+ 
+@@ -236,6 +228,7 @@ static void rtw_ops_remove_interface(struct ieee80211_hw *hw,
+ 	rtwvif->bcn_ctrl = 0;
+ 	config |= PORT_SET_BCN_CTRL;
+ 	rtw_vif_port_config(rtwdev, rtwvif, config);
++	rtw_recalc_lps(rtwdev, NULL);
+ 
+ 	mutex_unlock(&rtwdev->mutex);
+ }
+@@ -428,6 +421,9 @@ static void rtw_ops_bss_info_changed(struct ieee80211_hw *hw,
+ 	if (changed & BSS_CHANGED_ERP_SLOT)
+ 		rtw_conf_tx(rtwdev, rtwvif);
+ 
++	if (changed & BSS_CHANGED_PS)
++		rtw_recalc_lps(rtwdev, NULL);
++
+ 	rtw_vif_port_config(rtwdev, rtwvif, config);
+ 
+ 	mutex_unlock(&rtwdev->mutex);
+diff --git a/drivers/net/wireless/realtek/rtw88/main.c b/drivers/net/wireless/realtek/rtw88/main.c
+index 76f7aadef77c5..fc51acc9a5998 100644
+--- a/drivers/net/wireless/realtek/rtw88/main.c
++++ b/drivers/net/wireless/realtek/rtw88/main.c
+@@ -250,8 +250,8 @@ static void rtw_watch_dog_work(struct work_struct *work)
+ 	 * more than two stations associated to the AP, then we can not enter
+ 	 * lps, because fw does not handle the overlapped beacon interval
+ 	 *
+-	 * mac80211 should iterate vifs and determine if driver can enter
+-	 * ps by passing IEEE80211_CONF_PS to us, all we need to do is to
++	 * rtw_recalc_lps() iterate vifs and determine if driver can enter
++	 * ps by vif->type and vif->cfg.ps, all we need to do here is to
+ 	 * get that vif and check if device is having traffic more than the
+ 	 * threshold.
+ 	 */
+diff --git a/drivers/net/wireless/realtek/rtw88/ps.c b/drivers/net/wireless/realtek/rtw88/ps.c
+index 996365575f44f..53933fb38a330 100644
+--- a/drivers/net/wireless/realtek/rtw88/ps.c
++++ b/drivers/net/wireless/realtek/rtw88/ps.c
+@@ -299,3 +299,46 @@ void rtw_leave_lps_deep(struct rtw_dev *rtwdev)
+ 
+ 	__rtw_leave_lps_deep(rtwdev);
+ }
++
++struct rtw_vif_recalc_lps_iter_data {
++	struct rtw_dev *rtwdev;
++	struct ieee80211_vif *found_vif;
++	int count;
++};
++
++static void __rtw_vif_recalc_lps(struct rtw_vif_recalc_lps_iter_data *data,
++				 struct ieee80211_vif *vif)
++{
++	if (data->count < 0)
++		return;
++
++	if (vif->type != NL80211_IFTYPE_STATION) {
++		data->count = -1;
++		return;
++	}
++
++	data->count++;
++	data->found_vif = vif;
++}
++
++static void rtw_vif_recalc_lps_iter(void *data, u8 *mac,
++				    struct ieee80211_vif *vif)
++{
++	__rtw_vif_recalc_lps(data, vif);
++}
++
++void rtw_recalc_lps(struct rtw_dev *rtwdev, struct ieee80211_vif *new_vif)
++{
++	struct rtw_vif_recalc_lps_iter_data data = { .rtwdev = rtwdev };
++
++	if (new_vif)
++		__rtw_vif_recalc_lps(&data, new_vif);
++	rtw_iterate_vifs(rtwdev, rtw_vif_recalc_lps_iter, &data);
++
++	if (data.count == 1 && data.found_vif->cfg.ps) {
++		rtwdev->ps_enabled = true;
++	} else {
++		rtwdev->ps_enabled = false;
++		rtw_leave_lps(rtwdev);
++	}
++}
+diff --git a/drivers/net/wireless/realtek/rtw88/ps.h b/drivers/net/wireless/realtek/rtw88/ps.h
+index c194386f6db53..5ae83d2526cfd 100644
+--- a/drivers/net/wireless/realtek/rtw88/ps.h
++++ b/drivers/net/wireless/realtek/rtw88/ps.h
+@@ -23,4 +23,6 @@ void rtw_enter_lps(struct rtw_dev *rtwdev, u8 port_id);
+ void rtw_leave_lps(struct rtw_dev *rtwdev);
+ void rtw_leave_lps_deep(struct rtw_dev *rtwdev);
+ enum rtw_lps_deep_mode rtw_get_lps_deep_mode(struct rtw_dev *rtwdev);
++void rtw_recalc_lps(struct rtw_dev *rtwdev, struct ieee80211_vif *new_vif);
++
+ #endif
+diff --git a/drivers/net/wireless/realtek/rtw89/mac80211.c b/drivers/net/wireless/realtek/rtw89/mac80211.c
+index d43281f7335b1..3dc988cac4aeb 100644
+--- a/drivers/net/wireless/realtek/rtw89/mac80211.c
++++ b/drivers/net/wireless/realtek/rtw89/mac80211.c
+@@ -79,15 +79,6 @@ static int rtw89_ops_config(struct ieee80211_hw *hw, u32 changed)
+ 	    !(hw->conf.flags & IEEE80211_CONF_IDLE))
+ 		rtw89_leave_ips(rtwdev);
+ 
+-	if (changed & IEEE80211_CONF_CHANGE_PS) {
+-		if (hw->conf.flags & IEEE80211_CONF_PS) {
+-			rtwdev->lps_enabled = true;
+-		} else {
+-			rtw89_leave_lps(rtwdev);
+-			rtwdev->lps_enabled = false;
+-		}
+-	}
+-
+ 	if (changed & IEEE80211_CONF_CHANGE_CHANNEL) {
+ 		rtw89_config_entity_chandef(rtwdev, RTW89_SUB_ENTITY_0,
+ 					    &hw->conf.chandef);
+@@ -147,6 +138,8 @@ static int rtw89_ops_add_interface(struct ieee80211_hw *hw,
+ 	rtw89_core_txq_init(rtwdev, vif->txq);
+ 
+ 	rtw89_btc_ntfy_role_info(rtwdev, rtwvif, NULL, BTC_ROLE_START);
++
++	rtw89_recalc_lps(rtwdev);
+ out:
+ 	mutex_unlock(&rtwdev->mutex);
+ 
+@@ -170,6 +163,8 @@ static void rtw89_ops_remove_interface(struct ieee80211_hw *hw,
+ 	rtw89_mac_remove_vif(rtwdev, rtwvif);
+ 	rtw89_core_release_bit_map(rtwdev->hw_port, rtwvif->port);
+ 	list_del_init(&rtwvif->list);
++	rtw89_recalc_lps(rtwdev);
++
+ 	mutex_unlock(&rtwdev->mutex);
+ }
+ 
+@@ -425,6 +420,9 @@ static void rtw89_ops_bss_info_changed(struct ieee80211_hw *hw,
+ 	if (changed & BSS_CHANGED_P2P_PS)
+ 		rtw89_process_p2p_ps(rtwdev, vif);
+ 
++	if (changed & BSS_CHANGED_PS)
++		rtw89_recalc_lps(rtwdev);
++
+ 	mutex_unlock(&rtwdev->mutex);
+ }
+ 
+diff --git a/drivers/net/wireless/realtek/rtw89/ps.c b/drivers/net/wireless/realtek/rtw89/ps.c
+index 40498812205ea..c1f1083d3f634 100644
+--- a/drivers/net/wireless/realtek/rtw89/ps.c
++++ b/drivers/net/wireless/realtek/rtw89/ps.c
+@@ -244,3 +244,29 @@ void rtw89_process_p2p_ps(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif)
+ 	rtw89_p2p_disable_all_noa(rtwdev, vif);
+ 	rtw89_p2p_update_noa(rtwdev, vif);
+ }
++
++void rtw89_recalc_lps(struct rtw89_dev *rtwdev)
++{
++	struct ieee80211_vif *vif, *found_vif = NULL;
++	struct rtw89_vif *rtwvif;
++	int count = 0;
++
++	rtw89_for_each_rtwvif(rtwdev, rtwvif) {
++		vif = rtwvif_to_vif(rtwvif);
++
++		if (vif->type != NL80211_IFTYPE_STATION) {
++			count = 0;
++			break;
++		}
++
++		count++;
++		found_vif = vif;
++	}
++
++	if (count == 1 && found_vif->cfg.ps) {
++		rtwdev->lps_enabled = true;
++	} else {
++		rtw89_leave_lps(rtwdev);
++		rtwdev->lps_enabled = false;
++	}
++}
+diff --git a/drivers/net/wireless/realtek/rtw89/ps.h b/drivers/net/wireless/realtek/rtw89/ps.h
+index 6ac1f7ea53394..374e9a358683f 100644
+--- a/drivers/net/wireless/realtek/rtw89/ps.h
++++ b/drivers/net/wireless/realtek/rtw89/ps.h
+@@ -14,5 +14,6 @@ void rtw89_enter_ips(struct rtw89_dev *rtwdev);
+ void rtw89_leave_ips(struct rtw89_dev *rtwdev);
+ void rtw89_set_coex_ctrl_lps(struct rtw89_dev *rtwdev, bool btc_ctrl);
+ void rtw89_process_p2p_ps(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif);
++void rtw89_recalc_lps(struct rtw89_dev *rtwdev);
+ 
+ #endif
+diff --git a/drivers/pinctrl/meson/pinctrl-meson-axg.c b/drivers/pinctrl/meson/pinctrl-meson-axg.c
+index 7bfecdfba1779..d249a035c2b9b 100644
+--- a/drivers/pinctrl/meson/pinctrl-meson-axg.c
++++ b/drivers/pinctrl/meson/pinctrl-meson-axg.c
+@@ -400,6 +400,7 @@ static struct meson_pmx_group meson_axg_periphs_groups[] = {
+ 	GPIO_GROUP(GPIOA_15),
+ 	GPIO_GROUP(GPIOA_16),
+ 	GPIO_GROUP(GPIOA_17),
++	GPIO_GROUP(GPIOA_18),
+ 	GPIO_GROUP(GPIOA_19),
+ 	GPIO_GROUP(GPIOA_20),
+ 
+diff --git a/drivers/platform/surface/aggregator/controller.c b/drivers/platform/surface/aggregator/controller.c
+index 535581c0471c5..7fc602e01487d 100644
+--- a/drivers/platform/surface/aggregator/controller.c
++++ b/drivers/platform/surface/aggregator/controller.c
+@@ -825,7 +825,7 @@ static int ssam_cplt_init(struct ssam_cplt *cplt, struct device *dev)
+ 
+ 	cplt->dev = dev;
+ 
+-	cplt->wq = create_workqueue(SSAM_CPLT_WQ_NAME);
++	cplt->wq = alloc_workqueue(SSAM_CPLT_WQ_NAME, WQ_UNBOUND | WQ_MEM_RECLAIM, 0);
+ 	if (!cplt->wq)
+ 		return -ENOMEM;
+ 
+diff --git a/drivers/platform/surface/surface_aggregator_tabletsw.c b/drivers/platform/surface/surface_aggregator_tabletsw.c
+index 9fed800c7cc09..a18e9fc7896b3 100644
+--- a/drivers/platform/surface/surface_aggregator_tabletsw.c
++++ b/drivers/platform/surface/surface_aggregator_tabletsw.c
+@@ -201,6 +201,7 @@ enum ssam_kip_cover_state {
+ 	SSAM_KIP_COVER_STATE_LAPTOP        = 0x03,
+ 	SSAM_KIP_COVER_STATE_FOLDED_CANVAS = 0x04,
+ 	SSAM_KIP_COVER_STATE_FOLDED_BACK   = 0x05,
++	SSAM_KIP_COVER_STATE_BOOK          = 0x06,
+ };
+ 
+ static const char *ssam_kip_cover_state_name(struct ssam_tablet_sw *sw, u32 state)
+@@ -221,6 +222,9 @@ static const char *ssam_kip_cover_state_name(struct ssam_tablet_sw *sw, u32 stat
+ 	case SSAM_KIP_COVER_STATE_FOLDED_BACK:
+ 		return "folded-back";
+ 
++	case SSAM_KIP_COVER_STATE_BOOK:
++		return "book";
++
+ 	default:
+ 		dev_warn(&sw->sdev->dev, "unknown KIP cover state: %u\n", state);
+ 		return "<unknown>";
+@@ -233,6 +237,7 @@ static bool ssam_kip_cover_state_is_tablet_mode(struct ssam_tablet_sw *sw, u32 s
+ 	case SSAM_KIP_COVER_STATE_DISCONNECTED:
+ 	case SSAM_KIP_COVER_STATE_FOLDED_CANVAS:
+ 	case SSAM_KIP_COVER_STATE_FOLDED_BACK:
++	case SSAM_KIP_COVER_STATE_BOOK:
+ 		return true;
+ 
+ 	case SSAM_KIP_COVER_STATE_CLOSED:
+diff --git a/drivers/s390/block/dasd_ioctl.c b/drivers/s390/block/dasd_ioctl.c
+index 9327dcdd6e5e4..8fca725b3daec 100644
+--- a/drivers/s390/block/dasd_ioctl.c
++++ b/drivers/s390/block/dasd_ioctl.c
+@@ -552,10 +552,10 @@ static int __dasd_ioctl_information(struct dasd_block *block,
+ 
+ 	memcpy(dasd_info->type, base->discipline->name, 4);
+ 
+-	spin_lock_irqsave(&block->queue_lock, flags);
++	spin_lock_irqsave(get_ccwdev_lock(base->cdev), flags);
+ 	list_for_each(l, &base->ccw_queue)
+ 		dasd_info->chanq_len++;
+-	spin_unlock_irqrestore(&block->queue_lock, flags);
++	spin_unlock_irqrestore(get_ccwdev_lock(base->cdev), flags);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/soc/qcom/icc-bwmon.c b/drivers/soc/qcom/icc-bwmon.c
+index d07be3700db60..7829549993c92 100644
+--- a/drivers/soc/qcom/icc-bwmon.c
++++ b/drivers/soc/qcom/icc-bwmon.c
+@@ -603,12 +603,12 @@ static int bwmon_probe(struct platform_device *pdev)
+ 	bwmon->max_bw_kbps = UINT_MAX;
+ 	opp = dev_pm_opp_find_bw_floor(dev, &bwmon->max_bw_kbps, 0);
+ 	if (IS_ERR(opp))
+-		return dev_err_probe(dev, ret, "failed to find max peak bandwidth\n");
++		return dev_err_probe(dev, PTR_ERR(opp), "failed to find max peak bandwidth\n");
+ 
+ 	bwmon->min_bw_kbps = 0;
+ 	opp = dev_pm_opp_find_bw_ceil(dev, &bwmon->min_bw_kbps, 0);
+ 	if (IS_ERR(opp))
+-		return dev_err_probe(dev, ret, "failed to find min peak bandwidth\n");
++		return dev_err_probe(dev, PTR_ERR(opp), "failed to find min peak bandwidth\n");
+ 
+ 	bwmon->dev = dev;
+ 
+diff --git a/drivers/soc/qcom/ramp_controller.c b/drivers/soc/qcom/ramp_controller.c
+index dc74d2a19de2b..5e3ba0be09035 100644
+--- a/drivers/soc/qcom/ramp_controller.c
++++ b/drivers/soc/qcom/ramp_controller.c
+@@ -296,7 +296,7 @@ static int qcom_ramp_controller_probe(struct platform_device *pdev)
+ 		return -ENOMEM;
+ 
+ 	qrc->desc = device_get_match_data(&pdev->dev);
+-	if (!qrc)
++	if (!qrc->desc)
+ 		return -EINVAL;
+ 
+ 	qrc->regmap = devm_regmap_init_mmio(&pdev->dev, base, &qrc_regmap_config);
+diff --git a/drivers/soc/qcom/rmtfs_mem.c b/drivers/soc/qcom/rmtfs_mem.c
+index 0d31377f178d5..d4bda086c141a 100644
+--- a/drivers/soc/qcom/rmtfs_mem.c
++++ b/drivers/soc/qcom/rmtfs_mem.c
+@@ -234,6 +234,7 @@ static int qcom_rmtfs_mem_probe(struct platform_device *pdev)
+ 		num_vmids = 0;
+ 	} else if (num_vmids < 0) {
+ 		dev_err(&pdev->dev, "failed to count qcom,vmid elements: %d\n", num_vmids);
++		ret = num_vmids;
+ 		goto remove_cdev;
+ 	} else if (num_vmids > NUM_MAX_VMIDS) {
+ 		dev_warn(&pdev->dev,
+diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
+index f93544f6d7961..0dd4363ebac8f 100644
+--- a/drivers/soc/qcom/rpmh-rsc.c
++++ b/drivers/soc/qcom/rpmh-rsc.c
+@@ -1073,7 +1073,7 @@ static int rpmh_rsc_probe(struct platform_device *pdev)
+ 	drv->ver.minor = rsc_id & (MINOR_VER_MASK << MINOR_VER_SHIFT);
+ 	drv->ver.minor >>= MINOR_VER_SHIFT;
+ 
+-	if (drv->ver.major == 3 && drv->ver.minor >= 0)
++	if (drv->ver.major == 3)
+ 		drv->regs = rpmh_rsc_reg_offset_ver_3_0;
+ 	else
+ 		drv->regs = rpmh_rsc_reg_offset_ver_2_7;
+diff --git a/drivers/soundwire/stream.c b/drivers/soundwire/stream.c
+index 8c6da1739e3d1..3c909853aaf89 100644
+--- a/drivers/soundwire/stream.c
++++ b/drivers/soundwire/stream.c
+@@ -2019,8 +2019,10 @@ int sdw_stream_add_slave(struct sdw_slave *slave,
+ 
+ skip_alloc_master_rt:
+ 	s_rt = sdw_slave_rt_find(slave, stream);
+-	if (s_rt)
++	if (s_rt) {
++		alloc_slave_rt = false;
+ 		goto skip_alloc_slave_rt;
++	}
+ 
+ 	s_rt = sdw_slave_rt_alloc(slave, m_rt);
+ 	if (!s_rt) {
+diff --git a/drivers/spi/spi-mt65xx.c b/drivers/spi/spi-mt65xx.c
+index 9eab6c20dbc56..6e95efb50acbc 100644
+--- a/drivers/spi/spi-mt65xx.c
++++ b/drivers/spi/spi-mt65xx.c
+@@ -1275,6 +1275,9 @@ static int mtk_spi_remove(struct platform_device *pdev)
+ 	struct mtk_spi *mdata = spi_master_get_devdata(master);
+ 	int ret;
+ 
++	if (mdata->use_spimem && !completion_done(&mdata->spimem_done))
++		complete(&mdata->spimem_done);
++
+ 	ret = pm_runtime_resume_and_get(&pdev->dev);
+ 	if (ret < 0)
+ 		return ret;
+diff --git a/drivers/spi/spi-qup.c b/drivers/spi/spi-qup.c
+index 205e54f157b4a..fb6b7738b4f55 100644
+--- a/drivers/spi/spi-qup.c
++++ b/drivers/spi/spi-qup.c
+@@ -1029,23 +1029,8 @@ static int spi_qup_probe(struct platform_device *pdev)
+ 		return -ENXIO;
+ 	}
+ 
+-	ret = clk_prepare_enable(cclk);
+-	if (ret) {
+-		dev_err(dev, "cannot enable core clock\n");
+-		return ret;
+-	}
+-
+-	ret = clk_prepare_enable(iclk);
+-	if (ret) {
+-		clk_disable_unprepare(cclk);
+-		dev_err(dev, "cannot enable iface clock\n");
+-		return ret;
+-	}
+-
+ 	master = spi_alloc_master(dev, sizeof(struct spi_qup));
+ 	if (!master) {
+-		clk_disable_unprepare(cclk);
+-		clk_disable_unprepare(iclk);
+ 		dev_err(dev, "cannot allocate master\n");
+ 		return -ENOMEM;
+ 	}
+@@ -1093,6 +1078,19 @@ static int spi_qup_probe(struct platform_device *pdev)
+ 	spin_lock_init(&controller->lock);
+ 	init_completion(&controller->done);
+ 
++	ret = clk_prepare_enable(cclk);
++	if (ret) {
++		dev_err(dev, "cannot enable core clock\n");
++		goto error_dma;
++	}
++
++	ret = clk_prepare_enable(iclk);
++	if (ret) {
++		clk_disable_unprepare(cclk);
++		dev_err(dev, "cannot enable iface clock\n");
++		goto error_dma;
++	}
++
+ 	iomode = readl_relaxed(base + QUP_IO_M_MODES);
+ 
+ 	size = QUP_IO_M_OUTPUT_BLOCK_SIZE(iomode);
+@@ -1122,7 +1120,7 @@ static int spi_qup_probe(struct platform_device *pdev)
+ 	ret = spi_qup_set_state(controller, QUP_STATE_RESET);
+ 	if (ret) {
+ 		dev_err(dev, "cannot set RESET state\n");
+-		goto error_dma;
++		goto error_clk;
+ 	}
+ 
+ 	writel_relaxed(0, base + QUP_OPERATIONAL);
+@@ -1146,7 +1144,7 @@ static int spi_qup_probe(struct platform_device *pdev)
+ 	ret = devm_request_irq(dev, irq, spi_qup_qup_irq,
+ 			       IRQF_TRIGGER_HIGH, pdev->name, controller);
+ 	if (ret)
+-		goto error_dma;
++		goto error_clk;
+ 
+ 	pm_runtime_set_autosuspend_delay(dev, MSEC_PER_SEC);
+ 	pm_runtime_use_autosuspend(dev);
+@@ -1161,11 +1159,12 @@ static int spi_qup_probe(struct platform_device *pdev)
+ 
+ disable_pm:
+ 	pm_runtime_disable(&pdev->dev);
++error_clk:
++	clk_disable_unprepare(cclk);
++	clk_disable_unprepare(iclk);
+ error_dma:
+ 	spi_qup_release_dma(master);
+ error:
+-	clk_disable_unprepare(cclk);
+-	clk_disable_unprepare(iclk);
+ 	spi_master_put(master);
+ 	return ret;
+ }
+diff --git a/drivers/staging/rtl8192e/rtl8192e/rtl_core.c b/drivers/staging/rtl8192e/rtl8192e/rtl_core.c
+index 92552ce30cd58..72d76dc7df781 100644
+--- a/drivers/staging/rtl8192e/rtl8192e/rtl_core.c
++++ b/drivers/staging/rtl8192e/rtl8192e/rtl_core.c
+@@ -48,9 +48,9 @@ static const struct rtl819x_ops rtl819xp_ops = {
+ };
+ 
+ static struct pci_device_id rtl8192_pci_id_tbl[] = {
+-	{PCI_DEVICE(0x10ec, 0x8192)},
+-	{PCI_DEVICE(0x07aa, 0x0044)},
+-	{PCI_DEVICE(0x07aa, 0x0047)},
++	{RTL_PCI_DEVICE(0x10ec, 0x8192, rtl819xp_ops)},
++	{RTL_PCI_DEVICE(0x07aa, 0x0044, rtl819xp_ops)},
++	{RTL_PCI_DEVICE(0x07aa, 0x0047, rtl819xp_ops)},
+ 	{}
+ };
+ 
+diff --git a/drivers/staging/rtl8192e/rtl8192e/rtl_core.h b/drivers/staging/rtl8192e/rtl8192e/rtl_core.h
+index bbc1c4bac3588..fd96eef90c7fa 100644
+--- a/drivers/staging/rtl8192e/rtl8192e/rtl_core.h
++++ b/drivers/staging/rtl8192e/rtl8192e/rtl_core.h
+@@ -55,6 +55,11 @@
+ #define IS_HARDWARE_TYPE_8192SE(_priv)		\
+ 	(((struct r8192_priv *)rtllib_priv(dev))->card_8192 == NIC_8192SE)
+ 
++#define RTL_PCI_DEVICE(vend, dev, cfg) \
++	.vendor = (vend), .device = (dev), \
++	.subvendor = PCI_ANY_ID, .subdevice = PCI_ANY_ID, \
++	.driver_data = (kernel_ulong_t)&(cfg)
++
+ #define TOTAL_CAM_ENTRY		32
+ #define CAM_CONTENT_COUNT	8
+ 
+diff --git a/drivers/tee/amdtee/amdtee_if.h b/drivers/tee/amdtee/amdtee_if.h
+index ff48c3e473750..e2014e21530ac 100644
+--- a/drivers/tee/amdtee/amdtee_if.h
++++ b/drivers/tee/amdtee/amdtee_if.h
+@@ -118,16 +118,18 @@ struct tee_cmd_unmap_shared_mem {
+ 
+ /**
+  * struct tee_cmd_load_ta - load Trusted Application (TA) binary into TEE
+- * @low_addr:    [in] bits [31:0] of the physical address of the TA binary
+- * @hi_addr:     [in] bits [63:32] of the physical address of the TA binary
+- * @size:        [in] size of TA binary in bytes
+- * @ta_handle:   [out] return handle of the loaded TA
++ * @low_addr:       [in] bits [31:0] of the physical address of the TA binary
++ * @hi_addr:        [in] bits [63:32] of the physical address of the TA binary
++ * @size:           [in] size of TA binary in bytes
++ * @ta_handle:      [out] return handle of the loaded TA
++ * @return_origin:  [out] origin of return code after TEE processing
+  */
+ struct tee_cmd_load_ta {
+ 	u32 low_addr;
+ 	u32 hi_addr;
+ 	u32 size;
+ 	u32 ta_handle;
++	u32 return_origin;
+ };
+ 
+ /**
+diff --git a/drivers/tee/amdtee/call.c b/drivers/tee/amdtee/call.c
+index cec6e70f0ac92..8a02c5fe33a6b 100644
+--- a/drivers/tee/amdtee/call.c
++++ b/drivers/tee/amdtee/call.c
+@@ -423,19 +423,23 @@ int handle_load_ta(void *data, u32 size, struct tee_ioctl_open_session_arg *arg)
+ 	if (ret) {
+ 		arg->ret_origin = TEEC_ORIGIN_COMMS;
+ 		arg->ret = TEEC_ERROR_COMMUNICATION;
+-	} else if (arg->ret == TEEC_SUCCESS) {
+-		ret = get_ta_refcount(load_cmd.ta_handle);
+-		if (!ret) {
+-			arg->ret_origin = TEEC_ORIGIN_COMMS;
+-			arg->ret = TEEC_ERROR_OUT_OF_MEMORY;
+-
+-			/* Unload the TA on error */
+-			unload_cmd.ta_handle = load_cmd.ta_handle;
+-			psp_tee_process_cmd(TEE_CMD_ID_UNLOAD_TA,
+-					    (void *)&unload_cmd,
+-					    sizeof(unload_cmd), &ret);
+-		} else {
+-			set_session_id(load_cmd.ta_handle, 0, &arg->session);
++	} else {
++		arg->ret_origin = load_cmd.return_origin;
++
++		if (arg->ret == TEEC_SUCCESS) {
++			ret = get_ta_refcount(load_cmd.ta_handle);
++			if (!ret) {
++				arg->ret_origin = TEEC_ORIGIN_COMMS;
++				arg->ret = TEEC_ERROR_OUT_OF_MEMORY;
++
++				/* Unload the TA on error */
++				unload_cmd.ta_handle = load_cmd.ta_handle;
++				psp_tee_process_cmd(TEE_CMD_ID_UNLOAD_TA,
++						    (void *)&unload_cmd,
++						    sizeof(unload_cmd), &ret);
++			} else {
++				set_session_id(load_cmd.ta_handle, 0, &arg->session);
++			}
+ 		}
+ 	}
+ 	mutex_unlock(&ta_refcount_mutex);
+diff --git a/drivers/usb/core/buffer.c b/drivers/usb/core/buffer.c
+index fbb087b728dc9..268ccbec88f95 100644
+--- a/drivers/usb/core/buffer.c
++++ b/drivers/usb/core/buffer.c
+@@ -172,3 +172,44 @@ void hcd_buffer_free(
+ 	}
+ 	dma_free_coherent(hcd->self.sysdev, size, addr, dma);
+ }
++
++void *hcd_buffer_alloc_pages(struct usb_hcd *hcd,
++		size_t size, gfp_t mem_flags, dma_addr_t *dma)
++{
++	if (size == 0)
++		return NULL;
++
++	if (hcd->localmem_pool)
++		return gen_pool_dma_alloc_align(hcd->localmem_pool,
++				size, dma, PAGE_SIZE);
++
++	/* some USB hosts just use PIO */
++	if (!hcd_uses_dma(hcd)) {
++		*dma = DMA_MAPPING_ERROR;
++		return (void *)__get_free_pages(mem_flags,
++				get_order(size));
++	}
++
++	return dma_alloc_coherent(hcd->self.sysdev,
++			size, dma, mem_flags);
++}
++
++void hcd_buffer_free_pages(struct usb_hcd *hcd,
++		size_t size, void *addr, dma_addr_t dma)
++{
++	if (!addr)
++		return;
++
++	if (hcd->localmem_pool) {
++		gen_pool_free(hcd->localmem_pool,
++				(unsigned long)addr, size);
++		return;
++	}
++
++	if (!hcd_uses_dma(hcd)) {
++		free_pages((unsigned long)addr, get_order(size));
++		return;
++	}
++
++	dma_free_coherent(hcd->self.sysdev, size, addr, dma);
++}
+diff --git a/drivers/usb/core/devio.c b/drivers/usb/core/devio.c
+index e501a03d6c700..fcf68818e9992 100644
+--- a/drivers/usb/core/devio.c
++++ b/drivers/usb/core/devio.c
+@@ -186,6 +186,7 @@ static int connected(struct usb_dev_state *ps)
+ static void dec_usb_memory_use_count(struct usb_memory *usbm, int *count)
+ {
+ 	struct usb_dev_state *ps = usbm->ps;
++	struct usb_hcd *hcd = bus_to_hcd(ps->dev->bus);
+ 	unsigned long flags;
+ 
+ 	spin_lock_irqsave(&ps->lock, flags);
+@@ -194,8 +195,8 @@ static void dec_usb_memory_use_count(struct usb_memory *usbm, int *count)
+ 		list_del(&usbm->memlist);
+ 		spin_unlock_irqrestore(&ps->lock, flags);
+ 
+-		usb_free_coherent(ps->dev, usbm->size, usbm->mem,
+-				usbm->dma_handle);
++		hcd_buffer_free_pages(hcd, usbm->size,
++				usbm->mem, usbm->dma_handle);
+ 		usbfs_decrease_memory_usage(
+ 			usbm->size + sizeof(struct usb_memory));
+ 		kfree(usbm);
+@@ -234,7 +235,7 @@ static int usbdev_mmap(struct file *file, struct vm_area_struct *vma)
+ 	size_t size = vma->vm_end - vma->vm_start;
+ 	void *mem;
+ 	unsigned long flags;
+-	dma_addr_t dma_handle;
++	dma_addr_t dma_handle = DMA_MAPPING_ERROR;
+ 	int ret;
+ 
+ 	ret = usbfs_increase_memory_usage(size + sizeof(struct usb_memory));
+@@ -247,8 +248,8 @@ static int usbdev_mmap(struct file *file, struct vm_area_struct *vma)
+ 		goto error_decrease_mem;
+ 	}
+ 
+-	mem = usb_alloc_coherent(ps->dev, size, GFP_USER | __GFP_NOWARN,
+-			&dma_handle);
++	mem = hcd_buffer_alloc_pages(hcd,
++			size, GFP_USER | __GFP_NOWARN, &dma_handle);
+ 	if (!mem) {
+ 		ret = -ENOMEM;
+ 		goto error_free_usbm;
+@@ -264,7 +265,14 @@ static int usbdev_mmap(struct file *file, struct vm_area_struct *vma)
+ 	usbm->vma_use_count = 1;
+ 	INIT_LIST_HEAD(&usbm->memlist);
+ 
+-	if (hcd->localmem_pool || !hcd_uses_dma(hcd)) {
++	/*
++	 * In DMA-unavailable cases, hcd_buffer_alloc_pages allocates
++	 * normal pages and assigns DMA_MAPPING_ERROR to dma_handle. Check
++	 * whether we are in such cases, and then use remap_pfn_range (or
++	 * dma_mmap_coherent) to map normal (or DMA) pages into the user
++	 * space, respectively.
++	 */
++	if (dma_handle == DMA_MAPPING_ERROR) {
+ 		if (remap_pfn_range(vma, vma->vm_start,
+ 				    virt_to_phys(usbm->mem) >> PAGE_SHIFT,
+ 				    size, vma->vm_page_prot) < 0) {
+diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
+index 97a16f7eb8941..0b228fbb2a68b 100644
+--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
++++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
+@@ -3323,10 +3323,10 @@ static void mlx5_vdpa_dev_del(struct vdpa_mgmt_dev *v_mdev, struct vdpa_device *
+ 	mlx5_vdpa_remove_debugfs(ndev->debugfs);
+ 	ndev->debugfs = NULL;
+ 	unregister_link_notifier(ndev);
++	_vdpa_unregister_device(dev);
+ 	wq = mvdev->wq;
+ 	mvdev->wq = NULL;
+ 	destroy_workqueue(wq);
+-	_vdpa_unregister_device(dev);
+ 	mgtdev->ndev = NULL;
+ }
+ 
+diff --git a/drivers/vdpa/vdpa_user/vduse_dev.c b/drivers/vdpa/vdpa_user/vduse_dev.c
+index 0c3b48616a9f3..695b20b17e010 100644
+--- a/drivers/vdpa/vdpa_user/vduse_dev.c
++++ b/drivers/vdpa/vdpa_user/vduse_dev.c
+@@ -1443,6 +1443,9 @@ static bool vduse_validate_config(struct vduse_dev_config *config)
+ 	if (config->vq_num > 0xffff)
+ 		return false;
+ 
++	if (!config->name[0])
++		return false;
++
+ 	if (!device_is_allowed(config->device_id))
+ 		return false;
+ 
+diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
+index 74c7d1f978b75..779fc44677162 100644
+--- a/drivers/vhost/vdpa.c
++++ b/drivers/vhost/vdpa.c
+@@ -572,7 +572,14 @@ static long vhost_vdpa_vring_ioctl(struct vhost_vdpa *v, unsigned int cmd,
+ 		if (r)
+ 			return r;
+ 
+-		vq->last_avail_idx = vq_state.split.avail_index;
++		if (vhost_has_feature(vq, VIRTIO_F_RING_PACKED)) {
++			vq->last_avail_idx = vq_state.packed.last_avail_idx |
++					     (vq_state.packed.last_avail_counter << 15);
++			vq->last_used_idx = vq_state.packed.last_used_idx |
++					    (vq_state.packed.last_used_counter << 15);
++		} else {
++			vq->last_avail_idx = vq_state.split.avail_index;
++		}
+ 		break;
+ 	}
+ 
+@@ -590,9 +597,15 @@ static long vhost_vdpa_vring_ioctl(struct vhost_vdpa *v, unsigned int cmd,
+ 		break;
+ 
+ 	case VHOST_SET_VRING_BASE:
+-		vq_state.split.avail_index = vq->last_avail_idx;
+-		if (ops->set_vq_state(vdpa, idx, &vq_state))
+-			r = -EINVAL;
++		if (vhost_has_feature(vq, VIRTIO_F_RING_PACKED)) {
++			vq_state.packed.last_avail_idx = vq->last_avail_idx & 0x7fff;
++			vq_state.packed.last_avail_counter = !!(vq->last_avail_idx & 0x8000);
++			vq_state.packed.last_used_idx = vq->last_used_idx & 0x7fff;
++			vq_state.packed.last_used_counter = !!(vq->last_used_idx & 0x8000);
++		} else {
++			vq_state.split.avail_index = vq->last_avail_idx;
++		}
++		r = ops->set_vq_state(vdpa, idx, &vq_state);
+ 		break;
+ 
+ 	case VHOST_SET_VRING_CALL:
+diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
+index f11bdbe4c2c5f..f64efda48f21c 100644
+--- a/drivers/vhost/vhost.c
++++ b/drivers/vhost/vhost.c
+@@ -1633,17 +1633,25 @@ long vhost_vring_ioctl(struct vhost_dev *d, unsigned int ioctl, void __user *arg
+ 			r = -EFAULT;
+ 			break;
+ 		}
+-		if (s.num > 0xffff) {
+-			r = -EINVAL;
+-			break;
++		if (vhost_has_feature(vq, VIRTIO_F_RING_PACKED)) {
++			vq->last_avail_idx = s.num & 0xffff;
++			vq->last_used_idx = (s.num >> 16) & 0xffff;
++		} else {
++			if (s.num > 0xffff) {
++				r = -EINVAL;
++				break;
++			}
++			vq->last_avail_idx = s.num;
+ 		}
+-		vq->last_avail_idx = s.num;
+ 		/* Forget the cached index value. */
+ 		vq->avail_idx = vq->last_avail_idx;
+ 		break;
+ 	case VHOST_GET_VRING_BASE:
+ 		s.index = idx;
+-		s.num = vq->last_avail_idx;
++		if (vhost_has_feature(vq, VIRTIO_F_RING_PACKED))
++			s.num = (u32)vq->last_avail_idx | ((u32)vq->last_used_idx << 16);
++		else
++			s.num = vq->last_avail_idx;
+ 		if (copy_to_user(argp, &s, sizeof s))
+ 			r = -EFAULT;
+ 		break;
+diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h
+index 1647b750169c7..6f73f29d59791 100644
+--- a/drivers/vhost/vhost.h
++++ b/drivers/vhost/vhost.h
+@@ -85,13 +85,17 @@ struct vhost_virtqueue {
+ 	/* The routine to call when the Guest pings us, or timeout. */
+ 	vhost_work_fn_t handle_kick;
+ 
+-	/* Last available index we saw. */
++	/* Last available index we saw.
++	 * Values are limited to 0x7fff, and the high bit is used as
++	 * a wrap counter when using VIRTIO_F_RING_PACKED. */
+ 	u16 last_avail_idx;
+ 
+ 	/* Caches available index value from user. */
+ 	u16 avail_idx;
+ 
+-	/* Last index we used. */
++	/* Last index we used.
++	 * Values are limited to 0x7fff, and the high bit is used as
++	 * a wrap counter when using VIRTIO_F_RING_PACKED. */
+ 	u16 last_used_idx;
+ 
+ 	/* Used flags */
+diff --git a/fs/afs/dir.c b/fs/afs/dir.c
+index a97499fd747b6..93e8b06ef76a6 100644
+--- a/fs/afs/dir.c
++++ b/fs/afs/dir.c
+@@ -1358,6 +1358,7 @@ static int afs_mkdir(struct mnt_idmap *idmap, struct inode *dir,
+ 	op->dentry	= dentry;
+ 	op->create.mode	= S_IFDIR | mode;
+ 	op->create.reason = afs_edit_dir_for_mkdir;
++	op->mtime	= current_time(dir);
+ 	op->ops		= &afs_mkdir_operation;
+ 	return afs_do_sync_operation(op);
+ }
+@@ -1661,6 +1662,7 @@ static int afs_create(struct mnt_idmap *idmap, struct inode *dir,
+ 	op->dentry	= dentry;
+ 	op->create.mode	= S_IFREG | mode;
+ 	op->create.reason = afs_edit_dir_for_create;
++	op->mtime	= current_time(dir);
+ 	op->ops		= &afs_create_operation;
+ 	return afs_do_sync_operation(op);
+ 
+@@ -1796,6 +1798,7 @@ static int afs_symlink(struct mnt_idmap *idmap, struct inode *dir,
+ 	op->ops			= &afs_symlink_operation;
+ 	op->create.reason	= afs_edit_dir_for_symlink;
+ 	op->create.symlink	= content;
++	op->mtime		= current_time(dir);
+ 	return afs_do_sync_operation(op);
+ 
+ error:
+diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
+index 789be30d6ee22..2321e5ddb664d 100644
+--- a/fs/ceph/caps.c
++++ b/fs/ceph/caps.c
+@@ -1627,6 +1627,7 @@ void ceph_flush_snaps(struct ceph_inode_info *ci,
+ 	struct inode *inode = &ci->netfs.inode;
+ 	struct ceph_mds_client *mdsc = ceph_inode_to_client(inode)->mdsc;
+ 	struct ceph_mds_session *session = NULL;
++	bool need_put = false;
+ 	int mds;
+ 
+ 	dout("ceph_flush_snaps %p\n", inode);
+@@ -1671,8 +1672,13 @@ out:
+ 		ceph_put_mds_session(session);
+ 	/* we flushed them all; remove this inode from the queue */
+ 	spin_lock(&mdsc->snap_flush_lock);
++	if (!list_empty(&ci->i_snap_flush_item))
++		need_put = true;
+ 	list_del_init(&ci->i_snap_flush_item);
+ 	spin_unlock(&mdsc->snap_flush_lock);
++
++	if (need_put)
++		iput(inode);
+ }
+ 
+ /*
+diff --git a/fs/ceph/snap.c b/fs/ceph/snap.c
+index 0b236ebd989fc..2e73ba62bd7aa 100644
+--- a/fs/ceph/snap.c
++++ b/fs/ceph/snap.c
+@@ -693,8 +693,10 @@ int __ceph_finish_cap_snap(struct ceph_inode_info *ci,
+ 	     capsnap->size);
+ 
+ 	spin_lock(&mdsc->snap_flush_lock);
+-	if (list_empty(&ci->i_snap_flush_item))
++	if (list_empty(&ci->i_snap_flush_item)) {
++		ihold(inode);
+ 		list_add_tail(&ci->i_snap_flush_item, &mdsc->snap_flush_list);
++	}
+ 	spin_unlock(&mdsc->snap_flush_lock);
+ 	return 1;  /* caller may want to ceph_flush_snaps */
+ }
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index 1f222c396932e..9c987bdbab334 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -6354,7 +6354,6 @@ static int __ext4_remount(struct fs_context *fc, struct super_block *sb)
+ 	struct ext4_mount_options old_opts;
+ 	ext4_group_t g;
+ 	int err = 0;
+-	int enable_rw = 0;
+ #ifdef CONFIG_QUOTA
+ 	int enable_quota = 0;
+ 	int i, j;
+@@ -6541,7 +6540,7 @@ static int __ext4_remount(struct fs_context *fc, struct super_block *sb)
+ 			if (err)
+ 				goto restore_opts;
+ 
+-			enable_rw = 1;
++			sb->s_flags &= ~SB_RDONLY;
+ 			if (ext4_has_feature_mmp(sb)) {
+ 				err = ext4_multi_mount_protect(sb,
+ 						le64_to_cpu(es->s_mmp_block));
+@@ -6588,9 +6587,6 @@ static int __ext4_remount(struct fs_context *fc, struct super_block *sb)
+ 	if (!test_opt(sb, BLOCK_VALIDITY) && sbi->s_system_blks)
+ 		ext4_release_system_zone(sb);
+ 
+-	if (enable_rw)
+-		sb->s_flags &= ~SB_RDONLY;
+-
+ 	/*
+ 	 * Reinitialize lazy itable initialization thread based on
+ 	 * current settings
+diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c
+index 6fa38707163b1..7421ca1d968b0 100644
+--- a/fs/ext4/xattr.c
++++ b/fs/ext4/xattr.c
+@@ -2057,8 +2057,9 @@ inserted:
+ 			else {
+ 				u32 ref;
+ 
++#ifdef EXT4_XATTR_DEBUG
+ 				WARN_ON_ONCE(dquot_initialize_needed(inode));
+-
++#endif
+ 				/* The old block is released after updating
+ 				   the inode. */
+ 				error = dquot_alloc_block(inode,
+@@ -2121,8 +2122,9 @@ inserted:
+ 			/* We need to allocate a new block */
+ 			ext4_fsblk_t goal, block;
+ 
++#ifdef EXT4_XATTR_DEBUG
+ 			WARN_ON_ONCE(dquot_initialize_needed(inode));
+-
++#endif
+ 			goal = ext4_group_first_block_no(sb,
+ 						EXT4_I(inode)->i_block_group);
+ 			block = ext4_new_meta_blocks(handle, inode, goal, 0,
+diff --git a/fs/ksmbd/connection.c b/fs/ksmbd/connection.c
+index 4882a812ea867..e11d4a1e63d73 100644
+--- a/fs/ksmbd/connection.c
++++ b/fs/ksmbd/connection.c
+@@ -294,6 +294,9 @@ bool ksmbd_conn_alive(struct ksmbd_conn *conn)
+ 	return true;
+ }
+ 
++#define SMB1_MIN_SUPPORTED_HEADER_SIZE (sizeof(struct smb_hdr))
++#define SMB2_MIN_SUPPORTED_HEADER_SIZE (sizeof(struct smb2_hdr) + 4)
++
+ /**
+  * ksmbd_conn_handler_loop() - session thread to listen on new smb requests
+  * @p:		connection instance
+@@ -350,6 +353,9 @@ int ksmbd_conn_handler_loop(void *p)
+ 		if (pdu_size > MAX_STREAM_PROT_LEN)
+ 			break;
+ 
++		if (pdu_size < SMB1_MIN_SUPPORTED_HEADER_SIZE)
++			break;
++
+ 		/* 4 for rfc1002 length field */
+ 		/* 1 for implied bcc[0] */
+ 		size = pdu_size + 4 + 1;
+@@ -377,6 +383,12 @@ int ksmbd_conn_handler_loop(void *p)
+ 			continue;
+ 		}
+ 
++		if (((struct smb2_hdr *)smb2_get_msg(conn->request_buf))->ProtocolId ==
++		    SMB2_PROTO_NUMBER) {
++			if (pdu_size < SMB2_MIN_SUPPORTED_HEADER_SIZE)
++				break;
++		}
++
+ 		if (!default_conn_ops.process_fn) {
+ 			pr_err("No connection request callback\n");
+ 			break;
+diff --git a/fs/ksmbd/oplock.c b/fs/ksmbd/oplock.c
+index db181bdad73a2..844b303baf293 100644
+--- a/fs/ksmbd/oplock.c
++++ b/fs/ksmbd/oplock.c
+@@ -1415,56 +1415,38 @@ void create_lease_buf(u8 *rbuf, struct lease *lease)
+  */
+ struct lease_ctx_info *parse_lease_state(void *open_req)
+ {
+-	char *data_offset;
+ 	struct create_context *cc;
+-	unsigned int next = 0;
+-	char *name;
+-	bool found = false;
+ 	struct smb2_create_req *req = (struct smb2_create_req *)open_req;
+-	struct lease_ctx_info *lreq = kzalloc(sizeof(struct lease_ctx_info),
+-		GFP_KERNEL);
++	struct lease_ctx_info *lreq;
++
++	cc = smb2_find_context_vals(req, SMB2_CREATE_REQUEST_LEASE, 4);
++	if (IS_ERR_OR_NULL(cc))
++		return NULL;
++
++	lreq = kzalloc(sizeof(struct lease_ctx_info), GFP_KERNEL);
+ 	if (!lreq)
+ 		return NULL;
+ 
+-	data_offset = (char *)req + le32_to_cpu(req->CreateContextsOffset);
+-	cc = (struct create_context *)data_offset;
+-	do {
+-		cc = (struct create_context *)((char *)cc + next);
+-		name = le16_to_cpu(cc->NameOffset) + (char *)cc;
+-		if (le16_to_cpu(cc->NameLength) != 4 ||
+-		    strncmp(name, SMB2_CREATE_REQUEST_LEASE, 4)) {
+-			next = le32_to_cpu(cc->Next);
+-			continue;
+-		}
+-		found = true;
+-		break;
+-	} while (next != 0);
++	if (sizeof(struct lease_context_v2) == le32_to_cpu(cc->DataLength)) {
++		struct create_lease_v2 *lc = (struct create_lease_v2 *)cc;
+ 
+-	if (found) {
+-		if (sizeof(struct lease_context_v2) == le32_to_cpu(cc->DataLength)) {
+-			struct create_lease_v2 *lc = (struct create_lease_v2 *)cc;
+-
+-			memcpy(lreq->lease_key, lc->lcontext.LeaseKey, SMB2_LEASE_KEY_SIZE);
+-			lreq->req_state = lc->lcontext.LeaseState;
+-			lreq->flags = lc->lcontext.LeaseFlags;
+-			lreq->duration = lc->lcontext.LeaseDuration;
+-			memcpy(lreq->parent_lease_key, lc->lcontext.ParentLeaseKey,
+-			       SMB2_LEASE_KEY_SIZE);
+-			lreq->version = 2;
+-		} else {
+-			struct create_lease *lc = (struct create_lease *)cc;
++		memcpy(lreq->lease_key, lc->lcontext.LeaseKey, SMB2_LEASE_KEY_SIZE);
++		lreq->req_state = lc->lcontext.LeaseState;
++		lreq->flags = lc->lcontext.LeaseFlags;
++		lreq->duration = lc->lcontext.LeaseDuration;
++		memcpy(lreq->parent_lease_key, lc->lcontext.ParentLeaseKey,
++				SMB2_LEASE_KEY_SIZE);
++		lreq->version = 2;
++	} else {
++		struct create_lease *lc = (struct create_lease *)cc;
+ 
+-			memcpy(lreq->lease_key, lc->lcontext.LeaseKey, SMB2_LEASE_KEY_SIZE);
+-			lreq->req_state = lc->lcontext.LeaseState;
+-			lreq->flags = lc->lcontext.LeaseFlags;
+-			lreq->duration = lc->lcontext.LeaseDuration;
+-			lreq->version = 1;
+-		}
+-		return lreq;
++		memcpy(lreq->lease_key, lc->lcontext.LeaseKey, SMB2_LEASE_KEY_SIZE);
++		lreq->req_state = lc->lcontext.LeaseState;
++		lreq->flags = lc->lcontext.LeaseFlags;
++		lreq->duration = lc->lcontext.LeaseDuration;
++		lreq->version = 1;
+ 	}
+-
+-	kfree(lreq);
+-	return NULL;
++	return lreq;
+ }
+ 
+ /**
+diff --git a/fs/ksmbd/smb2pdu.c b/fs/ksmbd/smb2pdu.c
+index 46f815098a753..85783112e56a0 100644
+--- a/fs/ksmbd/smb2pdu.c
++++ b/fs/ksmbd/smb2pdu.c
+@@ -991,13 +991,13 @@ static void decode_sign_cap_ctxt(struct ksmbd_conn *conn,
+ 
+ static __le32 deassemble_neg_contexts(struct ksmbd_conn *conn,
+ 				      struct smb2_negotiate_req *req,
+-				      int len_of_smb)
++				      unsigned int len_of_smb)
+ {
+ 	/* +4 is to account for the RFC1001 len field */
+ 	struct smb2_neg_context *pctx = (struct smb2_neg_context *)req;
+ 	int i = 0, len_of_ctxts;
+-	int offset = le32_to_cpu(req->NegotiateContextOffset);
+-	int neg_ctxt_cnt = le16_to_cpu(req->NegotiateContextCount);
++	unsigned int offset = le32_to_cpu(req->NegotiateContextOffset);
++	unsigned int neg_ctxt_cnt = le16_to_cpu(req->NegotiateContextCount);
+ 	__le32 status = STATUS_INVALID_PARAMETER;
+ 
+ 	ksmbd_debug(SMB, "decoding %d negotiate contexts\n", neg_ctxt_cnt);
+@@ -1011,7 +1011,7 @@ static __le32 deassemble_neg_contexts(struct ksmbd_conn *conn,
+ 	while (i++ < neg_ctxt_cnt) {
+ 		int clen, ctxt_len;
+ 
+-		if (len_of_ctxts < sizeof(struct smb2_neg_context))
++		if (len_of_ctxts < (int)sizeof(struct smb2_neg_context))
+ 			break;
+ 
+ 		pctx = (struct smb2_neg_context *)((char *)pctx + offset);
+@@ -1066,9 +1066,8 @@ static __le32 deassemble_neg_contexts(struct ksmbd_conn *conn,
+ 		}
+ 
+ 		/* offsets must be 8 byte aligned */
+-		clen = (clen + 7) & ~0x7;
+-		offset = clen + sizeof(struct smb2_neg_context);
+-		len_of_ctxts -= clen + sizeof(struct smb2_neg_context);
++		offset = (ctxt_len + 7) & ~0x7;
++		len_of_ctxts -= offset;
+ 	}
+ 	return status;
+ }
+diff --git a/fs/ksmbd/smbacl.c b/fs/ksmbd/smbacl.c
+index 6d6cfb6957a99..0a5862a61c773 100644
+--- a/fs/ksmbd/smbacl.c
++++ b/fs/ksmbd/smbacl.c
+@@ -1290,7 +1290,7 @@ int smb_check_perm_dacl(struct ksmbd_conn *conn, const struct path *path,
+ 
+ 	if (IS_ENABLED(CONFIG_FS_POSIX_ACL)) {
+ 		posix_acls = get_inode_acl(d_inode(path->dentry), ACL_TYPE_ACCESS);
+-		if (posix_acls && !found) {
++		if (!IS_ERR_OR_NULL(posix_acls) && !found) {
+ 			unsigned int id = -1;
+ 
+ 			pa_entry = posix_acls->a_entries;
+@@ -1314,7 +1314,7 @@ int smb_check_perm_dacl(struct ksmbd_conn *conn, const struct path *path,
+ 				}
+ 			}
+ 		}
+-		if (posix_acls)
++		if (!IS_ERR_OR_NULL(posix_acls))
+ 			posix_acl_release(posix_acls);
+ 	}
+ 
+diff --git a/fs/ksmbd/vfs.c b/fs/ksmbd/vfs.c
+index 5ea9229dad2c0..f6a8b26593090 100644
+--- a/fs/ksmbd/vfs.c
++++ b/fs/ksmbd/vfs.c
+@@ -1377,7 +1377,7 @@ static struct xattr_smb_acl *ksmbd_vfs_make_xattr_posix_acl(struct mnt_idmap *id
+ 		return NULL;
+ 
+ 	posix_acls = get_inode_acl(inode, acl_type);
+-	if (!posix_acls)
++	if (IS_ERR_OR_NULL(posix_acls))
+ 		return NULL;
+ 
+ 	smb_acl = kzalloc(sizeof(struct xattr_smb_acl) +
+@@ -1886,7 +1886,7 @@ int ksmbd_vfs_inherit_posix_acl(struct mnt_idmap *idmap,
+ 		return -EOPNOTSUPP;
+ 
+ 	acls = get_inode_acl(parent_inode, ACL_TYPE_DEFAULT);
+-	if (!acls)
++	if (IS_ERR_OR_NULL(acls))
+ 		return -ENOENT;
+ 	pace = acls->a_entries;
+ 
+diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
+index 7db9f960221d3..7ed63f5bbe056 100644
+--- a/include/linux/netdevice.h
++++ b/include/linux/netdevice.h
+@@ -612,7 +612,7 @@ struct netdev_queue {
+ 	netdevice_tracker	dev_tracker;
+ 
+ 	struct Qdisc __rcu	*qdisc;
+-	struct Qdisc		*qdisc_sleeping;
++	struct Qdisc __rcu	*qdisc_sleeping;
+ #ifdef CONFIG_SYSFS
+ 	struct kobject		kobj;
+ #endif
+@@ -760,8 +760,11 @@ static inline void rps_record_sock_flow(struct rps_sock_flow_table *table,
+ 		/* We only give a hint, preemption can change CPU under us */
+ 		val |= raw_smp_processor_id();
+ 
+-		if (table->ents[index] != val)
+-			table->ents[index] = val;
++		/* The following WRITE_ONCE() is paired with the READ_ONCE()
++		 * here, and another one in get_rps_cpu().
++		 */
++		if (READ_ONCE(table->ents[index]) != val)
++			WRITE_ONCE(table->ents[index], val);
+ 	}
+ }
+ 
+diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
+index a7e3a3405520a..5b9415cb979b1 100644
+--- a/include/linux/page-flags.h
++++ b/include/linux/page-flags.h
+@@ -630,6 +630,12 @@ PAGEFLAG_FALSE(VmemmapSelfHosted, vmemmap_self_hosted)
+  * Please note that, confusingly, "page_mapping" refers to the inode
+  * address_space which maps the page from disk; whereas "page_mapped"
+  * refers to user virtual address space into which the page is mapped.
++ *
++ * For slab pages, since slab reuses the bits in struct page to store its
++ * internal states, the page->mapping does not exist as such, nor do these
++ * flags below.  So in order to avoid testing non-existent bits, please
++ * make sure that PageSlab(page) actually evaluates to false before calling
++ * the following functions (e.g., PageAnon).  See mm/slab.h.
+  */
+ #define PAGE_MAPPING_ANON	0x1
+ #define PAGE_MAPPING_MOVABLE	0x2
+diff --git a/include/linux/usb/hcd.h b/include/linux/usb/hcd.h
+index b51c07111729b..ec37cc9743909 100644
+--- a/include/linux/usb/hcd.h
++++ b/include/linux/usb/hcd.h
+@@ -503,6 +503,11 @@ void *hcd_buffer_alloc(struct usb_bus *bus, size_t size,
+ void hcd_buffer_free(struct usb_bus *bus, size_t size,
+ 	void *addr, dma_addr_t dma);
+ 
++void *hcd_buffer_alloc_pages(struct usb_hcd *hcd,
++		size_t size, gfp_t mem_flags, dma_addr_t *dma);
++void hcd_buffer_free_pages(struct usb_hcd *hcd,
++		size_t size, void *addr, dma_addr_t dma);
++
+ /* generic bus glue, needed for host controllers that don't use PCI */
+ extern irqreturn_t usb_hcd_irq(int irq, void *__hcd);
+ 
+diff --git a/include/net/bluetooth/bluetooth.h b/include/net/bluetooth/bluetooth.h
+index bcc5a4cd2c17b..1b4230cd42a37 100644
+--- a/include/net/bluetooth/bluetooth.h
++++ b/include/net/bluetooth/bluetooth.h
+@@ -1,6 +1,7 @@
+ /*
+    BlueZ - Bluetooth protocol stack for Linux
+    Copyright (C) 2000-2001 Qualcomm Incorporated
++   Copyright 2023 NXP
+ 
+    Written 2000,2001 by Maxim Krasnyansky <maxk@qualcomm.com>
+ 
+@@ -171,23 +172,39 @@ struct bt_iso_io_qos {
+ 	__u8  rtn;
+ };
+ 
+-struct bt_iso_qos {
+-	union {
+-		__u8  cig;
+-		__u8  big;
+-	};
+-	union {
+-		__u8  cis;
+-		__u8  bis;
+-	};
+-	union {
+-		__u8  sca;
+-		__u8  sync_interval;
+-	};
++struct bt_iso_ucast_qos {
++	__u8  cig;
++	__u8  cis;
++	__u8  sca;
++	__u8  packing;
++	__u8  framing;
++	struct bt_iso_io_qos in;
++	struct bt_iso_io_qos out;
++};
++
++struct bt_iso_bcast_qos {
++	__u8  big;
++	__u8  bis;
++	__u8  sync_interval;
+ 	__u8  packing;
+ 	__u8  framing;
+ 	struct bt_iso_io_qos in;
+ 	struct bt_iso_io_qos out;
++	__u8  encryption;
++	__u8  bcode[16];
++	__u8  options;
++	__u16 skip;
++	__u16 sync_timeout;
++	__u8  sync_cte_type;
++	__u8  mse;
++	__u16 timeout;
++};
++
++struct bt_iso_qos {
++	union {
++		struct bt_iso_ucast_qos ucast;
++		struct bt_iso_bcast_qos bcast;
++	};
+ };
+ 
+ #define BT_ISO_PHY_1M		0x01
+diff --git a/include/net/bluetooth/hci.h b/include/net/bluetooth/hci.h
+index 07df96c47ef4f..872dcb91a540e 100644
+--- a/include/net/bluetooth/hci.h
++++ b/include/net/bluetooth/hci.h
+@@ -350,6 +350,7 @@ enum {
+ enum {
+ 	HCI_SETUP,
+ 	HCI_CONFIG,
++	HCI_DEBUGFS_CREATED,
+ 	HCI_AUTO_OFF,
+ 	HCI_RFKILLED,
+ 	HCI_MGMT,
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index d5311ceb21c62..65b76bf63109b 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -1,6 +1,7 @@
+ /*
+    BlueZ - Bluetooth protocol stack for Linux
+    Copyright (c) 2000-2001, 2010, Code Aurora Forum. All rights reserved.
++   Copyright 2023 NXP
+ 
+    Written 2000,2001 by Maxim Krasnyansky <maxk@qualcomm.com>
+ 
+@@ -513,6 +514,7 @@ struct hci_dev {
+ 	struct work_struct	cmd_sync_work;
+ 	struct list_head	cmd_sync_work_list;
+ 	struct mutex		cmd_sync_work_lock;
++	struct mutex		unregister_lock;
+ 	struct work_struct	cmd_sync_cancel_work;
+ 	struct work_struct	reenable_adv_work;
+ 
+@@ -764,7 +766,10 @@ struct hci_conn {
+ 	void		*iso_data;
+ 	struct amp_mgr	*amp_mgr;
+ 
+-	struct hci_conn	*link;
++	struct list_head link_list;
++	struct hci_conn	*parent;
++	struct hci_link *link;
++
+ 	struct bt_codec codec;
+ 
+ 	void (*connect_cfm_cb)	(struct hci_conn *conn, u8 status);
+@@ -774,6 +779,11 @@ struct hci_conn {
+ 	void (*cleanup)(struct hci_conn *conn);
+ };
+ 
++struct hci_link {
++	struct list_head list;
++	struct hci_conn *conn;
++};
++
+ struct hci_chan {
+ 	struct list_head list;
+ 	__u16 handle;
+@@ -1091,7 +1101,7 @@ static inline struct hci_conn *hci_conn_hash_lookup_bis(struct hci_dev *hdev,
+ 		if (bacmp(&c->dst, ba) || c->type != ISO_LINK)
+ 			continue;
+ 
+-		if (c->iso_qos.big == big && c->iso_qos.bis == bis) {
++		if (c->iso_qos.bcast.big == big && c->iso_qos.bcast.bis == bis) {
+ 			rcu_read_unlock();
+ 			return c;
+ 		}
+@@ -1166,7 +1176,9 @@ static inline struct hci_conn *hci_conn_hash_lookup_le(struct hci_dev *hdev,
+ 
+ static inline struct hci_conn *hci_conn_hash_lookup_cis(struct hci_dev *hdev,
+ 							bdaddr_t *ba,
+-							__u8 ba_type)
++							__u8 ba_type,
++							__u8 cig,
++							__u8 id)
+ {
+ 	struct hci_conn_hash *h = &hdev->conn_hash;
+ 	struct hci_conn  *c;
+@@ -1177,7 +1189,16 @@ static inline struct hci_conn *hci_conn_hash_lookup_cis(struct hci_dev *hdev,
+ 		if (c->type != ISO_LINK)
+ 			continue;
+ 
+-		if (ba_type == c->dst_type && !bacmp(&c->dst, ba)) {
++		/* Match CIG ID if set */
++		if (cig != BT_ISO_QOS_CIG_UNSET && cig != c->iso_qos.ucast.cig)
++			continue;
++
++		/* Match CIS ID if set */
++		if (id != BT_ISO_QOS_CIS_UNSET && id != c->iso_qos.ucast.cis)
++			continue;
++
++		/* Match destination address if set */
++		if (!ba || (ba_type == c->dst_type && !bacmp(&c->dst, ba))) {
+ 			rcu_read_unlock();
+ 			return c;
+ 		}
+@@ -1200,7 +1221,7 @@ static inline struct hci_conn *hci_conn_hash_lookup_cig(struct hci_dev *hdev,
+ 		if (c->type != ISO_LINK)
+ 			continue;
+ 
+-		if (handle == c->iso_qos.cig) {
++		if (handle == c->iso_qos.ucast.cig) {
+ 			rcu_read_unlock();
+ 			return c;
+ 		}
+@@ -1223,7 +1244,7 @@ static inline struct hci_conn *hci_conn_hash_lookup_big(struct hci_dev *hdev,
+ 		if (bacmp(&c->dst, BDADDR_ANY) || c->type != ISO_LINK)
+ 			continue;
+ 
+-		if (handle == c->iso_qos.big) {
++		if (handle == c->iso_qos.bcast.big) {
+ 			rcu_read_unlock();
+ 			return c;
+ 		}
+@@ -1303,7 +1324,7 @@ int hci_le_create_cis(struct hci_conn *conn);
+ 
+ struct hci_conn *hci_conn_add(struct hci_dev *hdev, int type, bdaddr_t *dst,
+ 			      u8 role);
+-int hci_conn_del(struct hci_conn *conn);
++void hci_conn_del(struct hci_conn *conn);
+ void hci_conn_hash_flush(struct hci_dev *hdev);
+ void hci_conn_check_pending(struct hci_dev *hdev);
+ 
+@@ -1332,7 +1353,7 @@ struct hci_conn *hci_connect_bis(struct hci_dev *hdev, bdaddr_t *dst,
+ 				 __u8 dst_type, struct bt_iso_qos *qos,
+ 				 __u8 data_len, __u8 *data);
+ int hci_pa_create_sync(struct hci_dev *hdev, bdaddr_t *dst, __u8 dst_type,
+-		       __u8 sid);
++		       __u8 sid, struct bt_iso_qos *qos);
+ int hci_le_big_create_sync(struct hci_dev *hdev, struct bt_iso_qos *qos,
+ 			   __u16 sync_handle, __u8 num_bis, __u8 bis[]);
+ int hci_conn_check_link_mode(struct hci_conn *conn);
+@@ -1377,12 +1398,14 @@ static inline void hci_conn_put(struct hci_conn *conn)
+ 	put_device(&conn->dev);
+ }
+ 
+-static inline void hci_conn_hold(struct hci_conn *conn)
++static inline struct hci_conn *hci_conn_hold(struct hci_conn *conn)
+ {
+ 	BT_DBG("hcon %p orig refcnt %d", conn, atomic_read(&conn->refcnt));
+ 
+ 	atomic_inc(&conn->refcnt);
+ 	cancel_delayed_work(&conn->disc_work);
++
++	return conn;
+ }
+ 
+ static inline void hci_conn_drop(struct hci_conn *conn)
+diff --git a/include/net/neighbour.h b/include/net/neighbour.h
+index 2f2a6023fb0e5..94a1599824d8f 100644
+--- a/include/net/neighbour.h
++++ b/include/net/neighbour.h
+@@ -180,7 +180,7 @@ struct pneigh_entry {
+ 	netdevice_tracker	dev_tracker;
+ 	u32			flags;
+ 	u8			protocol;
+-	u8			key[];
++	u32			key[];
+ };
+ 
+ /*
+diff --git a/include/net/netns/ipv6.h b/include/net/netns/ipv6.h
+index b4af4837d80b4..f6e6a3ab91489 100644
+--- a/include/net/netns/ipv6.h
++++ b/include/net/netns/ipv6.h
+@@ -53,7 +53,7 @@ struct netns_sysctl_ipv6 {
+ 	int seg6_flowlabel;
+ 	u32 ioam6_id;
+ 	u64 ioam6_id_wide;
+-	bool skip_notify_on_dev_down;
++	int skip_notify_on_dev_down;
+ 	u8 fib_notify_on_flag_change;
+ };
+ 
+diff --git a/include/net/ping.h b/include/net/ping.h
+index 9233ad3de0ade..bc7779262e603 100644
+--- a/include/net/ping.h
++++ b/include/net/ping.h
+@@ -16,11 +16,7 @@
+ #define PING_HTABLE_SIZE 	64
+ #define PING_HTABLE_MASK 	(PING_HTABLE_SIZE-1)
+ 
+-/*
+- * gid_t is either uint or ushort.  We want to pass it to
+- * proc_dointvec_minmax(), so it must not be larger than MAX_INT
+- */
+-#define GID_T_MAX (((gid_t)~0U) >> 1)
++#define GID_T_MAX (((gid_t)~0U) - 1)
+ 
+ /* Compatibility glue so we can support IPv6 when it's compiled as a module */
+ struct pingv6_ops {
+diff --git a/include/net/pkt_sched.h b/include/net/pkt_sched.h
+index fc688c7e95951..4df802f84eeba 100644
+--- a/include/net/pkt_sched.h
++++ b/include/net/pkt_sched.h
+@@ -128,6 +128,8 @@ static inline void qdisc_run(struct Qdisc *q)
+ 	}
+ }
+ 
++extern const struct nla_policy rtm_tca_policy[TCA_MAX + 1];
++
+ /* Calculate maximal size of packet seen by hard_start_xmit
+    routine of this device.
+  */
+diff --git a/include/net/rpl.h b/include/net/rpl.h
+index 308ef0a05caef..30fe780d1e7c8 100644
+--- a/include/net/rpl.h
++++ b/include/net/rpl.h
+@@ -23,9 +23,6 @@ static inline int rpl_init(void)
+ static inline void rpl_exit(void) {}
+ #endif
+ 
+-/* Worst decompression memory usage ipv6 address (16) + pad 7 */
+-#define IPV6_RPL_SRH_WORST_SWAP_SIZE (sizeof(struct in6_addr) + 7)
+-
+ size_t ipv6_rpl_srh_size(unsigned char n, unsigned char cmpri,
+ 			 unsigned char cmpre);
+ 
+diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
+index fab5ba3e61b7c..27271f2b37cb3 100644
+--- a/include/net/sch_generic.h
++++ b/include/net/sch_generic.h
+@@ -545,7 +545,7 @@ static inline struct Qdisc *qdisc_root_bh(const struct Qdisc *qdisc)
+ 
+ static inline struct Qdisc *qdisc_root_sleeping(const struct Qdisc *qdisc)
+ {
+-	return qdisc->dev_queue->qdisc_sleeping;
++	return rcu_dereference_rtnl(qdisc->dev_queue->qdisc_sleeping);
+ }
+ 
+ static inline spinlock_t *qdisc_root_sleeping_lock(const struct Qdisc *qdisc)
+@@ -754,7 +754,9 @@ static inline bool qdisc_tx_changing(const struct net_device *dev)
+ 
+ 	for (i = 0; i < dev->num_tx_queues; i++) {
+ 		struct netdev_queue *txq = netdev_get_tx_queue(dev, i);
+-		if (rcu_access_pointer(txq->qdisc) != txq->qdisc_sleeping)
++
++		if (rcu_access_pointer(txq->qdisc) !=
++		    rcu_access_pointer(txq->qdisc_sleeping))
+ 			return true;
+ 	}
+ 	return false;
+diff --git a/include/net/sock.h b/include/net/sock.h
+index 45e46a1c4afc6..f0654c44acf5f 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -1152,8 +1152,12 @@ static inline void sock_rps_record_flow(const struct sock *sk)
+ 		 * OR	an additional socket flag
+ 		 * [1] : sk_state and sk_prot are in the same cache line.
+ 		 */
+-		if (sk->sk_state == TCP_ESTABLISHED)
+-			sock_rps_record_flow_hash(sk->sk_rxhash);
++		if (sk->sk_state == TCP_ESTABLISHED) {
++			/* This READ_ONCE() is paired with the WRITE_ONCE()
++			 * from sock_rps_save_rxhash() and sock_rps_reset_rxhash().
++			 */
++			sock_rps_record_flow_hash(READ_ONCE(sk->sk_rxhash));
++		}
+ 	}
+ #endif
+ }
+@@ -1162,15 +1166,19 @@ static inline void sock_rps_save_rxhash(struct sock *sk,
+ 					const struct sk_buff *skb)
+ {
+ #ifdef CONFIG_RPS
+-	if (unlikely(sk->sk_rxhash != skb->hash))
+-		sk->sk_rxhash = skb->hash;
++	/* The following WRITE_ONCE() is paired with the READ_ONCE()
++	 * here, and another one in sock_rps_record_flow().
++	 */
++	if (unlikely(READ_ONCE(sk->sk_rxhash) != skb->hash))
++		WRITE_ONCE(sk->sk_rxhash, skb->hash);
+ #endif
+ }
+ 
+ static inline void sock_rps_reset_rxhash(struct sock *sk)
+ {
+ #ifdef CONFIG_RPS
+-	sk->sk_rxhash = 0;
++	/* Paired with READ_ONCE() in sock_rps_record_flow() */
++	WRITE_ONCE(sk->sk_rxhash, 0);
+ #endif
+ }
+ 
+diff --git a/kernel/bpf/map_in_map.c b/kernel/bpf/map_in_map.c
+index 38136ec4e095a..fbc3e944dc747 100644
+--- a/kernel/bpf/map_in_map.c
++++ b/kernel/bpf/map_in_map.c
+@@ -81,9 +81,13 @@ struct bpf_map *bpf_map_meta_alloc(int inner_map_ufd)
+ 	/* Misc members not needed in bpf_map_meta_equal() check. */
+ 	inner_map_meta->ops = inner_map->ops;
+ 	if (inner_map->ops == &array_map_ops) {
++		struct bpf_array *inner_array_meta =
++			container_of(inner_map_meta, struct bpf_array, map);
++		struct bpf_array *inner_array = container_of(inner_map, struct bpf_array, map);
++
++		inner_array_meta->index_mask = inner_array->index_mask;
++		inner_array_meta->elem_size = inner_array->elem_size;
+ 		inner_map_meta->bypass_spec_v1 = inner_map->bypass_spec_v1;
+-		container_of(inner_map_meta, struct bpf_array, map)->index_mask =
+-		     container_of(inner_map, struct bpf_array, map)->index_mask;
+ 	}
+ 
+ 	fdput(f);
+diff --git a/kernel/fork.c b/kernel/fork.c
+index ea332319dffea..1ec1e9ea4bf83 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -559,6 +559,7 @@ void free_task(struct task_struct *tsk)
+ 	arch_release_task_struct(tsk);
+ 	if (tsk->flags & PF_KTHREAD)
+ 		free_kthread_struct(tsk);
++	bpf_task_storage_free(tsk);
+ 	free_task_struct(tsk);
+ }
+ EXPORT_SYMBOL(free_task);
+@@ -845,7 +846,6 @@ void __put_task_struct(struct task_struct *tsk)
+ 	cgroup_free(tsk);
+ 	task_numa_free(tsk, true);
+ 	security_task_free(tsk);
+-	bpf_task_storage_free(tsk);
+ 	exit_creds(tsk);
+ 	delayacct_tsk_free(tsk);
+ 	put_signal_struct(tsk->signal);
+diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
+index e8da032bb6fc8..165441044bc55 100644
+--- a/kernel/trace/bpf_trace.c
++++ b/kernel/trace/bpf_trace.c
+@@ -900,13 +900,23 @@ static const struct bpf_func_proto bpf_send_signal_thread_proto = {
+ 
+ BPF_CALL_3(bpf_d_path, struct path *, path, char *, buf, u32, sz)
+ {
++	struct path copy;
+ 	long len;
+ 	char *p;
+ 
+ 	if (!sz)
+ 		return 0;
+ 
+-	p = d_path(path, buf, sz);
++	/*
++	 * The path pointer is verified as trusted and safe to use,
++	 * but let's double check it's valid anyway to workaround
++	 * potentially broken verifier.
++	 */
++	len = copy_from_kernel_nofault(&copy, path, sizeof(*path));
++	if (len < 0)
++		return len;
++
++	p = d_path(&copy, buf, sz);
+ 	if (IS_ERR(p)) {
+ 		len = PTR_ERR(p);
+ 	} else {
+diff --git a/lib/cpu_rmap.c b/lib/cpu_rmap.c
+index e77f12bb3c774..1833ad73de6fc 100644
+--- a/lib/cpu_rmap.c
++++ b/lib/cpu_rmap.c
+@@ -268,8 +268,8 @@ static void irq_cpu_rmap_release(struct kref *ref)
+ 	struct irq_glue *glue =
+ 		container_of(ref, struct irq_glue, notify.kref);
+ 
+-	cpu_rmap_put(glue->rmap);
+ 	glue->rmap->obj[glue->index] = NULL;
++	cpu_rmap_put(glue->rmap);
+ 	kfree(glue);
+ }
+ 
+diff --git a/mm/Kconfig.debug b/mm/Kconfig.debug
+index c3547a373c9ca..c7d27c3f2e967 100644
+--- a/mm/Kconfig.debug
++++ b/mm/Kconfig.debug
+@@ -98,6 +98,7 @@ config PAGE_OWNER
+ config PAGE_TABLE_CHECK
+ 	bool "Check for invalid mappings in user page tables"
+ 	depends on ARCH_SUPPORTS_PAGE_TABLE_CHECK
++	depends on EXCLUSIVE_SYSTEM_RAM
+ 	select PAGE_EXTENSION
+ 	help
+ 	  Check that anonymous page is not being mapped twice with read write
+diff --git a/mm/page_table_check.c b/mm/page_table_check.c
+index 25d8610c00429..f2baf97d5f389 100644
+--- a/mm/page_table_check.c
++++ b/mm/page_table_check.c
+@@ -71,6 +71,8 @@ static void page_table_check_clear(struct mm_struct *mm, unsigned long addr,
+ 
+ 	page = pfn_to_page(pfn);
+ 	page_ext = page_ext_get(page);
++
++	BUG_ON(PageSlab(page));
+ 	anon = PageAnon(page);
+ 
+ 	for (i = 0; i < pgcnt; i++) {
+@@ -107,6 +109,8 @@ static void page_table_check_set(struct mm_struct *mm, unsigned long addr,
+ 
+ 	page = pfn_to_page(pfn);
+ 	page_ext = page_ext_get(page);
++
++	BUG_ON(PageSlab(page));
+ 	anon = PageAnon(page);
+ 
+ 	for (i = 0; i < pgcnt; i++) {
+@@ -133,6 +137,8 @@ void __page_table_check_zero(struct page *page, unsigned int order)
+ 	struct page_ext *page_ext;
+ 	unsigned long i;
+ 
++	BUG_ON(PageSlab(page));
++
+ 	page_ext = page_ext_get(page);
+ 	BUG_ON(!page_ext);
+ 	for (i = 0; i < (1ul << order); i++) {
+diff --git a/net/batman-adv/distributed-arp-table.c b/net/batman-adv/distributed-arp-table.c
+index 6968e55eb9714..28a939d560906 100644
+--- a/net/batman-adv/distributed-arp-table.c
++++ b/net/batman-adv/distributed-arp-table.c
+@@ -101,7 +101,6 @@ static void batadv_dat_purge(struct work_struct *work);
+  */
+ static void batadv_dat_start_timer(struct batadv_priv *bat_priv)
+ {
+-	INIT_DELAYED_WORK(&bat_priv->dat.work, batadv_dat_purge);
+ 	queue_delayed_work(batadv_event_workqueue, &bat_priv->dat.work,
+ 			   msecs_to_jiffies(10000));
+ }
+@@ -819,6 +818,7 @@ int batadv_dat_init(struct batadv_priv *bat_priv)
+ 	if (!bat_priv->dat.hash)
+ 		return -ENOMEM;
+ 
++	INIT_DELAYED_WORK(&bat_priv->dat.work, batadv_dat_purge);
+ 	batadv_dat_start_timer(bat_priv);
+ 
+ 	batadv_tvlv_handler_register(bat_priv, batadv_dat_tvlv_ogm_handler_v1,
+diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c
+index 8455ba141ee61..62518d06d2aff 100644
+--- a/net/bluetooth/hci_conn.c
++++ b/net/bluetooth/hci_conn.c
+@@ -1,6 +1,7 @@
+ /*
+    BlueZ - Bluetooth protocol stack for Linux
+    Copyright (c) 2000-2001, 2010, Code Aurora Forum. All rights reserved.
++   Copyright 2023 NXP
+ 
+    Written 2000,2001 by Maxim Krasnyansky <maxk@qualcomm.com>
+ 
+@@ -329,8 +330,11 @@ static void hci_add_sco(struct hci_conn *conn, __u16 handle)
+ static bool find_next_esco_param(struct hci_conn *conn,
+ 				 const struct sco_param *esco_param, int size)
+ {
++	if (!conn->parent)
++		return false;
++
+ 	for (; conn->attempt <= size; conn->attempt++) {
+-		if (lmp_esco_2m_capable(conn->link) ||
++		if (lmp_esco_2m_capable(conn->parent) ||
+ 		    (esco_param[conn->attempt - 1].pkt_type & ESCO_2EV3))
+ 			break;
+ 		BT_DBG("hcon %p skipped attempt %d, eSCO 2M not supported",
+@@ -460,7 +464,7 @@ static int hci_enhanced_setup_sync(struct hci_dev *hdev, void *data)
+ 		break;
+ 
+ 	case BT_CODEC_CVSD:
+-		if (lmp_esco_capable(conn->link)) {
++		if (conn->parent && lmp_esco_capable(conn->parent)) {
+ 			if (!find_next_esco_param(conn, esco_param_cvsd,
+ 						  ARRAY_SIZE(esco_param_cvsd)))
+ 				return -EINVAL;
+@@ -530,7 +534,7 @@ static bool hci_setup_sync_conn(struct hci_conn *conn, __u16 handle)
+ 		param = &esco_param_msbc[conn->attempt - 1];
+ 		break;
+ 	case SCO_AIRMODE_CVSD:
+-		if (lmp_esco_capable(conn->link)) {
++		if (conn->parent && lmp_esco_capable(conn->parent)) {
+ 			if (!find_next_esco_param(conn, esco_param_cvsd,
+ 						  ARRAY_SIZE(esco_param_cvsd)))
+ 				return false;
+@@ -636,21 +640,22 @@ void hci_le_start_enc(struct hci_conn *conn, __le16 ediv, __le64 rand,
+ /* Device _must_ be locked */
+ void hci_sco_setup(struct hci_conn *conn, __u8 status)
+ {
+-	struct hci_conn *sco = conn->link;
++	struct hci_link *link;
+ 
+-	if (!sco)
++	link = list_first_entry_or_null(&conn->link_list, struct hci_link, list);
++	if (!link || !link->conn)
+ 		return;
+ 
+ 	BT_DBG("hcon %p", conn);
+ 
+ 	if (!status) {
+ 		if (lmp_esco_capable(conn->hdev))
+-			hci_setup_sync(sco, conn->handle);
++			hci_setup_sync(link->conn, conn->handle);
+ 		else
+-			hci_add_sco(sco, conn->handle);
++			hci_add_sco(link->conn, conn->handle);
+ 	} else {
+-		hci_connect_cfm(sco, status);
+-		hci_conn_del(sco);
++		hci_connect_cfm(link->conn, status);
++		hci_conn_del(link->conn);
+ 	}
+ }
+ 
+@@ -795,8 +800,8 @@ static void bis_list(struct hci_conn *conn, void *data)
+ 	if (bacmp(&conn->dst, BDADDR_ANY))
+ 		return;
+ 
+-	if (d->big != conn->iso_qos.big || d->bis == BT_ISO_QOS_BIS_UNSET ||
+-	    d->bis != conn->iso_qos.bis)
++	if (d->big != conn->iso_qos.bcast.big || d->bis == BT_ISO_QOS_BIS_UNSET ||
++	    d->bis != conn->iso_qos.bcast.bis)
+ 		return;
+ 
+ 	d->count++;
+@@ -916,10 +921,10 @@ static void bis_cleanup(struct hci_conn *conn)
+ 		if (!test_and_clear_bit(HCI_CONN_PER_ADV, &conn->flags))
+ 			return;
+ 
+-		hci_le_terminate_big(hdev, conn->iso_qos.big,
+-				     conn->iso_qos.bis);
++		hci_le_terminate_big(hdev, conn->iso_qos.bcast.big,
++				     conn->iso_qos.bcast.bis);
+ 	} else {
+-		hci_le_big_terminate(hdev, conn->iso_qos.big,
++		hci_le_big_terminate(hdev, conn->iso_qos.bcast.big,
+ 				     conn->sync_handle);
+ 	}
+ }
+@@ -942,8 +947,8 @@ static void find_cis(struct hci_conn *conn, void *data)
+ {
+ 	struct iso_list_data *d = data;
+ 
+-	/* Ignore broadcast */
+-	if (!bacmp(&conn->dst, BDADDR_ANY))
++	/* Ignore broadcast or if CIG don't match */
++	if (!bacmp(&conn->dst, BDADDR_ANY) || d->cig != conn->iso_qos.ucast.cig)
+ 		return;
+ 
+ 	d->count++;
+@@ -958,17 +963,22 @@ static void cis_cleanup(struct hci_conn *conn)
+ 	struct hci_dev *hdev = conn->hdev;
+ 	struct iso_list_data d;
+ 
++	if (conn->iso_qos.ucast.cig == BT_ISO_QOS_CIG_UNSET)
++		return;
++
+ 	memset(&d, 0, sizeof(d));
+-	d.cig = conn->iso_qos.cig;
++	d.cig = conn->iso_qos.ucast.cig;
+ 
+ 	/* Check if ISO connection is a CIS and remove CIG if there are
+ 	 * no other connections using it.
+ 	 */
++	hci_conn_hash_list_state(hdev, find_cis, ISO_LINK, BT_BOUND, &d);
++	hci_conn_hash_list_state(hdev, find_cis, ISO_LINK, BT_CONNECT, &d);
+ 	hci_conn_hash_list_state(hdev, find_cis, ISO_LINK, BT_CONNECTED, &d);
+ 	if (d.count)
+ 		return;
+ 
+-	hci_le_remove_cig(hdev, conn->iso_qos.cig);
++	hci_le_remove_cig(hdev, conn->iso_qos.ucast.cig);
+ }
+ 
+ struct hci_conn *hci_conn_add(struct hci_dev *hdev, int type, bdaddr_t *dst,
+@@ -1041,6 +1051,7 @@ struct hci_conn *hci_conn_add(struct hci_dev *hdev, int type, bdaddr_t *dst,
+ 	skb_queue_head_init(&conn->data_q);
+ 
+ 	INIT_LIST_HEAD(&conn->chan_list);
++	INIT_LIST_HEAD(&conn->link_list);
+ 
+ 	INIT_DELAYED_WORK(&conn->disc_work, hci_conn_timeout);
+ 	INIT_DELAYED_WORK(&conn->auto_accept_work, hci_conn_auto_accept);
+@@ -1068,18 +1079,53 @@ struct hci_conn *hci_conn_add(struct hci_dev *hdev, int type, bdaddr_t *dst,
+ 	return conn;
+ }
+ 
+-static bool hci_conn_unlink(struct hci_conn *conn)
++static void hci_conn_unlink(struct hci_conn *conn)
+ {
++	struct hci_dev *hdev = conn->hdev;
++
++	bt_dev_dbg(hdev, "hcon %p", conn);
++
++	if (!conn->parent) {
++		struct hci_link *link, *t;
++
++		list_for_each_entry_safe(link, t, &conn->link_list, list) {
++			struct hci_conn *child = link->conn;
++
++			hci_conn_unlink(child);
++
++			/* If hdev is down it means
++			 * hci_dev_close_sync/hci_conn_hash_flush is in progress
++			 * and links don't need to be cleanup as all connections
++			 * would be cleanup.
++			 */
++			if (!test_bit(HCI_UP, &hdev->flags))
++				continue;
++
++			/* Due to race, SCO connection might be not established
++			 * yet at this point. Delete it now, otherwise it is
++			 * possible for it to be stuck and can't be deleted.
++			 */
++			if (child->handle == HCI_CONN_HANDLE_UNSET)
++				hci_conn_del(child);
++		}
++
++		return;
++	}
++
+ 	if (!conn->link)
+-		return false;
++		return;
+ 
+-	conn->link->link = NULL;
+-	conn->link = NULL;
++	list_del_rcu(&conn->link->list);
++	synchronize_rcu();
+ 
+-	return true;
++	hci_conn_put(conn->parent);
++	conn->parent = NULL;
++
++	kfree(conn->link);
++	conn->link = NULL;
+ }
+ 
+-int hci_conn_del(struct hci_conn *conn)
++void hci_conn_del(struct hci_conn *conn)
+ {
+ 	struct hci_dev *hdev = conn->hdev;
+ 
+@@ -1090,18 +1136,7 @@ int hci_conn_del(struct hci_conn *conn)
+ 	cancel_delayed_work_sync(&conn->idle_work);
+ 
+ 	if (conn->type == ACL_LINK) {
+-		struct hci_conn *link = conn->link;
+-
+-		if (link) {
+-			hci_conn_unlink(conn);
+-			/* Due to race, SCO connection might be not established
+-			 * yet at this point. Delete it now, otherwise it is
+-			 * possible for it to be stuck and can't be deleted.
+-			 */
+-			if (link->handle == HCI_CONN_HANDLE_UNSET)
+-				hci_conn_del(link);
+-		}
+-
++		hci_conn_unlink(conn);
+ 		/* Unacked frames */
+ 		hdev->acl_cnt += conn->sent;
+ 	} else if (conn->type == LE_LINK) {
+@@ -1112,7 +1147,7 @@ int hci_conn_del(struct hci_conn *conn)
+ 		else
+ 			hdev->acl_cnt += conn->sent;
+ 	} else {
+-		struct hci_conn *acl = conn->link;
++		struct hci_conn *acl = conn->parent;
+ 
+ 		if (acl) {
+ 			hci_conn_unlink(conn);
+@@ -1141,8 +1176,6 @@ int hci_conn_del(struct hci_conn *conn)
+ 	 * rest of hci_conn_del.
+ 	 */
+ 	hci_conn_cleanup(conn);
+-
+-	return 0;
+ }
+ 
+ struct hci_dev *hci_get_route(bdaddr_t *dst, bdaddr_t *src, uint8_t src_type)
+@@ -1411,7 +1444,7 @@ static int qos_set_big(struct hci_dev *hdev, struct bt_iso_qos *qos)
+ 	struct iso_list_data data;
+ 
+ 	/* Allocate a BIG if not set */
+-	if (qos->big == BT_ISO_QOS_BIG_UNSET) {
++	if (qos->bcast.big == BT_ISO_QOS_BIG_UNSET) {
+ 		for (data.big = 0x00; data.big < 0xef; data.big++) {
+ 			data.count = 0;
+ 			data.bis = 0xff;
+@@ -1426,7 +1459,7 @@ static int qos_set_big(struct hci_dev *hdev, struct bt_iso_qos *qos)
+ 			return -EADDRNOTAVAIL;
+ 
+ 		/* Update BIG */
+-		qos->big = data.big;
++		qos->bcast.big = data.big;
+ 	}
+ 
+ 	return 0;
+@@ -1437,7 +1470,7 @@ static int qos_set_bis(struct hci_dev *hdev, struct bt_iso_qos *qos)
+ 	struct iso_list_data data;
+ 
+ 	/* Allocate BIS if not set */
+-	if (qos->bis == BT_ISO_QOS_BIS_UNSET) {
++	if (qos->bcast.bis == BT_ISO_QOS_BIS_UNSET) {
+ 		/* Find an unused adv set to advertise BIS, skip instance 0x00
+ 		 * since it is reserved as general purpose set.
+ 		 */
+@@ -1455,7 +1488,7 @@ static int qos_set_bis(struct hci_dev *hdev, struct bt_iso_qos *qos)
+ 			return -EADDRNOTAVAIL;
+ 
+ 		/* Update BIS */
+-		qos->bis = data.bis;
++		qos->bcast.bis = data.bis;
+ 	}
+ 
+ 	return 0;
+@@ -1484,8 +1517,8 @@ static struct hci_conn *hci_add_bis(struct hci_dev *hdev, bdaddr_t *dst,
+ 	if (err)
+ 		return ERR_PTR(err);
+ 
+-	data.big = qos->big;
+-	data.bis = qos->bis;
++	data.big = qos->bcast.big;
++	data.bis = qos->bcast.bis;
+ 	data.count = 0;
+ 
+ 	/* Check if there is already a matching BIG/BIS */
+@@ -1493,7 +1526,7 @@ static struct hci_conn *hci_add_bis(struct hci_dev *hdev, bdaddr_t *dst,
+ 	if (data.count)
+ 		return ERR_PTR(-EADDRINUSE);
+ 
+-	conn = hci_conn_hash_lookup_bis(hdev, dst, qos->big, qos->bis);
++	conn = hci_conn_hash_lookup_bis(hdev, dst, qos->bcast.big, qos->bcast.bis);
+ 	if (conn)
+ 		return ERR_PTR(-EADDRINUSE);
+ 
+@@ -1599,11 +1632,40 @@ struct hci_conn *hci_connect_acl(struct hci_dev *hdev, bdaddr_t *dst,
+ 	return acl;
+ }
+ 
++static struct hci_link *hci_conn_link(struct hci_conn *parent,
++				      struct hci_conn *conn)
++{
++	struct hci_dev *hdev = parent->hdev;
++	struct hci_link *link;
++
++	bt_dev_dbg(hdev, "parent %p hcon %p", parent, conn);
++
++	if (conn->link)
++		return conn->link;
++
++	if (conn->parent)
++		return NULL;
++
++	link = kzalloc(sizeof(*link), GFP_KERNEL);
++	if (!link)
++		return NULL;
++
++	link->conn = hci_conn_hold(conn);
++	conn->link = link;
++	conn->parent = hci_conn_get(parent);
++
++	/* Use list_add_tail_rcu append to the list */
++	list_add_tail_rcu(&link->list, &parent->link_list);
++
++	return link;
++}
++
+ struct hci_conn *hci_connect_sco(struct hci_dev *hdev, int type, bdaddr_t *dst,
+ 				 __u16 setting, struct bt_codec *codec)
+ {
+ 	struct hci_conn *acl;
+ 	struct hci_conn *sco;
++	struct hci_link *link;
+ 
+ 	acl = hci_connect_acl(hdev, dst, BT_SECURITY_LOW, HCI_AT_NO_BONDING,
+ 			      CONN_REASON_SCO_CONNECT);
+@@ -1619,10 +1681,12 @@ struct hci_conn *hci_connect_sco(struct hci_dev *hdev, int type, bdaddr_t *dst,
+ 		}
+ 	}
+ 
+-	acl->link = sco;
+-	sco->link = acl;
+-
+-	hci_conn_hold(sco);
++	link = hci_conn_link(acl, sco);
++	if (!link) {
++		hci_conn_drop(acl);
++		hci_conn_drop(sco);
++		return NULL;
++	}
+ 
+ 	sco->setting = setting;
+ 	sco->codec = *codec;
+@@ -1648,13 +1712,13 @@ static void cis_add(struct iso_list_data *d, struct bt_iso_qos *qos)
+ {
+ 	struct hci_cis_params *cis = &d->pdu.cis[d->pdu.cp.num_cis];
+ 
+-	cis->cis_id = qos->cis;
+-	cis->c_sdu  = cpu_to_le16(qos->out.sdu);
+-	cis->p_sdu  = cpu_to_le16(qos->in.sdu);
+-	cis->c_phy  = qos->out.phy ? qos->out.phy : qos->in.phy;
+-	cis->p_phy  = qos->in.phy ? qos->in.phy : qos->out.phy;
+-	cis->c_rtn  = qos->out.rtn;
+-	cis->p_rtn  = qos->in.rtn;
++	cis->cis_id = qos->ucast.cis;
++	cis->c_sdu  = cpu_to_le16(qos->ucast.out.sdu);
++	cis->p_sdu  = cpu_to_le16(qos->ucast.in.sdu);
++	cis->c_phy  = qos->ucast.out.phy ? qos->ucast.out.phy : qos->ucast.in.phy;
++	cis->p_phy  = qos->ucast.in.phy ? qos->ucast.in.phy : qos->ucast.out.phy;
++	cis->c_rtn  = qos->ucast.out.rtn;
++	cis->p_rtn  = qos->ucast.in.rtn;
+ 
+ 	d->pdu.cp.num_cis++;
+ }
+@@ -1667,8 +1731,8 @@ static void cis_list(struct hci_conn *conn, void *data)
+ 	if (!bacmp(&conn->dst, BDADDR_ANY))
+ 		return;
+ 
+-	if (d->cig != conn->iso_qos.cig || d->cis == BT_ISO_QOS_CIS_UNSET ||
+-	    d->cis != conn->iso_qos.cis)
++	if (d->cig != conn->iso_qos.ucast.cig || d->cis == BT_ISO_QOS_CIS_UNSET ||
++	    d->cis != conn->iso_qos.ucast.cis)
+ 		return;
+ 
+ 	d->count++;
+@@ -1687,17 +1751,18 @@ static int hci_le_create_big(struct hci_conn *conn, struct bt_iso_qos *qos)
+ 
+ 	memset(&cp, 0, sizeof(cp));
+ 
+-	cp.handle = qos->big;
+-	cp.adv_handle = qos->bis;
++	cp.handle = qos->bcast.big;
++	cp.adv_handle = qos->bcast.bis;
+ 	cp.num_bis  = 0x01;
+-	hci_cpu_to_le24(qos->out.interval, cp.bis.sdu_interval);
+-	cp.bis.sdu = cpu_to_le16(qos->out.sdu);
+-	cp.bis.latency =  cpu_to_le16(qos->out.latency);
+-	cp.bis.rtn  = qos->out.rtn;
+-	cp.bis.phy  = qos->out.phy;
+-	cp.bis.packing = qos->packing;
+-	cp.bis.framing = qos->framing;
+-	cp.bis.encryption = 0x00;
++	hci_cpu_to_le24(qos->bcast.out.interval, cp.bis.sdu_interval);
++	cp.bis.sdu = cpu_to_le16(qos->bcast.out.sdu);
++	cp.bis.latency =  cpu_to_le16(qos->bcast.out.latency);
++	cp.bis.rtn  = qos->bcast.out.rtn;
++	cp.bis.phy  = qos->bcast.out.phy;
++	cp.bis.packing = qos->bcast.packing;
++	cp.bis.framing = qos->bcast.framing;
++	cp.bis.encryption = qos->bcast.encryption;
++	memcpy(cp.bis.bcode, qos->bcast.bcode, sizeof(cp.bis.bcode));
+ 	memset(&cp.bis.bcode, 0, sizeof(cp.bis.bcode));
+ 
+ 	return hci_send_cmd(hdev, HCI_OP_LE_CREATE_BIG, sizeof(cp), &cp);
+@@ -1710,43 +1775,42 @@ static bool hci_le_set_cig_params(struct hci_conn *conn, struct bt_iso_qos *qos)
+ 
+ 	memset(&data, 0, sizeof(data));
+ 
+-	/* Allocate a CIG if not set */
+-	if (qos->cig == BT_ISO_QOS_CIG_UNSET) {
+-		for (data.cig = 0x00; data.cig < 0xff; data.cig++) {
++	/* Allocate first still reconfigurable CIG if not set */
++	if (qos->ucast.cig == BT_ISO_QOS_CIG_UNSET) {
++		for (data.cig = 0x00; data.cig < 0xf0; data.cig++) {
+ 			data.count = 0;
+-			data.cis = 0xff;
+ 
+-			hci_conn_hash_list_state(hdev, cis_list, ISO_LINK,
+-						 BT_BOUND, &data);
++			hci_conn_hash_list_state(hdev, find_cis, ISO_LINK,
++						 BT_CONNECT, &data);
+ 			if (data.count)
+ 				continue;
+ 
+-			hci_conn_hash_list_state(hdev, cis_list, ISO_LINK,
++			hci_conn_hash_list_state(hdev, find_cis, ISO_LINK,
+ 						 BT_CONNECTED, &data);
+ 			if (!data.count)
+ 				break;
+ 		}
+ 
+-		if (data.cig == 0xff)
++		if (data.cig == 0xf0)
+ 			return false;
+ 
+ 		/* Update CIG */
+-		qos->cig = data.cig;
++		qos->ucast.cig = data.cig;
+ 	}
+ 
+-	data.pdu.cp.cig_id = qos->cig;
+-	hci_cpu_to_le24(qos->out.interval, data.pdu.cp.c_interval);
+-	hci_cpu_to_le24(qos->in.interval, data.pdu.cp.p_interval);
+-	data.pdu.cp.sca = qos->sca;
+-	data.pdu.cp.packing = qos->packing;
+-	data.pdu.cp.framing = qos->framing;
+-	data.pdu.cp.c_latency = cpu_to_le16(qos->out.latency);
+-	data.pdu.cp.p_latency = cpu_to_le16(qos->in.latency);
++	data.pdu.cp.cig_id = qos->ucast.cig;
++	hci_cpu_to_le24(qos->ucast.out.interval, data.pdu.cp.c_interval);
++	hci_cpu_to_le24(qos->ucast.in.interval, data.pdu.cp.p_interval);
++	data.pdu.cp.sca = qos->ucast.sca;
++	data.pdu.cp.packing = qos->ucast.packing;
++	data.pdu.cp.framing = qos->ucast.framing;
++	data.pdu.cp.c_latency = cpu_to_le16(qos->ucast.out.latency);
++	data.pdu.cp.p_latency = cpu_to_le16(qos->ucast.in.latency);
+ 
+-	if (qos->cis != BT_ISO_QOS_CIS_UNSET) {
++	if (qos->ucast.cis != BT_ISO_QOS_CIS_UNSET) {
+ 		data.count = 0;
+-		data.cig = qos->cig;
+-		data.cis = qos->cis;
++		data.cig = qos->ucast.cig;
++		data.cis = qos->ucast.cis;
+ 
+ 		hci_conn_hash_list_state(hdev, cis_list, ISO_LINK, BT_BOUND,
+ 					 &data);
+@@ -1757,7 +1821,7 @@ static bool hci_le_set_cig_params(struct hci_conn *conn, struct bt_iso_qos *qos)
+ 	}
+ 
+ 	/* Reprogram all CIS(s) with the same CIG */
+-	for (data.cig = qos->cig, data.cis = 0x00; data.cis < 0x11;
++	for (data.cig = qos->ucast.cig, data.cis = 0x00; data.cis < 0x11;
+ 	     data.cis++) {
+ 		data.count = 0;
+ 
+@@ -1767,14 +1831,14 @@ static bool hci_le_set_cig_params(struct hci_conn *conn, struct bt_iso_qos *qos)
+ 			continue;
+ 
+ 		/* Allocate a CIS if not set */
+-		if (qos->cis == BT_ISO_QOS_CIS_UNSET) {
++		if (qos->ucast.cis == BT_ISO_QOS_CIS_UNSET) {
+ 			/* Update CIS */
+-			qos->cis = data.cis;
++			qos->ucast.cis = data.cis;
+ 			cis_add(&data, qos);
+ 		}
+ 	}
+ 
+-	if (qos->cis == BT_ISO_QOS_CIS_UNSET || !data.pdu.cp.num_cis)
++	if (qos->ucast.cis == BT_ISO_QOS_CIS_UNSET || !data.pdu.cp.num_cis)
+ 		return false;
+ 
+ 	if (hci_send_cmd(hdev, HCI_OP_LE_SET_CIG_PARAMS,
+@@ -1791,7 +1855,8 @@ struct hci_conn *hci_bind_cis(struct hci_dev *hdev, bdaddr_t *dst,
+ {
+ 	struct hci_conn *cis;
+ 
+-	cis = hci_conn_hash_lookup_cis(hdev, dst, dst_type);
++	cis = hci_conn_hash_lookup_cis(hdev, dst, dst_type, qos->ucast.cig,
++				       qos->ucast.cis);
+ 	if (!cis) {
+ 		cis = hci_conn_add(hdev, ISO_LINK, dst, HCI_ROLE_MASTER);
+ 		if (!cis)
+@@ -1809,32 +1874,32 @@ struct hci_conn *hci_bind_cis(struct hci_dev *hdev, bdaddr_t *dst,
+ 		return cis;
+ 
+ 	/* Update LINK PHYs according to QoS preference */
+-	cis->le_tx_phy = qos->out.phy;
+-	cis->le_rx_phy = qos->in.phy;
++	cis->le_tx_phy = qos->ucast.out.phy;
++	cis->le_rx_phy = qos->ucast.in.phy;
+ 
+ 	/* If output interval is not set use the input interval as it cannot be
+ 	 * 0x000000.
+ 	 */
+-	if (!qos->out.interval)
+-		qos->out.interval = qos->in.interval;
++	if (!qos->ucast.out.interval)
++		qos->ucast.out.interval = qos->ucast.in.interval;
+ 
+ 	/* If input interval is not set use the output interval as it cannot be
+ 	 * 0x000000.
+ 	 */
+-	if (!qos->in.interval)
+-		qos->in.interval = qos->out.interval;
++	if (!qos->ucast.in.interval)
++		qos->ucast.in.interval = qos->ucast.out.interval;
+ 
+ 	/* If output latency is not set use the input latency as it cannot be
+ 	 * 0x0000.
+ 	 */
+-	if (!qos->out.latency)
+-		qos->out.latency = qos->in.latency;
++	if (!qos->ucast.out.latency)
++		qos->ucast.out.latency = qos->ucast.in.latency;
+ 
+ 	/* If input latency is not set use the output latency as it cannot be
+ 	 * 0x0000.
+ 	 */
+-	if (!qos->in.latency)
+-		qos->in.latency = qos->out.latency;
++	if (!qos->ucast.in.latency)
++		qos->ucast.in.latency = qos->ucast.out.latency;
+ 
+ 	if (!hci_le_set_cig_params(cis, qos)) {
+ 		hci_conn_drop(cis);
+@@ -1854,7 +1919,7 @@ bool hci_iso_setup_path(struct hci_conn *conn)
+ 
+ 	memset(&cmd, 0, sizeof(cmd));
+ 
+-	if (conn->iso_qos.out.sdu) {
++	if (conn->iso_qos.ucast.out.sdu) {
+ 		cmd.handle = cpu_to_le16(conn->handle);
+ 		cmd.direction = 0x00; /* Input (Host to Controller) */
+ 		cmd.path = 0x00; /* HCI path if enabled */
+@@ -1865,7 +1930,7 @@ bool hci_iso_setup_path(struct hci_conn *conn)
+ 			return false;
+ 	}
+ 
+-	if (conn->iso_qos.in.sdu) {
++	if (conn->iso_qos.ucast.in.sdu) {
+ 		cmd.handle = cpu_to_le16(conn->handle);
+ 		cmd.direction = 0x01; /* Output (Controller to Host) */
+ 		cmd.path = 0x00; /* HCI path if enabled */
+@@ -1889,10 +1954,10 @@ static int hci_create_cis_sync(struct hci_dev *hdev, void *data)
+ 	u8 cig;
+ 
+ 	memset(&cmd, 0, sizeof(cmd));
+-	cmd.cis[0].acl_handle = cpu_to_le16(conn->link->handle);
++	cmd.cis[0].acl_handle = cpu_to_le16(conn->parent->handle);
+ 	cmd.cis[0].cis_handle = cpu_to_le16(conn->handle);
+ 	cmd.cp.num_cis++;
+-	cig = conn->iso_qos.cig;
++	cig = conn->iso_qos.ucast.cig;
+ 
+ 	hci_dev_lock(hdev);
+ 
+@@ -1902,11 +1967,12 @@ static int hci_create_cis_sync(struct hci_dev *hdev, void *data)
+ 		struct hci_cis *cis = &cmd.cis[cmd.cp.num_cis];
+ 
+ 		if (conn == data || conn->type != ISO_LINK ||
+-		    conn->state == BT_CONNECTED || conn->iso_qos.cig != cig)
++		    conn->state == BT_CONNECTED ||
++		    conn->iso_qos.ucast.cig != cig)
+ 			continue;
+ 
+ 		/* Check if all CIS(s) belonging to a CIG are ready */
+-		if (!conn->link || conn->link->state != BT_CONNECTED ||
++		if (!conn->parent || conn->parent->state != BT_CONNECTED ||
+ 		    conn->state != BT_CONNECT) {
+ 			cmd.cp.num_cis = 0;
+ 			break;
+@@ -1923,7 +1989,7 @@ static int hci_create_cis_sync(struct hci_dev *hdev, void *data)
+ 		 * command have been generated, the Controller shall return the
+ 		 * error code Command Disallowed (0x0C).
+ 		 */
+-		cis->acl_handle = cpu_to_le16(conn->link->handle);
++		cis->acl_handle = cpu_to_le16(conn->parent->handle);
+ 		cis->cis_handle = cpu_to_le16(conn->handle);
+ 		cmd.cp.num_cis++;
+ 	}
+@@ -1942,15 +2008,33 @@ static int hci_create_cis_sync(struct hci_dev *hdev, void *data)
+ int hci_le_create_cis(struct hci_conn *conn)
+ {
+ 	struct hci_conn *cis;
++	struct hci_link *link, *t;
+ 	struct hci_dev *hdev = conn->hdev;
+ 	int err;
+ 
++	bt_dev_dbg(hdev, "hcon %p", conn);
++
+ 	switch (conn->type) {
+ 	case LE_LINK:
+-		if (!conn->link || conn->state != BT_CONNECTED)
++		if (conn->state != BT_CONNECTED || list_empty(&conn->link_list))
+ 			return -EINVAL;
+-		cis = conn->link;
+-		break;
++
++		cis = NULL;
++
++		/* hci_conn_link uses list_add_tail_rcu so the list is in
++		 * the same order as the connections are requested.
++		 */
++		list_for_each_entry_safe(link, t, &conn->link_list, list) {
++			if (link->conn->state == BT_BOUND) {
++				err = hci_le_create_cis(link->conn);
++				if (err)
++					return err;
++
++				cis = link->conn;
++			}
++		}
++
++		return cis ? 0 : -EINVAL;
+ 	case ISO_LINK:
+ 		cis = conn;
+ 		break;
+@@ -2002,8 +2086,8 @@ static void hci_bind_bis(struct hci_conn *conn,
+ 			 struct bt_iso_qos *qos)
+ {
+ 	/* Update LINK PHYs according to QoS preference */
+-	conn->le_tx_phy = qos->out.phy;
+-	conn->le_tx_phy = qos->out.phy;
++	conn->le_tx_phy = qos->bcast.out.phy;
++	conn->le_tx_phy = qos->bcast.out.phy;
+ 	conn->iso_qos = *qos;
+ 	conn->state = BT_BOUND;
+ }
+@@ -2016,16 +2100,16 @@ static int create_big_sync(struct hci_dev *hdev, void *data)
+ 	u32 flags = 0;
+ 	int err;
+ 
+-	if (qos->out.phy == 0x02)
++	if (qos->bcast.out.phy == 0x02)
+ 		flags |= MGMT_ADV_FLAG_SEC_2M;
+ 
+ 	/* Align intervals */
+-	interval = qos->out.interval / 1250;
++	interval = qos->bcast.out.interval / 1250;
+ 
+-	if (qos->bis)
+-		sync_interval = qos->sync_interval * 1600;
++	if (qos->bcast.bis)
++		sync_interval = qos->bcast.sync_interval * 1600;
+ 
+-	err = hci_start_per_adv_sync(hdev, qos->bis, conn->le_per_adv_data_len,
++	err = hci_start_per_adv_sync(hdev, qos->bcast.bis, conn->le_per_adv_data_len,
+ 				     conn->le_per_adv_data, flags, interval,
+ 				     interval, sync_interval);
+ 	if (err)
+@@ -2062,7 +2146,7 @@ static int create_pa_sync(struct hci_dev *hdev, void *data)
+ }
+ 
+ int hci_pa_create_sync(struct hci_dev *hdev, bdaddr_t *dst, __u8 dst_type,
+-		       __u8 sid)
++		       __u8 sid, struct bt_iso_qos *qos)
+ {
+ 	struct hci_cp_le_pa_create_sync *cp;
+ 
+@@ -2075,9 +2159,13 @@ int hci_pa_create_sync(struct hci_dev *hdev, bdaddr_t *dst, __u8 dst_type,
+ 		return -ENOMEM;
+ 	}
+ 
++	cp->options = qos->bcast.options;
+ 	cp->sid = sid;
+ 	cp->addr_type = dst_type;
+ 	bacpy(&cp->addr, dst);
++	cp->skip = cpu_to_le16(qos->bcast.skip);
++	cp->sync_timeout = cpu_to_le16(qos->bcast.sync_timeout);
++	cp->sync_cte_type = qos->bcast.sync_cte_type;
+ 
+ 	/* Queue start pa_create_sync and scan */
+ 	return hci_cmd_sync_queue(hdev, create_pa_sync, cp, create_pa_complete);
+@@ -2100,8 +2188,12 @@ int hci_le_big_create_sync(struct hci_dev *hdev, struct bt_iso_qos *qos,
+ 		return err;
+ 
+ 	memset(&pdu, 0, sizeof(pdu));
+-	pdu.cp.handle = qos->big;
++	pdu.cp.handle = qos->bcast.big;
+ 	pdu.cp.sync_handle = cpu_to_le16(sync_handle);
++	pdu.cp.encryption = qos->bcast.encryption;
++	memcpy(pdu.cp.bcode, qos->bcast.bcode, sizeof(pdu.cp.bcode));
++	pdu.cp.mse = qos->bcast.mse;
++	pdu.cp.timeout = cpu_to_le16(qos->bcast.timeout);
+ 	pdu.cp.num_bis = num_bis;
+ 	memcpy(pdu.bis, bis, num_bis);
+ 
+@@ -2151,7 +2243,7 @@ struct hci_conn *hci_connect_bis(struct hci_dev *hdev, bdaddr_t *dst,
+ 		return ERR_PTR(err);
+ 	}
+ 
+-	hci_iso_qos_setup(hdev, conn, &qos->out,
++	hci_iso_qos_setup(hdev, conn, &qos->bcast.out,
+ 			  conn->le_tx_phy ? conn->le_tx_phy :
+ 			  hdev->le_tx_def_phys);
+ 
+@@ -2163,6 +2255,7 @@ struct hci_conn *hci_connect_cis(struct hci_dev *hdev, bdaddr_t *dst,
+ {
+ 	struct hci_conn *le;
+ 	struct hci_conn *cis;
++	struct hci_link *link;
+ 
+ 	if (hci_dev_test_flag(hdev, HCI_ADVERTISING))
+ 		le = hci_connect_le(hdev, dst, dst_type, false,
+@@ -2177,9 +2270,9 @@ struct hci_conn *hci_connect_cis(struct hci_dev *hdev, bdaddr_t *dst,
+ 	if (IS_ERR(le))
+ 		return le;
+ 
+-	hci_iso_qos_setup(hdev, le, &qos->out,
++	hci_iso_qos_setup(hdev, le, &qos->ucast.out,
+ 			  le->le_tx_phy ? le->le_tx_phy : hdev->le_tx_def_phys);
+-	hci_iso_qos_setup(hdev, le, &qos->in,
++	hci_iso_qos_setup(hdev, le, &qos->ucast.in,
+ 			  le->le_rx_phy ? le->le_rx_phy : hdev->le_rx_def_phys);
+ 
+ 	cis = hci_bind_cis(hdev, dst, dst_type, qos);
+@@ -2188,16 +2281,18 @@ struct hci_conn *hci_connect_cis(struct hci_dev *hdev, bdaddr_t *dst,
+ 		return cis;
+ 	}
+ 
+-	le->link = cis;
+-	cis->link = le;
+-
+-	hci_conn_hold(cis);
++	link = hci_conn_link(le, cis);
++	if (!link) {
++		hci_conn_drop(le);
++		hci_conn_drop(cis);
++		return NULL;
++	}
+ 
+ 	/* If LE is already connected and CIS handle is already set proceed to
+ 	 * Create CIS immediately.
+ 	 */
+ 	if (le->state == BT_CONNECTED && cis->handle != HCI_CONN_HANDLE_UNSET)
+-		hci_le_create_cis(le);
++		hci_le_create_cis(cis);
+ 
+ 	return cis;
+ }
+@@ -2437,22 +2532,27 @@ timer:
+ /* Drop all connection on the device */
+ void hci_conn_hash_flush(struct hci_dev *hdev)
+ {
+-	struct hci_conn_hash *h = &hdev->conn_hash;
+-	struct hci_conn *c, *n;
++	struct list_head *head = &hdev->conn_hash.list;
++	struct hci_conn *conn;
+ 
+ 	BT_DBG("hdev %s", hdev->name);
+ 
+-	list_for_each_entry_safe(c, n, &h->list, list) {
+-		c->state = BT_CLOSED;
+-
+-		hci_disconn_cfm(c, HCI_ERROR_LOCAL_HOST_TERM);
++	/* We should not traverse the list here, because hci_conn_del
++	 * can remove extra links, which may cause the list traversal
++	 * to hit items that have already been released.
++	 */
++	while ((conn = list_first_entry_or_null(head,
++						struct hci_conn,
++						list)) != NULL) {
++		conn->state = BT_CLOSED;
++		hci_disconn_cfm(conn, HCI_ERROR_LOCAL_HOST_TERM);
+ 
+ 		/* Unlink before deleting otherwise it is possible that
+ 		 * hci_conn_del removes the link which may cause the list to
+ 		 * contain items already freed.
+ 		 */
+-		hci_conn_unlink(c);
+-		hci_conn_del(c);
++		hci_conn_unlink(conn);
++		hci_conn_del(conn);
+ 	}
+ }
+ 
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index 334e308451f53..ca42129f8f91a 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -1416,10 +1416,10 @@ int hci_remove_link_key(struct hci_dev *hdev, bdaddr_t *bdaddr)
+ 
+ int hci_remove_ltk(struct hci_dev *hdev, bdaddr_t *bdaddr, u8 bdaddr_type)
+ {
+-	struct smp_ltk *k;
++	struct smp_ltk *k, *tmp;
+ 	int removed = 0;
+ 
+-	list_for_each_entry_rcu(k, &hdev->long_term_keys, list) {
++	list_for_each_entry_safe(k, tmp, &hdev->long_term_keys, list) {
+ 		if (bacmp(bdaddr, &k->bdaddr) || k->bdaddr_type != bdaddr_type)
+ 			continue;
+ 
+@@ -1435,9 +1435,9 @@ int hci_remove_ltk(struct hci_dev *hdev, bdaddr_t *bdaddr, u8 bdaddr_type)
+ 
+ void hci_remove_irk(struct hci_dev *hdev, bdaddr_t *bdaddr, u8 addr_type)
+ {
+-	struct smp_irk *k;
++	struct smp_irk *k, *tmp;
+ 
+-	list_for_each_entry_rcu(k, &hdev->identity_resolving_keys, list) {
++	list_for_each_entry_safe(k, tmp, &hdev->identity_resolving_keys, list) {
+ 		if (bacmp(bdaddr, &k->bdaddr) || k->addr_type != addr_type)
+ 			continue;
+ 
+@@ -2685,7 +2685,9 @@ void hci_unregister_dev(struct hci_dev *hdev)
+ {
+ 	BT_DBG("%p name %s bus %d", hdev, hdev->name, hdev->bus);
+ 
++	mutex_lock(&hdev->unregister_lock);
+ 	hci_dev_set_flag(hdev, HCI_UNREGISTER);
++	mutex_unlock(&hdev->unregister_lock);
+ 
+ 	write_lock(&hci_dev_list_lock);
+ 	list_del(&hdev->list);
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index 51f13518dba9b..09ba6d8987ee1 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -1,6 +1,7 @@
+ /*
+    BlueZ - Bluetooth protocol stack for Linux
+    Copyright (c) 2000-2001, 2010, Code Aurora Forum. All rights reserved.
++   Copyright 2023 NXP
+ 
+    Written 2000,2001 by Maxim Krasnyansky <maxk@qualcomm.com>
+ 
+@@ -2344,7 +2345,8 @@ static void hci_cs_create_conn(struct hci_dev *hdev, __u8 status)
+ static void hci_cs_add_sco(struct hci_dev *hdev, __u8 status)
+ {
+ 	struct hci_cp_add_sco *cp;
+-	struct hci_conn *acl, *sco;
++	struct hci_conn *acl;
++	struct hci_link *link;
+ 	__u16 handle;
+ 
+ 	bt_dev_dbg(hdev, "status 0x%2.2x", status);
+@@ -2364,12 +2366,13 @@ static void hci_cs_add_sco(struct hci_dev *hdev, __u8 status)
+ 
+ 	acl = hci_conn_hash_lookup_handle(hdev, handle);
+ 	if (acl) {
+-		sco = acl->link;
+-		if (sco) {
+-			sco->state = BT_CLOSED;
++		link = list_first_entry_or_null(&acl->link_list,
++						struct hci_link, list);
++		if (link && link->conn) {
++			link->conn->state = BT_CLOSED;
+ 
+-			hci_connect_cfm(sco, status);
+-			hci_conn_del(sco);
++			hci_connect_cfm(link->conn, status);
++			hci_conn_del(link->conn);
+ 		}
+ 	}
+ 
+@@ -2636,74 +2639,61 @@ static void hci_cs_read_remote_ext_features(struct hci_dev *hdev, __u8 status)
+ 	hci_dev_unlock(hdev);
+ }
+ 
+-static void hci_cs_setup_sync_conn(struct hci_dev *hdev, __u8 status)
++static void hci_setup_sync_conn_status(struct hci_dev *hdev, __u16 handle,
++				       __u8 status)
+ {
+-	struct hci_cp_setup_sync_conn *cp;
+-	struct hci_conn *acl, *sco;
+-	__u16 handle;
+-
+-	bt_dev_dbg(hdev, "status 0x%2.2x", status);
+-
+-	if (!status)
+-		return;
++	struct hci_conn *acl;
++	struct hci_link *link;
+ 
+-	cp = hci_sent_cmd_data(hdev, HCI_OP_SETUP_SYNC_CONN);
+-	if (!cp)
+-		return;
+-
+-	handle = __le16_to_cpu(cp->handle);
+-
+-	bt_dev_dbg(hdev, "handle 0x%4.4x", handle);
++	bt_dev_dbg(hdev, "handle 0x%4.4x status 0x%2.2x", handle, status);
+ 
+ 	hci_dev_lock(hdev);
+ 
+ 	acl = hci_conn_hash_lookup_handle(hdev, handle);
+ 	if (acl) {
+-		sco = acl->link;
+-		if (sco) {
+-			sco->state = BT_CLOSED;
++		link = list_first_entry_or_null(&acl->link_list,
++						struct hci_link, list);
++		if (link && link->conn) {
++			link->conn->state = BT_CLOSED;
+ 
+-			hci_connect_cfm(sco, status);
+-			hci_conn_del(sco);
++			hci_connect_cfm(link->conn, status);
++			hci_conn_del(link->conn);
+ 		}
+ 	}
+ 
+ 	hci_dev_unlock(hdev);
+ }
+ 
+-static void hci_cs_enhanced_setup_sync_conn(struct hci_dev *hdev, __u8 status)
++static void hci_cs_setup_sync_conn(struct hci_dev *hdev, __u8 status)
+ {
+-	struct hci_cp_enhanced_setup_sync_conn *cp;
+-	struct hci_conn *acl, *sco;
+-	__u16 handle;
++	struct hci_cp_setup_sync_conn *cp;
+ 
+ 	bt_dev_dbg(hdev, "status 0x%2.2x", status);
+ 
+ 	if (!status)
+ 		return;
+ 
+-	cp = hci_sent_cmd_data(hdev, HCI_OP_ENHANCED_SETUP_SYNC_CONN);
++	cp = hci_sent_cmd_data(hdev, HCI_OP_SETUP_SYNC_CONN);
+ 	if (!cp)
+ 		return;
+ 
+-	handle = __le16_to_cpu(cp->handle);
++	hci_setup_sync_conn_status(hdev, __le16_to_cpu(cp->handle), status);
++}
+ 
+-	bt_dev_dbg(hdev, "handle 0x%4.4x", handle);
++static void hci_cs_enhanced_setup_sync_conn(struct hci_dev *hdev, __u8 status)
++{
++	struct hci_cp_enhanced_setup_sync_conn *cp;
+ 
+-	hci_dev_lock(hdev);
++	bt_dev_dbg(hdev, "status 0x%2.2x", status);
+ 
+-	acl = hci_conn_hash_lookup_handle(hdev, handle);
+-	if (acl) {
+-		sco = acl->link;
+-		if (sco) {
+-			sco->state = BT_CLOSED;
++	if (!status)
++		return;
+ 
+-			hci_connect_cfm(sco, status);
+-			hci_conn_del(sco);
+-		}
+-	}
++	cp = hci_sent_cmd_data(hdev, HCI_OP_ENHANCED_SETUP_SYNC_CONN);
++	if (!cp)
++		return;
+ 
+-	hci_dev_unlock(hdev);
++	hci_setup_sync_conn_status(hdev, __le16_to_cpu(cp->handle), status);
+ }
+ 
+ static void hci_cs_sniff_mode(struct hci_dev *hdev, __u8 status)
+@@ -3814,47 +3804,56 @@ static u8 hci_cc_le_set_cig_params(struct hci_dev *hdev, void *data,
+ 				   struct sk_buff *skb)
+ {
+ 	struct hci_rp_le_set_cig_params *rp = data;
++	struct hci_cp_le_set_cig_params *cp;
+ 	struct hci_conn *conn;
+-	int i = 0;
++	u8 status = rp->status;
++	int i;
+ 
+ 	bt_dev_dbg(hdev, "status 0x%2.2x", rp->status);
+ 
++	cp = hci_sent_cmd_data(hdev, HCI_OP_LE_SET_CIG_PARAMS);
++	if (!cp || rp->num_handles != cp->num_cis || rp->cig_id != cp->cig_id) {
++		bt_dev_err(hdev, "unexpected Set CIG Parameters response data");
++		status = HCI_ERROR_UNSPECIFIED;
++	}
++
+ 	hci_dev_lock(hdev);
+ 
+-	if (rp->status) {
++	if (status) {
+ 		while ((conn = hci_conn_hash_lookup_cig(hdev, rp->cig_id))) {
+ 			conn->state = BT_CLOSED;
+-			hci_connect_cfm(conn, rp->status);
++			hci_connect_cfm(conn, status);
+ 			hci_conn_del(conn);
+ 		}
+ 		goto unlock;
+ 	}
+ 
+-	rcu_read_lock();
++	/* BLUETOOTH CORE SPECIFICATION Version 5.3 | Vol 4, Part E page 2553
++	 *
++	 * If the Status return parameter is zero, then the Controller shall
++	 * set the Connection_Handle arrayed return parameter to the connection
++	 * handle(s) corresponding to the CIS configurations specified in
++	 * the CIS_IDs command parameter, in the same order.
++	 */
++	for (i = 0; i < rp->num_handles; ++i) {
++		conn = hci_conn_hash_lookup_cis(hdev, NULL, 0, rp->cig_id,
++						cp->cis[i].cis_id);
++		if (!conn || !bacmp(&conn->dst, BDADDR_ANY))
++			continue;
+ 
+-	list_for_each_entry_rcu(conn, &hdev->conn_hash.list, list) {
+-		if (conn->type != ISO_LINK || conn->iso_qos.cig != rp->cig_id ||
+-		    conn->state == BT_CONNECTED)
++		if (conn->state != BT_BOUND && conn->state != BT_CONNECT)
+ 			continue;
+ 
+-		conn->handle = __le16_to_cpu(rp->handle[i++]);
++		conn->handle = __le16_to_cpu(rp->handle[i]);
+ 
+-		bt_dev_dbg(hdev, "%p handle 0x%4.4x link %p", conn,
+-			   conn->handle, conn->link);
++		bt_dev_dbg(hdev, "%p handle 0x%4.4x parent %p", conn,
++			   conn->handle, conn->parent);
+ 
+ 		/* Create CIS if LE is already connected */
+-		if (conn->link && conn->link->state == BT_CONNECTED) {
+-			rcu_read_unlock();
+-			hci_le_create_cis(conn->link);
+-			rcu_read_lock();
+-		}
+-
+-		if (i == rp->num_handles)
+-			break;
++		if (conn->parent && conn->parent->state == BT_CONNECTED)
++			hci_le_create_cis(conn);
+ 	}
+ 
+-	rcu_read_unlock();
+-
+ unlock:
+ 	hci_dev_unlock(hdev);
+ 
+@@ -3890,7 +3889,7 @@ static u8 hci_cc_le_setup_iso_path(struct hci_dev *hdev, void *data,
+ 	/* Input (Host to Controller) */
+ 	case 0x00:
+ 		/* Only confirm connection if output only */
+-		if (conn->iso_qos.out.sdu && !conn->iso_qos.in.sdu)
++		if (conn->iso_qos.ucast.out.sdu && !conn->iso_qos.ucast.in.sdu)
+ 			hci_connect_cfm(conn, rp->status);
+ 		break;
+ 	/* Output (Controller to Host) */
+@@ -5030,7 +5029,7 @@ static void hci_sync_conn_complete_evt(struct hci_dev *hdev, void *data,
+ 		if (conn->out) {
+ 			conn->pkt_type = (hdev->esco_type & SCO_ESCO_MASK) |
+ 					(hdev->esco_type & EDR_ESCO_MASK);
+-			if (hci_setup_sync(conn, conn->link->handle))
++			if (hci_setup_sync(conn, conn->parent->handle))
+ 				goto unlock;
+ 		}
+ 		fallthrough;
+@@ -6818,15 +6817,15 @@ static void hci_le_cis_estabilished_evt(struct hci_dev *hdev, void *data,
+ 		memset(&interval, 0, sizeof(interval));
+ 
+ 		memcpy(&interval, ev->c_latency, sizeof(ev->c_latency));
+-		conn->iso_qos.in.interval = le32_to_cpu(interval);
++		conn->iso_qos.ucast.in.interval = le32_to_cpu(interval);
+ 		memcpy(&interval, ev->p_latency, sizeof(ev->p_latency));
+-		conn->iso_qos.out.interval = le32_to_cpu(interval);
+-		conn->iso_qos.in.latency = le16_to_cpu(ev->interval);
+-		conn->iso_qos.out.latency = le16_to_cpu(ev->interval);
+-		conn->iso_qos.in.sdu = le16_to_cpu(ev->c_mtu);
+-		conn->iso_qos.out.sdu = le16_to_cpu(ev->p_mtu);
+-		conn->iso_qos.in.phy = ev->c_phy;
+-		conn->iso_qos.out.phy = ev->p_phy;
++		conn->iso_qos.ucast.out.interval = le32_to_cpu(interval);
++		conn->iso_qos.ucast.in.latency = le16_to_cpu(ev->interval);
++		conn->iso_qos.ucast.out.latency = le16_to_cpu(ev->interval);
++		conn->iso_qos.ucast.in.sdu = le16_to_cpu(ev->c_mtu);
++		conn->iso_qos.ucast.out.sdu = le16_to_cpu(ev->p_mtu);
++		conn->iso_qos.ucast.in.phy = ev->c_phy;
++		conn->iso_qos.ucast.out.phy = ev->p_phy;
+ 	}
+ 
+ 	if (!ev->status) {
+@@ -6900,8 +6899,8 @@ static void hci_le_cis_req_evt(struct hci_dev *hdev, void *data,
+ 		cis->handle = cis_handle;
+ 	}
+ 
+-	cis->iso_qos.cig = ev->cig_id;
+-	cis->iso_qos.cis = ev->cis_id;
++	cis->iso_qos.ucast.cig = ev->cig_id;
++	cis->iso_qos.ucast.cis = ev->cis_id;
+ 
+ 	if (!(flags & HCI_PROTO_DEFER)) {
+ 		hci_le_accept_cis(hdev, ev->cis_handle);
+@@ -6988,13 +6987,13 @@ static void hci_le_big_sync_established_evt(struct hci_dev *hdev, void *data,
+ 			bis->handle = handle;
+ 		}
+ 
+-		bis->iso_qos.big = ev->handle;
++		bis->iso_qos.bcast.big = ev->handle;
+ 		memset(&interval, 0, sizeof(interval));
+ 		memcpy(&interval, ev->latency, sizeof(ev->latency));
+-		bis->iso_qos.in.interval = le32_to_cpu(interval);
++		bis->iso_qos.bcast.in.interval = le32_to_cpu(interval);
+ 		/* Convert ISO Interval (1.25 ms slots) to latency (ms) */
+-		bis->iso_qos.in.latency = le16_to_cpu(ev->interval) * 125 / 100;
+-		bis->iso_qos.in.sdu = le16_to_cpu(ev->max_pdu);
++		bis->iso_qos.bcast.in.latency = le16_to_cpu(ev->interval) * 125 / 100;
++		bis->iso_qos.bcast.in.sdu = le16_to_cpu(ev->max_pdu);
+ 
+ 		hci_iso_setup_path(bis);
+ 	}
+diff --git a/net/bluetooth/hci_sync.c b/net/bluetooth/hci_sync.c
+index b65ee3a32e5d7..7f410f441e82c 100644
+--- a/net/bluetooth/hci_sync.c
++++ b/net/bluetooth/hci_sync.c
+@@ -629,6 +629,7 @@ void hci_cmd_sync_init(struct hci_dev *hdev)
+ 	INIT_WORK(&hdev->cmd_sync_work, hci_cmd_sync_work);
+ 	INIT_LIST_HEAD(&hdev->cmd_sync_work_list);
+ 	mutex_init(&hdev->cmd_sync_work_lock);
++	mutex_init(&hdev->unregister_lock);
+ 
+ 	INIT_WORK(&hdev->cmd_sync_cancel_work, hci_cmd_sync_cancel_work);
+ 	INIT_WORK(&hdev->reenable_adv_work, reenable_adv);
+@@ -688,14 +689,19 @@ int hci_cmd_sync_queue(struct hci_dev *hdev, hci_cmd_sync_work_func_t func,
+ 		       void *data, hci_cmd_sync_work_destroy_t destroy)
+ {
+ 	struct hci_cmd_sync_work_entry *entry;
++	int err = 0;
+ 
+-	if (hci_dev_test_flag(hdev, HCI_UNREGISTER))
+-		return -ENODEV;
++	mutex_lock(&hdev->unregister_lock);
++	if (hci_dev_test_flag(hdev, HCI_UNREGISTER)) {
++		err = -ENODEV;
++		goto unlock;
++	}
+ 
+ 	entry = kmalloc(sizeof(*entry), GFP_KERNEL);
+-	if (!entry)
+-		return -ENOMEM;
+-
++	if (!entry) {
++		err = -ENOMEM;
++		goto unlock;
++	}
+ 	entry->func = func;
+ 	entry->data = data;
+ 	entry->destroy = destroy;
+@@ -706,7 +712,9 @@ int hci_cmd_sync_queue(struct hci_dev *hdev, hci_cmd_sync_work_func_t func,
+ 
+ 	queue_work(hdev->req_workqueue, &hdev->cmd_sync_work);
+ 
+-	return 0;
++unlock:
++	mutex_unlock(&hdev->unregister_lock);
++	return err;
+ }
+ EXPORT_SYMBOL(hci_cmd_sync_queue);
+ 
+@@ -4502,6 +4510,9 @@ static int hci_init_sync(struct hci_dev *hdev)
+ 	    !hci_dev_test_flag(hdev, HCI_CONFIG))
+ 		return 0;
+ 
++	if (hci_dev_test_and_set_flag(hdev, HCI_DEBUGFS_CREATED))
++		return 0;
++
+ 	hci_debugfs_create_common(hdev);
+ 
+ 	if (lmp_bredr_capable(hdev))
+diff --git a/net/bluetooth/iso.c b/net/bluetooth/iso.c
+index 8d136a7301630..34d55a85d8f6f 100644
+--- a/net/bluetooth/iso.c
++++ b/net/bluetooth/iso.c
+@@ -3,6 +3,7 @@
+  * BlueZ - Bluetooth protocol stack for Linux
+  *
+  * Copyright (C) 2022 Intel Corporation
++ * Copyright 2023 NXP
+  */
+ 
+ #include <linux/module.h>
+@@ -59,11 +60,17 @@ struct iso_pinfo {
+ 	__u16			sync_handle;
+ 	__u32			flags;
+ 	struct bt_iso_qos	qos;
++	bool			qos_user_set;
+ 	__u8			base_len;
+ 	__u8			base[BASE_MAX_LENGTH];
+ 	struct iso_conn		*conn;
+ };
+ 
++static struct bt_iso_qos default_qos;
++
++static bool check_ucast_qos(struct bt_iso_qos *qos);
++static bool check_bcast_qos(struct bt_iso_qos *qos);
++
+ /* ---- ISO timers ---- */
+ #define ISO_CONN_TIMEOUT	(HZ * 40)
+ #define ISO_DISCONN_TIMEOUT	(HZ * 2)
+@@ -264,8 +271,15 @@ static int iso_connect_bis(struct sock *sk)
+ 		goto unlock;
+ 	}
+ 
++	/* Fail if user set invalid QoS */
++	if (iso_pi(sk)->qos_user_set && !check_bcast_qos(&iso_pi(sk)->qos)) {
++		iso_pi(sk)->qos = default_qos;
++		err = -EINVAL;
++		goto unlock;
++	}
++
+ 	/* Fail if out PHYs are marked as disabled */
+-	if (!iso_pi(sk)->qos.out.phy) {
++	if (!iso_pi(sk)->qos.bcast.out.phy) {
+ 		err = -EINVAL;
+ 		goto unlock;
+ 	}
+@@ -336,8 +350,15 @@ static int iso_connect_cis(struct sock *sk)
+ 		goto unlock;
+ 	}
+ 
++	/* Fail if user set invalid QoS */
++	if (iso_pi(sk)->qos_user_set && !check_ucast_qos(&iso_pi(sk)->qos)) {
++		iso_pi(sk)->qos = default_qos;
++		err = -EINVAL;
++		goto unlock;
++	}
++
+ 	/* Fail if either PHYs are marked as disabled */
+-	if (!iso_pi(sk)->qos.in.phy && !iso_pi(sk)->qos.out.phy) {
++	if (!iso_pi(sk)->qos.ucast.in.phy && !iso_pi(sk)->qos.ucast.out.phy) {
+ 		err = -EINVAL;
+ 		goto unlock;
+ 	}
+@@ -417,7 +438,7 @@ static int iso_send_frame(struct sock *sk, struct sk_buff *skb)
+ 
+ 	BT_DBG("sk %p len %d", sk, skb->len);
+ 
+-	if (skb->len > qos->out.sdu)
++	if (skb->len > qos->ucast.out.sdu)
+ 		return -EMSGSIZE;
+ 
+ 	len = skb->len;
+@@ -680,13 +701,23 @@ static struct proto iso_proto = {
+ }
+ 
+ static struct bt_iso_qos default_qos = {
+-	.cig		= BT_ISO_QOS_CIG_UNSET,
+-	.cis		= BT_ISO_QOS_CIS_UNSET,
+-	.sca		= 0x00,
+-	.packing	= 0x00,
+-	.framing	= 0x00,
+-	.in		= DEFAULT_IO_QOS,
+-	.out		= DEFAULT_IO_QOS,
++	.bcast = {
++		.big			= BT_ISO_QOS_BIG_UNSET,
++		.bis			= BT_ISO_QOS_BIS_UNSET,
++		.sync_interval		= 0x00,
++		.packing		= 0x00,
++		.framing		= 0x00,
++		.in			= DEFAULT_IO_QOS,
++		.out			= DEFAULT_IO_QOS,
++		.encryption		= 0x00,
++		.bcode			= {0x00},
++		.options		= 0x00,
++		.skip			= 0x0000,
++		.sync_timeout		= 0x4000,
++		.sync_cte_type		= 0x00,
++		.mse			= 0x00,
++		.timeout		= 0x4000,
++	},
+ };
+ 
+ static struct sock *iso_sock_alloc(struct net *net, struct socket *sock,
+@@ -893,9 +924,15 @@ static int iso_listen_bis(struct sock *sk)
+ 	if (!hdev)
+ 		return -EHOSTUNREACH;
+ 
++	/* Fail if user set invalid QoS */
++	if (iso_pi(sk)->qos_user_set && !check_bcast_qos(&iso_pi(sk)->qos)) {
++		iso_pi(sk)->qos = default_qos;
++		return -EINVAL;
++	}
++
+ 	err = hci_pa_create_sync(hdev, &iso_pi(sk)->dst,
+ 				 le_addr_type(iso_pi(sk)->dst_type),
+-				 iso_pi(sk)->bc_sid);
++				 iso_pi(sk)->bc_sid, &iso_pi(sk)->qos);
+ 
+ 	hci_dev_put(hdev);
+ 
+@@ -1154,21 +1191,62 @@ static bool check_io_qos(struct bt_iso_io_qos *qos)
+ 	return true;
+ }
+ 
+-static bool check_qos(struct bt_iso_qos *qos)
++static bool check_ucast_qos(struct bt_iso_qos *qos)
+ {
+-	if (qos->sca > 0x07)
++	if (qos->ucast.sca > 0x07)
+ 		return false;
+ 
+-	if (qos->packing > 0x01)
++	if (qos->ucast.packing > 0x01)
+ 		return false;
+ 
+-	if (qos->framing > 0x01)
++	if (qos->ucast.framing > 0x01)
+ 		return false;
+ 
+-	if (!check_io_qos(&qos->in))
++	if (!check_io_qos(&qos->ucast.in))
+ 		return false;
+ 
+-	if (!check_io_qos(&qos->out))
++	if (!check_io_qos(&qos->ucast.out))
++		return false;
++
++	return true;
++}
++
++static bool check_bcast_qos(struct bt_iso_qos *qos)
++{
++	if (qos->bcast.sync_interval > 0x07)
++		return false;
++
++	if (qos->bcast.packing > 0x01)
++		return false;
++
++	if (qos->bcast.framing > 0x01)
++		return false;
++
++	if (!check_io_qos(&qos->bcast.in))
++		return false;
++
++	if (!check_io_qos(&qos->bcast.out))
++		return false;
++
++	if (qos->bcast.encryption > 0x01)
++		return false;
++
++	if (qos->bcast.options > 0x07)
++		return false;
++
++	if (qos->bcast.skip > 0x01f3)
++		return false;
++
++	if (qos->bcast.sync_timeout < 0x000a || qos->bcast.sync_timeout > 0x4000)
++		return false;
++
++	if (qos->bcast.sync_cte_type > 0x1f)
++		return false;
++
++	if (qos->bcast.mse > 0x1f)
++		return false;
++
++	if (qos->bcast.timeout < 0x000a || qos->bcast.timeout > 0x4000)
+ 		return false;
+ 
+ 	return true;
+@@ -1179,7 +1257,7 @@ static int iso_sock_setsockopt(struct socket *sock, int level, int optname,
+ {
+ 	struct sock *sk = sock->sk;
+ 	int len, err = 0;
+-	struct bt_iso_qos qos;
++	struct bt_iso_qos qos = default_qos;
+ 	u32 opt;
+ 
+ 	BT_DBG("sk %p", sk);
+@@ -1212,24 +1290,19 @@ static int iso_sock_setsockopt(struct socket *sock, int level, int optname,
+ 		}
+ 
+ 		len = min_t(unsigned int, sizeof(qos), optlen);
+-		if (len != sizeof(qos)) {
+-			err = -EINVAL;
+-			break;
+-		}
+-
+-		memset(&qos, 0, sizeof(qos));
+ 
+ 		if (copy_from_sockptr(&qos, optval, len)) {
+ 			err = -EFAULT;
+ 			break;
+ 		}
+ 
+-		if (!check_qos(&qos)) {
++		if (len == sizeof(qos.ucast) && !check_ucast_qos(&qos)) {
+ 			err = -EINVAL;
+ 			break;
+ 		}
+ 
+ 		iso_pi(sk)->qos = qos;
++		iso_pi(sk)->qos_user_set = true;
+ 
+ 		break;
+ 
+@@ -1419,7 +1492,7 @@ static bool iso_match_big(struct sock *sk, void *data)
+ {
+ 	struct hci_evt_le_big_sync_estabilished *ev = data;
+ 
+-	return ev->handle == iso_pi(sk)->qos.big;
++	return ev->handle == iso_pi(sk)->qos.bcast.big;
+ }
+ 
+ static void iso_conn_ready(struct iso_conn *conn)
+@@ -1584,8 +1657,12 @@ static void iso_connect_cfm(struct hci_conn *hcon, __u8 status)
+ 
+ 		/* Check if LE link has failed */
+ 		if (status) {
+-			if (hcon->link)
+-				iso_conn_del(hcon->link, bt_to_errno(status));
++			struct hci_link *link, *t;
++
++			list_for_each_entry_safe(link, t, &hcon->link_list,
++						 list)
++				iso_conn_del(link->conn, bt_to_errno(status));
++
+ 			return;
+ 		}
+ 
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index 24d075282996c..5678218a19607 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -4307,6 +4307,10 @@ static int l2cap_connect_create_rsp(struct l2cap_conn *conn,
+ 	result = __le16_to_cpu(rsp->result);
+ 	status = __le16_to_cpu(rsp->status);
+ 
++	if (result == L2CAP_CR_SUCCESS && (dcid < L2CAP_CID_DYN_START ||
++					   dcid > L2CAP_CID_DYN_END))
++		return -EPROTO;
++
+ 	BT_DBG("dcid 0x%4.4x scid 0x%4.4x result 0x%2.2x status 0x%2.2x",
+ 	       dcid, scid, result, status);
+ 
+@@ -4338,6 +4342,11 @@ static int l2cap_connect_create_rsp(struct l2cap_conn *conn,
+ 
+ 	switch (result) {
+ 	case L2CAP_CR_SUCCESS:
++		if (__l2cap_get_chan_by_dcid(conn, dcid)) {
++			err = -EBADSLT;
++			break;
++		}
++
+ 		l2cap_state_change(chan, BT_CONFIG);
+ 		chan->ident = 0;
+ 		chan->dcid = dcid;
+@@ -4664,7 +4673,9 @@ static inline int l2cap_disconnect_req(struct l2cap_conn *conn,
+ 
+ 	chan->ops->set_shutdown(chan);
+ 
++	l2cap_chan_unlock(chan);
+ 	mutex_lock(&conn->chan_lock);
++	l2cap_chan_lock(chan);
+ 	l2cap_chan_del(chan, ECONNRESET);
+ 	mutex_unlock(&conn->chan_lock);
+ 
+@@ -4703,7 +4714,9 @@ static inline int l2cap_disconnect_rsp(struct l2cap_conn *conn,
+ 		return 0;
+ 	}
+ 
++	l2cap_chan_unlock(chan);
+ 	mutex_lock(&conn->chan_lock);
++	l2cap_chan_lock(chan);
+ 	l2cap_chan_del(chan, 0);
+ 	mutex_unlock(&conn->chan_lock);
+ 
+diff --git a/net/can/j1939/main.c b/net/can/j1939/main.c
+index 821d4ff303b35..ecff1c947d683 100644
+--- a/net/can/j1939/main.c
++++ b/net/can/j1939/main.c
+@@ -126,7 +126,7 @@ static void j1939_can_recv(struct sk_buff *iskb, void *data)
+ #define J1939_CAN_ID CAN_EFF_FLAG
+ #define J1939_CAN_MASK (CAN_EFF_FLAG | CAN_RTR_FLAG)
+ 
+-static DEFINE_SPINLOCK(j1939_netdev_lock);
++static DEFINE_MUTEX(j1939_netdev_lock);
+ 
+ static struct j1939_priv *j1939_priv_create(struct net_device *ndev)
+ {
+@@ -220,7 +220,7 @@ static void __j1939_rx_release(struct kref *kref)
+ 	j1939_can_rx_unregister(priv);
+ 	j1939_ecu_unmap_all(priv);
+ 	j1939_priv_set(priv->ndev, NULL);
+-	spin_unlock(&j1939_netdev_lock);
++	mutex_unlock(&j1939_netdev_lock);
+ }
+ 
+ /* get pointer to priv without increasing ref counter */
+@@ -248,9 +248,9 @@ static struct j1939_priv *j1939_priv_get_by_ndev(struct net_device *ndev)
+ {
+ 	struct j1939_priv *priv;
+ 
+-	spin_lock(&j1939_netdev_lock);
++	mutex_lock(&j1939_netdev_lock);
+ 	priv = j1939_priv_get_by_ndev_locked(ndev);
+-	spin_unlock(&j1939_netdev_lock);
++	mutex_unlock(&j1939_netdev_lock);
+ 
+ 	return priv;
+ }
+@@ -260,14 +260,14 @@ struct j1939_priv *j1939_netdev_start(struct net_device *ndev)
+ 	struct j1939_priv *priv, *priv_new;
+ 	int ret;
+ 
+-	spin_lock(&j1939_netdev_lock);
++	mutex_lock(&j1939_netdev_lock);
+ 	priv = j1939_priv_get_by_ndev_locked(ndev);
+ 	if (priv) {
+ 		kref_get(&priv->rx_kref);
+-		spin_unlock(&j1939_netdev_lock);
++		mutex_unlock(&j1939_netdev_lock);
+ 		return priv;
+ 	}
+-	spin_unlock(&j1939_netdev_lock);
++	mutex_unlock(&j1939_netdev_lock);
+ 
+ 	priv = j1939_priv_create(ndev);
+ 	if (!priv)
+@@ -277,29 +277,31 @@ struct j1939_priv *j1939_netdev_start(struct net_device *ndev)
+ 	spin_lock_init(&priv->j1939_socks_lock);
+ 	INIT_LIST_HEAD(&priv->j1939_socks);
+ 
+-	spin_lock(&j1939_netdev_lock);
++	mutex_lock(&j1939_netdev_lock);
+ 	priv_new = j1939_priv_get_by_ndev_locked(ndev);
+ 	if (priv_new) {
+ 		/* Someone was faster than us, use their priv and roll
+ 		 * back our's.
+ 		 */
+ 		kref_get(&priv_new->rx_kref);
+-		spin_unlock(&j1939_netdev_lock);
++		mutex_unlock(&j1939_netdev_lock);
+ 		dev_put(ndev);
+ 		kfree(priv);
+ 		return priv_new;
+ 	}
+ 	j1939_priv_set(ndev, priv);
+-	spin_unlock(&j1939_netdev_lock);
+ 
+ 	ret = j1939_can_rx_register(priv);
+ 	if (ret < 0)
+ 		goto out_priv_put;
+ 
++	mutex_unlock(&j1939_netdev_lock);
+ 	return priv;
+ 
+  out_priv_put:
+ 	j1939_priv_set(ndev, NULL);
++	mutex_unlock(&j1939_netdev_lock);
++
+ 	dev_put(ndev);
+ 	kfree(priv);
+ 
+@@ -308,7 +310,7 @@ struct j1939_priv *j1939_netdev_start(struct net_device *ndev)
+ 
+ void j1939_netdev_stop(struct j1939_priv *priv)
+ {
+-	kref_put_lock(&priv->rx_kref, __j1939_rx_release, &j1939_netdev_lock);
++	kref_put_mutex(&priv->rx_kref, __j1939_rx_release, &j1939_netdev_lock);
+ 	j1939_priv_put(priv);
+ }
+ 
+diff --git a/net/can/j1939/socket.c b/net/can/j1939/socket.c
+index 1790469b25808..35970c25496ab 100644
+--- a/net/can/j1939/socket.c
++++ b/net/can/j1939/socket.c
+@@ -1088,6 +1088,11 @@ void j1939_sk_errqueue(struct j1939_session *session,
+ 
+ void j1939_sk_send_loop_abort(struct sock *sk, int err)
+ {
++	struct j1939_sock *jsk = j1939_sk(sk);
++
++	if (jsk->state & J1939_SOCK_ERRQUEUE)
++		return;
++
+ 	sk->sk_err = err;
+ 
+ 	sk_error_report(sk);
+diff --git a/net/core/dev.c b/net/core/dev.c
+index b3d8e74fcaf06..bcb654fd519bd 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -4471,8 +4471,10 @@ static int get_rps_cpu(struct net_device *dev, struct sk_buff *skb,
+ 		u32 next_cpu;
+ 		u32 ident;
+ 
+-		/* First check into global flow table if there is a match */
+-		ident = sock_flow_table->ents[hash & sock_flow_table->mask];
++		/* First check into global flow table if there is a match.
++		 * This READ_ONCE() pairs with WRITE_ONCE() from rps_record_sock_flow().
++		 */
++		ident = READ_ONCE(sock_flow_table->ents[hash & sock_flow_table->mask]);
+ 		if ((ident ^ hash) & ~rps_cpu_mask)
+ 			goto try_rps;
+ 
+@@ -10505,7 +10507,7 @@ struct netdev_queue *dev_ingress_queue_create(struct net_device *dev)
+ 		return NULL;
+ 	netdev_init_one_queue(dev, queue, NULL);
+ 	RCU_INIT_POINTER(queue->qdisc, &noop_qdisc);
+-	queue->qdisc_sleeping = &noop_qdisc;
++	RCU_INIT_POINTER(queue->qdisc_sleeping, &noop_qdisc);
+ 	rcu_assign_pointer(dev->ingress_queue, queue);
+ #endif
+ 	return queue;
+diff --git a/net/core/skmsg.c b/net/core/skmsg.c
+index a9060e1f0e437..a29508e1ff356 100644
+--- a/net/core/skmsg.c
++++ b/net/core/skmsg.c
+@@ -1210,7 +1210,8 @@ static void sk_psock_verdict_data_ready(struct sock *sk)
+ 
+ 		rcu_read_lock();
+ 		psock = sk_psock(sk);
+-		psock->saved_data_ready(sk);
++		if (psock)
++			psock->saved_data_ready(sk);
+ 		rcu_read_unlock();
+ 	}
+ }
+diff --git a/net/ipv4/sysctl_net_ipv4.c b/net/ipv4/sysctl_net_ipv4.c
+index 40fe70fc2015d..88dfe51e68f3c 100644
+--- a/net/ipv4/sysctl_net_ipv4.c
++++ b/net/ipv4/sysctl_net_ipv4.c
+@@ -34,8 +34,8 @@ static int ip_ttl_min = 1;
+ static int ip_ttl_max = 255;
+ static int tcp_syn_retries_min = 1;
+ static int tcp_syn_retries_max = MAX_TCP_SYNCNT;
+-static int ip_ping_group_range_min[] = { 0, 0 };
+-static int ip_ping_group_range_max[] = { GID_T_MAX, GID_T_MAX };
++static unsigned long ip_ping_group_range_min[] = { 0, 0 };
++static unsigned long ip_ping_group_range_max[] = { GID_T_MAX, GID_T_MAX };
+ static u32 u32_max_div_HZ = UINT_MAX / HZ;
+ static int one_day_secs = 24 * 3600;
+ static u32 fib_multipath_hash_fields_all_mask __maybe_unused =
+@@ -165,7 +165,7 @@ static int ipv4_ping_group_range(struct ctl_table *table, int write,
+ {
+ 	struct user_namespace *user_ns = current_user_ns();
+ 	int ret;
+-	gid_t urange[2];
++	unsigned long urange[2];
+ 	kgid_t low, high;
+ 	struct ctl_table tmp = {
+ 		.data = &urange,
+@@ -178,7 +178,7 @@ static int ipv4_ping_group_range(struct ctl_table *table, int write,
+ 	inet_get_ping_group_range_table(table, &low, &high);
+ 	urange[0] = from_kgid_munged(user_ns, low);
+ 	urange[1] = from_kgid_munged(user_ns, high);
+-	ret = proc_dointvec_minmax(&tmp, write, buffer, lenp, ppos);
++	ret = proc_doulongvec_minmax(&tmp, write, buffer, lenp, ppos);
+ 
+ 	if (write && ret == 0) {
+ 		low = make_kgid(user_ns, urange[0]);
+diff --git a/net/ipv4/tcp_offload.c b/net/ipv4/tcp_offload.c
+index 45dda78893870..4851211aa60d6 100644
+--- a/net/ipv4/tcp_offload.c
++++ b/net/ipv4/tcp_offload.c
+@@ -60,12 +60,12 @@ struct sk_buff *tcp_gso_segment(struct sk_buff *skb,
+ 	struct tcphdr *th;
+ 	unsigned int thlen;
+ 	unsigned int seq;
+-	__be32 delta;
+ 	unsigned int oldlen;
+ 	unsigned int mss;
+ 	struct sk_buff *gso_skb = skb;
+ 	__sum16 newcheck;
+ 	bool ooo_okay, copy_destructor;
++	__wsum delta;
+ 
+ 	th = tcp_hdr(skb);
+ 	thlen = th->doff * 4;
+@@ -75,7 +75,7 @@ struct sk_buff *tcp_gso_segment(struct sk_buff *skb,
+ 	if (!pskb_may_pull(skb, thlen))
+ 		goto out;
+ 
+-	oldlen = (u16)~skb->len;
++	oldlen = ~skb->len;
+ 	__skb_pull(skb, thlen);
+ 
+ 	mss = skb_shinfo(skb)->gso_size;
+@@ -110,7 +110,7 @@ struct sk_buff *tcp_gso_segment(struct sk_buff *skb,
+ 	if (skb_is_gso(segs))
+ 		mss *= skb_shinfo(segs)->gso_segs;
+ 
+-	delta = htonl(oldlen + (thlen + mss));
++	delta = (__force __wsum)htonl(oldlen + thlen + mss);
+ 
+ 	skb = segs;
+ 	th = tcp_hdr(skb);
+@@ -119,8 +119,7 @@ struct sk_buff *tcp_gso_segment(struct sk_buff *skb,
+ 	if (unlikely(skb_shinfo(gso_skb)->tx_flags & SKBTX_SW_TSTAMP))
+ 		tcp_gso_tstamp(segs, skb_shinfo(gso_skb)->tskey, seq, mss);
+ 
+-	newcheck = ~csum_fold((__force __wsum)((__force u32)th->check +
+-					       (__force u32)delta));
++	newcheck = ~csum_fold(csum_add(csum_unfold(th->check), delta));
+ 
+ 	while (skb->next) {
+ 		th->fin = th->psh = 0;
+@@ -165,11 +164,11 @@ struct sk_buff *tcp_gso_segment(struct sk_buff *skb,
+ 			WARN_ON_ONCE(refcount_sub_and_test(-delta, &skb->sk->sk_wmem_alloc));
+ 	}
+ 
+-	delta = htonl(oldlen + (skb_tail_pointer(skb) -
+-				skb_transport_header(skb)) +
+-		      skb->data_len);
+-	th->check = ~csum_fold((__force __wsum)((__force u32)th->check +
+-				(__force u32)delta));
++	delta = (__force __wsum)htonl(oldlen +
++				      (skb_tail_pointer(skb) -
++				       skb_transport_header(skb)) +
++				      skb->data_len);
++	th->check = ~csum_fold(csum_add(csum_unfold(th->check), delta));
+ 	if (skb->ip_summed == CHECKSUM_PARTIAL)
+ 		gso_reset_checksum(skb, ~th->check);
+ 	else
+diff --git a/net/ipv6/exthdrs.c b/net/ipv6/exthdrs.c
+index a8d961d3a477f..5fa0e37305d9d 100644
+--- a/net/ipv6/exthdrs.c
++++ b/net/ipv6/exthdrs.c
+@@ -569,24 +569,6 @@ looped_back:
+ 		return -1;
+ 	}
+ 
+-	if (skb_cloned(skb)) {
+-		if (pskb_expand_head(skb, IPV6_RPL_SRH_WORST_SWAP_SIZE, 0,
+-				     GFP_ATOMIC)) {
+-			__IP6_INC_STATS(net, ip6_dst_idev(skb_dst(skb)),
+-					IPSTATS_MIB_OUTDISCARDS);
+-			kfree_skb(skb);
+-			return -1;
+-		}
+-	} else {
+-		err = skb_cow_head(skb, IPV6_RPL_SRH_WORST_SWAP_SIZE);
+-		if (unlikely(err)) {
+-			kfree_skb(skb);
+-			return -1;
+-		}
+-	}
+-
+-	hdr = (struct ipv6_rpl_sr_hdr *)skb_transport_header(skb);
+-
+ 	if (!pskb_may_pull(skb, ipv6_rpl_srh_size(n, hdr->cmpri,
+ 						  hdr->cmpre))) {
+ 		kfree_skb(skb);
+@@ -630,6 +612,17 @@ looped_back:
+ 	skb_pull(skb, ((hdr->hdrlen + 1) << 3));
+ 	skb_postpull_rcsum(skb, oldhdr,
+ 			   sizeof(struct ipv6hdr) + ((hdr->hdrlen + 1) << 3));
++	if (unlikely(!hdr->segments_left)) {
++		if (pskb_expand_head(skb, sizeof(struct ipv6hdr) + ((chdr->hdrlen + 1) << 3), 0,
++				     GFP_ATOMIC)) {
++			__IP6_INC_STATS(net, ip6_dst_idev(skb_dst(skb)), IPSTATS_MIB_OUTDISCARDS);
++			kfree_skb(skb);
++			kfree(buf);
++			return -1;
++		}
++
++		oldhdr = ipv6_hdr(skb);
++	}
+ 	skb_push(skb, ((chdr->hdrlen + 1) << 3) + sizeof(struct ipv6hdr));
+ 	skb_reset_network_header(skb);
+ 	skb_mac_header_rebuild(skb);
+diff --git a/net/mac80211/he.c b/net/mac80211/he.c
+index 729f261520c77..0322abae08250 100644
+--- a/net/mac80211/he.c
++++ b/net/mac80211/he.c
+@@ -3,7 +3,7 @@
+  * HE handling
+  *
+  * Copyright(c) 2017 Intel Deutschland GmbH
+- * Copyright(c) 2019 - 2022 Intel Corporation
++ * Copyright(c) 2019 - 2023 Intel Corporation
+  */
+ 
+ #include "ieee80211_i.h"
+@@ -114,6 +114,7 @@ ieee80211_he_cap_ie_to_sta_he_cap(struct ieee80211_sub_if_data *sdata,
+ 				  struct link_sta_info *link_sta)
+ {
+ 	struct ieee80211_sta_he_cap *he_cap = &link_sta->pub->he_cap;
++	const struct ieee80211_sta_he_cap *own_he_cap_ptr;
+ 	struct ieee80211_sta_he_cap own_he_cap;
+ 	struct ieee80211_he_cap_elem *he_cap_ie_elem = (void *)he_cap_ie;
+ 	u8 he_ppe_size;
+@@ -123,12 +124,16 @@ ieee80211_he_cap_ie_to_sta_he_cap(struct ieee80211_sub_if_data *sdata,
+ 
+ 	memset(he_cap, 0, sizeof(*he_cap));
+ 
+-	if (!he_cap_ie ||
+-	    !ieee80211_get_he_iftype_cap(sband,
+-					 ieee80211_vif_type_p2p(&sdata->vif)))
++	if (!he_cap_ie)
+ 		return;
+ 
+-	own_he_cap = sband->iftype_data->he_cap;
++	own_he_cap_ptr =
++		ieee80211_get_he_iftype_cap(sband,
++					    ieee80211_vif_type_p2p(&sdata->vif));
++	if (!own_he_cap_ptr)
++		return;
++
++	own_he_cap = *own_he_cap_ptr;
+ 
+ 	/* Make sure size is OK */
+ 	mcs_nss_size = ieee80211_he_mcs_nss_size(he_cap_ie_elem);
+diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
+index 60792dfabc9d6..7a970b6dda640 100644
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -1217,6 +1217,7 @@ static void ieee80211_add_non_inheritance_elem(struct sk_buff *skb,
+ 					       const u16 *inner)
+ {
+ 	unsigned int skb_len = skb->len;
++	bool at_extension = false;
+ 	bool added = false;
+ 	int i, j;
+ 	u8 *len, *list_len = NULL;
+@@ -1228,7 +1229,6 @@ static void ieee80211_add_non_inheritance_elem(struct sk_buff *skb,
+ 	for (i = 0; i < PRESENT_ELEMS_MAX && outer[i]; i++) {
+ 		u16 elem = outer[i];
+ 		bool have_inner = false;
+-		bool at_extension = false;
+ 
+ 		/* should at least be sorted in the sense of normal -> ext */
+ 		WARN_ON(at_extension && elem < PRESENT_ELEM_EXT_OFFS);
+@@ -1257,8 +1257,14 @@ static void ieee80211_add_non_inheritance_elem(struct sk_buff *skb,
+ 		}
+ 		*list_len += 1;
+ 		skb_put_u8(skb, (u8)elem);
++		added = true;
+ 	}
+ 
++	/* if we added a list but no extension list, make a zero-len one */
++	if (added && (!at_extension || !list_len))
++		skb_put_u8(skb, 0);
++
++	/* if nothing added remove extension element completely */
+ 	if (!added)
+ 		skb_trim(skb, skb_len);
+ 	else
+diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c
+index af57616d2f1d9..0e66ece35f8e2 100644
+--- a/net/mac80211/rx.c
++++ b/net/mac80211/rx.c
+@@ -4884,7 +4884,9 @@ static bool ieee80211_prepare_and_rx_handle(struct ieee80211_rx_data *rx,
+ 	}
+ 
+ 	if (unlikely(rx->sta && rx->sta->sta.mlo) &&
+-	    is_unicast_ether_addr(hdr->addr1)) {
++	    is_unicast_ether_addr(hdr->addr1) &&
++	    !ieee80211_is_probe_resp(hdr->frame_control) &&
++	    !ieee80211_is_beacon(hdr->frame_control)) {
+ 		/* translate to MLD addresses */
+ 		if (ether_addr_equal(link->conf->addr, hdr->addr1))
+ 			ether_addr_copy(hdr->addr1, rx->sdata->vif.addr);
+diff --git a/net/mptcp/pm.c b/net/mptcp/pm.c
+index 70f0ced3ca86e..10c288a0cb0c2 100644
+--- a/net/mptcp/pm.c
++++ b/net/mptcp/pm.c
+@@ -87,8 +87,15 @@ bool mptcp_pm_allow_new_subflow(struct mptcp_sock *msk)
+ 	unsigned int subflows_max;
+ 	int ret = 0;
+ 
+-	if (mptcp_pm_is_userspace(msk))
+-		return mptcp_userspace_pm_active(msk);
++	if (mptcp_pm_is_userspace(msk)) {
++		if (mptcp_userspace_pm_active(msk)) {
++			spin_lock_bh(&pm->lock);
++			pm->subflows++;
++			spin_unlock_bh(&pm->lock);
++			return true;
++		}
++		return false;
++	}
+ 
+ 	subflows_max = mptcp_pm_get_subflows_max(msk);
+ 
+@@ -181,8 +188,16 @@ void mptcp_pm_subflow_check_next(struct mptcp_sock *msk, const struct sock *ssk,
+ 	struct mptcp_pm_data *pm = &msk->pm;
+ 	bool update_subflows;
+ 
+-	update_subflows = (subflow->request_join || subflow->mp_join) &&
+-			  mptcp_pm_is_kernel(msk);
++	update_subflows = subflow->request_join || subflow->mp_join;
++	if (mptcp_pm_is_userspace(msk)) {
++		if (update_subflows) {
++			spin_lock_bh(&pm->lock);
++			pm->subflows--;
++			spin_unlock_bh(&pm->lock);
++		}
++		return;
++	}
++
+ 	if (!READ_ONCE(pm->work_pending) && !update_subflows)
+ 		return;
+ 
+diff --git a/net/mptcp/pm_netlink.c b/net/mptcp/pm_netlink.c
+index 5c8dea49626c3..5396907cc5963 100644
+--- a/net/mptcp/pm_netlink.c
++++ b/net/mptcp/pm_netlink.c
+@@ -1558,6 +1558,24 @@ static int mptcp_nl_cmd_del_addr(struct sk_buff *skb, struct genl_info *info)
+ 	return ret;
+ }
+ 
++void mptcp_pm_remove_addrs(struct mptcp_sock *msk, struct list_head *rm_list)
++{
++	struct mptcp_rm_list alist = { .nr = 0 };
++	struct mptcp_pm_addr_entry *entry;
++
++	list_for_each_entry(entry, rm_list, list) {
++		remove_anno_list_by_saddr(msk, &entry->addr);
++		if (alist.nr < MPTCP_RM_IDS_MAX)
++			alist.ids[alist.nr++] = entry->addr.id;
++	}
++
++	if (alist.nr) {
++		spin_lock_bh(&msk->pm.lock);
++		mptcp_pm_remove_addr(msk, &alist);
++		spin_unlock_bh(&msk->pm.lock);
++	}
++}
++
+ void mptcp_pm_remove_addrs_and_subflows(struct mptcp_sock *msk,
+ 					struct list_head *rm_list)
+ {
+diff --git a/net/mptcp/pm_userspace.c b/net/mptcp/pm_userspace.c
+index a02d3cbf2a1b6..c3e1bb0ac3d5d 100644
+--- a/net/mptcp/pm_userspace.c
++++ b/net/mptcp/pm_userspace.c
+@@ -69,6 +69,7 @@ int mptcp_userspace_pm_append_new_local_addr(struct mptcp_sock *msk,
+ 							MPTCP_PM_MAX_ADDR_ID + 1,
+ 							1);
+ 		list_add_tail_rcu(&e->list, &msk->pm.userspace_pm_local_addr_list);
++		msk->pm.local_addr_used++;
+ 		ret = e->addr.id;
+ 	} else if (match) {
+ 		ret = entry->addr.id;
+@@ -79,6 +80,31 @@ append_err:
+ 	return ret;
+ }
+ 
++/* If the subflow is closed from the other peer (not via a
++ * subflow destroy command then), we want to keep the entry
++ * not to assign the same ID to another address and to be
++ * able to send RM_ADDR after the removal of the subflow.
++ */
++static int mptcp_userspace_pm_delete_local_addr(struct mptcp_sock *msk,
++						struct mptcp_pm_addr_entry *addr)
++{
++	struct mptcp_pm_addr_entry *entry, *tmp;
++
++	list_for_each_entry_safe(entry, tmp, &msk->pm.userspace_pm_local_addr_list, list) {
++		if (mptcp_addresses_equal(&entry->addr, &addr->addr, false)) {
++			/* TODO: a refcount is needed because the entry can
++			 * be used multiple times (e.g. fullmesh mode).
++			 */
++			list_del_rcu(&entry->list);
++			kfree(entry);
++			msk->pm.local_addr_used--;
++			return 0;
++		}
++	}
++
++	return -EINVAL;
++}
++
+ int mptcp_userspace_pm_get_flags_and_ifindex_by_id(struct mptcp_sock *msk,
+ 						   unsigned int id,
+ 						   u8 *flags, int *ifindex)
+@@ -171,6 +197,7 @@ int mptcp_nl_cmd_announce(struct sk_buff *skb, struct genl_info *info)
+ 	spin_lock_bh(&msk->pm.lock);
+ 
+ 	if (mptcp_pm_alloc_anno_list(msk, &addr_val)) {
++		msk->pm.add_addr_signaled++;
+ 		mptcp_pm_announce_addr(msk, &addr_val.addr, false);
+ 		mptcp_pm_nl_addr_send_ack(msk);
+ 	}
+@@ -232,7 +259,7 @@ int mptcp_nl_cmd_remove(struct sk_buff *skb, struct genl_info *info)
+ 
+ 	list_move(&match->list, &free_list);
+ 
+-	mptcp_pm_remove_addrs_and_subflows(msk, &free_list);
++	mptcp_pm_remove_addrs(msk, &free_list);
+ 
+ 	release_sock((struct sock *)msk);
+ 
+@@ -251,6 +278,7 @@ int mptcp_nl_cmd_sf_create(struct sk_buff *skb, struct genl_info *info)
+ 	struct nlattr *raddr = info->attrs[MPTCP_PM_ATTR_ADDR_REMOTE];
+ 	struct nlattr *token = info->attrs[MPTCP_PM_ATTR_TOKEN];
+ 	struct nlattr *laddr = info->attrs[MPTCP_PM_ATTR_ADDR];
++	struct mptcp_pm_addr_entry local = { 0 };
+ 	struct mptcp_addr_info addr_r;
+ 	struct mptcp_addr_info addr_l;
+ 	struct mptcp_sock *msk;
+@@ -302,12 +330,26 @@ int mptcp_nl_cmd_sf_create(struct sk_buff *skb, struct genl_info *info)
+ 		goto create_err;
+ 	}
+ 
++	local.addr = addr_l;
++	err = mptcp_userspace_pm_append_new_local_addr(msk, &local);
++	if (err < 0) {
++		GENL_SET_ERR_MSG(info, "did not match address and id");
++		goto create_err;
++	}
++
+ 	lock_sock(sk);
+ 
+ 	err = __mptcp_subflow_connect(sk, &addr_l, &addr_r);
+ 
+ 	release_sock(sk);
+ 
++	spin_lock_bh(&msk->pm.lock);
++	if (err)
++		mptcp_userspace_pm_delete_local_addr(msk, &local);
++	else
++		msk->pm.subflows++;
++	spin_unlock_bh(&msk->pm.lock);
++
+  create_err:
+ 	sock_put((struct sock *)msk);
+ 	return err;
+@@ -420,7 +462,11 @@ int mptcp_nl_cmd_sf_destroy(struct sk_buff *skb, struct genl_info *info)
+ 	ssk = mptcp_nl_find_ssk(msk, &addr_l, &addr_r);
+ 	if (ssk) {
+ 		struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
++		struct mptcp_pm_addr_entry entry = { .addr = addr_l };
+ 
++		spin_lock_bh(&msk->pm.lock);
++		mptcp_userspace_pm_delete_local_addr(msk, &entry);
++		spin_unlock_bh(&msk->pm.lock);
+ 		mptcp_subflow_shutdown(sk, ssk, RCV_SHUTDOWN | SEND_SHUTDOWN);
+ 		mptcp_close_ssk(sk, ssk, subflow);
+ 		MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_RMSUBFLOW);
+diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
+index 9c4860cf18a97..554e676fa619c 100644
+--- a/net/mptcp/protocol.h
++++ b/net/mptcp/protocol.h
+@@ -835,6 +835,7 @@ int mptcp_pm_announce_addr(struct mptcp_sock *msk,
+ 			   bool echo);
+ int mptcp_pm_remove_addr(struct mptcp_sock *msk, const struct mptcp_rm_list *rm_list);
+ int mptcp_pm_remove_subflow(struct mptcp_sock *msk, const struct mptcp_rm_list *rm_list);
++void mptcp_pm_remove_addrs(struct mptcp_sock *msk, struct list_head *rm_list);
+ void mptcp_pm_remove_addrs_and_subflows(struct mptcp_sock *msk,
+ 					struct list_head *rm_list);
+ 
+diff --git a/net/netfilter/ipset/ip_set_core.c b/net/netfilter/ipset/ip_set_core.c
+index 46ebee9400dab..9a6b64779e644 100644
+--- a/net/netfilter/ipset/ip_set_core.c
++++ b/net/netfilter/ipset/ip_set_core.c
+@@ -1694,6 +1694,14 @@ call_ad(struct net *net, struct sock *ctnl, struct sk_buff *skb,
+ 	bool eexist = flags & IPSET_FLAG_EXIST, retried = false;
+ 
+ 	do {
++		if (retried) {
++			__ip_set_get(set);
++			nfnl_unlock(NFNL_SUBSYS_IPSET);
++			cond_resched();
++			nfnl_lock(NFNL_SUBSYS_IPSET);
++			__ip_set_put(set);
++		}
++
+ 		ip_set_lock(set);
+ 		ret = set->variant->uadt(set, tb, adt, &lineno, flags, retried);
+ 		ip_set_unlock(set);
+diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c
+index 7ba6ab9b54b56..06582f0a5393c 100644
+--- a/net/netfilter/nf_conntrack_core.c
++++ b/net/netfilter/nf_conntrack_core.c
+@@ -2260,6 +2260,9 @@ static int nf_confirm_cthelper(struct sk_buff *skb, struct nf_conn *ct,
+ 		return 0;
+ 
+ 	helper = rcu_dereference(help->helper);
++	if (!helper)
++		return 0;
++
+ 	if (!(helper->flags & NF_CT_HELPER_F_USERSPACE))
+ 		return 0;
+ 
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index ef80504c3ccd2..368aeabd8f8f1 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -1593,6 +1593,8 @@ static int nft_dump_basechain_hook(struct sk_buff *skb, int family,
+ 
+ 	if (nft_base_chain_netdev(family, ops->hooknum)) {
+ 		nest_devs = nla_nest_start_noflag(skb, NFTA_HOOK_DEVS);
++		if (!nest_devs)
++			goto nla_put_failure;
+ 
+ 		if (!hook_list)
+ 			hook_list = &basechain->hook_list;
+@@ -8919,7 +8921,7 @@ static int nf_tables_commit_chain_prepare(struct net *net, struct nft_chain *cha
+ 				continue;
+ 			}
+ 
+-			if (WARN_ON_ONCE(data + expr->ops->size > data_boundary))
++			if (WARN_ON_ONCE(data + size + expr->ops->size > data_boundary))
+ 				return -ENOMEM;
+ 
+ 			memcpy(data + size, expr, expr->ops->size);
+diff --git a/net/netfilter/nft_bitwise.c b/net/netfilter/nft_bitwise.c
+index 84eae7cabc67a..2527a01486efc 100644
+--- a/net/netfilter/nft_bitwise.c
++++ b/net/netfilter/nft_bitwise.c
+@@ -323,7 +323,7 @@ static bool nft_bitwise_reduce(struct nft_regs_track *track,
+ 	dreg = priv->dreg;
+ 	regcount = DIV_ROUND_UP(priv->len, NFT_REG32_SIZE);
+ 	for (i = 0; i < regcount; i++, dreg++)
+-		track->regs[priv->dreg].bitwise = expr;
++		track->regs[dreg].bitwise = expr;
+ 
+ 	return false;
+ }
+diff --git a/net/openvswitch/datapath.c b/net/openvswitch/datapath.c
+index fcee6012293b1..58f530f60172a 100644
+--- a/net/openvswitch/datapath.c
++++ b/net/openvswitch/datapath.c
+@@ -236,9 +236,6 @@ void ovs_dp_detach_port(struct vport *p)
+ 	/* First drop references to device. */
+ 	hlist_del_rcu(&p->dp_hash_node);
+ 
+-	/* Free percpu memory */
+-	free_percpu(p->upcall_stats);
+-
+ 	/* Then destroy it. */
+ 	ovs_vport_del(p);
+ }
+@@ -1858,12 +1855,6 @@ static int ovs_dp_cmd_new(struct sk_buff *skb, struct genl_info *info)
+ 		goto err_destroy_portids;
+ 	}
+ 
+-	vport->upcall_stats = netdev_alloc_pcpu_stats(struct vport_upcall_stats_percpu);
+-	if (!vport->upcall_stats) {
+-		err = -ENOMEM;
+-		goto err_destroy_vport;
+-	}
+-
+ 	err = ovs_dp_cmd_fill_info(dp, reply, info->snd_portid,
+ 				   info->snd_seq, 0, OVS_DP_CMD_NEW);
+ 	BUG_ON(err < 0);
+@@ -1876,8 +1867,6 @@ static int ovs_dp_cmd_new(struct sk_buff *skb, struct genl_info *info)
+ 	ovs_notify(&dp_datapath_genl_family, reply, info);
+ 	return 0;
+ 
+-err_destroy_vport:
+-	ovs_dp_detach_port(vport);
+ err_destroy_portids:
+ 	kfree(rcu_dereference_raw(dp->upcall_portids));
+ err_unlock_and_destroy_meters:
+@@ -2322,12 +2311,6 @@ restart:
+ 		goto exit_unlock_free;
+ 	}
+ 
+-	vport->upcall_stats = netdev_alloc_pcpu_stats(struct vport_upcall_stats_percpu);
+-	if (!vport->upcall_stats) {
+-		err = -ENOMEM;
+-		goto exit_unlock_free_vport;
+-	}
+-
+ 	err = ovs_vport_cmd_fill_info(vport, reply, genl_info_net(info),
+ 				      info->snd_portid, info->snd_seq, 0,
+ 				      OVS_VPORT_CMD_NEW, GFP_KERNEL);
+@@ -2345,8 +2328,6 @@ restart:
+ 	ovs_notify(&dp_vport_genl_family, reply, info);
+ 	return 0;
+ 
+-exit_unlock_free_vport:
+-	ovs_dp_detach_port(vport);
+ exit_unlock_free:
+ 	ovs_unlock();
+ 	kfree_skb(reply);
+diff --git a/net/openvswitch/vport.c b/net/openvswitch/vport.c
+index 7e0f5c45b5124..972ae01a70f76 100644
+--- a/net/openvswitch/vport.c
++++ b/net/openvswitch/vport.c
+@@ -124,6 +124,7 @@ struct vport *ovs_vport_alloc(int priv_size, const struct vport_ops *ops,
+ {
+ 	struct vport *vport;
+ 	size_t alloc_size;
++	int err;
+ 
+ 	alloc_size = sizeof(struct vport);
+ 	if (priv_size) {
+@@ -135,17 +136,29 @@ struct vport *ovs_vport_alloc(int priv_size, const struct vport_ops *ops,
+ 	if (!vport)
+ 		return ERR_PTR(-ENOMEM);
+ 
++	vport->upcall_stats = netdev_alloc_pcpu_stats(struct vport_upcall_stats_percpu);
++	if (!vport->upcall_stats) {
++		err = -ENOMEM;
++		goto err_kfree_vport;
++	}
++
+ 	vport->dp = parms->dp;
+ 	vport->port_no = parms->port_no;
+ 	vport->ops = ops;
+ 	INIT_HLIST_NODE(&vport->dp_hash_node);
+ 
+ 	if (ovs_vport_set_upcall_portids(vport, parms->upcall_portids)) {
+-		kfree(vport);
+-		return ERR_PTR(-EINVAL);
++		err = -EINVAL;
++		goto err_free_percpu;
+ 	}
+ 
+ 	return vport;
++
++err_free_percpu:
++	free_percpu(vport->upcall_stats);
++err_kfree_vport:
++	kfree(vport);
++	return ERR_PTR(err);
+ }
+ EXPORT_SYMBOL_GPL(ovs_vport_alloc);
+ 
+@@ -165,6 +178,7 @@ void ovs_vport_free(struct vport *vport)
+ 	 * it is safe to use raw dereference.
+ 	 */
+ 	kfree(rcu_dereference_raw(vport->upcall_portids));
++	free_percpu(vport->upcall_stats);
+ 	kfree(vport);
+ }
+ EXPORT_SYMBOL_GPL(ovs_vport_free);
+diff --git a/net/sched/act_police.c b/net/sched/act_police.c
+index 227cba58ce9f3..2e9dce03d1ecc 100644
+--- a/net/sched/act_police.c
++++ b/net/sched/act_police.c
+@@ -357,23 +357,23 @@ static int tcf_police_dump(struct sk_buff *skb, struct tc_action *a,
+ 	opt.burst = PSCHED_NS2TICKS(p->tcfp_burst);
+ 	if (p->rate_present) {
+ 		psched_ratecfg_getrate(&opt.rate, &p->rate);
+-		if ((police->params->rate.rate_bytes_ps >= (1ULL << 32)) &&
++		if ((p->rate.rate_bytes_ps >= (1ULL << 32)) &&
+ 		    nla_put_u64_64bit(skb, TCA_POLICE_RATE64,
+-				      police->params->rate.rate_bytes_ps,
++				      p->rate.rate_bytes_ps,
+ 				      TCA_POLICE_PAD))
+ 			goto nla_put_failure;
+ 	}
+ 	if (p->peak_present) {
+ 		psched_ratecfg_getrate(&opt.peakrate, &p->peak);
+-		if ((police->params->peak.rate_bytes_ps >= (1ULL << 32)) &&
++		if ((p->peak.rate_bytes_ps >= (1ULL << 32)) &&
+ 		    nla_put_u64_64bit(skb, TCA_POLICE_PEAKRATE64,
+-				      police->params->peak.rate_bytes_ps,
++				      p->peak.rate_bytes_ps,
+ 				      TCA_POLICE_PAD))
+ 			goto nla_put_failure;
+ 	}
+ 	if (p->pps_present) {
+ 		if (nla_put_u64_64bit(skb, TCA_POLICE_PKTRATE64,
+-				      police->params->ppsrate.rate_pkts_ps,
++				      p->ppsrate.rate_pkts_ps,
+ 				      TCA_POLICE_PAD))
+ 			goto nla_put_failure;
+ 		if (nla_put_u64_64bit(skb, TCA_POLICE_PKTBURST64,
+diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
+index 2621550bfddc1..c877a6343fd47 100644
+--- a/net/sched/cls_api.c
++++ b/net/sched/cls_api.c
+@@ -43,8 +43,6 @@
+ #include <net/flow_offload.h>
+ #include <net/tc_wrapper.h>
+ 
+-extern const struct nla_policy rtm_tca_policy[TCA_MAX + 1];
+-
+ /* The list of all installed classifier types */
+ static LIST_HEAD(tcf_proto_base);
+ 
+@@ -2952,6 +2950,7 @@ static int tc_chain_tmplt_add(struct tcf_chain *chain, struct net *net,
+ 		return PTR_ERR(ops);
+ 	if (!ops->tmplt_create || !ops->tmplt_destroy || !ops->tmplt_dump) {
+ 		NL_SET_ERR_MSG(extack, "Chain templates are not supported with specified classifier");
++		module_put(ops->owner);
+ 		return -EOPNOTSUPP;
+ 	}
+ 
+diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
+index 7045b67b5533e..b2a63d697a4aa 100644
+--- a/net/sched/sch_api.c
++++ b/net/sched/sch_api.c
+@@ -309,7 +309,7 @@ struct Qdisc *qdisc_lookup(struct net_device *dev, u32 handle)
+ 
+ 	if (dev_ingress_queue(dev))
+ 		q = qdisc_match_from_root(
+-			dev_ingress_queue(dev)->qdisc_sleeping,
++			rtnl_dereference(dev_ingress_queue(dev)->qdisc_sleeping),
+ 			handle);
+ out:
+ 	return q;
+@@ -328,7 +328,8 @@ struct Qdisc *qdisc_lookup_rcu(struct net_device *dev, u32 handle)
+ 
+ 	nq = dev_ingress_queue_rcu(dev);
+ 	if (nq)
+-		q = qdisc_match_from_root(nq->qdisc_sleeping, handle);
++		q = qdisc_match_from_root(rcu_dereference(nq->qdisc_sleeping),
++					  handle);
+ out:
+ 	return q;
+ }
+@@ -634,8 +635,13 @@ EXPORT_SYMBOL(qdisc_watchdog_init);
+ void qdisc_watchdog_schedule_range_ns(struct qdisc_watchdog *wd, u64 expires,
+ 				      u64 delta_ns)
+ {
+-	if (test_bit(__QDISC_STATE_DEACTIVATED,
+-		     &qdisc_root_sleeping(wd->qdisc)->state))
++	bool deactivated;
++
++	rcu_read_lock();
++	deactivated = test_bit(__QDISC_STATE_DEACTIVATED,
++			       &qdisc_root_sleeping(wd->qdisc)->state);
++	rcu_read_unlock();
++	if (deactivated)
+ 		return;
+ 
+ 	if (hrtimer_is_queued(&wd->timer)) {
+@@ -1476,7 +1482,7 @@ static int tc_get_qdisc(struct sk_buff *skb, struct nlmsghdr *n,
+ 				}
+ 				q = qdisc_leaf(p, clid);
+ 			} else if (dev_ingress_queue(dev)) {
+-				q = dev_ingress_queue(dev)->qdisc_sleeping;
++				q = rtnl_dereference(dev_ingress_queue(dev)->qdisc_sleeping);
+ 			}
+ 		} else {
+ 			q = rtnl_dereference(dev->qdisc);
+@@ -1562,7 +1568,7 @@ replay:
+ 				}
+ 				q = qdisc_leaf(p, clid);
+ 			} else if (dev_ingress_queue_create(dev)) {
+-				q = dev_ingress_queue(dev)->qdisc_sleeping;
++				q = rtnl_dereference(dev_ingress_queue(dev)->qdisc_sleeping);
+ 			}
+ 		} else {
+ 			q = rtnl_dereference(dev->qdisc);
+@@ -1803,8 +1809,8 @@ static int tc_dump_qdisc(struct sk_buff *skb, struct netlink_callback *cb)
+ 
+ 		dev_queue = dev_ingress_queue(dev);
+ 		if (dev_queue &&
+-		    tc_dump_qdisc_root(dev_queue->qdisc_sleeping, skb, cb,
+-				       &q_idx, s_q_idx, false,
++		    tc_dump_qdisc_root(rtnl_dereference(dev_queue->qdisc_sleeping),
++				       skb, cb, &q_idx, s_q_idx, false,
+ 				       tca[TCA_DUMP_INVISIBLE]) < 0)
+ 			goto done;
+ 
+@@ -2247,8 +2253,8 @@ static int tc_dump_tclass(struct sk_buff *skb, struct netlink_callback *cb)
+ 
+ 	dev_queue = dev_ingress_queue(dev);
+ 	if (dev_queue &&
+-	    tc_dump_tclass_root(dev_queue->qdisc_sleeping, skb, tcm, cb,
+-				&t, s_t, false) < 0)
++	    tc_dump_tclass_root(rtnl_dereference(dev_queue->qdisc_sleeping),
++				skb, tcm, cb, &t, s_t, false) < 0)
+ 		goto done;
+ 
+ done:
+diff --git a/net/sched/sch_fq_pie.c b/net/sched/sch_fq_pie.c
+index 6980796d435d9..591d87d5e5c0f 100644
+--- a/net/sched/sch_fq_pie.c
++++ b/net/sched/sch_fq_pie.c
+@@ -201,6 +201,11 @@ out:
+ 	return NET_XMIT_CN;
+ }
+ 
++static struct netlink_range_validation fq_pie_q_range = {
++	.min = 1,
++	.max = 1 << 20,
++};
++
+ static const struct nla_policy fq_pie_policy[TCA_FQ_PIE_MAX + 1] = {
+ 	[TCA_FQ_PIE_LIMIT]		= {.type = NLA_U32},
+ 	[TCA_FQ_PIE_FLOWS]		= {.type = NLA_U32},
+@@ -208,7 +213,8 @@ static const struct nla_policy fq_pie_policy[TCA_FQ_PIE_MAX + 1] = {
+ 	[TCA_FQ_PIE_TUPDATE]		= {.type = NLA_U32},
+ 	[TCA_FQ_PIE_ALPHA]		= {.type = NLA_U32},
+ 	[TCA_FQ_PIE_BETA]		= {.type = NLA_U32},
+-	[TCA_FQ_PIE_QUANTUM]		= {.type = NLA_U32},
++	[TCA_FQ_PIE_QUANTUM]		=
++			NLA_POLICY_FULL_RANGE(NLA_U32, &fq_pie_q_range),
+ 	[TCA_FQ_PIE_MEMORY_LIMIT]	= {.type = NLA_U32},
+ 	[TCA_FQ_PIE_ECN_PROB]		= {.type = NLA_U32},
+ 	[TCA_FQ_PIE_ECN]		= {.type = NLA_U32},
+@@ -373,6 +379,7 @@ static void fq_pie_timer(struct timer_list *t)
+ 	spinlock_t *root_lock; /* to lock qdisc for probability calculations */
+ 	u32 idx;
+ 
++	rcu_read_lock();
+ 	root_lock = qdisc_lock(qdisc_root_sleeping(sch));
+ 	spin_lock(root_lock);
+ 
+@@ -385,6 +392,7 @@ static void fq_pie_timer(struct timer_list *t)
+ 		mod_timer(&q->adapt_timer, jiffies + q->p_params.tupdate);
+ 
+ 	spin_unlock(root_lock);
++	rcu_read_unlock();
+ }
+ 
+ static int fq_pie_init(struct Qdisc *sch, struct nlattr *opt,
+diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
+index a9aadc4e68581..ee43e8ac039ed 100644
+--- a/net/sched/sch_generic.c
++++ b/net/sched/sch_generic.c
+@@ -648,7 +648,7 @@ struct Qdisc_ops noop_qdisc_ops __read_mostly = {
+ 
+ static struct netdev_queue noop_netdev_queue = {
+ 	RCU_POINTER_INITIALIZER(qdisc, &noop_qdisc),
+-	.qdisc_sleeping	=	&noop_qdisc,
++	RCU_POINTER_INITIALIZER(qdisc_sleeping, &noop_qdisc),
+ };
+ 
+ struct Qdisc noop_qdisc = {
+@@ -1103,7 +1103,7 @@ EXPORT_SYMBOL(qdisc_put_unlocked);
+ struct Qdisc *dev_graft_qdisc(struct netdev_queue *dev_queue,
+ 			      struct Qdisc *qdisc)
+ {
+-	struct Qdisc *oqdisc = dev_queue->qdisc_sleeping;
++	struct Qdisc *oqdisc = rtnl_dereference(dev_queue->qdisc_sleeping);
+ 	spinlock_t *root_lock;
+ 
+ 	root_lock = qdisc_lock(oqdisc);
+@@ -1112,7 +1112,7 @@ struct Qdisc *dev_graft_qdisc(struct netdev_queue *dev_queue,
+ 	/* ... and graft new one */
+ 	if (qdisc == NULL)
+ 		qdisc = &noop_qdisc;
+-	dev_queue->qdisc_sleeping = qdisc;
++	rcu_assign_pointer(dev_queue->qdisc_sleeping, qdisc);
+ 	rcu_assign_pointer(dev_queue->qdisc, &noop_qdisc);
+ 
+ 	spin_unlock_bh(root_lock);
+@@ -1125,12 +1125,12 @@ static void shutdown_scheduler_queue(struct net_device *dev,
+ 				     struct netdev_queue *dev_queue,
+ 				     void *_qdisc_default)
+ {
+-	struct Qdisc *qdisc = dev_queue->qdisc_sleeping;
++	struct Qdisc *qdisc = rtnl_dereference(dev_queue->qdisc_sleeping);
+ 	struct Qdisc *qdisc_default = _qdisc_default;
+ 
+ 	if (qdisc) {
+ 		rcu_assign_pointer(dev_queue->qdisc, qdisc_default);
+-		dev_queue->qdisc_sleeping = qdisc_default;
++		rcu_assign_pointer(dev_queue->qdisc_sleeping, qdisc_default);
+ 
+ 		qdisc_put(qdisc);
+ 	}
+@@ -1154,7 +1154,7 @@ static void attach_one_default_qdisc(struct net_device *dev,
+ 
+ 	if (!netif_is_multiqueue(dev))
+ 		qdisc->flags |= TCQ_F_ONETXQUEUE | TCQ_F_NOPARENT;
+-	dev_queue->qdisc_sleeping = qdisc;
++	rcu_assign_pointer(dev_queue->qdisc_sleeping, qdisc);
+ }
+ 
+ static void attach_default_qdiscs(struct net_device *dev)
+@@ -1167,7 +1167,7 @@ static void attach_default_qdiscs(struct net_device *dev)
+ 	if (!netif_is_multiqueue(dev) ||
+ 	    dev->priv_flags & IFF_NO_QUEUE) {
+ 		netdev_for_each_tx_queue(dev, attach_one_default_qdisc, NULL);
+-		qdisc = txq->qdisc_sleeping;
++		qdisc = rtnl_dereference(txq->qdisc_sleeping);
+ 		rcu_assign_pointer(dev->qdisc, qdisc);
+ 		qdisc_refcount_inc(qdisc);
+ 	} else {
+@@ -1186,7 +1186,7 @@ static void attach_default_qdiscs(struct net_device *dev)
+ 		netdev_for_each_tx_queue(dev, shutdown_scheduler_queue, &noop_qdisc);
+ 		dev->priv_flags |= IFF_NO_QUEUE;
+ 		netdev_for_each_tx_queue(dev, attach_one_default_qdisc, NULL);
+-		qdisc = txq->qdisc_sleeping;
++		qdisc = rtnl_dereference(txq->qdisc_sleeping);
+ 		rcu_assign_pointer(dev->qdisc, qdisc);
+ 		qdisc_refcount_inc(qdisc);
+ 		dev->priv_flags ^= IFF_NO_QUEUE;
+@@ -1202,7 +1202,7 @@ static void transition_one_qdisc(struct net_device *dev,
+ 				 struct netdev_queue *dev_queue,
+ 				 void *_need_watchdog)
+ {
+-	struct Qdisc *new_qdisc = dev_queue->qdisc_sleeping;
++	struct Qdisc *new_qdisc = rtnl_dereference(dev_queue->qdisc_sleeping);
+ 	int *need_watchdog_p = _need_watchdog;
+ 
+ 	if (!(new_qdisc->flags & TCQ_F_BUILTIN))
+@@ -1272,7 +1272,7 @@ static void dev_reset_queue(struct net_device *dev,
+ 	struct Qdisc *qdisc;
+ 	bool nolock;
+ 
+-	qdisc = dev_queue->qdisc_sleeping;
++	qdisc = rtnl_dereference(dev_queue->qdisc_sleeping);
+ 	if (!qdisc)
+ 		return;
+ 
+@@ -1303,7 +1303,7 @@ static bool some_qdisc_is_busy(struct net_device *dev)
+ 		int val;
+ 
+ 		dev_queue = netdev_get_tx_queue(dev, i);
+-		q = dev_queue->qdisc_sleeping;
++		q = rtnl_dereference(dev_queue->qdisc_sleeping);
+ 
+ 		root_lock = qdisc_lock(q);
+ 		spin_lock_bh(root_lock);
+@@ -1379,7 +1379,7 @@ EXPORT_SYMBOL(dev_deactivate);
+ static int qdisc_change_tx_queue_len(struct net_device *dev,
+ 				     struct netdev_queue *dev_queue)
+ {
+-	struct Qdisc *qdisc = dev_queue->qdisc_sleeping;
++	struct Qdisc *qdisc = rtnl_dereference(dev_queue->qdisc_sleeping);
+ 	const struct Qdisc_ops *ops = qdisc->ops;
+ 
+ 	if (ops->change_tx_queue_len)
+@@ -1404,7 +1404,7 @@ void mq_change_real_num_tx(struct Qdisc *sch, unsigned int new_real_tx)
+ 	unsigned int i;
+ 
+ 	for (i = new_real_tx; i < dev->real_num_tx_queues; i++) {
+-		qdisc = netdev_get_tx_queue(dev, i)->qdisc_sleeping;
++		qdisc = rtnl_dereference(netdev_get_tx_queue(dev, i)->qdisc_sleeping);
+ 		/* Only update the default qdiscs we created,
+ 		 * qdiscs with handles are always hashed.
+ 		 */
+@@ -1412,7 +1412,7 @@ void mq_change_real_num_tx(struct Qdisc *sch, unsigned int new_real_tx)
+ 			qdisc_hash_del(qdisc);
+ 	}
+ 	for (i = dev->real_num_tx_queues; i < new_real_tx; i++) {
+-		qdisc = netdev_get_tx_queue(dev, i)->qdisc_sleeping;
++		qdisc = rtnl_dereference(netdev_get_tx_queue(dev, i)->qdisc_sleeping);
+ 		if (qdisc != &noop_qdisc && !qdisc->handle)
+ 			qdisc_hash_add(qdisc, false);
+ 	}
+@@ -1449,7 +1449,7 @@ static void dev_init_scheduler_queue(struct net_device *dev,
+ 	struct Qdisc *qdisc = _qdisc;
+ 
+ 	rcu_assign_pointer(dev_queue->qdisc, qdisc);
+-	dev_queue->qdisc_sleeping = qdisc;
++	rcu_assign_pointer(dev_queue->qdisc_sleeping, qdisc);
+ }
+ 
+ void dev_init_scheduler(struct net_device *dev)
+diff --git a/net/sched/sch_mq.c b/net/sched/sch_mq.c
+index d0bc660d7401f..c860119a8f091 100644
+--- a/net/sched/sch_mq.c
++++ b/net/sched/sch_mq.c
+@@ -141,7 +141,7 @@ static int mq_dump(struct Qdisc *sch, struct sk_buff *skb)
+ 	 * qdisc totals are added at end.
+ 	 */
+ 	for (ntx = 0; ntx < dev->num_tx_queues; ntx++) {
+-		qdisc = netdev_get_tx_queue(dev, ntx)->qdisc_sleeping;
++		qdisc = rtnl_dereference(netdev_get_tx_queue(dev, ntx)->qdisc_sleeping);
+ 		spin_lock_bh(qdisc_lock(qdisc));
+ 
+ 		gnet_stats_add_basic(&sch->bstats, qdisc->cpu_bstats,
+@@ -202,7 +202,7 @@ static struct Qdisc *mq_leaf(struct Qdisc *sch, unsigned long cl)
+ {
+ 	struct netdev_queue *dev_queue = mq_queue_get(sch, cl);
+ 
+-	return dev_queue->qdisc_sleeping;
++	return rtnl_dereference(dev_queue->qdisc_sleeping);
+ }
+ 
+ static unsigned long mq_find(struct Qdisc *sch, u32 classid)
+@@ -221,7 +221,7 @@ static int mq_dump_class(struct Qdisc *sch, unsigned long cl,
+ 
+ 	tcm->tcm_parent = TC_H_ROOT;
+ 	tcm->tcm_handle |= TC_H_MIN(cl);
+-	tcm->tcm_info = dev_queue->qdisc_sleeping->handle;
++	tcm->tcm_info = rtnl_dereference(dev_queue->qdisc_sleeping)->handle;
+ 	return 0;
+ }
+ 
+@@ -230,7 +230,7 @@ static int mq_dump_class_stats(struct Qdisc *sch, unsigned long cl,
+ {
+ 	struct netdev_queue *dev_queue = mq_queue_get(sch, cl);
+ 
+-	sch = dev_queue->qdisc_sleeping;
++	sch = rtnl_dereference(dev_queue->qdisc_sleeping);
+ 	if (gnet_stats_copy_basic(d, sch->cpu_bstats, &sch->bstats, true) < 0 ||
+ 	    qdisc_qstats_copy(d, sch) < 0)
+ 		return -1;
+diff --git a/net/sched/sch_mqprio.c b/net/sched/sch_mqprio.c
+index fc6225f15fcdb..dd29c9470c784 100644
+--- a/net/sched/sch_mqprio.c
++++ b/net/sched/sch_mqprio.c
+@@ -421,7 +421,7 @@ static int mqprio_dump(struct Qdisc *sch, struct sk_buff *skb)
+ 	 * qdisc totals are added at end.
+ 	 */
+ 	for (ntx = 0; ntx < dev->num_tx_queues; ntx++) {
+-		qdisc = netdev_get_tx_queue(dev, ntx)->qdisc_sleeping;
++		qdisc = rtnl_dereference(netdev_get_tx_queue(dev, ntx)->qdisc_sleeping);
+ 		spin_lock_bh(qdisc_lock(qdisc));
+ 
+ 		gnet_stats_add_basic(&sch->bstats, qdisc->cpu_bstats,
+@@ -465,7 +465,7 @@ static struct Qdisc *mqprio_leaf(struct Qdisc *sch, unsigned long cl)
+ 	if (!dev_queue)
+ 		return NULL;
+ 
+-	return dev_queue->qdisc_sleeping;
++	return rtnl_dereference(dev_queue->qdisc_sleeping);
+ }
+ 
+ static unsigned long mqprio_find(struct Qdisc *sch, u32 classid)
+@@ -498,7 +498,7 @@ static int mqprio_dump_class(struct Qdisc *sch, unsigned long cl,
+ 		tcm->tcm_parent = (tc < 0) ? 0 :
+ 			TC_H_MAKE(TC_H_MAJ(sch->handle),
+ 				  TC_H_MIN(tc + TC_H_MIN_PRIORITY));
+-		tcm->tcm_info = dev_queue->qdisc_sleeping->handle;
++		tcm->tcm_info = rtnl_dereference(dev_queue->qdisc_sleeping)->handle;
+ 	} else {
+ 		tcm->tcm_parent = TC_H_ROOT;
+ 		tcm->tcm_info = 0;
+@@ -554,7 +554,7 @@ static int mqprio_dump_class_stats(struct Qdisc *sch, unsigned long cl,
+ 	} else {
+ 		struct netdev_queue *dev_queue = mqprio_queue_get(sch, cl);
+ 
+-		sch = dev_queue->qdisc_sleeping;
++		sch = rtnl_dereference(dev_queue->qdisc_sleeping);
+ 		if (gnet_stats_copy_basic(d, sch->cpu_bstats,
+ 					  &sch->bstats, true) < 0 ||
+ 		    qdisc_qstats_copy(d, sch) < 0)
+diff --git a/net/sched/sch_pie.c b/net/sched/sch_pie.c
+index 265c238047a42..b60b31ef71cc5 100644
+--- a/net/sched/sch_pie.c
++++ b/net/sched/sch_pie.c
+@@ -421,8 +421,10 @@ static void pie_timer(struct timer_list *t)
+ {
+ 	struct pie_sched_data *q = from_timer(q, t, adapt_timer);
+ 	struct Qdisc *sch = q->sch;
+-	spinlock_t *root_lock = qdisc_lock(qdisc_root_sleeping(sch));
++	spinlock_t *root_lock;
+ 
++	rcu_read_lock();
++	root_lock = qdisc_lock(qdisc_root_sleeping(sch));
+ 	spin_lock(root_lock);
+ 	pie_calculate_probability(&q->params, &q->vars, sch->qstats.backlog);
+ 
+@@ -430,6 +432,7 @@ static void pie_timer(struct timer_list *t)
+ 	if (q->params.tupdate)
+ 		mod_timer(&q->adapt_timer, jiffies + q->params.tupdate);
+ 	spin_unlock(root_lock);
++	rcu_read_unlock();
+ }
+ 
+ static int pie_init(struct Qdisc *sch, struct nlattr *opt,
+diff --git a/net/sched/sch_red.c b/net/sched/sch_red.c
+index 98129324e1573..16277b6a0238d 100644
+--- a/net/sched/sch_red.c
++++ b/net/sched/sch_red.c
+@@ -321,12 +321,15 @@ static inline void red_adaptative_timer(struct timer_list *t)
+ {
+ 	struct red_sched_data *q = from_timer(q, t, adapt_timer);
+ 	struct Qdisc *sch = q->sch;
+-	spinlock_t *root_lock = qdisc_lock(qdisc_root_sleeping(sch));
++	spinlock_t *root_lock;
+ 
++	rcu_read_lock();
++	root_lock = qdisc_lock(qdisc_root_sleeping(sch));
+ 	spin_lock(root_lock);
+ 	red_adaptative_algo(&q->parms, &q->vars);
+ 	mod_timer(&q->adapt_timer, jiffies + HZ/2);
+ 	spin_unlock(root_lock);
++	rcu_read_unlock();
+ }
+ 
+ static int red_init(struct Qdisc *sch, struct nlattr *opt,
+diff --git a/net/sched/sch_sfq.c b/net/sched/sch_sfq.c
+index abd436307d6a8..66dcb18638fea 100644
+--- a/net/sched/sch_sfq.c
++++ b/net/sched/sch_sfq.c
+@@ -606,10 +606,12 @@ static void sfq_perturbation(struct timer_list *t)
+ {
+ 	struct sfq_sched_data *q = from_timer(q, t, perturb_timer);
+ 	struct Qdisc *sch = q->sch;
+-	spinlock_t *root_lock = qdisc_lock(qdisc_root_sleeping(sch));
++	spinlock_t *root_lock;
+ 	siphash_key_t nkey;
+ 
+ 	get_random_bytes(&nkey, sizeof(nkey));
++	rcu_read_lock();
++	root_lock = qdisc_lock(qdisc_root_sleeping(sch));
+ 	spin_lock(root_lock);
+ 	q->perturbation = nkey;
+ 	if (!q->filter_list && q->tail)
+@@ -618,6 +620,7 @@ static void sfq_perturbation(struct timer_list *t)
+ 
+ 	if (q->perturb_period)
+ 		mod_timer(&q->perturb_timer, jiffies + q->perturb_period);
++	rcu_read_unlock();
+ }
+ 
+ static int sfq_change(struct Qdisc *sch, struct nlattr *opt)
+diff --git a/net/sched/sch_taprio.c b/net/sched/sch_taprio.c
+index cbad430191721..a6cf56a969421 100644
+--- a/net/sched/sch_taprio.c
++++ b/net/sched/sch_taprio.c
+@@ -2319,7 +2319,7 @@ static struct Qdisc *taprio_leaf(struct Qdisc *sch, unsigned long cl)
+ 	if (!dev_queue)
+ 		return NULL;
+ 
+-	return dev_queue->qdisc_sleeping;
++	return rtnl_dereference(dev_queue->qdisc_sleeping);
+ }
+ 
+ static unsigned long taprio_find(struct Qdisc *sch, u32 classid)
+@@ -2338,7 +2338,7 @@ static int taprio_dump_class(struct Qdisc *sch, unsigned long cl,
+ 
+ 	tcm->tcm_parent = TC_H_ROOT;
+ 	tcm->tcm_handle |= TC_H_MIN(cl);
+-	tcm->tcm_info = dev_queue->qdisc_sleeping->handle;
++	tcm->tcm_info = rtnl_dereference(dev_queue->qdisc_sleeping)->handle;
+ 
+ 	return 0;
+ }
+@@ -2350,7 +2350,7 @@ static int taprio_dump_class_stats(struct Qdisc *sch, unsigned long cl,
+ {
+ 	struct netdev_queue *dev_queue = taprio_queue_get(sch, cl);
+ 
+-	sch = dev_queue->qdisc_sleeping;
++	sch = rtnl_dereference(dev_queue->qdisc_sleeping);
+ 	if (gnet_stats_copy_basic(d, NULL, &sch->bstats, true) < 0 ||
+ 	    qdisc_qstats_copy(d, sch) < 0)
+ 		return -1;
+diff --git a/net/sched/sch_teql.c b/net/sched/sch_teql.c
+index 16f9238aa51d1..7721239c185fb 100644
+--- a/net/sched/sch_teql.c
++++ b/net/sched/sch_teql.c
+@@ -297,7 +297,7 @@ restart:
+ 		struct net_device *slave = qdisc_dev(q);
+ 		struct netdev_queue *slave_txq = netdev_get_tx_queue(slave, 0);
+ 
+-		if (slave_txq->qdisc_sleeping != q)
++		if (rcu_access_pointer(slave_txq->qdisc_sleeping) != q)
+ 			continue;
+ 		if (netif_xmit_stopped(netdev_get_tx_queue(slave, subq)) ||
+ 		    !netif_running(slave)) {
+diff --git a/net/smc/smc_llc.c b/net/smc/smc_llc.c
+index 7a8d9163d186e..90f0b60b196ab 100644
+--- a/net/smc/smc_llc.c
++++ b/net/smc/smc_llc.c
+@@ -851,6 +851,8 @@ static int smc_llc_add_link_cont(struct smc_link *link,
+ 	addc_llc->num_rkeys = *num_rkeys_todo;
+ 	n = *num_rkeys_todo;
+ 	for (i = 0; i < min_t(u8, n, SMC_LLC_RKEYS_PER_CONT_MSG); i++) {
++		while (*buf_pos && !(*buf_pos)->used)
++			*buf_pos = smc_llc_get_next_rmb(lgr, buf_lst, *buf_pos);
+ 		if (!*buf_pos) {
+ 			addc_llc->num_rkeys = addc_llc->num_rkeys -
+ 					      *num_rkeys_todo;
+@@ -867,8 +869,6 @@ static int smc_llc_add_link_cont(struct smc_link *link,
+ 
+ 		(*num_rkeys_todo)--;
+ 		*buf_pos = smc_llc_get_next_rmb(lgr, buf_lst, *buf_pos);
+-		while (*buf_pos && !(*buf_pos)->used)
+-			*buf_pos = smc_llc_get_next_rmb(lgr, buf_lst, *buf_pos);
+ 	}
+ 	addc_llc->hd.common.llc_type = SMC_LLC_ADD_LINK_CONT;
+ 	addc_llc->hd.length = sizeof(struct smc_llc_msg_add_link_cont);
+diff --git a/net/wireless/core.c b/net/wireless/core.c
+index 5b0c4d5b80cf5..b3ec9eaec36b3 100644
+--- a/net/wireless/core.c
++++ b/net/wireless/core.c
+@@ -368,12 +368,12 @@ static void cfg80211_sched_scan_stop_wk(struct work_struct *work)
+ 	rdev = container_of(work, struct cfg80211_registered_device,
+ 			   sched_scan_stop_wk);
+ 
+-	rtnl_lock();
++	wiphy_lock(&rdev->wiphy);
+ 	list_for_each_entry_safe(req, tmp, &rdev->sched_scan_req_list, list) {
+ 		if (req->nl_owner_dead)
+ 			cfg80211_stop_sched_scan_req(rdev, req, false);
+ 	}
+-	rtnl_unlock();
++	wiphy_unlock(&rdev->wiphy);
+ }
+ 
+ static void cfg80211_propagate_radar_detect_wk(struct work_struct *work)
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index 4f63059efd813..1922fccb96ace 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -10642,6 +10642,8 @@ static int nl80211_authenticate(struct sk_buff *skb, struct genl_info *info)
+ 		if (!info->attrs[NL80211_ATTR_MLD_ADDR])
+ 			return -EINVAL;
+ 		req.ap_mld_addr = nla_data(info->attrs[NL80211_ATTR_MLD_ADDR]);
++		if (!is_valid_ether_addr(req.ap_mld_addr))
++			return -EINVAL;
+ 	}
+ 
+ 	req.bss = cfg80211_get_bss(&rdev->wiphy, chan, bssid, ssid, ssid_len,
+diff --git a/sound/isa/gus/gus_pcm.c b/sound/isa/gus/gus_pcm.c
+index 230f65a0e4b07..388db5fb65bd0 100644
+--- a/sound/isa/gus/gus_pcm.c
++++ b/sound/isa/gus/gus_pcm.c
+@@ -892,10 +892,10 @@ int snd_gf1_pcm_new(struct snd_gus_card *gus, int pcm_dev, int control_index)
+ 		kctl = snd_ctl_new1(&snd_gf1_pcm_volume_control1, gus);
+ 	else
+ 		kctl = snd_ctl_new1(&snd_gf1_pcm_volume_control, gus);
++	kctl->id.index = control_index;
+ 	err = snd_ctl_add(card, kctl);
+ 	if (err < 0)
+ 		return err;
+-	kctl->id.index = control_index;
+ 
+ 	return 0;
+ }
+diff --git a/sound/pci/cmipci.c b/sound/pci/cmipci.c
+index 727db6d433916..6d25c12d9ef00 100644
+--- a/sound/pci/cmipci.c
++++ b/sound/pci/cmipci.c
+@@ -2688,20 +2688,20 @@ static int snd_cmipci_mixer_new(struct cmipci *cm, int pcm_spdif_device)
+ 		}
+ 		if (cm->can_ac3_hw) {
+ 			kctl = snd_ctl_new1(&snd_cmipci_spdif_default, cm);
++			kctl->id.device = pcm_spdif_device;
+ 			err = snd_ctl_add(card, kctl);
+ 			if (err < 0)
+ 				return err;
+-			kctl->id.device = pcm_spdif_device;
+ 			kctl = snd_ctl_new1(&snd_cmipci_spdif_mask, cm);
++			kctl->id.device = pcm_spdif_device;
+ 			err = snd_ctl_add(card, kctl);
+ 			if (err < 0)
+ 				return err;
+-			kctl->id.device = pcm_spdif_device;
+ 			kctl = snd_ctl_new1(&snd_cmipci_spdif_stream, cm);
++			kctl->id.device = pcm_spdif_device;
+ 			err = snd_ctl_add(card, kctl);
+ 			if (err < 0)
+ 				return err;
+-			kctl->id.device = pcm_spdif_device;
+ 		}
+ 		if (cm->chip_version <= 37) {
+ 			sw = snd_cmipci_old_mixer_switches;
+diff --git a/sound/pci/hda/hda_codec.c b/sound/pci/hda/hda_codec.c
+index 9f79c0ac2bda7..bd19f92aeeec8 100644
+--- a/sound/pci/hda/hda_codec.c
++++ b/sound/pci/hda/hda_codec.c
+@@ -2458,10 +2458,14 @@ int snd_hda_create_dig_out_ctls(struct hda_codec *codec,
+ 		   type == HDA_PCM_TYPE_HDMI) {
+ 		/* suppose a single SPDIF device */
+ 		for (dig_mix = dig_mixes; dig_mix->name; dig_mix++) {
++			struct snd_ctl_elem_id id;
++
+ 			kctl = find_mixer_ctl(codec, dig_mix->name, 0, 0);
+ 			if (!kctl)
+ 				break;
+-			kctl->id.index = spdif_index;
++			id = kctl->id;
++			id.index = spdif_index;
++			snd_ctl_rename_id(codec->card, &kctl->id, &id);
+ 		}
+ 		bus->primary_dig_out_type = HDA_PCM_TYPE_HDMI;
+ 	}
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 7b5f194513c7b..48a0e87136f1c 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -9547,6 +9547,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1043, 0x1a8f, "ASUS UX582ZS", ALC245_FIXUP_CS35L41_SPI_2),
+ 	SND_PCI_QUIRK(0x1043, 0x1b11, "ASUS UX431DA", ALC294_FIXUP_ASUS_COEF_1B),
+ 	SND_PCI_QUIRK(0x1043, 0x1b13, "Asus U41SV", ALC269_FIXUP_INV_DMIC),
++	SND_PCI_QUIRK(0x1043, 0x1b93, "ASUS G614JVR/JIR", ALC245_FIXUP_CS35L41_SPI_2),
+ 	SND_PCI_QUIRK(0x1043, 0x1bbd, "ASUS Z550MA", ALC255_FIXUP_ASUS_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1043, 0x1c23, "Asus X55U", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+ 	SND_PCI_QUIRK(0x1043, 0x1c62, "ASUS GU603", ALC289_FIXUP_ASUS_GA401),
+@@ -9565,6 +9566,11 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1043, 0x1f12, "ASUS UM5302", ALC287_FIXUP_CS35L41_I2C_2),
+ 	SND_PCI_QUIRK(0x1043, 0x1f92, "ASUS ROG Flow X16", ALC289_FIXUP_ASUS_GA401),
+ 	SND_PCI_QUIRK(0x1043, 0x3030, "ASUS ZN270IE", ALC256_FIXUP_ASUS_AIO_GPIO2),
++	SND_PCI_QUIRK(0x1043, 0x3a20, "ASUS G614JZR", ALC245_FIXUP_CS35L41_SPI_2),
++	SND_PCI_QUIRK(0x1043, 0x3a30, "ASUS G814JVR/JIR", ALC245_FIXUP_CS35L41_SPI_2),
++	SND_PCI_QUIRK(0x1043, 0x3a40, "ASUS G814JZR", ALC245_FIXUP_CS35L41_SPI_2),
++	SND_PCI_QUIRK(0x1043, 0x3a50, "ASUS G834JYR/JZR", ALC245_FIXUP_CS35L41_SPI_2),
++	SND_PCI_QUIRK(0x1043, 0x3a60, "ASUS G634JYR/JZR", ALC245_FIXUP_CS35L41_SPI_2),
+ 	SND_PCI_QUIRK(0x1043, 0x831a, "ASUS P901", ALC269_FIXUP_STEREO_DMIC),
+ 	SND_PCI_QUIRK(0x1043, 0x834a, "ASUS S101", ALC269_FIXUP_STEREO_DMIC),
+ 	SND_PCI_QUIRK(0x1043, 0x8398, "ASUS P1005", ALC269_FIXUP_STEREO_DMIC),
+@@ -9636,6 +9642,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1558, 0x5101, "Clevo S510WU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x5157, "Clevo W517GU1", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x51a1, "Clevo NS50MU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1558, 0x51b1, "Clevo NS50AU", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x5630, "Clevo NP50RNJS", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x70a1, "Clevo NB70T[HJK]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x70b3, "Clevo NK70SB", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+@@ -11694,6 +11701,7 @@ static const struct snd_pci_quirk alc662_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x8719, "HP", ALC897_FIXUP_HP_HSMIC_VERB),
+ 	SND_PCI_QUIRK(0x103c, 0x872b, "HP", ALC897_FIXUP_HP_HSMIC_VERB),
+ 	SND_PCI_QUIRK(0x103c, 0x873e, "HP", ALC671_FIXUP_HP_HEADSET_MIC2),
++	SND_PCI_QUIRK(0x103c, 0x8768, "HP Slim Desktop S01", ALC671_FIXUP_HP_HEADSET_MIC2),
+ 	SND_PCI_QUIRK(0x103c, 0x877e, "HP 288 Pro G6", ALC671_FIXUP_HP_HEADSET_MIC2),
+ 	SND_PCI_QUIRK(0x103c, 0x885f, "HP 288 Pro G8", ALC671_FIXUP_HP_HEADSET_MIC2),
+ 	SND_PCI_QUIRK(0x1043, 0x1080, "Asus UX501VW", ALC668_FIXUP_HEADSET_MODE),
+@@ -11715,6 +11723,7 @@ static const struct snd_pci_quirk alc662_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x14cd, 0x5003, "USI", ALC662_FIXUP_USI_HEADSET_MODE),
+ 	SND_PCI_QUIRK(0x17aa, 0x1036, "Lenovo P520", ALC662_FIXUP_LENOVO_MULTI_CODECS),
+ 	SND_PCI_QUIRK(0x17aa, 0x1057, "Lenovo P360", ALC897_FIXUP_HEADSET_MIC_PIN),
++	SND_PCI_QUIRK(0x17aa, 0x1064, "Lenovo P3 Tower", ALC897_FIXUP_HEADSET_MIC_PIN),
+ 	SND_PCI_QUIRK(0x17aa, 0x32ca, "Lenovo ThinkCentre M80", ALC897_FIXUP_HEADSET_MIC_PIN),
+ 	SND_PCI_QUIRK(0x17aa, 0x32cb, "Lenovo ThinkCentre M70", ALC897_FIXUP_HEADSET_MIC_PIN),
+ 	SND_PCI_QUIRK(0x17aa, 0x32cf, "Lenovo ThinkCentre M950", ALC897_FIXUP_HEADSET_MIC_PIN),
+diff --git a/sound/pci/ice1712/aureon.c b/sound/pci/ice1712/aureon.c
+index 24b9782340001..027849329c1b0 100644
+--- a/sound/pci/ice1712/aureon.c
++++ b/sound/pci/ice1712/aureon.c
+@@ -1899,11 +1899,12 @@ static int aureon_add_controls(struct snd_ice1712 *ice)
+ 		else {
+ 			for (i = 0; i < ARRAY_SIZE(cs8415_controls); i++) {
+ 				struct snd_kcontrol *kctl;
+-				err = snd_ctl_add(ice->card, (kctl = snd_ctl_new1(&cs8415_controls[i], ice)));
+-				if (err < 0)
+-					return err;
++				kctl = snd_ctl_new1(&cs8415_controls[i], ice);
+ 				if (i > 1)
+ 					kctl->id.device = ice->pcm->device;
++				err = snd_ctl_add(ice->card, kctl);
++				if (err < 0)
++					return err;
+ 			}
+ 		}
+ 	}
+diff --git a/sound/pci/ice1712/ice1712.c b/sound/pci/ice1712/ice1712.c
+index a5241a287851c..3b0c3e70987b9 100644
+--- a/sound/pci/ice1712/ice1712.c
++++ b/sound/pci/ice1712/ice1712.c
+@@ -2371,22 +2371,26 @@ int snd_ice1712_spdif_build_controls(struct snd_ice1712 *ice)
+ 
+ 	if (snd_BUG_ON(!ice->pcm_pro))
+ 		return -EIO;
+-	err = snd_ctl_add(ice->card, kctl = snd_ctl_new1(&snd_ice1712_spdif_default, ice));
++	kctl = snd_ctl_new1(&snd_ice1712_spdif_default, ice);
++	kctl->id.device = ice->pcm_pro->device;
++	err = snd_ctl_add(ice->card, kctl);
+ 	if (err < 0)
+ 		return err;
++	kctl = snd_ctl_new1(&snd_ice1712_spdif_maskc, ice);
+ 	kctl->id.device = ice->pcm_pro->device;
+-	err = snd_ctl_add(ice->card, kctl = snd_ctl_new1(&snd_ice1712_spdif_maskc, ice));
++	err = snd_ctl_add(ice->card, kctl);
+ 	if (err < 0)
+ 		return err;
++	kctl = snd_ctl_new1(&snd_ice1712_spdif_maskp, ice);
+ 	kctl->id.device = ice->pcm_pro->device;
+-	err = snd_ctl_add(ice->card, kctl = snd_ctl_new1(&snd_ice1712_spdif_maskp, ice));
++	err = snd_ctl_add(ice->card, kctl);
+ 	if (err < 0)
+ 		return err;
++	kctl = snd_ctl_new1(&snd_ice1712_spdif_stream, ice);
+ 	kctl->id.device = ice->pcm_pro->device;
+-	err = snd_ctl_add(ice->card, kctl = snd_ctl_new1(&snd_ice1712_spdif_stream, ice));
++	err = snd_ctl_add(ice->card, kctl);
+ 	if (err < 0)
+ 		return err;
+-	kctl->id.device = ice->pcm_pro->device;
+ 	ice->spdif.stream_ctl = kctl;
+ 	return 0;
+ }
+diff --git a/sound/pci/ice1712/ice1724.c b/sound/pci/ice1712/ice1724.c
+index 6fab2ad85bbec..1dc776acd637c 100644
+--- a/sound/pci/ice1712/ice1724.c
++++ b/sound/pci/ice1712/ice1724.c
+@@ -2392,23 +2392,27 @@ static int snd_vt1724_spdif_build_controls(struct snd_ice1712 *ice)
+ 	if (err < 0)
+ 		return err;
+ 
+-	err = snd_ctl_add(ice->card, kctl = snd_ctl_new1(&snd_vt1724_spdif_default, ice));
++	kctl = snd_ctl_new1(&snd_vt1724_spdif_default, ice);
++	kctl->id.device = ice->pcm->device;
++	err = snd_ctl_add(ice->card, kctl);
+ 	if (err < 0)
+ 		return err;
++	kctl = snd_ctl_new1(&snd_vt1724_spdif_maskc, ice);
+ 	kctl->id.device = ice->pcm->device;
+-	err = snd_ctl_add(ice->card, kctl = snd_ctl_new1(&snd_vt1724_spdif_maskc, ice));
++	err = snd_ctl_add(ice->card, kctl);
+ 	if (err < 0)
+ 		return err;
++	kctl = snd_ctl_new1(&snd_vt1724_spdif_maskp, ice);
+ 	kctl->id.device = ice->pcm->device;
+-	err = snd_ctl_add(ice->card, kctl = snd_ctl_new1(&snd_vt1724_spdif_maskp, ice));
++	err = snd_ctl_add(ice->card, kctl);
+ 	if (err < 0)
+ 		return err;
+-	kctl->id.device = ice->pcm->device;
+ #if 0 /* use default only */
+-	err = snd_ctl_add(ice->card, kctl = snd_ctl_new1(&snd_vt1724_spdif_stream, ice));
++	kctl = snd_ctl_new1(&snd_vt1724_spdif_stream, ice);
++	kctl->id.device = ice->pcm->device;
++	err = snd_ctl_add(ice->card, kctl);
+ 	if (err < 0)
+ 		return err;
+-	kctl->id.device = ice->pcm->device;
+ 	ice->spdif.stream_ctl = kctl;
+ #endif
+ 	return 0;
+diff --git a/sound/pci/ymfpci/ymfpci_main.c b/sound/pci/ymfpci/ymfpci_main.c
+index b492c32ce0704..f629b3956a69d 100644
+--- a/sound/pci/ymfpci/ymfpci_main.c
++++ b/sound/pci/ymfpci/ymfpci_main.c
+@@ -1827,20 +1827,20 @@ int snd_ymfpci_mixer(struct snd_ymfpci *chip, int rear_switch)
+ 	if (snd_BUG_ON(!chip->pcm_spdif))
+ 		return -ENXIO;
+ 	kctl = snd_ctl_new1(&snd_ymfpci_spdif_default, chip);
++	kctl->id.device = chip->pcm_spdif->device;
+ 	err = snd_ctl_add(chip->card, kctl);
+ 	if (err < 0)
+ 		return err;
+-	kctl->id.device = chip->pcm_spdif->device;
+ 	kctl = snd_ctl_new1(&snd_ymfpci_spdif_mask, chip);
++	kctl->id.device = chip->pcm_spdif->device;
+ 	err = snd_ctl_add(chip->card, kctl);
+ 	if (err < 0)
+ 		return err;
+-	kctl->id.device = chip->pcm_spdif->device;
+ 	kctl = snd_ctl_new1(&snd_ymfpci_spdif_stream, chip);
++	kctl->id.device = chip->pcm_spdif->device;
+ 	err = snd_ctl_add(chip->card, kctl);
+ 	if (err < 0)
+ 		return err;
+-	kctl->id.device = chip->pcm_spdif->device;
+ 	chip->spdif_pcm_ctl = kctl;
+ 
+ 	/* direct recording source */
+diff --git a/sound/soc/amd/ps/pci-ps.c b/sound/soc/amd/ps/pci-ps.c
+index afddb9a77ba49..b1337b96ea8d6 100644
+--- a/sound/soc/amd/ps/pci-ps.c
++++ b/sound/soc/amd/ps/pci-ps.c
+@@ -211,8 +211,7 @@ static int create_acp63_platform_devs(struct pci_dev *pci, struct acp63_dev_data
+ 	case ACP63_PDM_DEV_MASK:
+ 		adata->pdm_dev_index  = 0;
+ 		acp63_fill_platform_dev_info(&pdevinfo[0], parent, NULL, "acp_ps_pdm_dma",
+-					     0, adata->res, 1, &adata->acp_lock,
+-					     sizeof(adata->acp_lock));
++					     0, adata->res, 1, NULL, 0);
+ 		acp63_fill_platform_dev_info(&pdevinfo[1], parent, NULL, "dmic-codec",
+ 					     0, NULL, 0, NULL, 0);
+ 		acp63_fill_platform_dev_info(&pdevinfo[2], parent, NULL, "acp_ps_mach",
+diff --git a/sound/soc/amd/ps/ps-pdm-dma.c b/sound/soc/amd/ps/ps-pdm-dma.c
+index 454dab062e4f5..527594aa9c113 100644
+--- a/sound/soc/amd/ps/ps-pdm-dma.c
++++ b/sound/soc/amd/ps/ps-pdm-dma.c
+@@ -361,12 +361,12 @@ static int acp63_pdm_audio_probe(struct platform_device *pdev)
+ {
+ 	struct resource *res;
+ 	struct pdm_dev_data *adata;
++	struct acp63_dev_data *acp_data;
++	struct device *parent;
+ 	int status;
+ 
+-	if (!pdev->dev.platform_data) {
+-		dev_err(&pdev->dev, "platform_data not retrieved\n");
+-		return -ENODEV;
+-	}
++	parent = pdev->dev.parent;
++	acp_data = dev_get_drvdata(parent);
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ 	if (!res) {
+ 		dev_err(&pdev->dev, "IORESOURCE_MEM FAILED\n");
+@@ -382,7 +382,7 @@ static int acp63_pdm_audio_probe(struct platform_device *pdev)
+ 		return -ENOMEM;
+ 
+ 	adata->capture_stream = NULL;
+-	adata->acp_lock = pdev->dev.platform_data;
++	adata->acp_lock = &acp_data->acp_lock;
+ 	dev_set_drvdata(&pdev->dev, adata);
+ 	status = devm_snd_soc_register_component(&pdev->dev,
+ 						 &acp63_pdm_component,
+diff --git a/sound/soc/codecs/wsa881x.c b/sound/soc/codecs/wsa881x.c
+index f709231b1277a..97f6873a0a8c7 100644
+--- a/sound/soc/codecs/wsa881x.c
++++ b/sound/soc/codecs/wsa881x.c
+@@ -645,7 +645,6 @@ static struct regmap_config wsa881x_regmap_config = {
+ 	.readable_reg = wsa881x_readable_register,
+ 	.reg_format_endian = REGMAP_ENDIAN_NATIVE,
+ 	.val_format_endian = REGMAP_ENDIAN_NATIVE,
+-	.can_multi_write = true,
+ };
+ 
+ enum {
+diff --git a/sound/soc/codecs/wsa883x.c b/sound/soc/codecs/wsa883x.c
+index c609cb63dae6d..e80b531435696 100644
+--- a/sound/soc/codecs/wsa883x.c
++++ b/sound/soc/codecs/wsa883x.c
+@@ -946,7 +946,6 @@ static struct regmap_config wsa883x_regmap_config = {
+ 	.writeable_reg = wsa883x_writeable_register,
+ 	.reg_format_endian = REGMAP_ENDIAN_NATIVE,
+ 	.val_format_endian = REGMAP_ENDIAN_NATIVE,
+-	.can_multi_write = true,
+ 	.use_single_read = true,
+ };
+ 
+diff --git a/sound/soc/generic/simple-card-utils.c b/sound/soc/generic/simple-card-utils.c
+index 56552a616f21f..1f24344846ae9 100644
+--- a/sound/soc/generic/simple-card-utils.c
++++ b/sound/soc/generic/simple-card-utils.c
+@@ -314,7 +314,7 @@ int asoc_simple_startup(struct snd_pcm_substream *substream)
+ 		}
+ 		ret = snd_pcm_hw_constraint_minmax(substream->runtime, SNDRV_PCM_HW_PARAM_RATE,
+ 			fixed_rate, fixed_rate);
+-		if (ret)
++		if (ret < 0)
+ 			goto codec_err;
+ 	}
+ 
+diff --git a/sound/soc/mediatek/mt8188/mt8188-afe-clk.c b/sound/soc/mediatek/mt8188/mt8188-afe-clk.c
+index 743d6a162cb9a..0fb97517f82c6 100644
+--- a/sound/soc/mediatek/mt8188/mt8188-afe-clk.c
++++ b/sound/soc/mediatek/mt8188/mt8188-afe-clk.c
+@@ -418,13 +418,6 @@ int mt8188_afe_init_clock(struct mtk_base_afe *afe)
+ 	return 0;
+ }
+ 
+-void mt8188_afe_deinit_clock(void *priv)
+-{
+-	struct mtk_base_afe *afe = priv;
+-
+-	mt8188_audsys_clk_unregister(afe);
+-}
+-
+ int mt8188_afe_enable_clk(struct mtk_base_afe *afe, struct clk *clk)
+ {
+ 	int ret;
+diff --git a/sound/soc/mediatek/mt8188/mt8188-afe-clk.h b/sound/soc/mediatek/mt8188/mt8188-afe-clk.h
+index 084fdfb1d877a..a4203a87a1e35 100644
+--- a/sound/soc/mediatek/mt8188/mt8188-afe-clk.h
++++ b/sound/soc/mediatek/mt8188/mt8188-afe-clk.h
+@@ -100,7 +100,6 @@ int mt8188_afe_get_mclk_source_clk_id(int sel);
+ int mt8188_afe_get_mclk_source_rate(struct mtk_base_afe *afe, int apll);
+ int mt8188_afe_get_default_mclk_source_by_rate(int rate);
+ int mt8188_afe_init_clock(struct mtk_base_afe *afe);
+-void mt8188_afe_deinit_clock(void *priv);
+ int mt8188_afe_enable_clk(struct mtk_base_afe *afe, struct clk *clk);
+ void mt8188_afe_disable_clk(struct mtk_base_afe *afe, struct clk *clk);
+ int mt8188_afe_set_clk_rate(struct mtk_base_afe *afe, struct clk *clk,
+diff --git a/sound/soc/mediatek/mt8188/mt8188-afe-pcm.c b/sound/soc/mediatek/mt8188/mt8188-afe-pcm.c
+index e8e84de865422..45ab6e2829b7a 100644
+--- a/sound/soc/mediatek/mt8188/mt8188-afe-pcm.c
++++ b/sound/soc/mediatek/mt8188/mt8188-afe-pcm.c
+@@ -3185,10 +3185,6 @@ static int mt8188_afe_pcm_dev_probe(struct platform_device *pdev)
+ 	if (ret)
+ 		return dev_err_probe(dev, ret, "init clock error");
+ 
+-	ret = devm_add_action_or_reset(dev, mt8188_afe_deinit_clock, (void *)afe);
+-	if (ret)
+-		return ret;
+-
+ 	spin_lock_init(&afe_priv->afe_ctrl_lock);
+ 
+ 	mutex_init(&afe->irq_alloc_lock);
+diff --git a/sound/soc/mediatek/mt8188/mt8188-audsys-clk.c b/sound/soc/mediatek/mt8188/mt8188-audsys-clk.c
+index be1c53bf47298..c796ad8b62eea 100644
+--- a/sound/soc/mediatek/mt8188/mt8188-audsys-clk.c
++++ b/sound/soc/mediatek/mt8188/mt8188-audsys-clk.c
+@@ -138,6 +138,29 @@ static const struct afe_gate aud_clks[CLK_AUD_NR_CLK] = {
+ 	GATE_AUD6(CLK_AUD_GASRC11, "aud_gasrc11", "top_asm_h", 11),
+ };
+ 
++static void mt8188_audsys_clk_unregister(void *data)
++{
++	struct mtk_base_afe *afe = data;
++	struct mt8188_afe_private *afe_priv = afe->platform_priv;
++	struct clk *clk;
++	struct clk_lookup *cl;
++	int i;
++
++	if (!afe_priv)
++		return;
++
++	for (i = 0; i < CLK_AUD_NR_CLK; i++) {
++		cl = afe_priv->lookup[i];
++		if (!cl)
++			continue;
++
++		clk = cl->clk;
++		clk_unregister_gate(clk);
++
++		clkdev_drop(cl);
++	}
++}
++
+ int mt8188_audsys_clk_register(struct mtk_base_afe *afe)
+ {
+ 	struct mt8188_afe_private *afe_priv = afe->platform_priv;
+@@ -179,27 +202,5 @@ int mt8188_audsys_clk_register(struct mtk_base_afe *afe)
+ 		afe_priv->lookup[i] = cl;
+ 	}
+ 
+-	return 0;
+-}
+-
+-void mt8188_audsys_clk_unregister(struct mtk_base_afe *afe)
+-{
+-	struct mt8188_afe_private *afe_priv = afe->platform_priv;
+-	struct clk *clk;
+-	struct clk_lookup *cl;
+-	int i;
+-
+-	if (!afe_priv)
+-		return;
+-
+-	for (i = 0; i < CLK_AUD_NR_CLK; i++) {
+-		cl = afe_priv->lookup[i];
+-		if (!cl)
+-			continue;
+-
+-		clk = cl->clk;
+-		clk_unregister_gate(clk);
+-
+-		clkdev_drop(cl);
+-	}
++	return devm_add_action_or_reset(afe->dev, mt8188_audsys_clk_unregister, afe);
+ }
+diff --git a/sound/soc/mediatek/mt8188/mt8188-audsys-clk.h b/sound/soc/mediatek/mt8188/mt8188-audsys-clk.h
+index 6c5f463ad7e4d..45b0948c4a06e 100644
+--- a/sound/soc/mediatek/mt8188/mt8188-audsys-clk.h
++++ b/sound/soc/mediatek/mt8188/mt8188-audsys-clk.h
+@@ -10,6 +10,5 @@
+ #define _MT8188_AUDSYS_CLK_H_
+ 
+ int mt8188_audsys_clk_register(struct mtk_base_afe *afe);
+-void mt8188_audsys_clk_unregister(struct mtk_base_afe *afe);
+ 
+ #endif
+diff --git a/sound/soc/mediatek/mt8195/mt8195-afe-clk.c b/sound/soc/mediatek/mt8195/mt8195-afe-clk.c
+index 9ca2cb8c8a9c2..f35318ae07392 100644
+--- a/sound/soc/mediatek/mt8195/mt8195-afe-clk.c
++++ b/sound/soc/mediatek/mt8195/mt8195-afe-clk.c
+@@ -410,11 +410,6 @@ int mt8195_afe_init_clock(struct mtk_base_afe *afe)
+ 	return 0;
+ }
+ 
+-void mt8195_afe_deinit_clock(struct mtk_base_afe *afe)
+-{
+-	mt8195_audsys_clk_unregister(afe);
+-}
+-
+ int mt8195_afe_enable_clk(struct mtk_base_afe *afe, struct clk *clk)
+ {
+ 	int ret;
+diff --git a/sound/soc/mediatek/mt8195/mt8195-afe-clk.h b/sound/soc/mediatek/mt8195/mt8195-afe-clk.h
+index 40663e31becd1..a08c0ee6c8602 100644
+--- a/sound/soc/mediatek/mt8195/mt8195-afe-clk.h
++++ b/sound/soc/mediatek/mt8195/mt8195-afe-clk.h
+@@ -101,7 +101,6 @@ int mt8195_afe_get_mclk_source_clk_id(int sel);
+ int mt8195_afe_get_mclk_source_rate(struct mtk_base_afe *afe, int apll);
+ int mt8195_afe_get_default_mclk_source_by_rate(int rate);
+ int mt8195_afe_init_clock(struct mtk_base_afe *afe);
+-void mt8195_afe_deinit_clock(struct mtk_base_afe *afe);
+ int mt8195_afe_enable_clk(struct mtk_base_afe *afe, struct clk *clk);
+ void mt8195_afe_disable_clk(struct mtk_base_afe *afe, struct clk *clk);
+ int mt8195_afe_prepare_clk(struct mtk_base_afe *afe, struct clk *clk);
+diff --git a/sound/soc/mediatek/mt8195/mt8195-afe-pcm.c b/sound/soc/mediatek/mt8195/mt8195-afe-pcm.c
+index 72b2c6d629b93..03dabc056b916 100644
+--- a/sound/soc/mediatek/mt8195/mt8195-afe-pcm.c
++++ b/sound/soc/mediatek/mt8195/mt8195-afe-pcm.c
+@@ -3253,18 +3253,13 @@ err_pm_put:
+ 	return ret;
+ }
+ 
+-static int mt8195_afe_pcm_dev_remove(struct platform_device *pdev)
++static void mt8195_afe_pcm_dev_remove(struct platform_device *pdev)
+ {
+-	struct mtk_base_afe *afe = platform_get_drvdata(pdev);
+-
+ 	snd_soc_unregister_component(&pdev->dev);
+ 
+ 	pm_runtime_disable(&pdev->dev);
+ 	if (!pm_runtime_status_suspended(&pdev->dev))
+ 		mt8195_afe_runtime_suspend(&pdev->dev);
+-
+-	mt8195_afe_deinit_clock(afe);
+-	return 0;
+ }
+ 
+ static const struct of_device_id mt8195_afe_pcm_dt_match[] = {
+@@ -3285,7 +3280,7 @@ static struct platform_driver mt8195_afe_pcm_driver = {
+ 		   .pm = &mt8195_afe_pm_ops,
+ 	},
+ 	.probe = mt8195_afe_pcm_dev_probe,
+-	.remove = mt8195_afe_pcm_dev_remove,
++	.remove_new = mt8195_afe_pcm_dev_remove,
+ };
+ 
+ module_platform_driver(mt8195_afe_pcm_driver);
+diff --git a/sound/soc/mediatek/mt8195/mt8195-audsys-clk.c b/sound/soc/mediatek/mt8195/mt8195-audsys-clk.c
+index e0670e0dbd5b0..38594bc3f2f77 100644
+--- a/sound/soc/mediatek/mt8195/mt8195-audsys-clk.c
++++ b/sound/soc/mediatek/mt8195/mt8195-audsys-clk.c
+@@ -148,6 +148,29 @@ static const struct afe_gate aud_clks[CLK_AUD_NR_CLK] = {
+ 	GATE_AUD6(CLK_AUD_GASRC19, "aud_gasrc19", "top_asm_h", 19),
+ };
+ 
++static void mt8195_audsys_clk_unregister(void *data)
++{
++	struct mtk_base_afe *afe = data;
++	struct mt8195_afe_private *afe_priv = afe->platform_priv;
++	struct clk *clk;
++	struct clk_lookup *cl;
++	int i;
++
++	if (!afe_priv)
++		return;
++
++	for (i = 0; i < CLK_AUD_NR_CLK; i++) {
++		cl = afe_priv->lookup[i];
++		if (!cl)
++			continue;
++
++		clk = cl->clk;
++		clk_unregister_gate(clk);
++
++		clkdev_drop(cl);
++	}
++}
++
+ int mt8195_audsys_clk_register(struct mtk_base_afe *afe)
+ {
+ 	struct mt8195_afe_private *afe_priv = afe->platform_priv;
+@@ -188,27 +211,5 @@ int mt8195_audsys_clk_register(struct mtk_base_afe *afe)
+ 		afe_priv->lookup[i] = cl;
+ 	}
+ 
+-	return 0;
+-}
+-
+-void mt8195_audsys_clk_unregister(struct mtk_base_afe *afe)
+-{
+-	struct mt8195_afe_private *afe_priv = afe->platform_priv;
+-	struct clk *clk;
+-	struct clk_lookup *cl;
+-	int i;
+-
+-	if (!afe_priv)
+-		return;
+-
+-	for (i = 0; i < CLK_AUD_NR_CLK; i++) {
+-		cl = afe_priv->lookup[i];
+-		if (!cl)
+-			continue;
+-
+-		clk = cl->clk;
+-		clk_unregister_gate(clk);
+-
+-		clkdev_drop(cl);
+-	}
++	return devm_add_action_or_reset(afe->dev, mt8195_audsys_clk_unregister, afe);
+ }
+diff --git a/sound/soc/mediatek/mt8195/mt8195-audsys-clk.h b/sound/soc/mediatek/mt8195/mt8195-audsys-clk.h
+index 239d31016ba76..69db2dd1c9e02 100644
+--- a/sound/soc/mediatek/mt8195/mt8195-audsys-clk.h
++++ b/sound/soc/mediatek/mt8195/mt8195-audsys-clk.h
+@@ -10,6 +10,5 @@
+ #define _MT8195_AUDSYS_CLK_H_
+ 
+ int mt8195_audsys_clk_register(struct mtk_base_afe *afe);
+-void mt8195_audsys_clk_unregister(struct mtk_base_afe *afe);
+ 
+ #endif
+diff --git a/tools/testing/selftests/bpf/prog_tests/sockopt_sk.c b/tools/testing/selftests/bpf/prog_tests/sockopt_sk.c
+index 60d952719d275..05d0e07da3942 100644
+--- a/tools/testing/selftests/bpf/prog_tests/sockopt_sk.c
++++ b/tools/testing/selftests/bpf/prog_tests/sockopt_sk.c
+@@ -3,6 +3,7 @@
+ #include "cgroup_helpers.h"
+ 
+ #include <linux/tcp.h>
++#include <linux/netlink.h>
+ #include "sockopt_sk.skel.h"
+ 
+ #ifndef SOL_TCP
+@@ -183,6 +184,33 @@ static int getsetsockopt(void)
+ 		goto err;
+ 	}
+ 
++	/* optval=NULL case is handled correctly */
++
++	close(fd);
++	fd = socket(AF_NETLINK, SOCK_RAW, 0);
++	if (fd < 0) {
++		log_err("Failed to create AF_NETLINK socket");
++		return -1;
++	}
++
++	buf.u32 = 1;
++	optlen = sizeof(__u32);
++	err = setsockopt(fd, SOL_NETLINK, NETLINK_ADD_MEMBERSHIP, &buf, optlen);
++	if (err) {
++		log_err("Unexpected getsockopt(NETLINK_ADD_MEMBERSHIP) err=%d errno=%d",
++			err, errno);
++		goto err;
++	}
++
++	optlen = 0;
++	err = getsockopt(fd, SOL_NETLINK, NETLINK_LIST_MEMBERSHIPS, NULL, &optlen);
++	if (err) {
++		log_err("Unexpected getsockopt(NETLINK_LIST_MEMBERSHIPS) err=%d errno=%d",
++			err, errno);
++		goto err;
++	}
++	ASSERT_EQ(optlen, 8, "Unexpected NETLINK_LIST_MEMBERSHIPS value");
++
+ 	free(big_buf);
+ 	close(fd);
+ 	return 0;
+diff --git a/tools/testing/selftests/bpf/progs/sockopt_sk.c b/tools/testing/selftests/bpf/progs/sockopt_sk.c
+index c8d810010a946..fe1df4cd206eb 100644
+--- a/tools/testing/selftests/bpf/progs/sockopt_sk.c
++++ b/tools/testing/selftests/bpf/progs/sockopt_sk.c
+@@ -32,6 +32,12 @@ int _getsockopt(struct bpf_sockopt *ctx)
+ 	__u8 *optval_end = ctx->optval_end;
+ 	__u8 *optval = ctx->optval;
+ 	struct sockopt_sk *storage;
++	struct bpf_sock *sk;
++
++	/* Bypass AF_NETLINK. */
++	sk = ctx->sk;
++	if (sk && sk->family == AF_NETLINK)
++		return 1;
+ 
+ 	/* Make sure bpf_get_netns_cookie is callable.
+ 	 */
+@@ -131,6 +137,12 @@ int _setsockopt(struct bpf_sockopt *ctx)
+ 	__u8 *optval_end = ctx->optval_end;
+ 	__u8 *optval = ctx->optval;
+ 	struct sockopt_sk *storage;
++	struct bpf_sock *sk;
++
++	/* Bypass AF_NETLINK. */
++	sk = ctx->sk;
++	if (sk && sk->family == AF_NETLINK)
++		return 1;
+ 
+ 	/* Make sure bpf_get_netns_cookie is callable.
+ 	 */
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_join.sh b/tools/testing/selftests/net/mptcp/mptcp_join.sh
+index 7c20811ab64bb..4152370298bf0 100755
+--- a/tools/testing/selftests/net/mptcp/mptcp_join.sh
++++ b/tools/testing/selftests/net/mptcp/mptcp_join.sh
+@@ -856,7 +856,15 @@ do_transfer()
+ 				     sed -n 's/.*\(token:\)\([[:digit:]]*\).*$/\2/p;q')
+ 				ip netns exec ${listener_ns} ./pm_nl_ctl ann $addr token $tk id $id
+ 				sleep 1
++				sp=$(grep "type:10" "$evts_ns1" |
++				     sed -n 's/.*\(sport:\)\([[:digit:]]*\).*$/\2/p;q')
++				da=$(grep "type:10" "$evts_ns1" |
++				     sed -n 's/.*\(daddr6:\)\([0-9a-f:.]*\).*$/\2/p;q')
++				dp=$(grep "type:10" "$evts_ns1" |
++				     sed -n 's/.*\(dport:\)\([[:digit:]]*\).*$/\2/p;q')
+ 				ip netns exec ${listener_ns} ./pm_nl_ctl rem token $tk id $id
++				ip netns exec ${listener_ns} ./pm_nl_ctl dsf lip "::ffff:$addr" \
++							lport $sp rip $da rport $dp token $tk
+ 			fi
+ 
+ 			counter=$((counter + 1))
+@@ -922,6 +930,7 @@ do_transfer()
+ 				sleep 1
+ 				sp=$(grep "type:10" "$evts_ns2" |
+ 				     sed -n 's/.*\(sport:\)\([[:digit:]]*\).*$/\2/p;q')
++				ip netns exec ${connector_ns} ./pm_nl_ctl rem token $tk id $id
+ 				ip netns exec ${connector_ns} ./pm_nl_ctl dsf lip $addr lport $sp \
+ 									rip $da rport $dp token $tk
+ 			fi
+@@ -3096,7 +3105,7 @@ userspace_tests()
+ 		pm_nl_set_limits $ns1 0 1
+ 		run_tests $ns1 $ns2 10.0.1.1 0 0 userspace_1 slow
+ 		chk_join_nr 1 1 1
+-		chk_rm_nr 0 1
++		chk_rm_nr 1 1
+ 		kill_events_pids
+ 	fi
+ }


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [gentoo-commits] proj/linux-patches:6.3 commit in: /
@ 2023-06-21 14:53 Alice Ferrazzi
  0 siblings, 0 replies; 23+ messages in thread
From: Alice Ferrazzi @ 2023-06-21 14:53 UTC (permalink / raw
  To: gentoo-commits

commit:     c7167210c804f8dd1d5a6c74a10b1a2016d57c75
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Wed Jun 21 14:53:36 2023 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Wed Jun 21 14:53:36 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=c7167210

Linux patch 6.3.9

Signed-off-by: Alice Ferrazzi <alicef <AT> gentoo.org>

 0000_README            |    4 +
 1008_linux-6.3.9.patch | 7701 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 7705 insertions(+)

diff --git a/0000_README b/0000_README
index 5d0c85ce..f891b16a 100644
--- a/0000_README
+++ b/0000_README
@@ -75,6 +75,10 @@ Patch:  1007_linux-6.3.8.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.3.8
 
+Patch:  1008_linux-6.3.9.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.3.9
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1008_linux-6.3.9.patch b/1008_linux-6.3.9.patch
new file mode 100644
index 00000000..bba525bf
--- /dev/null
+++ b/1008_linux-6.3.9.patch
@@ -0,0 +1,7701 @@
+diff --git a/Makefile b/Makefile
+index b4267d7a57b35..f8ee723758856 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 3
+-SUBLEVEL = 8
++SUBLEVEL = 9
+ EXTRAVERSION =
+ NAME = Hurr durr I'ma ninja sloth
+ 
+diff --git a/arch/arm/boot/dts/vexpress-v2p-ca5s.dts b/arch/arm/boot/dts/vexpress-v2p-ca5s.dts
+index 3b88209bacea2..ff1f9a1bcfcfc 100644
+--- a/arch/arm/boot/dts/vexpress-v2p-ca5s.dts
++++ b/arch/arm/boot/dts/vexpress-v2p-ca5s.dts
+@@ -132,6 +132,7 @@
+ 		reg = <0x2c0f0000 0x1000>;
+ 		interrupts = <0 84 4>;
+ 		cache-level = <2>;
++		cache-unified;
+ 	};
+ 
+ 	pmu {
+diff --git a/arch/arm64/boot/dts/arm/foundation-v8.dtsi b/arch/arm64/boot/dts/arm/foundation-v8.dtsi
+index 029578072d8fb..7b41537731a6a 100644
+--- a/arch/arm64/boot/dts/arm/foundation-v8.dtsi
++++ b/arch/arm64/boot/dts/arm/foundation-v8.dtsi
+@@ -59,6 +59,7 @@
+ 		L2_0: l2-cache0 {
+ 			compatible = "cache";
+ 			cache-level = <2>;
++			cache-unified;
+ 		};
+ 	};
+ 
+diff --git a/arch/arm64/boot/dts/arm/rtsm_ve-aemv8a.dts b/arch/arm64/boot/dts/arm/rtsm_ve-aemv8a.dts
+index ef68f5aae7ddf..afdf954206f1d 100644
+--- a/arch/arm64/boot/dts/arm/rtsm_ve-aemv8a.dts
++++ b/arch/arm64/boot/dts/arm/rtsm_ve-aemv8a.dts
+@@ -72,6 +72,7 @@
+ 		L2_0: l2-cache0 {
+ 			compatible = "cache";
+ 			cache-level = <2>;
++			cache-unified;
+ 		};
+ 	};
+ 
+diff --git a/arch/arm64/boot/dts/arm/vexpress-v2f-1xv7-ca53x2.dts b/arch/arm64/boot/dts/arm/vexpress-v2f-1xv7-ca53x2.dts
+index 796cd7d02eb55..7bdeb965f0a96 100644
+--- a/arch/arm64/boot/dts/arm/vexpress-v2f-1xv7-ca53x2.dts
++++ b/arch/arm64/boot/dts/arm/vexpress-v2f-1xv7-ca53x2.dts
+@@ -58,6 +58,7 @@
+ 		L2_0: l2-cache0 {
+ 			compatible = "cache";
+ 			cache-level = <2>;
++			cache-unified;
+ 		};
+ 	};
+ 
+diff --git a/arch/arm64/boot/dts/qcom/sm8550.dtsi b/arch/arm64/boot/dts/qcom/sm8550.dtsi
+index f05cdb376a175..90e3fb11e6e70 100644
+--- a/arch/arm64/boot/dts/qcom/sm8550.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8550.dtsi
+@@ -3423,9 +3423,16 @@
+ 
+ 		system-cache-controller@25000000 {
+ 			compatible = "qcom,sm8550-llcc";
+-			reg = <0 0x25000000 0 0x800000>,
++			reg = <0 0x25000000 0 0x200000>,
++			      <0 0x25200000 0 0x200000>,
++			      <0 0x25400000 0 0x200000>,
++			      <0 0x25600000 0 0x200000>,
+ 			      <0 0x25800000 0 0x200000>;
+-			reg-names = "llcc_base", "llcc_broadcast_base";
++			reg-names = "llcc0_base",
++				    "llcc1_base",
++				    "llcc2_base",
++				    "llcc3_base",
++				    "llcc_broadcast_base";
+ 			interrupts = <GIC_SPI 266 IRQ_TYPE_LEVEL_HIGH>;
+ 		};
+ 
+diff --git a/arch/loongarch/kernel/perf_event.c b/arch/loongarch/kernel/perf_event.c
+index 707bd32e5c4ff..3a2edb157b65a 100644
+--- a/arch/loongarch/kernel/perf_event.c
++++ b/arch/loongarch/kernel/perf_event.c
+@@ -271,7 +271,7 @@ static void loongarch_pmu_enable_event(struct hw_perf_event *evt, int idx)
+ 	WARN_ON(idx < 0 || idx >= loongarch_pmu.num_counters);
+ 
+ 	/* Make sure interrupt enabled. */
+-	cpuc->saved_ctrl[idx] = M_PERFCTL_EVENT(evt->event_base & 0xff) |
++	cpuc->saved_ctrl[idx] = M_PERFCTL_EVENT(evt->event_base) |
+ 		(evt->config_base & M_PERFCTL_CONFIG_MASK) | CSR_PERFCTRL_IE;
+ 
+ 	cpu = (event->cpu >= 0) ? event->cpu : smp_processor_id();
+@@ -594,7 +594,7 @@ static struct pmu pmu = {
+ 
+ static unsigned int loongarch_pmu_perf_event_encode(const struct loongarch_perf_event *pev)
+ {
+-	return (pev->event_id & 0xff);
++	return M_PERFCTL_EVENT(pev->event_id);
+ }
+ 
+ static const struct loongarch_perf_event *loongarch_pmu_map_general_event(int idx)
+@@ -849,7 +849,7 @@ static void resume_local_counters(void)
+ 
+ static const struct loongarch_perf_event *loongarch_pmu_map_raw_event(u64 config)
+ {
+-	raw_event.event_id = config & 0xff;
++	raw_event.event_id = M_PERFCTL_EVENT(config);
+ 
+ 	return &raw_event;
+ }
+diff --git a/arch/loongarch/kernel/unaligned.c b/arch/loongarch/kernel/unaligned.c
+index bdff825d29ef4..85fae3d2d71ae 100644
+--- a/arch/loongarch/kernel/unaligned.c
++++ b/arch/loongarch/kernel/unaligned.c
+@@ -485,7 +485,7 @@ static int __init debugfs_unaligned(void)
+ 	struct dentry *d;
+ 
+ 	d = debugfs_create_dir("loongarch", NULL);
+-	if (!d)
++	if (IS_ERR_OR_NULL(d))
+ 		return -ENOMEM;
+ 
+ 	debugfs_create_u32("unaligned_instructions_user",
+diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
+index e2f3ca73f40d6..5b3f1f1dfd164 100644
+--- a/arch/mips/Kconfig
++++ b/arch/mips/Kconfig
+@@ -82,6 +82,7 @@ config MIPS
+ 	select HAVE_LD_DEAD_CODE_DATA_ELIMINATION
+ 	select HAVE_MOD_ARCH_SPECIFIC
+ 	select HAVE_NMI
++	select HAVE_PATA_PLATFORM
+ 	select HAVE_PERF_EVENTS
+ 	select HAVE_PERF_REGS
+ 	select HAVE_PERF_USER_STACK_DUMP
+diff --git a/arch/mips/alchemy/common/dbdma.c b/arch/mips/alchemy/common/dbdma.c
+index 5ab0430004092..6a3c890f7bbfe 100644
+--- a/arch/mips/alchemy/common/dbdma.c
++++ b/arch/mips/alchemy/common/dbdma.c
+@@ -30,6 +30,7 @@
+  *
+  */
+ 
++#include <linux/dma-map-ops.h> /* for dma_default_coherent */
+ #include <linux/init.h>
+ #include <linux/kernel.h>
+ #include <linux/slab.h>
+@@ -623,17 +624,18 @@ u32 au1xxx_dbdma_put_source(u32 chanid, dma_addr_t buf, int nbytes, u32 flags)
+ 		dp->dscr_cmd0 &= ~DSCR_CMD0_IE;
+ 
+ 	/*
+-	 * There is an errata on the Au1200/Au1550 parts that could result
+-	 * in "stale" data being DMA'ed. It has to do with the snoop logic on
+-	 * the cache eviction buffer.  DMA_NONCOHERENT is on by default for
+-	 * these parts. If it is fixed in the future, these dma_cache_inv will
+-	 * just be nothing more than empty macros. See io.h.
++	 * There is an erratum on certain Au1200/Au1550 revisions that could
++	 * result in "stale" data being DMA'ed. It has to do with the snoop
++	 * logic on the cache eviction buffer.  dma_default_coherent is set
++	 * to false on these parts.
+ 	 */
+-	dma_cache_wback_inv((unsigned long)buf, nbytes);
++	if (!dma_default_coherent)
++		dma_cache_wback_inv(KSEG0ADDR(buf), nbytes);
+ 	dp->dscr_cmd0 |= DSCR_CMD0_V;	/* Let it rip */
+ 	wmb(); /* drain writebuffer */
+ 	dma_cache_wback_inv((unsigned long)dp, sizeof(*dp));
+ 	ctp->chan_ptr->ddma_dbell = 0;
++	wmb(); /* force doorbell write out to dma engine */
+ 
+ 	/* Get next descriptor pointer. */
+ 	ctp->put_ptr = phys_to_virt(DSCR_GET_NXTPTR(dp->dscr_nxtptr));
+@@ -685,17 +687,18 @@ u32 au1xxx_dbdma_put_dest(u32 chanid, dma_addr_t buf, int nbytes, u32 flags)
+ 			  dp->dscr_source1, dp->dscr_dest0, dp->dscr_dest1);
+ #endif
+ 	/*
+-	 * There is an errata on the Au1200/Au1550 parts that could result in
+-	 * "stale" data being DMA'ed. It has to do with the snoop logic on the
+-	 * cache eviction buffer.  DMA_NONCOHERENT is on by default for these
+-	 * parts. If it is fixed in the future, these dma_cache_inv will just
+-	 * be nothing more than empty macros. See io.h.
++	 * There is an erratum on certain Au1200/Au1550 revisions that could
++	 * result in "stale" data being DMA'ed. It has to do with the snoop
++	 * logic on the cache eviction buffer.  dma_default_coherent is set
++	 * to false on these parts.
+ 	 */
+-	dma_cache_inv((unsigned long)buf, nbytes);
++	if (!dma_default_coherent)
++		dma_cache_inv(KSEG0ADDR(buf), nbytes);
+ 	dp->dscr_cmd0 |= DSCR_CMD0_V;	/* Let it rip */
+ 	wmb(); /* drain writebuffer */
+ 	dma_cache_wback_inv((unsigned long)dp, sizeof(*dp));
+ 	ctp->chan_ptr->ddma_dbell = 0;
++	wmb(); /* force doorbell write out to dma engine */
+ 
+ 	/* Get next descriptor pointer. */
+ 	ctp->put_ptr = phys_to_virt(DSCR_GET_NXTPTR(dp->dscr_nxtptr));
+diff --git a/arch/mips/kernel/cpu-probe.c b/arch/mips/kernel/cpu-probe.c
+index 7ddf07f255f32..6f5d825958778 100644
+--- a/arch/mips/kernel/cpu-probe.c
++++ b/arch/mips/kernel/cpu-probe.c
+@@ -1502,6 +1502,10 @@ static inline void cpu_probe_alchemy(struct cpuinfo_mips *c, unsigned int cpu)
+ 			break;
+ 		}
+ 		break;
++	case PRID_IMP_NETLOGIC_AU13XX:
++		c->cputype = CPU_ALCHEMY;
++		__cpu_name[cpu] = "Au1300";
++		break;
+ 	}
+ }
+ 
+@@ -1861,6 +1865,7 @@ void cpu_probe(void)
+ 		cpu_probe_mips(c, cpu);
+ 		break;
+ 	case PRID_COMP_ALCHEMY:
++	case PRID_COMP_NETLOGIC:
+ 		cpu_probe_alchemy(c, cpu);
+ 		break;
+ 	case PRID_COMP_SIBYTE:
+diff --git a/arch/mips/kernel/setup.c b/arch/mips/kernel/setup.c
+index f1c88f8a1dc51..81dbb4ef52317 100644
+--- a/arch/mips/kernel/setup.c
++++ b/arch/mips/kernel/setup.c
+@@ -158,10 +158,6 @@ static unsigned long __init init_initrd(void)
+ 		pr_err("initrd start must be page aligned\n");
+ 		goto disable;
+ 	}
+-	if (initrd_start < PAGE_OFFSET) {
+-		pr_err("initrd start < PAGE_OFFSET\n");
+-		goto disable;
+-	}
+ 
+ 	/*
+ 	 * Sanitize initrd addresses. For example firmware
+@@ -174,6 +170,11 @@ static unsigned long __init init_initrd(void)
+ 	initrd_end = (unsigned long)__va(end);
+ 	initrd_start = (unsigned long)__va(__pa(initrd_start));
+ 
++	if (initrd_start < PAGE_OFFSET) {
++		pr_err("initrd start < PAGE_OFFSET\n");
++		goto disable;
++	}
++
+ 	ROOT_DEV = Root_RAM0;
+ 	return PFN_UP(end);
+ disable:
+diff --git a/arch/nios2/boot/dts/10m50_devboard.dts b/arch/nios2/boot/dts/10m50_devboard.dts
+index 56339bef3247d..0e7e5b0dd685c 100644
+--- a/arch/nios2/boot/dts/10m50_devboard.dts
++++ b/arch/nios2/boot/dts/10m50_devboard.dts
+@@ -97,7 +97,7 @@
+ 			rx-fifo-depth = <8192>;
+ 			tx-fifo-depth = <8192>;
+ 			address-bits = <48>;
+-			max-frame-size = <1518>;
++			max-frame-size = <1500>;
+ 			local-mac-address = [00 00 00 00 00 00];
+ 			altr,has-supplementary-unicast;
+ 			altr,enable-sup-addr = <1>;
+diff --git a/arch/nios2/boot/dts/3c120_devboard.dts b/arch/nios2/boot/dts/3c120_devboard.dts
+index d10fb81686c7e..3ee3169063797 100644
+--- a/arch/nios2/boot/dts/3c120_devboard.dts
++++ b/arch/nios2/boot/dts/3c120_devboard.dts
+@@ -106,7 +106,7 @@
+ 				interrupt-names = "rx_irq", "tx_irq";
+ 				rx-fifo-depth = <8192>;
+ 				tx-fifo-depth = <8192>;
+-				max-frame-size = <1518>;
++				max-frame-size = <1500>;
+ 				local-mac-address = [ 00 00 00 00 00 00 ];
+ 				phy-mode = "rgmii-id";
+ 				phy-handle = <&phy0>;
+diff --git a/arch/parisc/include/asm/assembly.h b/arch/parisc/include/asm/assembly.h
+index 0f0d4a496fef0..75677b526b2bb 100644
+--- a/arch/parisc/include/asm/assembly.h
++++ b/arch/parisc/include/asm/assembly.h
+@@ -90,10 +90,6 @@
+ #include <asm/asmregs.h>
+ #include <asm/psw.h>
+ 
+-	sp	=	30
+-	gp	=	27
+-	ipsw	=	22
+-
+ 	/*
+ 	 * We provide two versions of each macro to convert from physical
+ 	 * to virtual and vice versa. The "_r1" versions take one argument
+diff --git a/arch/parisc/kernel/pci-dma.c b/arch/parisc/kernel/pci-dma.c
+index ba87f791323be..71ed5391f29d6 100644
+--- a/arch/parisc/kernel/pci-dma.c
++++ b/arch/parisc/kernel/pci-dma.c
+@@ -446,11 +446,27 @@ void arch_dma_free(struct device *dev, size_t size, void *vaddr,
+ void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
+ 		enum dma_data_direction dir)
+ {
++	/*
++	 * fdc: The data cache line is written back to memory, if and only if
++	 * it is dirty, and then invalidated from the data cache.
++	 */
+ 	flush_kernel_dcache_range((unsigned long)phys_to_virt(paddr), size);
+ }
+ 
+ void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
+ 		enum dma_data_direction dir)
+ {
+-	flush_kernel_dcache_range((unsigned long)phys_to_virt(paddr), size);
++	unsigned long addr = (unsigned long) phys_to_virt(paddr);
++
++	switch (dir) {
++	case DMA_TO_DEVICE:
++	case DMA_BIDIRECTIONAL:
++		flush_kernel_dcache_range(addr, size);
++		return;
++	case DMA_FROM_DEVICE:
++		purge_kernel_dcache_range_asm(addr, addr + size);
++		return;
++	default:
++		BUG();
++	}
+ }
+diff --git a/arch/powerpc/purgatory/Makefile b/arch/powerpc/purgatory/Makefile
+index 6f5e2727963c4..78473d69cd2b6 100644
+--- a/arch/powerpc/purgatory/Makefile
++++ b/arch/powerpc/purgatory/Makefile
+@@ -5,6 +5,11 @@ KCSAN_SANITIZE := n
+ 
+ targets += trampoline_$(BITS).o purgatory.ro
+ 
++# When profile-guided optimization is enabled, llvm emits two different
++# overlapping text sections, which is not supported by kexec. Remove profile
++# optimization flags.
++KBUILD_CFLAGS := $(filter-out -fprofile-sample-use=% -fprofile-use=%,$(KBUILD_CFLAGS))
++
+ LDFLAGS_purgatory.ro := -e purgatory_start -r --no-undefined
+ 
+ $(obj)/purgatory.ro: $(obj)/trampoline_$(BITS).o FORCE
+diff --git a/arch/riscv/purgatory/Makefile b/arch/riscv/purgatory/Makefile
+index 5730797a6b402..bd2e27f825326 100644
+--- a/arch/riscv/purgatory/Makefile
++++ b/arch/riscv/purgatory/Makefile
+@@ -35,6 +35,11 @@ CFLAGS_sha256.o := -D__DISABLE_EXPORTS
+ CFLAGS_string.o := -D__DISABLE_EXPORTS
+ CFLAGS_ctype.o := -D__DISABLE_EXPORTS
+ 
++# When profile-guided optimization is enabled, llvm emits two different
++# overlapping text sections, which is not supported by kexec. Remove profile
++# optimization flags.
++KBUILD_CFLAGS := $(filter-out -fprofile-sample-use=% -fprofile-use=%,$(KBUILD_CFLAGS))
++
+ # When linking purgatory.ro with -r unresolved symbols are not checked,
+ # also link a purgatory.chk binary without -r to check for unresolved symbols.
+ PURGATORY_LDFLAGS := -e purgatory_start -z nodefaultlib
+diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
+index 222efd4a09bc8..f047723c6ca5d 100644
+--- a/arch/x86/kernel/head_64.S
++++ b/arch/x86/kernel/head_64.S
+@@ -85,6 +85,15 @@ SYM_CODE_START_NOALIGN(startup_64)
+ 	call	startup_64_setup_env
+ 	popq	%rsi
+ 
++	/* Now switch to __KERNEL_CS so IRET works reliably */
++	pushq	$__KERNEL_CS
++	leaq	.Lon_kernel_cs(%rip), %rax
++	pushq	%rax
++	lretq
++
++.Lon_kernel_cs:
++	UNWIND_HINT_EMPTY
++
+ #ifdef CONFIG_AMD_MEM_ENCRYPT
+ 	/*
+ 	 * Activate SEV/SME memory encryption if supported/enabled. This needs to
+@@ -98,15 +107,6 @@ SYM_CODE_START_NOALIGN(startup_64)
+ 	popq	%rsi
+ #endif
+ 
+-	/* Now switch to __KERNEL_CS so IRET works reliably */
+-	pushq	$__KERNEL_CS
+-	leaq	.Lon_kernel_cs(%rip), %rax
+-	pushq	%rax
+-	lretq
+-
+-.Lon_kernel_cs:
+-	UNWIND_HINT_EMPTY
+-
+ 	/* Sanitize CPU configuration */
+ 	call verify_cpu
+ 
+diff --git a/arch/x86/purgatory/Makefile b/arch/x86/purgatory/Makefile
+index 82fec66d46d29..42abd6af11984 100644
+--- a/arch/x86/purgatory/Makefile
++++ b/arch/x86/purgatory/Makefile
+@@ -14,6 +14,11 @@ $(obj)/sha256.o: $(srctree)/lib/crypto/sha256.c FORCE
+ 
+ CFLAGS_sha256.o := -D__DISABLE_EXPORTS
+ 
++# When profile-guided optimization is enabled, llvm emits two different
++# overlapping text sections, which is not supported by kexec. Remove profile
++# optimization flags.
++KBUILD_CFLAGS := $(filter-out -fprofile-sample-use=% -fprofile-use=%,$(KBUILD_CFLAGS))
++
+ # When linking purgatory.ro with -r unresolved symbols are not checked,
+ # also link a purgatory.chk binary without -r to check for unresolved symbols.
+ PURGATORY_LDFLAGS := -e purgatory_start -z nodefaultlib
+diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
+index 75bad5d60c9f4..dd6d1c0117b15 100644
+--- a/block/blk-cgroup.c
++++ b/block/blk-cgroup.c
+@@ -35,6 +35,8 @@
+ #include "blk-throttle.h"
+ #include "blk-rq-qos.h"
+ 
++static void __blkcg_rstat_flush(struct blkcg *blkcg, int cpu);
++
+ /*
+  * blkcg_pol_mutex protects blkcg_policy[] and policy [de]activation.
+  * blkcg_pol_register_mutex nests outside of it and synchronizes entire
+@@ -58,6 +60,8 @@ static LIST_HEAD(all_blkcgs);		/* protected by blkcg_pol_mutex */
+ bool blkcg_debug_stats = false;
+ static struct workqueue_struct *blkcg_punt_bio_wq;
+ 
++static DEFINE_RAW_SPINLOCK(blkg_stat_lock);
++
+ #define BLKG_DESTROY_BATCH_SIZE  64
+ 
+ /*
+@@ -165,8 +169,18 @@ static void blkg_free(struct blkcg_gq *blkg)
+ static void __blkg_release(struct rcu_head *rcu)
+ {
+ 	struct blkcg_gq *blkg = container_of(rcu, struct blkcg_gq, rcu_head);
++	struct blkcg *blkcg = blkg->blkcg;
++	int cpu;
+ 
+ 	WARN_ON(!bio_list_empty(&blkg->async_bios));
++	/*
++	 * Flush all the non-empty percpu lockless lists before releasing
++	 * us, given these stat belongs to us.
++	 *
++	 * blkg_stat_lock is for serializing blkg stat update
++	 */
++	for_each_possible_cpu(cpu)
++		__blkcg_rstat_flush(blkcg, cpu);
+ 
+ 	/* release the blkcg and parent blkg refs this blkg has been holding */
+ 	css_put(&blkg->blkcg->css);
+@@ -888,23 +902,26 @@ static void blkcg_iostat_update(struct blkcg_gq *blkg, struct blkg_iostat *cur,
+ 	u64_stats_update_end_irqrestore(&blkg->iostat.sync, flags);
+ }
+ 
+-static void blkcg_rstat_flush(struct cgroup_subsys_state *css, int cpu)
++static void __blkcg_rstat_flush(struct blkcg *blkcg, int cpu)
+ {
+-	struct blkcg *blkcg = css_to_blkcg(css);
+ 	struct llist_head *lhead = per_cpu_ptr(blkcg->lhead, cpu);
+ 	struct llist_node *lnode;
+ 	struct blkg_iostat_set *bisc, *next_bisc;
+ 
+-	/* Root-level stats are sourced from system-wide IO stats */
+-	if (!cgroup_parent(css->cgroup))
+-		return;
+-
+ 	rcu_read_lock();
+ 
+ 	lnode = llist_del_all(lhead);
+ 	if (!lnode)
+ 		goto out;
+ 
++	/*
++	 * For covering concurrent parent blkg update from blkg_release().
++	 *
++	 * When flushing from cgroup, cgroup_rstat_lock is always held, so
++	 * this lock won't cause contention most of time.
++	 */
++	raw_spin_lock(&blkg_stat_lock);
++
+ 	/*
+ 	 * Iterate only the iostat_cpu's queued in the lockless list.
+ 	 */
+@@ -928,13 +945,19 @@ static void blkcg_rstat_flush(struct cgroup_subsys_state *css, int cpu)
+ 		if (parent && parent->parent)
+ 			blkcg_iostat_update(parent, &blkg->iostat.cur,
+ 					    &blkg->iostat.last);
+-		percpu_ref_put(&blkg->refcnt);
+ 	}
+-
++	raw_spin_unlock(&blkg_stat_lock);
+ out:
+ 	rcu_read_unlock();
+ }
+ 
++static void blkcg_rstat_flush(struct cgroup_subsys_state *css, int cpu)
++{
++	/* Root-level stats are sourced from system-wide IO stats */
++	if (cgroup_parent(css->cgroup))
++		__blkcg_rstat_flush(css_to_blkcg(css), cpu);
++}
++
+ /*
+  * We source root cgroup stats from the system-wide stats to avoid
+  * tracking the same information twice and incurring overhead when no
+@@ -2063,7 +2086,6 @@ void blk_cgroup_bio_start(struct bio *bio)
+ 
+ 		llist_add(&bis->lnode, lhead);
+ 		WRITE_ONCE(bis->lqueued, true);
+-		percpu_ref_get(&bis->blkg->refcnt);
+ 	}
+ 
+ 	u64_stats_update_end_irqrestore(&bis->sync, flags);
+diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
+index 23ed258b57f0e..c1890c8a9f6e7 100644
+--- a/drivers/block/xen-blkfront.c
++++ b/drivers/block/xen-blkfront.c
+@@ -780,7 +780,8 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
+ 		ring_req->u.rw.handle = info->handle;
+ 		ring_req->operation = rq_data_dir(req) ?
+ 			BLKIF_OP_WRITE : BLKIF_OP_READ;
+-		if (req_op(req) == REQ_OP_FLUSH || req->cmd_flags & REQ_FUA) {
++		if (req_op(req) == REQ_OP_FLUSH ||
++		    (req_op(req) == REQ_OP_WRITE && (req->cmd_flags & REQ_FUA))) {
+ 			/*
+ 			 * Ideally we can do an unordered flush-to-disk.
+ 			 * In case the backend onlysupports barriers, use that.
+diff --git a/drivers/char/agp/parisc-agp.c b/drivers/char/agp/parisc-agp.c
+index d68d05d5d3838..514f9f287a781 100644
+--- a/drivers/char/agp/parisc-agp.c
++++ b/drivers/char/agp/parisc-agp.c
+@@ -90,6 +90,9 @@ parisc_agp_tlbflush(struct agp_memory *mem)
+ {
+ 	struct _parisc_agp_info *info = &parisc_agp_info;
+ 
++	/* force fdc ops to be visible to IOMMU */
++	asm_io_sync();
++
+ 	writeq(info->gart_base | ilog2(info->gart_size), info->ioc_regs+IOC_PCOM);
+ 	readq(info->ioc_regs+IOC_PCOM);	/* flush */
+ }
+@@ -158,6 +161,7 @@ parisc_agp_insert_memory(struct agp_memory *mem, off_t pg_start, int type)
+ 			info->gatt[j] =
+ 				parisc_agp_mask_memory(agp_bridge,
+ 					paddr, type);
++			asm_io_fdc(&info->gatt[j]);
+ 		}
+ 	}
+ 
+@@ -191,7 +195,16 @@ static unsigned long
+ parisc_agp_mask_memory(struct agp_bridge_data *bridge, dma_addr_t addr,
+ 		       int type)
+ {
+-	return SBA_PDIR_VALID_BIT | addr;
++	unsigned ci;			/* coherent index */
++	dma_addr_t pa;
++
++	pa = addr & IOVP_MASK;
++	asm("lci 0(%1), %0" : "=r" (ci) : "r" (phys_to_virt(pa)));
++
++	pa |= (ci >> PAGE_SHIFT) & 0xff;/* move CI (8 bits) into lowest byte */
++	pa |= SBA_PDIR_VALID_BIT;	/* set "valid" bit */
++
++	return cpu_to_le64(pa);
+ }
+ 
+ static void
+diff --git a/drivers/clk/pxa/clk-pxa3xx.c b/drivers/clk/pxa/clk-pxa3xx.c
+index 42958a5426625..621e298f101a0 100644
+--- a/drivers/clk/pxa/clk-pxa3xx.c
++++ b/drivers/clk/pxa/clk-pxa3xx.c
+@@ -164,7 +164,7 @@ void pxa3xx_clk_update_accr(u32 disable, u32 enable, u32 xclkcfg, u32 mask)
+ 	accr &= ~disable;
+ 	accr |= enable;
+ 
+-	writel(accr, ACCR);
++	writel(accr, clk_regs + ACCR);
+ 	if (xclkcfg)
+ 		__asm__("mcr p14, 0, %0, c6, c0, 0\n" : : "r"(xclkcfg));
+ 
+diff --git a/drivers/edac/qcom_edac.c b/drivers/edac/qcom_edac.c
+index a7158b570ddd0..ae84558481770 100644
+--- a/drivers/edac/qcom_edac.c
++++ b/drivers/edac/qcom_edac.c
+@@ -21,30 +21,9 @@
+ #define TRP_SYN_REG_CNT                 6
+ #define DRP_SYN_REG_CNT                 8
+ 
+-#define LLCC_COMMON_STATUS0             0x0003000c
+ #define LLCC_LB_CNT_MASK                GENMASK(31, 28)
+ #define LLCC_LB_CNT_SHIFT               28
+ 
+-/* Single & double bit syndrome register offsets */
+-#define TRP_ECC_SB_ERR_SYN0             0x0002304c
+-#define TRP_ECC_DB_ERR_SYN0             0x00020370
+-#define DRP_ECC_SB_ERR_SYN0             0x0004204c
+-#define DRP_ECC_DB_ERR_SYN0             0x00042070
+-
+-/* Error register offsets */
+-#define TRP_ECC_ERROR_STATUS1           0x00020348
+-#define TRP_ECC_ERROR_STATUS0           0x00020344
+-#define DRP_ECC_ERROR_STATUS1           0x00042048
+-#define DRP_ECC_ERROR_STATUS0           0x00042044
+-
+-/* TRP, DRP interrupt register offsets */
+-#define DRP_INTERRUPT_STATUS            0x00041000
+-#define TRP_INTERRUPT_0_STATUS          0x00020480
+-#define DRP_INTERRUPT_CLEAR             0x00041008
+-#define DRP_ECC_ERROR_CNTR_CLEAR        0x00040004
+-#define TRP_INTERRUPT_0_CLEAR           0x00020484
+-#define TRP_ECC_ERROR_CNTR_CLEAR        0x00020440
+-
+ /* Mask and shift macros */
+ #define ECC_DB_ERR_COUNT_MASK           GENMASK(4, 0)
+ #define ECC_DB_ERR_WAYS_MASK            GENMASK(31, 16)
+@@ -60,15 +39,6 @@
+ #define DRP_TRP_INT_CLEAR               GENMASK(1, 0)
+ #define DRP_TRP_CNT_CLEAR               GENMASK(1, 0)
+ 
+-/* Config registers offsets*/
+-#define DRP_ECC_ERROR_CFG               0x00040000
+-
+-/* Tag RAM, Data RAM interrupt register offsets */
+-#define CMN_INTERRUPT_0_ENABLE          0x0003001c
+-#define CMN_INTERRUPT_2_ENABLE          0x0003003c
+-#define TRP_INTERRUPT_0_ENABLE          0x00020488
+-#define DRP_INTERRUPT_ENABLE            0x0004100c
+-
+ #define SB_ERROR_THRESHOLD              0x1
+ #define SB_ERROR_THRESHOLD_SHIFT        24
+ #define SB_DB_TRP_INTERRUPT_ENABLE      0x3
+@@ -88,9 +58,6 @@ enum {
+ static const struct llcc_edac_reg_data edac_reg_data[] = {
+ 	[LLCC_DRAM_CE] = {
+ 		.name = "DRAM Single-bit",
+-		.synd_reg = DRP_ECC_SB_ERR_SYN0,
+-		.count_status_reg = DRP_ECC_ERROR_STATUS1,
+-		.ways_status_reg = DRP_ECC_ERROR_STATUS0,
+ 		.reg_cnt = DRP_SYN_REG_CNT,
+ 		.count_mask = ECC_SB_ERR_COUNT_MASK,
+ 		.ways_mask = ECC_SB_ERR_WAYS_MASK,
+@@ -98,9 +65,6 @@ static const struct llcc_edac_reg_data edac_reg_data[] = {
+ 	},
+ 	[LLCC_DRAM_UE] = {
+ 		.name = "DRAM Double-bit",
+-		.synd_reg = DRP_ECC_DB_ERR_SYN0,
+-		.count_status_reg = DRP_ECC_ERROR_STATUS1,
+-		.ways_status_reg = DRP_ECC_ERROR_STATUS0,
+ 		.reg_cnt = DRP_SYN_REG_CNT,
+ 		.count_mask = ECC_DB_ERR_COUNT_MASK,
+ 		.ways_mask = ECC_DB_ERR_WAYS_MASK,
+@@ -108,9 +72,6 @@ static const struct llcc_edac_reg_data edac_reg_data[] = {
+ 	},
+ 	[LLCC_TRAM_CE] = {
+ 		.name = "TRAM Single-bit",
+-		.synd_reg = TRP_ECC_SB_ERR_SYN0,
+-		.count_status_reg = TRP_ECC_ERROR_STATUS1,
+-		.ways_status_reg = TRP_ECC_ERROR_STATUS0,
+ 		.reg_cnt = TRP_SYN_REG_CNT,
+ 		.count_mask = ECC_SB_ERR_COUNT_MASK,
+ 		.ways_mask = ECC_SB_ERR_WAYS_MASK,
+@@ -118,9 +79,6 @@ static const struct llcc_edac_reg_data edac_reg_data[] = {
+ 	},
+ 	[LLCC_TRAM_UE] = {
+ 		.name = "TRAM Double-bit",
+-		.synd_reg = TRP_ECC_DB_ERR_SYN0,
+-		.count_status_reg = TRP_ECC_ERROR_STATUS1,
+-		.ways_status_reg = TRP_ECC_ERROR_STATUS0,
+ 		.reg_cnt = TRP_SYN_REG_CNT,
+ 		.count_mask = ECC_DB_ERR_COUNT_MASK,
+ 		.ways_mask = ECC_DB_ERR_WAYS_MASK,
+@@ -128,7 +86,7 @@ static const struct llcc_edac_reg_data edac_reg_data[] = {
+ 	},
+ };
+ 
+-static int qcom_llcc_core_setup(struct regmap *llcc_bcast_regmap)
++static int qcom_llcc_core_setup(struct llcc_drv_data *drv, struct regmap *llcc_bcast_regmap)
+ {
+ 	u32 sb_err_threshold;
+ 	int ret;
+@@ -137,31 +95,31 @@ static int qcom_llcc_core_setup(struct regmap *llcc_bcast_regmap)
+ 	 * Configure interrupt enable registers such that Tag, Data RAM related
+ 	 * interrupts are propagated to interrupt controller for servicing
+ 	 */
+-	ret = regmap_update_bits(llcc_bcast_regmap, CMN_INTERRUPT_2_ENABLE,
++	ret = regmap_update_bits(llcc_bcast_regmap, drv->edac_reg_offset->cmn_interrupt_2_enable,
+ 				 TRP0_INTERRUPT_ENABLE,
+ 				 TRP0_INTERRUPT_ENABLE);
+ 	if (ret)
+ 		return ret;
+ 
+-	ret = regmap_update_bits(llcc_bcast_regmap, TRP_INTERRUPT_0_ENABLE,
++	ret = regmap_update_bits(llcc_bcast_regmap, drv->edac_reg_offset->trp_interrupt_0_enable,
+ 				 SB_DB_TRP_INTERRUPT_ENABLE,
+ 				 SB_DB_TRP_INTERRUPT_ENABLE);
+ 	if (ret)
+ 		return ret;
+ 
+ 	sb_err_threshold = (SB_ERROR_THRESHOLD << SB_ERROR_THRESHOLD_SHIFT);
+-	ret = regmap_write(llcc_bcast_regmap, DRP_ECC_ERROR_CFG,
++	ret = regmap_write(llcc_bcast_regmap, drv->edac_reg_offset->drp_ecc_error_cfg,
+ 			   sb_err_threshold);
+ 	if (ret)
+ 		return ret;
+ 
+-	ret = regmap_update_bits(llcc_bcast_regmap, CMN_INTERRUPT_2_ENABLE,
++	ret = regmap_update_bits(llcc_bcast_regmap, drv->edac_reg_offset->cmn_interrupt_2_enable,
+ 				 DRP0_INTERRUPT_ENABLE,
+ 				 DRP0_INTERRUPT_ENABLE);
+ 	if (ret)
+ 		return ret;
+ 
+-	ret = regmap_write(llcc_bcast_regmap, DRP_INTERRUPT_ENABLE,
++	ret = regmap_write(llcc_bcast_regmap, drv->edac_reg_offset->drp_interrupt_enable,
+ 			   SB_DB_DRP_INTERRUPT_ENABLE);
+ 	return ret;
+ }
+@@ -175,24 +133,28 @@ qcom_llcc_clear_error_status(int err_type, struct llcc_drv_data *drv)
+ 	switch (err_type) {
+ 	case LLCC_DRAM_CE:
+ 	case LLCC_DRAM_UE:
+-		ret = regmap_write(drv->bcast_regmap, DRP_INTERRUPT_CLEAR,
++		ret = regmap_write(drv->bcast_regmap,
++				   drv->edac_reg_offset->drp_interrupt_clear,
+ 				   DRP_TRP_INT_CLEAR);
+ 		if (ret)
+ 			return ret;
+ 
+-		ret = regmap_write(drv->bcast_regmap, DRP_ECC_ERROR_CNTR_CLEAR,
++		ret = regmap_write(drv->bcast_regmap,
++				   drv->edac_reg_offset->drp_ecc_error_cntr_clear,
+ 				   DRP_TRP_CNT_CLEAR);
+ 		if (ret)
+ 			return ret;
+ 		break;
+ 	case LLCC_TRAM_CE:
+ 	case LLCC_TRAM_UE:
+-		ret = regmap_write(drv->bcast_regmap, TRP_INTERRUPT_0_CLEAR,
++		ret = regmap_write(drv->bcast_regmap,
++				   drv->edac_reg_offset->trp_interrupt_0_clear,
+ 				   DRP_TRP_INT_CLEAR);
+ 		if (ret)
+ 			return ret;
+ 
+-		ret = regmap_write(drv->bcast_regmap, TRP_ECC_ERROR_CNTR_CLEAR,
++		ret = regmap_write(drv->bcast_regmap,
++				   drv->edac_reg_offset->trp_ecc_error_cntr_clear,
+ 				   DRP_TRP_CNT_CLEAR);
+ 		if (ret)
+ 			return ret;
+@@ -205,17 +167,55 @@ qcom_llcc_clear_error_status(int err_type, struct llcc_drv_data *drv)
+ 	return ret;
+ }
+ 
++struct qcom_llcc_syn_regs {
++	u32 synd_reg;
++	u32 count_status_reg;
++	u32 ways_status_reg;
++};
++
++static void get_reg_offsets(struct llcc_drv_data *drv, int err_type,
++			    struct qcom_llcc_syn_regs *syn_regs)
++{
++	const struct llcc_edac_reg_offset *edac_reg_offset = drv->edac_reg_offset;
++
++	switch (err_type) {
++	case LLCC_DRAM_CE:
++		syn_regs->synd_reg = edac_reg_offset->drp_ecc_sb_err_syn0;
++		syn_regs->count_status_reg = edac_reg_offset->drp_ecc_error_status1;
++		syn_regs->ways_status_reg = edac_reg_offset->drp_ecc_error_status0;
++		break;
++	case LLCC_DRAM_UE:
++		syn_regs->synd_reg = edac_reg_offset->drp_ecc_db_err_syn0;
++		syn_regs->count_status_reg = edac_reg_offset->drp_ecc_error_status1;
++		syn_regs->ways_status_reg = edac_reg_offset->drp_ecc_error_status0;
++		break;
++	case LLCC_TRAM_CE:
++		syn_regs->synd_reg = edac_reg_offset->trp_ecc_sb_err_syn0;
++		syn_regs->count_status_reg = edac_reg_offset->trp_ecc_error_status1;
++		syn_regs->ways_status_reg = edac_reg_offset->trp_ecc_error_status0;
++		break;
++	case LLCC_TRAM_UE:
++		syn_regs->synd_reg = edac_reg_offset->trp_ecc_db_err_syn0;
++		syn_regs->count_status_reg = edac_reg_offset->trp_ecc_error_status1;
++		syn_regs->ways_status_reg = edac_reg_offset->trp_ecc_error_status0;
++		break;
++	}
++}
++
+ /* Dump Syndrome registers data for Tag RAM, Data RAM bit errors*/
+ static int
+ dump_syn_reg_values(struct llcc_drv_data *drv, u32 bank, int err_type)
+ {
+ 	struct llcc_edac_reg_data reg_data = edac_reg_data[err_type];
++	struct qcom_llcc_syn_regs regs = { };
+ 	int err_cnt, err_ways, ret, i;
+ 	u32 synd_reg, synd_val;
+ 
++	get_reg_offsets(drv, err_type, &regs);
++
+ 	for (i = 0; i < reg_data.reg_cnt; i++) {
+-		synd_reg = reg_data.synd_reg + (i * 4);
+-		ret = regmap_read(drv->regmap, drv->offsets[bank] + synd_reg,
++		synd_reg = regs.synd_reg + (i * 4);
++		ret = regmap_read(drv->regmaps[bank], synd_reg,
+ 				  &synd_val);
+ 		if (ret)
+ 			goto clear;
+@@ -224,8 +224,7 @@ dump_syn_reg_values(struct llcc_drv_data *drv, u32 bank, int err_type)
+ 			    reg_data.name, i, synd_val);
+ 	}
+ 
+-	ret = regmap_read(drv->regmap,
+-			  drv->offsets[bank] + reg_data.count_status_reg,
++	ret = regmap_read(drv->regmaps[bank], regs.count_status_reg,
+ 			  &err_cnt);
+ 	if (ret)
+ 		goto clear;
+@@ -235,8 +234,7 @@ dump_syn_reg_values(struct llcc_drv_data *drv, u32 bank, int err_type)
+ 	edac_printk(KERN_CRIT, EDAC_LLCC, "%s: Error count: 0x%4x\n",
+ 		    reg_data.name, err_cnt);
+ 
+-	ret = regmap_read(drv->regmap,
+-			  drv->offsets[bank] + reg_data.ways_status_reg,
++	ret = regmap_read(drv->regmaps[bank], regs.ways_status_reg,
+ 			  &err_ways);
+ 	if (ret)
+ 		goto clear;
+@@ -297,8 +295,7 @@ static irqreturn_t llcc_ecc_irq_handler(int irq, void *edev_ctl)
+ 
+ 	/* Iterate over the banks and look for Tag RAM or Data RAM errors */
+ 	for (i = 0; i < drv->num_banks; i++) {
+-		ret = regmap_read(drv->regmap,
+-				  drv->offsets[i] + DRP_INTERRUPT_STATUS,
++		ret = regmap_read(drv->regmaps[i], drv->edac_reg_offset->drp_interrupt_status,
+ 				  &drp_error);
+ 
+ 		if (!ret && (drp_error & SB_ECC_ERROR)) {
+@@ -313,8 +310,7 @@ static irqreturn_t llcc_ecc_irq_handler(int irq, void *edev_ctl)
+ 		if (!ret)
+ 			irq_rc = IRQ_HANDLED;
+ 
+-		ret = regmap_read(drv->regmap,
+-				  drv->offsets[i] + TRP_INTERRUPT_0_STATUS,
++		ret = regmap_read(drv->regmaps[i], drv->edac_reg_offset->trp_interrupt_0_status,
+ 				  &trp_error);
+ 
+ 		if (!ret && (trp_error & SB_ECC_ERROR)) {
+@@ -346,7 +342,7 @@ static int qcom_llcc_edac_probe(struct platform_device *pdev)
+ 	int ecc_irq;
+ 	int rc;
+ 
+-	rc = qcom_llcc_core_setup(llcc_driv_data->bcast_regmap);
++	rc = qcom_llcc_core_setup(llcc_driv_data, llcc_driv_data->bcast_regmap);
+ 	if (rc)
+ 		return rc;
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+index ba5def374368e..b056ca8fb8179 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+@@ -1623,6 +1623,7 @@ static const u16 amdgpu_unsupported_pciidlist[] = {
+ 	0x5874,
+ 	0x5940,
+ 	0x5941,
++	0x5b70,
+ 	0x5b72,
+ 	0x5b73,
+ 	0x5b74,
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+index 5e9a0c1bb3079..4939a685a643b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+@@ -140,7 +140,7 @@ void amdgpu_bo_placement_from_domain(struct amdgpu_bo *abo, u32 domain)
+ 
+ 		if (flags & AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED)
+ 			places[c].lpfn = visible_pfn;
+-		else if (adev->gmc.real_vram_size != adev->gmc.visible_vram_size)
++		else
+ 			places[c].flags |= TTM_PL_FLAG_TOPDOWN;
+ 
+ 		if (flags & AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS)
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+index 3f5d13035aff0..89a62df76a12d 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+@@ -3538,6 +3538,9 @@ static ssize_t amdgpu_psp_vbflash_read(struct file *filp, struct kobject *kobj,
+ 	void *fw_pri_cpu_addr;
+ 	int ret;
+ 
++	if (adev->psp.vbflash_image_size == 0)
++		return -EINVAL;
++
+ 	dev_info(adev->dev, "VBIOS flash to PSP started");
+ 
+ 	ret = amdgpu_bo_create_kernel(adev, adev->psp.vbflash_image_size,
+@@ -3589,13 +3592,13 @@ static ssize_t amdgpu_psp_vbflash_status(struct device *dev,
+ }
+ 
+ static const struct bin_attribute psp_vbflash_bin_attr = {
+-	.attr = {.name = "psp_vbflash", .mode = 0664},
++	.attr = {.name = "psp_vbflash", .mode = 0660},
+ 	.size = 0,
+ 	.write = amdgpu_psp_vbflash_write,
+ 	.read = amdgpu_psp_vbflash_read,
+ };
+ 
+-static DEVICE_ATTR(psp_vbflash_status, 0444, amdgpu_psp_vbflash_status, NULL);
++static DEVICE_ATTR(psp_vbflash_status, 0440, amdgpu_psp_vbflash_status, NULL);
+ 
+ int amdgpu_psp_sysfs_init(struct amdgpu_device *adev)
+ {
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
+index dc474b8096040..49de3a3eebc78 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
+@@ -581,3 +581,21 @@ void amdgpu_ring_ib_end(struct amdgpu_ring *ring)
+ 	if (ring->is_sw_ring)
+ 		amdgpu_sw_ring_ib_end(ring);
+ }
++
++void amdgpu_ring_ib_on_emit_cntl(struct amdgpu_ring *ring)
++{
++	if (ring->is_sw_ring)
++		amdgpu_sw_ring_ib_mark_offset(ring, AMDGPU_MUX_OFFSET_TYPE_CONTROL);
++}
++
++void amdgpu_ring_ib_on_emit_ce(struct amdgpu_ring *ring)
++{
++	if (ring->is_sw_ring)
++		amdgpu_sw_ring_ib_mark_offset(ring, AMDGPU_MUX_OFFSET_TYPE_CE);
++}
++
++void amdgpu_ring_ib_on_emit_de(struct amdgpu_ring *ring)
++{
++	if (ring->is_sw_ring)
++		amdgpu_sw_ring_ib_mark_offset(ring, AMDGPU_MUX_OFFSET_TYPE_DE);
++}
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
+index 3989e755a5b4b..6a491bb24426a 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
+@@ -227,6 +227,9 @@ struct amdgpu_ring_funcs {
+ 	int (*preempt_ib)(struct amdgpu_ring *ring);
+ 	void (*emit_mem_sync)(struct amdgpu_ring *ring);
+ 	void (*emit_wave_limit)(struct amdgpu_ring *ring, bool enable);
++	void (*patch_cntl)(struct amdgpu_ring *ring, unsigned offset);
++	void (*patch_ce)(struct amdgpu_ring *ring, unsigned offset);
++	void (*patch_de)(struct amdgpu_ring *ring, unsigned offset);
+ };
+ 
+ struct amdgpu_ring {
+@@ -316,10 +319,16 @@ struct amdgpu_ring {
+ #define amdgpu_ring_init_cond_exec(r) (r)->funcs->init_cond_exec((r))
+ #define amdgpu_ring_patch_cond_exec(r,o) (r)->funcs->patch_cond_exec((r),(o))
+ #define amdgpu_ring_preempt_ib(r) (r)->funcs->preempt_ib(r)
++#define amdgpu_ring_patch_cntl(r, o) ((r)->funcs->patch_cntl((r), (o)))
++#define amdgpu_ring_patch_ce(r, o) ((r)->funcs->patch_ce((r), (o)))
++#define amdgpu_ring_patch_de(r, o) ((r)->funcs->patch_de((r), (o)))
+ 
+ int amdgpu_ring_alloc(struct amdgpu_ring *ring, unsigned ndw);
+ void amdgpu_ring_ib_begin(struct amdgpu_ring *ring);
+ void amdgpu_ring_ib_end(struct amdgpu_ring *ring);
++void amdgpu_ring_ib_on_emit_cntl(struct amdgpu_ring *ring);
++void amdgpu_ring_ib_on_emit_ce(struct amdgpu_ring *ring);
++void amdgpu_ring_ib_on_emit_de(struct amdgpu_ring *ring);
+ 
+ void amdgpu_ring_insert_nop(struct amdgpu_ring *ring, uint32_t count);
+ void amdgpu_ring_generic_pad_ib(struct amdgpu_ring *ring, struct amdgpu_ib *ib);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring_mux.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring_mux.c
+index 62079f0e3ee8f..73516abef662f 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring_mux.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring_mux.c
+@@ -105,6 +105,16 @@ static void amdgpu_mux_resubmit_chunks(struct amdgpu_ring_mux *mux)
+ 				amdgpu_fence_update_start_timestamp(e->ring,
+ 								    chunk->sync_seq,
+ 								    ktime_get());
++				if (chunk->sync_seq ==
++					le32_to_cpu(*(e->ring->fence_drv.cpu_addr + 2))) {
++					if (chunk->cntl_offset <= e->ring->buf_mask)
++						amdgpu_ring_patch_cntl(e->ring,
++								       chunk->cntl_offset);
++					if (chunk->ce_offset <= e->ring->buf_mask)
++						amdgpu_ring_patch_ce(e->ring, chunk->ce_offset);
++					if (chunk->de_offset <= e->ring->buf_mask)
++						amdgpu_ring_patch_de(e->ring, chunk->de_offset);
++				}
+ 				amdgpu_ring_mux_copy_pkt_from_sw_ring(mux, e->ring,
+ 								      chunk->start,
+ 								      chunk->end);
+@@ -407,6 +417,17 @@ void amdgpu_sw_ring_ib_end(struct amdgpu_ring *ring)
+ 	amdgpu_ring_mux_end_ib(mux, ring);
+ }
+ 
++void amdgpu_sw_ring_ib_mark_offset(struct amdgpu_ring *ring, enum amdgpu_ring_mux_offset_type type)
++{
++	struct amdgpu_device *adev = ring->adev;
++	struct amdgpu_ring_mux *mux = &adev->gfx.muxer;
++	unsigned offset;
++
++	offset = ring->wptr & ring->buf_mask;
++
++	amdgpu_ring_mux_ib_mark_offset(mux, ring, offset, type);
++}
++
+ void amdgpu_ring_mux_start_ib(struct amdgpu_ring_mux *mux, struct amdgpu_ring *ring)
+ {
+ 	struct amdgpu_mux_entry *e;
+@@ -429,6 +450,10 @@ void amdgpu_ring_mux_start_ib(struct amdgpu_ring_mux *mux, struct amdgpu_ring *r
+ 	}
+ 
+ 	chunk->start = ring->wptr;
++	/* the initialized value used to check if they are set by the ib submission*/
++	chunk->cntl_offset = ring->buf_mask + 1;
++	chunk->de_offset = ring->buf_mask + 1;
++	chunk->ce_offset = ring->buf_mask + 1;
+ 	list_add_tail(&chunk->entry, &e->list);
+ }
+ 
+@@ -454,6 +479,41 @@ static void scan_and_remove_signaled_chunk(struct amdgpu_ring_mux *mux, struct a
+ 	}
+ }
+ 
++void amdgpu_ring_mux_ib_mark_offset(struct amdgpu_ring_mux *mux,
++				    struct amdgpu_ring *ring, u64 offset,
++				    enum amdgpu_ring_mux_offset_type type)
++{
++	struct amdgpu_mux_entry *e;
++	struct amdgpu_mux_chunk *chunk;
++
++	e = amdgpu_ring_mux_sw_entry(mux, ring);
++	if (!e) {
++		DRM_ERROR("cannot find entry!\n");
++		return;
++	}
++
++	chunk = list_last_entry(&e->list, struct amdgpu_mux_chunk, entry);
++	if (!chunk) {
++		DRM_ERROR("cannot find chunk!\n");
++		return;
++	}
++
++	switch (type) {
++	case AMDGPU_MUX_OFFSET_TYPE_CONTROL:
++		chunk->cntl_offset = offset;
++		break;
++	case AMDGPU_MUX_OFFSET_TYPE_DE:
++		chunk->de_offset = offset;
++		break;
++	case AMDGPU_MUX_OFFSET_TYPE_CE:
++		chunk->ce_offset = offset;
++		break;
++	default:
++		DRM_ERROR("invalid type (%d)\n", type);
++		break;
++	}
++}
++
+ void amdgpu_ring_mux_end_ib(struct amdgpu_ring_mux *mux, struct amdgpu_ring *ring)
+ {
+ 	struct amdgpu_mux_entry *e;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring_mux.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring_mux.h
+index 4be45fc14954c..b22d4fb2a8470 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring_mux.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring_mux.h
+@@ -50,6 +50,12 @@ struct amdgpu_mux_entry {
+ 	struct list_head        list;
+ };
+ 
++enum amdgpu_ring_mux_offset_type {
++	AMDGPU_MUX_OFFSET_TYPE_CONTROL,
++	AMDGPU_MUX_OFFSET_TYPE_DE,
++	AMDGPU_MUX_OFFSET_TYPE_CE,
++};
++
+ struct amdgpu_ring_mux {
+ 	struct amdgpu_ring      *real_ring;
+ 
+@@ -72,12 +78,18 @@ struct amdgpu_ring_mux {
+  * @sync_seq: the fence seqno related with the saved IB.
+  * @start:- start location on the software ring.
+  * @end:- end location on the software ring.
++ * @control_offset:- the PRE_RESUME bit position used for resubmission.
++ * @de_offset:- the anchor in write_data for de meta of resubmission.
++ * @ce_offset:- the anchor in write_data for ce meta of resubmission.
+  */
+ struct amdgpu_mux_chunk {
+ 	struct list_head        entry;
+ 	uint32_t                sync_seq;
+ 	u64                     start;
+ 	u64                     end;
++	u64                     cntl_offset;
++	u64                     de_offset;
++	u64                     ce_offset;
+ };
+ 
+ int amdgpu_ring_mux_init(struct amdgpu_ring_mux *mux, struct amdgpu_ring *ring,
+@@ -89,6 +101,8 @@ u64 amdgpu_ring_mux_get_wptr(struct amdgpu_ring_mux *mux, struct amdgpu_ring *ri
+ u64 amdgpu_ring_mux_get_rptr(struct amdgpu_ring_mux *mux, struct amdgpu_ring *ring);
+ void amdgpu_ring_mux_start_ib(struct amdgpu_ring_mux *mux, struct amdgpu_ring *ring);
+ void amdgpu_ring_mux_end_ib(struct amdgpu_ring_mux *mux, struct amdgpu_ring *ring);
++void amdgpu_ring_mux_ib_mark_offset(struct amdgpu_ring_mux *mux, struct amdgpu_ring *ring,
++				    u64 offset, enum amdgpu_ring_mux_offset_type type);
+ bool amdgpu_mcbp_handle_trailing_fence_irq(struct amdgpu_ring_mux *mux);
+ 
+ u64 amdgpu_sw_ring_get_rptr_gfx(struct amdgpu_ring *ring);
+@@ -97,6 +111,7 @@ void amdgpu_sw_ring_set_wptr_gfx(struct amdgpu_ring *ring);
+ void amdgpu_sw_ring_insert_nop(struct amdgpu_ring *ring, uint32_t count);
+ void amdgpu_sw_ring_ib_begin(struct amdgpu_ring *ring);
+ void amdgpu_sw_ring_ib_end(struct amdgpu_ring *ring);
++void amdgpu_sw_ring_ib_mark_offset(struct amdgpu_ring *ring, enum amdgpu_ring_mux_offset_type type);
+ const char *amdgpu_sw_ring_name(int idx);
+ unsigned int amdgpu_sw_ring_priority(int idx);
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+index b1428068fef7f..8144d6693541e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+@@ -6890,8 +6890,10 @@ static int gfx_v10_0_kiq_resume(struct amdgpu_device *adev)
+ 		return r;
+ 
+ 	r = amdgpu_bo_kmap(ring->mqd_obj, (void **)&ring->mqd_ptr);
+-	if (unlikely(r != 0))
++	if (unlikely(r != 0)) {
++		amdgpu_bo_unreserve(ring->mqd_obj);
+ 		return r;
++	}
+ 
+ 	gfx_v10_0_kiq_init_queue(ring);
+ 	amdgpu_bo_kunmap(ring->mqd_obj);
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+index c54d05bdc2d8c..7b3e6b6528685 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+@@ -755,7 +755,7 @@ static void gfx_v9_0_set_rlc_funcs(struct amdgpu_device *adev);
+ static int gfx_v9_0_get_cu_info(struct amdgpu_device *adev,
+ 				struct amdgpu_cu_info *cu_info);
+ static uint64_t gfx_v9_0_get_gpu_clock_counter(struct amdgpu_device *adev);
+-static void gfx_v9_0_ring_emit_de_meta(struct amdgpu_ring *ring, bool resume);
++static void gfx_v9_0_ring_emit_de_meta(struct amdgpu_ring *ring, bool resume, bool usegds);
+ static u64 gfx_v9_0_ring_get_rptr_compute(struct amdgpu_ring *ring);
+ static void gfx_v9_0_query_ras_error_count(struct amdgpu_device *adev,
+ 					  void *ras_error_status);
+@@ -3604,8 +3604,10 @@ static int gfx_v9_0_kiq_resume(struct amdgpu_device *adev)
+ 		return r;
+ 
+ 	r = amdgpu_bo_kmap(ring->mqd_obj, (void **)&ring->mqd_ptr);
+-	if (unlikely(r != 0))
++	if (unlikely(r != 0)) {
++		amdgpu_bo_unreserve(ring->mqd_obj);
+ 		return r;
++	}
+ 
+ 	gfx_v9_0_kiq_init_queue(ring);
+ 	amdgpu_bo_kunmap(ring->mqd_obj);
+@@ -5122,7 +5124,8 @@ static void gfx_v9_0_ring_emit_ib_gfx(struct amdgpu_ring *ring,
+ 			gfx_v9_0_ring_emit_de_meta(ring,
+ 						   (!amdgpu_sriov_vf(ring->adev) &&
+ 						   flags & AMDGPU_IB_PREEMPTED) ?
+-						   true : false);
++						   true : false,
++						   job->gds_size > 0 && job->gds_base != 0);
+ 	}
+ 
+ 	amdgpu_ring_write(ring, header);
+@@ -5133,9 +5136,83 @@ static void gfx_v9_0_ring_emit_ib_gfx(struct amdgpu_ring *ring,
+ #endif
+ 		lower_32_bits(ib->gpu_addr));
+ 	amdgpu_ring_write(ring, upper_32_bits(ib->gpu_addr));
++	amdgpu_ring_ib_on_emit_cntl(ring);
+ 	amdgpu_ring_write(ring, control);
+ }
+ 
++static void gfx_v9_0_ring_patch_cntl(struct amdgpu_ring *ring,
++				     unsigned offset)
++{
++	u32 control = ring->ring[offset];
++
++	control |= INDIRECT_BUFFER_PRE_RESUME(1);
++	ring->ring[offset] = control;
++}
++
++static void gfx_v9_0_ring_patch_ce_meta(struct amdgpu_ring *ring,
++					unsigned offset)
++{
++	struct amdgpu_device *adev = ring->adev;
++	void *ce_payload_cpu_addr;
++	uint64_t payload_offset, payload_size;
++
++	payload_size = sizeof(struct v9_ce_ib_state);
++
++	if (ring->is_mes_queue) {
++		payload_offset = offsetof(struct amdgpu_mes_ctx_meta_data,
++					  gfx[0].gfx_meta_data) +
++			offsetof(struct v9_gfx_meta_data, ce_payload);
++		ce_payload_cpu_addr =
++			amdgpu_mes_ctx_get_offs_cpu_addr(ring, payload_offset);
++	} else {
++		payload_offset = offsetof(struct v9_gfx_meta_data, ce_payload);
++		ce_payload_cpu_addr = adev->virt.csa_cpu_addr + payload_offset;
++	}
++
++	if (offset + (payload_size >> 2) <= ring->buf_mask + 1) {
++		memcpy((void *)&ring->ring[offset], ce_payload_cpu_addr, payload_size);
++	} else {
++		memcpy((void *)&ring->ring[offset], ce_payload_cpu_addr,
++		       (ring->buf_mask + 1 - offset) << 2);
++		payload_size -= (ring->buf_mask + 1 - offset) << 2;
++		memcpy((void *)&ring->ring[0],
++		       ce_payload_cpu_addr + ((ring->buf_mask + 1 - offset) << 2),
++		       payload_size);
++	}
++}
++
++static void gfx_v9_0_ring_patch_de_meta(struct amdgpu_ring *ring,
++					unsigned offset)
++{
++	struct amdgpu_device *adev = ring->adev;
++	void *de_payload_cpu_addr;
++	uint64_t payload_offset, payload_size;
++
++	payload_size = sizeof(struct v9_de_ib_state);
++
++	if (ring->is_mes_queue) {
++		payload_offset = offsetof(struct amdgpu_mes_ctx_meta_data,
++					  gfx[0].gfx_meta_data) +
++			offsetof(struct v9_gfx_meta_data, de_payload);
++		de_payload_cpu_addr =
++			amdgpu_mes_ctx_get_offs_cpu_addr(ring, payload_offset);
++	} else {
++		payload_offset = offsetof(struct v9_gfx_meta_data, de_payload);
++		de_payload_cpu_addr = adev->virt.csa_cpu_addr + payload_offset;
++	}
++
++	if (offset + (payload_size >> 2) <= ring->buf_mask + 1) {
++		memcpy((void *)&ring->ring[offset], de_payload_cpu_addr, payload_size);
++	} else {
++		memcpy((void *)&ring->ring[offset], de_payload_cpu_addr,
++		       (ring->buf_mask + 1 - offset) << 2);
++		payload_size -= (ring->buf_mask + 1 - offset) << 2;
++		memcpy((void *)&ring->ring[0],
++		       de_payload_cpu_addr + ((ring->buf_mask + 1 - offset) << 2),
++		       payload_size);
++	}
++}
++
+ static void gfx_v9_0_ring_emit_ib_compute(struct amdgpu_ring *ring,
+ 					  struct amdgpu_job *job,
+ 					  struct amdgpu_ib *ib,
+@@ -5331,6 +5408,8 @@ static void gfx_v9_0_ring_emit_ce_meta(struct amdgpu_ring *ring, bool resume)
+ 	amdgpu_ring_write(ring, lower_32_bits(ce_payload_gpu_addr));
+ 	amdgpu_ring_write(ring, upper_32_bits(ce_payload_gpu_addr));
+ 
++	amdgpu_ring_ib_on_emit_ce(ring);
++
+ 	if (resume)
+ 		amdgpu_ring_write_multiple(ring, ce_payload_cpu_addr,
+ 					   sizeof(ce_payload) >> 2);
+@@ -5364,10 +5443,6 @@ static int gfx_v9_0_ring_preempt_ib(struct amdgpu_ring *ring)
+ 	amdgpu_ring_alloc(ring, 13);
+ 	gfx_v9_0_ring_emit_fence(ring, ring->trail_fence_gpu_addr,
+ 				 ring->trail_seq, AMDGPU_FENCE_FLAG_EXEC | AMDGPU_FENCE_FLAG_INT);
+-	/*reset the CP_VMID_PREEMPT after trailing fence*/
+-	amdgpu_ring_emit_wreg(ring,
+-			      SOC15_REG_OFFSET(GC, 0, mmCP_VMID_PREEMPT),
+-			      0x0);
+ 
+ 	/* assert IB preemption, emit the trailing fence */
+ 	kiq->pmf->kiq_unmap_queues(kiq_ring, ring, PREEMPT_QUEUES_NO_UNMAP,
+@@ -5390,6 +5465,10 @@ static int gfx_v9_0_ring_preempt_ib(struct amdgpu_ring *ring)
+ 		DRM_WARN("ring %d timeout to preempt ib\n", ring->idx);
+ 	}
+ 
++	/*reset the CP_VMID_PREEMPT after trailing fence*/
++	amdgpu_ring_emit_wreg(ring,
++			      SOC15_REG_OFFSET(GC, 0, mmCP_VMID_PREEMPT),
++			      0x0);
+ 	amdgpu_ring_commit(ring);
+ 
+ 	/* deassert preemption condition */
+@@ -5397,7 +5476,7 @@ static int gfx_v9_0_ring_preempt_ib(struct amdgpu_ring *ring)
+ 	return r;
+ }
+ 
+-static void gfx_v9_0_ring_emit_de_meta(struct amdgpu_ring *ring, bool resume)
++static void gfx_v9_0_ring_emit_de_meta(struct amdgpu_ring *ring, bool resume, bool usegds)
+ {
+ 	struct amdgpu_device *adev = ring->adev;
+ 	struct v9_de_ib_state de_payload = {0};
+@@ -5428,8 +5507,10 @@ static void gfx_v9_0_ring_emit_de_meta(struct amdgpu_ring *ring, bool resume)
+ 				 PAGE_SIZE);
+ 	}
+ 
+-	de_payload.gds_backup_addrlo = lower_32_bits(gds_addr);
+-	de_payload.gds_backup_addrhi = upper_32_bits(gds_addr);
++	if (usegds) {
++		de_payload.gds_backup_addrlo = lower_32_bits(gds_addr);
++		de_payload.gds_backup_addrhi = upper_32_bits(gds_addr);
++	}
+ 
+ 	cnt = (sizeof(de_payload) >> 2) + 4 - 2;
+ 	amdgpu_ring_write(ring, PACKET3(PACKET3_WRITE_DATA, cnt));
+@@ -5440,6 +5521,7 @@ static void gfx_v9_0_ring_emit_de_meta(struct amdgpu_ring *ring, bool resume)
+ 	amdgpu_ring_write(ring, lower_32_bits(de_payload_gpu_addr));
+ 	amdgpu_ring_write(ring, upper_32_bits(de_payload_gpu_addr));
+ 
++	amdgpu_ring_ib_on_emit_de(ring);
+ 	if (resume)
+ 		amdgpu_ring_write_multiple(ring, de_payload_cpu_addr,
+ 					   sizeof(de_payload) >> 2);
+@@ -6852,6 +6934,9 @@ static const struct amdgpu_ring_funcs gfx_v9_0_sw_ring_funcs_gfx = {
+ 	.emit_reg_write_reg_wait = gfx_v9_0_ring_emit_reg_write_reg_wait,
+ 	.soft_recovery = gfx_v9_0_ring_soft_recovery,
+ 	.emit_mem_sync = gfx_v9_0_emit_mem_sync,
++	.patch_cntl = gfx_v9_0_ring_patch_cntl,
++	.patch_de = gfx_v9_0_ring_patch_de_meta,
++	.patch_ce = gfx_v9_0_ring_patch_ce_meta,
+ };
+ 
+ static const struct amdgpu_ring_funcs gfx_v9_0_ring_funcs_compute = {
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
+index 43d587404c3e1..a3703fcd00761 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
+@@ -129,7 +129,11 @@ static int vcn_v4_0_sw_init(void *handle)
+ 		if (adev->vcn.harvest_config & (1 << i))
+ 			continue;
+ 
+-		atomic_set(&adev->vcn.inst[i].sched_score, 0);
++		/* Init instance 0 sched_score to 1, so it's scheduled after other instances */
++		if (i == 0)
++			atomic_set(&adev->vcn.inst[i].sched_score, 1);
++		else
++			atomic_set(&adev->vcn.inst[i].sched_score, 0);
+ 
+ 		/* VCN UNIFIED TRAP */
+ 		r = amdgpu_irq_add_id(adev, amdgpu_ih_clientid_vcns[i],
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index ce46f3a061c44..289c261ffe876 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -7170,7 +7170,13 @@ static int amdgpu_dm_connector_get_modes(struct drm_connector *connector)
+ 				drm_add_modes_noedid(connector, 640, 480);
+ 	} else {
+ 		amdgpu_dm_connector_ddc_get_modes(connector, edid);
+-		amdgpu_dm_connector_add_common_modes(encoder, connector);
++		/* most eDP supports only timings from its edid,
++		 * usually only detailed timings are available
++		 * from eDP edid. timings which are not from edid
++		 * may damage eDP
++		 */
++		if (connector->connector_type != DRM_MODE_CONNECTOR_eDP)
++			amdgpu_dm_connector_add_common_modes(encoder, connector);
+ 		amdgpu_dm_connector_add_freesync_modes(connector, edid);
+ 	}
+ 	amdgpu_dm_fbc_init(connector);
+diff --git a/drivers/gpu/drm/amd/display/dc/link/link_detection.c b/drivers/gpu/drm/amd/display/dc/link/link_detection.c
+index 9839ec222875a..8145d208512d6 100644
+--- a/drivers/gpu/drm/amd/display/dc/link/link_detection.c
++++ b/drivers/gpu/drm/amd/display/dc/link/link_detection.c
+@@ -980,6 +980,11 @@ static bool detect_link_and_local_sink(struct dc_link *link,
+ 					(link->dpcd_caps.dongle_type !=
+ 							DISPLAY_DONGLE_DP_HDMI_CONVERTER))
+ 				converter_disable_audio = true;
++
++			/* limited link rate to HBR3 for DPIA until we implement USB4 V2 */
++			if (link->ep_type == DISPLAY_ENDPOINT_USB4_DPIA &&
++					link->reported_link_cap.link_rate > LINK_RATE_HIGH3)
++				link->reported_link_cap.link_rate = LINK_RATE_HIGH3;
+ 			break;
+ 		}
+ 
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
+index a5c97d61e92a6..65ff245feaecb 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
+@@ -1694,10 +1694,39 @@ static int smu_v13_0_0_set_power_profile_mode(struct smu_context *smu,
+ 		}
+ 	}
+ 
+-	/* conv PP_SMC_POWER_PROFILE* to WORKLOAD_PPLIB_*_BIT */
+-	workload_type = smu_cmn_to_asic_specific_index(smu,
++	if (smu->power_profile_mode == PP_SMC_POWER_PROFILE_COMPUTE &&
++		(((smu->adev->pdev->device == 0x744C) && (smu->adev->pdev->revision == 0xC8)) ||
++		((smu->adev->pdev->device == 0x744C) && (smu->adev->pdev->revision == 0xCC)))) {
++		ret = smu_cmn_update_table(smu,
++					   SMU_TABLE_ACTIVITY_MONITOR_COEFF,
++					   WORKLOAD_PPLIB_COMPUTE_BIT,
++					   (void *)(&activity_monitor_external),
++					   false);
++		if (ret) {
++			dev_err(smu->adev->dev, "[%s] Failed to get activity monitor!", __func__);
++			return ret;
++		}
++
++		ret = smu_cmn_update_table(smu,
++					   SMU_TABLE_ACTIVITY_MONITOR_COEFF,
++					   WORKLOAD_PPLIB_CUSTOM_BIT,
++					   (void *)(&activity_monitor_external),
++					   true);
++		if (ret) {
++			dev_err(smu->adev->dev, "[%s] Failed to set activity monitor!", __func__);
++			return ret;
++		}
++
++		workload_type = smu_cmn_to_asic_specific_index(smu,
++						       CMN2ASIC_MAPPING_WORKLOAD,
++						       PP_SMC_POWER_PROFILE_CUSTOM);
++	} else {
++		/* conv PP_SMC_POWER_PROFILE* to WORKLOAD_PPLIB_*_BIT */
++		workload_type = smu_cmn_to_asic_specific_index(smu,
+ 						       CMN2ASIC_MAPPING_WORKLOAD,
+ 						       smu->power_profile_mode);
++	}
++
+ 	if (workload_type < 0)
+ 		return -EINVAL;
+ 
+diff --git a/drivers/gpu/drm/bridge/ti-sn65dsi86.c b/drivers/gpu/drm/bridge/ti-sn65dsi86.c
+index 1e26fa63845a2..0ae8a52acf5e4 100644
+--- a/drivers/gpu/drm/bridge/ti-sn65dsi86.c
++++ b/drivers/gpu/drm/bridge/ti-sn65dsi86.c
+@@ -298,6 +298,10 @@ static void ti_sn_bridge_set_refclk_freq(struct ti_sn65dsi86 *pdata)
+ 		if (refclk_lut[i] == refclk_rate)
+ 			break;
+ 
++	/* avoid buffer overflow and "1" is the default rate in the datasheet. */
++	if (i >= refclk_lut_size)
++		i = 1;
++
+ 	regmap_update_bits(pdata->regmap, SN_DPPLL_SRC_REG, REFCLK_FREQ_MASK,
+ 			   REFCLK_FREQ(i));
+ 
+diff --git a/drivers/gpu/drm/drm_panel_orientation_quirks.c b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+index b1a38e6ce2f8f..0cb646cb04ee1 100644
+--- a/drivers/gpu/drm/drm_panel_orientation_quirks.c
++++ b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+@@ -179,7 +179,7 @@ static const struct dmi_system_id orientation_data[] = {
+ 	}, {	/* AYA NEO AIR */
+ 		.matches = {
+ 		  DMI_EXACT_MATCH(DMI_SYS_VENDOR, "AYANEO"),
+-		  DMI_MATCH(DMI_BOARD_NAME, "AIR"),
++		  DMI_MATCH(DMI_PRODUCT_NAME, "AIR"),
+ 		},
+ 		.driver_data = (void *)&lcd1080x1920_leftside_up,
+ 	}, {	/* AYA NEO NEXT */
+diff --git a/drivers/gpu/drm/nouveau/nouveau_acpi.c b/drivers/gpu/drm/nouveau/nouveau_acpi.c
+index 8cf096f841a90..a2ae8c21e4dce 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_acpi.c
++++ b/drivers/gpu/drm/nouveau/nouveau_acpi.c
+@@ -220,6 +220,9 @@ static void nouveau_dsm_pci_probe(struct pci_dev *pdev, acpi_handle *dhandle_out
+ 	int optimus_funcs;
+ 	struct pci_dev *parent_pdev;
+ 
++	if (pdev->vendor != PCI_VENDOR_ID_NVIDIA)
++		return;
++
+ 	*has_pr3 = false;
+ 	parent_pdev = pci_upstream_bridge(pdev);
+ 	if (parent_pdev) {
+diff --git a/drivers/gpu/drm/nouveau/nouveau_connector.c b/drivers/gpu/drm/nouveau/nouveau_connector.c
+index 086b66b60d918..f75c6f09dd2af 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_connector.c
++++ b/drivers/gpu/drm/nouveau/nouveau_connector.c
+@@ -730,7 +730,8 @@ out:
+ #endif
+ 
+ 	nouveau_connector_set_edid(nv_connector, edid);
+-	nouveau_connector_set_encoder(connector, nv_encoder);
++	if (nv_encoder)
++		nouveau_connector_set_encoder(connector, nv_encoder);
+ 	return status;
+ }
+ 
+@@ -966,7 +967,7 @@ nouveau_connector_get_modes(struct drm_connector *connector)
+ 	/* Determine display colour depth for everything except LVDS now,
+ 	 * DP requires this before mode_valid() is called.
+ 	 */
+-	if (connector->connector_type != DRM_MODE_CONNECTOR_LVDS)
++	if (connector->connector_type != DRM_MODE_CONNECTOR_LVDS && nv_connector->native_mode)
+ 		nouveau_connector_detect_depth(connector);
+ 
+ 	/* Find the native mode if this is a digital panel, if we didn't
+@@ -987,7 +988,7 @@ nouveau_connector_get_modes(struct drm_connector *connector)
+ 	 * "native" mode as some VBIOS tables require us to use the
+ 	 * pixel clock as part of the lookup...
+ 	 */
+-	if (connector->connector_type == DRM_MODE_CONNECTOR_LVDS)
++	if (connector->connector_type == DRM_MODE_CONNECTOR_LVDS && nv_connector->native_mode)
+ 		nouveau_connector_detect_depth(connector);
+ 
+ 	if (nv_encoder->dcb->type == DCB_OUTPUT_TV)
+diff --git a/drivers/gpu/drm/nouveau/nouveau_drm.c b/drivers/gpu/drm/nouveau/nouveau_drm.c
+index cc7c5b4a05fd8..7aac9384600ed 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_drm.c
++++ b/drivers/gpu/drm/nouveau/nouveau_drm.c
+@@ -137,10 +137,16 @@ nouveau_name(struct drm_device *dev)
+ static inline bool
+ nouveau_cli_work_ready(struct dma_fence *fence)
+ {
+-	if (!dma_fence_is_signaled(fence))
+-		return false;
+-	dma_fence_put(fence);
+-	return true;
++	bool ret = true;
++
++	spin_lock_irq(fence->lock);
++	if (!dma_fence_is_signaled_locked(fence))
++		ret = false;
++	spin_unlock_irq(fence->lock);
++
++	if (ret == true)
++		dma_fence_put(fence);
++	return ret;
+ }
+ 
+ static void
+diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
+index 6b9563d4f23c9..033df909b92e1 100644
+--- a/drivers/infiniband/core/cma.c
++++ b/drivers/infiniband/core/cma.c
+@@ -3297,7 +3297,7 @@ static int cma_resolve_iboe_route(struct rdma_id_private *id_priv)
+ 	route->path_rec->traffic_class = tos;
+ 	route->path_rec->mtu = iboe_get_mtu(ndev->mtu);
+ 	route->path_rec->rate_selector = IB_SA_EQ;
+-	route->path_rec->rate = iboe_get_rate(ndev);
++	route->path_rec->rate = IB_RATE_PORT_CURRENT;
+ 	dev_put(ndev);
+ 	route->path_rec->packet_life_time_selector = IB_SA_EQ;
+ 	/* In case ACK timeout is set, use this value to calculate
+@@ -4966,7 +4966,7 @@ static int cma_iboe_join_multicast(struct rdma_id_private *id_priv,
+ 	if (!ndev)
+ 		return -ENODEV;
+ 
+-	ib.rec.rate = iboe_get_rate(ndev);
++	ib.rec.rate = IB_RATE_PORT_CURRENT;
+ 	ib.rec.hop_limit = 1;
+ 	ib.rec.mtu = iboe_get_mtu(ndev->mtu);
+ 
+diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c
+index 4796f6a8828ca..e836c9c477f67 100644
+--- a/drivers/infiniband/core/uverbs_cmd.c
++++ b/drivers/infiniband/core/uverbs_cmd.c
+@@ -1850,8 +1850,13 @@ static int modify_qp(struct uverbs_attr_bundle *attrs,
+ 		attr->path_mtu = cmd->base.path_mtu;
+ 	if (cmd->base.attr_mask & IB_QP_PATH_MIG_STATE)
+ 		attr->path_mig_state = cmd->base.path_mig_state;
+-	if (cmd->base.attr_mask & IB_QP_QKEY)
++	if (cmd->base.attr_mask & IB_QP_QKEY) {
++		if (cmd->base.qkey & IB_QP_SET_QKEY && !capable(CAP_NET_RAW)) {
++			ret = -EPERM;
++			goto release_qp;
++		}
+ 		attr->qkey = cmd->base.qkey;
++	}
+ 	if (cmd->base.attr_mask & IB_QP_RQ_PSN)
+ 		attr->rq_psn = cmd->base.rq_psn;
+ 	if (cmd->base.attr_mask & IB_QP_SQ_PSN)
+diff --git a/drivers/infiniband/core/uverbs_main.c b/drivers/infiniband/core/uverbs_main.c
+index bdb179a09d77c..3cd4b195007b8 100644
+--- a/drivers/infiniband/core/uverbs_main.c
++++ b/drivers/infiniband/core/uverbs_main.c
+@@ -222,8 +222,12 @@ static ssize_t ib_uverbs_event_read(struct ib_uverbs_event_queue *ev_queue,
+ 	spin_lock_irq(&ev_queue->lock);
+ 
+ 	while (list_empty(&ev_queue->event_list)) {
+-		spin_unlock_irq(&ev_queue->lock);
++		if (ev_queue->is_closed) {
++			spin_unlock_irq(&ev_queue->lock);
++			return -EIO;
++		}
+ 
++		spin_unlock_irq(&ev_queue->lock);
+ 		if (filp->f_flags & O_NONBLOCK)
+ 			return -EAGAIN;
+ 
+@@ -233,12 +237,6 @@ static ssize_t ib_uverbs_event_read(struct ib_uverbs_event_queue *ev_queue,
+ 			return -ERESTARTSYS;
+ 
+ 		spin_lock_irq(&ev_queue->lock);
+-
+-		/* If device was disassociated and no event exists set an error */
+-		if (list_empty(&ev_queue->event_list) && ev_queue->is_closed) {
+-			spin_unlock_irq(&ev_queue->lock);
+-			return -EIO;
+-		}
+ 	}
+ 
+ 	event = list_entry(ev_queue->event_list.next, struct ib_uverbs_event, list);
+diff --git a/drivers/infiniband/hw/bnxt_re/bnxt_re.h b/drivers/infiniband/hw/bnxt_re/bnxt_re.h
+index 5a2baf49ecaa4..2c95e6f3d47ac 100644
+--- a/drivers/infiniband/hw/bnxt_re/bnxt_re.h
++++ b/drivers/infiniband/hw/bnxt_re/bnxt_re.h
+@@ -135,8 +135,6 @@ struct bnxt_re_dev {
+ 
+ 	struct delayed_work		worker;
+ 	u8				cur_prio_map;
+-	u16				active_speed;
+-	u8				active_width;
+ 
+ 	/* FP Notification Queue (CQ & SRQ) */
+ 	struct tasklet_struct		nq_task;
+diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+index 94222de1d3719..584d6e64ca708 100644
+--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c
++++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+@@ -199,6 +199,7 @@ int bnxt_re_query_port(struct ib_device *ibdev, u32 port_num,
+ {
+ 	struct bnxt_re_dev *rdev = to_bnxt_re_dev(ibdev, ibdev);
+ 	struct bnxt_qplib_dev_attr *dev_attr = &rdev->dev_attr;
++	int rc;
+ 
+ 	memset(port_attr, 0, sizeof(*port_attr));
+ 
+@@ -228,10 +229,10 @@ int bnxt_re_query_port(struct ib_device *ibdev, u32 port_num,
+ 	port_attr->sm_sl = 0;
+ 	port_attr->subnet_timeout = 0;
+ 	port_attr->init_type_reply = 0;
+-	port_attr->active_speed = rdev->active_speed;
+-	port_attr->active_width = rdev->active_width;
++	rc = ib_get_eth_speed(&rdev->ibdev, port_num, &port_attr->active_speed,
++			      &port_attr->active_width);
+ 
+-	return 0;
++	return rc;
+ }
+ 
+ int bnxt_re_get_port_immutable(struct ib_device *ibdev, u32 port_num,
+diff --git a/drivers/infiniband/hw/bnxt_re/main.c b/drivers/infiniband/hw/bnxt_re/main.c
+index c5867e78f2319..85e36c9f8e797 100644
+--- a/drivers/infiniband/hw/bnxt_re/main.c
++++ b/drivers/infiniband/hw/bnxt_re/main.c
+@@ -1152,8 +1152,6 @@ static int bnxt_re_ib_init(struct bnxt_re_dev *rdev)
+ 		return rc;
+ 	}
+ 	dev_info(rdev_to_dev(rdev), "Device registered with IB successfully");
+-	ib_get_eth_speed(&rdev->ibdev, 1, &rdev->active_speed,
+-			 &rdev->active_width);
+ 	set_bit(BNXT_RE_FLAG_ISSUE_ROCE_STATS, &rdev->flags);
+ 
+ 	event = netif_running(rdev->netdev) && netif_carrier_ok(rdev->netdev) ?
+diff --git a/drivers/infiniband/hw/mlx5/fs.c b/drivers/infiniband/hw/mlx5/fs.c
+index 3008632a6c206..1e419e080b535 100644
+--- a/drivers/infiniband/hw/mlx5/fs.c
++++ b/drivers/infiniband/hw/mlx5/fs.c
+@@ -695,8 +695,6 @@ static struct mlx5_ib_flow_prio *_get_prio(struct mlx5_ib_dev *dev,
+ 	struct mlx5_flow_table_attr ft_attr = {};
+ 	struct mlx5_flow_table *ft;
+ 
+-	if (mlx5_ib_shared_ft_allowed(&dev->ib_dev))
+-		ft_attr.uid = MLX5_SHARED_RESOURCE_UID;
+ 	ft_attr.prio = priority;
+ 	ft_attr.max_fte = num_entries;
+ 	ft_attr.flags = flags;
+@@ -2025,6 +2023,237 @@ static int flow_matcher_cleanup(struct ib_uobject *uobject,
+ 	return 0;
+ }
+ 
++static int steering_anchor_create_ft(struct mlx5_ib_dev *dev,
++				     struct mlx5_ib_flow_prio *ft_prio,
++				     enum mlx5_flow_namespace_type ns_type)
++{
++	struct mlx5_flow_table_attr ft_attr = {};
++	struct mlx5_flow_namespace *ns;
++	struct mlx5_flow_table *ft;
++
++	if (ft_prio->anchor.ft)
++		return 0;
++
++	ns = mlx5_get_flow_namespace(dev->mdev, ns_type);
++	if (!ns)
++		return -EOPNOTSUPP;
++
++	ft_attr.flags = MLX5_FLOW_TABLE_UNMANAGED;
++	ft_attr.uid = MLX5_SHARED_RESOURCE_UID;
++	ft_attr.prio = 0;
++	ft_attr.max_fte = 2;
++	ft_attr.level = 1;
++
++	ft = mlx5_create_flow_table(ns, &ft_attr);
++	if (IS_ERR(ft))
++		return PTR_ERR(ft);
++
++	ft_prio->anchor.ft = ft;
++
++	return 0;
++}
++
++static void steering_anchor_destroy_ft(struct mlx5_ib_flow_prio *ft_prio)
++{
++	if (ft_prio->anchor.ft) {
++		mlx5_destroy_flow_table(ft_prio->anchor.ft);
++		ft_prio->anchor.ft = NULL;
++	}
++}
++
++static int
++steering_anchor_create_fg_drop(struct mlx5_ib_flow_prio *ft_prio)
++{
++	int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in);
++	struct mlx5_flow_group *fg;
++	void *flow_group_in;
++	int err = 0;
++
++	if (ft_prio->anchor.fg_drop)
++		return 0;
++
++	flow_group_in = kvzalloc(inlen, GFP_KERNEL);
++	if (!flow_group_in)
++		return -ENOMEM;
++
++	MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, 1);
++	MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index, 1);
++
++	fg = mlx5_create_flow_group(ft_prio->anchor.ft, flow_group_in);
++	if (IS_ERR(fg)) {
++		err = PTR_ERR(fg);
++		goto out;
++	}
++
++	ft_prio->anchor.fg_drop = fg;
++
++out:
++	kvfree(flow_group_in);
++
++	return err;
++}
++
++static void
++steering_anchor_destroy_fg_drop(struct mlx5_ib_flow_prio *ft_prio)
++{
++	if (ft_prio->anchor.fg_drop) {
++		mlx5_destroy_flow_group(ft_prio->anchor.fg_drop);
++		ft_prio->anchor.fg_drop = NULL;
++	}
++}
++
++static int
++steering_anchor_create_fg_goto_table(struct mlx5_ib_flow_prio *ft_prio)
++{
++	int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in);
++	struct mlx5_flow_group *fg;
++	void *flow_group_in;
++	int err = 0;
++
++	if (ft_prio->anchor.fg_goto_table)
++		return 0;
++
++	flow_group_in = kvzalloc(inlen, GFP_KERNEL);
++	if (!flow_group_in)
++		return -ENOMEM;
++
++	fg = mlx5_create_flow_group(ft_prio->anchor.ft, flow_group_in);
++	if (IS_ERR(fg)) {
++		err = PTR_ERR(fg);
++		goto out;
++	}
++	ft_prio->anchor.fg_goto_table = fg;
++
++out:
++	kvfree(flow_group_in);
++
++	return err;
++}
++
++static void
++steering_anchor_destroy_fg_goto_table(struct mlx5_ib_flow_prio *ft_prio)
++{
++	if (ft_prio->anchor.fg_goto_table) {
++		mlx5_destroy_flow_group(ft_prio->anchor.fg_goto_table);
++		ft_prio->anchor.fg_goto_table = NULL;
++	}
++}
++
++static int
++steering_anchor_create_rule_drop(struct mlx5_ib_flow_prio *ft_prio)
++{
++	struct mlx5_flow_act flow_act = {};
++	struct mlx5_flow_handle *handle;
++
++	if (ft_prio->anchor.rule_drop)
++		return 0;
++
++	flow_act.fg = ft_prio->anchor.fg_drop;
++	flow_act.action = MLX5_FLOW_CONTEXT_ACTION_DROP;
++
++	handle = mlx5_add_flow_rules(ft_prio->anchor.ft, NULL, &flow_act,
++				     NULL, 0);
++	if (IS_ERR(handle))
++		return PTR_ERR(handle);
++
++	ft_prio->anchor.rule_drop = handle;
++
++	return 0;
++}
++
++static void steering_anchor_destroy_rule_drop(struct mlx5_ib_flow_prio *ft_prio)
++{
++	if (ft_prio->anchor.rule_drop) {
++		mlx5_del_flow_rules(ft_prio->anchor.rule_drop);
++		ft_prio->anchor.rule_drop = NULL;
++	}
++}
++
++static int
++steering_anchor_create_rule_goto_table(struct mlx5_ib_flow_prio *ft_prio)
++{
++	struct mlx5_flow_destination dest = {};
++	struct mlx5_flow_act flow_act = {};
++	struct mlx5_flow_handle *handle;
++
++	if (ft_prio->anchor.rule_goto_table)
++		return 0;
++
++	flow_act.action = MLX5_FLOW_CONTEXT_ACTION_FWD_DEST;
++	flow_act.flags |= FLOW_ACT_IGNORE_FLOW_LEVEL;
++	flow_act.fg = ft_prio->anchor.fg_goto_table;
++
++	dest.type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE;
++	dest.ft = ft_prio->flow_table;
++
++	handle = mlx5_add_flow_rules(ft_prio->anchor.ft, NULL, &flow_act,
++				     &dest, 1);
++	if (IS_ERR(handle))
++		return PTR_ERR(handle);
++
++	ft_prio->anchor.rule_goto_table = handle;
++
++	return 0;
++}
++
++static void
++steering_anchor_destroy_rule_goto_table(struct mlx5_ib_flow_prio *ft_prio)
++{
++	if (ft_prio->anchor.rule_goto_table) {
++		mlx5_del_flow_rules(ft_prio->anchor.rule_goto_table);
++		ft_prio->anchor.rule_goto_table = NULL;
++	}
++}
++
++static int steering_anchor_create_res(struct mlx5_ib_dev *dev,
++				      struct mlx5_ib_flow_prio *ft_prio,
++				      enum mlx5_flow_namespace_type ns_type)
++{
++	int err;
++
++	err = steering_anchor_create_ft(dev, ft_prio, ns_type);
++	if (err)
++		return err;
++
++	err = steering_anchor_create_fg_drop(ft_prio);
++	if (err)
++		goto destroy_ft;
++
++	err = steering_anchor_create_fg_goto_table(ft_prio);
++	if (err)
++		goto destroy_fg_drop;
++
++	err = steering_anchor_create_rule_drop(ft_prio);
++	if (err)
++		goto destroy_fg_goto_table;
++
++	err = steering_anchor_create_rule_goto_table(ft_prio);
++	if (err)
++		goto destroy_rule_drop;
++
++	return 0;
++
++destroy_rule_drop:
++	steering_anchor_destroy_rule_drop(ft_prio);
++destroy_fg_goto_table:
++	steering_anchor_destroy_fg_goto_table(ft_prio);
++destroy_fg_drop:
++	steering_anchor_destroy_fg_drop(ft_prio);
++destroy_ft:
++	steering_anchor_destroy_ft(ft_prio);
++
++	return err;
++}
++
++static void mlx5_steering_anchor_destroy_res(struct mlx5_ib_flow_prio *ft_prio)
++{
++	steering_anchor_destroy_rule_goto_table(ft_prio);
++	steering_anchor_destroy_rule_drop(ft_prio);
++	steering_anchor_destroy_fg_goto_table(ft_prio);
++	steering_anchor_destroy_fg_drop(ft_prio);
++	steering_anchor_destroy_ft(ft_prio);
++}
++
+ static int steering_anchor_cleanup(struct ib_uobject *uobject,
+ 				   enum rdma_remove_reason why,
+ 				   struct uverbs_attr_bundle *attrs)
+@@ -2035,6 +2264,9 @@ static int steering_anchor_cleanup(struct ib_uobject *uobject,
+ 		return -EBUSY;
+ 
+ 	mutex_lock(&obj->dev->flow_db->lock);
++	if (!--obj->ft_prio->anchor.rule_goto_table_ref)
++		steering_anchor_destroy_rule_goto_table(obj->ft_prio);
++
+ 	put_flow_table(obj->dev, obj->ft_prio, true);
+ 	mutex_unlock(&obj->dev->flow_db->lock);
+ 
+@@ -2042,6 +2274,24 @@ static int steering_anchor_cleanup(struct ib_uobject *uobject,
+ 	return 0;
+ }
+ 
++static void fs_cleanup_anchor(struct mlx5_ib_flow_prio *prio,
++			      int count)
++{
++	while (count--)
++		mlx5_steering_anchor_destroy_res(&prio[count]);
++}
++
++void mlx5_ib_fs_cleanup_anchor(struct mlx5_ib_dev *dev)
++{
++	fs_cleanup_anchor(dev->flow_db->prios, MLX5_IB_NUM_FLOW_FT);
++	fs_cleanup_anchor(dev->flow_db->egress_prios, MLX5_IB_NUM_FLOW_FT);
++	fs_cleanup_anchor(dev->flow_db->sniffer, MLX5_IB_NUM_SNIFFER_FTS);
++	fs_cleanup_anchor(dev->flow_db->egress, MLX5_IB_NUM_EGRESS_FTS);
++	fs_cleanup_anchor(dev->flow_db->fdb, MLX5_IB_NUM_FDB_FTS);
++	fs_cleanup_anchor(dev->flow_db->rdma_rx, MLX5_IB_NUM_FLOW_FT);
++	fs_cleanup_anchor(dev->flow_db->rdma_tx, MLX5_IB_NUM_FLOW_FT);
++}
++
+ static int mlx5_ib_matcher_ns(struct uverbs_attr_bundle *attrs,
+ 			      struct mlx5_ib_flow_matcher *obj)
+ {
+@@ -2182,21 +2432,31 @@ static int UVERBS_HANDLER(MLX5_IB_METHOD_STEERING_ANCHOR_CREATE)(
+ 		return -ENOMEM;
+ 
+ 	mutex_lock(&dev->flow_db->lock);
++
+ 	ft_prio = _get_flow_table(dev, priority, ns_type, 0);
+ 	if (IS_ERR(ft_prio)) {
+-		mutex_unlock(&dev->flow_db->lock);
+ 		err = PTR_ERR(ft_prio);
+ 		goto free_obj;
+ 	}
+ 
+ 	ft_prio->refcount++;
+-	ft_id = mlx5_flow_table_id(ft_prio->flow_table);
+-	mutex_unlock(&dev->flow_db->lock);
++
++	if (!ft_prio->anchor.rule_goto_table_ref) {
++		err = steering_anchor_create_res(dev, ft_prio, ns_type);
++		if (err)
++			goto put_flow_table;
++	}
++
++	ft_prio->anchor.rule_goto_table_ref++;
++
++	ft_id = mlx5_flow_table_id(ft_prio->anchor.ft);
+ 
+ 	err = uverbs_copy_to(attrs, MLX5_IB_ATTR_STEERING_ANCHOR_FT_ID,
+ 			     &ft_id, sizeof(ft_id));
+ 	if (err)
+-		goto put_flow_table;
++		goto destroy_res;
++
++	mutex_unlock(&dev->flow_db->lock);
+ 
+ 	uobj->object = obj;
+ 	obj->dev = dev;
+@@ -2205,8 +2465,10 @@ static int UVERBS_HANDLER(MLX5_IB_METHOD_STEERING_ANCHOR_CREATE)(
+ 
+ 	return 0;
+ 
++destroy_res:
++	--ft_prio->anchor.rule_goto_table_ref;
++	mlx5_steering_anchor_destroy_res(ft_prio);
+ put_flow_table:
+-	mutex_lock(&dev->flow_db->lock);
+ 	put_flow_table(dev, ft_prio, true);
+ 	mutex_unlock(&dev->flow_db->lock);
+ free_obj:
+diff --git a/drivers/infiniband/hw/mlx5/fs.h b/drivers/infiniband/hw/mlx5/fs.h
+index ad320adaf3217..b9734904f5f01 100644
+--- a/drivers/infiniband/hw/mlx5/fs.h
++++ b/drivers/infiniband/hw/mlx5/fs.h
+@@ -10,6 +10,7 @@
+ 
+ #if IS_ENABLED(CONFIG_INFINIBAND_USER_ACCESS)
+ int mlx5_ib_fs_init(struct mlx5_ib_dev *dev);
++void mlx5_ib_fs_cleanup_anchor(struct mlx5_ib_dev *dev);
+ #else
+ static inline int mlx5_ib_fs_init(struct mlx5_ib_dev *dev)
+ {
+@@ -21,9 +22,24 @@ static inline int mlx5_ib_fs_init(struct mlx5_ib_dev *dev)
+ 	mutex_init(&dev->flow_db->lock);
+ 	return 0;
+ }
++
++inline void mlx5_ib_fs_cleanup_anchor(struct mlx5_ib_dev *dev) {}
+ #endif
++
+ static inline void mlx5_ib_fs_cleanup(struct mlx5_ib_dev *dev)
+ {
++	/* When a steering anchor is created, a special flow table is also
++	 * created for the user to reference. Since the user can reference it,
++	 * the kernel cannot trust that when the user destroys the steering
++	 * anchor, they no longer reference the flow table.
++	 *
++	 * To address this issue, when a user destroys a steering anchor, only
++	 * the flow steering rule in the table is destroyed, but the table
++	 * itself is kept to deal with the above scenario. The remaining
++	 * resources are only removed when the RDMA device is destroyed, which
++	 * is a safe assumption that all references are gone.
++	 */
++	mlx5_ib_fs_cleanup_anchor(dev);
+ 	kfree(dev->flow_db);
+ }
+ #endif /* _MLX5_IB_FS_H */
+diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
+index 5d45de223c43a..f0b394ed7452a 100644
+--- a/drivers/infiniband/hw/mlx5/main.c
++++ b/drivers/infiniband/hw/mlx5/main.c
+@@ -4275,6 +4275,9 @@ const struct mlx5_ib_profile raw_eth_profile = {
+ 	STAGE_CREATE(MLX5_IB_STAGE_POST_IB_REG_UMR,
+ 		     mlx5_ib_stage_post_ib_reg_umr_init,
+ 		     NULL),
++	STAGE_CREATE(MLX5_IB_STAGE_DELAY_DROP,
++		     mlx5_ib_stage_delay_drop_init,
++		     mlx5_ib_stage_delay_drop_cleanup),
+ 	STAGE_CREATE(MLX5_IB_STAGE_RESTRACK,
+ 		     mlx5_ib_restrack_init,
+ 		     NULL),
+diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h
+index efa4dc6e7dee1..2dfa6f49a6f48 100644
+--- a/drivers/infiniband/hw/mlx5/mlx5_ib.h
++++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
+@@ -237,8 +237,19 @@ enum {
+ #define MLX5_IB_NUM_SNIFFER_FTS		2
+ #define MLX5_IB_NUM_EGRESS_FTS		1
+ #define MLX5_IB_NUM_FDB_FTS		MLX5_BY_PASS_NUM_REGULAR_PRIOS
++
++struct mlx5_ib_anchor {
++	struct mlx5_flow_table *ft;
++	struct mlx5_flow_group *fg_goto_table;
++	struct mlx5_flow_group *fg_drop;
++	struct mlx5_flow_handle *rule_goto_table;
++	struct mlx5_flow_handle *rule_drop;
++	unsigned int rule_goto_table_ref;
++};
++
+ struct mlx5_ib_flow_prio {
+ 	struct mlx5_flow_table		*flow_table;
++	struct mlx5_ib_anchor		anchor;
+ 	unsigned int			refcount;
+ };
+ 
+@@ -1587,6 +1598,9 @@ static inline bool mlx5_ib_lag_should_assign_affinity(struct mlx5_ib_dev *dev)
+ 	    MLX5_CAP_PORT_SELECTION(dev->mdev, port_select_flow_table_bypass))
+ 		return 0;
+ 
++	if (mlx5_lag_is_lacp_owner(dev->mdev) && !dev->lag_active)
++		return 0;
++
+ 	return dev->lag_active ||
+ 		(MLX5_CAP_GEN(dev->mdev, num_lag_ports) > 1 &&
+ 		 MLX5_CAP_GEN(dev->mdev, lag_tx_port_affinity));
+diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
+index 86284aba3470d..5bc63020b766d 100644
+--- a/drivers/infiniband/hw/mlx5/qp.c
++++ b/drivers/infiniband/hw/mlx5/qp.c
+@@ -1233,6 +1233,9 @@ static int create_raw_packet_qp_tis(struct mlx5_ib_dev *dev,
+ 
+ 	MLX5_SET(create_tis_in, in, uid, to_mpd(pd)->uid);
+ 	MLX5_SET(tisc, tisc, transport_domain, tdn);
++	if (!mlx5_ib_lag_should_assign_affinity(dev) &&
++	    mlx5_lag_is_lacp_owner(dev->mdev))
++		MLX5_SET(tisc, tisc, strict_lag_tx_port_affinity, 1);
+ 	if (qp->flags & IB_QP_CREATE_SOURCE_QPN)
+ 		MLX5_SET(tisc, tisc, underlay_qpn, qp->underlay_qpn);
+ 
+diff --git a/drivers/infiniband/sw/rxe/rxe_cq.c b/drivers/infiniband/sw/rxe/rxe_cq.c
+index 519ddec29b4ba..d6113329fee61 100644
+--- a/drivers/infiniband/sw/rxe/rxe_cq.c
++++ b/drivers/infiniband/sw/rxe/rxe_cq.c
+@@ -112,8 +112,6 @@ int rxe_cq_post(struct rxe_cq *cq, struct rxe_cqe *cqe, int solicited)
+ 
+ 	queue_advance_producer(cq->queue, QUEUE_TYPE_TO_CLIENT);
+ 
+-	spin_unlock_irqrestore(&cq->cq_lock, flags);
+-
+ 	if ((cq->notify == IB_CQ_NEXT_COMP) ||
+ 	    (cq->notify == IB_CQ_SOLICITED && solicited)) {
+ 		cq->notify = 0;
+@@ -121,6 +119,8 @@ int rxe_cq_post(struct rxe_cq *cq, struct rxe_cqe *cqe, int solicited)
+ 		cq->ibcq.comp_handler(&cq->ibcq, cq->ibcq.cq_context);
+ 	}
+ 
++	spin_unlock_irqrestore(&cq->cq_lock, flags);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/infiniband/sw/rxe/rxe_net.c b/drivers/infiniband/sw/rxe/rxe_net.c
+index a2ace42e95366..e8403a70c6b74 100644
+--- a/drivers/infiniband/sw/rxe/rxe_net.c
++++ b/drivers/infiniband/sw/rxe/rxe_net.c
+@@ -159,6 +159,9 @@ static int rxe_udp_encap_recv(struct sock *sk, struct sk_buff *skb)
+ 	pkt->mask = RXE_GRH_MASK;
+ 	pkt->paylen = be16_to_cpu(udph->len) - sizeof(*udph);
+ 
++	/* remove udp header */
++	skb_pull(skb, sizeof(struct udphdr));
++
+ 	rxe_rcv(skb);
+ 
+ 	return 0;
+@@ -401,6 +404,9 @@ static int rxe_loopback(struct sk_buff *skb, struct rxe_pkt_info *pkt)
+ 		return -EIO;
+ 	}
+ 
++	/* remove udp header */
++	skb_pull(skb, sizeof(struct udphdr));
++
+ 	rxe_rcv(skb);
+ 
+ 	return 0;
+diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c
+index d5de5ba6940f1..94a7f5ebc6292 100644
+--- a/drivers/infiniband/sw/rxe/rxe_qp.c
++++ b/drivers/infiniband/sw/rxe/rxe_qp.c
+@@ -176,6 +176,9 @@ static void rxe_qp_init_misc(struct rxe_dev *rxe, struct rxe_qp *qp,
+ 	spin_lock_init(&qp->rq.producer_lock);
+ 	spin_lock_init(&qp->rq.consumer_lock);
+ 
++	skb_queue_head_init(&qp->req_pkts);
++	skb_queue_head_init(&qp->resp_pkts);
++
+ 	atomic_set(&qp->ssn, 0);
+ 	atomic_set(&qp->skb_out, 0);
+ }
+@@ -236,8 +239,6 @@ static int rxe_qp_init_req(struct rxe_dev *rxe, struct rxe_qp *qp,
+ 	qp->req.opcode		= -1;
+ 	qp->comp.opcode		= -1;
+ 
+-	skb_queue_head_init(&qp->req_pkts);
+-
+ 	rxe_init_task(&qp->req.task, qp, rxe_requester);
+ 	rxe_init_task(&qp->comp.task, qp, rxe_completer);
+ 
+@@ -281,8 +282,6 @@ static int rxe_qp_init_resp(struct rxe_dev *rxe, struct rxe_qp *qp,
+ 		}
+ 	}
+ 
+-	skb_queue_head_init(&qp->resp_pkts);
+-
+ 	rxe_init_task(&qp->resp.task, qp, rxe_responder);
+ 
+ 	qp->resp.opcode		= OPCODE_NONE;
+diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
+index 8c68340502769..4e8d8bec0010b 100644
+--- a/drivers/infiniband/sw/rxe/rxe_resp.c
++++ b/drivers/infiniband/sw/rxe/rxe_resp.c
+@@ -519,8 +519,9 @@ static enum resp_states check_rkey(struct rxe_qp *qp,
+ 		if (mw->access & IB_ZERO_BASED)
+ 			qp->resp.offset = mw->addr;
+ 
+-		rxe_put(mw);
+ 		rxe_get(mr);
++		rxe_put(mw);
++		mw = NULL;
+ 	} else {
+ 		mr = lookup_mr(qp->pd, access, rkey, RXE_LOOKUP_REMOTE);
+ 		if (!mr) {
+diff --git a/drivers/infiniband/ulp/isert/ib_isert.c b/drivers/infiniband/ulp/isert/ib_isert.c
+index f290cd49698ea..92e1e7587af8b 100644
+--- a/drivers/infiniband/ulp/isert/ib_isert.c
++++ b/drivers/infiniband/ulp/isert/ib_isert.c
+@@ -657,9 +657,13 @@ static int
+ isert_connect_error(struct rdma_cm_id *cma_id)
+ {
+ 	struct isert_conn *isert_conn = cma_id->qp->qp_context;
++	struct isert_np *isert_np = cma_id->context;
+ 
+ 	ib_drain_qp(isert_conn->qp);
++
++	mutex_lock(&isert_np->mutex);
+ 	list_del_init(&isert_conn->node);
++	mutex_unlock(&isert_np->mutex);
+ 	isert_conn->cm_id = NULL;
+ 	isert_put_conn(isert_conn);
+ 
+@@ -2431,6 +2435,7 @@ isert_free_np(struct iscsi_np *np)
+ {
+ 	struct isert_np *isert_np = np->np_context;
+ 	struct isert_conn *isert_conn, *n;
++	LIST_HEAD(drop_conn_list);
+ 
+ 	if (isert_np->cm_id)
+ 		rdma_destroy_id(isert_np->cm_id);
+@@ -2450,7 +2455,7 @@ isert_free_np(struct iscsi_np *np)
+ 					 node) {
+ 			isert_info("cleaning isert_conn %p state (%d)\n",
+ 				   isert_conn, isert_conn->state);
+-			isert_connect_release(isert_conn);
++			list_move_tail(&isert_conn->node, &drop_conn_list);
+ 		}
+ 	}
+ 
+@@ -2461,11 +2466,16 @@ isert_free_np(struct iscsi_np *np)
+ 					 node) {
+ 			isert_info("cleaning isert_conn %p state (%d)\n",
+ 				   isert_conn, isert_conn->state);
+-			isert_connect_release(isert_conn);
++			list_move_tail(&isert_conn->node, &drop_conn_list);
+ 		}
+ 	}
+ 	mutex_unlock(&isert_np->mutex);
+ 
++	list_for_each_entry_safe(isert_conn, n, &drop_conn_list, node) {
++		list_del_init(&isert_conn->node);
++		isert_connect_release(isert_conn);
++	}
++
+ 	np->np_context = NULL;
+ 	kfree(isert_np);
+ }
+@@ -2560,8 +2570,6 @@ static void isert_wait_conn(struct iscsit_conn *conn)
+ 	isert_put_unsol_pending_cmds(conn);
+ 	isert_wait4cmds(conn);
+ 	isert_wait4logout(isert_conn);
+-
+-	queue_work(isert_release_wq, &isert_conn->release_work);
+ }
+ 
+ static void isert_free_conn(struct iscsit_conn *conn)
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt.c b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+index 80abf45a197ac..047bbfb96e2cb 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs-clt.c
++++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+@@ -2040,6 +2040,7 @@ static int rtrs_clt_rdma_cm_handler(struct rdma_cm_id *cm_id,
+ 	return 0;
+ }
+ 
++/* The caller should do the cleanup in case of error */
+ static int create_cm(struct rtrs_clt_con *con)
+ {
+ 	struct rtrs_path *s = con->c.path;
+@@ -2062,14 +2063,14 @@ static int create_cm(struct rtrs_clt_con *con)
+ 	err = rdma_set_reuseaddr(cm_id, 1);
+ 	if (err != 0) {
+ 		rtrs_err(s, "Set address reuse failed, err: %d\n", err);
+-		goto destroy_cm;
++		return err;
+ 	}
+ 	err = rdma_resolve_addr(cm_id, (struct sockaddr *)&clt_path->s.src_addr,
+ 				(struct sockaddr *)&clt_path->s.dst_addr,
+ 				RTRS_CONNECT_TIMEOUT_MS);
+ 	if (err) {
+ 		rtrs_err(s, "Failed to resolve address, err: %d\n", err);
+-		goto destroy_cm;
++		return err;
+ 	}
+ 	/*
+ 	 * Combine connection status and session events. This is needed
+@@ -2084,29 +2085,15 @@ static int create_cm(struct rtrs_clt_con *con)
+ 		if (err == 0)
+ 			err = -ETIMEDOUT;
+ 		/* Timedout or interrupted */
+-		goto errr;
+-	}
+-	if (con->cm_err < 0) {
+-		err = con->cm_err;
+-		goto errr;
++		return err;
+ 	}
+-	if (READ_ONCE(clt_path->state) != RTRS_CLT_CONNECTING) {
++	if (con->cm_err < 0)
++		return con->cm_err;
++	if (READ_ONCE(clt_path->state) != RTRS_CLT_CONNECTING)
+ 		/* Device removal */
+-		err = -ECONNABORTED;
+-		goto errr;
+-	}
++		return -ECONNABORTED;
+ 
+ 	return 0;
+-
+-errr:
+-	stop_cm(con);
+-	mutex_lock(&con->con_mutex);
+-	destroy_con_cq_qp(con);
+-	mutex_unlock(&con->con_mutex);
+-destroy_cm:
+-	destroy_cm(con);
+-
+-	return err;
+ }
+ 
+ static void rtrs_clt_path_up(struct rtrs_clt_path *clt_path)
+@@ -2334,7 +2321,7 @@ static void rtrs_clt_close_work(struct work_struct *work)
+ static int init_conns(struct rtrs_clt_path *clt_path)
+ {
+ 	unsigned int cid;
+-	int err;
++	int err, i;
+ 
+ 	/*
+ 	 * On every new session connections increase reconnect counter
+@@ -2350,10 +2337,8 @@ static int init_conns(struct rtrs_clt_path *clt_path)
+ 			goto destroy;
+ 
+ 		err = create_cm(to_clt_con(clt_path->s.con[cid]));
+-		if (err) {
+-			destroy_con(to_clt_con(clt_path->s.con[cid]));
++		if (err)
+ 			goto destroy;
+-		}
+ 	}
+ 	err = alloc_path_reqs(clt_path);
+ 	if (err)
+@@ -2364,15 +2349,21 @@ static int init_conns(struct rtrs_clt_path *clt_path)
+ 	return 0;
+ 
+ destroy:
+-	while (cid--) {
+-		struct rtrs_clt_con *con = to_clt_con(clt_path->s.con[cid]);
++	/* Make sure we do the cleanup in the order they are created */
++	for (i = 0; i <= cid; i++) {
++		struct rtrs_clt_con *con;
+ 
+-		stop_cm(con);
++		if (!clt_path->s.con[i])
++			break;
+ 
+-		mutex_lock(&con->con_mutex);
+-		destroy_con_cq_qp(con);
+-		mutex_unlock(&con->con_mutex);
+-		destroy_cm(con);
++		con = to_clt_con(clt_path->s.con[i]);
++		if (con->c.cm_id) {
++			stop_cm(con);
++			mutex_lock(&con->con_mutex);
++			destroy_con_cq_qp(con);
++			mutex_unlock(&con->con_mutex);
++			destroy_cm(con);
++		}
+ 		destroy_con(con);
+ 	}
+ 	/*
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs.c b/drivers/infiniband/ulp/rtrs/rtrs.c
+index 4bf9d868cc522..3696f367ff515 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs.c
++++ b/drivers/infiniband/ulp/rtrs/rtrs.c
+@@ -37,8 +37,10 @@ struct rtrs_iu *rtrs_iu_alloc(u32 iu_num, size_t size, gfp_t gfp_mask,
+ 			goto err;
+ 
+ 		iu->dma_addr = ib_dma_map_single(dma_dev, iu->buf, size, dir);
+-		if (ib_dma_mapping_error(dma_dev, iu->dma_addr))
++		if (ib_dma_mapping_error(dma_dev, iu->dma_addr)) {
++			kfree(iu->buf);
+ 			goto err;
++		}
+ 
+ 		iu->cqe.done  = done;
+ 		iu->size      = size;
+diff --git a/drivers/irqchip/irq-gic-common.c b/drivers/irqchip/irq-gic-common.c
+index a610821c8ff2a..afd6a1841715a 100644
+--- a/drivers/irqchip/irq-gic-common.c
++++ b/drivers/irqchip/irq-gic-common.c
+@@ -16,7 +16,13 @@ void gic_enable_of_quirks(const struct device_node *np,
+ 			  const struct gic_quirk *quirks, void *data)
+ {
+ 	for (; quirks->desc; quirks++) {
+-		if (!of_device_is_compatible(np, quirks->compatible))
++		if (!quirks->compatible && !quirks->property)
++			continue;
++		if (quirks->compatible &&
++		    !of_device_is_compatible(np, quirks->compatible))
++			continue;
++		if (quirks->property &&
++		    !of_property_read_bool(np, quirks->property))
+ 			continue;
+ 		if (quirks->init(data))
+ 			pr_info("GIC: enabling workaround for %s\n",
+@@ -28,7 +34,7 @@ void gic_enable_quirks(u32 iidr, const struct gic_quirk *quirks,
+ 		void *data)
+ {
+ 	for (; quirks->desc; quirks++) {
+-		if (quirks->compatible)
++		if (quirks->compatible || quirks->property)
+ 			continue;
+ 		if (quirks->iidr != (quirks->mask & iidr))
+ 			continue;
+diff --git a/drivers/irqchip/irq-gic-common.h b/drivers/irqchip/irq-gic-common.h
+index 27e3d4ed4f328..3db4592cda1c0 100644
+--- a/drivers/irqchip/irq-gic-common.h
++++ b/drivers/irqchip/irq-gic-common.h
+@@ -13,6 +13,7 @@
+ struct gic_quirk {
+ 	const char *desc;
+ 	const char *compatible;
++	const char *property;
+ 	bool (*init)(void *data);
+ 	u32 iidr;
+ 	u32 mask;
+diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
+index 6fcee221f2017..a605aa79435a4 100644
+--- a/drivers/irqchip/irq-gic-v3.c
++++ b/drivers/irqchip/irq-gic-v3.c
+@@ -39,6 +39,7 @@
+ 
+ #define FLAGS_WORKAROUND_GICR_WAKER_MSM8996	(1ULL << 0)
+ #define FLAGS_WORKAROUND_CAVIUM_ERRATUM_38539	(1ULL << 1)
++#define FLAGS_WORKAROUND_MTK_GICR_SAVE		(1ULL << 2)
+ 
+ #define GIC_IRQ_TYPE_PARTITION	(GIC_IRQ_TYPE_LPI + 1)
+ 
+@@ -1720,6 +1721,15 @@ static bool gic_enable_quirk_msm8996(void *data)
+ 	return true;
+ }
+ 
++static bool gic_enable_quirk_mtk_gicr(void *data)
++{
++	struct gic_chip_data *d = data;
++
++	d->flags |= FLAGS_WORKAROUND_MTK_GICR_SAVE;
++
++	return true;
++}
++
+ static bool gic_enable_quirk_cavium_38539(void *data)
+ {
+ 	struct gic_chip_data *d = data;
+@@ -1792,6 +1802,11 @@ static const struct gic_quirk gic_quirks[] = {
+ 		.compatible = "qcom,msm8996-gic-v3",
+ 		.init	= gic_enable_quirk_msm8996,
+ 	},
++	{
++		.desc	= "GICv3: Mediatek Chromebook GICR save problem",
++		.property = "mediatek,broken-save-restore-fw",
++		.init	= gic_enable_quirk_mtk_gicr,
++	},
+ 	{
+ 		.desc	= "GICv3: HIP06 erratum 161010803",
+ 		.iidr	= 0x0204043b,
+@@ -1834,6 +1849,11 @@ static void gic_enable_nmi_support(void)
+ 	if (!gic_prio_masking_enabled())
+ 		return;
+ 
++	if (gic_data.flags & FLAGS_WORKAROUND_MTK_GICR_SAVE) {
++		pr_warn("Skipping NMI enable due to firmware issues\n");
++		return;
++	}
++
+ 	ppi_nmi_refs = kcalloc(gic_data.ppi_nr, sizeof(*ppi_nmi_refs), GFP_KERNEL);
+ 	if (!ppi_nmi_refs)
+ 		return;
+diff --git a/drivers/irqchip/irq-meson-gpio.c b/drivers/irqchip/irq-meson-gpio.c
+index 2aaa9aad3e87a..7da18ef952119 100644
+--- a/drivers/irqchip/irq-meson-gpio.c
++++ b/drivers/irqchip/irq-meson-gpio.c
+@@ -150,7 +150,7 @@ static const struct meson_gpio_irq_params s4_params = {
+ 	INIT_MESON_S4_COMMON_DATA(82)
+ };
+ 
+-static const struct of_device_id meson_irq_gpio_matches[] = {
++static const struct of_device_id meson_irq_gpio_matches[] __maybe_unused = {
+ 	{ .compatible = "amlogic,meson8-gpio-intc", .data = &meson8_params },
+ 	{ .compatible = "amlogic,meson8b-gpio-intc", .data = &meson8b_params },
+ 	{ .compatible = "amlogic,meson-gxbb-gpio-intc", .data = &gxbb_params },
+diff --git a/drivers/md/dm-ioctl.c b/drivers/md/dm-ioctl.c
+index cc77cf3d41092..7d5c9c582ed2d 100644
+--- a/drivers/md/dm-ioctl.c
++++ b/drivers/md/dm-ioctl.c
+@@ -1168,13 +1168,10 @@ static int do_resume(struct dm_ioctl *param)
+ 	/* Do we need to load a new map ? */
+ 	if (new_map) {
+ 		sector_t old_size, new_size;
+-		int srcu_idx;
+ 
+ 		/* Suspend if it isn't already suspended */
+-		old_map = dm_get_live_table(md, &srcu_idx);
+-		if ((param->flags & DM_SKIP_LOCKFS_FLAG) || !old_map)
++		if (param->flags & DM_SKIP_LOCKFS_FLAG)
+ 			suspend_flags &= ~DM_SUSPEND_LOCKFS_FLAG;
+-		dm_put_live_table(md, srcu_idx);
+ 		if (param->flags & DM_NOFLUSH_FLAG)
+ 			suspend_flags |= DM_SUSPEND_NOFLUSH_FLAG;
+ 		if (!dm_suspended_md(md))
+diff --git a/drivers/md/dm-thin-metadata.c b/drivers/md/dm-thin-metadata.c
+index fd464fb024c3c..9dd0409848abe 100644
+--- a/drivers/md/dm-thin-metadata.c
++++ b/drivers/md/dm-thin-metadata.c
+@@ -1756,13 +1756,15 @@ int dm_thin_remove_range(struct dm_thin_device *td,
+ 
+ int dm_pool_block_is_shared(struct dm_pool_metadata *pmd, dm_block_t b, bool *result)
+ {
+-	int r;
++	int r = -EINVAL;
+ 	uint32_t ref_count;
+ 
+ 	down_read(&pmd->root_lock);
+-	r = dm_sm_get_count(pmd->data_sm, b, &ref_count);
+-	if (!r)
+-		*result = (ref_count > 1);
++	if (!pmd->fail_io) {
++		r = dm_sm_get_count(pmd->data_sm, b, &ref_count);
++		if (!r)
++			*result = (ref_count > 1);
++	}
+ 	up_read(&pmd->root_lock);
+ 
+ 	return r;
+@@ -1770,10 +1772,11 @@ int dm_pool_block_is_shared(struct dm_pool_metadata *pmd, dm_block_t b, bool *re
+ 
+ int dm_pool_inc_data_range(struct dm_pool_metadata *pmd, dm_block_t b, dm_block_t e)
+ {
+-	int r = 0;
++	int r = -EINVAL;
+ 
+ 	pmd_write_lock(pmd);
+-	r = dm_sm_inc_blocks(pmd->data_sm, b, e);
++	if (!pmd->fail_io)
++		r = dm_sm_inc_blocks(pmd->data_sm, b, e);
+ 	pmd_write_unlock(pmd);
+ 
+ 	return r;
+@@ -1781,10 +1784,11 @@ int dm_pool_inc_data_range(struct dm_pool_metadata *pmd, dm_block_t b, dm_block_
+ 
+ int dm_pool_dec_data_range(struct dm_pool_metadata *pmd, dm_block_t b, dm_block_t e)
+ {
+-	int r = 0;
++	int r = -EINVAL;
+ 
+ 	pmd_write_lock(pmd);
+-	r = dm_sm_dec_blocks(pmd->data_sm, b, e);
++	if (!pmd->fail_io)
++		r = dm_sm_dec_blocks(pmd->data_sm, b, e);
+ 	pmd_write_unlock(pmd);
+ 
+ 	return r;
+diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c
+index 13d4677baafd1..303af00e88894 100644
+--- a/drivers/md/dm-thin.c
++++ b/drivers/md/dm-thin.c
+@@ -399,8 +399,7 @@ static int issue_discard(struct discard_op *op, dm_block_t data_b, dm_block_t da
+ 	sector_t s = block_to_sectors(tc->pool, data_b);
+ 	sector_t len = block_to_sectors(tc->pool, data_e - data_b);
+ 
+-	return __blkdev_issue_discard(tc->pool_dev->bdev, s, len, GFP_NOWAIT,
+-				      &op->bio);
++	return __blkdev_issue_discard(tc->pool_dev->bdev, s, len, GFP_NOIO, &op->bio);
+ }
+ 
+ static void end_discard(struct discard_op *op, int r)
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index dfde0088147a1..0a10e94a6db17 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -2795,6 +2795,10 @@ retry:
+ 	}
+ 
+ 	map = rcu_dereference_protected(md->map, lockdep_is_held(&md->suspend_lock));
++	if (!map) {
++		/* avoid deadlock with fs/namespace.c:do_mount() */
++		suspend_flags &= ~DM_SUSPEND_LOCKFS_FLAG;
++	}
+ 
+ 	r = __dm_suspend(md, map, suspend_flags, TASK_INTERRUPTIBLE, DMF_SUSPENDED);
+ 	if (r)
+diff --git a/drivers/media/dvb-core/dvb_frontend.c b/drivers/media/dvb-core/dvb_frontend.c
+index bc6950a5740f6..9293b058ab997 100644
+--- a/drivers/media/dvb-core/dvb_frontend.c
++++ b/drivers/media/dvb-core/dvb_frontend.c
+@@ -817,26 +817,15 @@ static void dvb_frontend_stop(struct dvb_frontend *fe)
+ 
+ 	dev_dbg(fe->dvb->device, "%s:\n", __func__);
+ 
+-	mutex_lock(&fe->remove_mutex);
+-
+ 	if (fe->exit != DVB_FE_DEVICE_REMOVED)
+ 		fe->exit = DVB_FE_NORMAL_EXIT;
+ 	mb();
+ 
+-	if (!fepriv->thread) {
+-		mutex_unlock(&fe->remove_mutex);
++	if (!fepriv->thread)
+ 		return;
+-	}
+ 
+ 	kthread_stop(fepriv->thread);
+ 
+-	mutex_unlock(&fe->remove_mutex);
+-
+-	if (fepriv->dvbdev->users < -1) {
+-		wait_event(fepriv->dvbdev->wait_queue,
+-			   fepriv->dvbdev->users == -1);
+-	}
+-
+ 	sema_init(&fepriv->sem, 1);
+ 	fepriv->state = FESTATE_IDLE;
+ 
+@@ -2780,13 +2769,9 @@ static int dvb_frontend_open(struct inode *inode, struct file *file)
+ 	struct dvb_adapter *adapter = fe->dvb;
+ 	int ret;
+ 
+-	mutex_lock(&fe->remove_mutex);
+-
+ 	dev_dbg(fe->dvb->device, "%s:\n", __func__);
+-	if (fe->exit == DVB_FE_DEVICE_REMOVED) {
+-		ret = -ENODEV;
+-		goto err_remove_mutex;
+-	}
++	if (fe->exit == DVB_FE_DEVICE_REMOVED)
++		return -ENODEV;
+ 
+ 	if (adapter->mfe_shared == 2) {
+ 		mutex_lock(&adapter->mfe_lock);
+@@ -2794,8 +2779,7 @@ static int dvb_frontend_open(struct inode *inode, struct file *file)
+ 			if (adapter->mfe_dvbdev &&
+ 			    !adapter->mfe_dvbdev->writers) {
+ 				mutex_unlock(&adapter->mfe_lock);
+-				ret = -EBUSY;
+-				goto err_remove_mutex;
++				return -EBUSY;
+ 			}
+ 			adapter->mfe_dvbdev = dvbdev;
+ 		}
+@@ -2818,10 +2802,8 @@ static int dvb_frontend_open(struct inode *inode, struct file *file)
+ 			while (mferetry-- && (mfedev->users != -1 ||
+ 					      mfepriv->thread)) {
+ 				if (msleep_interruptible(500)) {
+-					if (signal_pending(current)) {
+-						ret = -EINTR;
+-						goto err_remove_mutex;
+-					}
++					if (signal_pending(current))
++						return -EINTR;
+ 				}
+ 			}
+ 
+@@ -2833,8 +2815,7 @@ static int dvb_frontend_open(struct inode *inode, struct file *file)
+ 				if (mfedev->users != -1 ||
+ 				    mfepriv->thread) {
+ 					mutex_unlock(&adapter->mfe_lock);
+-					ret = -EBUSY;
+-					goto err_remove_mutex;
++					return -EBUSY;
+ 				}
+ 				adapter->mfe_dvbdev = dvbdev;
+ 			}
+@@ -2893,8 +2874,6 @@ static int dvb_frontend_open(struct inode *inode, struct file *file)
+ 
+ 	if (adapter->mfe_shared)
+ 		mutex_unlock(&adapter->mfe_lock);
+-
+-	mutex_unlock(&fe->remove_mutex);
+ 	return ret;
+ 
+ err3:
+@@ -2916,9 +2895,6 @@ err1:
+ err0:
+ 	if (adapter->mfe_shared)
+ 		mutex_unlock(&adapter->mfe_lock);
+-
+-err_remove_mutex:
+-	mutex_unlock(&fe->remove_mutex);
+ 	return ret;
+ }
+ 
+@@ -2929,8 +2905,6 @@ static int dvb_frontend_release(struct inode *inode, struct file *file)
+ 	struct dvb_frontend_private *fepriv = fe->frontend_priv;
+ 	int ret;
+ 
+-	mutex_lock(&fe->remove_mutex);
+-
+ 	dev_dbg(fe->dvb->device, "%s:\n", __func__);
+ 
+ 	if ((file->f_flags & O_ACCMODE) != O_RDONLY) {
+@@ -2952,18 +2926,10 @@ static int dvb_frontend_release(struct inode *inode, struct file *file)
+ 		}
+ 		mutex_unlock(&fe->dvb->mdev_lock);
+ #endif
++		if (fe->exit != DVB_FE_NO_EXIT)
++			wake_up(&dvbdev->wait_queue);
+ 		if (fe->ops.ts_bus_ctrl)
+ 			fe->ops.ts_bus_ctrl(fe, 0);
+-
+-		if (fe->exit != DVB_FE_NO_EXIT) {
+-			mutex_unlock(&fe->remove_mutex);
+-			wake_up(&dvbdev->wait_queue);
+-		} else {
+-			mutex_unlock(&fe->remove_mutex);
+-		}
+-
+-	} else {
+-		mutex_unlock(&fe->remove_mutex);
+ 	}
+ 
+ 	dvb_frontend_put(fe);
+@@ -3064,7 +3030,6 @@ int dvb_register_frontend(struct dvb_adapter *dvb,
+ 	fepriv = fe->frontend_priv;
+ 
+ 	kref_init(&fe->refcount);
+-	mutex_init(&fe->remove_mutex);
+ 
+ 	/*
+ 	 * After initialization, there need to be two references: one
+diff --git a/drivers/net/dsa/ocelot/felix_vsc9959.c b/drivers/net/dsa/ocelot/felix_vsc9959.c
+index dddb28984bdfc..841c5ebc1afaa 100644
+--- a/drivers/net/dsa/ocelot/felix_vsc9959.c
++++ b/drivers/net/dsa/ocelot/felix_vsc9959.c
+@@ -1263,7 +1263,7 @@ static void vsc9959_tas_guard_bands_update(struct ocelot *ocelot, int port)
+ 	/* Consider the standard Ethernet overhead of 8 octets preamble+SFD,
+ 	 * 4 octets FCS, 12 octets IFG.
+ 	 */
+-	needed_bit_time_ps = (maxlen + 24) * picos_per_byte;
++	needed_bit_time_ps = (u64)(maxlen + 24) * picos_per_byte;
+ 
+ 	dev_dbg(ocelot->dev,
+ 		"port %d: max frame size %d needs %llu ps at speed %d\n",
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc_qos.c b/drivers/net/ethernet/freescale/enetc/enetc_qos.c
+index 83c27bbbc6edf..126007ab70f61 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc_qos.c
++++ b/drivers/net/ethernet/freescale/enetc/enetc_qos.c
+@@ -181,8 +181,8 @@ int enetc_setup_tc_cbs(struct net_device *ndev, void *type_data)
+ 	int bw_sum = 0;
+ 	u8 bw;
+ 
+-	prio_top = netdev_get_prio_tc_map(ndev, tc_nums - 1);
+-	prio_next = netdev_get_prio_tc_map(ndev, tc_nums - 2);
++	prio_top = tc_nums - 1;
++	prio_next = tc_nums - 2;
+ 
+ 	/* Support highest prio and second prio tc in cbs mode */
+ 	if (tc != prio_top && tc != prio_next)
+diff --git a/drivers/net/ethernet/intel/iavf/iavf.h b/drivers/net/ethernet/intel/iavf/iavf.h
+index 746ff76f2fb1e..0615bb2b3c33b 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf.h
++++ b/drivers/net/ethernet/intel/iavf/iavf.h
+@@ -526,7 +526,7 @@ void iavf_set_ethtool_ops(struct net_device *netdev);
+ void iavf_update_stats(struct iavf_adapter *adapter);
+ void iavf_reset_interrupt_capability(struct iavf_adapter *adapter);
+ int iavf_init_interrupt_scheme(struct iavf_adapter *adapter);
+-void iavf_irq_enable_queues(struct iavf_adapter *adapter, u32 mask);
++void iavf_irq_enable_queues(struct iavf_adapter *adapter);
+ void iavf_free_all_tx_resources(struct iavf_adapter *adapter);
+ void iavf_free_all_rx_resources(struct iavf_adapter *adapter);
+ 
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
+index 2de4baff4c205..4a66873882d12 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
+@@ -359,21 +359,18 @@ static void iavf_irq_disable(struct iavf_adapter *adapter)
+ }
+ 
+ /**
+- * iavf_irq_enable_queues - Enable interrupt for specified queues
++ * iavf_irq_enable_queues - Enable interrupt for all queues
+  * @adapter: board private structure
+- * @mask: bitmap of queues to enable
+  **/
+-void iavf_irq_enable_queues(struct iavf_adapter *adapter, u32 mask)
++void iavf_irq_enable_queues(struct iavf_adapter *adapter)
+ {
+ 	struct iavf_hw *hw = &adapter->hw;
+ 	int i;
+ 
+ 	for (i = 1; i < adapter->num_msix_vectors; i++) {
+-		if (mask & BIT(i - 1)) {
+-			wr32(hw, IAVF_VFINT_DYN_CTLN1(i - 1),
+-			     IAVF_VFINT_DYN_CTLN1_INTENA_MASK |
+-			     IAVF_VFINT_DYN_CTLN1_ITR_INDX_MASK);
+-		}
++		wr32(hw, IAVF_VFINT_DYN_CTLN1(i - 1),
++		     IAVF_VFINT_DYN_CTLN1_INTENA_MASK |
++		     IAVF_VFINT_DYN_CTLN1_ITR_INDX_MASK);
+ 	}
+ }
+ 
+@@ -387,7 +384,7 @@ void iavf_irq_enable(struct iavf_adapter *adapter, bool flush)
+ 	struct iavf_hw *hw = &adapter->hw;
+ 
+ 	iavf_misc_irq_enable(adapter);
+-	iavf_irq_enable_queues(adapter, ~0);
++	iavf_irq_enable_queues(adapter);
+ 
+ 	if (flush)
+ 		iavf_flush(hw);
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_register.h b/drivers/net/ethernet/intel/iavf/iavf_register.h
+index bf793332fc9d5..a19e88898a0bb 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_register.h
++++ b/drivers/net/ethernet/intel/iavf/iavf_register.h
+@@ -40,7 +40,7 @@
+ #define IAVF_VFINT_DYN_CTL01_INTENA_MASK IAVF_MASK(0x1, IAVF_VFINT_DYN_CTL01_INTENA_SHIFT)
+ #define IAVF_VFINT_DYN_CTL01_ITR_INDX_SHIFT 3
+ #define IAVF_VFINT_DYN_CTL01_ITR_INDX_MASK IAVF_MASK(0x3, IAVF_VFINT_DYN_CTL01_ITR_INDX_SHIFT)
+-#define IAVF_VFINT_DYN_CTLN1(_INTVF) (0x00003800 + ((_INTVF) * 4)) /* _i=0...15 */ /* Reset: VFR */
++#define IAVF_VFINT_DYN_CTLN1(_INTVF) (0x00003800 + ((_INTVF) * 4)) /* _i=0...63 */ /* Reset: VFR */
+ #define IAVF_VFINT_DYN_CTLN1_INTENA_SHIFT 0
+ #define IAVF_VFINT_DYN_CTLN1_INTENA_MASK IAVF_MASK(0x1, IAVF_VFINT_DYN_CTLN1_INTENA_SHIFT)
+ #define IAVF_VFINT_DYN_CTLN1_SWINT_TRIG_SHIFT 2
+diff --git a/drivers/net/ethernet/intel/ice/ice_gnss.c b/drivers/net/ethernet/intel/ice/ice_gnss.c
+index 12086aafb42fb..75c9de675f202 100644
+--- a/drivers/net/ethernet/intel/ice/ice_gnss.c
++++ b/drivers/net/ethernet/intel/ice/ice_gnss.c
+@@ -85,6 +85,7 @@ static void ice_gnss_read(struct kthread_work *work)
+ {
+ 	struct gnss_serial *gnss = container_of(work, struct gnss_serial,
+ 						read_work.work);
++	unsigned long delay = ICE_GNSS_POLL_DATA_DELAY_TIME;
+ 	unsigned int i, bytes_read, data_len, count;
+ 	struct ice_aqc_link_topo_addr link_topo;
+ 	struct ice_pf *pf;
+@@ -95,20 +96,10 @@ static void ice_gnss_read(struct kthread_work *work)
+ 	int err = 0;
+ 
+ 	pf = gnss->back;
+-	if (!pf) {
+-		err = -EFAULT;
+-		goto exit;
+-	}
+-
+-	if (!test_bit(ICE_FLAG_GNSS, pf->flags))
++	if (!pf || !test_bit(ICE_FLAG_GNSS, pf->flags))
+ 		return;
+ 
+ 	hw = &pf->hw;
+-	buf = (char *)get_zeroed_page(GFP_KERNEL);
+-	if (!buf) {
+-		err = -ENOMEM;
+-		goto exit;
+-	}
+ 
+ 	memset(&link_topo, 0, sizeof(struct ice_aqc_link_topo_addr));
+ 	link_topo.topo_params.index = ICE_E810T_GNSS_I2C_BUS;
+@@ -119,25 +110,24 @@ static void ice_gnss_read(struct kthread_work *work)
+ 	i2c_params = ICE_GNSS_UBX_DATA_LEN_WIDTH |
+ 		     ICE_AQC_I2C_USE_REPEATED_START;
+ 
+-	/* Read data length in a loop, when it's not 0 the data is ready */
+-	for (i = 0; i < ICE_MAX_UBX_READ_TRIES; i++) {
+-		err = ice_aq_read_i2c(hw, link_topo, ICE_GNSS_UBX_I2C_BUS_ADDR,
+-				      cpu_to_le16(ICE_GNSS_UBX_DATA_LEN_H),
+-				      i2c_params, (u8 *)&data_len_b, NULL);
+-		if (err)
+-			goto exit_buf;
++	err = ice_aq_read_i2c(hw, link_topo, ICE_GNSS_UBX_I2C_BUS_ADDR,
++			      cpu_to_le16(ICE_GNSS_UBX_DATA_LEN_H),
++			      i2c_params, (u8 *)&data_len_b, NULL);
++	if (err)
++		goto requeue;
+ 
+-		data_len = be16_to_cpu(data_len_b);
+-		if (data_len != 0 && data_len != U16_MAX)
+-			break;
++	data_len = be16_to_cpu(data_len_b);
++	if (data_len == 0 || data_len == U16_MAX)
++		goto requeue;
+ 
+-		mdelay(10);
+-	}
++	/* The u-blox has data_len bytes for us to read */
+ 
+ 	data_len = min_t(typeof(data_len), data_len, PAGE_SIZE);
+-	if (!data_len) {
++
++	buf = (char *)get_zeroed_page(GFP_KERNEL);
++	if (!buf) {
+ 		err = -ENOMEM;
+-		goto exit_buf;
++		goto requeue;
+ 	}
+ 
+ 	/* Read received data */
+@@ -151,7 +141,7 @@ static void ice_gnss_read(struct kthread_work *work)
+ 				      cpu_to_le16(ICE_GNSS_UBX_EMPTY_DATA),
+ 				      bytes_read, &buf[i], NULL);
+ 		if (err)
+-			goto exit_buf;
++			goto free_buf;
+ 	}
+ 
+ 	count = gnss_insert_raw(pf->gnss_dev, buf, i);
+@@ -159,11 +149,11 @@ static void ice_gnss_read(struct kthread_work *work)
+ 		dev_warn(ice_pf_to_dev(pf),
+ 			 "gnss_insert_raw ret=%d size=%d\n",
+ 			 count, i);
+-exit_buf:
++	delay = ICE_GNSS_TIMER_DELAY_TIME;
++free_buf:
+ 	free_page((unsigned long)buf);
+-	kthread_queue_delayed_work(gnss->kworker, &gnss->read_work,
+-				   ICE_GNSS_TIMER_DELAY_TIME);
+-exit:
++requeue:
++	kthread_queue_delayed_work(gnss->kworker, &gnss->read_work, delay);
+ 	if (err)
+ 		dev_dbg(ice_pf_to_dev(pf), "GNSS failed to read err=%d\n", err);
+ }
+diff --git a/drivers/net/ethernet/intel/ice/ice_gnss.h b/drivers/net/ethernet/intel/ice/ice_gnss.h
+index d95ca3928b2ea..d206afe550a56 100644
+--- a/drivers/net/ethernet/intel/ice/ice_gnss.h
++++ b/drivers/net/ethernet/intel/ice/ice_gnss.h
+@@ -5,6 +5,7 @@
+ #define _ICE_GNSS_H_
+ 
+ #define ICE_E810T_GNSS_I2C_BUS		0x2
++#define ICE_GNSS_POLL_DATA_DELAY_TIME	(HZ / 100) /* poll every 10 ms */
+ #define ICE_GNSS_TIMER_DELAY_TIME	(HZ / 10) /* 0.1 second per message */
+ #define ICE_GNSS_TTY_WRITE_BUF		250
+ #define ICE_MAX_I2C_DATA_SIZE		FIELD_MAX(ICE_AQC_I2C_DATA_SIZE_M)
+@@ -20,8 +21,6 @@
+  * passed as I2C addr parameter.
+  */
+ #define ICE_GNSS_UBX_WRITE_BYTES	(ICE_MAX_I2C_WRITE_BYTES + 1)
+-#define ICE_MAX_UBX_READ_TRIES		255
+-#define ICE_MAX_UBX_ACK_READ_TRIES	4095
+ 
+ /**
+  * struct gnss_serial - data used to initialize GNSS TTY port
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index 0d8b8c6f9bd35..98e8ce743fb2e 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -4794,9 +4794,13 @@ err_init_pf:
+ static void ice_deinit_dev(struct ice_pf *pf)
+ {
+ 	ice_free_irq_msix_misc(pf);
+-	ice_clear_interrupt_scheme(pf);
+ 	ice_deinit_pf(pf);
+ 	ice_deinit_hw(&pf->hw);
++
++	/* Service task is already stopped, so call reset directly. */
++	ice_reset(&pf->hw, ICE_RESET_PFR);
++	pci_wait_for_pending_transaction(pf->pdev);
++	ice_clear_interrupt_scheme(pf);
+ }
+ 
+ static void ice_init_features(struct ice_pf *pf)
+@@ -5086,10 +5090,6 @@ int ice_load(struct ice_pf *pf)
+ 	struct ice_vsi *vsi;
+ 	int err;
+ 
+-	err = ice_reset(&pf->hw, ICE_RESET_PFR);
+-	if (err)
+-		return err;
+-
+ 	err = ice_init_dev(pf);
+ 	if (err)
+ 		return err;
+@@ -5346,12 +5346,6 @@ static void ice_remove(struct pci_dev *pdev)
+ 	ice_setup_mc_magic_wake(pf);
+ 	ice_set_wake(pf);
+ 
+-	/* Issue a PFR as part of the prescribed driver unload flow.  Do not
+-	 * do it via ice_schedule_reset() since there is no need to rebuild
+-	 * and the service task is already stopped.
+-	 */
+-	ice_reset(&pf->hw, ICE_RESET_PFR);
+-	pci_wait_for_pending_transaction(pdev);
+ 	pci_disable_device(pdev);
+ }
+ 
+@@ -7048,6 +7042,10 @@ int ice_down(struct ice_vsi *vsi)
+ 	ice_for_each_txq(vsi, i)
+ 		ice_clean_tx_ring(vsi->tx_rings[i]);
+ 
++	if (ice_is_xdp_ena_vsi(vsi))
++		ice_for_each_xdp_txq(vsi, i)
++			ice_clean_tx_ring(vsi->xdp_rings[i]);
++
+ 	ice_for_each_rxq(vsi, i)
+ 		ice_clean_rx_ring(vsi->rx_rings[i]);
+ 
+diff --git a/drivers/net/ethernet/intel/igb/igb_ethtool.c b/drivers/net/ethernet/intel/igb/igb_ethtool.c
+index 7d60da1b7bf41..319ed601eaa1e 100644
+--- a/drivers/net/ethernet/intel/igb/igb_ethtool.c
++++ b/drivers/net/ethernet/intel/igb/igb_ethtool.c
+@@ -822,6 +822,8 @@ static int igb_set_eeprom(struct net_device *netdev,
+ 		 */
+ 		ret_val = hw->nvm.ops.read(hw, last_word, 1,
+ 				   &eeprom_buff[last_word - first_word]);
++		if (ret_val)
++			goto out;
+ 	}
+ 
+ 	/* Device's eeprom is always little-endian, word addressable */
+@@ -841,6 +843,7 @@ static int igb_set_eeprom(struct net_device *netdev,
+ 		hw->nvm.ops.update(hw);
+ 
+ 	igb_set_fw_version(adapter);
++out:
+ 	kfree(eeprom_buff);
+ 	return ret_val;
+ }
+diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
+index 274c781b55473..f2f9f81c123b8 100644
+--- a/drivers/net/ethernet/intel/igb/igb_main.c
++++ b/drivers/net/ethernet/intel/igb/igb_main.c
+@@ -6948,6 +6948,7 @@ static void igb_extts(struct igb_adapter *adapter, int tsintr_tt)
+ 	struct e1000_hw *hw = &adapter->hw;
+ 	struct ptp_clock_event event;
+ 	struct timespec64 ts;
++	unsigned long flags;
+ 
+ 	if (pin < 0 || pin >= IGB_N_SDP)
+ 		return;
+@@ -6955,9 +6956,12 @@ static void igb_extts(struct igb_adapter *adapter, int tsintr_tt)
+ 	if (hw->mac.type == e1000_82580 ||
+ 	    hw->mac.type == e1000_i354 ||
+ 	    hw->mac.type == e1000_i350) {
+-		s64 ns = rd32(auxstmpl);
++		u64 ns = rd32(auxstmpl);
+ 
+-		ns += ((s64)(rd32(auxstmph) & 0xFF)) << 32;
++		ns += ((u64)(rd32(auxstmph) & 0xFF)) << 32;
++		spin_lock_irqsave(&adapter->tmreg_lock, flags);
++		ns = timecounter_cyc2time(&adapter->tc, ns);
++		spin_unlock_irqrestore(&adapter->tmreg_lock, flags);
+ 		ts = ns_to_timespec64(ns);
+ 	} else {
+ 		ts.tv_nsec = rd32(auxstmpl);
+diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
+index a2d823e646095..b35f5ff3536e5 100644
+--- a/drivers/net/ethernet/intel/igc/igc_main.c
++++ b/drivers/net/ethernet/intel/igc/igc_main.c
+@@ -255,6 +255,13 @@ static void igc_clean_tx_ring(struct igc_ring *tx_ring)
+ 	/* reset BQL for queue */
+ 	netdev_tx_reset_queue(txring_txq(tx_ring));
+ 
++	/* Zero out the buffer ring */
++	memset(tx_ring->tx_buffer_info, 0,
++	       sizeof(*tx_ring->tx_buffer_info) * tx_ring->count);
++
++	/* Zero out the descriptor ring */
++	memset(tx_ring->desc, 0, tx_ring->size);
++
+ 	/* reset next_to_use and next_to_clean */
+ 	tx_ring->next_to_use = 0;
+ 	tx_ring->next_to_clean = 0;
+@@ -268,7 +275,7 @@ static void igc_clean_tx_ring(struct igc_ring *tx_ring)
+  */
+ void igc_free_tx_resources(struct igc_ring *tx_ring)
+ {
+-	igc_clean_tx_ring(tx_ring);
++	igc_disable_tx_ring(tx_ring);
+ 
+ 	vfree(tx_ring->tx_buffer_info);
+ 	tx_ring->tx_buffer_info = NULL;
+@@ -6715,6 +6722,9 @@ static void igc_remove(struct pci_dev *pdev)
+ 
+ 	igc_ptp_stop(adapter);
+ 
++	pci_disable_ptm(pdev);
++	pci_clear_master(pdev);
++
+ 	set_bit(__IGC_DOWN, &adapter->state);
+ 
+ 	del_timer_sync(&adapter->watchdog_timer);
+diff --git a/drivers/net/ethernet/marvell/octeon_ep/octep_main.c b/drivers/net/ethernet/marvell/octeon_ep/octep_main.c
+index 5a898fb88e375..c70d2500a3634 100644
+--- a/drivers/net/ethernet/marvell/octeon_ep/octep_main.c
++++ b/drivers/net/ethernet/marvell/octeon_ep/octep_main.c
+@@ -941,6 +941,9 @@ int octep_device_setup(struct octep_device *oct)
+ 		oct->mmio[i].hw_addr =
+ 			ioremap(pci_resource_start(oct->pdev, i * 2),
+ 				pci_resource_len(oct->pdev, i * 2));
++		if (!oct->mmio[i].hw_addr)
++			goto unmap_prev;
++
+ 		oct->mmio[i].mapped = 1;
+ 	}
+ 
+@@ -980,7 +983,9 @@ int octep_device_setup(struct octep_device *oct)
+ 	return 0;
+ 
+ unsupported_dev:
+-	for (i = 0; i < OCTEP_MMIO_REGIONS; i++)
++	i = OCTEP_MMIO_REGIONS;
++unmap_prev:
++	while (i--)
+ 		iounmap(oct->mmio[i].hw_addr);
+ 
+ 	kfree(oct->conf);
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+index 4ad707e758b9f..f01d057ad025a 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+@@ -1878,7 +1878,8 @@ static int nix_check_txschq_alloc_req(struct rvu *rvu, int lvl, u16 pcifunc,
+ 		free_cnt = rvu_rsrc_free_count(&txsch->schq);
+ 	}
+ 
+-	if (free_cnt < req_schq || req_schq > MAX_TXSCHQ_PER_FUNC)
++	if (free_cnt < req_schq || req->schq[lvl] > MAX_TXSCHQ_PER_FUNC ||
++	    req->schq_contig[lvl] > MAX_TXSCHQ_PER_FUNC)
+ 		return NIX_AF_ERR_TLX_ALLOC_FAIL;
+ 
+ 	/* If contiguous queues are needed, check for availability */
+@@ -4080,10 +4081,6 @@ int rvu_mbox_handler_nix_set_rx_cfg(struct rvu *rvu, struct nix_rx_cfg *req,
+ 
+ static u64 rvu_get_lbk_link_credits(struct rvu *rvu, u16 lbk_max_frs)
+ {
+-	/* CN10k supports 72KB FIFO size and max packet size of 64k */
+-	if (rvu->hw->lbk_bufsize == 0x12000)
+-		return (rvu->hw->lbk_bufsize - lbk_max_frs) / 16;
+-
+ 	return 1600; /* 16 * max LBK datarate = 16 * 100Gbps */
+ }
+ 
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.c
+index 51209119f0f2f..9f11c1e407373 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.c
+@@ -1164,10 +1164,8 @@ static u16 __rvu_npc_exact_cmd_rules_cnt_update(struct rvu *rvu, int drop_mcam_i
+ {
+ 	struct npc_exact_table *table;
+ 	u16 *cnt, old_cnt;
+-	bool promisc;
+ 
+ 	table = rvu->hw->table;
+-	promisc = table->promisc_mode[drop_mcam_idx];
+ 
+ 	cnt = &table->cnt_cmd_rules[drop_mcam_idx];
+ 	old_cnt = *cnt;
+@@ -1179,16 +1177,13 @@ static u16 __rvu_npc_exact_cmd_rules_cnt_update(struct rvu *rvu, int drop_mcam_i
+ 
+ 	*enable_or_disable_cam = false;
+ 
+-	if (promisc)
+-		goto done;
+-
+-	/* If all rules are deleted and not already in promisc mode; disable cam */
++	/* If all rules are deleted, disable cam */
+ 	if (!*cnt && val < 0) {
+ 		*enable_or_disable_cam = true;
+ 		goto done;
+ 	}
+ 
+-	/* If rule got added and not already in promisc mode; enable cam */
++	/* If rule got added, enable cam */
+ 	if (!old_cnt && val > 0) {
+ 		*enable_or_disable_cam = true;
+ 		goto done;
+@@ -1443,7 +1438,6 @@ int rvu_npc_exact_promisc_disable(struct rvu *rvu, u16 pcifunc)
+ 	u32 drop_mcam_idx;
+ 	bool *promisc;
+ 	bool rc;
+-	u32 cnt;
+ 
+ 	table = rvu->hw->table;
+ 
+@@ -1466,17 +1460,8 @@ int rvu_npc_exact_promisc_disable(struct rvu *rvu, u16 pcifunc)
+ 		return LMAC_AF_ERR_INVALID_PARAM;
+ 	}
+ 	*promisc = false;
+-	cnt = __rvu_npc_exact_cmd_rules_cnt_update(rvu, drop_mcam_idx, 0, NULL);
+ 	mutex_unlock(&table->lock);
+ 
+-	/* If no dmac filter entries configured, disable drop rule */
+-	if (!cnt)
+-		rvu_npc_enable_mcam_by_entry_index(rvu, drop_mcam_idx, NIX_INTF_RX, false);
+-	else
+-		rvu_npc_enable_mcam_by_entry_index(rvu, drop_mcam_idx, NIX_INTF_RX, !*promisc);
+-
+-	dev_dbg(rvu->dev, "%s: disabled  promisc mode (cgx=%d lmac=%d, cnt=%d)\n",
+-		__func__, cgx_id, lmac_id, cnt);
+ 	return 0;
+ }
+ 
+@@ -1494,7 +1479,6 @@ int rvu_npc_exact_promisc_enable(struct rvu *rvu, u16 pcifunc)
+ 	u32 drop_mcam_idx;
+ 	bool *promisc;
+ 	bool rc;
+-	u32 cnt;
+ 
+ 	table = rvu->hw->table;
+ 
+@@ -1517,17 +1501,8 @@ int rvu_npc_exact_promisc_enable(struct rvu *rvu, u16 pcifunc)
+ 		return LMAC_AF_ERR_INVALID_PARAM;
+ 	}
+ 	*promisc = true;
+-	cnt = __rvu_npc_exact_cmd_rules_cnt_update(rvu, drop_mcam_idx, 0, NULL);
+ 	mutex_unlock(&table->lock);
+ 
+-	/* If no dmac filter entries configured, disable drop rule */
+-	if (!cnt)
+-		rvu_npc_enable_mcam_by_entry_index(rvu, drop_mcam_idx, NIX_INTF_RX, false);
+-	else
+-		rvu_npc_enable_mcam_by_entry_index(rvu, drop_mcam_idx, NIX_INTF_RX, !*promisc);
+-
+-	dev_dbg(rvu->dev, "%s: Enabled promisc mode (cgx=%d lmac=%d cnt=%d)\n",
+-		__func__, cgx_id, lmac_id, cnt);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h b/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
+index a3c5c2dab5fd7..d7ef853702b79 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
+@@ -275,18 +275,6 @@ static inline bool mlx5_sriov_is_enabled(struct mlx5_core_dev *dev)
+ 	return pci_num_vf(dev->pdev) ? true : false;
+ }
+ 
+-static inline int mlx5_lag_is_lacp_owner(struct mlx5_core_dev *dev)
+-{
+-	/* LACP owner conditions:
+-	 * 1) Function is physical.
+-	 * 2) LAG is supported by FW.
+-	 * 3) LAG is managed by driver (currently the only option).
+-	 */
+-	return  MLX5_CAP_GEN(dev, vport_group_manager) &&
+-		   (MLX5_CAP_GEN(dev, num_lag_ports) > 1) &&
+-		    MLX5_CAP_GEN(dev, lag_master);
+-}
+-
+ int mlx5_rescan_drivers_locked(struct mlx5_core_dev *dev);
+ static inline int mlx5_rescan_drivers(struct mlx5_core_dev *dev)
+ {
+diff --git a/drivers/net/ethernet/renesas/rswitch.c b/drivers/net/ethernet/renesas/rswitch.c
+index 7855d9ef81eb1..66bce799471c1 100644
+--- a/drivers/net/ethernet/renesas/rswitch.c
++++ b/drivers/net/ethernet/renesas/rswitch.c
+@@ -347,17 +347,6 @@ out:
+ 	return -ENOMEM;
+ }
+ 
+-static int rswitch_gwca_ts_queue_alloc(struct rswitch_private *priv)
+-{
+-	struct rswitch_gwca_queue *gq = &priv->gwca.ts_queue;
+-
+-	gq->ring_size = TS_RING_SIZE;
+-	gq->ts_ring = dma_alloc_coherent(&priv->pdev->dev,
+-					 sizeof(struct rswitch_ts_desc) *
+-					 (gq->ring_size + 1), &gq->ring_dma, GFP_KERNEL);
+-	return !gq->ts_ring ? -ENOMEM : 0;
+-}
+-
+ static void rswitch_desc_set_dptr(struct rswitch_desc *desc, dma_addr_t addr)
+ {
+ 	desc->dptrl = cpu_to_le32(lower_32_bits(addr));
+@@ -533,6 +522,28 @@ static void rswitch_gwca_linkfix_free(struct rswitch_private *priv)
+ 	gwca->linkfix_table = NULL;
+ }
+ 
++static int rswitch_gwca_ts_queue_alloc(struct rswitch_private *priv)
++{
++	struct rswitch_gwca_queue *gq = &priv->gwca.ts_queue;
++	struct rswitch_ts_desc *desc;
++
++	gq->ring_size = TS_RING_SIZE;
++	gq->ts_ring = dma_alloc_coherent(&priv->pdev->dev,
++					 sizeof(struct rswitch_ts_desc) *
++					 (gq->ring_size + 1), &gq->ring_dma, GFP_KERNEL);
++
++	if (!gq->ts_ring)
++		return -ENOMEM;
++
++	rswitch_gwca_ts_queue_fill(priv, 0, TS_RING_SIZE);
++	desc = &gq->ts_ring[gq->ring_size];
++	desc->desc.die_dt = DT_LINKFIX;
++	rswitch_desc_set_dptr(&desc->desc, gq->ring_dma);
++	INIT_LIST_HEAD(&priv->gwca.ts_info_list);
++
++	return 0;
++}
++
+ static struct rswitch_gwca_queue *rswitch_gwca_get(struct rswitch_private *priv)
+ {
+ 	struct rswitch_gwca_queue *gq;
+@@ -1782,9 +1793,6 @@ static int rswitch_init(struct rswitch_private *priv)
+ 	if (err < 0)
+ 		goto err_ts_queue_alloc;
+ 
+-	rswitch_gwca_ts_queue_fill(priv, 0, TS_RING_SIZE);
+-	INIT_LIST_HEAD(&priv->gwca.ts_info_list);
+-
+ 	for (i = 0; i < RSWITCH_NUM_PORTS; i++) {
+ 		err = rswitch_device_alloc(priv, i);
+ 		if (err < 0) {
+diff --git a/drivers/net/ethernet/sfc/efx_channels.c b/drivers/net/ethernet/sfc/efx_channels.c
+index fcea3ea809d77..41b33a75333cd 100644
+--- a/drivers/net/ethernet/sfc/efx_channels.c
++++ b/drivers/net/ethernet/sfc/efx_channels.c
+@@ -301,6 +301,7 @@ int efx_probe_interrupts(struct efx_nic *efx)
+ 		efx->tx_channel_offset = 0;
+ 		efx->n_xdp_channels = 0;
+ 		efx->xdp_channel_offset = efx->n_channels;
++		efx->xdp_txq_queues_mode = EFX_XDP_TX_QUEUES_BORROWED;
+ 		rc = pci_enable_msi(efx->pci_dev);
+ 		if (rc == 0) {
+ 			efx_get_channel(efx, 0)->irq = efx->pci_dev->irq;
+@@ -322,6 +323,7 @@ int efx_probe_interrupts(struct efx_nic *efx)
+ 		efx->tx_channel_offset = efx_separate_tx_channels ? 1 : 0;
+ 		efx->n_xdp_channels = 0;
+ 		efx->xdp_channel_offset = efx->n_channels;
++		efx->xdp_txq_queues_mode = EFX_XDP_TX_QUEUES_BORROWED;
+ 		efx->legacy_irq = efx->pci_dev->irq;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/sfc/efx_devlink.c b/drivers/net/ethernet/sfc/efx_devlink.c
+index 381b805659d39..ef9971cbb695d 100644
+--- a/drivers/net/ethernet/sfc/efx_devlink.c
++++ b/drivers/net/ethernet/sfc/efx_devlink.c
+@@ -171,9 +171,14 @@ static int efx_devlink_info_nvram_partition(struct efx_nic *efx,
+ 
+ 	rc = efx_mcdi_nvram_metadata(efx, partition_type, NULL, version, NULL,
+ 				     0);
++
++	/* If the partition does not exist, that is not an error. */
++	if (rc == -ENOENT)
++		return 0;
++
+ 	if (rc) {
+-		netif_err(efx, drv, efx->net_dev, "mcdi nvram %s: failed\n",
+-			  version_name);
++		netif_err(efx, drv, efx->net_dev, "mcdi nvram %s: failed (rc=%d)\n",
++			  version_name, rc);
+ 		return rc;
+ 	}
+ 
+@@ -187,36 +192,33 @@ static int efx_devlink_info_nvram_partition(struct efx_nic *efx,
+ static int efx_devlink_info_stored_versions(struct efx_nic *efx,
+ 					    struct devlink_info_req *req)
+ {
+-	int rc;
+-
+-	rc = efx_devlink_info_nvram_partition(efx, req,
+-					      NVRAM_PARTITION_TYPE_BUNDLE,
+-					      DEVLINK_INFO_VERSION_GENERIC_FW_BUNDLE_ID);
+-	if (rc)
+-		return rc;
+-
+-	rc = efx_devlink_info_nvram_partition(efx, req,
+-					      NVRAM_PARTITION_TYPE_MC_FIRMWARE,
+-					      DEVLINK_INFO_VERSION_GENERIC_FW_MGMT);
+-	if (rc)
+-		return rc;
+-
+-	rc = efx_devlink_info_nvram_partition(efx, req,
+-					      NVRAM_PARTITION_TYPE_SUC_FIRMWARE,
+-					      EFX_DEVLINK_INFO_VERSION_FW_MGMT_SUC);
+-	if (rc)
+-		return rc;
+-
+-	rc = efx_devlink_info_nvram_partition(efx, req,
+-					      NVRAM_PARTITION_TYPE_EXPANSION_ROM,
+-					      EFX_DEVLINK_INFO_VERSION_FW_EXPROM);
+-	if (rc)
+-		return rc;
++	int err;
+ 
+-	rc = efx_devlink_info_nvram_partition(efx, req,
+-					      NVRAM_PARTITION_TYPE_EXPANSION_UEFI,
+-					      EFX_DEVLINK_INFO_VERSION_FW_UEFI);
+-	return rc;
++	/* We do not care here about the specific error but just if an error
++	 * happened. The specific error will be reported inside the call
++	 * through system messages, and if any error happened in any call
++	 * below, we report it through extack.
++	 */
++	err = efx_devlink_info_nvram_partition(efx, req,
++					       NVRAM_PARTITION_TYPE_BUNDLE,
++					       DEVLINK_INFO_VERSION_GENERIC_FW_BUNDLE_ID);
++
++	err |= efx_devlink_info_nvram_partition(efx, req,
++						NVRAM_PARTITION_TYPE_MC_FIRMWARE,
++						DEVLINK_INFO_VERSION_GENERIC_FW_MGMT);
++
++	err |= efx_devlink_info_nvram_partition(efx, req,
++						NVRAM_PARTITION_TYPE_SUC_FIRMWARE,
++						EFX_DEVLINK_INFO_VERSION_FW_MGMT_SUC);
++
++	err |= efx_devlink_info_nvram_partition(efx, req,
++						NVRAM_PARTITION_TYPE_EXPANSION_ROM,
++						EFX_DEVLINK_INFO_VERSION_FW_EXPROM);
++
++	err |= efx_devlink_info_nvram_partition(efx, req,
++						NVRAM_PARTITION_TYPE_EXPANSION_UEFI,
++						EFX_DEVLINK_INFO_VERSION_FW_UEFI);
++	return err;
+ }
+ 
+ #define EFX_VER_FLAG(_f)	\
+@@ -587,27 +589,20 @@ static int efx_devlink_info_get(struct devlink *devlink,
+ {
+ 	struct efx_devlink *devlink_private = devlink_priv(devlink);
+ 	struct efx_nic *efx = devlink_private->efx;
+-	int rc;
++	int err;
+ 
+-	/* Several different MCDI commands are used. We report first error
+-	 * through extack returning at that point. Specific error
+-	 * information via system messages.
++	/* Several different MCDI commands are used. We report if errors
++	 * happened through extack. Specific error information via system
++	 * messages inside the calls.
+ 	 */
+-	rc = efx_devlink_info_board_cfg(efx, req);
+-	if (rc) {
+-		NL_SET_ERR_MSG_MOD(extack, "Getting board info failed");
+-		return rc;
+-	}
+-	rc = efx_devlink_info_stored_versions(efx, req);
+-	if (rc) {
+-		NL_SET_ERR_MSG_MOD(extack, "Getting stored versions failed");
+-		return rc;
+-	}
+-	rc = efx_devlink_info_running_versions(efx, req);
+-	if (rc) {
+-		NL_SET_ERR_MSG_MOD(extack, "Getting running versions failed");
+-		return rc;
+-	}
++	err = efx_devlink_info_board_cfg(efx, req);
++
++	err |= efx_devlink_info_stored_versions(efx, req);
++
++	err |= efx_devlink_info_running_versions(efx, req);
++
++	if (err)
++		NL_SET_ERR_MSG_MOD(extack, "Errors when getting device info. Check system messages");
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/ethernet/sfc/siena/efx_channels.c b/drivers/net/ethernet/sfc/siena/efx_channels.c
+index 06ed74994e366..1776f7f8a7a90 100644
+--- a/drivers/net/ethernet/sfc/siena/efx_channels.c
++++ b/drivers/net/ethernet/sfc/siena/efx_channels.c
+@@ -302,6 +302,7 @@ int efx_siena_probe_interrupts(struct efx_nic *efx)
+ 		efx->tx_channel_offset = 0;
+ 		efx->n_xdp_channels = 0;
+ 		efx->xdp_channel_offset = efx->n_channels;
++		efx->xdp_txq_queues_mode = EFX_XDP_TX_QUEUES_BORROWED;
+ 		rc = pci_enable_msi(efx->pci_dev);
+ 		if (rc == 0) {
+ 			efx_get_channel(efx, 0)->irq = efx->pci_dev->irq;
+@@ -323,6 +324,7 @@ int efx_siena_probe_interrupts(struct efx_nic *efx)
+ 		efx->tx_channel_offset = efx_siena_separate_tx_channels ? 1 : 0;
+ 		efx->n_xdp_channels = 0;
+ 		efx->xdp_channel_offset = efx->n_channels;
++		efx->xdp_txq_queues_mode = EFX_XDP_TX_QUEUES_BORROWED;
+ 		efx->legacy_irq = efx->pci_dev->irq;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index 71f8f78ce0090..4e59f0e164ec0 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -3867,7 +3867,6 @@ irq_error:
+ 
+ 	stmmac_hw_teardown(dev);
+ init_error:
+-	free_dma_desc_resources(priv, &priv->dma_conf);
+ 	phylink_disconnect_phy(priv->phylink);
+ init_phy_error:
+ 	pm_runtime_put(priv->device);
+@@ -3885,6 +3884,9 @@ static int stmmac_open(struct net_device *dev)
+ 		return PTR_ERR(dma_conf);
+ 
+ 	ret = __stmmac_open(dev, dma_conf);
++	if (ret)
++		free_dma_desc_resources(priv, dma_conf);
++
+ 	kfree(dma_conf);
+ 	return ret;
+ }
+@@ -5609,12 +5611,15 @@ static int stmmac_change_mtu(struct net_device *dev, int new_mtu)
+ 		stmmac_release(dev);
+ 
+ 		ret = __stmmac_open(dev, dma_conf);
+-		kfree(dma_conf);
+ 		if (ret) {
++			free_dma_desc_resources(priv, dma_conf);
++			kfree(dma_conf);
+ 			netdev_err(priv->dev, "failed reopening the interface after MTU change\n");
+ 			return ret;
+ 		}
+ 
++		kfree(dma_conf);
++
+ 		stmmac_set_rx_mode(dev);
+ 	}
+ 
+diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+index bcea87b7151c0..ad69c5d5761b3 100644
+--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
++++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+@@ -2031,7 +2031,7 @@ static int am65_cpsw_nuss_init_slave_ports(struct am65_cpsw_common *common)
+ 		/* Initialize the Serdes PHY for the port */
+ 		ret = am65_cpsw_init_serdes_phy(dev, port_np, port);
+ 		if (ret)
+-			return ret;
++			goto of_node_put;
+ 
+ 		port->slave.mac_only =
+ 				of_property_read_bool(port_np, "ti,mac-only");
+diff --git a/drivers/net/ipvlan/ipvlan_l3s.c b/drivers/net/ipvlan/ipvlan_l3s.c
+index 71712ea25403d..d5b05e8032199 100644
+--- a/drivers/net/ipvlan/ipvlan_l3s.c
++++ b/drivers/net/ipvlan/ipvlan_l3s.c
+@@ -102,6 +102,10 @@ static unsigned int ipvlan_nf_input(void *priv, struct sk_buff *skb,
+ 
+ 	skb->dev = addr->master->dev;
+ 	skb->skb_iif = skb->dev->ifindex;
++#if IS_ENABLED(CONFIG_IPV6)
++	if (addr->atype == IPVL_IPV6)
++		IP6CB(skb)->iif = skb->dev->ifindex;
++#endif
+ 	len = skb->len + ETH_HLEN;
+ 	ipvlan_count_rx(addr->master, len, true, false);
+ out:
+diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
+index 25616247d7a56..e040c191659b9 100644
+--- a/drivers/net/macsec.c
++++ b/drivers/net/macsec.c
+@@ -3987,17 +3987,15 @@ static int macsec_add_dev(struct net_device *dev, sci_t sci, u8 icv_len)
+ 		return -ENOMEM;
+ 
+ 	secy->tx_sc.stats = netdev_alloc_pcpu_stats(struct pcpu_tx_sc_stats);
+-	if (!secy->tx_sc.stats) {
+-		free_percpu(macsec->stats);
++	if (!secy->tx_sc.stats)
+ 		return -ENOMEM;
+-	}
+ 
+ 	secy->tx_sc.md_dst = metadata_dst_alloc(0, METADATA_MACSEC, GFP_KERNEL);
+-	if (!secy->tx_sc.md_dst) {
+-		free_percpu(secy->tx_sc.stats);
+-		free_percpu(macsec->stats);
++	if (!secy->tx_sc.md_dst)
++		/* macsec and secy percpu stats will be freed when unregistering
++		 * net_device in macsec_free_netdev()
++		 */
+ 		return -ENOMEM;
+-	}
+ 
+ 	if (sci == MACSEC_UNDEF_SCI)
+ 		sci = dev_to_sci(dev, MACSEC_PORT_ES);
+diff --git a/drivers/net/phy/phylink.c b/drivers/net/phy/phylink.c
+index 30c166b334686..0598debb93dea 100644
+--- a/drivers/net/phy/phylink.c
++++ b/drivers/net/phy/phylink.c
+@@ -188,6 +188,7 @@ static int phylink_interface_max_speed(phy_interface_t interface)
+ 	case PHY_INTERFACE_MODE_RGMII_ID:
+ 	case PHY_INTERFACE_MODE_RGMII:
+ 	case PHY_INTERFACE_MODE_QSGMII:
++	case PHY_INTERFACE_MODE_QUSGMII:
+ 	case PHY_INTERFACE_MODE_SGMII:
+ 	case PHY_INTERFACE_MODE_GMII:
+ 		return SPEED_1000;
+@@ -204,7 +205,6 @@ static int phylink_interface_max_speed(phy_interface_t interface)
+ 	case PHY_INTERFACE_MODE_10GBASER:
+ 	case PHY_INTERFACE_MODE_10GKR:
+ 	case PHY_INTERFACE_MODE_USXGMII:
+-	case PHY_INTERFACE_MODE_QUSGMII:
+ 		return SPEED_10000;
+ 
+ 	case PHY_INTERFACE_MODE_25GBASER:
+@@ -3297,6 +3297,41 @@ void phylink_decode_usxgmii_word(struct phylink_link_state *state,
+ }
+ EXPORT_SYMBOL_GPL(phylink_decode_usxgmii_word);
+ 
++/**
++ * phylink_decode_usgmii_word() - decode the USGMII word from a MAC PCS
++ * @state: a pointer to a struct phylink_link_state.
++ * @lpa: a 16 bit value which stores the USGMII auto-negotiation word
++ *
++ * Helper for MAC PCS supporting the USGMII protocol and the auto-negotiation
++ * code word.  Decode the USGMII code word and populate the corresponding fields
++ * (speed, duplex) into the phylink_link_state structure. The structure for this
++ * word is the same as the USXGMII word, except it only supports speeds up to
++ * 1Gbps.
++ */
++static void phylink_decode_usgmii_word(struct phylink_link_state *state,
++				       uint16_t lpa)
++{
++	switch (lpa & MDIO_USXGMII_SPD_MASK) {
++	case MDIO_USXGMII_10:
++		state->speed = SPEED_10;
++		break;
++	case MDIO_USXGMII_100:
++		state->speed = SPEED_100;
++		break;
++	case MDIO_USXGMII_1000:
++		state->speed = SPEED_1000;
++		break;
++	default:
++		state->link = false;
++		return;
++	}
++
++	if (lpa & MDIO_USXGMII_FULL_DUPLEX)
++		state->duplex = DUPLEX_FULL;
++	else
++		state->duplex = DUPLEX_HALF;
++}
++
+ /**
+  * phylink_mii_c22_pcs_decode_state() - Decode MAC PCS state from MII registers
+  * @state: a pointer to a &struct phylink_link_state.
+@@ -3333,9 +3368,11 @@ void phylink_mii_c22_pcs_decode_state(struct phylink_link_state *state,
+ 
+ 	case PHY_INTERFACE_MODE_SGMII:
+ 	case PHY_INTERFACE_MODE_QSGMII:
+-	case PHY_INTERFACE_MODE_QUSGMII:
+ 		phylink_decode_sgmii_word(state, lpa);
+ 		break;
++	case PHY_INTERFACE_MODE_QUSGMII:
++		phylink_decode_usgmii_word(state, lpa);
++		break;
+ 
+ 	default:
+ 		state->link = false;
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index f1865d047971b..2e7c7b0cdc549 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -1220,7 +1220,9 @@ static const struct usb_device_id products[] = {
+ 	{QMI_FIXED_INTF(0x05c6, 0x9080, 8)},
+ 	{QMI_FIXED_INTF(0x05c6, 0x9083, 3)},
+ 	{QMI_FIXED_INTF(0x05c6, 0x9084, 4)},
++	{QMI_QUIRK_SET_DTR(0x05c6, 0x9091, 2)},	/* Compal RXM-G1 */
+ 	{QMI_FIXED_INTF(0x05c6, 0x90b2, 3)},    /* ublox R410M */
++	{QMI_QUIRK_SET_DTR(0x05c6, 0x90db, 2)},	/* Compal RXM-G1 */
+ 	{QMI_FIXED_INTF(0x05c6, 0x920d, 0)},
+ 	{QMI_FIXED_INTF(0x05c6, 0x920d, 5)},
+ 	{QMI_QUIRK_SET_DTR(0x05c6, 0x9625, 4)},	/* YUGA CLM920-NC5 */
+diff --git a/drivers/net/wan/lapbether.c b/drivers/net/wan/lapbether.c
+index d62a904d2e422..56326f38fe8a3 100644
+--- a/drivers/net/wan/lapbether.c
++++ b/drivers/net/wan/lapbether.c
+@@ -384,6 +384,9 @@ static int lapbeth_new_device(struct net_device *dev)
+ 
+ 	ASSERT_RTNL();
+ 
++	if (dev->type != ARPHRD_ETHER)
++		return -EINVAL;
++
+ 	ndev = alloc_netdev(sizeof(*lapbeth), "lapb%d", NET_NAME_UNKNOWN,
+ 			    lapbeth_setup);
+ 	if (!ndev)
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 60f51155a6d20..b682cc6a62f0a 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -3428,6 +3428,8 @@ static const struct pci_device_id nvme_id_table[] = {
+ 		.driver_data = NVME_QUIRK_BOGUS_NID, },
+ 	{ PCI_DEVICE(0x1e4B, 0x1202),   /* MAXIO MAP1202 */
+ 		.driver_data = NVME_QUIRK_BOGUS_NID, },
++	{ PCI_DEVICE(0x1e4B, 0x1602),   /* MAXIO MAP1602 */
++		.driver_data = NVME_QUIRK_BOGUS_NID, },
+ 	{ PCI_DEVICE(0x1cc1, 0x5350),   /* ADATA XPG GAMMIX S50 */
+ 		.driver_data = NVME_QUIRK_BOGUS_NID, },
+ 	{ PCI_DEVICE(0x1dbe, 0x5236),   /* ADATA XPG GAMMIX S70 */
+diff --git a/drivers/of/overlay.c b/drivers/of/overlay.c
+index 2e01960f1aeb3..7feb643f13707 100644
+--- a/drivers/of/overlay.c
++++ b/drivers/of/overlay.c
+@@ -811,6 +811,7 @@ static int init_overlay_changeset(struct overlay_changeset *ovcs)
+ 		if (!fragment->target) {
+ 			pr_err("symbols in overlay, but not in live tree\n");
+ 			ret = -EINVAL;
++			of_node_put(node);
+ 			goto err_out;
+ 		}
+ 
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index f4e2a88729fd1..c525867760bf8 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -6003,8 +6003,9 @@ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x56c1, aspm_l1_acceptable_latency
+ 
+ #ifdef CONFIG_PCIE_DPC
+ /*
+- * Intel Tiger Lake and Alder Lake BIOS has a bug that clears the DPC
+- * RP PIO Log Size of the integrated Thunderbolt PCIe Root Ports.
++ * Intel Ice Lake, Tiger Lake and Alder Lake BIOS has a bug that clears
++ * the DPC RP PIO Log Size of the integrated Thunderbolt PCIe Root
++ * Ports.
+  */
+ static void dpc_log_size(struct pci_dev *dev)
+ {
+@@ -6027,6 +6028,10 @@ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x461f, dpc_log_size);
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x462f, dpc_log_size);
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x463f, dpc_log_size);
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x466e, dpc_log_size);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x8a1d, dpc_log_size);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x8a1f, dpc_log_size);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x8a21, dpc_log_size);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x8a23, dpc_log_size);
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x9a23, dpc_log_size);
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x9a25, dpc_log_size);
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x9a27, dpc_log_size);
+diff --git a/drivers/platform/x86/asus-nb-wmi.c b/drivers/platform/x86/asus-nb-wmi.c
+index e2c9a68d12df9..fdf7da06af306 100644
+--- a/drivers/platform/x86/asus-nb-wmi.c
++++ b/drivers/platform/x86/asus-nb-wmi.c
+@@ -555,6 +555,7 @@ static const struct key_entry asus_nb_wmi_keymap[] = {
+ 	{ KE_KEY, 0x71, { KEY_F13 } }, /* General-purpose button */
+ 	{ KE_IGNORE, 0x79, },  /* Charger type dectection notification */
+ 	{ KE_KEY, 0x7a, { KEY_ALS_TOGGLE } }, /* Ambient Light Sensor Toggle */
++	{ KE_IGNORE, 0x7B, }, /* Charger connect/disconnect notification */
+ 	{ KE_KEY, 0x7c, { KEY_MICMUTE } },
+ 	{ KE_KEY, 0x7D, { KEY_BLUETOOTH } }, /* Bluetooth Enable */
+ 	{ KE_KEY, 0x7E, { KEY_BLUETOOTH } }, /* Bluetooth Disable */
+@@ -584,6 +585,7 @@ static const struct key_entry asus_nb_wmi_keymap[] = {
+ 	{ KE_KEY, 0xAE, { KEY_FN_F5 } }, /* Fn+F5 fan mode on 2020+ */
+ 	{ KE_KEY, 0xB3, { KEY_PROG4 } }, /* AURA */
+ 	{ KE_KEY, 0xB5, { KEY_CALC } },
++	{ KE_IGNORE, 0xC0, }, /* External display connect/disconnect notification */
+ 	{ KE_KEY, 0xC4, { KEY_KBDILLUMUP } },
+ 	{ KE_KEY, 0xC5, { KEY_KBDILLUMDOWN } },
+ 	{ KE_IGNORE, 0xC6, },  /* Ambient Light Sensor notification */
+diff --git a/drivers/power/supply/ab8500_btemp.c b/drivers/power/supply/ab8500_btemp.c
+index 307ee6f71042e..6f83e99d2eb72 100644
+--- a/drivers/power/supply/ab8500_btemp.c
++++ b/drivers/power/supply/ab8500_btemp.c
+@@ -624,10 +624,8 @@ static int ab8500_btemp_get_ext_psy_data(struct device *dev, void *data)
+  */
+ static void ab8500_btemp_external_power_changed(struct power_supply *psy)
+ {
+-	struct ab8500_btemp *di = power_supply_get_drvdata(psy);
+-
+-	class_for_each_device(power_supply_class, NULL,
+-		di->btemp_psy, ab8500_btemp_get_ext_psy_data);
++	class_for_each_device(power_supply_class, NULL, psy,
++			      ab8500_btemp_get_ext_psy_data);
+ }
+ 
+ /* ab8500 btemp driver interrupts and their respective isr */
+diff --git a/drivers/power/supply/ab8500_fg.c b/drivers/power/supply/ab8500_fg.c
+index 41a7bff9ac376..53560fbb6dcd3 100644
+--- a/drivers/power/supply/ab8500_fg.c
++++ b/drivers/power/supply/ab8500_fg.c
+@@ -2407,10 +2407,8 @@ out:
+  */
+ static void ab8500_fg_external_power_changed(struct power_supply *psy)
+ {
+-	struct ab8500_fg *di = power_supply_get_drvdata(psy);
+-
+-	class_for_each_device(power_supply_class, NULL,
+-		di->fg_psy, ab8500_fg_get_ext_psy_data);
++	class_for_each_device(power_supply_class, NULL, psy,
++			      ab8500_fg_get_ext_psy_data);
+ }
+ 
+ /**
+diff --git a/drivers/power/supply/bq27xxx_battery.c b/drivers/power/supply/bq27xxx_battery.c
+index 929e813b9c443..4296600e8912a 100644
+--- a/drivers/power/supply/bq27xxx_battery.c
++++ b/drivers/power/supply/bq27xxx_battery.c
+@@ -1083,10 +1083,8 @@ static int poll_interval_param_set(const char *val, const struct kernel_param *k
+ 		return ret;
+ 
+ 	mutex_lock(&bq27xxx_list_lock);
+-	list_for_each_entry(di, &bq27xxx_battery_devices, list) {
+-		cancel_delayed_work_sync(&di->work);
+-		schedule_delayed_work(&di->work, 0);
+-	}
++	list_for_each_entry(di, &bq27xxx_battery_devices, list)
++		mod_delayed_work(system_wq, &di->work, 0);
+ 	mutex_unlock(&bq27xxx_list_lock);
+ 
+ 	return ret;
+diff --git a/drivers/power/supply/power_supply_core.c b/drivers/power/supply/power_supply_core.c
+index f3d7c1da299fe..d325e6dbc7709 100644
+--- a/drivers/power/supply/power_supply_core.c
++++ b/drivers/power/supply/power_supply_core.c
+@@ -348,6 +348,10 @@ static int __power_supply_is_system_supplied(struct device *dev, void *data)
+ 	struct power_supply *psy = dev_get_drvdata(dev);
+ 	unsigned int *count = data;
+ 
++	if (!psy->desc->get_property(psy, POWER_SUPPLY_PROP_SCOPE, &ret))
++		if (ret.intval == POWER_SUPPLY_SCOPE_DEVICE)
++			return 0;
++
+ 	(*count)++;
+ 	if (psy->desc->type != POWER_SUPPLY_TYPE_BATTERY)
+ 		if (!psy->desc->get_property(psy, POWER_SUPPLY_PROP_ONLINE,
+@@ -366,8 +370,8 @@ int power_supply_is_system_supplied(void)
+ 				      __power_supply_is_system_supplied);
+ 
+ 	/*
+-	 * If no power class device was found at all, most probably we are
+-	 * running on a desktop system, so assume we are on mains power.
++	 * If no system scope power class device was found at all, most probably we
++	 * are running on a desktop system, so assume we are on mains power.
+ 	 */
+ 	if (count == 0)
+ 		return 1;
+diff --git a/drivers/power/supply/power_supply_sysfs.c b/drivers/power/supply/power_supply_sysfs.c
+index c228205e09538..4bbb3053eef44 100644
+--- a/drivers/power/supply/power_supply_sysfs.c
++++ b/drivers/power/supply/power_supply_sysfs.c
+@@ -285,7 +285,8 @@ static ssize_t power_supply_show_property(struct device *dev,
+ 
+ 		if (ret < 0) {
+ 			if (ret == -ENODATA)
+-				dev_dbg(dev, "driver has no data for `%s' property\n",
++				dev_dbg_ratelimited(dev,
++					"driver has no data for `%s' property\n",
+ 					attr->attr.name);
+ 			else if (ret != -ENODEV && ret != -EAGAIN)
+ 				dev_err_ratelimited(dev,
+diff --git a/drivers/power/supply/sc27xx_fuel_gauge.c b/drivers/power/supply/sc27xx_fuel_gauge.c
+index 632977f84b954..bd23c4d9fed43 100644
+--- a/drivers/power/supply/sc27xx_fuel_gauge.c
++++ b/drivers/power/supply/sc27xx_fuel_gauge.c
+@@ -733,13 +733,6 @@ static int sc27xx_fgu_set_property(struct power_supply *psy,
+ 	return ret;
+ }
+ 
+-static void sc27xx_fgu_external_power_changed(struct power_supply *psy)
+-{
+-	struct sc27xx_fgu_data *data = power_supply_get_drvdata(psy);
+-
+-	power_supply_changed(data->battery);
+-}
+-
+ static int sc27xx_fgu_property_is_writeable(struct power_supply *psy,
+ 					    enum power_supply_property psp)
+ {
+@@ -774,7 +767,7 @@ static const struct power_supply_desc sc27xx_fgu_desc = {
+ 	.num_properties		= ARRAY_SIZE(sc27xx_fgu_props),
+ 	.get_property		= sc27xx_fgu_get_property,
+ 	.set_property		= sc27xx_fgu_set_property,
+-	.external_power_changed	= sc27xx_fgu_external_power_changed,
++	.external_power_changed	= power_supply_changed,
+ 	.property_is_writeable	= sc27xx_fgu_property_is_writeable,
+ 	.no_thermal		= true,
+ };
+diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
+index 08726bc0da9d7..323e8187a98ff 100644
+--- a/drivers/regulator/core.c
++++ b/drivers/regulator/core.c
+@@ -5263,7 +5263,7 @@ static void rdev_init_debugfs(struct regulator_dev *rdev)
+ 	}
+ 
+ 	rdev->debugfs = debugfs_create_dir(rname, debugfs_root);
+-	if (!rdev->debugfs) {
++	if (IS_ERR(rdev->debugfs)) {
+ 		rdev_warn(rdev, "Failed to create debugfs directory\n");
+ 		return;
+ 	}
+@@ -6185,7 +6185,7 @@ static int __init regulator_init(void)
+ 	ret = class_register(&regulator_class);
+ 
+ 	debugfs_root = debugfs_create_dir("regulator", NULL);
+-	if (!debugfs_root)
++	if (IS_ERR(debugfs_root))
+ 		pr_warn("regulator: Failed to create debugfs directory\n");
+ 
+ #ifdef CONFIG_DEBUG_FS
+diff --git a/drivers/regulator/qcom-rpmh-regulator.c b/drivers/regulator/qcom-rpmh-regulator.c
+index ae6021390143c..1e2455fc1967b 100644
+--- a/drivers/regulator/qcom-rpmh-regulator.c
++++ b/drivers/regulator/qcom-rpmh-regulator.c
+@@ -694,6 +694,16 @@ static const struct rpmh_vreg_hw_data pmic5_pldo_lv = {
+ 	.of_map_mode = rpmh_regulator_pmic4_ldo_of_map_mode,
+ };
+ 
++static const struct rpmh_vreg_hw_data pmic5_pldo515_mv = {
++	.regulator_type = VRM,
++	.ops = &rpmh_regulator_vrm_drms_ops,
++	.voltage_range = REGULATOR_LINEAR_RANGE(1800000, 0, 187, 8000),
++	.n_voltages = 188,
++	.hpm_min_load_uA = 10000,
++	.pmic_mode_map = pmic_mode_map_pmic5_ldo,
++	.of_map_mode = rpmh_regulator_pmic4_ldo_of_map_mode,
++};
++
+ static const struct rpmh_vreg_hw_data pmic5_nldo = {
+ 	.regulator_type = VRM,
+ 	.ops = &rpmh_regulator_vrm_drms_ops,
+@@ -704,6 +714,16 @@ static const struct rpmh_vreg_hw_data pmic5_nldo = {
+ 	.of_map_mode = rpmh_regulator_pmic4_ldo_of_map_mode,
+ };
+ 
++static const struct rpmh_vreg_hw_data pmic5_nldo515 = {
++	.regulator_type = VRM,
++	.ops = &rpmh_regulator_vrm_drms_ops,
++	.voltage_range = REGULATOR_LINEAR_RANGE(320000, 0, 210, 8000),
++	.n_voltages = 211,
++	.hpm_min_load_uA = 30000,
++	.pmic_mode_map = pmic_mode_map_pmic5_ldo,
++	.of_map_mode = rpmh_regulator_pmic4_ldo_of_map_mode,
++};
++
+ static const struct rpmh_vreg_hw_data pmic5_hfsmps510 = {
+ 	.regulator_type = VRM,
+ 	.ops = &rpmh_regulator_vrm_ops,
+@@ -749,6 +769,15 @@ static const struct rpmh_vreg_hw_data pmic5_ftsmps525_mv = {
+ 	.of_map_mode = rpmh_regulator_pmic4_smps_of_map_mode,
+ };
+ 
++static const struct rpmh_vreg_hw_data pmic5_ftsmps527 = {
++	.regulator_type = VRM,
++	.ops = &rpmh_regulator_vrm_ops,
++	.voltage_range = REGULATOR_LINEAR_RANGE(320000, 0, 215, 8000),
++	.n_voltages = 215,
++	.pmic_mode_map = pmic_mode_map_pmic5_smps,
++	.of_map_mode = rpmh_regulator_pmic4_smps_of_map_mode,
++};
++
+ static const struct rpmh_vreg_hw_data pmic5_hfsmps515 = {
+ 	.regulator_type = VRM,
+ 	.ops = &rpmh_regulator_vrm_ops,
+@@ -937,6 +966,28 @@ static const struct rpmh_vreg_init_data pmm8155au_vreg_data[] = {
+ 	{}
+ };
+ 
++static const struct rpmh_vreg_init_data pmm8654au_vreg_data[] = {
++	RPMH_VREG("smps1",  "smp%s1",  &pmic5_ftsmps527,  "vdd-s1"),
++	RPMH_VREG("smps2",  "smp%s2",  &pmic5_ftsmps527,  "vdd-s2"),
++	RPMH_VREG("smps3",  "smp%s3",  &pmic5_ftsmps527,  "vdd-s3"),
++	RPMH_VREG("smps4",  "smp%s4",  &pmic5_ftsmps527,  "vdd-s4"),
++	RPMH_VREG("smps5",  "smp%s5",  &pmic5_ftsmps527,  "vdd-s5"),
++	RPMH_VREG("smps6",  "smp%s6",  &pmic5_ftsmps527,  "vdd-s6"),
++	RPMH_VREG("smps7",  "smp%s7",  &pmic5_ftsmps527,  "vdd-s7"),
++	RPMH_VREG("smps8",  "smp%s8",  &pmic5_ftsmps527,  "vdd-s8"),
++	RPMH_VREG("smps9",  "smp%s9",  &pmic5_ftsmps527,  "vdd-s9"),
++	RPMH_VREG("ldo1",   "ldo%s1",  &pmic5_nldo515,    "vdd-s9"),
++	RPMH_VREG("ldo2",   "ldo%s2",  &pmic5_nldo515,    "vdd-l2-l3"),
++	RPMH_VREG("ldo3",   "ldo%s3",  &pmic5_nldo515,    "vdd-l2-l3"),
++	RPMH_VREG("ldo4",   "ldo%s4",  &pmic5_nldo515,    "vdd-s9"),
++	RPMH_VREG("ldo5",   "ldo%s5",  &pmic5_nldo515,    "vdd-s9"),
++	RPMH_VREG("ldo6",   "ldo%s6",  &pmic5_nldo515,    "vdd-l6-l7"),
++	RPMH_VREG("ldo7",   "ldo%s7",  &pmic5_nldo515,    "vdd-l6-l7"),
++	RPMH_VREG("ldo8",   "ldo%s8",  &pmic5_pldo515_mv, "vdd-l8-l9"),
++	RPMH_VREG("ldo9",   "ldo%s9",  &pmic5_pldo,       "vdd-l8-l9"),
++	{}
++};
++
+ static const struct rpmh_vreg_init_data pm8350_vreg_data[] = {
+ 	RPMH_VREG("smps1",  "smp%s1",  &pmic5_ftsmps510, "vdd-s1"),
+ 	RPMH_VREG("smps2",  "smp%s2",  &pmic5_ftsmps510, "vdd-s2"),
+@@ -1006,21 +1057,21 @@ static const struct rpmh_vreg_init_data pm8450_vreg_data[] = {
+ };
+ 
+ static const struct rpmh_vreg_init_data pm8550_vreg_data[] = {
+-	RPMH_VREG("ldo1",   "ldo%s1",  &pmic5_pldo,    "vdd-l1-l4-l10"),
++	RPMH_VREG("ldo1",   "ldo%s1",  &pmic5_nldo515,    "vdd-l1-l4-l10"),
+ 	RPMH_VREG("ldo2",   "ldo%s2",  &pmic5_pldo,    "vdd-l2-l13-l14"),
+-	RPMH_VREG("ldo3",   "ldo%s3",  &pmic5_nldo,    "vdd-l3"),
+-	RPMH_VREG("ldo4",   "ldo%s4",  &pmic5_nldo,    "vdd-l1-l4-l10"),
++	RPMH_VREG("ldo3",   "ldo%s3",  &pmic5_nldo515,    "vdd-l3"),
++	RPMH_VREG("ldo4",   "ldo%s4",  &pmic5_nldo515,    "vdd-l1-l4-l10"),
+ 	RPMH_VREG("ldo5",   "ldo%s5",  &pmic5_pldo,    "vdd-l5-l16"),
+-	RPMH_VREG("ldo6",   "ldo%s6",  &pmic5_pldo_lv, "vdd-l6-l7"),
+-	RPMH_VREG("ldo7",   "ldo%s7",  &pmic5_pldo_lv, "vdd-l6-l7"),
+-	RPMH_VREG("ldo8",   "ldo%s8",  &pmic5_pldo_lv, "vdd-l8-l9"),
++	RPMH_VREG("ldo6",   "ldo%s6",  &pmic5_pldo, "vdd-l6-l7"),
++	RPMH_VREG("ldo7",   "ldo%s7",  &pmic5_pldo, "vdd-l6-l7"),
++	RPMH_VREG("ldo8",   "ldo%s8",  &pmic5_pldo, "vdd-l8-l9"),
+ 	RPMH_VREG("ldo9",   "ldo%s9",  &pmic5_pldo,    "vdd-l8-l9"),
+-	RPMH_VREG("ldo10",  "ldo%s10", &pmic5_nldo,    "vdd-l1-l4-l10"),
+-	RPMH_VREG("ldo11",  "ldo%s11", &pmic5_nldo,    "vdd-l11"),
++	RPMH_VREG("ldo10",  "ldo%s10", &pmic5_nldo515,    "vdd-l1-l4-l10"),
++	RPMH_VREG("ldo11",  "ldo%s11", &pmic5_nldo515,    "vdd-l11"),
+ 	RPMH_VREG("ldo12",  "ldo%s12", &pmic5_pldo,    "vdd-l12"),
+ 	RPMH_VREG("ldo13",  "ldo%s13", &pmic5_pldo,    "vdd-l2-l13-l14"),
+ 	RPMH_VREG("ldo14",  "ldo%s14", &pmic5_pldo,    "vdd-l2-l13-l14"),
+-	RPMH_VREG("ldo15",  "ldo%s15", &pmic5_pldo,    "vdd-l15"),
++	RPMH_VREG("ldo15",  "ldo%s15", &pmic5_nldo515,    "vdd-l15"),
+ 	RPMH_VREG("ldo16",  "ldo%s16", &pmic5_pldo,    "vdd-l5-l16"),
+ 	RPMH_VREG("ldo17",  "ldo%s17", &pmic5_pldo,    "vdd-l17"),
+ 	RPMH_VREG("bob1",   "bob%s1",  &pmic5_bob,     "vdd-bob1"),
+@@ -1035,9 +1086,9 @@ static const struct rpmh_vreg_init_data pm8550vs_vreg_data[] = {
+ 	RPMH_VREG("smps4",  "smp%s4",  &pmic5_ftsmps525_lv, "vdd-s4"),
+ 	RPMH_VREG("smps5",  "smp%s5",  &pmic5_ftsmps525_lv, "vdd-s5"),
+ 	RPMH_VREG("smps6",  "smp%s6",  &pmic5_ftsmps525_mv, "vdd-s6"),
+-	RPMH_VREG("ldo1",   "ldo%s1",  &pmic5_nldo,   "vdd-l1"),
+-	RPMH_VREG("ldo2",   "ldo%s2",  &pmic5_nldo,   "vdd-l2"),
+-	RPMH_VREG("ldo3",   "ldo%s3",  &pmic5_nldo,   "vdd-l3"),
++	RPMH_VREG("ldo1",   "ldo%s1",  &pmic5_nldo515,   "vdd-l1"),
++	RPMH_VREG("ldo2",   "ldo%s2",  &pmic5_nldo515,   "vdd-l2"),
++	RPMH_VREG("ldo3",   "ldo%s3",  &pmic5_nldo515,   "vdd-l3"),
+ 	{}
+ };
+ 
+@@ -1050,9 +1101,9 @@ static const struct rpmh_vreg_init_data pm8550ve_vreg_data[] = {
+ 	RPMH_VREG("smps6", "smp%s6", &pmic5_ftsmps525_lv, "vdd-s6"),
+ 	RPMH_VREG("smps7", "smp%s7", &pmic5_ftsmps525_lv, "vdd-s7"),
+ 	RPMH_VREG("smps8", "smp%s8", &pmic5_ftsmps525_lv, "vdd-s8"),
+-	RPMH_VREG("ldo1",  "ldo%s1", &pmic5_nldo,   "vdd-l1"),
+-	RPMH_VREG("ldo2",  "ldo%s2", &pmic5_nldo,   "vdd-l2"),
+-	RPMH_VREG("ldo3",  "ldo%s3", &pmic5_nldo,   "vdd-l3"),
++	RPMH_VREG("ldo1",  "ldo%s1", &pmic5_nldo515,   "vdd-l1"),
++	RPMH_VREG("ldo2",  "ldo%s2", &pmic5_nldo515,   "vdd-l2"),
++	RPMH_VREG("ldo3",  "ldo%s3", &pmic5_nldo515,   "vdd-l3"),
+ 	{}
+ };
+ 
+@@ -1431,6 +1482,10 @@ static const struct of_device_id __maybe_unused rpmh_regulator_match_table[] = {
+ 		.compatible = "qcom,pmm8155au-rpmh-regulators",
+ 		.data = pmm8155au_vreg_data,
+ 	},
++	{
++		.compatible = "qcom,pmm8654au-rpmh-regulators",
++		.data = pmm8654au_vreg_data,
++	},
+ 	{
+ 		.compatible = "qcom,pmx55-rpmh-regulators",
+ 		.data = pmx55_vreg_data,
+diff --git a/drivers/s390/net/ism_drv.c b/drivers/s390/net/ism_drv.c
+index eb7e134860878..db0903cf2baef 100644
+--- a/drivers/s390/net/ism_drv.c
++++ b/drivers/s390/net/ism_drv.c
+@@ -774,14 +774,6 @@ static int __init ism_init(void)
+ 
+ static void __exit ism_exit(void)
+ {
+-	struct ism_dev *ism;
+-
+-	mutex_lock(&ism_dev_list.mutex);
+-	list_for_each_entry(ism, &ism_dev_list.list, list) {
+-		ism_dev_exit(ism);
+-	}
+-	mutex_unlock(&ism_dev_list.mutex);
+-
+ 	pci_unregister_driver(&ism_driver);
+ 	debug_unregister(ism_debug_info);
+ }
+diff --git a/drivers/soc/qcom/llcc-qcom.c b/drivers/soc/qcom/llcc-qcom.c
+index d4d3eced52f35..8298decc518e3 100644
+--- a/drivers/soc/qcom/llcc-qcom.c
++++ b/drivers/soc/qcom/llcc-qcom.c
+@@ -62,8 +62,6 @@
+ #define LLCC_TRP_WRSC_CACHEABLE_EN    0x21f2c
+ #define LLCC_TRP_ALGO_CFG8	      0x21f30
+ 
+-#define BANK_OFFSET_STRIDE	      0x80000
+-
+ #define LLCC_VERSION_2_0_0_0          0x02000000
+ #define LLCC_VERSION_2_1_0_0          0x02010000
+ #define LLCC_VERSION_4_1_0_0          0x04010000
+@@ -900,8 +898,8 @@ static int qcom_llcc_remove(struct platform_device *pdev)
+ 	return 0;
+ }
+ 
+-static struct regmap *qcom_llcc_init_mmio(struct platform_device *pdev,
+-		const char *name)
++static struct regmap *qcom_llcc_init_mmio(struct platform_device *pdev, u8 index,
++					  const char *name)
+ {
+ 	void __iomem *base;
+ 	struct regmap_config llcc_regmap_config = {
+@@ -911,7 +909,7 @@ static struct regmap *qcom_llcc_init_mmio(struct platform_device *pdev,
+ 		.fast_io = true,
+ 	};
+ 
+-	base = devm_platform_ioremap_resource_byname(pdev, name);
++	base = devm_platform_ioremap_resource(pdev, index);
+ 	if (IS_ERR(base))
+ 		return ERR_CAST(base);
+ 
+@@ -929,6 +927,7 @@ static int qcom_llcc_probe(struct platform_device *pdev)
+ 	const struct llcc_slice_config *llcc_cfg;
+ 	u32 sz;
+ 	u32 version;
++	struct regmap *regmap;
+ 
+ 	drv_data = devm_kzalloc(dev, sizeof(*drv_data), GFP_KERNEL);
+ 	if (!drv_data) {
+@@ -936,21 +935,51 @@ static int qcom_llcc_probe(struct platform_device *pdev)
+ 		goto err;
+ 	}
+ 
+-	drv_data->regmap = qcom_llcc_init_mmio(pdev, "llcc_base");
+-	if (IS_ERR(drv_data->regmap)) {
+-		ret = PTR_ERR(drv_data->regmap);
++	/* Initialize the first LLCC bank regmap */
++	regmap = qcom_llcc_init_mmio(pdev, 0, "llcc0_base");
++	if (IS_ERR(regmap)) {
++		ret = PTR_ERR(regmap);
+ 		goto err;
+ 	}
+ 
+-	drv_data->bcast_regmap =
+-		qcom_llcc_init_mmio(pdev, "llcc_broadcast_base");
++	cfg = of_device_get_match_data(&pdev->dev);
++
++	ret = regmap_read(regmap, cfg->reg_offset[LLCC_COMMON_STATUS0], &num_banks);
++	if (ret)
++		goto err;
++
++	num_banks &= LLCC_LB_CNT_MASK;
++	num_banks >>= LLCC_LB_CNT_SHIFT;
++	drv_data->num_banks = num_banks;
++
++	drv_data->regmaps = devm_kcalloc(dev, num_banks, sizeof(*drv_data->regmaps), GFP_KERNEL);
++	if (!drv_data->regmaps) {
++		ret = -ENOMEM;
++		goto err;
++	}
++
++	drv_data->regmaps[0] = regmap;
++
++	/* Initialize rest of LLCC bank regmaps */
++	for (i = 1; i < num_banks; i++) {
++		char *base = kasprintf(GFP_KERNEL, "llcc%d_base", i);
++
++		drv_data->regmaps[i] = qcom_llcc_init_mmio(pdev, i, base);
++		if (IS_ERR(drv_data->regmaps[i])) {
++			ret = PTR_ERR(drv_data->regmaps[i]);
++			kfree(base);
++			goto err;
++		}
++
++		kfree(base);
++	}
++
++	drv_data->bcast_regmap = qcom_llcc_init_mmio(pdev, i, "llcc_broadcast_base");
+ 	if (IS_ERR(drv_data->bcast_regmap)) {
+ 		ret = PTR_ERR(drv_data->bcast_regmap);
+ 		goto err;
+ 	}
+ 
+-	cfg = of_device_get_match_data(&pdev->dev);
+-
+ 	/* Extract version of the IP */
+ 	ret = regmap_read(drv_data->bcast_regmap, cfg->reg_offset[LLCC_COMMON_HW_INFO],
+ 			  &version);
+@@ -959,15 +988,6 @@ static int qcom_llcc_probe(struct platform_device *pdev)
+ 
+ 	drv_data->version = version;
+ 
+-	ret = regmap_read(drv_data->regmap, cfg->reg_offset[LLCC_COMMON_STATUS0],
+-			  &num_banks);
+-	if (ret)
+-		goto err;
+-
+-	num_banks &= LLCC_LB_CNT_MASK;
+-	num_banks >>= LLCC_LB_CNT_SHIFT;
+-	drv_data->num_banks = num_banks;
+-
+ 	llcc_cfg = cfg->sct_data;
+ 	sz = cfg->size;
+ 
+@@ -975,16 +995,6 @@ static int qcom_llcc_probe(struct platform_device *pdev)
+ 		if (llcc_cfg[i].slice_id > drv_data->max_slices)
+ 			drv_data->max_slices = llcc_cfg[i].slice_id;
+ 
+-	drv_data->offsets = devm_kcalloc(dev, num_banks, sizeof(u32),
+-							GFP_KERNEL);
+-	if (!drv_data->offsets) {
+-		ret = -ENOMEM;
+-		goto err;
+-	}
+-
+-	for (i = 0; i < num_banks; i++)
+-		drv_data->offsets[i] = i * BANK_OFFSET_STRIDE;
+-
+ 	drv_data->bitmap = devm_bitmap_zalloc(dev, drv_data->max_slices,
+ 					      GFP_KERNEL);
+ 	if (!drv_data->bitmap) {
+diff --git a/drivers/spi/spi-cadence-quadspi.c b/drivers/spi/spi-cadence-quadspi.c
+index 8f36e1306e169..3fb85607fa4fa 100644
+--- a/drivers/spi/spi-cadence-quadspi.c
++++ b/drivers/spi/spi-cadence-quadspi.c
+@@ -1736,8 +1736,11 @@ static int cqspi_probe(struct platform_device *pdev)
+ 			cqspi->slow_sram = true;
+ 
+ 		if (of_device_is_compatible(pdev->dev.of_node,
+-					    "xlnx,versal-ospi-1.0"))
+-			dma_set_mask(&pdev->dev, DMA_BIT_MASK(64));
++					    "xlnx,versal-ospi-1.0")) {
++			ret = dma_set_mask(&pdev->dev, DMA_BIT_MASK(64));
++			if (ret)
++				goto probe_reset_failed;
++		}
+ 	}
+ 
+ 	ret = devm_request_irq(dev, irq, cqspi_irq_handler, 0,
+diff --git a/drivers/spi/spi-fsl-dspi.c b/drivers/spi/spi-fsl-dspi.c
+index e419642eb10e5..0da5c6ec46fb1 100644
+--- a/drivers/spi/spi-fsl-dspi.c
++++ b/drivers/spi/spi-fsl-dspi.c
+@@ -1002,7 +1002,9 @@ static int dspi_transfer_one_message(struct spi_controller *ctlr,
+ static int dspi_setup(struct spi_device *spi)
+ {
+ 	struct fsl_dspi *dspi = spi_controller_get_devdata(spi->controller);
++	u32 period_ns = DIV_ROUND_UP(NSEC_PER_SEC, spi->max_speed_hz);
+ 	unsigned char br = 0, pbr = 0, pcssck = 0, cssck = 0;
++	u32 quarter_period_ns = DIV_ROUND_UP(period_ns, 4);
+ 	u32 cs_sck_delay = 0, sck_cs_delay = 0;
+ 	struct fsl_dspi_platform_data *pdata;
+ 	unsigned char pasc = 0, asc = 0;
+@@ -1031,6 +1033,19 @@ static int dspi_setup(struct spi_device *spi)
+ 		sck_cs_delay = pdata->sck_cs_delay;
+ 	}
+ 
++	/* Since tCSC and tASC apply to continuous transfers too, avoid SCK
++	 * glitches of half a cycle by never allowing tCSC + tASC to go below
++	 * half a SCK period.
++	 */
++	if (cs_sck_delay < quarter_period_ns)
++		cs_sck_delay = quarter_period_ns;
++	if (sck_cs_delay < quarter_period_ns)
++		sck_cs_delay = quarter_period_ns;
++
++	dev_dbg(&spi->dev,
++		"DSPI controller timing params: CS-to-SCK delay %u ns, SCK-to-CS delay %u ns\n",
++		cs_sck_delay, sck_cs_delay);
++
+ 	clkrate = clk_get_rate(dspi->clk);
+ 	hz_to_spi_baud(&pbr, &br, spi->max_speed_hz, clkrate);
+ 
+diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c
+index 86adff2a86edd..687adc9e086ca 100644
+--- a/drivers/target/target_core_transport.c
++++ b/drivers/target/target_core_transport.c
+@@ -504,6 +504,8 @@ target_setup_session(struct se_portal_group *tpg,
+ 
+ free_sess:
+ 	transport_free_session(sess);
++	return ERR_PTR(rc);
++
+ free_cnt:
+ 	target_free_cmd_counter(cmd_cnt);
+ 	return ERR_PTR(rc);
+diff --git a/drivers/thunderbolt/dma_test.c b/drivers/thunderbolt/dma_test.c
+index 3bedecb236e0d..14bb6dec6c4b0 100644
+--- a/drivers/thunderbolt/dma_test.c
++++ b/drivers/thunderbolt/dma_test.c
+@@ -192,9 +192,9 @@ static int dma_test_start_rings(struct dma_test *dt)
+ 	}
+ 
+ 	ret = tb_xdomain_enable_paths(dt->xd, dt->tx_hopid,
+-				      dt->tx_ring ? dt->tx_ring->hop : 0,
++				      dt->tx_ring ? dt->tx_ring->hop : -1,
+ 				      dt->rx_hopid,
+-				      dt->rx_ring ? dt->rx_ring->hop : 0);
++				      dt->rx_ring ? dt->rx_ring->hop : -1);
+ 	if (ret) {
+ 		dma_test_free_rings(dt);
+ 		return ret;
+@@ -218,9 +218,9 @@ static void dma_test_stop_rings(struct dma_test *dt)
+ 		tb_ring_stop(dt->tx_ring);
+ 
+ 	ret = tb_xdomain_disable_paths(dt->xd, dt->tx_hopid,
+-				       dt->tx_ring ? dt->tx_ring->hop : 0,
++				       dt->tx_ring ? dt->tx_ring->hop : -1,
+ 				       dt->rx_hopid,
+-				       dt->rx_ring ? dt->rx_ring->hop : 0);
++				       dt->rx_ring ? dt->rx_ring->hop : -1);
+ 	if (ret)
+ 		dev_warn(&dt->svc->dev, "failed to disable DMA paths\n");
+ 
+diff --git a/drivers/thunderbolt/nhi.c b/drivers/thunderbolt/nhi.c
+index 0a525f44ea316..4a6a3802d7e51 100644
+--- a/drivers/thunderbolt/nhi.c
++++ b/drivers/thunderbolt/nhi.c
+@@ -56,9 +56,14 @@ static int ring_interrupt_index(const struct tb_ring *ring)
+ 
+ static void nhi_mask_interrupt(struct tb_nhi *nhi, int mask, int ring)
+ {
+-	if (nhi->quirks & QUIRK_AUTO_CLEAR_INT)
+-		return;
+-	iowrite32(mask, nhi->iobase + REG_RING_INTERRUPT_MASK_CLEAR_BASE + ring);
++	if (nhi->quirks & QUIRK_AUTO_CLEAR_INT) {
++		u32 val;
++
++		val = ioread32(nhi->iobase + REG_RING_INTERRUPT_BASE + ring);
++		iowrite32(val & ~mask, nhi->iobase + REG_RING_INTERRUPT_BASE + ring);
++	} else {
++		iowrite32(mask, nhi->iobase + REG_RING_INTERRUPT_MASK_CLEAR_BASE + ring);
++	}
+ }
+ 
+ static void nhi_clear_interrupt(struct tb_nhi *nhi, int ring)
+diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
+index 7bfbc9ca9ba4f..c1af712ca7288 100644
+--- a/drivers/thunderbolt/tb.c
++++ b/drivers/thunderbolt/tb.c
+@@ -737,6 +737,7 @@ static void tb_scan_port(struct tb_port *port)
+ {
+ 	struct tb_cm *tcm = tb_priv(port->sw->tb);
+ 	struct tb_port *upstream_port;
++	bool discovery = false;
+ 	struct tb_switch *sw;
+ 	int ret;
+ 
+@@ -804,8 +805,10 @@ static void tb_scan_port(struct tb_port *port)
+ 	 * tunnels and know which switches were authorized already by
+ 	 * the boot firmware.
+ 	 */
+-	if (!tcm->hotplug_active)
++	if (!tcm->hotplug_active) {
+ 		dev_set_uevent_suppress(&sw->dev, true);
++		discovery = true;
++	}
+ 
+ 	/*
+ 	 * At the moment Thunderbolt 2 and beyond (devices with LC) we
+@@ -835,10 +838,14 @@ static void tb_scan_port(struct tb_port *port)
+ 	 * CL0s and CL1 are enabled and supported together.
+ 	 * Silently ignore CLx enabling in case CLx is not supported.
+ 	 */
+-	ret = tb_switch_enable_clx(sw, TB_CL1);
+-	if (ret && ret != -EOPNOTSUPP)
+-		tb_sw_warn(sw, "failed to enable %s on upstream port\n",
+-			   tb_switch_clx_name(TB_CL1));
++	if (discovery) {
++		tb_sw_dbg(sw, "discovery, not touching CL states\n");
++	} else {
++		ret = tb_switch_enable_clx(sw, TB_CL1);
++		if (ret && ret != -EOPNOTSUPP)
++			tb_sw_warn(sw, "failed to enable %s on upstream port\n",
++				   tb_switch_clx_name(TB_CL1));
++	}
+ 
+ 	if (tb_switch_is_clx_enabled(sw, TB_CL1))
+ 		/*
+diff --git a/drivers/thunderbolt/tunnel.c b/drivers/thunderbolt/tunnel.c
+index 9099ae73e78f3..4f222673d651d 100644
+--- a/drivers/thunderbolt/tunnel.c
++++ b/drivers/thunderbolt/tunnel.c
+@@ -526,7 +526,7 @@ static int tb_dp_xchg_caps(struct tb_tunnel *tunnel)
+ 	 * Perform connection manager handshake between IN and OUT ports
+ 	 * before capabilities exchange can take place.
+ 	 */
+-	ret = tb_dp_cm_handshake(in, out, 1500);
++	ret = tb_dp_cm_handshake(in, out, 3000);
+ 	if (ret)
+ 		return ret;
+ 
+diff --git a/drivers/tty/serial/fsl_lpuart.c b/drivers/tty/serial/fsl_lpuart.c
+index 2087a5e6f4357..43fe4bb2f397f 100644
+--- a/drivers/tty/serial/fsl_lpuart.c
++++ b/drivers/tty/serial/fsl_lpuart.c
+@@ -310,7 +310,7 @@ static const struct lpuart_soc_data ls1021a_data = {
+ static const struct lpuart_soc_data ls1028a_data = {
+ 	.devtype = LS1028A_LPUART,
+ 	.iotype = UPIO_MEM32,
+-	.rx_watermark = 1,
++	.rx_watermark = 0,
+ };
+ 
+ static struct lpuart_soc_data imx7ulp_data = {
+diff --git a/drivers/tty/serial/lantiq.c b/drivers/tty/serial/lantiq.c
+index a58e9277dfad8..f1387f1024dbb 100644
+--- a/drivers/tty/serial/lantiq.c
++++ b/drivers/tty/serial/lantiq.c
+@@ -250,6 +250,7 @@ lqasc_err_int(int irq, void *_port)
+ 	struct ltq_uart_port *ltq_port = to_ltq_uart_port(port);
+ 
+ 	spin_lock_irqsave(&ltq_port->lock, flags);
++	__raw_writel(ASC_IRNCR_EIR, port->membase + LTQ_ASC_IRNCR);
+ 	/* clear any pending interrupts */
+ 	asc_update_bits(0, ASCWHBSTATE_CLRPE | ASCWHBSTATE_CLRFE |
+ 		ASCWHBSTATE_CLRROE, port->membase + LTQ_ASC_WHBSTATE);
+diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
+index 9f8c988c25cb1..e999e6079ae03 100644
+--- a/drivers/usb/dwc3/core.c
++++ b/drivers/usb/dwc3/core.c
+@@ -1982,6 +1982,11 @@ static int dwc3_remove(struct platform_device *pdev)
+ 	pm_runtime_allow(&pdev->dev);
+ 	pm_runtime_disable(&pdev->dev);
+ 	pm_runtime_put_noidle(&pdev->dev);
++	/*
++	 * HACK: Clear the driver data, which is currently accessed by parent
++	 * glue drivers, before allowing the parent to suspend.
++	 */
++	platform_set_drvdata(pdev, NULL);
+ 	pm_runtime_set_suspended(&pdev->dev);
+ 
+ 	dwc3_free_event_buffers(dwc);
+diff --git a/drivers/usb/dwc3/dwc3-qcom.c b/drivers/usb/dwc3/dwc3-qcom.c
+index 959fc925ca7c5..79b22abf97276 100644
+--- a/drivers/usb/dwc3/dwc3-qcom.c
++++ b/drivers/usb/dwc3/dwc3-qcom.c
+@@ -308,7 +308,16 @@ static void dwc3_qcom_interconnect_exit(struct dwc3_qcom *qcom)
+ /* Only usable in contexts where the role can not change. */
+ static bool dwc3_qcom_is_host(struct dwc3_qcom *qcom)
+ {
+-	struct dwc3 *dwc = platform_get_drvdata(qcom->dwc3);
++	struct dwc3 *dwc;
++
++	/*
++	 * FIXME: Fix this layering violation.
++	 */
++	dwc = platform_get_drvdata(qcom->dwc3);
++
++	/* Core driver may not have probed yet. */
++	if (!dwc)
++		return false;
+ 
+ 	return dwc->xhci;
+ }
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 1cf352fbe7039..04bf054aa4f18 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -180,6 +180,7 @@ static void dwc3_gadget_del_and_unmap_request(struct dwc3_ep *dep,
+ 	list_del(&req->list);
+ 	req->remaining = 0;
+ 	req->needs_extra_trb = false;
++	req->num_trbs = 0;
+ 
+ 	if (req->request.status == -EINPROGRESS)
+ 		req->request.status = status;
+diff --git a/drivers/usb/gadget/udc/core.c b/drivers/usb/gadget/udc/core.c
+index 23b0629a87743..85ced2dceaaa1 100644
+--- a/drivers/usb/gadget/udc/core.c
++++ b/drivers/usb/gadget/udc/core.c
+@@ -37,6 +37,14 @@ static struct bus_type gadget_bus_type;
+  * @vbus: for udcs who care about vbus status, this value is real vbus status;
+  * for udcs who do not care about vbus status, this value is always true
+  * @started: the UDC's started state. True if the UDC had started.
++ * @allow_connect: Indicates whether UDC is allowed to be pulled up.
++ * Set/cleared by gadget_(un)bind_driver() after gadget driver is bound or
++ * unbound.
++ * @connect_lock: protects udc->started, gadget->connect,
++ * gadget->allow_connect and gadget->deactivate. The routines
++ * usb_gadget_connect_locked(), usb_gadget_disconnect_locked(),
++ * usb_udc_connect_control_locked(), usb_gadget_udc_start_locked() and
++ * usb_gadget_udc_stop_locked() are called with this lock held.
+  *
+  * This represents the internal data structure which is used by the UDC-class
+  * to hold information about udc driver and gadget together.
+@@ -48,6 +56,9 @@ struct usb_udc {
+ 	struct list_head		list;
+ 	bool				vbus;
+ 	bool				started;
++	bool				allow_connect;
++	struct work_struct		vbus_work;
++	struct mutex			connect_lock;
+ };
+ 
+ static struct class *udc_class;
+@@ -660,17 +671,8 @@ out:
+ }
+ EXPORT_SYMBOL_GPL(usb_gadget_vbus_disconnect);
+ 
+-/**
+- * usb_gadget_connect - software-controlled connect to USB host
+- * @gadget:the peripheral being connected
+- *
+- * Enables the D+ (or potentially D-) pullup.  The host will start
+- * enumerating this gadget when the pullup is active and a VBUS session
+- * is active (the link is powered).
+- *
+- * Returns zero on success, else negative errno.
+- */
+-int usb_gadget_connect(struct usb_gadget *gadget)
++static int usb_gadget_connect_locked(struct usb_gadget *gadget)
++	__must_hold(&gadget->udc->connect_lock)
+ {
+ 	int ret = 0;
+ 
+@@ -679,10 +681,12 @@ int usb_gadget_connect(struct usb_gadget *gadget)
+ 		goto out;
+ 	}
+ 
+-	if (gadget->deactivated) {
++	if (gadget->deactivated || !gadget->udc->allow_connect || !gadget->udc->started) {
+ 		/*
+-		 * If gadget is deactivated we only save new state.
+-		 * Gadget will be connected automatically after activation.
++		 * If the gadget isn't usable (because it is deactivated,
++		 * unbound, or not yet started), we only save the new state.
++		 * The gadget will be connected automatically when it is
++		 * activated/bound/started.
+ 		 */
+ 		gadget->connected = true;
+ 		goto out;
+@@ -697,22 +701,31 @@ out:
+ 
+ 	return ret;
+ }
+-EXPORT_SYMBOL_GPL(usb_gadget_connect);
+ 
+ /**
+- * usb_gadget_disconnect - software-controlled disconnect from USB host
+- * @gadget:the peripheral being disconnected
+- *
+- * Disables the D+ (or potentially D-) pullup, which the host may see
+- * as a disconnect (when a VBUS session is active).  Not all systems
+- * support software pullup controls.
++ * usb_gadget_connect - software-controlled connect to USB host
++ * @gadget:the peripheral being connected
+  *
+- * Following a successful disconnect, invoke the ->disconnect() callback
+- * for the current gadget driver so that UDC drivers don't need to.
++ * Enables the D+ (or potentially D-) pullup.  The host will start
++ * enumerating this gadget when the pullup is active and a VBUS session
++ * is active (the link is powered).
+  *
+  * Returns zero on success, else negative errno.
+  */
+-int usb_gadget_disconnect(struct usb_gadget *gadget)
++int usb_gadget_connect(struct usb_gadget *gadget)
++{
++	int ret;
++
++	mutex_lock(&gadget->udc->connect_lock);
++	ret = usb_gadget_connect_locked(gadget);
++	mutex_unlock(&gadget->udc->connect_lock);
++
++	return ret;
++}
++EXPORT_SYMBOL_GPL(usb_gadget_connect);
++
++static int usb_gadget_disconnect_locked(struct usb_gadget *gadget)
++	__must_hold(&gadget->udc->connect_lock)
+ {
+ 	int ret = 0;
+ 
+@@ -724,7 +737,7 @@ int usb_gadget_disconnect(struct usb_gadget *gadget)
+ 	if (!gadget->connected)
+ 		goto out;
+ 
+-	if (gadget->deactivated) {
++	if (gadget->deactivated || !gadget->udc->started) {
+ 		/*
+ 		 * If gadget is deactivated we only save new state.
+ 		 * Gadget will stay disconnected after activation.
+@@ -747,6 +760,30 @@ out:
+ 
+ 	return ret;
+ }
++
++/**
++ * usb_gadget_disconnect - software-controlled disconnect from USB host
++ * @gadget:the peripheral being disconnected
++ *
++ * Disables the D+ (or potentially D-) pullup, which the host may see
++ * as a disconnect (when a VBUS session is active).  Not all systems
++ * support software pullup controls.
++ *
++ * Following a successful disconnect, invoke the ->disconnect() callback
++ * for the current gadget driver so that UDC drivers don't need to.
++ *
++ * Returns zero on success, else negative errno.
++ */
++int usb_gadget_disconnect(struct usb_gadget *gadget)
++{
++	int ret;
++
++	mutex_lock(&gadget->udc->connect_lock);
++	ret = usb_gadget_disconnect_locked(gadget);
++	mutex_unlock(&gadget->udc->connect_lock);
++
++	return ret;
++}
+ EXPORT_SYMBOL_GPL(usb_gadget_disconnect);
+ 
+ /**
+@@ -764,13 +801,14 @@ int usb_gadget_deactivate(struct usb_gadget *gadget)
+ {
+ 	int ret = 0;
+ 
++	mutex_lock(&gadget->udc->connect_lock);
+ 	if (gadget->deactivated)
+-		goto out;
++		goto unlock;
+ 
+ 	if (gadget->connected) {
+-		ret = usb_gadget_disconnect(gadget);
++		ret = usb_gadget_disconnect_locked(gadget);
+ 		if (ret)
+-			goto out;
++			goto unlock;
+ 
+ 		/*
+ 		 * If gadget was being connected before deactivation, we want
+@@ -780,7 +818,8 @@ int usb_gadget_deactivate(struct usb_gadget *gadget)
+ 	}
+ 	gadget->deactivated = true;
+ 
+-out:
++unlock:
++	mutex_unlock(&gadget->udc->connect_lock);
+ 	trace_usb_gadget_deactivate(gadget, ret);
+ 
+ 	return ret;
+@@ -800,8 +839,9 @@ int usb_gadget_activate(struct usb_gadget *gadget)
+ {
+ 	int ret = 0;
+ 
++	mutex_lock(&gadget->udc->connect_lock);
+ 	if (!gadget->deactivated)
+-		goto out;
++		goto unlock;
+ 
+ 	gadget->deactivated = false;
+ 
+@@ -810,9 +850,11 @@ int usb_gadget_activate(struct usb_gadget *gadget)
+ 	 * while it was being deactivated, we call usb_gadget_connect().
+ 	 */
+ 	if (gadget->connected)
+-		ret = usb_gadget_connect(gadget);
++		ret = usb_gadget_connect_locked(gadget);
++	mutex_unlock(&gadget->udc->connect_lock);
+ 
+-out:
++unlock:
++	mutex_unlock(&gadget->udc->connect_lock);
+ 	trace_usb_gadget_activate(gadget, ret);
+ 
+ 	return ret;
+@@ -1051,12 +1093,22 @@ EXPORT_SYMBOL_GPL(usb_gadget_set_state);
+ 
+ /* ------------------------------------------------------------------------- */
+ 
+-static void usb_udc_connect_control(struct usb_udc *udc)
++/* Acquire connect_lock before calling this function. */
++static void usb_udc_connect_control_locked(struct usb_udc *udc) __must_hold(&udc->connect_lock)
+ {
+ 	if (udc->vbus)
+-		usb_gadget_connect(udc->gadget);
++		usb_gadget_connect_locked(udc->gadget);
+ 	else
+-		usb_gadget_disconnect(udc->gadget);
++		usb_gadget_disconnect_locked(udc->gadget);
++}
++
++static void vbus_event_work(struct work_struct *work)
++{
++	struct usb_udc *udc = container_of(work, struct usb_udc, vbus_work);
++
++	mutex_lock(&udc->connect_lock);
++	usb_udc_connect_control_locked(udc);
++	mutex_unlock(&udc->connect_lock);
+ }
+ 
+ /**
+@@ -1067,6 +1119,14 @@ static void usb_udc_connect_control(struct usb_udc *udc)
+  *
+  * The udc driver calls it when it wants to connect or disconnect gadget
+  * according to vbus status.
++ *
++ * This function can be invoked from interrupt context by irq handlers of
++ * the gadget drivers, however, usb_udc_connect_control() has to run in
++ * non-atomic context due to the following:
++ * a. Some of the gadget driver implementations expect the ->pullup
++ * callback to be invoked in non-atomic context.
++ * b. usb_gadget_disconnect() acquires udc_lock which is a mutex.
++ * Hence offload invocation of usb_udc_connect_control() to workqueue.
+  */
+ void usb_udc_vbus_handler(struct usb_gadget *gadget, bool status)
+ {
+@@ -1074,7 +1134,7 @@ void usb_udc_vbus_handler(struct usb_gadget *gadget, bool status)
+ 
+ 	if (udc) {
+ 		udc->vbus = status;
+-		usb_udc_connect_control(udc);
++		schedule_work(&udc->vbus_work);
+ 	}
+ }
+ EXPORT_SYMBOL_GPL(usb_udc_vbus_handler);
+@@ -1097,7 +1157,7 @@ void usb_gadget_udc_reset(struct usb_gadget *gadget,
+ EXPORT_SYMBOL_GPL(usb_gadget_udc_reset);
+ 
+ /**
+- * usb_gadget_udc_start - tells usb device controller to start up
++ * usb_gadget_udc_start_locked - tells usb device controller to start up
+  * @udc: The UDC to be started
+  *
+  * This call is issued by the UDC Class driver when it's about
+@@ -1108,8 +1168,11 @@ EXPORT_SYMBOL_GPL(usb_gadget_udc_reset);
+  * necessary to have it powered on.
+  *
+  * Returns zero on success, else negative errno.
++ *
++ * Caller should acquire connect_lock before invoking this function.
+  */
+-static inline int usb_gadget_udc_start(struct usb_udc *udc)
++static inline int usb_gadget_udc_start_locked(struct usb_udc *udc)
++	__must_hold(&udc->connect_lock)
+ {
+ 	int ret;
+ 
+@@ -1126,7 +1189,7 @@ static inline int usb_gadget_udc_start(struct usb_udc *udc)
+ }
+ 
+ /**
+- * usb_gadget_udc_stop - tells usb device controller we don't need it anymore
++ * usb_gadget_udc_stop_locked - tells usb device controller we don't need it anymore
+  * @udc: The UDC to be stopped
+  *
+  * This call is issued by the UDC Class driver after calling
+@@ -1135,8 +1198,11 @@ static inline int usb_gadget_udc_start(struct usb_udc *udc)
+  * The details are implementation specific, but it can go as
+  * far as powering off UDC completely and disable its data
+  * line pullups.
++ *
++ * Caller should acquire connect lock before invoking this function.
+  */
+-static inline void usb_gadget_udc_stop(struct usb_udc *udc)
++static inline void usb_gadget_udc_stop_locked(struct usb_udc *udc)
++	__must_hold(&udc->connect_lock)
+ {
+ 	if (!udc->started) {
+ 		dev_err(&udc->dev, "UDC had already stopped\n");
+@@ -1295,12 +1361,14 @@ int usb_add_gadget(struct usb_gadget *gadget)
+ 
+ 	udc->gadget = gadget;
+ 	gadget->udc = udc;
++	mutex_init(&udc->connect_lock);
+ 
+ 	udc->started = false;
+ 
+ 	mutex_lock(&udc_lock);
+ 	list_add_tail(&udc->list, &udc_list);
+ 	mutex_unlock(&udc_lock);
++	INIT_WORK(&udc->vbus_work, vbus_event_work);
+ 
+ 	ret = device_add(&udc->dev);
+ 	if (ret)
+@@ -1432,6 +1500,7 @@ void usb_del_gadget(struct usb_gadget *gadget)
+ 	flush_work(&gadget->work);
+ 	device_del(&gadget->dev);
+ 	ida_free(&gadget_id_numbers, gadget->id_number);
++	cancel_work_sync(&udc->vbus_work);
+ 	device_unregister(&udc->dev);
+ }
+ EXPORT_SYMBOL_GPL(usb_del_gadget);
+@@ -1496,11 +1565,16 @@ static int gadget_bind_driver(struct device *dev)
+ 	if (ret)
+ 		goto err_bind;
+ 
+-	ret = usb_gadget_udc_start(udc);
+-	if (ret)
++	mutex_lock(&udc->connect_lock);
++	ret = usb_gadget_udc_start_locked(udc);
++	if (ret) {
++		mutex_unlock(&udc->connect_lock);
+ 		goto err_start;
++	}
+ 	usb_gadget_enable_async_callbacks(udc);
+-	usb_udc_connect_control(udc);
++	udc->allow_connect = true;
++	usb_udc_connect_control_locked(udc);
++	mutex_unlock(&udc->connect_lock);
+ 
+ 	kobject_uevent(&udc->dev.kobj, KOBJ_CHANGE);
+ 	return 0;
+@@ -1531,12 +1605,16 @@ static void gadget_unbind_driver(struct device *dev)
+ 
+ 	kobject_uevent(&udc->dev.kobj, KOBJ_CHANGE);
+ 
+-	usb_gadget_disconnect(gadget);
++	udc->allow_connect = false;
++	cancel_work_sync(&udc->vbus_work);
++	mutex_lock(&udc->connect_lock);
++	usb_gadget_disconnect_locked(gadget);
+ 	usb_gadget_disable_async_callbacks(udc);
+ 	if (gadget->irq)
+ 		synchronize_irq(gadget->irq);
+ 	udc->driver->unbind(gadget);
+-	usb_gadget_udc_stop(udc);
++	usb_gadget_udc_stop_locked(udc);
++	mutex_unlock(&udc->connect_lock);
+ 
+ 	mutex_lock(&udc_lock);
+ 	driver->is_bound = false;
+@@ -1622,11 +1700,15 @@ static ssize_t soft_connect_store(struct device *dev,
+ 	}
+ 
+ 	if (sysfs_streq(buf, "connect")) {
+-		usb_gadget_udc_start(udc);
+-		usb_gadget_connect(udc->gadget);
++		mutex_lock(&udc->connect_lock);
++		usb_gadget_udc_start_locked(udc);
++		usb_gadget_connect_locked(udc->gadget);
++		mutex_unlock(&udc->connect_lock);
+ 	} else if (sysfs_streq(buf, "disconnect")) {
+-		usb_gadget_disconnect(udc->gadget);
+-		usb_gadget_udc_stop(udc);
++		mutex_lock(&udc->connect_lock);
++		usb_gadget_disconnect_locked(udc->gadget);
++		usb_gadget_udc_stop_locked(udc);
++		mutex_unlock(&udc->connect_lock);
+ 	} else {
+ 		dev_err(dev, "unsupported command '%s'\n", buf);
+ 		ret = -EINVAL;
+diff --git a/drivers/usb/gadget/udc/renesas_usb3.c b/drivers/usb/gadget/udc/renesas_usb3.c
+index a301af66bd912..50d3d4c55fac7 100644
+--- a/drivers/usb/gadget/udc/renesas_usb3.c
++++ b/drivers/usb/gadget/udc/renesas_usb3.c
+@@ -2898,9 +2898,9 @@ static int renesas_usb3_probe(struct platform_device *pdev)
+ 		struct rzv2m_usb3drd *ddata = dev_get_drvdata(pdev->dev.parent);
+ 
+ 		usb3->drd_reg = ddata->reg;
+-		ret = devm_request_irq(ddata->dev, ddata->drd_irq,
++		ret = devm_request_irq(&pdev->dev, ddata->drd_irq,
+ 				       renesas_usb3_otg_irq, 0,
+-				       dev_name(ddata->dev), usb3);
++				       dev_name(&pdev->dev), usb3);
+ 		if (ret < 0)
+ 			return ret;
+ 	}
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 644a55447fd7f..fd42e3a0bd187 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -248,6 +248,8 @@ static void option_instat_callback(struct urb *urb);
+ #define QUECTEL_VENDOR_ID			0x2c7c
+ /* These Quectel products use Quectel's vendor ID */
+ #define QUECTEL_PRODUCT_EC21			0x0121
++#define QUECTEL_PRODUCT_EM061K_LTA		0x0123
++#define QUECTEL_PRODUCT_EM061K_LMS		0x0124
+ #define QUECTEL_PRODUCT_EC25			0x0125
+ #define QUECTEL_PRODUCT_EG91			0x0191
+ #define QUECTEL_PRODUCT_EG95			0x0195
+@@ -266,6 +268,8 @@ static void option_instat_callback(struct urb *urb);
+ #define QUECTEL_PRODUCT_RM520N			0x0801
+ #define QUECTEL_PRODUCT_EC200U			0x0901
+ #define QUECTEL_PRODUCT_EC200S_CN		0x6002
++#define QUECTEL_PRODUCT_EM061K_LWW		0x6008
++#define QUECTEL_PRODUCT_EM061K_LCN		0x6009
+ #define QUECTEL_PRODUCT_EC200T			0x6026
+ #define QUECTEL_PRODUCT_RM500K			0x7001
+ 
+@@ -1189,6 +1193,18 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K, 0xff, 0x00, 0x40) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K, 0xff, 0xff, 0x30) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K, 0xff, 0xff, 0x40) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LCN, 0xff, 0xff, 0x30) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LCN, 0xff, 0x00, 0x40) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LCN, 0xff, 0xff, 0x40) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LMS, 0xff, 0xff, 0x30) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LMS, 0xff, 0x00, 0x40) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LMS, 0xff, 0xff, 0x40) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LTA, 0xff, 0xff, 0x30) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LTA, 0xff, 0x00, 0x40) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LTA, 0xff, 0xff, 0x40) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LWW, 0xff, 0xff, 0x30) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LWW, 0xff, 0x00, 0x40) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LWW, 0xff, 0xff, 0x40) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM12, 0xff, 0xff, 0xff),
+ 	  .driver_info = RSVD(1) | RSVD(2) | RSVD(3) | RSVD(4) | NUMEP2 },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM12, 0xff, 0, 0) },
+diff --git a/drivers/usb/typec/pd.c b/drivers/usb/typec/pd.c
+index 59c537a5e6000..d86106fd92ebb 100644
+--- a/drivers/usb/typec/pd.c
++++ b/drivers/usb/typec/pd.c
+@@ -96,7 +96,7 @@ peak_current_show(struct device *dev, struct device_attribute *attr, char *buf)
+ static ssize_t
+ fast_role_swap_current_show(struct device *dev, struct device_attribute *attr, char *buf)
+ {
+-	return sysfs_emit(buf, "%u\n", to_pdo(dev)->pdo >> PDO_FIXED_FRS_CURR_SHIFT) & 3;
++	return sysfs_emit(buf, "%u\n", (to_pdo(dev)->pdo >> PDO_FIXED_FRS_CURR_SHIFT) & 3);
+ }
+ static DEVICE_ATTR_RO(fast_role_swap_current);
+ 
+diff --git a/drivers/usb/typec/ucsi/ucsi.c b/drivers/usb/typec/ucsi/ucsi.c
+index 8d1baf28df55c..8ada8e969f79c 100644
+--- a/drivers/usb/typec/ucsi/ucsi.c
++++ b/drivers/usb/typec/ucsi/ucsi.c
+@@ -132,10 +132,8 @@ static int ucsi_exec_command(struct ucsi *ucsi, u64 cmd)
+ 	if (ret)
+ 		return ret;
+ 
+-	if (cci & UCSI_CCI_BUSY) {
+-		ucsi->ops->async_write(ucsi, UCSI_CANCEL, NULL, 0);
+-		return -EBUSY;
+-	}
++	if (cmd != UCSI_CANCEL && cci & UCSI_CCI_BUSY)
++		return ucsi_exec_command(ucsi, UCSI_CANCEL);
+ 
+ 	if (!(cci & UCSI_CCI_COMMAND_COMPLETE))
+ 		return -EIO;
+@@ -149,6 +147,11 @@ static int ucsi_exec_command(struct ucsi *ucsi, u64 cmd)
+ 		return ucsi_read_error(ucsi);
+ 	}
+ 
++	if (cmd == UCSI_CANCEL && cci & UCSI_CCI_CANCEL_COMPLETE) {
++		ret = ucsi_acknowledge_command(ucsi);
++		return ret ? ret : -EBUSY;
++	}
++
+ 	return UCSI_CCI_LENGTH(cci);
+ }
+ 
+diff --git a/fs/afs/vl_probe.c b/fs/afs/vl_probe.c
+index d1c7068b4346f..58452b86e6727 100644
+--- a/fs/afs/vl_probe.c
++++ b/fs/afs/vl_probe.c
+@@ -115,8 +115,8 @@ responded:
+ 		}
+ 	}
+ 
+-	if (rxrpc_kernel_get_srtt(call->net->socket, call->rxcall, &rtt_us) &&
+-	    rtt_us < server->probe.rtt) {
++	rxrpc_kernel_get_srtt(call->net->socket, call->rxcall, &rtt_us);
++	if (rtt_us < server->probe.rtt) {
+ 		server->probe.rtt = rtt_us;
+ 		server->rtt = rtt_us;
+ 		alist->preferred = index;
+diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
+index 5fc670c27f864..58ce5d44ce4d5 100644
+--- a/fs/btrfs/block-group.c
++++ b/fs/btrfs/block-group.c
+@@ -2832,10 +2832,20 @@ int btrfs_inc_block_group_ro(struct btrfs_block_group *cache,
+ 	}
+ 
+ 	ret = inc_block_group_ro(cache, 0);
+-	if (!do_chunk_alloc || ret == -ETXTBSY)
+-		goto unlock_out;
+ 	if (!ret)
+ 		goto out;
++	if (ret == -ETXTBSY)
++		goto unlock_out;
++
++	/*
++	 * Skip chunk alloction if the bg is SYSTEM, this is to avoid system
++	 * chunk allocation storm to exhaust the system chunk array.  Otherwise
++	 * we still want to try our best to mark the block group read-only.
++	 */
++	if (!do_chunk_alloc && ret == -ENOSPC &&
++	    (cache->flags & BTRFS_BLOCK_GROUP_SYSTEM))
++		goto unlock_out;
++
+ 	alloc_flags = btrfs_get_alloc_profile(fs_info, cache->space_info->flags);
+ 	ret = btrfs_chunk_alloc(trans, alloc_flags, CHUNK_ALLOC_FORCE);
+ 	if (ret < 0)
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index 53eedc74cfcca..a47d8ad6abcbe 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -242,7 +242,6 @@ static int btrfs_repair_eb_io_failure(const struct extent_buffer *eb,
+ 				      int mirror_num)
+ {
+ 	struct btrfs_fs_info *fs_info = eb->fs_info;
+-	u64 start = eb->start;
+ 	int i, num_pages = num_extent_pages(eb);
+ 	int ret = 0;
+ 
+@@ -251,12 +250,14 @@ static int btrfs_repair_eb_io_failure(const struct extent_buffer *eb,
+ 
+ 	for (i = 0; i < num_pages; i++) {
+ 		struct page *p = eb->pages[i];
++		u64 start = max_t(u64, eb->start, page_offset(p));
++		u64 end = min_t(u64, eb->start + eb->len, page_offset(p) + PAGE_SIZE);
++		u32 len = end - start;
+ 
+-		ret = btrfs_repair_io_failure(fs_info, 0, start, PAGE_SIZE,
+-				start, p, start - page_offset(p), mirror_num);
++		ret = btrfs_repair_io_failure(fs_info, 0, start, len,
++				start, p, offset_in_page(start), mirror_num);
+ 		if (ret)
+ 			break;
+-		start += PAGE_SIZE;
+ 	}
+ 
+ 	return ret;
+@@ -995,13 +996,18 @@ int btrfs_global_root_insert(struct btrfs_root *root)
+ {
+ 	struct btrfs_fs_info *fs_info = root->fs_info;
+ 	struct rb_node *tmp;
++	int ret = 0;
+ 
+ 	write_lock(&fs_info->global_root_lock);
+ 	tmp = rb_find_add(&root->rb_node, &fs_info->global_root_tree, global_root_cmp);
+ 	write_unlock(&fs_info->global_root_lock);
+-	ASSERT(!tmp);
+ 
+-	return tmp ? -EEXIST : 0;
++	if (tmp) {
++		ret = -EEXIST;
++		btrfs_warn(fs_info, "global root %llu %llu already exists",
++				root->root_key.objectid, root->root_key.offset);
++	}
++	return ret;
+ }
+ 
+ void btrfs_global_root_delete(struct btrfs_root *root)
+@@ -2842,6 +2848,7 @@ static int __cold init_tree_roots(struct btrfs_fs_info *fs_info)
+ 			/* We can't trust the free space cache either */
+ 			btrfs_set_opt(fs_info->mount_opt, CLEAR_CACHE);
+ 
++			btrfs_warn(fs_info, "try to load backup roots slot %d", i);
+ 			ret = read_backup_root(fs_info, i);
+ 			backup_index = ret;
+ 			if (ret < 0)
+diff --git a/fs/btrfs/file-item.c b/fs/btrfs/file-item.c
+index a4584c629ba35..9e45b416a9c85 100644
+--- a/fs/btrfs/file-item.c
++++ b/fs/btrfs/file-item.c
+@@ -847,7 +847,9 @@ blk_status_t btrfs_csum_one_bio(struct btrfs_bio *bbio)
+ 				sums = kvzalloc(btrfs_ordered_sum_size(fs_info,
+ 						      bytes_left), GFP_KERNEL);
+ 				memalloc_nofs_restore(nofs_flag);
+-				BUG_ON(!sums); /* -ENOMEM */
++				if (!sums)
++					return BLK_STS_RESOURCE;
++
+ 				sums->len = bytes_left;
+ 				ordered = btrfs_lookup_ordered_extent(inode,
+ 								offset);
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index b31bb33524774..00ade644579f5 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -1869,7 +1869,7 @@ static int can_nocow_file_extent(struct btrfs_path *path,
+ 
+ 	ret = btrfs_cross_ref_exist(root, btrfs_ino(inode),
+ 				    key->offset - args->extent_offset,
+-				    args->disk_bytenr, false, path);
++				    args->disk_bytenr, args->strict, path);
+ 	WARN_ON_ONCE(ret > 0 && is_freespace_inode);
+ 	if (ret != 0)
+ 		goto out;
+@@ -7324,7 +7324,7 @@ static struct extent_map *create_io_em(struct btrfs_inode *inode, u64 start,
+ static int btrfs_get_blocks_direct_write(struct extent_map **map,
+ 					 struct inode *inode,
+ 					 struct btrfs_dio_data *dio_data,
+-					 u64 start, u64 len,
++					 u64 start, u64 *lenp,
+ 					 unsigned int iomap_flags)
+ {
+ 	const bool nowait = (iomap_flags & IOMAP_NOWAIT);
+@@ -7335,6 +7335,7 @@ static int btrfs_get_blocks_direct_write(struct extent_map **map,
+ 	struct btrfs_block_group *bg;
+ 	bool can_nocow = false;
+ 	bool space_reserved = false;
++	u64 len = *lenp;
+ 	u64 prev_len;
+ 	int ret = 0;
+ 
+@@ -7405,15 +7406,19 @@ static int btrfs_get_blocks_direct_write(struct extent_map **map,
+ 		free_extent_map(em);
+ 		*map = NULL;
+ 
+-		if (nowait)
+-			return -EAGAIN;
++		if (nowait) {
++			ret = -EAGAIN;
++			goto out;
++		}
+ 
+ 		/*
+ 		 * If we could not allocate data space before locking the file
+ 		 * range and we can't do a NOCOW write, then we have to fail.
+ 		 */
+-		if (!dio_data->data_space_reserved)
+-			return -ENOSPC;
++		if (!dio_data->data_space_reserved) {
++			ret = -ENOSPC;
++			goto out;
++		}
+ 
+ 		/*
+ 		 * We have to COW and we have already reserved data space before,
+@@ -7454,6 +7459,7 @@ out:
+ 		btrfs_delalloc_release_extents(BTRFS_I(inode), len);
+ 		btrfs_delalloc_release_metadata(BTRFS_I(inode), len, true);
+ 	}
++	*lenp = len;
+ 	return ret;
+ }
+ 
+@@ -7630,7 +7636,7 @@ static int btrfs_dio_iomap_begin(struct inode *inode, loff_t start,
+ 
+ 	if (write) {
+ 		ret = btrfs_get_blocks_direct_write(&em, inode, dio_data,
+-						    start, len, flags);
++						    start, &len, flags);
+ 		if (ret < 0)
+ 			goto unlock_err;
+ 		unlock_extents = true;
+diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
+index 69c93ae333f63..3720fd1f593d2 100644
+--- a/fs/btrfs/scrub.c
++++ b/fs/btrfs/scrub.c
+@@ -4034,13 +4034,20 @@ int scrub_enumerate_chunks(struct scrub_ctx *sctx,
+ 
+ 		if (ret == 0) {
+ 			ro_set = 1;
+-		} else if (ret == -ENOSPC && !sctx->is_dev_replace) {
++		} else if (ret == -ENOSPC && !sctx->is_dev_replace &&
++			   !(cache->flags & BTRFS_BLOCK_GROUP_RAID56_MASK)) {
+ 			/*
+ 			 * btrfs_inc_block_group_ro return -ENOSPC when it
+ 			 * failed in creating new chunk for metadata.
+ 			 * It is not a problem for scrub, because
+ 			 * metadata are always cowed, and our scrub paused
+ 			 * commit_transactions.
++			 *
++			 * For RAID56 chunks, we have to mark them read-only
++			 * for scrub, as later we would use our own cache
++			 * out of RAID56 realm.
++			 * Thus we want the RAID56 bg to be marked RO to
++			 * prevent RMW from screwing up out cache.
+ 			 */
+ 			ro_set = 0;
+ 		} else if (ret == -ETXTBSY) {
+diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
+index e19792d919c66..06e91c02be9e1 100644
+--- a/fs/btrfs/super.c
++++ b/fs/btrfs/super.c
+@@ -1840,6 +1840,12 @@ static int btrfs_remount(struct super_block *sb, int *flags, char *data)
+ 		btrfs_clear_sb_rdonly(sb);
+ 
+ 		set_bit(BTRFS_FS_OPEN, &fs_info->flags);
++
++		/*
++		 * If we've gone from readonly -> read/write, we need to get
++		 * our sync/async discard lists in the right state.
++		 */
++		btrfs_discard_resume(fs_info);
+ 	}
+ out:
+ 	/*
+diff --git a/fs/cifs/file.c b/fs/cifs/file.c
+index df88b8c04d03d..051283386e229 100644
+--- a/fs/cifs/file.c
++++ b/fs/cifs/file.c
+@@ -4942,9 +4942,13 @@ oplock_break_ack:
+ 	 * disconnected since oplock already released by the server
+ 	 */
+ 	if (!oplock_break_cancelled) {
+-		rc = tcon->ses->server->ops->oplock_response(tcon, persistent_fid,
++		/* check for server null since can race with kill_sb calling tree disconnect */
++		if (tcon->ses && tcon->ses->server) {
++			rc = tcon->ses->server->ops->oplock_response(tcon, persistent_fid,
+ 				volatile_fid, net_fid, cinode);
+-		cifs_dbg(FYI, "Oplock release rc = %d\n", rc);
++			cifs_dbg(FYI, "Oplock release rc = %d\n", rc);
++		} else
++			pr_warn_once("lease break not sent for unmounted share\n");
+ 	}
+ 
+ 	cifs_done_oplock_break(cinode);
+diff --git a/fs/erofs/Kconfig b/fs/erofs/Kconfig
+index 704fb59577e09..f259d92c97207 100644
+--- a/fs/erofs/Kconfig
++++ b/fs/erofs/Kconfig
+@@ -121,6 +121,7 @@ config EROFS_FS_PCPU_KTHREAD
+ config EROFS_FS_PCPU_KTHREAD_HIPRI
+ 	bool "EROFS high priority per-CPU kthread workers"
+ 	depends on EROFS_FS_ZIP && EROFS_FS_PCPU_KTHREAD
++	default y
+ 	help
+ 	  This permits EROFS to configure per-CPU kthread workers to run
+ 	  at higher priority.
+diff --git a/fs/erofs/zdata.c b/fs/erofs/zdata.c
+index f1708c77a9912..d7add72a09437 100644
+--- a/fs/erofs/zdata.c
++++ b/fs/erofs/zdata.c
+@@ -369,8 +369,6 @@ static struct kthread_worker *erofs_init_percpu_worker(int cpu)
+ 		return worker;
+ 	if (IS_ENABLED(CONFIG_EROFS_FS_PCPU_KTHREAD_HIPRI))
+ 		sched_set_fifo_low(worker->task);
+-	else
+-		sched_set_normal(worker->task, 0);
+ 	return worker;
+ }
+ 
+diff --git a/fs/eventpoll.c b/fs/eventpoll.c
+index 64659b1109733..eccecd3fac90c 100644
+--- a/fs/eventpoll.c
++++ b/fs/eventpoll.c
+@@ -1760,7 +1760,11 @@ static int ep_autoremove_wake_function(struct wait_queue_entry *wq_entry,
+ {
+ 	int ret = default_wake_function(wq_entry, mode, sync, key);
+ 
+-	list_del_init(&wq_entry->entry);
++	/*
++	 * Pairs with list_empty_careful in ep_poll, and ensures future loop
++	 * iterations see the cause of this wakeup.
++	 */
++	list_del_init_careful(&wq_entry->entry);
+ 	return ret;
+ }
+ 
+diff --git a/fs/ext4/balloc.c b/fs/ext4/balloc.c
+index a38aa33af08ef..8e83b51e3c68a 100644
+--- a/fs/ext4/balloc.c
++++ b/fs/ext4/balloc.c
+@@ -322,17 +322,15 @@ static ext4_fsblk_t ext4_valid_block_bitmap_padding(struct super_block *sb,
+ struct ext4_group_info *ext4_get_group_info(struct super_block *sb,
+ 					    ext4_group_t group)
+ {
+-	 struct ext4_group_info **grp_info;
+-	 long indexv, indexh;
+-
+-	 if (unlikely(group >= EXT4_SB(sb)->s_groups_count)) {
+-		 ext4_error(sb, "invalid group %u", group);
+-		 return NULL;
+-	 }
+-	 indexv = group >> (EXT4_DESC_PER_BLOCK_BITS(sb));
+-	 indexh = group & ((EXT4_DESC_PER_BLOCK(sb)) - 1);
+-	 grp_info = sbi_array_rcu_deref(EXT4_SB(sb), s_group_info, indexv);
+-	 return grp_info[indexh];
++	struct ext4_group_info **grp_info;
++	long indexv, indexh;
++
++	if (unlikely(group >= EXT4_SB(sb)->s_groups_count))
++		return NULL;
++	indexv = group >> (EXT4_DESC_PER_BLOCK_BITS(sb));
++	indexh = group & ((EXT4_DESC_PER_BLOCK(sb)) - 1);
++	grp_info = sbi_array_rcu_deref(EXT4_SB(sb), s_group_info, indexv);
++	return grp_info[indexh];
+ }
+ 
+ /*
+diff --git a/fs/ksmbd/connection.c b/fs/ksmbd/connection.c
+index e11d4a1e63d73..2a717d158f02e 100644
+--- a/fs/ksmbd/connection.c
++++ b/fs/ksmbd/connection.c
+@@ -364,8 +364,6 @@ int ksmbd_conn_handler_loop(void *p)
+ 			break;
+ 
+ 		memcpy(conn->request_buf, hdr_buf, sizeof(hdr_buf));
+-		if (!ksmbd_smb_request(conn))
+-			break;
+ 
+ 		/*
+ 		 * We already read 4 bytes to find out PDU size, now
+@@ -383,6 +381,9 @@ int ksmbd_conn_handler_loop(void *p)
+ 			continue;
+ 		}
+ 
++		if (!ksmbd_smb_request(conn))
++			break;
++
+ 		if (((struct smb2_hdr *)smb2_get_msg(conn->request_buf))->ProtocolId ==
+ 		    SMB2_PROTO_NUMBER) {
+ 			if (pdu_size < SMB2_MIN_SUPPORTED_HEADER_SIZE)
+diff --git a/fs/ksmbd/smb_common.c b/fs/ksmbd/smb_common.c
+index af0c2a9b85290..569e5eecdf3db 100644
+--- a/fs/ksmbd/smb_common.c
++++ b/fs/ksmbd/smb_common.c
+@@ -158,7 +158,19 @@ int ksmbd_verify_smb_message(struct ksmbd_work *work)
+  */
+ bool ksmbd_smb_request(struct ksmbd_conn *conn)
+ {
+-	return conn->request_buf[0] == 0;
++	__le32 *proto = (__le32 *)smb2_get_msg(conn->request_buf);
++
++	if (*proto == SMB2_COMPRESSION_TRANSFORM_ID) {
++		pr_err_ratelimited("smb2 compression not support yet");
++		return false;
++	}
++
++	if (*proto != SMB1_PROTO_NUMBER &&
++	    *proto != SMB2_PROTO_NUMBER &&
++	    *proto != SMB2_TRANSFORM_PROTO_NUM)
++		return false;
++
++	return true;
+ }
+ 
+ static bool supported_protocol(int idx)
+diff --git a/fs/nilfs2/btnode.c b/fs/nilfs2/btnode.c
+index e956f886a1a19..5710833ac1cc7 100644
+--- a/fs/nilfs2/btnode.c
++++ b/fs/nilfs2/btnode.c
+@@ -285,6 +285,14 @@ void nilfs_btnode_abort_change_key(struct address_space *btnc,
+ 	if (nbh == NULL) {	/* blocksize == pagesize */
+ 		xa_erase_irq(&btnc->i_pages, newkey);
+ 		unlock_page(ctxt->bh->b_page);
+-	} else
+-		brelse(nbh);
++	} else {
++		/*
++		 * When canceling a buffer that a prepare operation has
++		 * allocated to copy a node block to another location, use
++		 * nilfs_btnode_delete() to initialize and release the buffer
++		 * so that the buffer flags will not be in an inconsistent
++		 * state when it is reallocated.
++		 */
++		nilfs_btnode_delete(nbh);
++	}
+ }
+diff --git a/fs/nilfs2/sufile.c b/fs/nilfs2/sufile.c
+index dc359b56fdfac..2c6078a6b8ecb 100644
+--- a/fs/nilfs2/sufile.c
++++ b/fs/nilfs2/sufile.c
+@@ -779,6 +779,15 @@ int nilfs_sufile_resize(struct inode *sufile, __u64 newnsegs)
+ 			goto out_header;
+ 
+ 		sui->ncleansegs -= nsegs - newnsegs;
++
++		/*
++		 * If the sufile is successfully truncated, immediately adjust
++		 * the segment allocation space while locking the semaphore
++		 * "mi_sem" so that nilfs_sufile_alloc() never allocates
++		 * segments in the truncated space.
++		 */
++		sui->allocmax = newnsegs - 1;
++		sui->allocmin = 0;
+ 	}
+ 
+ 	kaddr = kmap_atomic(header_bh->b_page);
+diff --git a/fs/nilfs2/the_nilfs.c b/fs/nilfs2/the_nilfs.c
+index 2894152a6b25c..0f0667957c810 100644
+--- a/fs/nilfs2/the_nilfs.c
++++ b/fs/nilfs2/the_nilfs.c
+@@ -405,6 +405,18 @@ unsigned long nilfs_nrsvsegs(struct the_nilfs *nilfs, unsigned long nsegs)
+ 				  100));
+ }
+ 
++/**
++ * nilfs_max_segment_count - calculate the maximum number of segments
++ * @nilfs: nilfs object
++ */
++static u64 nilfs_max_segment_count(struct the_nilfs *nilfs)
++{
++	u64 max_count = U64_MAX;
++
++	do_div(max_count, nilfs->ns_blocks_per_segment);
++	return min_t(u64, max_count, ULONG_MAX);
++}
++
+ void nilfs_set_nsegments(struct the_nilfs *nilfs, unsigned long nsegs)
+ {
+ 	nilfs->ns_nsegments = nsegs;
+@@ -414,6 +426,8 @@ void nilfs_set_nsegments(struct the_nilfs *nilfs, unsigned long nsegs)
+ static int nilfs_store_disk_layout(struct the_nilfs *nilfs,
+ 				   struct nilfs_super_block *sbp)
+ {
++	u64 nsegments, nblocks;
++
+ 	if (le32_to_cpu(sbp->s_rev_level) < NILFS_MIN_SUPP_REV) {
+ 		nilfs_err(nilfs->ns_sb,
+ 			  "unsupported revision (superblock rev.=%d.%d, current rev.=%d.%d). Please check the version of mkfs.nilfs(2).",
+@@ -457,7 +471,34 @@ static int nilfs_store_disk_layout(struct the_nilfs *nilfs,
+ 		return -EINVAL;
+ 	}
+ 
+-	nilfs_set_nsegments(nilfs, le64_to_cpu(sbp->s_nsegments));
++	nsegments = le64_to_cpu(sbp->s_nsegments);
++	if (nsegments > nilfs_max_segment_count(nilfs)) {
++		nilfs_err(nilfs->ns_sb,
++			  "segment count %llu exceeds upper limit (%llu segments)",
++			  (unsigned long long)nsegments,
++			  (unsigned long long)nilfs_max_segment_count(nilfs));
++		return -EINVAL;
++	}
++
++	nblocks = sb_bdev_nr_blocks(nilfs->ns_sb);
++	if (nblocks) {
++		u64 min_block_count = nsegments * nilfs->ns_blocks_per_segment;
++		/*
++		 * To avoid failing to mount early device images without a
++		 * second superblock, exclude that block count from the
++		 * "min_block_count" calculation.
++		 */
++
++		if (nblocks < min_block_count) {
++			nilfs_err(nilfs->ns_sb,
++				  "total number of segment blocks %llu exceeds device size (%llu blocks)",
++				  (unsigned long long)min_block_count,
++				  (unsigned long long)nblocks);
++			return -EINVAL;
++		}
++	}
++
++	nilfs_set_nsegments(nilfs, nsegments);
+ 	nilfs->ns_crc_seed = le32_to_cpu(sbp->s_crc_seed);
+ 	return 0;
+ }
+diff --git a/fs/ocfs2/file.c b/fs/ocfs2/file.c
+index efb09de4343d2..b173c36bcab37 100644
+--- a/fs/ocfs2/file.c
++++ b/fs/ocfs2/file.c
+@@ -2100,14 +2100,20 @@ static long ocfs2_fallocate(struct file *file, int mode, loff_t offset,
+ 	struct ocfs2_space_resv sr;
+ 	int change_size = 1;
+ 	int cmd = OCFS2_IOC_RESVSP64;
++	int ret = 0;
+ 
+ 	if (mode & ~(FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE))
+ 		return -EOPNOTSUPP;
+ 	if (!ocfs2_writes_unwritten_extents(osb))
+ 		return -EOPNOTSUPP;
+ 
+-	if (mode & FALLOC_FL_KEEP_SIZE)
++	if (mode & FALLOC_FL_KEEP_SIZE) {
+ 		change_size = 0;
++	} else {
++		ret = inode_newsize_ok(inode, offset + len);
++		if (ret)
++			return ret;
++	}
+ 
+ 	if (mode & FALLOC_FL_PUNCH_HOLE)
+ 		cmd = OCFS2_IOC_UNRESVSP64;
+diff --git a/fs/ocfs2/super.c b/fs/ocfs2/super.c
+index 0b0e6a1321018..988d1c076861b 100644
+--- a/fs/ocfs2/super.c
++++ b/fs/ocfs2/super.c
+@@ -952,8 +952,10 @@ static void ocfs2_disable_quotas(struct ocfs2_super *osb)
+ 	for (type = 0; type < OCFS2_MAXQUOTAS; type++) {
+ 		if (!sb_has_quota_loaded(sb, type))
+ 			continue;
+-		oinfo = sb_dqinfo(sb, type)->dqi_priv;
+-		cancel_delayed_work_sync(&oinfo->dqi_sync_work);
++		if (!sb_has_quota_suspended(sb, type)) {
++			oinfo = sb_dqinfo(sb, type)->dqi_priv;
++			cancel_delayed_work_sync(&oinfo->dqi_sync_work);
++		}
+ 		inode = igrab(sb->s_dquot.files[type]);
+ 		/* Turn off quotas. This will remove all dquot structures from
+ 		 * memory and so they will be automatically synced to global
+diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
+index 40f9e1a2ebdd6..c69ec2dfefc2d 100644
+--- a/fs/userfaultfd.c
++++ b/fs/userfaultfd.c
+@@ -1429,6 +1429,8 @@ static int userfaultfd_register(struct userfaultfd_ctx *ctx,
+ 
+ 	vma_iter_set(&vmi, start);
+ 	prev = vma_prev(&vmi);
++	if (vma->vm_start < start)
++		prev = vma;
+ 
+ 	ret = 0;
+ 	for_each_vma_range(vmi, vma, end) {
+@@ -1595,6 +1597,9 @@ static int userfaultfd_unregister(struct userfaultfd_ctx *ctx,
+ 
+ 	vma_iter_set(&vmi, start);
+ 	prev = vma_prev(&vmi);
++	if (vma->vm_start < start)
++		prev = vma;
++
+ 	ret = 0;
+ 	for_each_vma_range(vmi, vma, end) {
+ 		cond_resched();
+diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h
+index 68a3183d5d589..33dbe941d070c 100644
+--- a/include/linux/mlx5/driver.h
++++ b/include/linux/mlx5/driver.h
+@@ -1233,6 +1233,18 @@ static inline u16 mlx5_core_max_vfs(const struct mlx5_core_dev *dev)
+ 	return dev->priv.sriov.max_vfs;
+ }
+ 
++static inline int mlx5_lag_is_lacp_owner(struct mlx5_core_dev *dev)
++{
++	/* LACP owner conditions:
++	 * 1) Function is physical.
++	 * 2) LAG is supported by FW.
++	 * 3) LAG is managed by driver (currently the only option).
++	 */
++	return  MLX5_CAP_GEN(dev, vport_group_manager) &&
++		   (MLX5_CAP_GEN(dev, num_lag_ports) > 1) &&
++		    MLX5_CAP_GEN(dev, lag_master);
++}
++
+ static inline int mlx5_get_gid_table_len(u16 param)
+ {
+ 	if (param > 4) {
+diff --git a/include/linux/soc/qcom/llcc-qcom.h b/include/linux/soc/qcom/llcc-qcom.h
+index ad1fd718169d9..93417ba1ead4a 100644
+--- a/include/linux/soc/qcom/llcc-qcom.h
++++ b/include/linux/soc/qcom/llcc-qcom.h
+@@ -69,9 +69,6 @@ struct llcc_slice_desc {
+ /**
+  * struct llcc_edac_reg_data - llcc edac registers data for each error type
+  * @name: Name of the error
+- * @synd_reg: Syndrome register address
+- * @count_status_reg: Status register address to read the error count
+- * @ways_status_reg: Status register address to read the error ways
+  * @reg_cnt: Number of registers
+  * @count_mask: Mask value to get the error count
+  * @ways_mask: Mask value to get the error ways
+@@ -80,9 +77,6 @@ struct llcc_slice_desc {
+  */
+ struct llcc_edac_reg_data {
+ 	char *name;
+-	u64 synd_reg;
+-	u64 count_status_reg;
+-	u64 ways_status_reg;
+ 	u32 reg_cnt;
+ 	u32 count_mask;
+ 	u32 ways_mask;
+@@ -120,7 +114,7 @@ struct llcc_edac_reg_offset {
+ 
+ /**
+  * struct llcc_drv_data - Data associated with the llcc driver
+- * @regmap: regmap associated with the llcc device
++ * @regmaps: regmaps associated with the llcc device
+  * @bcast_regmap: regmap associated with llcc broadcast offset
+  * @cfg: pointer to the data structure for slice configuration
+  * @edac_reg_offset: Offset of the LLCC EDAC registers
+@@ -129,12 +123,11 @@ struct llcc_edac_reg_offset {
+  * @max_slices: max slices as read from device tree
+  * @num_banks: Number of llcc banks
+  * @bitmap: Bit map to track the active slice ids
+- * @offsets: Pointer to the bank offsets array
+  * @ecc_irq: interrupt for llcc cache error detection and reporting
+  * @version: Indicates the LLCC version
+  */
+ struct llcc_drv_data {
+-	struct regmap *regmap;
++	struct regmap **regmaps;
+ 	struct regmap *bcast_regmap;
+ 	const struct llcc_slice_config *cfg;
+ 	const struct llcc_edac_reg_offset *edac_reg_offset;
+@@ -143,7 +136,6 @@ struct llcc_drv_data {
+ 	u32 max_slices;
+ 	u32 num_banks;
+ 	unsigned long *bitmap;
+-	u32 *offsets;
+ 	int ecc_irq;
+ 	u32 version;
+ };
+diff --git a/include/media/dvb_frontend.h b/include/media/dvb_frontend.h
+index 367d5381217b5..e7c44870f20de 100644
+--- a/include/media/dvb_frontend.h
++++ b/include/media/dvb_frontend.h
+@@ -686,10 +686,7 @@ struct dtv_frontend_properties {
+  * @id:			Frontend ID
+  * @exit:		Used to inform the DVB core that the frontend
+  *			thread should exit (usually, means that the hardware
+- *			got disconnected).
+- * @remove_mutex:	mutex that avoids a race condition between a callback
+- *			called when the hardware is disconnected and the
+- *			file_operations of dvb_frontend.
++ *			got disconnected.
+  */
+ 
+ struct dvb_frontend {
+@@ -707,7 +704,6 @@ struct dvb_frontend {
+ 	int (*callback)(void *adapter_priv, int component, int cmd, int arg);
+ 	int id;
+ 	unsigned int exit;
+-	struct mutex remove_mutex;
+ };
+ 
+ /**
+diff --git a/include/net/neighbour.h b/include/net/neighbour.h
+index 94a1599824d8f..794e45981891a 100644
+--- a/include/net/neighbour.h
++++ b/include/net/neighbour.h
+@@ -336,8 +336,6 @@ void neigh_table_init(int index, struct neigh_table *tbl);
+ int neigh_table_clear(int index, struct neigh_table *tbl);
+ struct neighbour *neigh_lookup(struct neigh_table *tbl, const void *pkey,
+ 			       struct net_device *dev);
+-struct neighbour *neigh_lookup_nodev(struct neigh_table *tbl, struct net *net,
+-				     const void *pkey);
+ struct neighbour *__neigh_create(struct neigh_table *tbl, const void *pkey,
+ 				 struct net_device *dev, bool want_ref);
+ static inline struct neighbour *neigh_create(struct neigh_table *tbl,
+diff --git a/include/net/netfilter/nf_flow_table.h b/include/net/netfilter/nf_flow_table.h
+index ebb28ec5b6faf..f37f9f34430c1 100644
+--- a/include/net/netfilter/nf_flow_table.h
++++ b/include/net/netfilter/nf_flow_table.h
+@@ -268,7 +268,7 @@ int flow_offload_route_init(struct flow_offload *flow,
+ 
+ int flow_offload_add(struct nf_flowtable *flow_table, struct flow_offload *flow);
+ void flow_offload_refresh(struct nf_flowtable *flow_table,
+-			  struct flow_offload *flow);
++			  struct flow_offload *flow, bool force);
+ 
+ struct flow_offload_tuple_rhash *flow_offload_lookup(struct nf_flowtable *flow_table,
+ 						     struct flow_offload_tuple *tuple);
+diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
+index 3eb7d20ddfc97..300bce4d67cec 100644
+--- a/include/net/netfilter/nf_tables.h
++++ b/include/net/netfilter/nf_tables.h
+@@ -462,7 +462,8 @@ struct nft_set_ops {
+ 					       const struct nft_set *set,
+ 					       const struct nft_set_elem *elem,
+ 					       unsigned int flags);
+-
++	void				(*commit)(const struct nft_set *set);
++	void				(*abort)(const struct nft_set *set);
+ 	u64				(*privsize)(const struct nlattr * const nla[],
+ 						    const struct nft_set_desc *desc);
+ 	bool				(*estimate)(const struct nft_set_desc *desc,
+@@ -557,6 +558,7 @@ struct nft_set {
+ 	u16				policy;
+ 	u16				udlen;
+ 	unsigned char			*udata;
++	struct list_head		pending_update;
+ 	/* runtime data below here */
+ 	const struct nft_set_ops	*ops ____cacheline_aligned;
+ 	u16				flags:14,
+diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
+index 27271f2b37cb3..12eadecf8cd05 100644
+--- a/include/net/sch_generic.h
++++ b/include/net/sch_generic.h
+@@ -137,6 +137,13 @@ static inline void qdisc_refcount_inc(struct Qdisc *qdisc)
+ 	refcount_inc(&qdisc->refcnt);
+ }
+ 
++static inline bool qdisc_refcount_dec_if_one(struct Qdisc *qdisc)
++{
++	if (qdisc->flags & TCQ_F_BUILTIN)
++		return true;
++	return refcount_dec_if_one(&qdisc->refcnt);
++}
++
+ /* Intended to be used by unlocked users, when concurrent qdisc release is
+  * possible.
+  */
+@@ -652,6 +659,7 @@ void dev_deactivate_many(struct list_head *head);
+ struct Qdisc *dev_graft_qdisc(struct netdev_queue *dev_queue,
+ 			      struct Qdisc *qdisc);
+ void qdisc_reset(struct Qdisc *qdisc);
++void qdisc_destroy(struct Qdisc *qdisc);
+ void qdisc_put(struct Qdisc *qdisc);
+ void qdisc_put_unlocked(struct Qdisc *qdisc);
+ void qdisc_tree_reduce_backlog(struct Qdisc *qdisc, int n, int len);
+diff --git a/include/rdma/ib_addr.h b/include/rdma/ib_addr.h
+index d808dc3d239e8..811a0f11d0dbe 100644
+--- a/include/rdma/ib_addr.h
++++ b/include/rdma/ib_addr.h
+@@ -194,29 +194,6 @@ static inline enum ib_mtu iboe_get_mtu(int mtu)
+ 		return 0;
+ }
+ 
+-static inline int iboe_get_rate(struct net_device *dev)
+-{
+-	struct ethtool_link_ksettings cmd;
+-	int err;
+-
+-	rtnl_lock();
+-	err = __ethtool_get_link_ksettings(dev, &cmd);
+-	rtnl_unlock();
+-	if (err)
+-		return IB_RATE_PORT_CURRENT;
+-
+-	if (cmd.base.speed >= 40000)
+-		return IB_RATE_40_GBPS;
+-	else if (cmd.base.speed >= 30000)
+-		return IB_RATE_30_GBPS;
+-	else if (cmd.base.speed >= 20000)
+-		return IB_RATE_20_GBPS;
+-	else if (cmd.base.speed >= 10000)
+-		return IB_RATE_10_GBPS;
+-	else
+-		return IB_RATE_PORT_CURRENT;
+-}
+-
+ static inline int rdma_link_local_addr(struct in6_addr *addr)
+ {
+ 	if (addr->s6_addr32[0] == htonl(0xfe800000) &&
+diff --git a/include/sound/soc-acpi.h b/include/sound/soc-acpi.h
+index b38fd25c57295..528279056b3ab 100644
+--- a/include/sound/soc-acpi.h
++++ b/include/sound/soc-acpi.h
+@@ -170,6 +170,7 @@ struct snd_soc_acpi_link_adr {
+ /* Descriptor for SST ASoC machine driver */
+ struct snd_soc_acpi_mach {
+ 	u8 id[ACPI_ID_LEN];
++	const char *uid;
+ 	const struct snd_soc_acpi_codecs *comp_ids;
+ 	const u32 link_mask;
+ 	const struct snd_soc_acpi_link_adr *links;
+diff --git a/include/sound/soc-dpcm.h b/include/sound/soc-dpcm.h
+index 1e7d09556fe3e..b7bc1865b9e4a 100644
+--- a/include/sound/soc-dpcm.h
++++ b/include/sound/soc-dpcm.h
+@@ -123,6 +123,10 @@ int snd_soc_dpcm_can_be_free_stop(struct snd_soc_pcm_runtime *fe,
+ int snd_soc_dpcm_can_be_params(struct snd_soc_pcm_runtime *fe,
+ 		struct snd_soc_pcm_runtime *be, int stream);
+ 
++/* can this BE perform prepare */
++int snd_soc_dpcm_can_be_prepared(struct snd_soc_pcm_runtime *fe,
++				 struct snd_soc_pcm_runtime *be, int stream);
++
+ /* is the current PCM operation for this FE ? */
+ int snd_soc_dpcm_fe_can_update(struct snd_soc_pcm_runtime *fe, int stream);
+ 
+diff --git a/include/uapi/linux/ethtool_netlink.h b/include/uapi/linux/ethtool_netlink.h
+index d39ce21381c5b..b8171e1ffd327 100644
+--- a/include/uapi/linux/ethtool_netlink.h
++++ b/include/uapi/linux/ethtool_netlink.h
+@@ -781,7 +781,7 @@ enum {
+ 
+ 	/* add new constants above here */
+ 	__ETHTOOL_A_STATS_GRP_CNT,
+-	ETHTOOL_A_STATS_GRP_MAX = (__ETHTOOL_A_STATS_CNT - 1)
++	ETHTOOL_A_STATS_GRP_MAX = (__ETHTOOL_A_STATS_GRP_CNT - 1)
+ };
+ 
+ enum {
+diff --git a/io_uring/net.c b/io_uring/net.c
+index 4040cf093318c..05fd285ac75ae 100644
+--- a/io_uring/net.c
++++ b/io_uring/net.c
+@@ -65,6 +65,7 @@ struct io_sr_msg {
+ 	u16				addr_len;
+ 	u16				buf_group;
+ 	void __user			*addr;
++	void __user			*msg_control;
+ 	/* used only for send zerocopy */
+ 	struct io_kiocb 		*notif;
+ };
+@@ -195,11 +196,15 @@ static int io_sendmsg_copy_hdr(struct io_kiocb *req,
+ 			       struct io_async_msghdr *iomsg)
+ {
+ 	struct io_sr_msg *sr = io_kiocb_to_cmd(req, struct io_sr_msg);
++	int ret;
+ 
+ 	iomsg->msg.msg_name = &iomsg->addr;
+ 	iomsg->free_iov = iomsg->fast_iov;
+-	return sendmsg_copy_msghdr(&iomsg->msg, sr->umsg, sr->msg_flags,
++	ret = sendmsg_copy_msghdr(&iomsg->msg, sr->umsg, sr->msg_flags,
+ 					&iomsg->free_iov);
++	/* save msg_control as sys_sendmsg() overwrites it */
++	sr->msg_control = iomsg->msg.msg_control;
++	return ret;
+ }
+ 
+ int io_send_prep_async(struct io_kiocb *req)
+@@ -297,6 +302,7 @@ int io_sendmsg(struct io_kiocb *req, unsigned int issue_flags)
+ 
+ 	if (req_has_async_data(req)) {
+ 		kmsg = req->async_data;
++		kmsg->msg.msg_control = sr->msg_control;
+ 	} else {
+ 		ret = io_sendmsg_copy_hdr(req, &iomsg);
+ 		if (ret)
+diff --git a/io_uring/sqpoll.c b/io_uring/sqpoll.c
+index 9db4bc1f521a3..5e329e3cd4706 100644
+--- a/io_uring/sqpoll.c
++++ b/io_uring/sqpoll.c
+@@ -255,9 +255,13 @@ static int io_sq_thread(void *data)
+ 			sqt_spin = true;
+ 
+ 		if (sqt_spin || !time_after(jiffies, timeout)) {
+-			cond_resched();
+ 			if (sqt_spin)
+ 				timeout = jiffies + sqd->sq_thread_idle;
++			if (unlikely(need_resched())) {
++				mutex_unlock(&sqd->lock);
++				cond_resched();
++				mutex_lock(&sqd->lock);
++			}
+ 			continue;
+ 		}
+ 
+diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c
+index 819f011f0a9cd..b86b907e566ca 100644
+--- a/kernel/bpf/cgroup.c
++++ b/kernel/bpf/cgroup.c
+@@ -173,11 +173,11 @@ void bpf_cgroup_atype_put(int cgroup_atype)
+ {
+ 	int i = cgroup_atype - CGROUP_LSM_START;
+ 
+-	mutex_lock(&cgroup_mutex);
++	cgroup_lock();
+ 	if (--cgroup_lsm_atype[i].refcnt <= 0)
+ 		cgroup_lsm_atype[i].attach_btf_id = 0;
+ 	WARN_ON_ONCE(cgroup_lsm_atype[i].refcnt < 0);
+-	mutex_unlock(&cgroup_mutex);
++	cgroup_unlock();
+ }
+ #else
+ static enum cgroup_bpf_attach_type
+@@ -282,7 +282,7 @@ static void cgroup_bpf_release(struct work_struct *work)
+ 
+ 	unsigned int atype;
+ 
+-	mutex_lock(&cgroup_mutex);
++	cgroup_lock();
+ 
+ 	for (atype = 0; atype < ARRAY_SIZE(cgrp->bpf.progs); atype++) {
+ 		struct hlist_head *progs = &cgrp->bpf.progs[atype];
+@@ -315,7 +315,7 @@ static void cgroup_bpf_release(struct work_struct *work)
+ 		bpf_cgroup_storage_free(storage);
+ 	}
+ 
+-	mutex_unlock(&cgroup_mutex);
++	cgroup_unlock();
+ 
+ 	for (p = cgroup_parent(cgrp); p; p = cgroup_parent(p))
+ 		cgroup_bpf_put(p);
+@@ -729,9 +729,9 @@ static int cgroup_bpf_attach(struct cgroup *cgrp,
+ {
+ 	int ret;
+ 
+-	mutex_lock(&cgroup_mutex);
++	cgroup_lock();
+ 	ret = __cgroup_bpf_attach(cgrp, prog, replace_prog, link, type, flags);
+-	mutex_unlock(&cgroup_mutex);
++	cgroup_unlock();
+ 	return ret;
+ }
+ 
+@@ -831,7 +831,7 @@ static int cgroup_bpf_replace(struct bpf_link *link, struct bpf_prog *new_prog,
+ 
+ 	cg_link = container_of(link, struct bpf_cgroup_link, link);
+ 
+-	mutex_lock(&cgroup_mutex);
++	cgroup_lock();
+ 	/* link might have been auto-released by dying cgroup, so fail */
+ 	if (!cg_link->cgroup) {
+ 		ret = -ENOLINK;
+@@ -843,7 +843,7 @@ static int cgroup_bpf_replace(struct bpf_link *link, struct bpf_prog *new_prog,
+ 	}
+ 	ret = __cgroup_bpf_replace(cg_link->cgroup, cg_link, new_prog);
+ out_unlock:
+-	mutex_unlock(&cgroup_mutex);
++	cgroup_unlock();
+ 	return ret;
+ }
+ 
+@@ -1009,9 +1009,9 @@ static int cgroup_bpf_detach(struct cgroup *cgrp, struct bpf_prog *prog,
+ {
+ 	int ret;
+ 
+-	mutex_lock(&cgroup_mutex);
++	cgroup_lock();
+ 	ret = __cgroup_bpf_detach(cgrp, prog, NULL, type);
+-	mutex_unlock(&cgroup_mutex);
++	cgroup_unlock();
+ 	return ret;
+ }
+ 
+@@ -1120,9 +1120,9 @@ static int cgroup_bpf_query(struct cgroup *cgrp, const union bpf_attr *attr,
+ {
+ 	int ret;
+ 
+-	mutex_lock(&cgroup_mutex);
++	cgroup_lock();
+ 	ret = __cgroup_bpf_query(cgrp, attr, uattr);
+-	mutex_unlock(&cgroup_mutex);
++	cgroup_unlock();
+ 	return ret;
+ }
+ 
+@@ -1189,11 +1189,11 @@ static void bpf_cgroup_link_release(struct bpf_link *link)
+ 	if (!cg_link->cgroup)
+ 		return;
+ 
+-	mutex_lock(&cgroup_mutex);
++	cgroup_lock();
+ 
+ 	/* re-check cgroup under lock again */
+ 	if (!cg_link->cgroup) {
+-		mutex_unlock(&cgroup_mutex);
++		cgroup_unlock();
+ 		return;
+ 	}
+ 
+@@ -1205,7 +1205,7 @@ static void bpf_cgroup_link_release(struct bpf_link *link)
+ 	cg = cg_link->cgroup;
+ 	cg_link->cgroup = NULL;
+ 
+-	mutex_unlock(&cgroup_mutex);
++	cgroup_unlock();
+ 
+ 	cgroup_put(cg);
+ }
+@@ -1232,10 +1232,10 @@ static void bpf_cgroup_link_show_fdinfo(const struct bpf_link *link,
+ 		container_of(link, struct bpf_cgroup_link, link);
+ 	u64 cg_id = 0;
+ 
+-	mutex_lock(&cgroup_mutex);
++	cgroup_lock();
+ 	if (cg_link->cgroup)
+ 		cg_id = cgroup_id(cg_link->cgroup);
+-	mutex_unlock(&cgroup_mutex);
++	cgroup_unlock();
+ 
+ 	seq_printf(seq,
+ 		   "cgroup_id:\t%llu\n"
+@@ -1251,10 +1251,10 @@ static int bpf_cgroup_link_fill_link_info(const struct bpf_link *link,
+ 		container_of(link, struct bpf_cgroup_link, link);
+ 	u64 cg_id = 0;
+ 
+-	mutex_lock(&cgroup_mutex);
++	cgroup_lock();
+ 	if (cg_link->cgroup)
+ 		cg_id = cgroup_id(cg_link->cgroup);
+-	mutex_unlock(&cgroup_mutex);
++	cgroup_unlock();
+ 
+ 	info->cgroup.cgroup_id = cg_id;
+ 	info->cgroup.attach_type = cg_link->type;
+diff --git a/kernel/bpf/cgroup_iter.c b/kernel/bpf/cgroup_iter.c
+index 06989d2788465..810378f04fbca 100644
+--- a/kernel/bpf/cgroup_iter.c
++++ b/kernel/bpf/cgroup_iter.c
+@@ -58,7 +58,7 @@ static void *cgroup_iter_seq_start(struct seq_file *seq, loff_t *pos)
+ {
+ 	struct cgroup_iter_priv *p = seq->private;
+ 
+-	mutex_lock(&cgroup_mutex);
++	cgroup_lock();
+ 
+ 	/* cgroup_iter doesn't support read across multiple sessions. */
+ 	if (*pos > 0) {
+@@ -89,7 +89,7 @@ static void cgroup_iter_seq_stop(struct seq_file *seq, void *v)
+ {
+ 	struct cgroup_iter_priv *p = seq->private;
+ 
+-	mutex_unlock(&cgroup_mutex);
++	cgroup_unlock();
+ 
+ 	/* pass NULL to the prog for post-processing */
+ 	if (!v) {
+diff --git a/kernel/bpf/local_storage.c b/kernel/bpf/local_storage.c
+index 66d8ce2ab5b34..d6f3b7ead2c09 100644
+--- a/kernel/bpf/local_storage.c
++++ b/kernel/bpf/local_storage.c
+@@ -333,14 +333,14 @@ static void cgroup_storage_map_free(struct bpf_map *_map)
+ 	struct list_head *storages = &map->list;
+ 	struct bpf_cgroup_storage *storage, *stmp;
+ 
+-	mutex_lock(&cgroup_mutex);
++	cgroup_lock();
+ 
+ 	list_for_each_entry_safe(storage, stmp, storages, list_map) {
+ 		bpf_cgroup_storage_unlink(storage);
+ 		bpf_cgroup_storage_free(storage);
+ 	}
+ 
+-	mutex_unlock(&cgroup_mutex);
++	cgroup_unlock();
+ 
+ 	WARN_ON(!RB_EMPTY_ROOT(&map->root));
+ 	WARN_ON(!list_empty(&map->list));
+diff --git a/kernel/cgroup/cgroup-v1.c b/kernel/cgroup/cgroup-v1.c
+index 52bb5a74a23b9..5407241dbb45f 100644
+--- a/kernel/cgroup/cgroup-v1.c
++++ b/kernel/cgroup/cgroup-v1.c
+@@ -58,7 +58,7 @@ int cgroup_attach_task_all(struct task_struct *from, struct task_struct *tsk)
+ 	struct cgroup_root *root;
+ 	int retval = 0;
+ 
+-	mutex_lock(&cgroup_mutex);
++	cgroup_lock();
+ 	cgroup_attach_lock(true);
+ 	for_each_root(root) {
+ 		struct cgroup *from_cgrp;
+@@ -72,7 +72,7 @@ int cgroup_attach_task_all(struct task_struct *from, struct task_struct *tsk)
+ 			break;
+ 	}
+ 	cgroup_attach_unlock(true);
+-	mutex_unlock(&cgroup_mutex);
++	cgroup_unlock();
+ 
+ 	return retval;
+ }
+@@ -106,9 +106,9 @@ int cgroup_transfer_tasks(struct cgroup *to, struct cgroup *from)
+ 	if (ret)
+ 		return ret;
+ 
+-	mutex_lock(&cgroup_mutex);
++	cgroup_lock();
+ 
+-	percpu_down_write(&cgroup_threadgroup_rwsem);
++	cgroup_attach_lock(true);
+ 
+ 	/* all tasks in @from are being moved, all csets are source */
+ 	spin_lock_irq(&css_set_lock);
+@@ -144,8 +144,8 @@ int cgroup_transfer_tasks(struct cgroup *to, struct cgroup *from)
+ 	} while (task && !ret);
+ out_err:
+ 	cgroup_migrate_finish(&mgctx);
+-	percpu_up_write(&cgroup_threadgroup_rwsem);
+-	mutex_unlock(&cgroup_mutex);
++	cgroup_attach_unlock(true);
++	cgroup_unlock();
+ 	return ret;
+ }
+ 
+@@ -847,13 +847,13 @@ static int cgroup1_rename(struct kernfs_node *kn, struct kernfs_node *new_parent
+ 	kernfs_break_active_protection(new_parent);
+ 	kernfs_break_active_protection(kn);
+ 
+-	mutex_lock(&cgroup_mutex);
++	cgroup_lock();
+ 
+ 	ret = kernfs_rename(kn, new_parent, new_name_str);
+ 	if (!ret)
+ 		TRACE_CGROUP_PATH(rename, cgrp);
+ 
+-	mutex_unlock(&cgroup_mutex);
++	cgroup_unlock();
+ 
+ 	kernfs_unbreak_active_protection(kn);
+ 	kernfs_unbreak_active_protection(new_parent);
+@@ -1119,7 +1119,7 @@ int cgroup1_reconfigure(struct fs_context *fc)
+ 	trace_cgroup_remount(root);
+ 
+  out_unlock:
+-	mutex_unlock(&cgroup_mutex);
++	cgroup_unlock();
+ 	return ret;
+ }
+ 
+@@ -1246,7 +1246,7 @@ int cgroup1_get_tree(struct fs_context *fc)
+ 	if (!ret && !percpu_ref_tryget_live(&ctx->root->cgrp.self.refcnt))
+ 		ret = 1;	/* restart */
+ 
+-	mutex_unlock(&cgroup_mutex);
++	cgroup_unlock();
+ 
+ 	if (!ret)
+ 		ret = cgroup_do_get_tree(fc);
+diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
+index 935e8121b21e6..fd57c1dca1ccf 100644
+--- a/kernel/cgroup/cgroup.c
++++ b/kernel/cgroup/cgroup.c
+@@ -1391,7 +1391,7 @@ static void cgroup_destroy_root(struct cgroup_root *root)
+ 	cgroup_favor_dynmods(root, false);
+ 	cgroup_exit_root_id(root);
+ 
+-	mutex_unlock(&cgroup_mutex);
++	cgroup_unlock();
+ 
+ 	cgroup_rstat_exit(cgrp);
+ 	kernfs_destroy_root(root->kf_root);
+@@ -1625,7 +1625,7 @@ void cgroup_kn_unlock(struct kernfs_node *kn)
+ 	else
+ 		cgrp = kn->parent->priv;
+ 
+-	mutex_unlock(&cgroup_mutex);
++	cgroup_unlock();
+ 
+ 	kernfs_unbreak_active_protection(kn);
+ 	cgroup_put(cgrp);
+@@ -1670,7 +1670,7 @@ struct cgroup *cgroup_kn_lock_live(struct kernfs_node *kn, bool drain_offline)
+ 	if (drain_offline)
+ 		cgroup_lock_and_drain_offline(cgrp);
+ 	else
+-		mutex_lock(&cgroup_mutex);
++		cgroup_lock();
+ 
+ 	if (!cgroup_is_dead(cgrp))
+ 		return cgrp;
+@@ -2167,13 +2167,13 @@ int cgroup_do_get_tree(struct fs_context *fc)
+ 		struct super_block *sb = fc->root->d_sb;
+ 		struct cgroup *cgrp;
+ 
+-		mutex_lock(&cgroup_mutex);
++		cgroup_lock();
+ 		spin_lock_irq(&css_set_lock);
+ 
+ 		cgrp = cset_cgroup_from_root(ctx->ns->root_cset, ctx->root);
+ 
+ 		spin_unlock_irq(&css_set_lock);
+-		mutex_unlock(&cgroup_mutex);
++		cgroup_unlock();
+ 
+ 		nsdentry = kernfs_node_dentry(cgrp->kn, sb);
+ 		dput(fc->root);
+@@ -2356,13 +2356,13 @@ int cgroup_path_ns(struct cgroup *cgrp, char *buf, size_t buflen,
+ {
+ 	int ret;
+ 
+-	mutex_lock(&cgroup_mutex);
++	cgroup_lock();
+ 	spin_lock_irq(&css_set_lock);
+ 
+ 	ret = cgroup_path_ns_locked(cgrp, buf, buflen, ns);
+ 
+ 	spin_unlock_irq(&css_set_lock);
+-	mutex_unlock(&cgroup_mutex);
++	cgroup_unlock();
+ 
+ 	return ret;
+ }
+@@ -2388,7 +2388,7 @@ int task_cgroup_path(struct task_struct *task, char *buf, size_t buflen)
+ 	int hierarchy_id = 1;
+ 	int ret;
+ 
+-	mutex_lock(&cgroup_mutex);
++	cgroup_lock();
+ 	spin_lock_irq(&css_set_lock);
+ 
+ 	root = idr_get_next(&cgroup_hierarchy_idr, &hierarchy_id);
+@@ -2402,7 +2402,7 @@ int task_cgroup_path(struct task_struct *task, char *buf, size_t buflen)
+ 	}
+ 
+ 	spin_unlock_irq(&css_set_lock);
+-	mutex_unlock(&cgroup_mutex);
++	cgroup_unlock();
+ 	return ret;
+ }
+ EXPORT_SYMBOL_GPL(task_cgroup_path);
+@@ -3111,7 +3111,7 @@ void cgroup_lock_and_drain_offline(struct cgroup *cgrp)
+ 	int ssid;
+ 
+ restart:
+-	mutex_lock(&cgroup_mutex);
++	cgroup_lock();
+ 
+ 	cgroup_for_each_live_descendant_post(dsct, d_css, cgrp) {
+ 		for_each_subsys(ss, ssid) {
+@@ -3125,7 +3125,7 @@ restart:
+ 			prepare_to_wait(&dsct->offline_waitq, &wait,
+ 					TASK_UNINTERRUPTIBLE);
+ 
+-			mutex_unlock(&cgroup_mutex);
++			cgroup_unlock();
+ 			schedule();
+ 			finish_wait(&dsct->offline_waitq, &wait);
+ 
+@@ -4374,9 +4374,9 @@ int cgroup_rm_cftypes(struct cftype *cfts)
+ 	if (!(cfts[0].flags & __CFTYPE_ADDED))
+ 		return -ENOENT;
+ 
+-	mutex_lock(&cgroup_mutex);
++	cgroup_lock();
+ 	ret = cgroup_rm_cftypes_locked(cfts);
+-	mutex_unlock(&cgroup_mutex);
++	cgroup_unlock();
+ 	return ret;
+ }
+ 
+@@ -4408,14 +4408,14 @@ static int cgroup_add_cftypes(struct cgroup_subsys *ss, struct cftype *cfts)
+ 	if (ret)
+ 		return ret;
+ 
+-	mutex_lock(&cgroup_mutex);
++	cgroup_lock();
+ 
+ 	list_add_tail(&cfts->node, &ss->cfts);
+ 	ret = cgroup_apply_cftypes(cfts, true);
+ 	if (ret)
+ 		cgroup_rm_cftypes_locked(cfts);
+ 
+-	mutex_unlock(&cgroup_mutex);
++	cgroup_unlock();
+ 	return ret;
+ }
+ 
+@@ -5385,7 +5385,7 @@ static void css_release_work_fn(struct work_struct *work)
+ 	struct cgroup_subsys *ss = css->ss;
+ 	struct cgroup *cgrp = css->cgroup;
+ 
+-	mutex_lock(&cgroup_mutex);
++	cgroup_lock();
+ 
+ 	css->flags |= CSS_RELEASED;
+ 	list_del_rcu(&css->sibling);
+@@ -5426,7 +5426,7 @@ static void css_release_work_fn(struct work_struct *work)
+ 					 NULL);
+ 	}
+ 
+-	mutex_unlock(&cgroup_mutex);
++	cgroup_unlock();
+ 
+ 	INIT_RCU_WORK(&css->destroy_rwork, css_free_rwork_fn);
+ 	queue_rcu_work(cgroup_destroy_wq, &css->destroy_rwork);
+@@ -5774,7 +5774,7 @@ static void css_killed_work_fn(struct work_struct *work)
+ 	struct cgroup_subsys_state *css =
+ 		container_of(work, struct cgroup_subsys_state, destroy_work);
+ 
+-	mutex_lock(&cgroup_mutex);
++	cgroup_lock();
+ 
+ 	do {
+ 		offline_css(css);
+@@ -5783,7 +5783,7 @@ static void css_killed_work_fn(struct work_struct *work)
+ 		css = css->parent;
+ 	} while (css && atomic_dec_and_test(&css->online_cnt));
+ 
+-	mutex_unlock(&cgroup_mutex);
++	cgroup_unlock();
+ }
+ 
+ /* css kill confirmation processing requires process context, bounce */
+@@ -5967,7 +5967,7 @@ static void __init cgroup_init_subsys(struct cgroup_subsys *ss, bool early)
+ 
+ 	pr_debug("Initializing cgroup subsys %s\n", ss->name);
+ 
+-	mutex_lock(&cgroup_mutex);
++	cgroup_lock();
+ 
+ 	idr_init(&ss->css_idr);
+ 	INIT_LIST_HEAD(&ss->cfts);
+@@ -6011,7 +6011,7 @@ static void __init cgroup_init_subsys(struct cgroup_subsys *ss, bool early)
+ 
+ 	BUG_ON(online_css(css));
+ 
+-	mutex_unlock(&cgroup_mutex);
++	cgroup_unlock();
+ }
+ 
+ /**
+@@ -6071,7 +6071,7 @@ int __init cgroup_init(void)
+ 
+ 	get_user_ns(init_cgroup_ns.user_ns);
+ 
+-	mutex_lock(&cgroup_mutex);
++	cgroup_lock();
+ 
+ 	/*
+ 	 * Add init_css_set to the hash table so that dfl_root can link to
+@@ -6082,7 +6082,7 @@ int __init cgroup_init(void)
+ 
+ 	BUG_ON(cgroup_setup_root(&cgrp_dfl_root, 0));
+ 
+-	mutex_unlock(&cgroup_mutex);
++	cgroup_unlock();
+ 
+ 	for_each_subsys(ss, ssid) {
+ 		if (ss->early_init) {
+@@ -6134,9 +6134,9 @@ int __init cgroup_init(void)
+ 		if (ss->bind)
+ 			ss->bind(init_css_set.subsys[ssid]);
+ 
+-		mutex_lock(&cgroup_mutex);
++		cgroup_lock();
+ 		css_populate_dir(init_css_set.subsys[ssid]);
+-		mutex_unlock(&cgroup_mutex);
++		cgroup_unlock();
+ 	}
+ 
+ 	/* init_css_set.subsys[] has been updated, re-hash */
+@@ -6241,7 +6241,7 @@ int proc_cgroup_show(struct seq_file *m, struct pid_namespace *ns,
+ 	if (!buf)
+ 		goto out;
+ 
+-	mutex_lock(&cgroup_mutex);
++	cgroup_lock();
+ 	spin_lock_irq(&css_set_lock);
+ 
+ 	for_each_root(root) {
+@@ -6296,7 +6296,7 @@ int proc_cgroup_show(struct seq_file *m, struct pid_namespace *ns,
+ 	retval = 0;
+ out_unlock:
+ 	spin_unlock_irq(&css_set_lock);
+-	mutex_unlock(&cgroup_mutex);
++	cgroup_unlock();
+ 	kfree(buf);
+ out:
+ 	return retval;
+@@ -6380,7 +6380,7 @@ static int cgroup_css_set_fork(struct kernel_clone_args *kargs)
+ 	struct file *f;
+ 
+ 	if (kargs->flags & CLONE_INTO_CGROUP)
+-		mutex_lock(&cgroup_mutex);
++		cgroup_lock();
+ 
+ 	cgroup_threadgroup_change_begin(current);
+ 
+@@ -6455,7 +6455,7 @@ static int cgroup_css_set_fork(struct kernel_clone_args *kargs)
+ 
+ err:
+ 	cgroup_threadgroup_change_end(current);
+-	mutex_unlock(&cgroup_mutex);
++	cgroup_unlock();
+ 	if (f)
+ 		fput(f);
+ 	if (dst_cgrp)
+@@ -6476,19 +6476,18 @@ err:
+ static void cgroup_css_set_put_fork(struct kernel_clone_args *kargs)
+ 	__releases(&cgroup_threadgroup_rwsem) __releases(&cgroup_mutex)
+ {
+-	cgroup_threadgroup_change_end(current);
+-
+-	if (kargs->flags & CLONE_INTO_CGROUP) {
+-		struct cgroup *cgrp = kargs->cgrp;
+-		struct css_set *cset = kargs->cset;
++	struct cgroup *cgrp = kargs->cgrp;
++	struct css_set *cset = kargs->cset;
+ 
+-		mutex_unlock(&cgroup_mutex);
++	cgroup_threadgroup_change_end(current);
+ 
+-		if (cset) {
+-			put_css_set(cset);
+-			kargs->cset = NULL;
+-		}
++	if (cset) {
++		put_css_set(cset);
++		kargs->cset = NULL;
++	}
+ 
++	if (kargs->flags & CLONE_INTO_CGROUP) {
++		cgroup_unlock();
+ 		if (cgrp) {
+ 			cgroup_put(cgrp);
+ 			kargs->cgrp = NULL;
+diff --git a/kernel/kexec_file.c b/kernel/kexec_file.c
+index f1a0e4e3fb5cd..c7a0e51a6d87f 100644
+--- a/kernel/kexec_file.c
++++ b/kernel/kexec_file.c
+@@ -901,10 +901,22 @@ static int kexec_purgatory_setup_sechdrs(struct purgatory_info *pi,
+ 		}
+ 
+ 		offset = ALIGN(offset, align);
++
++		/*
++		 * Check if the segment contains the entry point, if so,
++		 * calculate the value of image->start based on it.
++		 * If the compiler has produced more than one .text section
++		 * (Eg: .text.hot), they are generally after the main .text
++		 * section, and they shall not be used to calculate
++		 * image->start. So do not re-calculate image->start if it
++		 * is not set to the initial value, and warn the user so they
++		 * have a chance to fix their purgatory's linker script.
++		 */
+ 		if (sechdrs[i].sh_flags & SHF_EXECINSTR &&
+ 		    pi->ehdr->e_entry >= sechdrs[i].sh_addr &&
+ 		    pi->ehdr->e_entry < (sechdrs[i].sh_addr
+-					 + sechdrs[i].sh_size)) {
++					 + sechdrs[i].sh_size) &&
++		    !WARN_ON(kbuf->image->start != pi->ehdr->e_entry)) {
+ 			kbuf->image->start -= sechdrs[i].sh_addr;
+ 			kbuf->image->start += kbuf->mem + offset;
+ 		}
+diff --git a/mm/damon/core.c b/mm/damon/core.c
+index d9ef62047bf5f..91cff7f2997ef 100644
+--- a/mm/damon/core.c
++++ b/mm/damon/core.c
+@@ -551,6 +551,8 @@ int damon_set_attrs(struct damon_ctx *ctx, struct damon_attrs *attrs)
+ 		return -EINVAL;
+ 	if (attrs->min_nr_regions > attrs->max_nr_regions)
+ 		return -EINVAL;
++	if (attrs->sample_interval > attrs->aggr_interval)
++		return -EINVAL;
+ 
+ 	damon_update_monitoring_results(ctx, attrs);
+ 	ctx->attrs = *attrs;
+diff --git a/mm/gup_test.c b/mm/gup_test.c
+index 8ae7307a1bb6e..c0421b786dcd4 100644
+--- a/mm/gup_test.c
++++ b/mm/gup_test.c
+@@ -381,6 +381,7 @@ static int gup_test_release(struct inode *inode, struct file *file)
+ static const struct file_operations gup_test_fops = {
+ 	.open = nonseekable_open,
+ 	.unlocked_ioctl = gup_test_ioctl,
++	.compat_ioctl = compat_ptr_ioctl,
+ 	.release = gup_test_release,
+ };
+ 
+diff --git a/mm/zswap.c b/mm/zswap.c
+index 5d5977c9ea45b..cca0c3100edf8 100644
+--- a/mm/zswap.c
++++ b/mm/zswap.c
+@@ -1141,9 +1141,16 @@ static int zswap_frontswap_store(unsigned type, pgoff_t offset,
+ 		goto reject;
+ 	}
+ 
++	/*
++	 * XXX: zswap reclaim does not work with cgroups yet. Without a
++	 * cgroup-aware entry LRU, we will push out entries system-wide based on
++	 * local cgroup limits.
++	 */
+ 	objcg = get_obj_cgroup_from_page(page);
+-	if (objcg && !obj_cgroup_may_zswap(objcg))
+-		goto shrink;
++	if (objcg && !obj_cgroup_may_zswap(objcg)) {
++		ret = -ENOMEM;
++		goto reject;
++	}
+ 
+ 	/* reclaim space if needed */
+ 	if (zswap_is_full()) {
+diff --git a/net/core/neighbour.c b/net/core/neighbour.c
+index 6798f6d2423b9..0116b0ff91a78 100644
+--- a/net/core/neighbour.c
++++ b/net/core/neighbour.c
+@@ -627,37 +627,6 @@ struct neighbour *neigh_lookup(struct neigh_table *tbl, const void *pkey,
+ }
+ EXPORT_SYMBOL(neigh_lookup);
+ 
+-struct neighbour *neigh_lookup_nodev(struct neigh_table *tbl, struct net *net,
+-				     const void *pkey)
+-{
+-	struct neighbour *n;
+-	unsigned int key_len = tbl->key_len;
+-	u32 hash_val;
+-	struct neigh_hash_table *nht;
+-
+-	NEIGH_CACHE_STAT_INC(tbl, lookups);
+-
+-	rcu_read_lock_bh();
+-	nht = rcu_dereference_bh(tbl->nht);
+-	hash_val = tbl->hash(pkey, NULL, nht->hash_rnd) >> (32 - nht->hash_shift);
+-
+-	for (n = rcu_dereference_bh(nht->hash_buckets[hash_val]);
+-	     n != NULL;
+-	     n = rcu_dereference_bh(n->next)) {
+-		if (!memcmp(n->primary_key, pkey, key_len) &&
+-		    net_eq(dev_net(n->dev), net)) {
+-			if (!refcount_inc_not_zero(&n->refcnt))
+-				n = NULL;
+-			NEIGH_CACHE_STAT_INC(tbl, hits);
+-			break;
+-		}
+-	}
+-
+-	rcu_read_unlock_bh();
+-	return n;
+-}
+-EXPORT_SYMBOL(neigh_lookup_nodev);
+-
+ static struct neighbour *
+ ___neigh_create(struct neigh_table *tbl, const void *pkey,
+ 		struct net_device *dev, u32 flags,
+diff --git a/net/ipv6/ping.c b/net/ipv6/ping.c
+index 808983bc2ec9f..4651aaf70db4f 100644
+--- a/net/ipv6/ping.c
++++ b/net/ipv6/ping.c
+@@ -114,7 +114,8 @@ static int ping_v6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ 	addr_type = ipv6_addr_type(daddr);
+ 	if ((__ipv6_addr_needs_scope_id(addr_type) && !oif) ||
+ 	    (addr_type & IPV6_ADDR_MAPPED) ||
+-	    (oif && sk->sk_bound_dev_if && oif != sk->sk_bound_dev_if))
++	    (oif && sk->sk_bound_dev_if && oif != sk->sk_bound_dev_if &&
++	     l3mdev_master_ifindex_by_index(sock_net(sk), oif) != sk->sk_bound_dev_if))
+ 		return -EINVAL;
+ 
+ 	ipcm6_init_sk(&ipc6, np);
+diff --git a/net/mac80211/cfg.c b/net/mac80211/cfg.c
+index 5ddbe0c8cfaa1..2c4689a1bf064 100644
+--- a/net/mac80211/cfg.c
++++ b/net/mac80211/cfg.c
+@@ -4778,11 +4778,16 @@ static int ieee80211_add_intf_link(struct wiphy *wiphy,
+ 				   unsigned int link_id)
+ {
+ 	struct ieee80211_sub_if_data *sdata = IEEE80211_WDEV_TO_SUB_IF(wdev);
++	int res;
+ 
+ 	if (wdev->use_4addr)
+ 		return -EOPNOTSUPP;
+ 
+-	return ieee80211_vif_set_links(sdata, wdev->valid_links);
++	mutex_lock(&sdata->local->mtx);
++	res = ieee80211_vif_set_links(sdata, wdev->valid_links);
++	mutex_unlock(&sdata->local->mtx);
++
++	return res;
+ }
+ 
+ static void ieee80211_del_intf_link(struct wiphy *wiphy,
+@@ -4791,7 +4796,9 @@ static void ieee80211_del_intf_link(struct wiphy *wiphy,
+ {
+ 	struct ieee80211_sub_if_data *sdata = IEEE80211_WDEV_TO_SUB_IF(wdev);
+ 
++	mutex_lock(&sdata->local->mtx);
+ 	ieee80211_vif_set_links(sdata, wdev->valid_links);
++	mutex_unlock(&sdata->local->mtx);
+ }
+ 
+ static int sta_add_link_station(struct ieee80211_local *local,
+diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h
+index eba7ae63fac45..347030b9bb9e3 100644
+--- a/net/mac80211/ieee80211_i.h
++++ b/net/mac80211/ieee80211_i.h
+@@ -2272,7 +2272,7 @@ ieee802_11_parse_elems(const u8 *start, size_t len, bool action,
+ 	return ieee802_11_parse_elems_crc(start, len, action, 0, 0, bss);
+ }
+ 
+-void ieee80211_fragment_element(struct sk_buff *skb, u8 *len_pos);
++void ieee80211_fragment_element(struct sk_buff *skb, u8 *len_pos, u8 frag_id);
+ 
+ extern const int ieee802_1d_to_ac[8];
+ 
+diff --git a/net/mac80211/link.c b/net/mac80211/link.c
+index 8c8869cc1fb4c..aab4d7b4def24 100644
+--- a/net/mac80211/link.c
++++ b/net/mac80211/link.c
+@@ -2,7 +2,7 @@
+ /*
+  * MLO link handling
+  *
+- * Copyright (C) 2022 Intel Corporation
++ * Copyright (C) 2022-2023 Intel Corporation
+  */
+ #include <linux/slab.h>
+ #include <linux/kernel.h>
+@@ -404,6 +404,7 @@ static int _ieee80211_set_active_links(struct ieee80211_sub_if_data *sdata,
+ 						 IEEE80211_CHANCTX_SHARED);
+ 		WARN_ON_ONCE(ret);
+ 
++		ieee80211_mgd_set_link_qos_params(link);
+ 		ieee80211_link_info_change_notify(sdata, link,
+ 						  BSS_CHANGED_ERP_CTS_PROT |
+ 						  BSS_CHANGED_ERP_PREAMBLE |
+@@ -418,7 +419,6 @@ static int _ieee80211_set_active_links(struct ieee80211_sub_if_data *sdata,
+ 						  BSS_CHANGED_TWT |
+ 						  BSS_CHANGED_HE_OBSS_PD |
+ 						  BSS_CHANGED_HE_BSS_COLOR);
+-		ieee80211_mgd_set_link_qos_params(link);
+ 	}
+ 
+ 	old_active = sdata->vif.active_links;
+diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
+index 7a970b6dda640..d28a35c538bac 100644
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -1372,10 +1372,11 @@ static void ieee80211_assoc_add_ml_elem(struct ieee80211_sub_if_data *sdata,
+ 		ieee80211_add_non_inheritance_elem(skb, outer_present_elems,
+ 						   link_present_elems);
+ 
+-		ieee80211_fragment_element(skb, subelem_len);
++		ieee80211_fragment_element(skb, subelem_len,
++					   IEEE80211_MLE_SUBELEM_FRAGMENT);
+ 	}
+ 
+-	ieee80211_fragment_element(skb, ml_elem_len);
++	ieee80211_fragment_element(skb, ml_elem_len, WLAN_EID_FRAGMENT);
+ }
+ 
+ static int ieee80211_send_assoc(struct ieee80211_sub_if_data *sdata)
+diff --git a/net/mac80211/util.c b/net/mac80211/util.c
+index d7b382866b260..1a0d38cd46337 100644
+--- a/net/mac80211/util.c
++++ b/net/mac80211/util.c
+@@ -4955,7 +4955,7 @@ u8 *ieee80211_ie_build_eht_cap(u8 *pos,
+ 	return pos;
+ }
+ 
+-void ieee80211_fragment_element(struct sk_buff *skb, u8 *len_pos)
++void ieee80211_fragment_element(struct sk_buff *skb, u8 *len_pos, u8 frag_id)
+ {
+ 	unsigned int elem_len;
+ 
+@@ -4975,7 +4975,7 @@ void ieee80211_fragment_element(struct sk_buff *skb, u8 *len_pos)
+ 		memmove(len_pos + 255 + 3, len_pos + 255 + 1, elem_len);
+ 		/* place the fragment ID */
+ 		len_pos += 255 + 1;
+-		*len_pos = WLAN_EID_FRAGMENT;
++		*len_pos = frag_id;
+ 		/* and point to fragment length to update later */
+ 		len_pos++;
+ 	}
+diff --git a/net/netfilter/nf_flow_table_core.c b/net/netfilter/nf_flow_table_core.c
+index 04bd0ed4d2ae7..b0ef48b21dcb4 100644
+--- a/net/netfilter/nf_flow_table_core.c
++++ b/net/netfilter/nf_flow_table_core.c
+@@ -317,12 +317,12 @@ int flow_offload_add(struct nf_flowtable *flow_table, struct flow_offload *flow)
+ EXPORT_SYMBOL_GPL(flow_offload_add);
+ 
+ void flow_offload_refresh(struct nf_flowtable *flow_table,
+-			  struct flow_offload *flow)
++			  struct flow_offload *flow, bool force)
+ {
+ 	u32 timeout;
+ 
+ 	timeout = nf_flowtable_time_stamp + flow_offload_get_timeout(flow);
+-	if (timeout - READ_ONCE(flow->timeout) > HZ)
++	if (force || timeout - READ_ONCE(flow->timeout) > HZ)
+ 		WRITE_ONCE(flow->timeout, timeout);
+ 	else
+ 		return;
+@@ -334,6 +334,12 @@ void flow_offload_refresh(struct nf_flowtable *flow_table,
+ }
+ EXPORT_SYMBOL_GPL(flow_offload_refresh);
+ 
++static bool nf_flow_is_outdated(const struct flow_offload *flow)
++{
++	return test_bit(IPS_SEEN_REPLY_BIT, &flow->ct->status) &&
++		!test_bit(NF_FLOW_HW_ESTABLISHED, &flow->flags);
++}
++
+ static inline bool nf_flow_has_expired(const struct flow_offload *flow)
+ {
+ 	return nf_flow_timeout_delta(flow->timeout) <= 0;
+@@ -423,7 +429,8 @@ static void nf_flow_offload_gc_step(struct nf_flowtable *flow_table,
+ 				    struct flow_offload *flow, void *data)
+ {
+ 	if (nf_flow_has_expired(flow) ||
+-	    nf_ct_is_dying(flow->ct))
++	    nf_ct_is_dying(flow->ct) ||
++	    nf_flow_is_outdated(flow))
+ 		flow_offload_teardown(flow);
+ 
+ 	if (test_bit(NF_FLOW_TEARDOWN, &flow->flags)) {
+diff --git a/net/netfilter/nf_flow_table_ip.c b/net/netfilter/nf_flow_table_ip.c
+index 19efba1e51ef9..3bbaf9c7ea46a 100644
+--- a/net/netfilter/nf_flow_table_ip.c
++++ b/net/netfilter/nf_flow_table_ip.c
+@@ -384,7 +384,7 @@ nf_flow_offload_ip_hook(void *priv, struct sk_buff *skb,
+ 	if (skb_try_make_writable(skb, thoff + hdrsize))
+ 		return NF_DROP;
+ 
+-	flow_offload_refresh(flow_table, flow);
++	flow_offload_refresh(flow_table, flow, false);
+ 
+ 	nf_flow_encap_pop(skb, tuplehash);
+ 	thoff -= offset;
+@@ -650,7 +650,7 @@ nf_flow_offload_ipv6_hook(void *priv, struct sk_buff *skb,
+ 	if (skb_try_make_writable(skb, thoff + hdrsize))
+ 		return NF_DROP;
+ 
+-	flow_offload_refresh(flow_table, flow);
++	flow_offload_refresh(flow_table, flow, false);
+ 
+ 	nf_flow_encap_pop(skb, tuplehash);
+ 
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 368aeabd8f8f1..8f63514656a17 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -3781,7 +3781,8 @@ err_destroy_flow_rule:
+ 	if (flow)
+ 		nft_flow_rule_destroy(flow);
+ err_release_rule:
+-	nf_tables_rule_release(&ctx, rule);
++	nft_rule_expr_deactivate(&ctx, rule, NFT_TRANS_PREPARE);
++	nf_tables_rule_destroy(&ctx, rule);
+ err_release_expr:
+ 	for (i = 0; i < n; i++) {
+ 		if (expr_info[i].ops) {
+@@ -4850,6 +4851,7 @@ static int nf_tables_newset(struct sk_buff *skb, const struct nfnl_info *info,
+ 
+ 	set->num_exprs = num_exprs;
+ 	set->handle = nf_tables_alloc_handle(table);
++	INIT_LIST_HEAD(&set->pending_update);
+ 
+ 	err = nft_trans_set_add(&ctx, NFT_MSG_NEWSET, set);
+ 	if (err < 0)
+@@ -9190,10 +9192,25 @@ static void nf_tables_commit_audit_log(struct list_head *adl, u32 generation)
+ 	}
+ }
+ 
++static void nft_set_commit_update(struct list_head *set_update_list)
++{
++	struct nft_set *set, *next;
++
++	list_for_each_entry_safe(set, next, set_update_list, pending_update) {
++		list_del_init(&set->pending_update);
++
++		if (!set->ops->commit)
++			continue;
++
++		set->ops->commit(set);
++	}
++}
++
+ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
+ {
+ 	struct nftables_pernet *nft_net = nft_pernet(net);
+ 	struct nft_trans *trans, *next;
++	LIST_HEAD(set_update_list);
+ 	struct nft_trans_elem *te;
+ 	struct nft_chain *chain;
+ 	struct nft_table *table;
+@@ -9359,6 +9376,11 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
+ 			nf_tables_setelem_notify(&trans->ctx, te->set,
+ 						 &te->elem,
+ 						 NFT_MSG_NEWSETELEM);
++			if (te->set->ops->commit &&
++			    list_empty(&te->set->pending_update)) {
++				list_add_tail(&te->set->pending_update,
++					      &set_update_list);
++			}
+ 			nft_trans_destroy(trans);
+ 			break;
+ 		case NFT_MSG_DELSETELEM:
+@@ -9373,6 +9395,11 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
+ 				atomic_dec(&te->set->nelems);
+ 				te->set->ndeact--;
+ 			}
++			if (te->set->ops->commit &&
++			    list_empty(&te->set->pending_update)) {
++				list_add_tail(&te->set->pending_update,
++					      &set_update_list);
++			}
+ 			break;
+ 		case NFT_MSG_NEWOBJ:
+ 			if (nft_trans_obj_update(trans)) {
+@@ -9435,6 +9462,8 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
+ 		}
+ 	}
+ 
++	nft_set_commit_update(&set_update_list);
++
+ 	nft_commit_notify(net, NETLINK_CB(skb).portid);
+ 	nf_tables_gen_notify(net, skb, NFT_MSG_NEWGEN);
+ 	nf_tables_commit_audit_log(&adl, nft_net->base_seq);
+@@ -9494,10 +9523,25 @@ static void nf_tables_abort_release(struct nft_trans *trans)
+ 	kfree(trans);
+ }
+ 
++static void nft_set_abort_update(struct list_head *set_update_list)
++{
++	struct nft_set *set, *next;
++
++	list_for_each_entry_safe(set, next, set_update_list, pending_update) {
++		list_del_init(&set->pending_update);
++
++		if (!set->ops->abort)
++			continue;
++
++		set->ops->abort(set);
++	}
++}
++
+ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
+ {
+ 	struct nftables_pernet *nft_net = nft_pernet(net);
+ 	struct nft_trans *trans, *next;
++	LIST_HEAD(set_update_list);
+ 	struct nft_trans_elem *te;
+ 
+ 	if (action == NFNL_ABORT_VALIDATE &&
+@@ -9602,6 +9646,12 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
+ 			nft_setelem_remove(net, te->set, &te->elem);
+ 			if (!nft_setelem_is_catchall(te->set, &te->elem))
+ 				atomic_dec(&te->set->nelems);
++
++			if (te->set->ops->abort &&
++			    list_empty(&te->set->pending_update)) {
++				list_add_tail(&te->set->pending_update,
++					      &set_update_list);
++			}
+ 			break;
+ 		case NFT_MSG_DELSETELEM:
+ 		case NFT_MSG_DESTROYSETELEM:
+@@ -9612,6 +9662,11 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
+ 			if (!nft_setelem_is_catchall(te->set, &te->elem))
+ 				te->set->ndeact--;
+ 
++			if (te->set->ops->abort &&
++			    list_empty(&te->set->pending_update)) {
++				list_add_tail(&te->set->pending_update,
++					      &set_update_list);
++			}
+ 			nft_trans_destroy(trans);
+ 			break;
+ 		case NFT_MSG_NEWOBJ:
+@@ -9654,6 +9709,8 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
+ 		}
+ 	}
+ 
++	nft_set_abort_update(&set_update_list);
++
+ 	synchronize_rcu();
+ 
+ 	list_for_each_entry_safe_reverse(trans, next,
+diff --git a/net/netfilter/nfnetlink.c b/net/netfilter/nfnetlink.c
+index ae7146475d17a..c9fbe0f707b5f 100644
+--- a/net/netfilter/nfnetlink.c
++++ b/net/netfilter/nfnetlink.c
+@@ -533,7 +533,8 @@ ack:
+ 			 * processed, this avoids that the same error is
+ 			 * reported several times when replaying the batch.
+ 			 */
+-			if (nfnl_err_add(&err_list, nlh, err, &extack) < 0) {
++			if (err == -ENOMEM ||
++			    nfnl_err_add(&err_list, nlh, err, &extack) < 0) {
+ 				/* We failed to enqueue an error, reset the
+ 				 * list of errors and send OOM to userspace
+ 				 * pointing to the batch header.
+diff --git a/net/netfilter/nft_set_pipapo.c b/net/netfilter/nft_set_pipapo.c
+index 06d46d1826347..15e451dc3fc46 100644
+--- a/net/netfilter/nft_set_pipapo.c
++++ b/net/netfilter/nft_set_pipapo.c
+@@ -1600,17 +1600,10 @@ static void pipapo_free_fields(struct nft_pipapo_match *m)
+ 	}
+ }
+ 
+-/**
+- * pipapo_reclaim_match - RCU callback to free fields from old matching data
+- * @rcu:	RCU head
+- */
+-static void pipapo_reclaim_match(struct rcu_head *rcu)
++static void pipapo_free_match(struct nft_pipapo_match *m)
+ {
+-	struct nft_pipapo_match *m;
+ 	int i;
+ 
+-	m = container_of(rcu, struct nft_pipapo_match, rcu);
+-
+ 	for_each_possible_cpu(i)
+ 		kfree(*per_cpu_ptr(m->scratch, i));
+ 
+@@ -1625,7 +1618,19 @@ static void pipapo_reclaim_match(struct rcu_head *rcu)
+ }
+ 
+ /**
+- * pipapo_commit() - Replace lookup data with current working copy
++ * pipapo_reclaim_match - RCU callback to free fields from old matching data
++ * @rcu:	RCU head
++ */
++static void pipapo_reclaim_match(struct rcu_head *rcu)
++{
++	struct nft_pipapo_match *m;
++
++	m = container_of(rcu, struct nft_pipapo_match, rcu);
++	pipapo_free_match(m);
++}
++
++/**
++ * nft_pipapo_commit() - Replace lookup data with current working copy
+  * @set:	nftables API set representation
+  *
+  * While at it, check if we should perform garbage collection on the working
+@@ -1635,7 +1640,7 @@ static void pipapo_reclaim_match(struct rcu_head *rcu)
+  * We also need to create a new working copy for subsequent insertions and
+  * deletions.
+  */
+-static void pipapo_commit(const struct nft_set *set)
++static void nft_pipapo_commit(const struct nft_set *set)
+ {
+ 	struct nft_pipapo *priv = nft_set_priv(set);
+ 	struct nft_pipapo_match *new_clone, *old;
+@@ -1660,6 +1665,26 @@ static void pipapo_commit(const struct nft_set *set)
+ 	priv->clone = new_clone;
+ }
+ 
++static void nft_pipapo_abort(const struct nft_set *set)
++{
++	struct nft_pipapo *priv = nft_set_priv(set);
++	struct nft_pipapo_match *new_clone, *m;
++
++	if (!priv->dirty)
++		return;
++
++	m = rcu_dereference(priv->match);
++
++	new_clone = pipapo_clone(m);
++	if (IS_ERR(new_clone))
++		return;
++
++	priv->dirty = false;
++
++	pipapo_free_match(priv->clone);
++	priv->clone = new_clone;
++}
++
+ /**
+  * nft_pipapo_activate() - Mark element reference as active given key, commit
+  * @net:	Network namespace
+@@ -1667,8 +1692,7 @@ static void pipapo_commit(const struct nft_set *set)
+  * @elem:	nftables API element representation containing key data
+  *
+  * On insertion, elements are added to a copy of the matching data currently
+- * in use for lookups, and not directly inserted into current lookup data, so
+- * we'll take care of that by calling pipapo_commit() here. Both
++ * in use for lookups, and not directly inserted into current lookup data. Both
+  * nft_pipapo_insert() and nft_pipapo_activate() are called once for each
+  * element, hence we can't purpose either one as a real commit operation.
+  */
+@@ -1684,8 +1708,6 @@ static void nft_pipapo_activate(const struct net *net,
+ 
+ 	nft_set_elem_change_active(net, set, &e->ext);
+ 	nft_set_elem_clear_busy(&e->ext);
+-
+-	pipapo_commit(set);
+ }
+ 
+ /**
+@@ -1931,7 +1953,6 @@ static void nft_pipapo_remove(const struct net *net, const struct nft_set *set,
+ 		if (i == m->field_count) {
+ 			priv->dirty = true;
+ 			pipapo_drop(m, rulemap);
+-			pipapo_commit(set);
+ 			return;
+ 		}
+ 
+@@ -2230,6 +2251,8 @@ const struct nft_set_type nft_set_pipapo_type = {
+ 		.init		= nft_pipapo_init,
+ 		.destroy	= nft_pipapo_destroy,
+ 		.gc_init	= nft_pipapo_gc_init,
++		.commit		= nft_pipapo_commit,
++		.abort		= nft_pipapo_abort,
+ 		.elemsize	= offsetof(struct nft_pipapo_elem, ext),
+ 	},
+ };
+@@ -2252,6 +2275,8 @@ const struct nft_set_type nft_set_pipapo_avx2_type = {
+ 		.init		= nft_pipapo_init,
+ 		.destroy	= nft_pipapo_destroy,
+ 		.gc_init	= nft_pipapo_gc_init,
++		.commit		= nft_pipapo_commit,
++		.abort		= nft_pipapo_abort,
+ 		.elemsize	= offsetof(struct nft_pipapo_elem, ext),
+ 	},
+ };
+diff --git a/net/sched/act_ct.c b/net/sched/act_ct.c
+index 9cc0bc7c71ed7..abc71a06d634a 100644
+--- a/net/sched/act_ct.c
++++ b/net/sched/act_ct.c
+@@ -610,6 +610,7 @@ static bool tcf_ct_flow_table_lookup(struct tcf_ct_params *p,
+ 	struct flow_offload_tuple tuple = {};
+ 	enum ip_conntrack_info ctinfo;
+ 	struct tcphdr *tcph = NULL;
++	bool force_refresh = false;
+ 	struct flow_offload *flow;
+ 	struct nf_conn *ct;
+ 	u8 dir;
+@@ -647,6 +648,7 @@ static bool tcf_ct_flow_table_lookup(struct tcf_ct_params *p,
+ 			 * established state, then don't refresh.
+ 			 */
+ 			return false;
++		force_refresh = true;
+ 	}
+ 
+ 	if (tcph && (unlikely(tcph->fin || tcph->rst))) {
+@@ -660,7 +662,12 @@ static bool tcf_ct_flow_table_lookup(struct tcf_ct_params *p,
+ 	else
+ 		ctinfo = IP_CT_ESTABLISHED_REPLY;
+ 
+-	flow_offload_refresh(nf_ft, flow);
++	flow_offload_refresh(nf_ft, flow, force_refresh);
++	if (!test_bit(IPS_ASSURED_BIT, &ct->status)) {
++		/* Process this flow in SW to allow promoting to ASSURED */
++		return false;
++	}
++
+ 	nf_conntrack_get(&ct->ct_general);
+ 	nf_ct_set(skb, ct, ctinfo);
+ 	if (nf_ft->flags & NF_FLOWTABLE_COUNTER)
+diff --git a/net/sched/act_pedit.c b/net/sched/act_pedit.c
+index 4559a1507ea5a..300b5ae760dcc 100644
+--- a/net/sched/act_pedit.c
++++ b/net/sched/act_pedit.c
+@@ -13,7 +13,10 @@
+ #include <linux/rtnetlink.h>
+ #include <linux/module.h>
+ #include <linux/init.h>
++#include <linux/ip.h>
++#include <linux/ipv6.h>
+ #include <linux/slab.h>
++#include <net/ipv6.h>
+ #include <net/netlink.h>
+ #include <net/pkt_sched.h>
+ #include <linux/tc_act/tc_pedit.h>
+@@ -313,11 +316,35 @@ static bool offset_valid(struct sk_buff *skb, int offset)
+ 	return true;
+ }
+ 
+-static int pedit_skb_hdr_offset(struct sk_buff *skb,
+-				enum pedit_header_type htype, int *hoffset)
++static int pedit_l4_skb_offset(struct sk_buff *skb, int *hoffset, const int header_type)
+ {
++	const int noff = skb_network_offset(skb);
+ 	int ret = -EINVAL;
++	struct iphdr _iph;
++
++	switch (skb->protocol) {
++	case htons(ETH_P_IP): {
++		const struct iphdr *iph = skb_header_pointer(skb, noff, sizeof(_iph), &_iph);
++
++		if (!iph)
++			goto out;
++		*hoffset = noff + iph->ihl * 4;
++		ret = 0;
++		break;
++	}
++	case htons(ETH_P_IPV6):
++		ret = ipv6_find_hdr(skb, hoffset, header_type, NULL, NULL) == header_type ? 0 : -EINVAL;
++		break;
++	}
++out:
++	return ret;
++}
+ 
++static int pedit_skb_hdr_offset(struct sk_buff *skb,
++				 enum pedit_header_type htype, int *hoffset)
++{
++	int ret = -EINVAL;
++	/* 'htype' is validated in the netlink parsing */
+ 	switch (htype) {
+ 	case TCA_PEDIT_KEY_EX_HDR_TYPE_ETH:
+ 		if (skb_mac_header_was_set(skb)) {
+@@ -332,17 +359,14 @@ static int pedit_skb_hdr_offset(struct sk_buff *skb,
+ 		ret = 0;
+ 		break;
+ 	case TCA_PEDIT_KEY_EX_HDR_TYPE_TCP:
++		ret = pedit_l4_skb_offset(skb, hoffset, IPPROTO_TCP);
++		break;
+ 	case TCA_PEDIT_KEY_EX_HDR_TYPE_UDP:
+-		if (skb_transport_header_was_set(skb)) {
+-			*hoffset = skb_transport_offset(skb);
+-			ret = 0;
+-		}
++		ret = pedit_l4_skb_offset(skb, hoffset, IPPROTO_UDP);
+ 		break;
+ 	default:
+-		ret = -EINVAL;
+ 		break;
+ 	}
+-
+ 	return ret;
+ }
+ 
+@@ -376,8 +400,8 @@ TC_INDIRECT_SCOPE int tcf_pedit_act(struct sk_buff *skb,
+ 
+ 	for (i = parms->tcfp_nkeys; i > 0; i--, tkey++) {
+ 		int offset = tkey->off;
++		int hoffset = 0;
+ 		u32 *ptr, hdata;
+-		int hoffset;
+ 		u32 val;
+ 		int rc;
+ 
+@@ -390,8 +414,7 @@ TC_INDIRECT_SCOPE int tcf_pedit_act(struct sk_buff *skb,
+ 
+ 		rc = pedit_skb_hdr_offset(skb, htype, &hoffset);
+ 		if (rc) {
+-			pr_info("tc action pedit bad header type specified (0x%x)\n",
+-				htype);
++			pr_info_ratelimited("tc action pedit unable to extract header offset for header type (0x%x)\n", htype);
+ 			goto bad;
+ 		}
+ 
+diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
+index c877a6343fd47..a193cc7b32418 100644
+--- a/net/sched/cls_api.c
++++ b/net/sched/cls_api.c
+@@ -657,8 +657,8 @@ static void __tcf_chain_put(struct tcf_chain *chain, bool by_act,
+ {
+ 	struct tcf_block *block = chain->block;
+ 	const struct tcf_proto_ops *tmplt_ops;
++	unsigned int refcnt, non_act_refcnt;
+ 	bool free_block = false;
+-	unsigned int refcnt;
+ 	void *tmplt_priv;
+ 
+ 	mutex_lock(&block->lock);
+@@ -678,13 +678,15 @@ static void __tcf_chain_put(struct tcf_chain *chain, bool by_act,
+ 	 * save these to temporary variables.
+ 	 */
+ 	refcnt = --chain->refcnt;
++	non_act_refcnt = refcnt - chain->action_refcnt;
+ 	tmplt_ops = chain->tmplt_ops;
+ 	tmplt_priv = chain->tmplt_priv;
+ 
+-	/* The last dropped non-action reference will trigger notification. */
+-	if (refcnt - chain->action_refcnt == 0 && !by_act) {
+-		tc_chain_notify_delete(tmplt_ops, tmplt_priv, chain->index,
+-				       block, NULL, 0, 0, false);
++	if (non_act_refcnt == chain->explicitly_created && !by_act) {
++		if (non_act_refcnt == 0)
++			tc_chain_notify_delete(tmplt_ops, tmplt_priv,
++					       chain->index, block, NULL, 0, 0,
++					       false);
+ 		/* Last reference to chain, no need to lock. */
+ 		chain->flushing = false;
+ 	}
+diff --git a/net/sched/cls_u32.c b/net/sched/cls_u32.c
+index 4e2e269f121f8..d15d50de79802 100644
+--- a/net/sched/cls_u32.c
++++ b/net/sched/cls_u32.c
+@@ -718,13 +718,19 @@ static int u32_set_parms(struct net *net, struct tcf_proto *tp,
+ 			 struct nlattr *est, u32 flags, u32 fl_flags,
+ 			 struct netlink_ext_ack *extack)
+ {
+-	int err;
++	int err, ifindex = -1;
+ 
+ 	err = tcf_exts_validate_ex(net, tp, tb, est, &n->exts, flags,
+ 				   fl_flags, extack);
+ 	if (err < 0)
+ 		return err;
+ 
++	if (tb[TCA_U32_INDEV]) {
++		ifindex = tcf_change_indev(net, tb[TCA_U32_INDEV], extack);
++		if (ifindex < 0)
++			return -EINVAL;
++	}
++
+ 	if (tb[TCA_U32_LINK]) {
+ 		u32 handle = nla_get_u32(tb[TCA_U32_LINK]);
+ 		struct tc_u_hnode *ht_down = NULL, *ht_old;
+@@ -759,13 +765,9 @@ static int u32_set_parms(struct net *net, struct tcf_proto *tp,
+ 		tcf_bind_filter(tp, &n->res, base);
+ 	}
+ 
+-	if (tb[TCA_U32_INDEV]) {
+-		int ret;
+-		ret = tcf_change_indev(net, tb[TCA_U32_INDEV], extack);
+-		if (ret < 0)
+-			return -EINVAL;
+-		n->ifindex = ret;
+-	}
++	if (ifindex >= 0)
++		n->ifindex = ifindex;
++
+ 	return 0;
+ }
+ 
+diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
+index b2a63d697a4aa..3f7311529cc00 100644
+--- a/net/sched/sch_api.c
++++ b/net/sched/sch_api.c
+@@ -1077,17 +1077,29 @@ static int qdisc_graft(struct net_device *dev, struct Qdisc *parent,
+ 
+ 	if (parent == NULL) {
+ 		unsigned int i, num_q, ingress;
++		struct netdev_queue *dev_queue;
+ 
+ 		ingress = 0;
+ 		num_q = dev->num_tx_queues;
+ 		if ((q && q->flags & TCQ_F_INGRESS) ||
+ 		    (new && new->flags & TCQ_F_INGRESS)) {
+-			num_q = 1;
+ 			ingress = 1;
+-			if (!dev_ingress_queue(dev)) {
++			dev_queue = dev_ingress_queue(dev);
++			if (!dev_queue) {
+ 				NL_SET_ERR_MSG(extack, "Device does not have an ingress queue");
+ 				return -ENOENT;
+ 			}
++
++			q = rtnl_dereference(dev_queue->qdisc_sleeping);
++
++			/* This is the counterpart of that qdisc_refcount_inc_nz() call in
++			 * __tcf_qdisc_find() for filter requests.
++			 */
++			if (!qdisc_refcount_dec_if_one(q)) {
++				NL_SET_ERR_MSG(extack,
++					       "Current ingress or clsact Qdisc has ongoing filter requests");
++				return -EBUSY;
++			}
+ 		}
+ 
+ 		if (dev->flags & IFF_UP)
+@@ -1098,18 +1110,26 @@ static int qdisc_graft(struct net_device *dev, struct Qdisc *parent,
+ 		if (new && new->ops->attach && !ingress)
+ 			goto skip;
+ 
+-		for (i = 0; i < num_q; i++) {
+-			struct netdev_queue *dev_queue = dev_ingress_queue(dev);
+-
+-			if (!ingress)
++		if (!ingress) {
++			for (i = 0; i < num_q; i++) {
+ 				dev_queue = netdev_get_tx_queue(dev, i);
++				old = dev_graft_qdisc(dev_queue, new);
+ 
+-			old = dev_graft_qdisc(dev_queue, new);
+-			if (new && i > 0)
+-				qdisc_refcount_inc(new);
+-
+-			if (!ingress)
++				if (new && i > 0)
++					qdisc_refcount_inc(new);
+ 				qdisc_put(old);
++			}
++		} else {
++			old = dev_graft_qdisc(dev_queue, NULL);
++
++			/* {ingress,clsact}_destroy() @old before grafting @new to avoid
++			 * unprotected concurrent accesses to net_device::miniq_{in,e}gress
++			 * pointer(s) in mini_qdisc_pair_swap().
++			 */
++			qdisc_notify(net, skb, n, classid, old, new, extack);
++			qdisc_destroy(old);
++
++			dev_graft_qdisc(dev_queue, new);
+ 		}
+ 
+ skip:
+@@ -1123,8 +1143,6 @@ skip:
+ 
+ 			if (new && new->ops->attach)
+ 				new->ops->attach(new);
+-		} else {
+-			notify_and_destroy(net, skb, n, classid, old, new, extack);
+ 		}
+ 
+ 		if (dev->flags & IFF_UP)
+diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
+index ee43e8ac039ed..a5693e25b2482 100644
+--- a/net/sched/sch_generic.c
++++ b/net/sched/sch_generic.c
+@@ -1046,7 +1046,7 @@ static void qdisc_free_cb(struct rcu_head *head)
+ 	qdisc_free(q);
+ }
+ 
+-static void qdisc_destroy(struct Qdisc *qdisc)
++static void __qdisc_destroy(struct Qdisc *qdisc)
+ {
+ 	const struct Qdisc_ops  *ops = qdisc->ops;
+ 
+@@ -1070,6 +1070,14 @@ static void qdisc_destroy(struct Qdisc *qdisc)
+ 	call_rcu(&qdisc->rcu, qdisc_free_cb);
+ }
+ 
++void qdisc_destroy(struct Qdisc *qdisc)
++{
++	if (qdisc->flags & TCQ_F_BUILTIN)
++		return;
++
++	__qdisc_destroy(qdisc);
++}
++
+ void qdisc_put(struct Qdisc *qdisc)
+ {
+ 	if (!qdisc)
+@@ -1079,7 +1087,7 @@ void qdisc_put(struct Qdisc *qdisc)
+ 	    !refcount_dec_and_test(&qdisc->refcnt))
+ 		return;
+ 
+-	qdisc_destroy(qdisc);
++	__qdisc_destroy(qdisc);
+ }
+ EXPORT_SYMBOL(qdisc_put);
+ 
+@@ -1094,7 +1102,7 @@ void qdisc_put_unlocked(struct Qdisc *qdisc)
+ 	    !refcount_dec_and_rtnl_lock(&qdisc->refcnt))
+ 		return;
+ 
+-	qdisc_destroy(qdisc);
++	__qdisc_destroy(qdisc);
+ 	rtnl_unlock();
+ }
+ EXPORT_SYMBOL(qdisc_put_unlocked);
+diff --git a/net/sched/sch_taprio.c b/net/sched/sch_taprio.c
+index a6cf56a969421..7190482b52e05 100644
+--- a/net/sched/sch_taprio.c
++++ b/net/sched/sch_taprio.c
+@@ -795,6 +795,9 @@ static struct sk_buff *taprio_dequeue_tc_priority(struct Qdisc *sch,
+ 
+ 			taprio_next_tc_txq(dev, tc, &q->cur_txq[tc]);
+ 
++			if (q->cur_txq[tc] >= dev->num_tx_queues)
++				q->cur_txq[tc] = first_txq;
++
+ 			if (skb)
+ 				return skb;
+ 		} while (q->cur_txq[tc] != first_txq);
+diff --git a/net/sctp/sm_statefuns.c b/net/sctp/sm_statefuns.c
+index ce54261712062..07edf56f7b8c6 100644
+--- a/net/sctp/sm_statefuns.c
++++ b/net/sctp/sm_statefuns.c
+@@ -4484,7 +4484,7 @@ enum sctp_disposition sctp_sf_eat_auth(struct net *net,
+ 				    SCTP_AUTH_NEW_KEY, GFP_ATOMIC);
+ 
+ 		if (!ev)
+-			return -ENOMEM;
++			return SCTP_DISPOSITION_NOMEM;
+ 
+ 		sctp_add_cmd_sf(commands, SCTP_CMD_EVENT_ULP,
+ 				SCTP_ULPEVENT(ev));
+diff --git a/net/tipc/bearer.c b/net/tipc/bearer.c
+index 53881406e2006..cdcd2731860ba 100644
+--- a/net/tipc/bearer.c
++++ b/net/tipc/bearer.c
+@@ -1258,7 +1258,7 @@ int tipc_nl_media_get(struct sk_buff *skb, struct genl_info *info)
+ 	struct tipc_nl_msg msg;
+ 	struct tipc_media *media;
+ 	struct sk_buff *rep;
+-	struct nlattr *attrs[TIPC_NLA_BEARER_MAX + 1];
++	struct nlattr *attrs[TIPC_NLA_MEDIA_MAX + 1];
+ 
+ 	if (!info->attrs[TIPC_NLA_MEDIA])
+ 		return -EINVAL;
+@@ -1307,7 +1307,7 @@ int __tipc_nl_media_set(struct sk_buff *skb, struct genl_info *info)
+ 	int err;
+ 	char *name;
+ 	struct tipc_media *m;
+-	struct nlattr *attrs[TIPC_NLA_BEARER_MAX + 1];
++	struct nlattr *attrs[TIPC_NLA_MEDIA_MAX + 1];
+ 
+ 	if (!info->attrs[TIPC_NLA_MEDIA])
+ 		return -EINVAL;
+diff --git a/net/wireless/rdev-ops.h b/net/wireless/rdev-ops.h
+index 13b209a8db287..ee853a14a02de 100644
+--- a/net/wireless/rdev-ops.h
++++ b/net/wireless/rdev-ops.h
+@@ -2,7 +2,7 @@
+ /*
+  * Portions of this file
+  * Copyright(c) 2016-2017 Intel Deutschland GmbH
+- * Copyright (C) 2018, 2021-2022 Intel Corporation
++ * Copyright (C) 2018, 2021-2023 Intel Corporation
+  */
+ #ifndef __CFG80211_RDEV_OPS
+ #define __CFG80211_RDEV_OPS
+@@ -1441,8 +1441,8 @@ rdev_del_intf_link(struct cfg80211_registered_device *rdev,
+ 		   unsigned int link_id)
+ {
+ 	trace_rdev_del_intf_link(&rdev->wiphy, wdev, link_id);
+-	if (rdev->ops->add_intf_link)
+-		rdev->ops->add_intf_link(&rdev->wiphy, wdev, link_id);
++	if (rdev->ops->del_intf_link)
++		rdev->ops->del_intf_link(&rdev->wiphy, wdev, link_id);
+ 	trace_rdev_return_void(&rdev->wiphy);
+ }
+ 
+diff --git a/net/wireless/reg.c b/net/wireless/reg.c
+index 0d40d6af7e10a..26f11e4746c05 100644
+--- a/net/wireless/reg.c
++++ b/net/wireless/reg.c
+@@ -2404,11 +2404,8 @@ static bool reg_wdev_chan_valid(struct wiphy *wiphy, struct wireless_dev *wdev)
+ 		case NL80211_IFTYPE_P2P_GO:
+ 		case NL80211_IFTYPE_ADHOC:
+ 		case NL80211_IFTYPE_MESH_POINT:
+-			wiphy_lock(wiphy);
+ 			ret = cfg80211_reg_can_beacon_relax(wiphy, &chandef,
+ 							    iftype);
+-			wiphy_unlock(wiphy);
+-
+ 			if (!ret)
+ 				return ret;
+ 			break;
+@@ -2440,11 +2437,11 @@ static void reg_leave_invalid_chans(struct wiphy *wiphy)
+ 	struct wireless_dev *wdev;
+ 	struct cfg80211_registered_device *rdev = wiphy_to_rdev(wiphy);
+ 
+-	ASSERT_RTNL();
+-
++	wiphy_lock(wiphy);
+ 	list_for_each_entry(wdev, &rdev->wiphy.wdev_list, list)
+ 		if (!reg_wdev_chan_valid(wiphy, wdev))
+ 			cfg80211_leave(rdev, wdev);
++	wiphy_unlock(wiphy);
+ }
+ 
+ static void reg_check_chans_work(struct work_struct *work)
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 48a0e87136f1c..920e44ba998a5 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -11738,6 +11738,7 @@ static const struct snd_pci_quirk alc662_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1b0a, 0x01b8, "ACER Veriton", ALC662_FIXUP_ACER_VERITON),
+ 	SND_PCI_QUIRK(0x1b35, 0x1234, "CZC ET26", ALC662_FIXUP_CZC_ET26),
+ 	SND_PCI_QUIRK(0x1b35, 0x2206, "CZC P10T", ALC662_FIXUP_CZC_P10T),
++	SND_PCI_QUIRK(0x1c6c, 0x1239, "Compaq N14JP6-V2", ALC897_FIXUP_HP_HSMIC_VERB),
+ 
+ #if 0
+ 	/* Below is a quirk table taken from the old code.
+diff --git a/sound/soc/codecs/cs35l41-lib.c b/sound/soc/codecs/cs35l41-lib.c
+index 04be71435491e..c2c56e5608094 100644
+--- a/sound/soc/codecs/cs35l41-lib.c
++++ b/sound/soc/codecs/cs35l41-lib.c
+@@ -46,7 +46,7 @@ static const struct reg_default cs35l41_reg[] = {
+ 	{ CS35L41_DSP1_RX5_SRC,			0x00000020 },
+ 	{ CS35L41_DSP1_RX6_SRC,			0x00000021 },
+ 	{ CS35L41_DSP1_RX7_SRC,			0x0000003A },
+-	{ CS35L41_DSP1_RX8_SRC,			0x00000001 },
++	{ CS35L41_DSP1_RX8_SRC,			0x0000003B },
+ 	{ CS35L41_NGATE1_SRC,			0x00000008 },
+ 	{ CS35L41_NGATE2_SRC,			0x00000009 },
+ 	{ CS35L41_AMP_DIG_VOL_CTRL,		0x00008000 },
+@@ -58,8 +58,8 @@ static const struct reg_default cs35l41_reg[] = {
+ 	{ CS35L41_IRQ1_MASK2,			0xFFFFFFFF },
+ 	{ CS35L41_IRQ1_MASK3,			0xFFFF87FF },
+ 	{ CS35L41_IRQ1_MASK4,			0xFEFFFFFF },
+-	{ CS35L41_GPIO1_CTRL1,			0xE1000001 },
+-	{ CS35L41_GPIO2_CTRL1,			0xE1000001 },
++	{ CS35L41_GPIO1_CTRL1,			0x81000001 },
++	{ CS35L41_GPIO2_CTRL1,			0x81000001 },
+ 	{ CS35L41_MIXER_NGATE_CFG,		0x00000000 },
+ 	{ CS35L41_MIXER_NGATE_CH1_CFG,		0x00000303 },
+ 	{ CS35L41_MIXER_NGATE_CH2_CFG,		0x00000303 },
+diff --git a/sound/soc/dwc/dwc-i2s.c b/sound/soc/dwc/dwc-i2s.c
+index 3496301582b22..f966d39c5c907 100644
+--- a/sound/soc/dwc/dwc-i2s.c
++++ b/sound/soc/dwc/dwc-i2s.c
+@@ -183,30 +183,6 @@ static void i2s_stop(struct dw_i2s_dev *dev,
+ 	}
+ }
+ 
+-static int dw_i2s_startup(struct snd_pcm_substream *substream,
+-		struct snd_soc_dai *cpu_dai)
+-{
+-	struct dw_i2s_dev *dev = snd_soc_dai_get_drvdata(cpu_dai);
+-	union dw_i2s_snd_dma_data *dma_data = NULL;
+-
+-	if (!(dev->capability & DWC_I2S_RECORD) &&
+-			(substream->stream == SNDRV_PCM_STREAM_CAPTURE))
+-		return -EINVAL;
+-
+-	if (!(dev->capability & DWC_I2S_PLAY) &&
+-			(substream->stream == SNDRV_PCM_STREAM_PLAYBACK))
+-		return -EINVAL;
+-
+-	if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK)
+-		dma_data = &dev->play_dma_data;
+-	else if (substream->stream == SNDRV_PCM_STREAM_CAPTURE)
+-		dma_data = &dev->capture_dma_data;
+-
+-	snd_soc_dai_set_dma_data(cpu_dai, substream, (void *)dma_data);
+-
+-	return 0;
+-}
+-
+ static void dw_i2s_config(struct dw_i2s_dev *dev, int stream)
+ {
+ 	u32 ch_reg;
+@@ -305,12 +281,6 @@ static int dw_i2s_hw_params(struct snd_pcm_substream *substream,
+ 	return 0;
+ }
+ 
+-static void dw_i2s_shutdown(struct snd_pcm_substream *substream,
+-		struct snd_soc_dai *dai)
+-{
+-	snd_soc_dai_set_dma_data(dai, substream, NULL);
+-}
+-
+ static int dw_i2s_prepare(struct snd_pcm_substream *substream,
+ 			  struct snd_soc_dai *dai)
+ {
+@@ -382,8 +352,6 @@ static int dw_i2s_set_fmt(struct snd_soc_dai *cpu_dai, unsigned int fmt)
+ }
+ 
+ static const struct snd_soc_dai_ops dw_i2s_dai_ops = {
+-	.startup	= dw_i2s_startup,
+-	.shutdown	= dw_i2s_shutdown,
+ 	.hw_params	= dw_i2s_hw_params,
+ 	.prepare	= dw_i2s_prepare,
+ 	.trigger	= dw_i2s_trigger,
+@@ -625,6 +593,14 @@ static int dw_configure_dai_by_dt(struct dw_i2s_dev *dev,
+ 
+ }
+ 
++static int dw_i2s_dai_probe(struct snd_soc_dai *dai)
++{
++	struct dw_i2s_dev *dev = snd_soc_dai_get_drvdata(dai);
++
++	snd_soc_dai_init_dma_data(dai, &dev->play_dma_data, &dev->capture_dma_data);
++	return 0;
++}
++
+ static int dw_i2s_probe(struct platform_device *pdev)
+ {
+ 	const struct i2s_platform_data *pdata = pdev->dev.platform_data;
+@@ -643,6 +619,7 @@ static int dw_i2s_probe(struct platform_device *pdev)
+ 		return -ENOMEM;
+ 
+ 	dw_i2s_dai->ops = &dw_i2s_dai_ops;
++	dw_i2s_dai->probe = dw_i2s_dai_probe;
+ 
+ 	dev->i2s_base = devm_platform_get_and_ioremap_resource(pdev, 0, &res);
+ 	if (IS_ERR(dev->i2s_base))
+diff --git a/sound/soc/intel/avs/avs.h b/sound/soc/intel/avs/avs.h
+index d7fccdcb9c167..0cf38c9e768e7 100644
+--- a/sound/soc/intel/avs/avs.h
++++ b/sound/soc/intel/avs/avs.h
+@@ -283,8 +283,8 @@ void avs_release_firmwares(struct avs_dev *adev);
+ 
+ int avs_dsp_init_module(struct avs_dev *adev, u16 module_id, u8 ppl_instance_id,
+ 			u8 core_id, u8 domain, void *param, u32 param_size,
+-			u16 *instance_id);
+-void avs_dsp_delete_module(struct avs_dev *adev, u16 module_id, u16 instance_id,
++			u8 *instance_id);
++void avs_dsp_delete_module(struct avs_dev *adev, u16 module_id, u8 instance_id,
+ 			   u8 ppl_instance_id, u8 core_id);
+ int avs_dsp_create_pipeline(struct avs_dev *adev, u16 req_size, u8 priority,
+ 			    bool lp, u16 attributes, u8 *instance_id);
+diff --git a/sound/soc/intel/avs/board_selection.c b/sound/soc/intel/avs/board_selection.c
+index b2823c2107f77..60f8fb0bff95b 100644
+--- a/sound/soc/intel/avs/board_selection.c
++++ b/sound/soc/intel/avs/board_selection.c
+@@ -443,7 +443,7 @@ static int avs_register_i2s_boards(struct avs_dev *adev)
+ 	}
+ 
+ 	for (mach = boards->machs; mach->id[0]; mach++) {
+-		if (!acpi_dev_present(mach->id, NULL, -1))
++		if (!acpi_dev_present(mach->id, mach->uid, -1))
+ 			continue;
+ 
+ 		if (mach->machine_quirk)
+diff --git a/sound/soc/intel/avs/dsp.c b/sound/soc/intel/avs/dsp.c
+index b881100d3e02a..aa03af4473e94 100644
+--- a/sound/soc/intel/avs/dsp.c
++++ b/sound/soc/intel/avs/dsp.c
+@@ -225,7 +225,7 @@ err:
+ 
+ int avs_dsp_init_module(struct avs_dev *adev, u16 module_id, u8 ppl_instance_id,
+ 			u8 core_id, u8 domain, void *param, u32 param_size,
+-			u16 *instance_id)
++			u8 *instance_id)
+ {
+ 	struct avs_module_entry mentry;
+ 	bool was_loaded = false;
+@@ -272,7 +272,7 @@ err_mod_entry:
+ 	return ret;
+ }
+ 
+-void avs_dsp_delete_module(struct avs_dev *adev, u16 module_id, u16 instance_id,
++void avs_dsp_delete_module(struct avs_dev *adev, u16 module_id, u8 instance_id,
+ 			   u8 ppl_instance_id, u8 core_id)
+ {
+ 	struct avs_module_entry mentry;
+diff --git a/sound/soc/intel/avs/path.h b/sound/soc/intel/avs/path.h
+index 197222c5e008e..657f7b093e805 100644
+--- a/sound/soc/intel/avs/path.h
++++ b/sound/soc/intel/avs/path.h
+@@ -37,7 +37,7 @@ struct avs_path_pipeline {
+ 
+ struct avs_path_module {
+ 	u16 module_id;
+-	u16 instance_id;
++	u8 instance_id;
+ 	union avs_gtw_attributes gtw_attrs;
+ 
+ 	struct avs_tplg_module *template;
+diff --git a/sound/soc/intel/avs/pcm.c b/sound/soc/intel/avs/pcm.c
+index 31c032a0f7e4b..1fbb2c2fadb55 100644
+--- a/sound/soc/intel/avs/pcm.c
++++ b/sound/soc/intel/avs/pcm.c
+@@ -468,21 +468,34 @@ static int avs_dai_fe_startup(struct snd_pcm_substream *substream, struct snd_so
+ 
+ 	host_stream = snd_hdac_ext_stream_assign(bus, substream, HDAC_EXT_STREAM_TYPE_HOST);
+ 	if (!host_stream) {
+-		kfree(data);
+-		return -EBUSY;
++		ret = -EBUSY;
++		goto err;
+ 	}
+ 
+ 	data->host_stream = host_stream;
+-	snd_pcm_hw_constraint_integer(runtime, SNDRV_PCM_HW_PARAM_PERIODS);
++	ret = snd_pcm_hw_constraint_integer(runtime, SNDRV_PCM_HW_PARAM_PERIODS);
++	if (ret < 0)
++		goto err;
++
+ 	/* avoid wrap-around with wall-clock */
+-	snd_pcm_hw_constraint_minmax(runtime, SNDRV_PCM_HW_PARAM_BUFFER_TIME, 20, 178000000);
+-	snd_pcm_hw_constraint_list(runtime, 0, SNDRV_PCM_HW_PARAM_RATE, &hw_rates);
++	ret = snd_pcm_hw_constraint_minmax(runtime, SNDRV_PCM_HW_PARAM_BUFFER_TIME, 20, 178000000);
++	if (ret < 0)
++		goto err;
++
++	ret = snd_pcm_hw_constraint_list(runtime, 0, SNDRV_PCM_HW_PARAM_RATE, &hw_rates);
++	if (ret < 0)
++		goto err;
++
+ 	snd_pcm_set_sync(substream);
+ 
+ 	dev_dbg(dai->dev, "%s fe STARTUP tag %d str %p",
+ 		__func__, hdac_stream(host_stream)->stream_tag, substream);
+ 
+ 	return 0;
++
++err:
++	kfree(data);
++	return ret;
+ }
+ 
+ static void avs_dai_fe_shutdown(struct snd_pcm_substream *substream, struct snd_soc_dai *dai)
+diff --git a/sound/soc/intel/avs/probes.c b/sound/soc/intel/avs/probes.c
+index 70a94201d6a56..275928281c6c6 100644
+--- a/sound/soc/intel/avs/probes.c
++++ b/sound/soc/intel/avs/probes.c
+@@ -18,7 +18,7 @@ static int avs_dsp_init_probe(struct avs_dev *adev, union avs_connector_node_id
+ {
+ 	struct avs_probe_cfg cfg = {{0}};
+ 	struct avs_module_entry mentry;
+-	u16 dummy;
++	u8 dummy;
+ 
+ 	avs_get_module_entry(adev, &AVS_PROBE_MOD_UUID, &mentry);
+ 
+diff --git a/sound/soc/soc-pcm.c b/sound/soc/soc-pcm.c
+index 7958c9defd492..1db82501fec18 100644
+--- a/sound/soc/soc-pcm.c
++++ b/sound/soc/soc-pcm.c
+@@ -2417,6 +2417,9 @@ int dpcm_be_dai_prepare(struct snd_soc_pcm_runtime *fe, int stream)
+ 		if (!snd_soc_dpcm_be_can_update(fe, be, stream))
+ 			continue;
+ 
++		if (!snd_soc_dpcm_can_be_prepared(fe, be, stream))
++			continue;
++
+ 		if ((be->dpcm[stream].state != SND_SOC_DPCM_STATE_HW_PARAMS) &&
+ 		    (be->dpcm[stream].state != SND_SOC_DPCM_STATE_STOP) &&
+ 		    (be->dpcm[stream].state != SND_SOC_DPCM_STATE_SUSPEND) &&
+@@ -3057,3 +3060,20 @@ int snd_soc_dpcm_can_be_params(struct snd_soc_pcm_runtime *fe,
+ 	return snd_soc_dpcm_check_state(fe, be, stream, state, ARRAY_SIZE(state));
+ }
+ EXPORT_SYMBOL_GPL(snd_soc_dpcm_can_be_params);
++
++/*
++ * We can only prepare a BE DAI if any of it's FE are not prepared,
++ * running or paused for the specified stream direction.
++ */
++int snd_soc_dpcm_can_be_prepared(struct snd_soc_pcm_runtime *fe,
++				 struct snd_soc_pcm_runtime *be, int stream)
++{
++	const enum snd_soc_dpcm_state state[] = {
++		SND_SOC_DPCM_STATE_START,
++		SND_SOC_DPCM_STATE_PAUSED,
++		SND_SOC_DPCM_STATE_PREPARE,
++	};
++
++	return snd_soc_dpcm_check_state(fe, be, stream, state, ARRAY_SIZE(state));
++}
++EXPORT_SYMBOL_GPL(snd_soc_dpcm_can_be_prepared);
+diff --git a/sound/usb/pcm.c b/sound/usb/pcm.c
+index eec5232f9fb29..08bf535ed1632 100644
+--- a/sound/usb/pcm.c
++++ b/sound/usb/pcm.c
+@@ -650,6 +650,10 @@ static int snd_usb_pcm_prepare(struct snd_pcm_substream *substream)
+ 		goto unlock;
+ 	}
+ 
++	ret = snd_usb_pcm_change_state(subs, UAC3_PD_STATE_D0);
++	if (ret < 0)
++		goto unlock;
++
+  again:
+ 	if (subs->sync_endpoint) {
+ 		ret = snd_usb_endpoint_prepare(chip, subs->sync_endpoint);
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index 3ecd1ba7fd4b1..6cf55b7f7a041 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -2191,6 +2191,8 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
+ 		   QUIRK_FLAG_DSD_RAW),
+ 	VENDOR_FLG(0x2ab6, /* T+A devices */
+ 		   QUIRK_FLAG_DSD_RAW),
++	VENDOR_FLG(0x3336, /* HEM devices */
++		   QUIRK_FLAG_DSD_RAW),
+ 	VENDOR_FLG(0x3353, /* Khadas devices */
+ 		   QUIRK_FLAG_DSD_RAW),
+ 	VENDOR_FLG(0x3842, /* EVGA */
+diff --git a/tools/gpio/lsgpio.c b/tools/gpio/lsgpio.c
+index c61d061247e17..52a0be45410c9 100644
+--- a/tools/gpio/lsgpio.c
++++ b/tools/gpio/lsgpio.c
+@@ -94,7 +94,7 @@ static void print_attributes(struct gpio_v2_line_info *info)
+ 	for (i = 0; i < info->num_attrs; i++) {
+ 		if (info->attrs[i].id == GPIO_V2_LINE_ATTR_ID_DEBOUNCE)
+ 			fprintf(stdout, ", debounce_period=%dusec",
+-				info->attrs[0].debounce_period_us);
++				info->attrs[i].debounce_period_us);
+ 	}
+ }
+ 
+diff --git a/tools/testing/selftests/gpio/gpio-sim.sh b/tools/testing/selftests/gpio/gpio-sim.sh
+index 9f539d454ee4d..fa2ce2b9dd5fc 100755
+--- a/tools/testing/selftests/gpio/gpio-sim.sh
++++ b/tools/testing/selftests/gpio/gpio-sim.sh
+@@ -389,6 +389,9 @@ create_chip chip
+ create_bank chip bank
+ set_num_lines chip bank 8
+ enable_chip chip
++DEVNAME=`configfs_dev_name chip`
++CHIPNAME=`configfs_chip_name chip bank`
++SYSFS_PATH="/sys/devices/platform/$DEVNAME/$CHIPNAME/sim_gpio0/value"
+ $BASE_DIR/gpio-mockup-cdev -b pull-up /dev/`configfs_chip_name chip bank` 0
+ test `cat $SYSFS_PATH` = "1" || fail "bias setting does not work"
+ remove_chip chip
+diff --git a/tools/testing/selftests/net/forwarding/hw_stats_l3.sh b/tools/testing/selftests/net/forwarding/hw_stats_l3.sh
+index 9c1f76e108af1..1a936ffbacee7 100755
+--- a/tools/testing/selftests/net/forwarding/hw_stats_l3.sh
++++ b/tools/testing/selftests/net/forwarding/hw_stats_l3.sh
+@@ -84,8 +84,9 @@ h2_destroy()
+ 
+ router_rp1_200_create()
+ {
+-	ip link add name $rp1.200 up \
+-		link $rp1 addrgenmode eui64 type vlan id 200
++	ip link add name $rp1.200 link $rp1 type vlan id 200
++	ip link set dev $rp1.200 addrgenmode eui64
++	ip link set dev $rp1.200 up
+ 	ip address add dev $rp1.200 192.0.2.2/28
+ 	ip address add dev $rp1.200 2001:db8:1::2/64
+ 	ip stats set dev $rp1.200 l3_stats on
+@@ -256,9 +257,11 @@ reapply_config()
+ 
+ 	router_rp1_200_destroy
+ 
+-	ip link add name $rp1.200 link $rp1 addrgenmode none type vlan id 200
++	ip link add name $rp1.200 link $rp1 type vlan id 200
++	ip link set dev $rp1.200 addrgenmode none
+ 	ip stats set dev $rp1.200 l3_stats on
+-	ip link set dev $rp1.200 up addrgenmode eui64
++	ip link set dev $rp1.200 addrgenmode eui64
++	ip link set dev $rp1.200 up
+ 	ip address add dev $rp1.200 192.0.2.2/28
+ 	ip address add dev $rp1.200 2001:db8:1::2/64
+ }
+diff --git a/tools/testing/selftests/ptp/testptp.c b/tools/testing/selftests/ptp/testptp.c
+index 198ad5f321878..cfa9562f3cd83 100644
+--- a/tools/testing/selftests/ptp/testptp.c
++++ b/tools/testing/selftests/ptp/testptp.c
+@@ -502,11 +502,11 @@ int main(int argc, char *argv[])
+ 			interval = t2 - t1;
+ 			offset = (t2 + t1) / 2 - tp;
+ 
+-			printf("system time: %lld.%u\n",
++			printf("system time: %lld.%09u\n",
+ 				(pct+2*i)->sec, (pct+2*i)->nsec);
+-			printf("phc    time: %lld.%u\n",
++			printf("phc    time: %lld.%09u\n",
+ 				(pct+2*i+1)->sec, (pct+2*i+1)->nsec);
+-			printf("system time: %lld.%u\n",
++			printf("system time: %lld.%09u\n",
+ 				(pct+2*i+2)->sec, (pct+2*i+2)->nsec);
+ 			printf("system/phc clock time offset is %" PRId64 " ns\n"
+ 			       "system     clock time delay  is %" PRId64 " ns\n",
+diff --git a/tools/testing/selftests/tc-testing/config b/tools/testing/selftests/tc-testing/config
+index 4638c63a339ff..aec4de8bea78b 100644
+--- a/tools/testing/selftests/tc-testing/config
++++ b/tools/testing/selftests/tc-testing/config
+@@ -6,6 +6,7 @@ CONFIG_NF_CONNTRACK_MARK=y
+ CONFIG_NF_CONNTRACK_ZONES=y
+ CONFIG_NF_CONNTRACK_LABELS=y
+ CONFIG_NF_NAT=m
++CONFIG_NETFILTER_XT_TARGET_LOG=m
+ 
+ CONFIG_NET_SCHED=y
+ 
+diff --git a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/sfb.json b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/sfb.json
+index ba2f5e79cdbfe..e21c7f22c6d4c 100644
+--- a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/sfb.json
++++ b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/sfb.json
+@@ -58,10 +58,10 @@
+         "setup": [
+             "$IP link add dev $DUMMY type dummy || /bin/true"
+         ],
+-        "cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root sfb db 10",
++        "cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root sfb db 100",
+         "expExitCode": "0",
+         "verifyCmd": "$TC qdisc show dev $DUMMY",
+-        "matchPattern": "qdisc sfb 1: root refcnt [0-9]+ rehash 600s db 10ms",
++        "matchPattern": "qdisc sfb 1: root refcnt [0-9]+ rehash 600s db 100ms",
+         "matchCount": "1",
+         "teardown": [
+             "$TC qdisc del dev $DUMMY handle 1: root",
+diff --git a/tools/testing/selftests/tc-testing/tdc.sh b/tools/testing/selftests/tc-testing/tdc.sh
+index afb0cd86fa3df..eb357bd7923c0 100755
+--- a/tools/testing/selftests/tc-testing/tdc.sh
++++ b/tools/testing/selftests/tc-testing/tdc.sh
+@@ -2,5 +2,6 @@
+ # SPDX-License-Identifier: GPL-2.0
+ 
+ modprobe netdevsim
++modprobe sch_teql
+ ./tdc.py -c actions --nobuildebpf
+ ./tdc.py -c qdisc


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [gentoo-commits] proj/linux-patches:6.3 commit in: /
@ 2023-06-28 10:25 Mike Pagano
  0 siblings, 0 replies; 23+ messages in thread
From: Mike Pagano @ 2023-06-28 10:25 UTC (permalink / raw
  To: gentoo-commits

commit:     9fa612395ce64be756bd7ac6a49f9eac97b1be1e
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Jun 28 10:25:19 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Jun 28 10:25:19 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=9fa61239

Linux patch 6.3.10

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |     4 +
 1009_linux-6.3.10.patch | 10133 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 10137 insertions(+)

diff --git a/0000_README b/0000_README
index f891b16a..79a95e0e 100644
--- a/0000_README
+++ b/0000_README
@@ -79,6 +79,10 @@ Patch:  1008_linux-6.3.9.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.3.9
 
+Patch:  1009_linux-6.3.10.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.3.10
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1009_linux-6.3.10.patch b/1009_linux-6.3.10.patch
new file mode 100644
index 00000000..c1ac3295
--- /dev/null
+++ b/1009_linux-6.3.10.patch
@@ -0,0 +1,10133 @@
+diff --git a/Makefile b/Makefile
+index f8ee723758856..47253ac6c85c1 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 3
+-SUBLEVEL = 9
++SUBLEVEL = 10
+ EXTRAVERSION =
+ NAME = Hurr durr I'ma ninja sloth
+ 
+diff --git a/arch/arm/boot/dts/am57xx-cl-som-am57x.dts b/arch/arm/boot/dts/am57xx-cl-som-am57x.dts
+index 2fc9a5d5e0c0d..625b9b311b49d 100644
+--- a/arch/arm/boot/dts/am57xx-cl-som-am57x.dts
++++ b/arch/arm/boot/dts/am57xx-cl-som-am57x.dts
+@@ -527,7 +527,7 @@
+ 
+ 		interrupt-parent = <&gpio1>;
+ 		interrupts = <31 0>;
+-		pendown-gpio = <&gpio1 31 0>;
++		pendown-gpio = <&gpio1 31 GPIO_ACTIVE_LOW>;
+ 
+ 
+ 		ti,x-min = /bits/ 16 <0x0>;
+diff --git a/arch/arm/boot/dts/at91sam9261ek.dts b/arch/arm/boot/dts/at91sam9261ek.dts
+index 88869ca874d1a..045cb253f23a6 100644
+--- a/arch/arm/boot/dts/at91sam9261ek.dts
++++ b/arch/arm/boot/dts/at91sam9261ek.dts
+@@ -156,7 +156,7 @@
+ 					compatible = "ti,ads7843";
+ 					interrupts-extended = <&pioC 2 IRQ_TYPE_EDGE_BOTH>;
+ 					spi-max-frequency = <3000000>;
+-					pendown-gpio = <&pioC 2 GPIO_ACTIVE_HIGH>;
++					pendown-gpio = <&pioC 2 GPIO_ACTIVE_LOW>;
+ 
+ 					ti,x-min = /bits/ 16 <150>;
+ 					ti,x-max = /bits/ 16 <3830>;
+diff --git a/arch/arm/boot/dts/imx7d-pico-hobbit.dts b/arch/arm/boot/dts/imx7d-pico-hobbit.dts
+index d917dc4f2f227..6ad39dca70096 100644
+--- a/arch/arm/boot/dts/imx7d-pico-hobbit.dts
++++ b/arch/arm/boot/dts/imx7d-pico-hobbit.dts
+@@ -64,7 +64,7 @@
+ 		interrupt-parent = <&gpio2>;
+ 		interrupts = <7 0>;
+ 		spi-max-frequency = <1000000>;
+-		pendown-gpio = <&gpio2 7 0>;
++		pendown-gpio = <&gpio2 7 GPIO_ACTIVE_LOW>;
+ 		vcc-supply = <&reg_3p3v>;
+ 		ti,x-min = /bits/ 16 <0>;
+ 		ti,x-max = /bits/ 16 <4095>;
+diff --git a/arch/arm/boot/dts/imx7d-sdb.dts b/arch/arm/boot/dts/imx7d-sdb.dts
+index f483bc0afe5ea..234e5fc647b22 100644
+--- a/arch/arm/boot/dts/imx7d-sdb.dts
++++ b/arch/arm/boot/dts/imx7d-sdb.dts
+@@ -205,7 +205,7 @@
+ 		pinctrl-0 = <&pinctrl_tsc2046_pendown>;
+ 		interrupt-parent = <&gpio2>;
+ 		interrupts = <29 0>;
+-		pendown-gpio = <&gpio2 29 GPIO_ACTIVE_HIGH>;
++		pendown-gpio = <&gpio2 29 GPIO_ACTIVE_LOW>;
+ 		touchscreen-max-pressure = <255>;
+ 		wakeup-source;
+ 	};
+diff --git a/arch/arm/boot/dts/omap3-cm-t3x.dtsi b/arch/arm/boot/dts/omap3-cm-t3x.dtsi
+index e61b8a2bfb7de..51baedf1603bd 100644
+--- a/arch/arm/boot/dts/omap3-cm-t3x.dtsi
++++ b/arch/arm/boot/dts/omap3-cm-t3x.dtsi
+@@ -227,7 +227,7 @@
+ 
+ 		interrupt-parent = <&gpio2>;
+ 		interrupts = <25 0>;		/* gpio_57 */
+-		pendown-gpio = <&gpio2 25 GPIO_ACTIVE_HIGH>;
++		pendown-gpio = <&gpio2 25 GPIO_ACTIVE_LOW>;
+ 
+ 		ti,x-min = /bits/ 16 <0x0>;
+ 		ti,x-max = /bits/ 16 <0x0fff>;
+diff --git a/arch/arm/boot/dts/omap3-devkit8000-lcd-common.dtsi b/arch/arm/boot/dts/omap3-devkit8000-lcd-common.dtsi
+index 3decc2d78a6ca..a7f99ae0c1fe9 100644
+--- a/arch/arm/boot/dts/omap3-devkit8000-lcd-common.dtsi
++++ b/arch/arm/boot/dts/omap3-devkit8000-lcd-common.dtsi
+@@ -54,7 +54,7 @@
+ 
+ 		interrupt-parent = <&gpio1>;
+ 		interrupts = <27 0>;		/* gpio_27 */
+-		pendown-gpio = <&gpio1 27 GPIO_ACTIVE_HIGH>;
++		pendown-gpio = <&gpio1 27 GPIO_ACTIVE_LOW>;
+ 
+ 		ti,x-min = /bits/ 16 <0x0>;
+ 		ti,x-max = /bits/ 16 <0x0fff>;
+diff --git a/arch/arm/boot/dts/omap3-lilly-a83x.dtsi b/arch/arm/boot/dts/omap3-lilly-a83x.dtsi
+index c595afe4181d7..d310b5c7bac36 100644
+--- a/arch/arm/boot/dts/omap3-lilly-a83x.dtsi
++++ b/arch/arm/boot/dts/omap3-lilly-a83x.dtsi
+@@ -311,7 +311,7 @@
+ 		interrupt-parent = <&gpio1>;
+ 		interrupts = <8 0>;   /* boot6 / gpio_8 */
+ 		spi-max-frequency = <1000000>;
+-		pendown-gpio = <&gpio1 8 GPIO_ACTIVE_HIGH>;
++		pendown-gpio = <&gpio1 8 GPIO_ACTIVE_LOW>;
+ 		vcc-supply = <&reg_vcc3>;
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&tsc2048_pins>;
+diff --git a/arch/arm/boot/dts/omap3-overo-common-lcd35.dtsi b/arch/arm/boot/dts/omap3-overo-common-lcd35.dtsi
+index 1d6e88f99eb31..c3570acc35fad 100644
+--- a/arch/arm/boot/dts/omap3-overo-common-lcd35.dtsi
++++ b/arch/arm/boot/dts/omap3-overo-common-lcd35.dtsi
+@@ -149,7 +149,7 @@
+ 
+ 		interrupt-parent = <&gpio4>;
+ 		interrupts = <18 0>;			/* gpio_114 */
+-		pendown-gpio = <&gpio4 18 GPIO_ACTIVE_HIGH>;
++		pendown-gpio = <&gpio4 18 GPIO_ACTIVE_LOW>;
+ 
+ 		ti,x-min = /bits/ 16 <0x0>;
+ 		ti,x-max = /bits/ 16 <0x0fff>;
+diff --git a/arch/arm/boot/dts/omap3-overo-common-lcd43.dtsi b/arch/arm/boot/dts/omap3-overo-common-lcd43.dtsi
+index 7e30f9d45790e..d95a0e130058c 100644
+--- a/arch/arm/boot/dts/omap3-overo-common-lcd43.dtsi
++++ b/arch/arm/boot/dts/omap3-overo-common-lcd43.dtsi
+@@ -160,7 +160,7 @@
+ 
+ 		interrupt-parent = <&gpio4>;
+ 		interrupts = <18 0>;			/* gpio_114 */
+-		pendown-gpio = <&gpio4 18 GPIO_ACTIVE_HIGH>;
++		pendown-gpio = <&gpio4 18 GPIO_ACTIVE_LOW>;
+ 
+ 		ti,x-min = /bits/ 16 <0x0>;
+ 		ti,x-max = /bits/ 16 <0x0fff>;
+diff --git a/arch/arm/boot/dts/omap3-pandora-common.dtsi b/arch/arm/boot/dts/omap3-pandora-common.dtsi
+index 559853764487f..4c3b6bab179cc 100644
+--- a/arch/arm/boot/dts/omap3-pandora-common.dtsi
++++ b/arch/arm/boot/dts/omap3-pandora-common.dtsi
+@@ -651,7 +651,7 @@
+ 		pinctrl-0 = <&penirq_pins>;
+ 		interrupt-parent = <&gpio3>;
+ 		interrupts = <30 IRQ_TYPE_NONE>;	/* GPIO_94 */
+-		pendown-gpio = <&gpio3 30 GPIO_ACTIVE_HIGH>;
++		pendown-gpio = <&gpio3 30 GPIO_ACTIVE_LOW>;
+ 		vcc-supply = <&vaux4>;
+ 
+ 		ti,x-min = /bits/ 16 <0>;
+diff --git a/arch/arm/boot/dts/omap5-cm-t54.dts b/arch/arm/boot/dts/omap5-cm-t54.dts
+index 2d87b9fc230ee..af288d63a26a4 100644
+--- a/arch/arm/boot/dts/omap5-cm-t54.dts
++++ b/arch/arm/boot/dts/omap5-cm-t54.dts
+@@ -354,7 +354,7 @@
+ 
+ 		interrupt-parent = <&gpio1>;
+ 		interrupts = <15 0>;			/* gpio1_wk15 */
+-		pendown-gpio = <&gpio1 15 GPIO_ACTIVE_HIGH>;
++		pendown-gpio = <&gpio1 15 GPIO_ACTIVE_LOW>;
+ 
+ 
+ 		ti,x-min = /bits/ 16 <0x0>;
+diff --git a/arch/arm64/boot/dts/qcom/sc7280-idp.dtsi b/arch/arm64/boot/dts/qcom/sc7280-idp.dtsi
+index 8b5293e7fd2a3..a0e767ff252c5 100644
+--- a/arch/arm64/boot/dts/qcom/sc7280-idp.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7280-idp.dtsi
+@@ -480,7 +480,6 @@
+ 	wcd_rx: codec@0,4 {
+ 		compatible = "sdw20217010d00";
+ 		reg = <0 4>;
+-		#sound-dai-cells = <1>;
+ 		qcom,rx-port-mapping = <1 2 3 4 5>;
+ 	};
+ };
+@@ -491,7 +490,6 @@
+ 	wcd_tx: codec@0,3 {
+ 		compatible = "sdw20217010d00";
+ 		reg = <0 3>;
+-		#sound-dai-cells = <1>;
+ 		qcom,tx-port-mapping = <1 2 3 4>;
+ 	};
+ };
+diff --git a/arch/arm64/boot/dts/qcom/sc7280-qcard.dtsi b/arch/arm64/boot/dts/qcom/sc7280-qcard.dtsi
+index 88204f794ccb7..1080d701d45a2 100644
+--- a/arch/arm64/boot/dts/qcom/sc7280-qcard.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7280-qcard.dtsi
+@@ -419,7 +419,6 @@
+ 	wcd_rx: codec@0,4 {
+ 		compatible = "sdw20217010d00";
+ 		reg = <0 4>;
+-		#sound-dai-cells = <1>;
+ 		qcom,rx-port-mapping = <1 2 3 4 5>;
+ 	};
+ };
+@@ -428,7 +427,6 @@
+ 	wcd_tx: codec@0,3 {
+ 		compatible = "sdw20217010d00";
+ 		reg = <0 3>;
+-		#sound-dai-cells = <1>;
+ 		qcom,tx-port-mapping = <1 2 3 4>;
+ 	};
+ };
+diff --git a/arch/arm64/boot/dts/rockchip/rk3566-soquartz-cm4.dts b/arch/arm64/boot/dts/rockchip/rk3566-soquartz-cm4.dts
+index 263ce40770dde..cddf6cd2fecb1 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3566-soquartz-cm4.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3566-soquartz-cm4.dts
+@@ -28,6 +28,16 @@
+ 		regulator-max-microvolt = <5000000>;
+ 		vin-supply = <&vcc12v_dcin>;
+ 	};
++
++	vcc_sd_pwr: vcc-sd-pwr-regulator {
++		compatible = "regulator-fixed";
++		regulator-name = "vcc_sd_pwr";
++		regulator-always-on;
++		regulator-boot-on;
++		regulator-min-microvolt = <3300000>;
++		regulator-max-microvolt = <3300000>;
++		vin-supply = <&vcc3v3_sys>;
++	};
+ };
+ 
+ /* phy for pcie */
+@@ -130,13 +140,7 @@
+ };
+ 
+ &sdmmc0 {
+-	vmmc-supply = <&sdmmc_pwr>;
+-	status = "okay";
+-};
+-
+-&sdmmc_pwr {
+-	regulator-min-microvolt = <3300000>;
+-	regulator-max-microvolt = <3300000>;
++	vmmc-supply = <&vcc_sd_pwr>;
+ 	status = "okay";
+ };
+ 
+diff --git a/arch/arm64/boot/dts/rockchip/rk3566-soquartz.dtsi b/arch/arm64/boot/dts/rockchip/rk3566-soquartz.dtsi
+index 102e448bc026a..31aa2b8efe393 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3566-soquartz.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3566-soquartz.dtsi
+@@ -104,16 +104,6 @@
+ 		regulator-max-microvolt = <3300000>;
+ 		vin-supply = <&vcc5v0_sys>;
+ 	};
+-
+-	sdmmc_pwr: sdmmc-pwr-regulator {
+-		compatible = "regulator-fixed";
+-		enable-active-high;
+-		gpio = <&gpio0 RK_PA5 GPIO_ACTIVE_HIGH>;
+-		pinctrl-names = "default";
+-		pinctrl-0 = <&sdmmc_pwr_h>;
+-		regulator-name = "sdmmc_pwr";
+-		status = "disabled";
+-	};
+ };
+ 
+ &cpu0 {
+@@ -155,6 +145,19 @@
+ 	status = "disabled";
+ };
+ 
++&gpio0 {
++	nextrst-hog {
++		gpio-hog;
++		/*
++		 * GPIO_ACTIVE_LOW + output-low here means that the pin is set
++		 * to high, because output-low decides the value pre-inversion.
++		 */
++		gpios = <RK_PA5 GPIO_ACTIVE_LOW>;
++		line-name = "nEXTRST";
++		output-low;
++	};
++};
++
+ &gpu {
+ 	mali-supply = <&vdd_gpu>;
+ 	status = "okay";
+@@ -538,12 +541,6 @@
+ 			rockchip,pins = <2 RK_PC2 RK_FUNC_GPIO &pcfg_pull_none>;
+ 		};
+ 	};
+-
+-	sdmmc-pwr {
+-		sdmmc_pwr_h: sdmmc-pwr-h {
+-			rockchip,pins = <0 RK_PA5 RK_FUNC_GPIO &pcfg_pull_none>;
+-		};
+-	};
+ };
+ 
+ &pmu_io_domains {
+diff --git a/arch/arm64/boot/dts/rockchip/rk3568.dtsi b/arch/arm64/boot/dts/rockchip/rk3568.dtsi
+index ba67b58f05b79..f1be76a54ceb0 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3568.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3568.dtsi
+@@ -94,9 +94,10 @@
+ 		power-domains = <&power RK3568_PD_PIPE>;
+ 		reg = <0x3 0xc0400000 0x0 0x00400000>,
+ 		      <0x0 0xfe270000 0x0 0x00010000>,
+-		      <0x3 0x7f000000 0x0 0x01000000>;
+-		ranges = <0x01000000 0x0 0x3ef00000 0x3 0x7ef00000 0x0 0x00100000>,
+-			 <0x02000000 0x0 0x00000000 0x3 0x40000000 0x0 0x3ef00000>;
++		      <0x0 0xf2000000 0x0 0x00100000>;
++		ranges = <0x01000000 0x0 0xf2100000 0x0 0xf2100000 0x0 0x00100000>,
++			 <0x02000000 0x0 0xf2200000 0x0 0xf2200000 0x0 0x01e00000>,
++			 <0x03000000 0x0 0x40000000 0x3 0x40000000 0x0 0x40000000>;
+ 		reg-names = "dbi", "apb", "config";
+ 		resets = <&cru SRST_PCIE30X1_POWERUP>;
+ 		reset-names = "pipe";
+@@ -146,9 +147,10 @@
+ 		power-domains = <&power RK3568_PD_PIPE>;
+ 		reg = <0x3 0xc0800000 0x0 0x00400000>,
+ 		      <0x0 0xfe280000 0x0 0x00010000>,
+-		      <0x3 0xbf000000 0x0 0x01000000>;
+-		ranges = <0x01000000 0x0 0x3ef00000 0x3 0xbef00000 0x0 0x00100000>,
+-			 <0x02000000 0x0 0x00000000 0x3 0x80000000 0x0 0x3ef00000>;
++		      <0x0 0xf0000000 0x0 0x00100000>;
++		ranges = <0x01000000 0x0 0xf0100000 0x0 0xf0100000 0x0 0x00100000>,
++			 <0x02000000 0x0 0xf0200000 0x0 0xf0200000 0x0 0x01e00000>,
++			 <0x03000000 0x0 0x40000000 0x3 0x80000000 0x0 0x40000000>;
+ 		reg-names = "dbi", "apb", "config";
+ 		resets = <&cru SRST_PCIE30X2_POWERUP>;
+ 		reset-names = "pipe";
+diff --git a/arch/arm64/boot/dts/rockchip/rk356x.dtsi b/arch/arm64/boot/dts/rockchip/rk356x.dtsi
+index eed0059a68b8d..3ee5d764ac63e 100644
+--- a/arch/arm64/boot/dts/rockchip/rk356x.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk356x.dtsi
+@@ -952,7 +952,7 @@
+ 		compatible = "rockchip,rk3568-pcie";
+ 		reg = <0x3 0xc0000000 0x0 0x00400000>,
+ 		      <0x0 0xfe260000 0x0 0x00010000>,
+-		      <0x3 0x3f000000 0x0 0x01000000>;
++		      <0x0 0xf4000000 0x0 0x00100000>;
+ 		reg-names = "dbi", "apb", "config";
+ 		interrupts = <GIC_SPI 75 IRQ_TYPE_LEVEL_HIGH>,
+ 			     <GIC_SPI 74 IRQ_TYPE_LEVEL_HIGH>,
+@@ -982,8 +982,9 @@
+ 		phys = <&combphy2 PHY_TYPE_PCIE>;
+ 		phy-names = "pcie-phy";
+ 		power-domains = <&power RK3568_PD_PIPE>;
+-		ranges = <0x01000000 0x0 0x3ef00000 0x3 0x3ef00000 0x0 0x00100000
+-			  0x02000000 0x0 0x00000000 0x3 0x00000000 0x0 0x3ef00000>;
++		ranges = <0x01000000 0x0 0xf4100000 0x0 0xf4100000 0x0 0x00100000>,
++			 <0x02000000 0x0 0xf4200000 0x0 0xf4200000 0x0 0x01e00000>,
++			 <0x03000000 0x0 0x40000000 0x3 0x00000000 0x0 0x40000000>;
+ 		resets = <&cru SRST_PCIE20_POWERUP>;
+ 		reset-names = "pipe";
+ 		#address-cells = <3>;
+diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
+index 9e3ecba3c4e67..d4a85c1db2e2a 100644
+--- a/arch/arm64/include/asm/sysreg.h
++++ b/arch/arm64/include/asm/sysreg.h
+@@ -115,8 +115,14 @@
+ #define SB_BARRIER_INSN			__SYS_BARRIER_INSN(0, 7, 31)
+ 
+ #define SYS_DC_ISW			sys_insn(1, 0, 7, 6, 2)
++#define SYS_DC_IGSW			sys_insn(1, 0, 7, 6, 4)
++#define SYS_DC_IGDSW			sys_insn(1, 0, 7, 6, 6)
+ #define SYS_DC_CSW			sys_insn(1, 0, 7, 10, 2)
++#define SYS_DC_CGSW			sys_insn(1, 0, 7, 10, 4)
++#define SYS_DC_CGDSW			sys_insn(1, 0, 7, 10, 6)
+ #define SYS_DC_CISW			sys_insn(1, 0, 7, 14, 2)
++#define SYS_DC_CIGSW			sys_insn(1, 0, 7, 14, 4)
++#define SYS_DC_CIGDSW			sys_insn(1, 0, 7, 14, 6)
+ 
+ /*
+  * Automatically generated definitions for system registers, the
+diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
+index 33f4d42003296..168365167a137 100644
+--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
++++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
+@@ -81,7 +81,12 @@ static inline void __activate_traps_common(struct kvm_vcpu *vcpu)
+ 	 * EL1 instead of being trapped to EL2.
+ 	 */
+ 	if (kvm_arm_support_pmu_v3()) {
++		struct kvm_cpu_context *hctxt;
++
+ 		write_sysreg(0, pmselr_el0);
++
++		hctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
++		ctxt_sys_reg(hctxt, PMUSERENR_EL0) = read_sysreg(pmuserenr_el0);
+ 		write_sysreg(ARMV8_PMU_USERENR_MASK, pmuserenr_el0);
+ 	}
+ 
+@@ -105,8 +110,12 @@ static inline void __deactivate_traps_common(struct kvm_vcpu *vcpu)
+ 	write_sysreg(vcpu->arch.mdcr_el2_host, mdcr_el2);
+ 
+ 	write_sysreg(0, hstr_el2);
+-	if (kvm_arm_support_pmu_v3())
+-		write_sysreg(0, pmuserenr_el0);
++	if (kvm_arm_support_pmu_v3()) {
++		struct kvm_cpu_context *hctxt;
++
++		hctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
++		write_sysreg(ctxt_sys_reg(hctxt, PMUSERENR_EL0), pmuserenr_el0);
++	}
+ 
+ 	if (cpus_have_final_cap(ARM64_SME)) {
+ 		sysreg_clear_set_s(SYS_HFGRTR_EL2, 0,
+diff --git a/arch/arm64/kvm/vgic/vgic-init.c b/arch/arm64/kvm/vgic/vgic-init.c
+index c199ba2f192ef..06ca3b722f260 100644
+--- a/arch/arm64/kvm/vgic/vgic-init.c
++++ b/arch/arm64/kvm/vgic/vgic-init.c
+@@ -446,6 +446,7 @@ int vgic_lazy_init(struct kvm *kvm)
+ int kvm_vgic_map_resources(struct kvm *kvm)
+ {
+ 	struct vgic_dist *dist = &kvm->arch.vgic;
++	enum vgic_type type;
+ 	gpa_t dist_base;
+ 	int ret = 0;
+ 
+@@ -460,10 +461,13 @@ int kvm_vgic_map_resources(struct kvm *kvm)
+ 	if (!irqchip_in_kernel(kvm))
+ 		goto out;
+ 
+-	if (dist->vgic_model == KVM_DEV_TYPE_ARM_VGIC_V2)
++	if (dist->vgic_model == KVM_DEV_TYPE_ARM_VGIC_V2) {
+ 		ret = vgic_v2_map_resources(kvm);
+-	else
++		type = VGIC_V2;
++	} else {
+ 		ret = vgic_v3_map_resources(kvm);
++		type = VGIC_V3;
++	}
+ 
+ 	if (ret) {
+ 		__kvm_vgic_destroy(kvm);
+@@ -473,8 +477,7 @@ int kvm_vgic_map_resources(struct kvm *kvm)
+ 	dist_base = dist->vgic_dist_base;
+ 	mutex_unlock(&kvm->arch.config_lock);
+ 
+-	ret = vgic_register_dist_iodev(kvm, dist_base,
+-				       kvm_vgic_global_state.type);
++	ret = vgic_register_dist_iodev(kvm, dist_base, type);
+ 	if (ret) {
+ 		kvm_err("Unable to register VGIC dist MMIO regions\n");
+ 		kvm_vgic_destroy(kvm);
+diff --git a/arch/riscv/Makefile b/arch/riscv/Makefile
+index b05e833a022d1..d46b6722710f9 100644
+--- a/arch/riscv/Makefile
++++ b/arch/riscv/Makefile
+@@ -7,7 +7,7 @@
+ #
+ 
+ OBJCOPYFLAGS    := -O binary
+-LDFLAGS_vmlinux :=
++LDFLAGS_vmlinux := -z norelro
+ ifeq ($(CONFIG_DYNAMIC_FTRACE),y)
+ 	LDFLAGS_vmlinux := --no-relax
+ 	KBUILD_CPPFLAGS += -DCC_USING_PATCHABLE_FUNCTION_ENTRY
+diff --git a/arch/s390/purgatory/Makefile b/arch/s390/purgatory/Makefile
+index 32573b4f9bd20..cc8cf5abea158 100644
+--- a/arch/s390/purgatory/Makefile
++++ b/arch/s390/purgatory/Makefile
+@@ -26,6 +26,7 @@ KBUILD_CFLAGS += -Wno-pointer-sign -Wno-sign-compare
+ KBUILD_CFLAGS += -fno-zero-initialized-in-bss -fno-builtin -ffreestanding
+ KBUILD_CFLAGS += -Os -m64 -msoft-float -fno-common
+ KBUILD_CFLAGS += -fno-stack-protector
++KBUILD_CFLAGS += -DDISABLE_BRANCH_PROFILING
+ KBUILD_CFLAGS += $(CLANG_FLAGS)
+ KBUILD_CFLAGS += $(call cc-option,-fno-PIE)
+ KBUILD_AFLAGS := $(filter-out -DCC_USING_EXPOLINE,$(KBUILD_AFLAGS))
+diff --git a/arch/x86/Makefile b/arch/x86/Makefile
+index b39975977c037..fdc2e3abd6152 100644
+--- a/arch/x86/Makefile
++++ b/arch/x86/Makefile
+@@ -305,6 +305,18 @@ ifeq ($(RETPOLINE_CFLAGS),)
+ endif
+ endif
+ 
++ifdef CONFIG_UNWINDER_ORC
++orc_hash_h := arch/$(SRCARCH)/include/generated/asm/orc_hash.h
++orc_hash_sh := $(srctree)/scripts/orc_hash.sh
++targets += $(orc_hash_h)
++quiet_cmd_orc_hash = GEN     $@
++      cmd_orc_hash = mkdir -p $(dir $@); \
++		     $(CONFIG_SHELL) $(orc_hash_sh) < $< > $@
++$(orc_hash_h): $(srctree)/arch/x86/include/asm/orc_types.h $(orc_hash_sh) FORCE
++	$(call if_changed,orc_hash)
++archprepare: $(orc_hash_h)
++endif
++
+ archclean:
+ 	$(Q)rm -rf $(objtree)/arch/i386
+ 	$(Q)rm -rf $(objtree)/arch/x86_64
+diff --git a/arch/x86/include/asm/Kbuild b/arch/x86/include/asm/Kbuild
+index 1e51650b79d7c..4f1ce5fc4e194 100644
+--- a/arch/x86/include/asm/Kbuild
++++ b/arch/x86/include/asm/Kbuild
+@@ -1,6 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ 
+ 
++generated-y += orc_hash.h
+ generated-y += syscalls_32.h
+ generated-y += syscalls_64.h
+ generated-y += syscalls_x32.h
+diff --git a/arch/x86/include/asm/orc_header.h b/arch/x86/include/asm/orc_header.h
+new file mode 100644
+index 0000000000000..07bacf3e160ea
+--- /dev/null
++++ b/arch/x86/include/asm/orc_header.h
+@@ -0,0 +1,19 @@
++/* SPDX-License-Identifier: GPL-2.0-or-later */
++/* Copyright (c) Meta Platforms, Inc. and affiliates. */
++
++#ifndef _ORC_HEADER_H
++#define _ORC_HEADER_H
++
++#include <linux/types.h>
++#include <linux/compiler.h>
++#include <asm/orc_hash.h>
++
++/*
++ * The header is currently a 20-byte hash of the ORC entry definition; see
++ * scripts/orc_hash.sh.
++ */
++#define ORC_HEADER					\
++	__used __section(".orc_header") __aligned(4)	\
++	static const u8 orc_header[] = { ORC_HASH }
++
++#endif /* _ORC_HEADER_H */
+diff --git a/arch/x86/kernel/apic/x2apic_phys.c b/arch/x86/kernel/apic/x2apic_phys.c
+index 6bde05a86b4ed..896bc41cb2ba7 100644
+--- a/arch/x86/kernel/apic/x2apic_phys.c
++++ b/arch/x86/kernel/apic/x2apic_phys.c
+@@ -97,7 +97,10 @@ static void init_x2apic_ldr(void)
+ 
+ static int x2apic_phys_probe(void)
+ {
+-	if (x2apic_mode && (x2apic_phys || x2apic_fadt_phys()))
++	if (!x2apic_mode)
++		return 0;
++
++	if (x2apic_phys || x2apic_fadt_phys())
+ 		return 1;
+ 
+ 	return apic == &apic_x2apic_phys;
+diff --git a/arch/x86/kernel/unwind_orc.c b/arch/x86/kernel/unwind_orc.c
+index 37307b40f8daf..c960f250624ab 100644
+--- a/arch/x86/kernel/unwind_orc.c
++++ b/arch/x86/kernel/unwind_orc.c
+@@ -7,6 +7,9 @@
+ #include <asm/unwind.h>
+ #include <asm/orc_types.h>
+ #include <asm/orc_lookup.h>
++#include <asm/orc_header.h>
++
++ORC_HEADER;
+ 
+ #define orc_warn(fmt, ...) \
+ 	printk_deferred_once(KERN_WARNING "WARNING: " fmt, ##__VA_ARGS__)
+diff --git a/arch/x86/mm/kaslr.c b/arch/x86/mm/kaslr.c
+index 557f0fe25dff4..37db264866b64 100644
+--- a/arch/x86/mm/kaslr.c
++++ b/arch/x86/mm/kaslr.c
+@@ -172,10 +172,10 @@ void __meminit init_trampoline_kaslr(void)
+ 		set_p4d(p4d_tramp,
+ 			__p4d(_KERNPG_TABLE | __pa(pud_page_tramp)));
+ 
+-		set_pgd(&trampoline_pgd_entry,
+-			__pgd(_KERNPG_TABLE | __pa(p4d_page_tramp)));
++		trampoline_pgd_entry =
++			__pgd(_KERNPG_TABLE | __pa(p4d_page_tramp));
+ 	} else {
+-		set_pgd(&trampoline_pgd_entry,
+-			__pgd(_KERNPG_TABLE | __pa(pud_page_tramp)));
++		trampoline_pgd_entry =
++			__pgd(_KERNPG_TABLE | __pa(pud_page_tramp));
+ 	}
+ }
+diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
+index 1056bbf55b172..438adb695daab 100644
+--- a/arch/x86/net/bpf_jit_comp.c
++++ b/arch/x86/net/bpf_jit_comp.c
+@@ -2570,7 +2570,7 @@ out_image:
+ 	}
+ 
+ 	if (bpf_jit_enable > 1)
+-		bpf_jit_dump(prog->len, proglen, pass + 1, image);
++		bpf_jit_dump(prog->len, proglen, pass + 1, rw_image);
+ 
+ 	if (image) {
+ 		if (!prog->is_func || extra_pass) {
+diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
+index dd6d1c0117b15..581aa08e34a8e 100644
+--- a/block/blk-cgroup.c
++++ b/block/blk-cgroup.c
+@@ -907,6 +907,7 @@ static void __blkcg_rstat_flush(struct blkcg *blkcg, int cpu)
+ 	struct llist_head *lhead = per_cpu_ptr(blkcg->lhead, cpu);
+ 	struct llist_node *lnode;
+ 	struct blkg_iostat_set *bisc, *next_bisc;
++	unsigned long flags;
+ 
+ 	rcu_read_lock();
+ 
+@@ -920,7 +921,7 @@ static void __blkcg_rstat_flush(struct blkcg *blkcg, int cpu)
+ 	 * When flushing from cgroup, cgroup_rstat_lock is always held, so
+ 	 * this lock won't cause contention most of time.
+ 	 */
+-	raw_spin_lock(&blkg_stat_lock);
++	raw_spin_lock_irqsave(&blkg_stat_lock, flags);
+ 
+ 	/*
+ 	 * Iterate only the iostat_cpu's queued in the lockless list.
+@@ -946,7 +947,7 @@ static void __blkcg_rstat_flush(struct blkcg *blkcg, int cpu)
+ 			blkcg_iostat_update(parent, &blkg->iostat.cur,
+ 					    &blkg->iostat.last);
+ 	}
+-	raw_spin_unlock(&blkg_stat_lock);
++	raw_spin_unlock_irqrestore(&blkg_stat_lock, flags);
+ out:
+ 	rcu_read_unlock();
+ }
+diff --git a/drivers/acpi/acpica/achware.h b/drivers/acpi/acpica/achware.h
+index 6f2787506b50c..c1b6e7bf4fcb5 100644
+--- a/drivers/acpi/acpica/achware.h
++++ b/drivers/acpi/acpica/achware.h
+@@ -101,8 +101,6 @@ acpi_status
+ acpi_hw_get_gpe_status(struct acpi_gpe_event_info *gpe_event_info,
+ 		       acpi_event_status *event_status);
+ 
+-acpi_status acpi_hw_disable_all_gpes(void);
+-
+ acpi_status acpi_hw_enable_all_runtime_gpes(void);
+ 
+ acpi_status acpi_hw_enable_all_wakeup_gpes(void);
+diff --git a/drivers/acpi/sleep.c b/drivers/acpi/sleep.c
+index 4ca6672512722..539c12fbd2f14 100644
+--- a/drivers/acpi/sleep.c
++++ b/drivers/acpi/sleep.c
+@@ -636,11 +636,19 @@ static int acpi_suspend_enter(suspend_state_t pm_state)
+ 	}
+ 
+ 	/*
+-	 * Disable and clear GPE status before interrupt is enabled. Some GPEs
+-	 * (like wakeup GPE) haven't handler, this can avoid such GPE misfire.
+-	 * acpi_leave_sleep_state will reenable specific GPEs later
++	 * Disable all GPE and clear their status bits before interrupts are
++	 * enabled. Some GPEs (like wakeup GPEs) have no handlers and this can
++	 * prevent them from producing spurious interrups.
++	 *
++	 * acpi_leave_sleep_state() will reenable specific GPEs later.
++	 *
++	 * Because this code runs on one CPU with disabled interrupts (all of
++	 * the other CPUs are offline at this time), it need not acquire any
++	 * sleeping locks which may trigger an implicit preemption point even
++	 * if there is no contention, so avoid doing that by using a low-level
++	 * library routine here.
+ 	 */
+-	acpi_disable_all_gpes();
++	acpi_hw_disable_all_gpes();
+ 	/* Allow EC transactions to happen. */
+ 	acpi_ec_unblock_transactions();
+ 
+diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
+index 14c17c3bda4e6..d37df499c9fa6 100644
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -5348,7 +5348,7 @@ struct ata_port *ata_port_alloc(struct ata_host *host)
+ 
+ 	mutex_init(&ap->scsi_scan_mutex);
+ 	INIT_DELAYED_WORK(&ap->hotplug_task, ata_scsi_hotplug);
+-	INIT_WORK(&ap->scsi_rescan_task, ata_scsi_dev_rescan);
++	INIT_DELAYED_WORK(&ap->scsi_rescan_task, ata_scsi_dev_rescan);
+ 	INIT_LIST_HEAD(&ap->eh_done_q);
+ 	init_waitqueue_head(&ap->eh_wait_q);
+ 	init_completion(&ap->park_req_pending);
+@@ -5954,6 +5954,7 @@ static void ata_port_detach(struct ata_port *ap)
+ 	WARN_ON(!(ap->pflags & ATA_PFLAG_UNLOADED));
+ 
+ 	cancel_delayed_work_sync(&ap->hotplug_task);
++	cancel_delayed_work_sync(&ap->scsi_rescan_task);
+ 
+  skip_eh:
+ 	/* clean up zpodd on port removal */
+diff --git a/drivers/ata/libata-eh.c b/drivers/ata/libata-eh.c
+index a6c9018118027..6f8d141915939 100644
+--- a/drivers/ata/libata-eh.c
++++ b/drivers/ata/libata-eh.c
+@@ -2984,7 +2984,7 @@ static int ata_eh_revalidate_and_attach(struct ata_link *link,
+ 			ehc->i.flags |= ATA_EHI_SETMODE;
+ 
+ 			/* schedule the scsi_rescan_device() here */
+-			schedule_work(&(ap->scsi_rescan_task));
++			schedule_delayed_work(&ap->scsi_rescan_task, 0);
+ 		} else if (dev->class == ATA_DEV_UNKNOWN &&
+ 			   ehc->tries[dev->devno] &&
+ 			   ata_class_enabled(ehc->classes[dev->devno])) {
+diff --git a/drivers/ata/libata-scsi.c b/drivers/ata/libata-scsi.c
+index a9e768fdae4e3..51673d95e5b22 100644
+--- a/drivers/ata/libata-scsi.c
++++ b/drivers/ata/libata-scsi.c
+@@ -4597,10 +4597,11 @@ int ata_scsi_user_scan(struct Scsi_Host *shost, unsigned int channel,
+ void ata_scsi_dev_rescan(struct work_struct *work)
+ {
+ 	struct ata_port *ap =
+-		container_of(work, struct ata_port, scsi_rescan_task);
++		container_of(work, struct ata_port, scsi_rescan_task.work);
+ 	struct ata_link *link;
+ 	struct ata_device *dev;
+ 	unsigned long flags;
++	bool delay_rescan = false;
+ 
+ 	mutex_lock(&ap->scsi_scan_mutex);
+ 	spin_lock_irqsave(ap->lock, flags);
+@@ -4614,6 +4615,21 @@ void ata_scsi_dev_rescan(struct work_struct *work)
+ 			if (scsi_device_get(sdev))
+ 				continue;
+ 
++			/*
++			 * If the rescan work was scheduled because of a resume
++			 * event, the port is already fully resumed, but the
++			 * SCSI device may not yet be fully resumed. In such
++			 * case, executing scsi_rescan_device() may cause a
++			 * deadlock with the PM code on device_lock(). Prevent
++			 * this by giving up and retrying rescan after a short
++			 * delay.
++			 */
++			delay_rescan = sdev->sdev_gendev.power.is_suspended;
++			if (delay_rescan) {
++				scsi_device_put(sdev);
++				break;
++			}
++
+ 			spin_unlock_irqrestore(ap->lock, flags);
+ 			scsi_rescan_device(&(sdev->sdev_gendev));
+ 			scsi_device_put(sdev);
+@@ -4623,4 +4639,8 @@ void ata_scsi_dev_rescan(struct work_struct *work)
+ 
+ 	spin_unlock_irqrestore(ap->lock, flags);
+ 	mutex_unlock(&ap->scsi_scan_mutex);
++
++	if (delay_rescan)
++		schedule_delayed_work(&ap->scsi_rescan_task,
++				      msecs_to_jiffies(5));
+ }
+diff --git a/drivers/base/regmap/regmap-spi-avmm.c b/drivers/base/regmap/regmap-spi-avmm.c
+index 4c2b94b3e30be..6af692844c196 100644
+--- a/drivers/base/regmap/regmap-spi-avmm.c
++++ b/drivers/base/regmap/regmap-spi-avmm.c
+@@ -660,7 +660,7 @@ static const struct regmap_bus regmap_spi_avmm_bus = {
+ 	.reg_format_endian_default = REGMAP_ENDIAN_NATIVE,
+ 	.val_format_endian_default = REGMAP_ENDIAN_NATIVE,
+ 	.max_raw_read = SPI_AVMM_VAL_SIZE * MAX_READ_CNT,
+-	.max_raw_write = SPI_AVMM_VAL_SIZE * MAX_WRITE_CNT,
++	.max_raw_write = SPI_AVMM_REG_SIZE + SPI_AVMM_VAL_SIZE * MAX_WRITE_CNT,
+ 	.free_context = spi_avmm_bridge_ctx_free,
+ };
+ 
+diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
+index 14491952047f5..3b6b4cb400f42 100644
+--- a/drivers/block/null_blk/main.c
++++ b/drivers/block/null_blk/main.c
+@@ -2212,6 +2212,7 @@ static void null_destroy_dev(struct nullb *nullb)
+ 	struct nullb_device *dev = nullb->dev;
+ 
+ 	null_del_dev(nullb);
++	null_free_device_storage(dev, false);
+ 	null_free_dev(dev);
+ }
+ 
+diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
+index 2b918e28acaac..b47358da92a23 100644
+--- a/drivers/block/virtio_blk.c
++++ b/drivers/block/virtio_blk.c
+@@ -348,63 +348,33 @@ static inline void virtblk_request_done(struct request *req)
+ 	blk_mq_end_request(req, status);
+ }
+ 
+-static void virtblk_complete_batch(struct io_comp_batch *iob)
+-{
+-	struct request *req;
+-
+-	rq_list_for_each(&iob->req_list, req) {
+-		virtblk_unmap_data(req, blk_mq_rq_to_pdu(req));
+-		virtblk_cleanup_cmd(req);
+-	}
+-	blk_mq_end_request_batch(iob);
+-}
+-
+-static int virtblk_handle_req(struct virtio_blk_vq *vq,
+-			      struct io_comp_batch *iob)
+-{
+-	struct virtblk_req *vbr;
+-	int req_done = 0;
+-	unsigned int len;
+-
+-	while ((vbr = virtqueue_get_buf(vq->vq, &len)) != NULL) {
+-		struct request *req = blk_mq_rq_from_pdu(vbr);
+-
+-		if (likely(!blk_should_fake_timeout(req->q)) &&
+-		    !blk_mq_complete_request_remote(req) &&
+-		    !blk_mq_add_to_batch(req, iob, virtblk_vbr_status(vbr),
+-					 virtblk_complete_batch))
+-			virtblk_request_done(req);
+-		req_done++;
+-	}
+-
+-	return req_done;
+-}
+-
+ static void virtblk_done(struct virtqueue *vq)
+ {
+ 	struct virtio_blk *vblk = vq->vdev->priv;
+-	struct virtio_blk_vq *vblk_vq = &vblk->vqs[vq->index];
+-	int req_done = 0;
++	bool req_done = false;
++	int qid = vq->index;
++	struct virtblk_req *vbr;
+ 	unsigned long flags;
+-	DEFINE_IO_COMP_BATCH(iob);
++	unsigned int len;
+ 
+-	spin_lock_irqsave(&vblk_vq->lock, flags);
++	spin_lock_irqsave(&vblk->vqs[qid].lock, flags);
+ 	do {
+ 		virtqueue_disable_cb(vq);
+-		req_done += virtblk_handle_req(vblk_vq, &iob);
++		while ((vbr = virtqueue_get_buf(vblk->vqs[qid].vq, &len)) != NULL) {
++			struct request *req = blk_mq_rq_from_pdu(vbr);
+ 
++			if (likely(!blk_should_fake_timeout(req->q)))
++				blk_mq_complete_request(req);
++			req_done = true;
++		}
+ 		if (unlikely(virtqueue_is_broken(vq)))
+ 			break;
+ 	} while (!virtqueue_enable_cb(vq));
+ 
+-	if (req_done) {
+-		if (!rq_list_empty(iob.req_list))
+-			iob.complete(&iob);
+-
+-		/* In case queue is stopped waiting for more buffers. */
++	/* In case queue is stopped waiting for more buffers. */
++	if (req_done)
+ 		blk_mq_start_stopped_hw_queues(vblk->disk->queue, true);
+-	}
+-	spin_unlock_irqrestore(&vblk_vq->lock, flags);
++	spin_unlock_irqrestore(&vblk->vqs[qid].lock, flags);
+ }
+ 
+ static void virtio_commit_rqs(struct blk_mq_hw_ctx *hctx)
+@@ -1283,15 +1253,37 @@ static void virtblk_map_queues(struct blk_mq_tag_set *set)
+ 	}
+ }
+ 
++static void virtblk_complete_batch(struct io_comp_batch *iob)
++{
++	struct request *req;
++
++	rq_list_for_each(&iob->req_list, req) {
++		virtblk_unmap_data(req, blk_mq_rq_to_pdu(req));
++		virtblk_cleanup_cmd(req);
++	}
++	blk_mq_end_request_batch(iob);
++}
++
+ static int virtblk_poll(struct blk_mq_hw_ctx *hctx, struct io_comp_batch *iob)
+ {
+ 	struct virtio_blk *vblk = hctx->queue->queuedata;
+ 	struct virtio_blk_vq *vq = get_virtio_blk_vq(hctx);
++	struct virtblk_req *vbr;
+ 	unsigned long flags;
++	unsigned int len;
+ 	int found = 0;
+ 
+ 	spin_lock_irqsave(&vq->lock, flags);
+-	found = virtblk_handle_req(vq, iob);
++
++	while ((vbr = virtqueue_get_buf(vq->vq, &len)) != NULL) {
++		struct request *req = blk_mq_rq_from_pdu(vbr);
++
++		found++;
++		if (!blk_mq_complete_request_remote(req) &&
++		    !blk_mq_add_to_batch(req, iob, virtblk_vbr_status(vbr),
++						virtblk_complete_batch))
++			virtblk_request_done(req);
++	}
+ 
+ 	if (found)
+ 		blk_mq_start_stopped_hw_queues(vblk->disk->queue, true);
+diff --git a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c
+index e05d2b227de37..55f6ff1e05aa5 100644
+--- a/drivers/char/tpm/tpm_tis_core.c
++++ b/drivers/char/tpm/tpm_tis_core.c
+@@ -772,7 +772,9 @@ static irqreturn_t tis_int_handler(int dummy, void *dev_id)
+ 		wake_up_interruptible(&priv->int_queue);
+ 
+ 	/* Clear interrupts handled with TPM_EOI */
++	tpm_tis_request_locality(chip, 0);
+ 	rc = tpm_tis_write32(priv, TPM_INT_STATUS(priv->locality), interrupt);
++	tpm_tis_relinquish_locality(chip, 0);
+ 	if (rc < 0)
+ 		return IRQ_NONE;
+ 
+diff --git a/drivers/dma-buf/udmabuf.c b/drivers/dma-buf/udmabuf.c
+index 740d6e426ee95..1e946c3c12607 100644
+--- a/drivers/dma-buf/udmabuf.c
++++ b/drivers/dma-buf/udmabuf.c
+@@ -12,7 +12,6 @@
+ #include <linux/shmem_fs.h>
+ #include <linux/slab.h>
+ #include <linux/udmabuf.h>
+-#include <linux/hugetlb.h>
+ #include <linux/vmalloc.h>
+ #include <linux/iosys-map.h>
+ 
+@@ -207,9 +206,7 @@ static long udmabuf_create(struct miscdevice *device,
+ 	struct udmabuf *ubuf;
+ 	struct dma_buf *buf;
+ 	pgoff_t pgoff, pgcnt, pgidx, pgbuf = 0, pglimit;
+-	struct page *page, *hpage = NULL;
+-	pgoff_t subpgoff, maxsubpgs;
+-	struct hstate *hpstate;
++	struct page *page;
+ 	int seals, ret = -EINVAL;
+ 	u32 i, flags;
+ 
+@@ -245,7 +242,7 @@ static long udmabuf_create(struct miscdevice *device,
+ 		if (!memfd)
+ 			goto err;
+ 		mapping = memfd->f_mapping;
+-		if (!shmem_mapping(mapping) && !is_file_hugepages(memfd))
++		if (!shmem_mapping(mapping))
+ 			goto err;
+ 		seals = memfd_fcntl(memfd, F_GET_SEALS, 0);
+ 		if (seals == -EINVAL)
+@@ -256,48 +253,16 @@ static long udmabuf_create(struct miscdevice *device,
+ 			goto err;
+ 		pgoff = list[i].offset >> PAGE_SHIFT;
+ 		pgcnt = list[i].size   >> PAGE_SHIFT;
+-		if (is_file_hugepages(memfd)) {
+-			hpstate = hstate_file(memfd);
+-			pgoff = list[i].offset >> huge_page_shift(hpstate);
+-			subpgoff = (list[i].offset &
+-				    ~huge_page_mask(hpstate)) >> PAGE_SHIFT;
+-			maxsubpgs = huge_page_size(hpstate) >> PAGE_SHIFT;
+-		}
+ 		for (pgidx = 0; pgidx < pgcnt; pgidx++) {
+-			if (is_file_hugepages(memfd)) {
+-				if (!hpage) {
+-					hpage = find_get_page_flags(mapping, pgoff,
+-								    FGP_ACCESSED);
+-					if (!hpage) {
+-						ret = -EINVAL;
+-						goto err;
+-					}
+-				}
+-				page = hpage + subpgoff;
+-				get_page(page);
+-				subpgoff++;
+-				if (subpgoff == maxsubpgs) {
+-					put_page(hpage);
+-					hpage = NULL;
+-					subpgoff = 0;
+-					pgoff++;
+-				}
+-			} else {
+-				page = shmem_read_mapping_page(mapping,
+-							       pgoff + pgidx);
+-				if (IS_ERR(page)) {
+-					ret = PTR_ERR(page);
+-					goto err;
+-				}
++			page = shmem_read_mapping_page(mapping, pgoff + pgidx);
++			if (IS_ERR(page)) {
++				ret = PTR_ERR(page);
++				goto err;
+ 			}
+ 			ubuf->pages[pgbuf++] = page;
+ 		}
+ 		fput(memfd);
+ 		memfd = NULL;
+-		if (hpage) {
+-			put_page(hpage);
+-			hpage = NULL;
+-		}
+ 	}
+ 
+ 	exp_info.ops  = &udmabuf_ops;
+diff --git a/drivers/firmware/efi/efi.c b/drivers/firmware/efi/efi.c
+index abeff7dc0b581..34b9e78765386 100644
+--- a/drivers/firmware/efi/efi.c
++++ b/drivers/firmware/efi/efi.c
+@@ -361,24 +361,6 @@ static void __init efi_debugfs_init(void)
+ static inline void efi_debugfs_init(void) {}
+ #endif
+ 
+-static void refresh_nv_rng_seed(struct work_struct *work)
+-{
+-	u8 seed[EFI_RANDOM_SEED_SIZE];
+-
+-	get_random_bytes(seed, sizeof(seed));
+-	efi.set_variable(L"RandomSeed", &LINUX_EFI_RANDOM_SEED_TABLE_GUID,
+-			 EFI_VARIABLE_NON_VOLATILE | EFI_VARIABLE_BOOTSERVICE_ACCESS |
+-			 EFI_VARIABLE_RUNTIME_ACCESS, sizeof(seed), seed);
+-	memzero_explicit(seed, sizeof(seed));
+-}
+-static int refresh_nv_rng_seed_notification(struct notifier_block *nb, unsigned long action, void *data)
+-{
+-	static DECLARE_WORK(work, refresh_nv_rng_seed);
+-	schedule_work(&work);
+-	return NOTIFY_DONE;
+-}
+-static struct notifier_block refresh_nv_rng_seed_nb = { .notifier_call = refresh_nv_rng_seed_notification };
+-
+ /*
+  * We register the efi subsystem with the firmware subsystem and the
+  * efivars subsystem with the efi subsystem, if the system was booted with
+@@ -451,9 +433,6 @@ static int __init efisubsys_init(void)
+ 		platform_device_register_simple("efi_secret", 0, NULL, 0);
+ #endif
+ 
+-	if (efi_rt_services_supported(EFI_RT_SUPPORTED_SET_VARIABLE))
+-		execute_with_initialized_rng(&refresh_nv_rng_seed_nb);
+-
+ 	return 0;
+ 
+ err_remove_group:
+diff --git a/drivers/gpio/gpio-sifive.c b/drivers/gpio/gpio-sifive.c
+index bc5660f61c570..0f1e1226ebbe8 100644
+--- a/drivers/gpio/gpio-sifive.c
++++ b/drivers/gpio/gpio-sifive.c
+@@ -221,8 +221,12 @@ static int sifive_gpio_probe(struct platform_device *pdev)
+ 		return -ENODEV;
+ 	}
+ 
+-	for (i = 0; i < ngpio; i++)
+-		chip->irq_number[i] = platform_get_irq(pdev, i);
++	for (i = 0; i < ngpio; i++) {
++		ret = platform_get_irq(pdev, i);
++		if (ret < 0)
++			return ret;
++		chip->irq_number[i] = ret;
++	}
+ 
+ 	ret = bgpio_init(&chip->gc, dev, 4,
+ 			 chip->base + SIFIVE_GPIO_INPUT_VAL,
+diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
+index 4472214fcd43a..ce8f71e06eb62 100644
+--- a/drivers/gpio/gpiolib.c
++++ b/drivers/gpio/gpiolib.c
+@@ -1716,7 +1716,7 @@ static void gpiochip_irqchip_remove(struct gpio_chip *gc)
+ 	}
+ 
+ 	/* Remove all IRQ mappings and delete the domain */
+-	if (gc->irq.domain) {
++	if (!gc->irq.domain_is_allocated_externally && gc->irq.domain) {
+ 		unsigned int irq;
+ 
+ 		for (offset = 0; offset < gc->ngpio; offset++) {
+@@ -1762,6 +1762,15 @@ int gpiochip_irqchip_add_domain(struct gpio_chip *gc,
+ 
+ 	gc->to_irq = gpiochip_to_irq;
+ 	gc->irq.domain = domain;
++	gc->irq.domain_is_allocated_externally = true;
++
++	/*
++	 * Using barrier() here to prevent compiler from reordering
++	 * gc->irq.initialized before adding irqdomain.
++	 */
++	barrier();
++
++	gc->irq.initialized = true;
+ 
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 289c261ffe876..2cbd6949804f5 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -350,6 +350,35 @@ static inline bool is_dc_timing_adjust_needed(struct dm_crtc_state *old_state,
+ 		return false;
+ }
+ 
++/**
++ * update_planes_and_stream_adapter() - Send planes to be updated in DC
++ *
++ * DC has a generic way to update planes and stream via
++ * dc_update_planes_and_stream function; however, DM might need some
++ * adjustments and preparation before calling it. This function is a wrapper
++ * for the dc_update_planes_and_stream that does any required configuration
++ * before passing control to DC.
++ */
++static inline bool update_planes_and_stream_adapter(struct dc *dc,
++						    int update_type,
++						    int planes_count,
++						    struct dc_stream_state *stream,
++						    struct dc_stream_update *stream_update,
++						    struct dc_surface_update *array_of_surface_update)
++{
++	/*
++	 * Previous frame finished and HW is ready for optimization.
++	 */
++	if (update_type == UPDATE_TYPE_FAST)
++		dc_post_update_surfaces_to_stream(dc);
++
++	return dc_update_planes_and_stream(dc,
++					   array_of_surface_update,
++					   planes_count,
++					   stream,
++					   stream_update);
++}
++
+ /**
+  * dm_pflip_high_irq() - Handle pageflip interrupt
+  * @interrupt_params: ignored
+@@ -2683,10 +2712,13 @@ static void dm_gpureset_commit_state(struct dc_state *dc_state,
+ 			bundle->surface_updates[m].surface->force_full_update =
+ 				true;
+ 		}
+-		dc_commit_updates_for_stream(
+-			dm->dc, bundle->surface_updates,
+-			dc_state->stream_status->plane_count,
+-			dc_state->streams[k], &bundle->stream_update, dc_state);
++
++		update_planes_and_stream_adapter(dm->dc,
++					 UPDATE_TYPE_FULL,
++					 dc_state->stream_status->plane_count,
++					 dc_state->streams[k],
++					 &bundle->stream_update,
++					 bundle->surface_updates);
+ 	}
+ 
+ cleanup:
+@@ -8181,6 +8213,12 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
+ 		if (acrtc_state->abm_level != dm_old_crtc_state->abm_level)
+ 			bundle->stream_update.abm_level = &acrtc_state->abm_level;
+ 
++		mutex_lock(&dm->dc_lock);
++		if ((acrtc_state->update_type > UPDATE_TYPE_FAST) &&
++				acrtc_state->stream->link->psr_settings.psr_allow_active)
++			amdgpu_dm_psr_disable(acrtc_state->stream);
++		mutex_unlock(&dm->dc_lock);
++
+ 		/*
+ 		 * If FreeSync state on the stream has changed then we need to
+ 		 * re-adjust the min/max bounds now that DC doesn't handle this
+@@ -8194,16 +8232,12 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
+ 			spin_unlock_irqrestore(&pcrtc->dev->event_lock, flags);
+ 		}
+ 		mutex_lock(&dm->dc_lock);
+-		if ((acrtc_state->update_type > UPDATE_TYPE_FAST) &&
+-				acrtc_state->stream->link->psr_settings.psr_allow_active)
+-			amdgpu_dm_psr_disable(acrtc_state->stream);
+-
+-		dc_commit_updates_for_stream(dm->dc,
+-						     bundle->surface_updates,
+-						     planes_count,
+-						     acrtc_state->stream,
+-						     &bundle->stream_update,
+-						     dc_state);
++		update_planes_and_stream_adapter(dm->dc,
++					 acrtc_state->update_type,
++					 planes_count,
++					 acrtc_state->stream,
++					 &bundle->stream_update,
++					 bundle->surface_updates);
+ 
+ 		/**
+ 		 * Enable or disable the interrupts on the backend.
+@@ -8741,12 +8775,11 @@ static void amdgpu_dm_atomic_commit_tail(struct drm_atomic_state *state)
+ 
+ 
+ 		mutex_lock(&dm->dc_lock);
+-		dc_commit_updates_for_stream(dm->dc,
+-						     dummy_updates,
+-						     status->plane_count,
+-						     dm_new_crtc_state->stream,
+-						     &stream_update,
+-						     dc_state);
++		dc_update_planes_and_stream(dm->dc,
++					    dummy_updates,
++					    status->plane_count,
++					    dm_new_crtc_state->stream,
++					    &stream_update);
+ 		mutex_unlock(&dm->dc_lock);
+ 	}
+ 
+diff --git a/drivers/gpu/drm/exynos/exynos_drm_g2d.c b/drivers/gpu/drm/exynos/exynos_drm_g2d.c
+index ec784e58da5c1..414e585ec7dd0 100644
+--- a/drivers/gpu/drm/exynos/exynos_drm_g2d.c
++++ b/drivers/gpu/drm/exynos/exynos_drm_g2d.c
+@@ -1335,7 +1335,7 @@ int exynos_g2d_exec_ioctl(struct drm_device *drm_dev, void *data,
+ 	/* Let the runqueue know that there is work to do. */
+ 	queue_work(g2d->g2d_workq, &g2d->runqueue_work);
+ 
+-	if (runqueue_node->async)
++	if (req->async)
+ 		goto out;
+ 
+ 	wait_for_completion(&runqueue_node->complete);
+diff --git a/drivers/gpu/drm/exynos/exynos_drm_vidi.c b/drivers/gpu/drm/exynos/exynos_drm_vidi.c
+index 4d56c8c799c5a..f5e1adfcaa514 100644
+--- a/drivers/gpu/drm/exynos/exynos_drm_vidi.c
++++ b/drivers/gpu/drm/exynos/exynos_drm_vidi.c
+@@ -469,8 +469,6 @@ static int vidi_remove(struct platform_device *pdev)
+ 	if (ctx->raw_edid != (struct edid *)fake_edid_info) {
+ 		kfree(ctx->raw_edid);
+ 		ctx->raw_edid = NULL;
+-
+-		return -EINVAL;
+ 	}
+ 
+ 	component_del(&pdev->dev, &vidi_component_ops);
+diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c
+index 261fcbae88d78..75d79c3110389 100644
+--- a/drivers/gpu/drm/radeon/radeon_gem.c
++++ b/drivers/gpu/drm/radeon/radeon_gem.c
+@@ -459,7 +459,6 @@ int radeon_gem_set_domain_ioctl(struct drm_device *dev, void *data,
+ 	struct radeon_device *rdev = dev->dev_private;
+ 	struct drm_radeon_gem_set_domain *args = data;
+ 	struct drm_gem_object *gobj;
+-	struct radeon_bo *robj;
+ 	int r;
+ 
+ 	/* for now if someone requests domain CPU -
+@@ -472,13 +471,12 @@ int radeon_gem_set_domain_ioctl(struct drm_device *dev, void *data,
+ 		up_read(&rdev->exclusive_lock);
+ 		return -ENOENT;
+ 	}
+-	robj = gem_to_radeon_bo(gobj);
+ 
+ 	r = radeon_gem_set_domain(gobj, args->read_domains, args->write_domain);
+ 
+ 	drm_gem_object_put(gobj);
+ 	up_read(&rdev->exclusive_lock);
+-	r = radeon_gem_handle_lockup(robj->rdev, r);
++	r = radeon_gem_handle_lockup(rdev, r);
+ 	return r;
+ }
+ 
+diff --git a/drivers/hid/wacom_sys.c b/drivers/hid/wacom_sys.c
+index fb538a6c4add8..aff4a21a46b6a 100644
+--- a/drivers/hid/wacom_sys.c
++++ b/drivers/hid/wacom_sys.c
+@@ -2417,8 +2417,13 @@ static int wacom_parse_and_register(struct wacom *wacom, bool wireless)
+ 		goto fail_quirks;
+ 	}
+ 
+-	if (features->device_type & WACOM_DEVICETYPE_WL_MONITOR)
++	if (features->device_type & WACOM_DEVICETYPE_WL_MONITOR) {
+ 		error = hid_hw_open(hdev);
++		if (error) {
++			hid_err(hdev, "hw open failed\n");
++			goto fail_quirks;
++		}
++	}
+ 
+ 	wacom_set_shared_values(wacom_wac);
+ 	devres_close_group(&hdev->dev, wacom);
+diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c
+index cc23b90cae02f..d95e567a190d2 100644
+--- a/drivers/hv/channel_mgmt.c
++++ b/drivers/hv/channel_mgmt.c
+@@ -829,11 +829,22 @@ static void vmbus_wait_for_unload(void)
+ 		if (completion_done(&vmbus_connection.unload_event))
+ 			goto completed;
+ 
+-		for_each_online_cpu(cpu) {
++		for_each_present_cpu(cpu) {
+ 			struct hv_per_cpu_context *hv_cpu
+ 				= per_cpu_ptr(hv_context.cpu_context, cpu);
+ 
++			/*
++			 * In a CoCo VM the synic_message_page is not allocated
++			 * in hv_synic_alloc(). Instead it is set/cleared in
++			 * hv_synic_enable_regs() and hv_synic_disable_regs()
++			 * such that it is set only when the CPU is online. If
++			 * not all present CPUs are online, the message page
++			 * might be NULL, so skip such CPUs.
++			 */
+ 			page_addr = hv_cpu->synic_message_page;
++			if (!page_addr)
++				continue;
++
+ 			msg = (struct hv_message *)page_addr
+ 				+ VMBUS_MESSAGE_SINT;
+ 
+@@ -867,11 +878,14 @@ completed:
+ 	 * maybe-pending messages on all CPUs to be able to receive new
+ 	 * messages after we reconnect.
+ 	 */
+-	for_each_online_cpu(cpu) {
++	for_each_present_cpu(cpu) {
+ 		struct hv_per_cpu_context *hv_cpu
+ 			= per_cpu_ptr(hv_context.cpu_context, cpu);
+ 
+ 		page_addr = hv_cpu->synic_message_page;
++		if (!page_addr)
++			continue;
++
+ 		msg = (struct hv_message *)page_addr + VMBUS_MESSAGE_SINT;
+ 		msg->header.message_type = HVMSG_NONE;
+ 	}
+diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
+index d24dd65b33d4d..b50cc83e8315e 100644
+--- a/drivers/hv/vmbus_drv.c
++++ b/drivers/hv/vmbus_drv.c
+@@ -1525,7 +1525,7 @@ static int vmbus_bus_init(void)
+ 	ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "hyperv/vmbus:online",
+ 				hv_synic_init, hv_synic_cleanup);
+ 	if (ret < 0)
+-		goto err_cpuhp;
++		goto err_alloc;
+ 	hyperv_cpuhp_online = ret;
+ 
+ 	ret = vmbus_connect();
+@@ -1577,9 +1577,8 @@ static int vmbus_bus_init(void)
+ 
+ err_connect:
+ 	cpuhp_remove_state(hyperv_cpuhp_online);
+-err_cpuhp:
+-	hv_synic_free();
+ err_alloc:
++	hv_synic_free();
+ 	if (vmbus_irq == -1) {
+ 		hv_remove_vmbus_handler();
+ 	} else {
+diff --git a/drivers/i2c/busses/i2c-designware-core.h b/drivers/i2c/busses/i2c-designware-core.h
+index 050d8c63ad3c5..0164d92163308 100644
+--- a/drivers/i2c/busses/i2c-designware-core.h
++++ b/drivers/i2c/busses/i2c-designware-core.h
+@@ -40,6 +40,7 @@
+ #define DW_IC_CON_BUS_CLEAR_CTRL		BIT(11)
+ 
+ #define DW_IC_DATA_CMD_DAT			GENMASK(7, 0)
++#define DW_IC_DATA_CMD_FIRST_DATA_BYTE		BIT(11)
+ 
+ /*
+  * Registers offset
+diff --git a/drivers/i2c/busses/i2c-designware-slave.c b/drivers/i2c/busses/i2c-designware-slave.c
+index cec25054bb244..2e079cf20bb5b 100644
+--- a/drivers/i2c/busses/i2c-designware-slave.c
++++ b/drivers/i2c/busses/i2c-designware-slave.c
+@@ -176,6 +176,10 @@ static irqreturn_t i2c_dw_isr_slave(int this_irq, void *dev_id)
+ 
+ 		do {
+ 			regmap_read(dev->map, DW_IC_DATA_CMD, &tmp);
++			if (tmp & DW_IC_DATA_CMD_FIRST_DATA_BYTE)
++				i2c_slave_event(dev->slave,
++						I2C_SLAVE_WRITE_REQUESTED,
++						&val);
+ 			val = tmp;
+ 			i2c_slave_event(dev->slave, I2C_SLAVE_WRITE_RECEIVED,
+ 					&val);
+diff --git a/drivers/i2c/busses/i2c-imx-lpi2c.c b/drivers/i2c/busses/i2c-imx-lpi2c.c
+index a49b14d52a986..ff12018bc2060 100644
+--- a/drivers/i2c/busses/i2c-imx-lpi2c.c
++++ b/drivers/i2c/busses/i2c-imx-lpi2c.c
+@@ -201,8 +201,8 @@ static void lpi2c_imx_stop(struct lpi2c_imx_struct *lpi2c_imx)
+ /* CLKLO = I2C_CLK_RATIO * CLKHI, SETHOLD = CLKHI, DATAVD = CLKHI/2 */
+ static int lpi2c_imx_config(struct lpi2c_imx_struct *lpi2c_imx)
+ {
+-	u8 prescale, filt, sethold, clkhi, clklo, datavd;
+-	unsigned int clk_rate, clk_cycle;
++	u8 prescale, filt, sethold, datavd;
++	unsigned int clk_rate, clk_cycle, clkhi, clklo;
+ 	enum lpi2c_imx_pincfg pincfg;
+ 	unsigned int temp;
+ 
+diff --git a/drivers/i2c/busses/i2c-mchp-pci1xxxx.c b/drivers/i2c/busses/i2c-mchp-pci1xxxx.c
+index b21ffd6df9276..5ef136c3ecb12 100644
+--- a/drivers/i2c/busses/i2c-mchp-pci1xxxx.c
++++ b/drivers/i2c/busses/i2c-mchp-pci1xxxx.c
+@@ -1118,8 +1118,10 @@ static int pci1xxxx_i2c_resume(struct device *dev)
+ static DEFINE_SIMPLE_DEV_PM_OPS(pci1xxxx_i2c_pm_ops, pci1xxxx_i2c_suspend,
+ 			 pci1xxxx_i2c_resume);
+ 
+-static void pci1xxxx_i2c_shutdown(struct pci1xxxx_i2c *i2c)
++static void pci1xxxx_i2c_shutdown(void *data)
+ {
++	struct pci1xxxx_i2c *i2c = data;
++
+ 	pci1xxxx_i2c_config_padctrl(i2c, false);
+ 	pci1xxxx_i2c_configure_core_reg(i2c, false);
+ }
+@@ -1156,7 +1158,7 @@ static int pci1xxxx_i2c_probe_pci(struct pci_dev *pdev,
+ 	init_completion(&i2c->i2c_xfer_done);
+ 	pci1xxxx_i2c_init(i2c);
+ 
+-	ret = devm_add_action(dev, (void (*)(void *))pci1xxxx_i2c_shutdown, i2c);
++	ret = devm_add_action(dev, pci1xxxx_i2c_shutdown, i2c);
+ 	if (ret)
+ 		return ret;
+ 
+diff --git a/drivers/input/misc/soc_button_array.c b/drivers/input/misc/soc_button_array.c
+index 09489380afda7..e79f5497948b8 100644
+--- a/drivers/input/misc/soc_button_array.c
++++ b/drivers/input/misc/soc_button_array.c
+@@ -108,6 +108,27 @@ static const struct dmi_system_id dmi_use_low_level_irq[] = {
+ 	{} /* Terminating entry */
+ };
+ 
++/*
++ * Some devices have a wrong entry which points to a GPIO which is
++ * required in another driver, so this driver must not claim it.
++ */
++static const struct dmi_system_id dmi_invalid_acpi_index[] = {
++	{
++		/*
++		 * Lenovo Yoga Book X90F / X90L, the PNP0C40 home button entry
++		 * points to a GPIO which is not a home button and which is
++		 * required by the lenovo-yogabook driver.
++		 */
++		.matches = {
++			DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Intel Corporation"),
++			DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "CHERRYVIEW D1 PLATFORM"),
++			DMI_EXACT_MATCH(DMI_PRODUCT_VERSION, "YETI-11"),
++		},
++		.driver_data = (void *)1l,
++	},
++	{} /* Terminating entry */
++};
++
+ /*
+  * Get the Nth GPIO number from the ACPI object.
+  */
+@@ -137,6 +158,8 @@ soc_button_device_create(struct platform_device *pdev,
+ 	struct platform_device *pd;
+ 	struct gpio_keys_button *gpio_keys;
+ 	struct gpio_keys_platform_data *gpio_keys_pdata;
++	const struct dmi_system_id *dmi_id;
++	int invalid_acpi_index = -1;
+ 	int error, gpio, irq;
+ 	int n_buttons = 0;
+ 
+@@ -154,10 +177,17 @@ soc_button_device_create(struct platform_device *pdev,
+ 	gpio_keys = (void *)(gpio_keys_pdata + 1);
+ 	n_buttons = 0;
+ 
++	dmi_id = dmi_first_match(dmi_invalid_acpi_index);
++	if (dmi_id)
++		invalid_acpi_index = (long)dmi_id->driver_data;
++
+ 	for (info = button_info; info->name; info++) {
+ 		if (info->autorepeat != autorepeat)
+ 			continue;
+ 
++		if (info->acpi_index == invalid_acpi_index)
++			continue;
++
+ 		error = soc_button_lookup_gpio(&pdev->dev, info->acpi_index, &gpio, &irq);
+ 		if (error || irq < 0) {
+ 			/*
+diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
+index cd0e3fb78c3ff..e9eac86cddbd2 100644
+--- a/drivers/iommu/amd/iommu.c
++++ b/drivers/iommu/amd/iommu.c
+@@ -2069,10 +2069,6 @@ static struct protection_domain *protection_domain_alloc(unsigned int type)
+ 	int mode = DEFAULT_PGTABLE_LEVEL;
+ 	int ret;
+ 
+-	domain = kzalloc(sizeof(*domain), GFP_KERNEL);
+-	if (!domain)
+-		return NULL;
+-
+ 	/*
+ 	 * Force IOMMU v1 page table when iommu=pt and
+ 	 * when allocating domain for pass-through devices.
+@@ -2088,6 +2084,10 @@ static struct protection_domain *protection_domain_alloc(unsigned int type)
+ 		return NULL;
+ 	}
+ 
++	domain = kzalloc(sizeof(*domain), GFP_KERNEL);
++	if (!domain)
++		return NULL;
++
+ 	switch (pgtable) {
+ 	case AMD_IOMMU_V1:
+ 		ret = protection_domain_init_v1(domain, mode);
+diff --git a/drivers/media/cec/core/cec-adap.c b/drivers/media/cec/core/cec-adap.c
+index 4f5ab3cae8a71..b1512f9c5895c 100644
+--- a/drivers/media/cec/core/cec-adap.c
++++ b/drivers/media/cec/core/cec-adap.c
+@@ -1090,7 +1090,8 @@ void cec_received_msg_ts(struct cec_adapter *adap,
+ 	mutex_lock(&adap->lock);
+ 	dprintk(2, "%s: %*ph\n", __func__, msg->len, msg->msg);
+ 
+-	adap->last_initiator = 0xff;
++	if (!adap->transmit_in_progress)
++		adap->last_initiator = 0xff;
+ 
+ 	/* Check if this message was for us (directed or broadcast). */
+ 	if (!cec_msg_is_broadcast(msg))
+@@ -1582,7 +1583,7 @@ static void cec_claim_log_addrs(struct cec_adapter *adap, bool block)
+  *
+  * This function is called with adap->lock held.
+  */
+-static int cec_adap_enable(struct cec_adapter *adap)
++int cec_adap_enable(struct cec_adapter *adap)
+ {
+ 	bool enable;
+ 	int ret = 0;
+@@ -1592,6 +1593,9 @@ static int cec_adap_enable(struct cec_adapter *adap)
+ 	if (adap->needs_hpd)
+ 		enable = enable && adap->phys_addr != CEC_PHYS_ADDR_INVALID;
+ 
++	if (adap->devnode.unregistered)
++		enable = false;
++
+ 	if (enable == adap->is_enabled)
+ 		return 0;
+ 
+diff --git a/drivers/media/cec/core/cec-core.c b/drivers/media/cec/core/cec-core.c
+index af358e901b5f3..7e153c5cad04f 100644
+--- a/drivers/media/cec/core/cec-core.c
++++ b/drivers/media/cec/core/cec-core.c
+@@ -191,6 +191,8 @@ static void cec_devnode_unregister(struct cec_adapter *adap)
+ 	mutex_lock(&adap->lock);
+ 	__cec_s_phys_addr(adap, CEC_PHYS_ADDR_INVALID, false);
+ 	__cec_s_log_addrs(adap, NULL, false);
++	// Disable the adapter (since adap->devnode.unregistered is true)
++	cec_adap_enable(adap);
+ 	mutex_unlock(&adap->lock);
+ 
+ 	cdev_device_del(&devnode->cdev, &devnode->dev);
+diff --git a/drivers/media/cec/core/cec-priv.h b/drivers/media/cec/core/cec-priv.h
+index b78df931aa74b..ed1f8c67626bf 100644
+--- a/drivers/media/cec/core/cec-priv.h
++++ b/drivers/media/cec/core/cec-priv.h
+@@ -47,6 +47,7 @@ int cec_monitor_pin_cnt_inc(struct cec_adapter *adap);
+ void cec_monitor_pin_cnt_dec(struct cec_adapter *adap);
+ int cec_adap_status(struct seq_file *file, void *priv);
+ int cec_thread_func(void *_adap);
++int cec_adap_enable(struct cec_adapter *adap);
+ void __cec_s_phys_addr(struct cec_adapter *adap, u16 phys_addr, bool block);
+ int __cec_s_log_addrs(struct cec_adapter *adap,
+ 		      struct cec_log_addrs *log_addrs, bool block);
+diff --git a/drivers/mmc/host/bcm2835.c b/drivers/mmc/host/bcm2835.c
+index 8648f7e63ca1a..eea208856ce0d 100644
+--- a/drivers/mmc/host/bcm2835.c
++++ b/drivers/mmc/host/bcm2835.c
+@@ -1403,8 +1403,8 @@ static int bcm2835_probe(struct platform_device *pdev)
+ 	host->max_clk = clk_get_rate(clk);
+ 
+ 	host->irq = platform_get_irq(pdev, 0);
+-	if (host->irq <= 0) {
+-		ret = -EINVAL;
++	if (host->irq < 0) {
++		ret = host->irq;
+ 		goto err;
+ 	}
+ 
+diff --git a/drivers/mmc/host/litex_mmc.c b/drivers/mmc/host/litex_mmc.c
+index 39c6707fdfdbc..9af6b0902efe1 100644
+--- a/drivers/mmc/host/litex_mmc.c
++++ b/drivers/mmc/host/litex_mmc.c
+@@ -649,6 +649,7 @@ static struct platform_driver litex_mmc_driver = {
+ 	.driver = {
+ 		.name = "litex-mmc",
+ 		.of_match_table = litex_match,
++		.probe_type = PROBE_PREFER_ASYNCHRONOUS,
+ 	},
+ };
+ module_platform_driver(litex_mmc_driver);
+diff --git a/drivers/mmc/host/meson-gx-mmc.c b/drivers/mmc/host/meson-gx-mmc.c
+index 2b963a81c2ada..2eda0e496225f 100644
+--- a/drivers/mmc/host/meson-gx-mmc.c
++++ b/drivers/mmc/host/meson-gx-mmc.c
+@@ -1006,11 +1006,8 @@ static irqreturn_t meson_mmc_irq(int irq, void *dev_id)
+ 
+ 		if (data && !cmd->error)
+ 			data->bytes_xfered = data->blksz * data->blocks;
+-		if (meson_mmc_bounce_buf_read(data) ||
+-		    meson_mmc_get_next_command(cmd))
+-			ret = IRQ_WAKE_THREAD;
+-		else
+-			ret = IRQ_HANDLED;
++
++		return IRQ_WAKE_THREAD;
+ 	}
+ 
+ out:
+@@ -1022,9 +1019,6 @@ out:
+ 		writel(start, host->regs + SD_EMMC_START);
+ 	}
+ 
+-	if (ret == IRQ_HANDLED)
+-		meson_mmc_request_done(host->mmc, cmd->mrq);
+-
+ 	return ret;
+ }
+ 
+@@ -1208,8 +1202,8 @@ static int meson_mmc_probe(struct platform_device *pdev)
+ 		return PTR_ERR(host->regs);
+ 
+ 	host->irq = platform_get_irq(pdev, 0);
+-	if (host->irq <= 0)
+-		return -EINVAL;
++	if (host->irq < 0)
++		return host->irq;
+ 
+ 	cd_irq = platform_get_irq_optional(pdev, 1);
+ 	mmc_gpio_set_cd_irq(mmc, cd_irq);
+diff --git a/drivers/mmc/host/mmci.c b/drivers/mmc/host/mmci.c
+index b9e5dfe74e5c7..1c326e4307f4a 100644
+--- a/drivers/mmc/host/mmci.c
++++ b/drivers/mmc/host/mmci.c
+@@ -1735,7 +1735,8 @@ static void mmci_set_max_busy_timeout(struct mmc_host *mmc)
+ 		return;
+ 
+ 	if (host->variant->busy_timeout && mmc->actual_clock)
+-		max_busy_timeout = ~0UL / (mmc->actual_clock / MSEC_PER_SEC);
++		max_busy_timeout = U32_MAX / DIV_ROUND_UP(mmc->actual_clock,
++							  MSEC_PER_SEC);
+ 
+ 	mmc->max_busy_timeout = max_busy_timeout;
+ }
+diff --git a/drivers/mmc/host/mtk-sd.c b/drivers/mmc/host/mtk-sd.c
+index edade0e54a0c2..9785ec91654f7 100644
+--- a/drivers/mmc/host/mtk-sd.c
++++ b/drivers/mmc/host/mtk-sd.c
+@@ -2680,7 +2680,7 @@ static int msdc_drv_probe(struct platform_device *pdev)
+ 
+ 	host->irq = platform_get_irq(pdev, 0);
+ 	if (host->irq < 0) {
+-		ret = -EINVAL;
++		ret = host->irq;
+ 		goto host_free;
+ 	}
+ 
+diff --git a/drivers/mmc/host/mvsdio.c b/drivers/mmc/host/mvsdio.c
+index 629efbe639c4f..b4f6a0a2fcb51 100644
+--- a/drivers/mmc/host/mvsdio.c
++++ b/drivers/mmc/host/mvsdio.c
+@@ -704,7 +704,7 @@ static int mvsd_probe(struct platform_device *pdev)
+ 	}
+ 	irq = platform_get_irq(pdev, 0);
+ 	if (irq < 0)
+-		return -ENXIO;
++		return irq;
+ 
+ 	mmc = mmc_alloc_host(sizeof(struct mvsd_host), &pdev->dev);
+ 	if (!mmc) {
+diff --git a/drivers/mmc/host/omap.c b/drivers/mmc/host/omap.c
+index 57d39283924da..cc2213ea324f1 100644
+--- a/drivers/mmc/host/omap.c
++++ b/drivers/mmc/host/omap.c
+@@ -1343,7 +1343,7 @@ static int mmc_omap_probe(struct platform_device *pdev)
+ 
+ 	irq = platform_get_irq(pdev, 0);
+ 	if (irq < 0)
+-		return -ENXIO;
++		return irq;
+ 
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ 	host->virt_base = devm_ioremap_resource(&pdev->dev, res);
+diff --git a/drivers/mmc/host/omap_hsmmc.c b/drivers/mmc/host/omap_hsmmc.c
+index 4bd7447552055..2db3a16e63c48 100644
+--- a/drivers/mmc/host/omap_hsmmc.c
++++ b/drivers/mmc/host/omap_hsmmc.c
+@@ -1791,9 +1791,11 @@ static int omap_hsmmc_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+-	irq = platform_get_irq(pdev, 0);
+-	if (res == NULL || irq < 0)
++	if (!res)
+ 		return -ENXIO;
++	irq = platform_get_irq(pdev, 0);
++	if (irq < 0)
++		return irq;
+ 
+ 	base = devm_ioremap_resource(&pdev->dev, res);
+ 	if (IS_ERR(base))
+diff --git a/drivers/mmc/host/owl-mmc.c b/drivers/mmc/host/owl-mmc.c
+index 3dc143b039397..679b8b0b310e5 100644
+--- a/drivers/mmc/host/owl-mmc.c
++++ b/drivers/mmc/host/owl-mmc.c
+@@ -638,7 +638,7 @@ static int owl_mmc_probe(struct platform_device *pdev)
+ 
+ 	owl_host->irq = platform_get_irq(pdev, 0);
+ 	if (owl_host->irq < 0) {
+-		ret = -EINVAL;
++		ret = owl_host->irq;
+ 		goto err_release_channel;
+ 	}
+ 
+diff --git a/drivers/mmc/host/sdhci-acpi.c b/drivers/mmc/host/sdhci-acpi.c
+index 8f0e639236b15..edf2e6c14dc6f 100644
+--- a/drivers/mmc/host/sdhci-acpi.c
++++ b/drivers/mmc/host/sdhci-acpi.c
+@@ -829,7 +829,7 @@ static int sdhci_acpi_probe(struct platform_device *pdev)
+ 	host->ops	= &sdhci_acpi_ops_dflt;
+ 	host->irq	= platform_get_irq(pdev, 0);
+ 	if (host->irq < 0) {
+-		err = -EINVAL;
++		err = host->irq;
+ 		goto err_free;
+ 	}
+ 
+diff --git a/drivers/mmc/host/sdhci-msm.c b/drivers/mmc/host/sdhci-msm.c
+index 8ac81d57a3dfe..1877d583fe8c3 100644
+--- a/drivers/mmc/host/sdhci-msm.c
++++ b/drivers/mmc/host/sdhci-msm.c
+@@ -2479,6 +2479,9 @@ static inline void sdhci_msm_get_of_property(struct platform_device *pdev,
+ 		msm_host->ddr_config = DDR_CONFIG_POR_VAL;
+ 
+ 	of_property_read_u32(node, "qcom,dll-config", &msm_host->dll_config);
++
++	if (of_device_is_compatible(node, "qcom,msm8916-sdhci"))
++		host->quirks2 |= SDHCI_QUIRK2_BROKEN_64_BIT_DMA;
+ }
+ 
+ static int sdhci_msm_gcc_reset(struct device *dev, struct sdhci_host *host)
+diff --git a/drivers/mmc/host/sdhci-spear.c b/drivers/mmc/host/sdhci-spear.c
+index d463e2fd5b1a8..c79035727b20b 100644
+--- a/drivers/mmc/host/sdhci-spear.c
++++ b/drivers/mmc/host/sdhci-spear.c
+@@ -65,8 +65,8 @@ static int sdhci_probe(struct platform_device *pdev)
+ 	host->hw_name = "sdhci";
+ 	host->ops = &sdhci_pltfm_ops;
+ 	host->irq = platform_get_irq(pdev, 0);
+-	if (host->irq <= 0) {
+-		ret = -EINVAL;
++	if (host->irq < 0) {
++		ret = host->irq;
+ 		goto err_host;
+ 	}
+ 	host->quirks = SDHCI_QUIRK_BROKEN_ADMA;
+diff --git a/drivers/mmc/host/sh_mmcif.c b/drivers/mmc/host/sh_mmcif.c
+index 0fd4c9d644dd5..5cf53348372a4 100644
+--- a/drivers/mmc/host/sh_mmcif.c
++++ b/drivers/mmc/host/sh_mmcif.c
+@@ -1400,7 +1400,7 @@ static int sh_mmcif_probe(struct platform_device *pdev)
+ 	irq[0] = platform_get_irq(pdev, 0);
+ 	irq[1] = platform_get_irq_optional(pdev, 1);
+ 	if (irq[0] < 0)
+-		return -ENXIO;
++		return irq[0];
+ 
+ 	reg = devm_platform_ioremap_resource(pdev, 0);
+ 	if (IS_ERR(reg))
+diff --git a/drivers/mmc/host/sunxi-mmc.c b/drivers/mmc/host/sunxi-mmc.c
+index 3db9f32d6a7b9..69dcb8805e05f 100644
+--- a/drivers/mmc/host/sunxi-mmc.c
++++ b/drivers/mmc/host/sunxi-mmc.c
+@@ -1350,8 +1350,8 @@ static int sunxi_mmc_resource_request(struct sunxi_mmc_host *host,
+ 		return ret;
+ 
+ 	host->irq = platform_get_irq(pdev, 0);
+-	if (host->irq <= 0) {
+-		ret = -EINVAL;
++	if (host->irq < 0) {
++		ret = host->irq;
+ 		goto error_disable_mmc;
+ 	}
+ 
+diff --git a/drivers/mmc/host/usdhi6rol0.c b/drivers/mmc/host/usdhi6rol0.c
+index 99515be6e5e57..2032e4e1ee68b 100644
+--- a/drivers/mmc/host/usdhi6rol0.c
++++ b/drivers/mmc/host/usdhi6rol0.c
+@@ -1757,8 +1757,10 @@ static int usdhi6_probe(struct platform_device *pdev)
+ 	irq_cd = platform_get_irq_byname(pdev, "card detect");
+ 	irq_sd = platform_get_irq_byname(pdev, "data");
+ 	irq_sdio = platform_get_irq_byname(pdev, "SDIO");
+-	if (irq_sd < 0 || irq_sdio < 0)
+-		return -ENODEV;
++	if (irq_sd < 0)
++		return irq_sd;
++	if (irq_sdio < 0)
++		return irq_sdio;
+ 
+ 	mmc = mmc_alloc_host(sizeof(struct usdhi6_host), dev);
+ 	if (!mmc)
+diff --git a/drivers/net/dsa/mt7530.c b/drivers/net/dsa/mt7530.c
+index 0c81f5830b90a..566545b6554ba 100644
+--- a/drivers/net/dsa/mt7530.c
++++ b/drivers/net/dsa/mt7530.c
+@@ -419,6 +419,20 @@ static void mt7530_pll_setup(struct mt7530_priv *priv)
+ 	core_set(priv, CORE_TRGMII_GSW_CLK_CG, REG_GSWCK_EN);
+ }
+ 
++/* If port 6 is available as a CPU port, always prefer that as the default,
++ * otherwise don't care.
++ */
++static struct dsa_port *
++mt753x_preferred_default_local_cpu_port(struct dsa_switch *ds)
++{
++	struct dsa_port *cpu_dp = dsa_to_port(ds, 6);
++
++	if (dsa_port_is_cpu(cpu_dp))
++		return cpu_dp;
++
++	return NULL;
++}
++
+ /* Setup port 6 interface mode and TRGMII TX circuit */
+ static int
+ mt7530_pad_clk_setup(struct dsa_switch *ds, phy_interface_t interface)
+@@ -991,6 +1005,18 @@ unlock_exit:
+ 	mutex_unlock(&priv->reg_mutex);
+ }
+ 
++static void
++mt753x_trap_frames(struct mt7530_priv *priv)
++{
++	/* Trap BPDUs to the CPU port(s) */
++	mt7530_rmw(priv, MT753X_BPC, MT753X_BPDU_PORT_FW_MASK,
++		   MT753X_BPDU_CPU_ONLY);
++
++	/* Trap LLDP frames with :0E MAC DA to the CPU port(s) */
++	mt7530_rmw(priv, MT753X_RGAC2, MT753X_R0E_PORT_FW_MASK,
++		   MT753X_R0E_PORT_FW(MT753X_BPDU_CPU_ONLY));
++}
++
+ static int
+ mt753x_cpu_port_enable(struct dsa_switch *ds, int port)
+ {
+@@ -1013,7 +1039,7 @@ mt753x_cpu_port_enable(struct dsa_switch *ds, int port)
+ 		   UNU_FFP(BIT(port)));
+ 
+ 	/* Set CPU port number */
+-	if (priv->id == ID_MT7621)
++	if (priv->id == ID_MT7530 || priv->id == ID_MT7621)
+ 		mt7530_rmw(priv, MT7530_MFC, CPU_MASK, CPU_EN | CPU_PORT(port));
+ 
+ 	/* CPU port gets connected to all user ports of
+@@ -2214,6 +2240,8 @@ mt7530_setup(struct dsa_switch *ds)
+ 
+ 	priv->p6_interface = PHY_INTERFACE_MODE_NA;
+ 
++	mt753x_trap_frames(priv);
++
+ 	/* Enable and reset MIB counters */
+ 	mt7530_mib_reset(ds);
+ 
+@@ -2320,8 +2348,8 @@ mt7531_setup_common(struct dsa_switch *ds)
+ 			   BIT(cpu_dp->index));
+ 		break;
+ 	}
+-	mt7530_rmw(priv, MT753X_BPC, MT753X_BPDU_PORT_FW_MASK,
+-		   MT753X_BPDU_CPU_ONLY);
++
++	mt753x_trap_frames(priv);
+ 
+ 	/* Enable and reset MIB counters */
+ 	mt7530_mib_reset(ds);
+@@ -3177,6 +3205,7 @@ static int mt753x_set_mac_eee(struct dsa_switch *ds, int port,
+ static const struct dsa_switch_ops mt7530_switch_ops = {
+ 	.get_tag_protocol	= mtk_get_tag_protocol,
+ 	.setup			= mt753x_setup,
++	.preferred_default_local_cpu_port = mt753x_preferred_default_local_cpu_port,
+ 	.get_strings		= mt7530_get_strings,
+ 	.get_ethtool_stats	= mt7530_get_ethtool_stats,
+ 	.get_sset_count		= mt7530_get_sset_count,
+diff --git a/drivers/net/dsa/mt7530.h b/drivers/net/dsa/mt7530.h
+index 6b2fc6290ea84..31c0f7156b699 100644
+--- a/drivers/net/dsa/mt7530.h
++++ b/drivers/net/dsa/mt7530.h
+@@ -65,6 +65,11 @@ enum mt753x_id {
+ #define MT753X_BPC			0x24
+ #define  MT753X_BPDU_PORT_FW_MASK	GENMASK(2, 0)
+ 
++/* Register for :03 and :0E MAC DA frame control */
++#define MT753X_RGAC2			0x2c
++#define  MT753X_R0E_PORT_FW_MASK	GENMASK(18, 16)
++#define  MT753X_R0E_PORT_FW(x)		FIELD_PREP(MT753X_R0E_PORT_FW_MASK, x)
++
+ enum mt753x_bpdu_port_fw {
+ 	MT753X_BPDU_FOLLOW_MFC,
+ 	MT753X_BPDU_CPU_EXCLUDE = 4,
+diff --git a/drivers/net/ethernet/emulex/benet/be_main.c b/drivers/net/ethernet/emulex/benet/be_main.c
+index 46fe3d74e2e98..7236c4f70cf51 100644
+--- a/drivers/net/ethernet/emulex/benet/be_main.c
++++ b/drivers/net/ethernet/emulex/benet/be_main.c
+@@ -1136,8 +1136,8 @@ static struct sk_buff *be_lancer_xmit_workarounds(struct be_adapter *adapter,
+ 	eth_hdr_len = ntohs(skb->protocol) == ETH_P_8021Q ?
+ 						VLAN_ETH_HLEN : ETH_HLEN;
+ 	if (skb->len <= 60 &&
+-	    (lancer_chip(adapter) || skb_vlan_tag_present(skb)) &&
+-	    is_ipv4_pkt(skb)) {
++	    (lancer_chip(adapter) || BE3_chip(adapter) ||
++	     skb_vlan_tag_present(skb)) && is_ipv4_pkt(skb)) {
+ 		ip = (struct iphdr *)ip_hdr(skb);
+ 		pskb_trim(skb, eth_hdr_len + ntohs(ip->tot_len));
+ 	}
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_action.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_action.c
+index ee104cf04392f..438c3bfae762a 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_action.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_action.c
+@@ -1403,9 +1403,13 @@ dr_action_create_reformat_action(struct mlx5dr_domain *dmn,
+ 	}
+ 	case DR_ACTION_TYP_TNL_L3_TO_L2:
+ 	{
+-		u8 hw_actions[ACTION_CACHE_LINE_SIZE] = {};
++		u8 *hw_actions;
+ 		int ret;
+ 
++		hw_actions = kzalloc(ACTION_CACHE_LINE_SIZE, GFP_KERNEL);
++		if (!hw_actions)
++			return -ENOMEM;
++
+ 		ret = mlx5dr_ste_set_action_decap_l3_list(dmn->ste_ctx,
+ 							  data, data_sz,
+ 							  hw_actions,
+@@ -1413,6 +1417,7 @@ dr_action_create_reformat_action(struct mlx5dr_domain *dmn,
+ 							  &action->rewrite->num_of_actions);
+ 		if (ret) {
+ 			mlx5dr_dbg(dmn, "Failed creating decap l3 action list\n");
++			kfree(hw_actions);
+ 			return ret;
+ 		}
+ 
+@@ -1420,6 +1425,7 @@ dr_action_create_reformat_action(struct mlx5dr_domain *dmn,
+ 								DR_CHUNK_SIZE_8);
+ 		if (!action->rewrite->chunk) {
+ 			mlx5dr_dbg(dmn, "Failed allocating modify header chunk\n");
++			kfree(hw_actions);
+ 			return -ENOMEM;
+ 		}
+ 
+@@ -1433,6 +1439,7 @@ dr_action_create_reformat_action(struct mlx5dr_domain *dmn,
+ 		if (ret) {
+ 			mlx5dr_dbg(dmn, "Writing decap l3 actions to ICM failed\n");
+ 			mlx5dr_icm_free_chunk(action->rewrite->chunk);
++			kfree(hw_actions);
+ 			return ret;
+ 		}
+ 		return 0;
+diff --git a/drivers/net/ethernet/qualcomm/qca_spi.c b/drivers/net/ethernet/qualcomm/qca_spi.c
+index c865a4be05eec..4a1b94e5a8ea9 100644
+--- a/drivers/net/ethernet/qualcomm/qca_spi.c
++++ b/drivers/net/ethernet/qualcomm/qca_spi.c
+@@ -582,8 +582,7 @@ qcaspi_spi_thread(void *data)
+ 	while (!kthread_should_stop()) {
+ 		set_current_state(TASK_INTERRUPTIBLE);
+ 		if ((qca->intr_req == qca->intr_svc) &&
+-		    (qca->txr.skb[qca->txr.head] == NULL) &&
+-		    (qca->sync == QCASPI_SYNC_READY))
++		    !qca->txr.skb[qca->txr.head])
+ 			schedule();
+ 
+ 		set_current_state(TASK_RUNNING);
+diff --git a/drivers/net/ethernet/sfc/ef10.c b/drivers/net/ethernet/sfc/ef10.c
+index d30459dbfe8f8..b63e47af63655 100644
+--- a/drivers/net/ethernet/sfc/ef10.c
++++ b/drivers/net/ethernet/sfc/ef10.c
+@@ -2950,7 +2950,7 @@ static u32 efx_ef10_extract_event_ts(efx_qword_t *event)
+ 	return tstamp;
+ }
+ 
+-static void
++static int
+ efx_ef10_handle_tx_event(struct efx_channel *channel, efx_qword_t *event)
+ {
+ 	struct efx_nic *efx = channel->efx;
+@@ -2958,13 +2958,14 @@ efx_ef10_handle_tx_event(struct efx_channel *channel, efx_qword_t *event)
+ 	unsigned int tx_ev_desc_ptr;
+ 	unsigned int tx_ev_q_label;
+ 	unsigned int tx_ev_type;
++	int work_done;
+ 	u64 ts_part;
+ 
+ 	if (unlikely(READ_ONCE(efx->reset_pending)))
+-		return;
++		return 0;
+ 
+ 	if (unlikely(EFX_QWORD_FIELD(*event, ESF_DZ_TX_DROP_EVENT)))
+-		return;
++		return 0;
+ 
+ 	/* Get the transmit queue */
+ 	tx_ev_q_label = EFX_QWORD_FIELD(*event, ESF_DZ_TX_QLABEL);
+@@ -2973,8 +2974,7 @@ efx_ef10_handle_tx_event(struct efx_channel *channel, efx_qword_t *event)
+ 	if (!tx_queue->timestamping) {
+ 		/* Transmit completion */
+ 		tx_ev_desc_ptr = EFX_QWORD_FIELD(*event, ESF_DZ_TX_DESCR_INDX);
+-		efx_xmit_done(tx_queue, tx_ev_desc_ptr & tx_queue->ptr_mask);
+-		return;
++		return efx_xmit_done(tx_queue, tx_ev_desc_ptr & tx_queue->ptr_mask);
+ 	}
+ 
+ 	/* Transmit timestamps are only available for 8XXX series. They result
+@@ -3000,6 +3000,7 @@ efx_ef10_handle_tx_event(struct efx_channel *channel, efx_qword_t *event)
+ 	 * fields in the event.
+ 	 */
+ 	tx_ev_type = EFX_QWORD_FIELD(*event, ESF_EZ_TX_SOFT1);
++	work_done = 0;
+ 
+ 	switch (tx_ev_type) {
+ 	case TX_TIMESTAMP_EVENT_TX_EV_COMPLETION:
+@@ -3016,6 +3017,7 @@ efx_ef10_handle_tx_event(struct efx_channel *channel, efx_qword_t *event)
+ 		tx_queue->completed_timestamp_major = ts_part;
+ 
+ 		efx_xmit_done_single(tx_queue);
++		work_done = 1;
+ 		break;
+ 
+ 	default:
+@@ -3026,6 +3028,8 @@ efx_ef10_handle_tx_event(struct efx_channel *channel, efx_qword_t *event)
+ 			  EFX_QWORD_VAL(*event));
+ 		break;
+ 	}
++
++	return work_done;
+ }
+ 
+ static void
+@@ -3081,13 +3085,16 @@ static void efx_ef10_handle_driver_generated_event(struct efx_channel *channel,
+ 	}
+ }
+ 
++#define EFX_NAPI_MAX_TX 512
++
+ static int efx_ef10_ev_process(struct efx_channel *channel, int quota)
+ {
+ 	struct efx_nic *efx = channel->efx;
+ 	efx_qword_t event, *p_event;
+ 	unsigned int read_ptr;
+-	int ev_code;
++	int spent_tx = 0;
+ 	int spent = 0;
++	int ev_code;
+ 
+ 	if (quota <= 0)
+ 		return spent;
+@@ -3126,7 +3133,11 @@ static int efx_ef10_ev_process(struct efx_channel *channel, int quota)
+ 			}
+ 			break;
+ 		case ESE_DZ_EV_CODE_TX_EV:
+-			efx_ef10_handle_tx_event(channel, &event);
++			spent_tx += efx_ef10_handle_tx_event(channel, &event);
++			if (spent_tx >= EFX_NAPI_MAX_TX) {
++				spent = quota;
++				goto out;
++			}
+ 			break;
+ 		case ESE_DZ_EV_CODE_DRIVER_EV:
+ 			efx_ef10_handle_driver_event(channel, &event);
+diff --git a/drivers/net/ethernet/sfc/ef100_nic.c b/drivers/net/ethernet/sfc/ef100_nic.c
+index 4dc643b0d2db4..7adde9639c8ab 100644
+--- a/drivers/net/ethernet/sfc/ef100_nic.c
++++ b/drivers/net/ethernet/sfc/ef100_nic.c
+@@ -253,6 +253,8 @@ static void ef100_ev_read_ack(struct efx_channel *channel)
+ 		   efx_reg(channel->efx, ER_GZ_EVQ_INT_PRIME));
+ }
+ 
++#define EFX_NAPI_MAX_TX 512
++
+ static int ef100_ev_process(struct efx_channel *channel, int quota)
+ {
+ 	struct efx_nic *efx = channel->efx;
+@@ -260,6 +262,7 @@ static int ef100_ev_process(struct efx_channel *channel, int quota)
+ 	bool evq_phase, old_evq_phase;
+ 	unsigned int read_ptr;
+ 	efx_qword_t *p_event;
++	int spent_tx = 0;
+ 	int spent = 0;
+ 	bool ev_phase;
+ 	int ev_type;
+@@ -295,7 +298,9 @@ static int ef100_ev_process(struct efx_channel *channel, int quota)
+ 			efx_mcdi_process_event(channel, p_event);
+ 			break;
+ 		case ESE_GZ_EF100_EV_TX_COMPLETION:
+-			ef100_ev_tx(channel, p_event);
++			spent_tx += ef100_ev_tx(channel, p_event);
++			if (spent_tx >= EFX_NAPI_MAX_TX)
++				spent = quota;
+ 			break;
+ 		case ESE_GZ_EF100_EV_DRIVER:
+ 			netif_info(efx, drv, efx->net_dev,
+diff --git a/drivers/net/ethernet/sfc/ef100_tx.c b/drivers/net/ethernet/sfc/ef100_tx.c
+index 29ffaf35559d6..849e5555bd128 100644
+--- a/drivers/net/ethernet/sfc/ef100_tx.c
++++ b/drivers/net/ethernet/sfc/ef100_tx.c
+@@ -346,7 +346,7 @@ void ef100_tx_write(struct efx_tx_queue *tx_queue)
+ 	ef100_tx_push_buffers(tx_queue);
+ }
+ 
+-void ef100_ev_tx(struct efx_channel *channel, const efx_qword_t *p_event)
++int ef100_ev_tx(struct efx_channel *channel, const efx_qword_t *p_event)
+ {
+ 	unsigned int tx_done =
+ 		EFX_QWORD_FIELD(*p_event, ESF_GZ_EV_TXCMPL_NUM_DESC);
+@@ -357,7 +357,7 @@ void ef100_ev_tx(struct efx_channel *channel, const efx_qword_t *p_event)
+ 	unsigned int tx_index = (tx_queue->read_count + tx_done - 1) &
+ 				tx_queue->ptr_mask;
+ 
+-	efx_xmit_done(tx_queue, tx_index);
++	return efx_xmit_done(tx_queue, tx_index);
+ }
+ 
+ /* Add a socket buffer to a TX queue
+diff --git a/drivers/net/ethernet/sfc/ef100_tx.h b/drivers/net/ethernet/sfc/ef100_tx.h
+index e9e11540fcdea..d9a0819c5a72c 100644
+--- a/drivers/net/ethernet/sfc/ef100_tx.h
++++ b/drivers/net/ethernet/sfc/ef100_tx.h
+@@ -20,7 +20,7 @@ void ef100_tx_init(struct efx_tx_queue *tx_queue);
+ void ef100_tx_write(struct efx_tx_queue *tx_queue);
+ unsigned int ef100_tx_max_skb_descs(struct efx_nic *efx);
+ 
+-void ef100_ev_tx(struct efx_channel *channel, const efx_qword_t *p_event);
++int ef100_ev_tx(struct efx_channel *channel, const efx_qword_t *p_event);
+ 
+ netdev_tx_t ef100_enqueue_skb(struct efx_tx_queue *tx_queue, struct sk_buff *skb);
+ int __ef100_enqueue_skb(struct efx_tx_queue *tx_queue, struct sk_buff *skb,
+diff --git a/drivers/net/ethernet/sfc/tx_common.c b/drivers/net/ethernet/sfc/tx_common.c
+index 67e789b96c437..755aa92bf8236 100644
+--- a/drivers/net/ethernet/sfc/tx_common.c
++++ b/drivers/net/ethernet/sfc/tx_common.c
+@@ -249,7 +249,7 @@ void efx_xmit_done_check_empty(struct efx_tx_queue *tx_queue)
+ 	}
+ }
+ 
+-void efx_xmit_done(struct efx_tx_queue *tx_queue, unsigned int index)
++int efx_xmit_done(struct efx_tx_queue *tx_queue, unsigned int index)
+ {
+ 	unsigned int fill_level, pkts_compl = 0, bytes_compl = 0;
+ 	unsigned int efv_pkts_compl = 0;
+@@ -279,6 +279,8 @@ void efx_xmit_done(struct efx_tx_queue *tx_queue, unsigned int index)
+ 	}
+ 
+ 	efx_xmit_done_check_empty(tx_queue);
++
++	return pkts_compl + efv_pkts_compl;
+ }
+ 
+ /* Remove buffers put into a tx_queue for the current packet.
+diff --git a/drivers/net/ethernet/sfc/tx_common.h b/drivers/net/ethernet/sfc/tx_common.h
+index d87aecbc7bf1a..1e9f42938aac9 100644
+--- a/drivers/net/ethernet/sfc/tx_common.h
++++ b/drivers/net/ethernet/sfc/tx_common.h
+@@ -28,7 +28,7 @@ static inline bool efx_tx_buffer_in_use(struct efx_tx_buffer *buffer)
+ }
+ 
+ void efx_xmit_done_check_empty(struct efx_tx_queue *tx_queue);
+-void efx_xmit_done(struct efx_tx_queue *tx_queue, unsigned int index);
++int efx_xmit_done(struct efx_tx_queue *tx_queue, unsigned int index);
+ 
+ void efx_enqueue_unwind(struct efx_tx_queue *tx_queue,
+ 			unsigned int insert_count);
+diff --git a/drivers/net/ieee802154/mac802154_hwsim.c b/drivers/net/ieee802154/mac802154_hwsim.c
+index 8445c2189d116..31cba9aa76366 100644
+--- a/drivers/net/ieee802154/mac802154_hwsim.c
++++ b/drivers/net/ieee802154/mac802154_hwsim.c
+@@ -685,7 +685,7 @@ static int hwsim_del_edge_nl(struct sk_buff *msg, struct genl_info *info)
+ static int hwsim_set_edge_lqi(struct sk_buff *msg, struct genl_info *info)
+ {
+ 	struct nlattr *edge_attrs[MAC802154_HWSIM_EDGE_ATTR_MAX + 1];
+-	struct hwsim_edge_info *einfo;
++	struct hwsim_edge_info *einfo, *einfo_old;
+ 	struct hwsim_phy *phy_v0;
+ 	struct hwsim_edge *e;
+ 	u32 v0, v1;
+@@ -723,8 +723,10 @@ static int hwsim_set_edge_lqi(struct sk_buff *msg, struct genl_info *info)
+ 	list_for_each_entry_rcu(e, &phy_v0->edges, list) {
+ 		if (e->endpoint->idx == v1) {
+ 			einfo->lqi = lqi;
+-			rcu_assign_pointer(e->info, einfo);
++			einfo_old = rcu_replace_pointer(e->info, einfo,
++							lockdep_is_held(&hwsim_phys_lock));
+ 			rcu_read_unlock();
++			kfree_rcu(einfo_old, rcu);
+ 			mutex_unlock(&hwsim_phys_lock);
+ 			return 0;
+ 		}
+diff --git a/drivers/net/phy/dp83867.c b/drivers/net/phy/dp83867.c
+index 9f7ff88200484..76fe7e7d9ac91 100644
+--- a/drivers/net/phy/dp83867.c
++++ b/drivers/net/phy/dp83867.c
+@@ -905,7 +905,7 @@ static int dp83867_phy_reset(struct phy_device *phydev)
+ {
+ 	int err;
+ 
+-	err = phy_write(phydev, DP83867_CTRL, DP83867_SW_RESTART);
++	err = phy_write(phydev, DP83867_CTRL, DP83867_SW_RESET);
+ 	if (err < 0)
+ 		return err;
+ 
+diff --git a/drivers/net/phy/mdio_bus.c b/drivers/net/phy/mdio_bus.c
+index 389f33a125344..8b3618d3da4aa 100644
+--- a/drivers/net/phy/mdio_bus.c
++++ b/drivers/net/phy/mdio_bus.c
+@@ -1287,7 +1287,7 @@ EXPORT_SYMBOL_GPL(mdiobus_modify_changed);
+  * @mask: bit mask of bits to clear
+  * @set: bit mask of bits to set
+  */
+-int mdiobus_c45_modify_changed(struct mii_bus *bus, int devad, int addr,
++int mdiobus_c45_modify_changed(struct mii_bus *bus, int addr, int devad,
+ 			       u32 regnum, u16 mask, u16 set)
+ {
+ 	int err;
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+index 25b2d41de4c1d..3a6a7d62e8839 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+@@ -547,6 +547,8 @@ static const struct iwl_dev_info iwl_dev_info_table[] = {
+ 	IWL_DEV_INFO(0x54F0, 0x1692, iwlax411_2ax_cfg_so_gf4_a0, iwl_ax411_killer_1690i_name),
+ 	IWL_DEV_INFO(0x7A70, 0x1691, iwlax411_2ax_cfg_so_gf4_a0, iwl_ax411_killer_1690s_name),
+ 	IWL_DEV_INFO(0x7A70, 0x1692, iwlax411_2ax_cfg_so_gf4_a0, iwl_ax411_killer_1690i_name),
++	IWL_DEV_INFO(0x7AF0, 0x1691, iwlax411_2ax_cfg_so_gf4_a0, iwl_ax411_killer_1690s_name),
++	IWL_DEV_INFO(0x7AF0, 0x1692, iwlax411_2ax_cfg_so_gf4_a0, iwl_ax411_killer_1690i_name),
+ 
+ 	IWL_DEV_INFO(0x271C, 0x0214, iwl9260_2ac_cfg, iwl9260_1_name),
+ 	IWL_DEV_INFO(0x7E40, 0x1691, iwl_cfg_ma_a0_gf4_a0, iwl_ax411_killer_1690s_name),
+diff --git a/drivers/nfc/nfcsim.c b/drivers/nfc/nfcsim.c
+index 85bf8d586c707..0f6befe8be1e2 100644
+--- a/drivers/nfc/nfcsim.c
++++ b/drivers/nfc/nfcsim.c
+@@ -336,10 +336,6 @@ static struct dentry *nfcsim_debugfs_root;
+ static void nfcsim_debugfs_init(void)
+ {
+ 	nfcsim_debugfs_root = debugfs_create_dir("nfcsim", NULL);
+-
+-	if (!nfcsim_debugfs_root)
+-		pr_err("Could not create debugfs entry\n");
+-
+ }
+ 
+ static void nfcsim_debugfs_remove(void)
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index c015393beeee8..8a632bf7f5a8f 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -397,7 +397,16 @@ void nvme_complete_rq(struct request *req)
+ 	trace_nvme_complete_rq(req);
+ 	nvme_cleanup_cmd(req);
+ 
+-	if (ctrl->kas)
++	/*
++	 * Completions of long-running commands should not be able to
++	 * defer sending of periodic keep alives, since the controller
++	 * may have completed processing such commands a long time ago
++	 * (arbitrarily close to command submission time).
++	 * req->deadline - req->timeout is the command submission time
++	 * in jiffies.
++	 */
++	if (ctrl->kas &&
++	    req->deadline - req->timeout >= ctrl->ka_last_check_time)
+ 		ctrl->comp_seen = true;
+ 
+ 	switch (nvme_decide_disposition(req)) {
+@@ -1115,7 +1124,7 @@ u32 nvme_passthru_start(struct nvme_ctrl *ctrl, struct nvme_ns *ns, u8 opcode)
+ }
+ EXPORT_SYMBOL_NS_GPL(nvme_passthru_start, NVME_TARGET_PASSTHRU);
+ 
+-void nvme_passthru_end(struct nvme_ctrl *ctrl, u32 effects,
++void nvme_passthru_end(struct nvme_ctrl *ctrl, struct nvme_ns *ns, u32 effects,
+ 		       struct nvme_command *cmd, int status)
+ {
+ 	if (effects & NVME_CMD_EFFECTS_CSE_MASK) {
+@@ -1132,6 +1141,8 @@ void nvme_passthru_end(struct nvme_ctrl *ctrl, u32 effects,
+ 		nvme_queue_scan(ctrl);
+ 		flush_work(&ctrl->scan_work);
+ 	}
++	if (ns)
++		return;
+ 
+ 	switch (cmd->common.opcode) {
+ 	case nvme_admin_set_features:
+@@ -1161,9 +1172,25 @@ EXPORT_SYMBOL_NS_GPL(nvme_passthru_end, NVME_TARGET_PASSTHRU);
+  *   The host should send Keep Alive commands at half of the Keep Alive Timeout
+  *   accounting for transport roundtrip times [..].
+  */
++static unsigned long nvme_keep_alive_work_period(struct nvme_ctrl *ctrl)
++{
++	unsigned long delay = ctrl->kato * HZ / 2;
++
++	/*
++	 * When using Traffic Based Keep Alive, we need to run
++	 * nvme_keep_alive_work at twice the normal frequency, as one
++	 * command completion can postpone sending a keep alive command
++	 * by up to twice the delay between runs.
++	 */
++	if (ctrl->ctratt & NVME_CTRL_ATTR_TBKAS)
++		delay /= 2;
++	return delay;
++}
++
+ static void nvme_queue_keep_alive_work(struct nvme_ctrl *ctrl)
+ {
+-	queue_delayed_work(nvme_wq, &ctrl->ka_work, ctrl->kato * HZ / 2);
++	queue_delayed_work(nvme_wq, &ctrl->ka_work,
++			   nvme_keep_alive_work_period(ctrl));
+ }
+ 
+ static enum rq_end_io_ret nvme_keep_alive_end_io(struct request *rq,
+@@ -1172,6 +1199,20 @@ static enum rq_end_io_ret nvme_keep_alive_end_io(struct request *rq,
+ 	struct nvme_ctrl *ctrl = rq->end_io_data;
+ 	unsigned long flags;
+ 	bool startka = false;
++	unsigned long rtt = jiffies - (rq->deadline - rq->timeout);
++	unsigned long delay = nvme_keep_alive_work_period(ctrl);
++
++	/*
++	 * Subtract off the keepalive RTT so nvme_keep_alive_work runs
++	 * at the desired frequency.
++	 */
++	if (rtt <= delay) {
++		delay -= rtt;
++	} else {
++		dev_warn(ctrl->device, "long keepalive RTT (%u ms)\n",
++			 jiffies_to_msecs(rtt));
++		delay = 0;
++	}
+ 
+ 	blk_mq_free_request(rq);
+ 
+@@ -1182,6 +1223,7 @@ static enum rq_end_io_ret nvme_keep_alive_end_io(struct request *rq,
+ 		return RQ_END_IO_NONE;
+ 	}
+ 
++	ctrl->ka_last_check_time = jiffies;
+ 	ctrl->comp_seen = false;
+ 	spin_lock_irqsave(&ctrl->lock, flags);
+ 	if (ctrl->state == NVME_CTRL_LIVE ||
+@@ -1189,7 +1231,7 @@ static enum rq_end_io_ret nvme_keep_alive_end_io(struct request *rq,
+ 		startka = true;
+ 	spin_unlock_irqrestore(&ctrl->lock, flags);
+ 	if (startka)
+-		nvme_queue_keep_alive_work(ctrl);
++		queue_delayed_work(nvme_wq, &ctrl->ka_work, delay);
+ 	return RQ_END_IO_NONE;
+ }
+ 
+@@ -1200,6 +1242,8 @@ static void nvme_keep_alive_work(struct work_struct *work)
+ 	bool comp_seen = ctrl->comp_seen;
+ 	struct request *rq;
+ 
++	ctrl->ka_last_check_time = jiffies;
++
+ 	if ((ctrl->ctratt & NVME_CTRL_ATTR_TBKAS) && comp_seen) {
+ 		dev_dbg(ctrl->device,
+ 			"reschedule traffic based keep-alive timer\n");
+diff --git a/drivers/nvme/host/ioctl.c b/drivers/nvme/host/ioctl.c
+index d24ea2e051564..0264ec74a4361 100644
+--- a/drivers/nvme/host/ioctl.c
++++ b/drivers/nvme/host/ioctl.c
+@@ -254,7 +254,7 @@ static int nvme_submit_user_cmd(struct request_queue *q,
+ 	blk_mq_free_request(req);
+ 
+ 	if (effects)
+-		nvme_passthru_end(ctrl, effects, cmd, ret);
++		nvme_passthru_end(ctrl, ns, effects, cmd, ret);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
+index a2d4f59e0535a..8657811f8b887 100644
+--- a/drivers/nvme/host/nvme.h
++++ b/drivers/nvme/host/nvme.h
+@@ -328,6 +328,7 @@ struct nvme_ctrl {
+ 	struct delayed_work ka_work;
+ 	struct delayed_work failfast_work;
+ 	struct nvme_command ka_cmd;
++	unsigned long ka_last_check_time;
+ 	struct work_struct fw_act_work;
+ 	unsigned long events;
+ 
+@@ -1077,7 +1078,7 @@ u32 nvme_command_effects(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
+ 			 u8 opcode);
+ u32 nvme_passthru_start(struct nvme_ctrl *ctrl, struct nvme_ns *ns, u8 opcode);
+ int nvme_execute_rq(struct request *rq, bool at_head);
+-void nvme_passthru_end(struct nvme_ctrl *ctrl, u32 effects,
++void nvme_passthru_end(struct nvme_ctrl *ctrl, struct nvme_ns *ns, u32 effects,
+ 		       struct nvme_command *cmd, int status);
+ struct nvme_ctrl *nvme_ctrl_from_file(struct file *file);
+ struct nvme_ns *nvme_find_get_ns(struct nvme_ctrl *ctrl, unsigned nsid);
+diff --git a/drivers/nvme/target/passthru.c b/drivers/nvme/target/passthru.c
+index 511c980d538df..71a9c1cc57f59 100644
+--- a/drivers/nvme/target/passthru.c
++++ b/drivers/nvme/target/passthru.c
+@@ -243,7 +243,7 @@ static void nvmet_passthru_execute_cmd_work(struct work_struct *w)
+ 	blk_mq_free_request(rq);
+ 
+ 	if (effects)
+-		nvme_passthru_end(ctrl, effects, req->cmd, status);
++		nvme_passthru_end(ctrl, ns, effects, req->cmd, status);
+ }
+ 
+ static enum rq_end_io_ret nvmet_passthru_req_done(struct request *rq,
+diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
+index f33370b756283..e961424ed7d99 100644
+--- a/drivers/pci/controller/pci-hyperv.c
++++ b/drivers/pci/controller/pci-hyperv.c
+@@ -489,7 +489,10 @@ struct hv_pcibus_device {
+ 	struct fwnode_handle *fwnode;
+ 	/* Protocol version negotiated with the host */
+ 	enum pci_protocol_version_t protocol_version;
++
++	struct mutex state_lock;
+ 	enum hv_pcibus_state state;
++
+ 	struct hv_device *hdev;
+ 	resource_size_t low_mmio_space;
+ 	resource_size_t high_mmio_space;
+@@ -553,19 +556,10 @@ struct hv_dr_state {
+ 	struct hv_pcidev_description func[];
+ };
+ 
+-enum hv_pcichild_state {
+-	hv_pcichild_init = 0,
+-	hv_pcichild_requirements,
+-	hv_pcichild_resourced,
+-	hv_pcichild_ejecting,
+-	hv_pcichild_maximum
+-};
+-
+ struct hv_pci_dev {
+ 	/* List protected by pci_rescan_remove_lock */
+ 	struct list_head list_entry;
+ 	refcount_t refs;
+-	enum hv_pcichild_state state;
+ 	struct pci_slot *pci_slot;
+ 	struct hv_pcidev_description desc;
+ 	bool reported_missing;
+@@ -643,6 +637,11 @@ static void hv_arch_irq_unmask(struct irq_data *data)
+ 	pbus = pdev->bus;
+ 	hbus = container_of(pbus->sysdata, struct hv_pcibus_device, sysdata);
+ 	int_desc = data->chip_data;
++	if (!int_desc) {
++		dev_warn(&hbus->hdev->device, "%s() can not unmask irq %u\n",
++			 __func__, data->irq);
++		return;
++	}
+ 
+ 	spin_lock_irqsave(&hbus->retarget_msi_interrupt_lock, flags);
+ 
+@@ -1911,12 +1910,6 @@ static void hv_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
+ 		hv_pci_onchannelcallback(hbus);
+ 		spin_unlock_irqrestore(&channel->sched_lock, flags);
+ 
+-		if (hpdev->state == hv_pcichild_ejecting) {
+-			dev_err_once(&hbus->hdev->device,
+-				     "the device is being ejected\n");
+-			goto enable_tasklet;
+-		}
+-
+ 		udelay(100);
+ 	}
+ 
+@@ -2522,6 +2515,8 @@ static void pci_devices_present_work(struct work_struct *work)
+ 	if (!dr)
+ 		return;
+ 
++	mutex_lock(&hbus->state_lock);
++
+ 	/* First, mark all existing children as reported missing. */
+ 	spin_lock_irqsave(&hbus->device_list_lock, flags);
+ 	list_for_each_entry(hpdev, &hbus->children, list_entry) {
+@@ -2603,6 +2598,8 @@ static void pci_devices_present_work(struct work_struct *work)
+ 		break;
+ 	}
+ 
++	mutex_unlock(&hbus->state_lock);
++
+ 	kfree(dr);
+ }
+ 
+@@ -2751,7 +2748,7 @@ static void hv_eject_device_work(struct work_struct *work)
+ 	hpdev = container_of(work, struct hv_pci_dev, wrk);
+ 	hbus = hpdev->hbus;
+ 
+-	WARN_ON(hpdev->state != hv_pcichild_ejecting);
++	mutex_lock(&hbus->state_lock);
+ 
+ 	/*
+ 	 * Ejection can come before or after the PCI bus has been set up, so
+@@ -2789,6 +2786,8 @@ static void hv_eject_device_work(struct work_struct *work)
+ 	put_pcichild(hpdev);
+ 	put_pcichild(hpdev);
+ 	/* hpdev has been freed. Do not use it any more. */
++
++	mutex_unlock(&hbus->state_lock);
+ }
+ 
+ /**
+@@ -2809,7 +2808,6 @@ static void hv_pci_eject_device(struct hv_pci_dev *hpdev)
+ 		return;
+ 	}
+ 
+-	hpdev->state = hv_pcichild_ejecting;
+ 	get_pcichild(hpdev);
+ 	INIT_WORK(&hpdev->wrk, hv_eject_device_work);
+ 	queue_work(hbus->wq, &hpdev->wrk);
+@@ -3238,8 +3236,10 @@ static int hv_pci_enter_d0(struct hv_device *hdev)
+ 	struct pci_bus_d0_entry *d0_entry;
+ 	struct hv_pci_compl comp_pkt;
+ 	struct pci_packet *pkt;
++	bool retry = true;
+ 	int ret;
+ 
++enter_d0_retry:
+ 	/*
+ 	 * Tell the host that the bus is ready to use, and moved into the
+ 	 * powered-on state.  This includes telling the host which region
+@@ -3266,6 +3266,38 @@ static int hv_pci_enter_d0(struct hv_device *hdev)
+ 	if (ret)
+ 		goto exit;
+ 
++	/*
++	 * In certain case (Kdump) the pci device of interest was
++	 * not cleanly shut down and resource is still held on host
++	 * side, the host could return invalid device status.
++	 * We need to explicitly request host to release the resource
++	 * and try to enter D0 again.
++	 */
++	if (comp_pkt.completion_status < 0 && retry) {
++		retry = false;
++
++		dev_err(&hdev->device, "Retrying D0 Entry\n");
++
++		/*
++		 * Hv_pci_bus_exit() calls hv_send_resource_released()
++		 * to free up resources of its child devices.
++		 * In the kdump kernel we need to set the
++		 * wslot_res_allocated to 255 so it scans all child
++		 * devices to release resources allocated in the
++		 * normal kernel before panic happened.
++		 */
++		hbus->wslot_res_allocated = 255;
++
++		ret = hv_pci_bus_exit(hdev, true);
++
++		if (ret == 0) {
++			kfree(pkt);
++			goto enter_d0_retry;
++		}
++		dev_err(&hdev->device,
++			"Retrying D0 failed with ret %d\n", ret);
++	}
++
+ 	if (comp_pkt.completion_status < 0) {
+ 		dev_err(&hdev->device,
+ 			"PCI Pass-through VSP failed D0 Entry with status %x\n",
+@@ -3308,6 +3340,24 @@ static int hv_pci_query_relations(struct hv_device *hdev)
+ 	if (!ret)
+ 		ret = wait_for_response(hdev, &comp);
+ 
++	/*
++	 * In the case of fast device addition/removal, it's possible that
++	 * vmbus_sendpacket() or wait_for_response() returns -ENODEV but we
++	 * already got a PCI_BUS_RELATIONS* message from the host and the
++	 * channel callback already scheduled a work to hbus->wq, which can be
++	 * running pci_devices_present_work() -> survey_child_resources() ->
++	 * complete(&hbus->survey_event), even after hv_pci_query_relations()
++	 * exits and the stack variable 'comp' is no longer valid; as a result,
++	 * a hang or a page fault may happen when the complete() calls
++	 * raw_spin_lock_irqsave(). Flush hbus->wq before we exit from
++	 * hv_pci_query_relations() to avoid the issues. Note: if 'ret' is
++	 * -ENODEV, there can't be any more work item scheduled to hbus->wq
++	 * after the flush_workqueue(): see vmbus_onoffer_rescind() ->
++	 * vmbus_reset_channel_cb(), vmbus_rescind_cleanup() ->
++	 * channel->rescind = true.
++	 */
++	flush_workqueue(hbus->wq);
++
+ 	return ret;
+ }
+ 
+@@ -3493,7 +3543,6 @@ static int hv_pci_probe(struct hv_device *hdev,
+ 	struct hv_pcibus_device *hbus;
+ 	u16 dom_req, dom;
+ 	char *name;
+-	bool enter_d0_retry = true;
+ 	int ret;
+ 
+ 	/*
+@@ -3529,6 +3578,7 @@ static int hv_pci_probe(struct hv_device *hdev,
+ 		return -ENOMEM;
+ 
+ 	hbus->bridge = bridge;
++	mutex_init(&hbus->state_lock);
+ 	hbus->state = hv_pcibus_init;
+ 	hbus->wslot_res_allocated = -1;
+ 
+@@ -3633,49 +3683,15 @@ static int hv_pci_probe(struct hv_device *hdev,
+ 	if (ret)
+ 		goto free_fwnode;
+ 
+-retry:
+ 	ret = hv_pci_query_relations(hdev);
+ 	if (ret)
+ 		goto free_irq_domain;
+ 
+-	ret = hv_pci_enter_d0(hdev);
+-	/*
+-	 * In certain case (Kdump) the pci device of interest was
+-	 * not cleanly shut down and resource is still held on host
+-	 * side, the host could return invalid device status.
+-	 * We need to explicitly request host to release the resource
+-	 * and try to enter D0 again.
+-	 * Since the hv_pci_bus_exit() call releases structures
+-	 * of all its child devices, we need to start the retry from
+-	 * hv_pci_query_relations() call, requesting host to send
+-	 * the synchronous child device relations message before this
+-	 * information is needed in hv_send_resources_allocated()
+-	 * call later.
+-	 */
+-	if (ret == -EPROTO && enter_d0_retry) {
+-		enter_d0_retry = false;
+-
+-		dev_err(&hdev->device, "Retrying D0 Entry\n");
+-
+-		/*
+-		 * Hv_pci_bus_exit() calls hv_send_resources_released()
+-		 * to free up resources of its child devices.
+-		 * In the kdump kernel we need to set the
+-		 * wslot_res_allocated to 255 so it scans all child
+-		 * devices to release resources allocated in the
+-		 * normal kernel before panic happened.
+-		 */
+-		hbus->wslot_res_allocated = 255;
+-		ret = hv_pci_bus_exit(hdev, true);
+-
+-		if (ret == 0)
+-			goto retry;
++	mutex_lock(&hbus->state_lock);
+ 
+-		dev_err(&hdev->device,
+-			"Retrying D0 failed with ret %d\n", ret);
+-	}
++	ret = hv_pci_enter_d0(hdev);
+ 	if (ret)
+-		goto free_irq_domain;
++		goto release_state_lock;
+ 
+ 	ret = hv_pci_allocate_bridge_windows(hbus);
+ 	if (ret)
+@@ -3693,12 +3709,15 @@ retry:
+ 	if (ret)
+ 		goto free_windows;
+ 
++	mutex_unlock(&hbus->state_lock);
+ 	return 0;
+ 
+ free_windows:
+ 	hv_pci_free_bridge_windows(hbus);
+ exit_d0:
+ 	(void) hv_pci_bus_exit(hdev, true);
++release_state_lock:
++	mutex_unlock(&hbus->state_lock);
+ free_irq_domain:
+ 	irq_domain_remove(hbus->irq_domain);
+ free_fwnode:
+@@ -3948,20 +3967,26 @@ static int hv_pci_resume(struct hv_device *hdev)
+ 	if (ret)
+ 		goto out;
+ 
++	mutex_lock(&hbus->state_lock);
++
+ 	ret = hv_pci_enter_d0(hdev);
+ 	if (ret)
+-		goto out;
++		goto release_state_lock;
+ 
+ 	ret = hv_send_resources_allocated(hdev);
+ 	if (ret)
+-		goto out;
++		goto release_state_lock;
+ 
+ 	prepopulate_bars(hbus);
+ 
+ 	hv_pci_restore_msi_state(hbus);
+ 
+ 	hbus->state = hv_pcibus_installed;
++	mutex_unlock(&hbus->state_lock);
+ 	return 0;
++
++release_state_lock:
++	mutex_unlock(&hbus->state_lock);
+ out:
+ 	vmbus_close(hdev->channel);
+ 	return ret;
+diff --git a/drivers/platform/x86/amd/pmf/core.c b/drivers/platform/x86/amd/pmf/core.c
+index dc9803e1a4b9b..73d2357e32f8e 100644
+--- a/drivers/platform/x86/amd/pmf/core.c
++++ b/drivers/platform/x86/amd/pmf/core.c
+@@ -297,6 +297,8 @@ static void amd_pmf_init_features(struct amd_pmf_dev *dev)
+ 	/* Enable Static Slider */
+ 	if (is_apmf_func_supported(dev, APMF_FUNC_STATIC_SLIDER_GRANULAR)) {
+ 		amd_pmf_init_sps(dev);
++		dev->pwr_src_notifier.notifier_call = amd_pmf_pwr_src_notify_call;
++		power_supply_reg_notifier(&dev->pwr_src_notifier);
+ 		dev_dbg(dev->dev, "SPS enabled and Platform Profiles registered\n");
+ 	}
+ 
+@@ -315,8 +317,10 @@ static void amd_pmf_init_features(struct amd_pmf_dev *dev)
+ 
+ static void amd_pmf_deinit_features(struct amd_pmf_dev *dev)
+ {
+-	if (is_apmf_func_supported(dev, APMF_FUNC_STATIC_SLIDER_GRANULAR))
++	if (is_apmf_func_supported(dev, APMF_FUNC_STATIC_SLIDER_GRANULAR)) {
++		power_supply_unreg_notifier(&dev->pwr_src_notifier);
+ 		amd_pmf_deinit_sps(dev);
++	}
+ 
+ 	if (is_apmf_func_supported(dev, APMF_FUNC_AUTO_MODE)) {
+ 		amd_pmf_deinit_auto_mode(dev);
+@@ -399,9 +403,6 @@ static int amd_pmf_probe(struct platform_device *pdev)
+ 	apmf_install_handler(dev);
+ 	amd_pmf_dbgfs_register(dev);
+ 
+-	dev->pwr_src_notifier.notifier_call = amd_pmf_pwr_src_notify_call;
+-	power_supply_reg_notifier(&dev->pwr_src_notifier);
+-
+ 	dev_info(dev->dev, "registered PMF device successfully\n");
+ 
+ 	return 0;
+@@ -411,7 +412,6 @@ static int amd_pmf_remove(struct platform_device *pdev)
+ {
+ 	struct amd_pmf_dev *dev = platform_get_drvdata(pdev);
+ 
+-	power_supply_unreg_notifier(&dev->pwr_src_notifier);
+ 	amd_pmf_deinit_features(dev);
+ 	apmf_acpi_deinit(dev);
+ 	amd_pmf_dbgfs_unregister(dev);
+diff --git a/drivers/platform/x86/intel/int3472/clk_and_regulator.c b/drivers/platform/x86/intel/int3472/clk_and_regulator.c
+index 1086c3d834945..399f0623ca1b5 100644
+--- a/drivers/platform/x86/intel/int3472/clk_and_regulator.c
++++ b/drivers/platform/x86/intel/int3472/clk_and_regulator.c
+@@ -101,9 +101,11 @@ int skl_int3472_register_clock(struct int3472_discrete_device *int3472,
+ 
+ 	int3472->clock.ena_gpio = acpi_get_and_request_gpiod(path, agpio->pin_table[0],
+ 							     "int3472,clk-enable");
+-	if (IS_ERR(int3472->clock.ena_gpio))
+-		return dev_err_probe(int3472->dev, PTR_ERR(int3472->clock.ena_gpio),
+-				     "getting clk-enable GPIO\n");
++	if (IS_ERR(int3472->clock.ena_gpio)) {
++		ret = PTR_ERR(int3472->clock.ena_gpio);
++		int3472->clock.ena_gpio = NULL;
++		return dev_err_probe(int3472->dev, ret, "getting clk-enable GPIO\n");
++	}
+ 
+ 	if (polarity == GPIO_ACTIVE_LOW)
+ 		gpiod_toggle_active_low(int3472->clock.ena_gpio);
+@@ -199,8 +201,9 @@ int skl_int3472_register_regulator(struct int3472_discrete_device *int3472,
+ 	int3472->regulator.gpio = acpi_get_and_request_gpiod(path, agpio->pin_table[0],
+ 							     "int3472,regulator");
+ 	if (IS_ERR(int3472->regulator.gpio)) {
+-		dev_err(int3472->dev, "Failed to get regulator GPIO line\n");
+-		return PTR_ERR(int3472->regulator.gpio);
++		ret = PTR_ERR(int3472->regulator.gpio);
++		int3472->regulator.gpio = NULL;
++		return dev_err_probe(int3472->dev, ret, "getting regulator GPIO\n");
+ 	}
+ 
+ 	/* Ensure the pin is in output mode and non-active state */
+diff --git a/drivers/s390/cio/device.c b/drivers/s390/cio/device.c
+index d5c43e9b51289..c0d620ffea618 100644
+--- a/drivers/s390/cio/device.c
++++ b/drivers/s390/cio/device.c
+@@ -1376,6 +1376,7 @@ void ccw_device_set_notoper(struct ccw_device *cdev)
+ enum io_sch_action {
+ 	IO_SCH_UNREG,
+ 	IO_SCH_ORPH_UNREG,
++	IO_SCH_UNREG_CDEV,
+ 	IO_SCH_ATTACH,
+ 	IO_SCH_UNREG_ATTACH,
+ 	IO_SCH_ORPH_ATTACH,
+@@ -1408,7 +1409,7 @@ static enum io_sch_action sch_get_action(struct subchannel *sch)
+ 	}
+ 	if ((sch->schib.pmcw.pam & sch->opm) == 0) {
+ 		if (ccw_device_notify(cdev, CIO_NO_PATH) != NOTIFY_OK)
+-			return IO_SCH_UNREG;
++			return IO_SCH_UNREG_CDEV;
+ 		return IO_SCH_DISC;
+ 	}
+ 	if (device_is_disconnected(cdev))
+@@ -1470,6 +1471,7 @@ static int io_subchannel_sch_event(struct subchannel *sch, int process)
+ 	case IO_SCH_ORPH_ATTACH:
+ 		ccw_device_set_disconnected(cdev);
+ 		break;
++	case IO_SCH_UNREG_CDEV:
+ 	case IO_SCH_UNREG_ATTACH:
+ 	case IO_SCH_UNREG:
+ 		if (!cdev)
+@@ -1503,6 +1505,7 @@ static int io_subchannel_sch_event(struct subchannel *sch, int process)
+ 		if (rc)
+ 			goto out;
+ 		break;
++	case IO_SCH_UNREG_CDEV:
+ 	case IO_SCH_UNREG_ATTACH:
+ 		spin_lock_irqsave(sch->lock, flags);
+ 		sch_set_cdev(sch, NULL);
+diff --git a/drivers/soundwire/dmi-quirks.c b/drivers/soundwire/dmi-quirks.c
+index 58ea013fa918a..2a1096dab63d3 100644
+--- a/drivers/soundwire/dmi-quirks.c
++++ b/drivers/soundwire/dmi-quirks.c
+@@ -99,6 +99,13 @@ static const struct dmi_system_id adr_remap_quirk_table[] = {
+ 		},
+ 		.driver_data = (void *)intel_tgl_bios,
+ 	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "HP"),
++			DMI_MATCH(DMI_BOARD_NAME, "8709"),
++		},
++		.driver_data = (void *)intel_tgl_bios,
++	},
+ 	{
+ 		/* quirk used for NUC15 'Bishop County' LAPBC510 and LAPBC710 skews */
+ 		.matches = {
+diff --git a/drivers/soundwire/qcom.c b/drivers/soundwire/qcom.c
+index 30575ed20947e..0dcdbd4e1ec3a 100644
+--- a/drivers/soundwire/qcom.c
++++ b/drivers/soundwire/qcom.c
+@@ -1098,8 +1098,10 @@ static int qcom_swrm_startup(struct snd_pcm_substream *substream,
+ 	}
+ 
+ 	sruntime = sdw_alloc_stream(dai->name);
+-	if (!sruntime)
+-		return -ENOMEM;
++	if (!sruntime) {
++		ret = -ENOMEM;
++		goto err_alloc;
++	}
+ 
+ 	ctrl->sruntime[dai->id] = sruntime;
+ 
+@@ -1109,12 +1111,19 @@ static int qcom_swrm_startup(struct snd_pcm_substream *substream,
+ 		if (ret < 0 && ret != -ENOTSUPP) {
+ 			dev_err(dai->dev, "Failed to set sdw stream on %s\n",
+ 				codec_dai->name);
+-			sdw_release_stream(sruntime);
+-			return ret;
++			goto err_set_stream;
+ 		}
+ 	}
+ 
+ 	return 0;
++
++err_set_stream:
++	sdw_release_stream(sruntime);
++err_alloc:
++	pm_runtime_mark_last_busy(ctrl->dev);
++	pm_runtime_put_autosuspend(ctrl->dev);
++
++	return ret;
+ }
+ 
+ static void qcom_swrm_shutdown(struct snd_pcm_substream *substream,
+diff --git a/drivers/spi/spi-fsl-lpspi.c b/drivers/spi/spi-fsl-lpspi.c
+index 34488de555871..457fe6bc7e41e 100644
+--- a/drivers/spi/spi-fsl-lpspi.c
++++ b/drivers/spi/spi-fsl-lpspi.c
+@@ -910,9 +910,14 @@ static int fsl_lpspi_probe(struct platform_device *pdev)
+ 	ret = fsl_lpspi_dma_init(&pdev->dev, fsl_lpspi, controller);
+ 	if (ret == -EPROBE_DEFER)
+ 		goto out_pm_get;
+-
+ 	if (ret < 0)
+ 		dev_err(&pdev->dev, "dma setup error %d, use pio\n", ret);
++	else
++		/*
++		 * disable LPSPI module IRQ when enable DMA mode successfully,
++		 * to prevent the unexpected LPSPI module IRQ events.
++		 */
++		disable_irq(irq);
+ 
+ 	ret = devm_spi_register_controller(&pdev->dev, controller);
+ 	if (ret < 0) {
+diff --git a/drivers/spi/spi-geni-qcom.c b/drivers/spi/spi-geni-qcom.c
+index b106faf21a723..baf477383682d 100644
+--- a/drivers/spi/spi-geni-qcom.c
++++ b/drivers/spi/spi-geni-qcom.c
+@@ -646,6 +646,8 @@ static int spi_geni_init(struct spi_geni_master *mas)
+ 			geni_se_select_mode(se, GENI_GPI_DMA);
+ 			dev_dbg(mas->dev, "Using GPI DMA mode for SPI\n");
+ 			break;
++		} else if (ret == -EPROBE_DEFER) {
++			goto out_pm;
+ 		}
+ 		/*
+ 		 * in case of failure to get gpi dma channel, we can still do the
+diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c
+index 07e196b44b91d..c70ce1fa399f8 100644
+--- a/drivers/target/iscsi/iscsi_target.c
++++ b/drivers/target/iscsi/iscsi_target.c
+@@ -363,8 +363,6 @@ struct iscsi_np *iscsit_add_np(
+ 	init_completion(&np->np_restart_comp);
+ 	INIT_LIST_HEAD(&np->np_list);
+ 
+-	timer_setup(&np->np_login_timer, iscsi_handle_login_thread_timeout, 0);
+-
+ 	ret = iscsi_target_setup_login_socket(np, sockaddr);
+ 	if (ret != 0) {
+ 		kfree(np);
+diff --git a/drivers/target/iscsi/iscsi_target_login.c b/drivers/target/iscsi/iscsi_target_login.c
+index 274bdd7845ca9..90b870f234f03 100644
+--- a/drivers/target/iscsi/iscsi_target_login.c
++++ b/drivers/target/iscsi/iscsi_target_login.c
+@@ -811,59 +811,6 @@ void iscsi_post_login_handler(
+ 	iscsit_dec_conn_usage_count(conn);
+ }
+ 
+-void iscsi_handle_login_thread_timeout(struct timer_list *t)
+-{
+-	struct iscsi_np *np = from_timer(np, t, np_login_timer);
+-
+-	spin_lock_bh(&np->np_thread_lock);
+-	pr_err("iSCSI Login timeout on Network Portal %pISpc\n",
+-			&np->np_sockaddr);
+-
+-	if (np->np_login_timer_flags & ISCSI_TF_STOP) {
+-		spin_unlock_bh(&np->np_thread_lock);
+-		return;
+-	}
+-
+-	if (np->np_thread)
+-		send_sig(SIGINT, np->np_thread, 1);
+-
+-	np->np_login_timer_flags &= ~ISCSI_TF_RUNNING;
+-	spin_unlock_bh(&np->np_thread_lock);
+-}
+-
+-static void iscsi_start_login_thread_timer(struct iscsi_np *np)
+-{
+-	/*
+-	 * This used the TA_LOGIN_TIMEOUT constant because at this
+-	 * point we do not have access to ISCSI_TPG_ATTRIB(tpg)->login_timeout
+-	 */
+-	spin_lock_bh(&np->np_thread_lock);
+-	np->np_login_timer_flags &= ~ISCSI_TF_STOP;
+-	np->np_login_timer_flags |= ISCSI_TF_RUNNING;
+-	mod_timer(&np->np_login_timer, jiffies + TA_LOGIN_TIMEOUT * HZ);
+-
+-	pr_debug("Added timeout timer to iSCSI login request for"
+-			" %u seconds.\n", TA_LOGIN_TIMEOUT);
+-	spin_unlock_bh(&np->np_thread_lock);
+-}
+-
+-static void iscsi_stop_login_thread_timer(struct iscsi_np *np)
+-{
+-	spin_lock_bh(&np->np_thread_lock);
+-	if (!(np->np_login_timer_flags & ISCSI_TF_RUNNING)) {
+-		spin_unlock_bh(&np->np_thread_lock);
+-		return;
+-	}
+-	np->np_login_timer_flags |= ISCSI_TF_STOP;
+-	spin_unlock_bh(&np->np_thread_lock);
+-
+-	del_timer_sync(&np->np_login_timer);
+-
+-	spin_lock_bh(&np->np_thread_lock);
+-	np->np_login_timer_flags &= ~ISCSI_TF_RUNNING;
+-	spin_unlock_bh(&np->np_thread_lock);
+-}
+-
+ int iscsit_setup_np(
+ 	struct iscsi_np *np,
+ 	struct sockaddr_storage *sockaddr)
+@@ -1123,10 +1070,13 @@ static struct iscsit_conn *iscsit_alloc_conn(struct iscsi_np *np)
+ 	spin_lock_init(&conn->nopin_timer_lock);
+ 	spin_lock_init(&conn->response_queue_lock);
+ 	spin_lock_init(&conn->state_lock);
++	spin_lock_init(&conn->login_worker_lock);
++	spin_lock_init(&conn->login_timer_lock);
+ 
+ 	timer_setup(&conn->nopin_response_timer,
+ 		    iscsit_handle_nopin_response_timeout, 0);
+ 	timer_setup(&conn->nopin_timer, iscsit_handle_nopin_timeout, 0);
++	timer_setup(&conn->login_timer, iscsit_login_timeout, 0);
+ 
+ 	if (iscsit_conn_set_transport(conn, np->np_transport) < 0)
+ 		goto free_conn;
+@@ -1304,7 +1254,7 @@ static int __iscsi_target_login_thread(struct iscsi_np *np)
+ 		goto new_sess_out;
+ 	}
+ 
+-	iscsi_start_login_thread_timer(np);
++	iscsit_start_login_timer(conn, current);
+ 
+ 	pr_debug("Moving to TARG_CONN_STATE_XPT_UP.\n");
+ 	conn->conn_state = TARG_CONN_STATE_XPT_UP;
+@@ -1417,8 +1367,6 @@ static int __iscsi_target_login_thread(struct iscsi_np *np)
+ 	if (ret < 0)
+ 		goto new_sess_out;
+ 
+-	iscsi_stop_login_thread_timer(np);
+-
+ 	if (ret == 1) {
+ 		tpg_np = conn->tpg_np;
+ 
+@@ -1434,7 +1382,7 @@ static int __iscsi_target_login_thread(struct iscsi_np *np)
+ new_sess_out:
+ 	new_sess = true;
+ old_sess_out:
+-	iscsi_stop_login_thread_timer(np);
++	iscsit_stop_login_timer(conn);
+ 	tpg_np = conn->tpg_np;
+ 	iscsi_target_login_sess_out(conn, zero_tsih, new_sess);
+ 	new_sess = false;
+@@ -1448,7 +1396,6 @@ old_sess_out:
+ 	return 1;
+ 
+ exit:
+-	iscsi_stop_login_thread_timer(np);
+ 	spin_lock_bh(&np->np_thread_lock);
+ 	np->np_thread_state = ISCSI_NP_THREAD_EXIT;
+ 	spin_unlock_bh(&np->np_thread_lock);
+diff --git a/drivers/target/iscsi/iscsi_target_nego.c b/drivers/target/iscsi/iscsi_target_nego.c
+index 24040c118e49d..fa3fb5f4e6bc4 100644
+--- a/drivers/target/iscsi/iscsi_target_nego.c
++++ b/drivers/target/iscsi/iscsi_target_nego.c
+@@ -535,25 +535,6 @@ static void iscsi_target_login_drop(struct iscsit_conn *conn, struct iscsi_login
+ 	iscsi_target_login_sess_out(conn, zero_tsih, true);
+ }
+ 
+-struct conn_timeout {
+-	struct timer_list timer;
+-	struct iscsit_conn *conn;
+-};
+-
+-static void iscsi_target_login_timeout(struct timer_list *t)
+-{
+-	struct conn_timeout *timeout = from_timer(timeout, t, timer);
+-	struct iscsit_conn *conn = timeout->conn;
+-
+-	pr_debug("Entering iscsi_target_login_timeout >>>>>>>>>>>>>>>>>>>\n");
+-
+-	if (conn->login_kworker) {
+-		pr_debug("Sending SIGINT to conn->login_kworker %s/%d\n",
+-			 conn->login_kworker->comm, conn->login_kworker->pid);
+-		send_sig(SIGINT, conn->login_kworker, 1);
+-	}
+-}
+-
+ static void iscsi_target_do_login_rx(struct work_struct *work)
+ {
+ 	struct iscsit_conn *conn = container_of(work,
+@@ -562,12 +543,15 @@ static void iscsi_target_do_login_rx(struct work_struct *work)
+ 	struct iscsi_np *np = login->np;
+ 	struct iscsi_portal_group *tpg = conn->tpg;
+ 	struct iscsi_tpg_np *tpg_np = conn->tpg_np;
+-	struct conn_timeout timeout;
+ 	int rc, zero_tsih = login->zero_tsih;
+ 	bool state;
+ 
+ 	pr_debug("entering iscsi_target_do_login_rx, conn: %p, %s:%d\n",
+ 			conn, current->comm, current->pid);
++
++	spin_lock(&conn->login_worker_lock);
++	set_bit(LOGIN_FLAGS_WORKER_RUNNING, &conn->login_flags);
++	spin_unlock(&conn->login_worker_lock);
+ 	/*
+ 	 * If iscsi_target_do_login_rx() has been invoked by ->sk_data_ready()
+ 	 * before initial PDU processing in iscsi_target_start_negotiation()
+@@ -597,19 +581,16 @@ static void iscsi_target_do_login_rx(struct work_struct *work)
+ 		goto err;
+ 	}
+ 
+-	conn->login_kworker = current;
+ 	allow_signal(SIGINT);
+-
+-	timeout.conn = conn;
+-	timer_setup_on_stack(&timeout.timer, iscsi_target_login_timeout, 0);
+-	mod_timer(&timeout.timer, jiffies + TA_LOGIN_TIMEOUT * HZ);
+-	pr_debug("Starting login timer for %s/%d\n", current->comm, current->pid);
++	rc = iscsit_set_login_timer_kworker(conn, current);
++	if (rc < 0) {
++		/* The login timer has already expired */
++		pr_debug("iscsi_target_do_login_rx, login failed\n");
++		goto err;
++	}
+ 
+ 	rc = conn->conn_transport->iscsit_get_login_rx(conn, login);
+-	del_timer_sync(&timeout.timer);
+-	destroy_timer_on_stack(&timeout.timer);
+ 	flush_signals(current);
+-	conn->login_kworker = NULL;
+ 
+ 	if (rc < 0)
+ 		goto err;
+@@ -646,7 +627,17 @@ static void iscsi_target_do_login_rx(struct work_struct *work)
+ 		if (iscsi_target_sk_check_and_clear(conn,
+ 						    LOGIN_FLAGS_WRITE_ACTIVE))
+ 			goto err;
++
++		/*
++		 * Set the login timer thread pointer to NULL to prevent the
++		 * login process from getting stuck if the initiator
++		 * stops sending data.
++		 */
++		rc = iscsit_set_login_timer_kworker(conn, NULL);
++		if (rc < 0)
++			goto err;
+ 	} else if (rc == 1) {
++		iscsit_stop_login_timer(conn);
+ 		cancel_delayed_work(&conn->login_work);
+ 		iscsi_target_nego_release(conn);
+ 		iscsi_post_login_handler(np, conn, zero_tsih);
+@@ -656,6 +647,7 @@ static void iscsi_target_do_login_rx(struct work_struct *work)
+ 
+ err:
+ 	iscsi_target_restore_sock_callbacks(conn);
++	iscsit_stop_login_timer(conn);
+ 	cancel_delayed_work(&conn->login_work);
+ 	iscsi_target_login_drop(conn, login);
+ 	iscsit_deaccess_np(np, tpg, tpg_np);
+@@ -1130,6 +1122,7 @@ int iscsi_target_locate_portal(
+ 	iscsi_target_set_sock_callbacks(conn);
+ 
+ 	login->np = np;
++	conn->tpg = NULL;
+ 
+ 	login_req = (struct iscsi_login_req *) login->req;
+ 	payload_length = ntoh24(login_req->dlength);
+@@ -1197,7 +1190,6 @@ int iscsi_target_locate_portal(
+ 	 */
+ 	sessiontype = strncmp(s_buf, DISCOVERY, 9);
+ 	if (!sessiontype) {
+-		conn->tpg = iscsit_global->discovery_tpg;
+ 		if (!login->leading_connection)
+ 			goto get_target;
+ 
+@@ -1214,9 +1206,11 @@ int iscsi_target_locate_portal(
+ 		 * Serialize access across the discovery struct iscsi_portal_group to
+ 		 * process login attempt.
+ 		 */
++		conn->tpg = iscsit_global->discovery_tpg;
+ 		if (iscsit_access_np(np, conn->tpg) < 0) {
+ 			iscsit_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+ 				ISCSI_LOGIN_STATUS_SVC_UNAVAILABLE);
++			conn->tpg = NULL;
+ 			ret = -1;
+ 			goto out;
+ 		}
+@@ -1368,14 +1362,30 @@ int iscsi_target_start_negotiation(
+ 	 * and perform connection cleanup now.
+ 	 */
+ 	ret = iscsi_target_do_login(conn, login);
+-	if (!ret && iscsi_target_sk_check_and_clear(conn, LOGIN_FLAGS_INITIAL_PDU))
+-		ret = -1;
++	if (!ret) {
++		spin_lock(&conn->login_worker_lock);
++
++		if (iscsi_target_sk_check_and_clear(conn, LOGIN_FLAGS_INITIAL_PDU))
++			ret = -1;
++		else if (!test_bit(LOGIN_FLAGS_WORKER_RUNNING, &conn->login_flags)) {
++			if (iscsit_set_login_timer_kworker(conn, NULL) < 0) {
++				/*
++				 * The timeout has expired already.
++				 * Schedule login_work to perform the cleanup.
++				 */
++				schedule_delayed_work(&conn->login_work, 0);
++			}
++		}
++
++		spin_unlock(&conn->login_worker_lock);
++	}
+ 
+ 	if (ret < 0) {
+ 		iscsi_target_restore_sock_callbacks(conn);
+ 		iscsi_remove_failed_auth_entry(conn);
+ 	}
+ 	if (ret != 0) {
++		iscsit_stop_login_timer(conn);
+ 		cancel_delayed_work_sync(&conn->login_work);
+ 		iscsi_target_nego_release(conn);
+ 	}
+diff --git a/drivers/target/iscsi/iscsi_target_util.c b/drivers/target/iscsi/iscsi_target_util.c
+index 26dc8ed3045b6..b14835fcb0334 100644
+--- a/drivers/target/iscsi/iscsi_target_util.c
++++ b/drivers/target/iscsi/iscsi_target_util.c
+@@ -1040,6 +1040,57 @@ void iscsit_stop_nopin_timer(struct iscsit_conn *conn)
+ 	spin_unlock_bh(&conn->nopin_timer_lock);
+ }
+ 
++void iscsit_login_timeout(struct timer_list *t)
++{
++	struct iscsit_conn *conn = from_timer(conn, t, login_timer);
++	struct iscsi_login *login = conn->login;
++
++	pr_debug("Entering iscsi_target_login_timeout >>>>>>>>>>>>>>>>>>>\n");
++
++	spin_lock_bh(&conn->login_timer_lock);
++	login->login_failed = 1;
++
++	if (conn->login_kworker) {
++		pr_debug("Sending SIGINT to conn->login_kworker %s/%d\n",
++			 conn->login_kworker->comm, conn->login_kworker->pid);
++		send_sig(SIGINT, conn->login_kworker, 1);
++	} else {
++		schedule_delayed_work(&conn->login_work, 0);
++	}
++	spin_unlock_bh(&conn->login_timer_lock);
++}
++
++void iscsit_start_login_timer(struct iscsit_conn *conn, struct task_struct *kthr)
++{
++	pr_debug("Login timer started\n");
++
++	conn->login_kworker = kthr;
++	mod_timer(&conn->login_timer, jiffies + TA_LOGIN_TIMEOUT * HZ);
++}
++
++int iscsit_set_login_timer_kworker(struct iscsit_conn *conn, struct task_struct *kthr)
++{
++	struct iscsi_login *login = conn->login;
++	int ret = 0;
++
++	spin_lock_bh(&conn->login_timer_lock);
++	if (login->login_failed) {
++		/* The timer has already expired */
++		ret = -1;
++	} else {
++		conn->login_kworker = kthr;
++	}
++	spin_unlock_bh(&conn->login_timer_lock);
++
++	return ret;
++}
++
++void iscsit_stop_login_timer(struct iscsit_conn *conn)
++{
++	pr_debug("Login timer stopped\n");
++	timer_delete_sync(&conn->login_timer);
++}
++
+ int iscsit_send_tx_data(
+ 	struct iscsit_cmd *cmd,
+ 	struct iscsit_conn *conn,
+diff --git a/drivers/target/iscsi/iscsi_target_util.h b/drivers/target/iscsi/iscsi_target_util.h
+index 33ea799a08508..24b8e577575a6 100644
+--- a/drivers/target/iscsi/iscsi_target_util.h
++++ b/drivers/target/iscsi/iscsi_target_util.h
+@@ -56,6 +56,10 @@ extern void iscsit_handle_nopin_timeout(struct timer_list *t);
+ extern void __iscsit_start_nopin_timer(struct iscsit_conn *);
+ extern void iscsit_start_nopin_timer(struct iscsit_conn *);
+ extern void iscsit_stop_nopin_timer(struct iscsit_conn *);
++extern void iscsit_login_timeout(struct timer_list *t);
++extern void iscsit_start_login_timer(struct iscsit_conn *, struct task_struct *kthr);
++extern void iscsit_stop_login_timer(struct iscsit_conn *);
++extern int iscsit_set_login_timer_kworker(struct iscsit_conn *, struct task_struct *kthr);
+ extern int iscsit_send_tx_data(struct iscsit_cmd *, struct iscsit_conn *, int);
+ extern int iscsit_fe_sendpage_sg(struct iscsit_cmd *, struct iscsit_conn *);
+ extern int iscsit_tx_login_rsp(struct iscsit_conn *, u8, u8);
+diff --git a/drivers/thermal/intel/intel_soc_dts_iosf.c b/drivers/thermal/intel/intel_soc_dts_iosf.c
+index 8c26f7b2316b5..99b3c2ce46434 100644
+--- a/drivers/thermal/intel/intel_soc_dts_iosf.c
++++ b/drivers/thermal/intel/intel_soc_dts_iosf.c
+@@ -401,7 +401,7 @@ struct intel_soc_dts_sensors *intel_soc_dts_iosf_init(
+ 	spin_lock_init(&sensors->intr_notify_lock);
+ 	mutex_init(&sensors->dts_update_lock);
+ 	sensors->intr_type = intr_type;
+-	sensors->tj_max = tj_max;
++	sensors->tj_max = tj_max * 1000;
+ 	if (intr_type == INTEL_SOC_DTS_INTERRUPT_NONE)
+ 		notification = false;
+ 	else
+diff --git a/drivers/usb/gadget/udc/amd5536udc_pci.c b/drivers/usb/gadget/udc/amd5536udc_pci.c
+index c80f9bd51b750..a36913ae31f9e 100644
+--- a/drivers/usb/gadget/udc/amd5536udc_pci.c
++++ b/drivers/usb/gadget/udc/amd5536udc_pci.c
+@@ -170,6 +170,9 @@ static int udc_pci_probe(
+ 		retval = -ENODEV;
+ 		goto err_probe;
+ 	}
++
++	udc = dev;
++
+ 	return 0;
+ 
+ err_probe:
+diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
+index 07181cd8d52e6..ae2273196b0c9 100644
+--- a/drivers/vhost/net.c
++++ b/drivers/vhost/net.c
+@@ -935,13 +935,18 @@ static void handle_tx_zerocopy(struct vhost_net *net, struct socket *sock)
+ 
+ 		err = sock->ops->sendmsg(sock, &msg, len);
+ 		if (unlikely(err < 0)) {
++			bool retry = err == -EAGAIN || err == -ENOMEM || err == -ENOBUFS;
++
+ 			if (zcopy_used) {
+ 				if (vq->heads[ubuf->desc].len == VHOST_DMA_IN_PROGRESS)
+ 					vhost_net_ubuf_put(ubufs);
+-				nvq->upend_idx = ((unsigned)nvq->upend_idx - 1)
+-					% UIO_MAXIOV;
++				if (retry)
++					nvq->upend_idx = ((unsigned)nvq->upend_idx - 1)
++						% UIO_MAXIOV;
++				else
++					vq->heads[ubuf->desc].len = VHOST_DMA_DONE_LEN;
+ 			}
+-			if (err == -EAGAIN || err == -ENOMEM || err == -ENOBUFS) {
++			if (retry) {
+ 				vhost_discard_vq_desc(vq, 1);
+ 				vhost_net_enable_vq(net, vq);
+ 				break;
+diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
+index 779fc44677162..22e6b23ac96ff 100644
+--- a/drivers/vhost/vdpa.c
++++ b/drivers/vhost/vdpa.c
+@@ -385,7 +385,10 @@ static long vhost_vdpa_set_features(struct vhost_vdpa *v, u64 __user *featurep)
+ {
+ 	struct vdpa_device *vdpa = v->vdpa;
+ 	const struct vdpa_config_ops *ops = vdpa->config;
++	struct vhost_dev *d = &v->vdev;
++	u64 actual_features;
+ 	u64 features;
++	int i;
+ 
+ 	/*
+ 	 * It's not allowed to change the features after they have
+@@ -400,6 +403,16 @@ static long vhost_vdpa_set_features(struct vhost_vdpa *v, u64 __user *featurep)
+ 	if (vdpa_set_features(vdpa, features))
+ 		return -EINVAL;
+ 
++	/* let the vqs know what has been configured */
++	actual_features = ops->get_driver_features(vdpa);
++	for (i = 0; i < d->nvqs; ++i) {
++		struct vhost_virtqueue *vq = d->vqs[i];
++
++		mutex_lock(&vq->mutex);
++		vq->acked_features = actual_features;
++		mutex_unlock(&vq->mutex);
++	}
++
+ 	return 0;
+ }
+ 
+diff --git a/fs/afs/write.c b/fs/afs/write.c
+index 571f3b9a417e5..ad46f4b3c0c82 100644
+--- a/fs/afs/write.c
++++ b/fs/afs/write.c
+@@ -731,6 +731,7 @@ static int afs_writepages_region(struct address_space *mapping,
+ 			 * (changing page->mapping to NULL), or even swizzled
+ 			 * back from swapper_space to tmpfs file mapping
+ 			 */
++try_again:
+ 			if (wbc->sync_mode != WB_SYNC_NONE) {
+ 				ret = folio_lock_killable(folio);
+ 				if (ret < 0) {
+@@ -757,12 +758,14 @@ static int afs_writepages_region(struct address_space *mapping,
+ #ifdef CONFIG_AFS_FSCACHE
+ 					folio_wait_fscache(folio);
+ #endif
+-				} else {
+-					start += folio_size(folio);
++					goto try_again;
+ 				}
++
++				start += folio_size(folio);
+ 				if (wbc->sync_mode == WB_SYNC_NONE) {
+ 					if (skips >= 5 || need_resched()) {
+ 						*_next = start;
++						folio_batch_release(&fbatch);
+ 						_leave(" = 0 [%llx]", *_next);
+ 						return 0;
+ 					}
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index 200cea6e49e51..b91fa398b8141 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -6163,7 +6163,7 @@ static int log_delayed_deletions_incremental(struct btrfs_trans_handle *trans,
+ {
+ 	struct btrfs_root *log = inode->root->log_root;
+ 	const struct btrfs_delayed_item *curr;
+-	u64 last_range_start;
++	u64 last_range_start = 0;
+ 	u64 last_range_end = 0;
+ 	struct btrfs_key key;
+ 
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index 8e9a672320ab7..1250d156619b7 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -4086,16 +4086,17 @@ int cifs_tree_connect(const unsigned int xid, struct cifs_tcon *tcon, const stru
+ 
+ 	/* only send once per connect */
+ 	spin_lock(&tcon->tc_lock);
++	if (tcon->status == TID_GOOD) {
++		spin_unlock(&tcon->tc_lock);
++		return 0;
++	}
++
+ 	if (tcon->status != TID_NEW &&
+ 	    tcon->status != TID_NEED_TCON) {
+ 		spin_unlock(&tcon->tc_lock);
+ 		return -EHOSTDOWN;
+ 	}
+ 
+-	if (tcon->status == TID_GOOD) {
+-		spin_unlock(&tcon->tc_lock);
+-		return 0;
+-	}
+ 	tcon->status = TID_IN_TCON;
+ 	spin_unlock(&tcon->tc_lock);
+ 
+diff --git a/fs/cifs/dfs.c b/fs/cifs/dfs.c
+index 2f93bf8c3325a..2390b2fedd6a3 100644
+--- a/fs/cifs/dfs.c
++++ b/fs/cifs/dfs.c
+@@ -575,16 +575,17 @@ int cifs_tree_connect(const unsigned int xid, struct cifs_tcon *tcon, const stru
+ 
+ 	/* only send once per connect */
+ 	spin_lock(&tcon->tc_lock);
++	if (tcon->status == TID_GOOD) {
++		spin_unlock(&tcon->tc_lock);
++		return 0;
++	}
++
+ 	if (tcon->status != TID_NEW &&
+ 	    tcon->status != TID_NEED_TCON) {
+ 		spin_unlock(&tcon->tc_lock);
+ 		return -EHOSTDOWN;
+ 	}
+ 
+-	if (tcon->status == TID_GOOD) {
+-		spin_unlock(&tcon->tc_lock);
+-		return 0;
+-	}
+ 	tcon->status = TID_IN_TCON;
+ 	spin_unlock(&tcon->tc_lock);
+ 
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index 02228a590cd54..a2a95cd113df8 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -3777,7 +3777,7 @@ SMB2_change_notify(const unsigned int xid, struct cifs_tcon *tcon,
+ 		if (*out_data == NULL) {
+ 			rc = -ENOMEM;
+ 			goto cnotify_exit;
+-		} else
++		} else if (plen)
+ 			*plen = le32_to_cpu(smb_rsp->OutputBufferLength);
+ 	}
+ 
+diff --git a/fs/gfs2/file.c b/fs/gfs2/file.c
+index 300844f50dcd2..cb62c8f07d1e7 100644
+--- a/fs/gfs2/file.c
++++ b/fs/gfs2/file.c
+@@ -784,9 +784,13 @@ static inline bool should_fault_in_pages(struct iov_iter *i,
+ 	if (!user_backed_iter(i))
+ 		return false;
+ 
++	/*
++	 * Try to fault in multiple pages initially.  When that doesn't result
++	 * in any progress, fall back to a single page.
++	 */
+ 	size = PAGE_SIZE;
+ 	offs = offset_in_page(iocb->ki_pos);
+-	if (*prev_count != count || !*window_size) {
++	if (*prev_count != count) {
+ 		size_t nr_dirtied;
+ 
+ 		nr_dirtied = max(current->nr_dirtied_pause -
+@@ -870,6 +874,7 @@ static ssize_t gfs2_file_direct_write(struct kiocb *iocb, struct iov_iter *from,
+ 	struct gfs2_inode *ip = GFS2_I(inode);
+ 	size_t prev_count = 0, window_size = 0;
+ 	size_t written = 0;
++	bool enough_retries;
+ 	ssize_t ret;
+ 
+ 	/*
+@@ -913,11 +918,17 @@ retry:
+ 	if (ret > 0)
+ 		written = ret;
+ 
++	enough_retries = prev_count == iov_iter_count(from) &&
++			 window_size <= PAGE_SIZE;
+ 	if (should_fault_in_pages(from, iocb, &prev_count, &window_size)) {
+ 		gfs2_glock_dq(gh);
+ 		window_size -= fault_in_iov_iter_readable(from, window_size);
+-		if (window_size)
+-			goto retry;
++		if (window_size) {
++			if (!enough_retries)
++				goto retry;
++			/* fall back to buffered I/O */
++			ret = 0;
++		}
+ 	}
+ out_unlock:
+ 	if (gfs2_holder_queued(gh))
+diff --git a/fs/internal.h b/fs/internal.h
+index dc4eb91a577a8..071a7517f1a74 100644
+--- a/fs/internal.h
++++ b/fs/internal.h
+@@ -59,8 +59,6 @@ extern int finish_clean_context(struct fs_context *fc);
+  */
+ extern int filename_lookup(int dfd, struct filename *name, unsigned flags,
+ 			   struct path *path, struct path *root);
+-extern int vfs_path_lookup(struct dentry *, struct vfsmount *,
+-			   const char *, unsigned int, struct path *);
+ int do_rmdir(int dfd, struct filename *name);
+ int do_unlinkat(int dfd, struct filename *name);
+ int may_linkat(struct mnt_idmap *idmap, const struct path *link);
+diff --git a/fs/ksmbd/server.c b/fs/ksmbd/server.c
+index dc76d7cf241f0..14df83c205577 100644
+--- a/fs/ksmbd/server.c
++++ b/fs/ksmbd/server.c
+@@ -185,24 +185,31 @@ static void __handle_ksmbd_work(struct ksmbd_work *work,
+ 		goto send;
+ 	}
+ 
+-	if (conn->ops->check_user_session) {
+-		rc = conn->ops->check_user_session(work);
+-		if (rc < 0) {
+-			command = conn->ops->get_cmd_val(work);
+-			conn->ops->set_rsp_status(work,
+-					STATUS_USER_SESSION_DELETED);
+-			goto send;
+-		} else if (rc > 0) {
+-			rc = conn->ops->get_ksmbd_tcon(work);
++	do {
++		if (conn->ops->check_user_session) {
++			rc = conn->ops->check_user_session(work);
+ 			if (rc < 0) {
+-				conn->ops->set_rsp_status(work,
+-					STATUS_NETWORK_NAME_DELETED);
++				if (rc == -EINVAL)
++					conn->ops->set_rsp_status(work,
++						STATUS_INVALID_PARAMETER);
++				else
++					conn->ops->set_rsp_status(work,
++						STATUS_USER_SESSION_DELETED);
+ 				goto send;
++			} else if (rc > 0) {
++				rc = conn->ops->get_ksmbd_tcon(work);
++				if (rc < 0) {
++					if (rc == -EINVAL)
++						conn->ops->set_rsp_status(work,
++							STATUS_INVALID_PARAMETER);
++					else
++						conn->ops->set_rsp_status(work,
++							STATUS_NETWORK_NAME_DELETED);
++					goto send;
++				}
+ 			}
+ 		}
+-	}
+ 
+-	do {
+ 		rc = __process_request(work, conn, &command);
+ 		if (rc == SERVER_HANDLER_ABORT)
+ 			break;
+diff --git a/fs/ksmbd/smb2misc.c b/fs/ksmbd/smb2misc.c
+index 0ffe663b75906..33b7e6c4ceffb 100644
+--- a/fs/ksmbd/smb2misc.c
++++ b/fs/ksmbd/smb2misc.c
+@@ -351,9 +351,16 @@ int ksmbd_smb2_check_message(struct ksmbd_work *work)
+ 	int command;
+ 	__u32 clc_len;  /* calculated length */
+ 	__u32 len = get_rfc1002_len(work->request_buf);
++	__u32 req_struct_size, next_cmd = le32_to_cpu(hdr->NextCommand);
+ 
+-	if (le32_to_cpu(hdr->NextCommand) > 0)
+-		len = le32_to_cpu(hdr->NextCommand);
++	if ((u64)work->next_smb2_rcv_hdr_off + next_cmd > len) {
++		pr_err("next command(%u) offset exceeds smb msg size\n",
++				next_cmd);
++		return 1;
++	}
++
++	if (next_cmd > 0)
++		len = next_cmd;
+ 	else if (work->next_smb2_rcv_hdr_off)
+ 		len -= work->next_smb2_rcv_hdr_off;
+ 
+@@ -373,17 +380,9 @@ int ksmbd_smb2_check_message(struct ksmbd_work *work)
+ 	}
+ 
+ 	if (smb2_req_struct_sizes[command] != pdu->StructureSize2) {
+-		if (command != SMB2_OPLOCK_BREAK_HE &&
+-		    (hdr->Status == 0 || pdu->StructureSize2 != SMB2_ERROR_STRUCTURE_SIZE2_LE)) {
+-			/* error packets have 9 byte structure size */
+-			ksmbd_debug(SMB,
+-				    "Illegal request size %u for command %d\n",
+-				    le16_to_cpu(pdu->StructureSize2), command);
+-			return 1;
+-		} else if (command == SMB2_OPLOCK_BREAK_HE &&
+-			   hdr->Status == 0 &&
+-			   le16_to_cpu(pdu->StructureSize2) != OP_BREAK_STRUCT_SIZE_20 &&
+-			   le16_to_cpu(pdu->StructureSize2) != OP_BREAK_STRUCT_SIZE_21) {
++		if (command == SMB2_OPLOCK_BREAK_HE &&
++		    le16_to_cpu(pdu->StructureSize2) != OP_BREAK_STRUCT_SIZE_20 &&
++		    le16_to_cpu(pdu->StructureSize2) != OP_BREAK_STRUCT_SIZE_21) {
+ 			/* special case for SMB2.1 lease break message */
+ 			ksmbd_debug(SMB,
+ 				    "Illegal request size %d for oplock break\n",
+@@ -392,6 +391,14 @@ int ksmbd_smb2_check_message(struct ksmbd_work *work)
+ 		}
+ 	}
+ 
++	req_struct_size = le16_to_cpu(pdu->StructureSize2) +
++		__SMB2_HEADER_STRUCTURE_SIZE;
++	if (command == SMB2_LOCK_HE)
++		req_struct_size -= sizeof(struct smb2_lock_element);
++
++	if (req_struct_size > len + 1)
++		return 1;
++
+ 	if (smb2_calc_size(hdr, &clc_len))
+ 		return 1;
+ 
+diff --git a/fs/ksmbd/smb2pdu.c b/fs/ksmbd/smb2pdu.c
+index 85783112e56a0..9610b8a884d0e 100644
+--- a/fs/ksmbd/smb2pdu.c
++++ b/fs/ksmbd/smb2pdu.c
+@@ -91,7 +91,6 @@ int smb2_get_ksmbd_tcon(struct ksmbd_work *work)
+ 	unsigned int cmd = le16_to_cpu(req_hdr->Command);
+ 	int tree_id;
+ 
+-	work->tcon = NULL;
+ 	if (cmd == SMB2_TREE_CONNECT_HE ||
+ 	    cmd ==  SMB2_CANCEL_HE ||
+ 	    cmd ==  SMB2_LOGOFF_HE) {
+@@ -105,10 +104,28 @@ int smb2_get_ksmbd_tcon(struct ksmbd_work *work)
+ 	}
+ 
+ 	tree_id = le32_to_cpu(req_hdr->Id.SyncId.TreeId);
++
++	/*
++	 * If request is not the first in Compound request,
++	 * Just validate tree id in header with work->tcon->id.
++	 */
++	if (work->next_smb2_rcv_hdr_off) {
++		if (!work->tcon) {
++			pr_err("The first operation in the compound does not have tcon\n");
++			return -EINVAL;
++		}
++		if (work->tcon->id != tree_id) {
++			pr_err("tree id(%u) is different with id(%u) in first operation\n",
++					tree_id, work->tcon->id);
++			return -EINVAL;
++		}
++		return 1;
++	}
++
+ 	work->tcon = ksmbd_tree_conn_lookup(work->sess, tree_id);
+ 	if (!work->tcon) {
+ 		pr_err("Invalid tid %d\n", tree_id);
+-		return -EINVAL;
++		return -ENOENT;
+ 	}
+ 
+ 	return 1;
+@@ -547,7 +564,6 @@ int smb2_check_user_session(struct ksmbd_work *work)
+ 	unsigned int cmd = conn->ops->get_cmd_val(work);
+ 	unsigned long long sess_id;
+ 
+-	work->sess = NULL;
+ 	/*
+ 	 * SMB2_ECHO, SMB2_NEGOTIATE, SMB2_SESSION_SETUP command do not
+ 	 * require a session id, so no need to validate user session's for
+@@ -558,15 +574,33 @@ int smb2_check_user_session(struct ksmbd_work *work)
+ 		return 0;
+ 
+ 	if (!ksmbd_conn_good(conn))
+-		return -EINVAL;
++		return -EIO;
+ 
+ 	sess_id = le64_to_cpu(req_hdr->SessionId);
++
++	/*
++	 * If request is not the first in Compound request,
++	 * Just validate session id in header with work->sess->id.
++	 */
++	if (work->next_smb2_rcv_hdr_off) {
++		if (!work->sess) {
++			pr_err("The first operation in the compound does not have sess\n");
++			return -EINVAL;
++		}
++		if (work->sess->id != sess_id) {
++			pr_err("session id(%llu) is different with the first operation(%lld)\n",
++					sess_id, work->sess->id);
++			return -EINVAL;
++		}
++		return 1;
++	}
++
+ 	/* Check for validity of user session */
+ 	work->sess = ksmbd_session_lookup_all(conn, sess_id);
+ 	if (work->sess)
+ 		return 1;
+ 	ksmbd_debug(SMB, "Invalid user session, Uid %llu\n", sess_id);
+-	return -EINVAL;
++	return -ENOENT;
+ }
+ 
+ static void destroy_previous_session(struct ksmbd_conn *conn,
+@@ -2277,7 +2311,7 @@ static int smb2_set_ea(struct smb2_ea_info *eabuf, unsigned int buf_len,
+ 			/* delete the EA only when it exits */
+ 			if (rc > 0) {
+ 				rc = ksmbd_vfs_remove_xattr(idmap,
+-							    path->dentry,
++							    path,
+ 							    attr_name);
+ 
+ 				if (rc < 0) {
+@@ -2291,8 +2325,7 @@ static int smb2_set_ea(struct smb2_ea_info *eabuf, unsigned int buf_len,
+ 			/* if the EA doesn't exist, just do nothing. */
+ 			rc = 0;
+ 		} else {
+-			rc = ksmbd_vfs_setxattr(idmap,
+-						path->dentry, attr_name, value,
++			rc = ksmbd_vfs_setxattr(idmap, path, attr_name, value,
+ 						le16_to_cpu(eabuf->EaValueLength), 0);
+ 			if (rc < 0) {
+ 				ksmbd_debug(SMB,
+@@ -2349,8 +2382,7 @@ static noinline int smb2_set_stream_name_xattr(const struct path *path,
+ 		return -EBADF;
+ 	}
+ 
+-	rc = ksmbd_vfs_setxattr(idmap, path->dentry,
+-				xattr_stream_name, NULL, 0, 0);
++	rc = ksmbd_vfs_setxattr(idmap, path, xattr_stream_name, NULL, 0, 0);
+ 	if (rc < 0)
+ 		pr_err("Failed to store XATTR stream name :%d\n", rc);
+ 	return 0;
+@@ -2378,7 +2410,7 @@ static int smb2_remove_smb_xattrs(const struct path *path)
+ 		if (!strncmp(name, XATTR_USER_PREFIX, XATTR_USER_PREFIX_LEN) &&
+ 		    !strncmp(&name[XATTR_USER_PREFIX_LEN], STREAM_PREFIX,
+ 			     STREAM_PREFIX_LEN)) {
+-			err = ksmbd_vfs_remove_xattr(idmap, path->dentry,
++			err = ksmbd_vfs_remove_xattr(idmap, path,
+ 						     name);
+ 			if (err)
+ 				ksmbd_debug(SMB, "remove xattr failed : %s\n",
+@@ -2425,8 +2457,7 @@ static void smb2_new_xattrs(struct ksmbd_tree_connect *tcon, const struct path *
+ 	da.flags = XATTR_DOSINFO_ATTRIB | XATTR_DOSINFO_CREATE_TIME |
+ 		XATTR_DOSINFO_ITIME;
+ 
+-	rc = ksmbd_vfs_set_dos_attrib_xattr(mnt_idmap(path->mnt),
+-					    path->dentry, &da);
++	rc = ksmbd_vfs_set_dos_attrib_xattr(mnt_idmap(path->mnt), path, &da);
+ 	if (rc)
+ 		ksmbd_debug(SMB, "failed to store file attribute into xattr\n");
+ }
+@@ -2481,7 +2512,7 @@ static int smb2_creat(struct ksmbd_work *work, struct path *path, char *name,
+ 			return rc;
+ 	}
+ 
+-	rc = ksmbd_vfs_kern_path(work, name, 0, path, 0);
++	rc = ksmbd_vfs_kern_path_locked(work, name, 0, path, 0);
+ 	if (rc) {
+ 		pr_err("cannot get linux path (%s), err = %d\n",
+ 		       name, rc);
+@@ -2772,8 +2803,10 @@ int smb2_open(struct ksmbd_work *work)
+ 		goto err_out1;
+ 	}
+ 
+-	rc = ksmbd_vfs_kern_path(work, name, LOOKUP_NO_SYMLINKS, &path, 1);
++	rc = ksmbd_vfs_kern_path_locked(work, name, LOOKUP_NO_SYMLINKS, &path, 1);
+ 	if (!rc) {
++		file_present = true;
++
+ 		if (req->CreateOptions & FILE_DELETE_ON_CLOSE_LE) {
+ 			/*
+ 			 * If file exists with under flags, return access
+@@ -2782,7 +2815,6 @@ int smb2_open(struct ksmbd_work *work)
+ 			if (req->CreateDisposition == FILE_OVERWRITE_IF_LE ||
+ 			    req->CreateDisposition == FILE_OPEN_IF_LE) {
+ 				rc = -EACCES;
+-				path_put(&path);
+ 				goto err_out;
+ 			}
+ 
+@@ -2790,26 +2822,23 @@ int smb2_open(struct ksmbd_work *work)
+ 				ksmbd_debug(SMB,
+ 					    "User does not have write permission\n");
+ 				rc = -EACCES;
+-				path_put(&path);
+ 				goto err_out;
+ 			}
+ 		} else if (d_is_symlink(path.dentry)) {
+ 			rc = -EACCES;
+-			path_put(&path);
+ 			goto err_out;
+ 		}
+-	}
+ 
+-	if (rc) {
++		file_present = true;
++		idmap = mnt_idmap(path.mnt);
++	} else {
+ 		if (rc != -ENOENT)
+ 			goto err_out;
+ 		ksmbd_debug(SMB, "can not get linux path for %s, rc = %d\n",
+ 			    name, rc);
+ 		rc = 0;
+-	} else {
+-		file_present = true;
+-		idmap = mnt_idmap(path.mnt);
+ 	}
++
+ 	if (stream_name) {
+ 		if (req->CreateOptions & FILE_DIRECTORY_FILE_LE) {
+ 			if (s_type == DATA_STREAM) {
+@@ -2937,8 +2966,9 @@ int smb2_open(struct ksmbd_work *work)
+ 
+ 			if ((daccess & FILE_DELETE_LE) ||
+ 			    (req->CreateOptions & FILE_DELETE_ON_CLOSE_LE)) {
+-				rc = ksmbd_vfs_may_delete(idmap,
+-							  path.dentry);
++				rc = inode_permission(idmap,
++						      d_inode(path.dentry->d_parent),
++						      MAY_EXEC | MAY_WRITE);
+ 				if (rc)
+ 					goto err_out;
+ 			}
+@@ -3001,7 +3031,7 @@ int smb2_open(struct ksmbd_work *work)
+ 		struct inode *inode = d_inode(path.dentry);
+ 
+ 		posix_acl_rc = ksmbd_vfs_inherit_posix_acl(idmap,
+-							   path.dentry,
++							   &path,
+ 							   d_inode(path.dentry->d_parent));
+ 		if (posix_acl_rc)
+ 			ksmbd_debug(SMB, "inherit posix acl failed : %d\n", posix_acl_rc);
+@@ -3017,7 +3047,7 @@ int smb2_open(struct ksmbd_work *work)
+ 			if (rc) {
+ 				if (posix_acl_rc)
+ 					ksmbd_vfs_set_init_posix_acl(idmap,
+-								     path.dentry);
++								     &path);
+ 
+ 				if (test_share_config_flag(work->tcon->share_conf,
+ 							   KSMBD_SHARE_FLAG_ACL_XATTR)) {
+@@ -3057,7 +3087,7 @@ int smb2_open(struct ksmbd_work *work)
+ 
+ 					rc = ksmbd_vfs_set_sd_xattr(conn,
+ 								    idmap,
+-								    path.dentry,
++								    &path,
+ 								    pntsd,
+ 								    pntsd_size);
+ 					kfree(pntsd);
+@@ -3309,10 +3339,13 @@ int smb2_open(struct ksmbd_work *work)
+ 	}
+ 
+ err_out:
+-	if (file_present || created)
+-		path_put(&path);
++	if (file_present || created) {
++		inode_unlock(d_inode(path.dentry->d_parent));
++		dput(path.dentry);
++	}
+ 	ksmbd_revert_fsids(work);
+ err_out1:
++
+ 	if (rc) {
+ 		if (rc == -EINVAL)
+ 			rsp->hdr.Status = STATUS_INVALID_PARAMETER;
+@@ -5451,44 +5484,19 @@ int smb2_echo(struct ksmbd_work *work)
+ 
+ static int smb2_rename(struct ksmbd_work *work,
+ 		       struct ksmbd_file *fp,
+-		       struct mnt_idmap *idmap,
+ 		       struct smb2_file_rename_info *file_info,
+ 		       struct nls_table *local_nls)
+ {
+ 	struct ksmbd_share_config *share = fp->tcon->share_conf;
+-	char *new_name = NULL, *abs_oldname = NULL, *old_name = NULL;
+-	char *pathname = NULL;
+-	struct path path;
+-	bool file_present = true;
+-	int rc;
++	char *new_name = NULL;
++	int rc, flags = 0;
+ 
+ 	ksmbd_debug(SMB, "setting FILE_RENAME_INFO\n");
+-	pathname = kmalloc(PATH_MAX, GFP_KERNEL);
+-	if (!pathname)
+-		return -ENOMEM;
+-
+-	abs_oldname = file_path(fp->filp, pathname, PATH_MAX);
+-	if (IS_ERR(abs_oldname)) {
+-		rc = -EINVAL;
+-		goto out;
+-	}
+-	old_name = strrchr(abs_oldname, '/');
+-	if (old_name && old_name[1] != '\0') {
+-		old_name++;
+-	} else {
+-		ksmbd_debug(SMB, "can't get last component in path %s\n",
+-			    abs_oldname);
+-		rc = -ENOENT;
+-		goto out;
+-	}
+-
+ 	new_name = smb2_get_name(file_info->FileName,
+ 				 le32_to_cpu(file_info->FileNameLength),
+ 				 local_nls);
+-	if (IS_ERR(new_name)) {
+-		rc = PTR_ERR(new_name);
+-		goto out;
+-	}
++	if (IS_ERR(new_name))
++		return PTR_ERR(new_name);
+ 
+ 	if (strchr(new_name, ':')) {
+ 		int s_type;
+@@ -5514,8 +5522,8 @@ static int smb2_rename(struct ksmbd_work *work,
+ 		if (rc)
+ 			goto out;
+ 
+-		rc = ksmbd_vfs_setxattr(idmap,
+-					fp->filp->f_path.dentry,
++		rc = ksmbd_vfs_setxattr(file_mnt_idmap(fp->filp),
++					&fp->filp->f_path,
+ 					xattr_stream_name,
+ 					NULL, 0, 0);
+ 		if (rc < 0) {
+@@ -5529,47 +5537,18 @@ static int smb2_rename(struct ksmbd_work *work,
+ 	}
+ 
+ 	ksmbd_debug(SMB, "new name %s\n", new_name);
+-	rc = ksmbd_vfs_kern_path(work, new_name, LOOKUP_NO_SYMLINKS, &path, 1);
+-	if (rc) {
+-		if (rc != -ENOENT)
+-			goto out;
+-		file_present = false;
+-	} else {
+-		path_put(&path);
+-	}
+-
+ 	if (ksmbd_share_veto_filename(share, new_name)) {
+ 		rc = -ENOENT;
+ 		ksmbd_debug(SMB, "Can't rename vetoed file: %s\n", new_name);
+ 		goto out;
+ 	}
+ 
+-	if (file_info->ReplaceIfExists) {
+-		if (file_present) {
+-			rc = ksmbd_vfs_remove_file(work, new_name);
+-			if (rc) {
+-				if (rc != -ENOTEMPTY)
+-					rc = -EINVAL;
+-				ksmbd_debug(SMB, "cannot delete %s, rc %d\n",
+-					    new_name, rc);
+-				goto out;
+-			}
+-		}
+-	} else {
+-		if (file_present &&
+-		    strncmp(old_name, path.dentry->d_name.name, strlen(old_name))) {
+-			rc = -EEXIST;
+-			ksmbd_debug(SMB,
+-				    "cannot rename already existing file\n");
+-			goto out;
+-		}
+-	}
++	if (!file_info->ReplaceIfExists)
++		flags = RENAME_NOREPLACE;
+ 
+-	rc = ksmbd_vfs_fp_rename(work, fp, new_name);
++	rc = ksmbd_vfs_rename(work, &fp->filp->f_path, new_name, flags);
+ out:
+-	kfree(pathname);
+-	if (!IS_ERR(new_name))
+-		kfree(new_name);
++	kfree(new_name);
+ 	return rc;
+ }
+ 
+@@ -5581,7 +5560,7 @@ static int smb2_create_link(struct ksmbd_work *work,
+ {
+ 	char *link_name = NULL, *target_name = NULL, *pathname = NULL;
+ 	struct path path;
+-	bool file_present = true;
++	bool file_present = false;
+ 	int rc;
+ 
+ 	if (buf_len < (u64)sizeof(struct smb2_file_link_info) +
+@@ -5609,18 +5588,17 @@ static int smb2_create_link(struct ksmbd_work *work,
+ 	}
+ 
+ 	ksmbd_debug(SMB, "target name is %s\n", target_name);
+-	rc = ksmbd_vfs_kern_path(work, link_name, LOOKUP_NO_SYMLINKS, &path, 0);
++	rc = ksmbd_vfs_kern_path_locked(work, link_name, LOOKUP_NO_SYMLINKS,
++					&path, 0);
+ 	if (rc) {
+ 		if (rc != -ENOENT)
+ 			goto out;
+-		file_present = false;
+-	} else {
+-		path_put(&path);
+-	}
++	} else
++		file_present = true;
+ 
+ 	if (file_info->ReplaceIfExists) {
+ 		if (file_present) {
+-			rc = ksmbd_vfs_remove_file(work, link_name);
++			rc = ksmbd_vfs_remove_file(work, &path);
+ 			if (rc) {
+ 				rc = -EINVAL;
+ 				ksmbd_debug(SMB, "cannot delete %s\n",
+@@ -5640,6 +5618,10 @@ static int smb2_create_link(struct ksmbd_work *work,
+ 	if (rc)
+ 		rc = -EINVAL;
+ out:
++	if (file_present) {
++		inode_unlock(d_inode(path.dentry->d_parent));
++		path_put(&path);
++	}
+ 	if (!IS_ERR(link_name))
+ 		kfree(link_name);
+ 	kfree(pathname);
+@@ -5706,8 +5688,7 @@ static int set_file_basic_info(struct ksmbd_file *fp,
+ 		da.flags = XATTR_DOSINFO_ATTRIB | XATTR_DOSINFO_CREATE_TIME |
+ 			XATTR_DOSINFO_ITIME;
+ 
+-		rc = ksmbd_vfs_set_dos_attrib_xattr(idmap,
+-						    filp->f_path.dentry, &da);
++		rc = ksmbd_vfs_set_dos_attrib_xattr(idmap, &filp->f_path, &da);
+ 		if (rc)
+ 			ksmbd_debug(SMB,
+ 				    "failed to restore file attribute in EA\n");
+@@ -5817,12 +5798,6 @@ static int set_rename_info(struct ksmbd_work *work, struct ksmbd_file *fp,
+ 			   struct smb2_file_rename_info *rename_info,
+ 			   unsigned int buf_len)
+ {
+-	struct mnt_idmap *idmap;
+-	struct ksmbd_file *parent_fp;
+-	struct dentry *parent;
+-	struct dentry *dentry = fp->filp->f_path.dentry;
+-	int ret;
+-
+ 	if (!(fp->daccess & FILE_DELETE_LE)) {
+ 		pr_err("no right to delete : 0x%x\n", fp->daccess);
+ 		return -EACCES;
+@@ -5832,32 +5807,10 @@ static int set_rename_info(struct ksmbd_work *work, struct ksmbd_file *fp,
+ 			le32_to_cpu(rename_info->FileNameLength))
+ 		return -EINVAL;
+ 
+-	idmap = file_mnt_idmap(fp->filp);
+-	if (ksmbd_stream_fd(fp))
+-		goto next;
+-
+-	parent = dget_parent(dentry);
+-	ret = ksmbd_vfs_lock_parent(idmap, parent, dentry);
+-	if (ret) {
+-		dput(parent);
+-		return ret;
+-	}
+-
+-	parent_fp = ksmbd_lookup_fd_inode(d_inode(parent));
+-	inode_unlock(d_inode(parent));
+-	dput(parent);
++	if (!le32_to_cpu(rename_info->FileNameLength))
++		return -EINVAL;
+ 
+-	if (parent_fp) {
+-		if (parent_fp->daccess & FILE_DELETE_LE) {
+-			pr_err("parent dir is opened with delete access\n");
+-			ksmbd_fd_put(work, parent_fp);
+-			return -ESHARE;
+-		}
+-		ksmbd_fd_put(work, parent_fp);
+-	}
+-next:
+-	return smb2_rename(work, fp, idmap, rename_info,
+-			   work->conn->local_nls);
++	return smb2_rename(work, fp, rename_info, work->conn->local_nls);
+ }
+ 
+ static int set_file_disposition_info(struct ksmbd_file *fp,
+@@ -7590,7 +7543,7 @@ static inline int fsctl_set_sparse(struct ksmbd_work *work, u64 id,
+ 
+ 		da.attr = le32_to_cpu(fp->f_ci->m_fattr);
+ 		ret = ksmbd_vfs_set_dos_attrib_xattr(idmap,
+-						     fp->filp->f_path.dentry, &da);
++						     &fp->filp->f_path, &da);
+ 		if (ret)
+ 			fp->f_ci->m_fattr = old_fattr;
+ 	}
+diff --git a/fs/ksmbd/smbacl.c b/fs/ksmbd/smbacl.c
+index 0a5862a61c773..ad919a4239d0a 100644
+--- a/fs/ksmbd/smbacl.c
++++ b/fs/ksmbd/smbacl.c
+@@ -1162,8 +1162,7 @@ pass:
+ 			pntsd_size += sizeof(struct smb_acl) + nt_size;
+ 		}
+ 
+-		ksmbd_vfs_set_sd_xattr(conn, idmap,
+-				       path->dentry, pntsd, pntsd_size);
++		ksmbd_vfs_set_sd_xattr(conn, idmap, path, pntsd, pntsd_size);
+ 		kfree(pntsd);
+ 	}
+ 
+@@ -1383,7 +1382,7 @@ int set_info_sec(struct ksmbd_conn *conn, struct ksmbd_tree_connect *tcon,
+ 	newattrs.ia_valid |= ATTR_MODE;
+ 	newattrs.ia_mode = (inode->i_mode & ~0777) | (fattr.cf_mode & 0777);
+ 
+-	ksmbd_vfs_remove_acl_xattrs(idmap, path->dentry);
++	ksmbd_vfs_remove_acl_xattrs(idmap, path);
+ 	/* Update posix acls */
+ 	if (IS_ENABLED(CONFIG_FS_POSIX_ACL) && fattr.cf_dacls) {
+ 		rc = set_posix_acl(idmap, path->dentry,
+@@ -1414,9 +1413,8 @@ int set_info_sec(struct ksmbd_conn *conn, struct ksmbd_tree_connect *tcon,
+ 
+ 	if (test_share_config_flag(tcon->share_conf, KSMBD_SHARE_FLAG_ACL_XATTR)) {
+ 		/* Update WinACL in xattr */
+-		ksmbd_vfs_remove_sd_xattrs(idmap, path->dentry);
+-		ksmbd_vfs_set_sd_xattr(conn, idmap,
+-				       path->dentry, pntsd, ntsd_len);
++		ksmbd_vfs_remove_sd_xattrs(idmap, path);
++		ksmbd_vfs_set_sd_xattr(conn, idmap, path, pntsd, ntsd_len);
+ 	}
+ 
+ out:
+diff --git a/fs/ksmbd/vfs.c b/fs/ksmbd/vfs.c
+index f6a8b26593090..81489fdedd8e0 100644
+--- a/fs/ksmbd/vfs.c
++++ b/fs/ksmbd/vfs.c
+@@ -18,8 +18,7 @@
+ #include <linux/vmalloc.h>
+ #include <linux/sched/xacct.h>
+ #include <linux/crc32c.h>
+-
+-#include "../internal.h"	/* for vfs_path_lookup */
++#include <linux/namei.h>
+ 
+ #include "glob.h"
+ #include "oplock.h"
+@@ -37,19 +36,6 @@
+ #include "mgmt/user_session.h"
+ #include "mgmt/user_config.h"
+ 
+-static char *extract_last_component(char *path)
+-{
+-	char *p = strrchr(path, '/');
+-
+-	if (p && p[1] != '\0') {
+-		*p = '\0';
+-		p++;
+-	} else {
+-		p = NULL;
+-	}
+-	return p;
+-}
+-
+ static void ksmbd_vfs_inherit_owner(struct ksmbd_work *work,
+ 				    struct inode *parent_inode,
+ 				    struct inode *inode)
+@@ -63,65 +49,81 @@ static void ksmbd_vfs_inherit_owner(struct ksmbd_work *work,
+ 
+ /**
+  * ksmbd_vfs_lock_parent() - lock parent dentry if it is stable
+- *
+- * the parent dentry got by dget_parent or @parent could be
+- * unstable, we try to lock a parent inode and lookup the
+- * child dentry again.
+- *
+- * the reference count of @parent isn't incremented.
+  */
+-int ksmbd_vfs_lock_parent(struct mnt_idmap *idmap, struct dentry *parent,
+-			  struct dentry *child)
++int ksmbd_vfs_lock_parent(struct dentry *parent, struct dentry *child)
+ {
+-	struct dentry *dentry;
+-	int ret = 0;
+-
+ 	inode_lock_nested(d_inode(parent), I_MUTEX_PARENT);
+-	dentry = lookup_one(idmap, child->d_name.name, parent,
+-			    child->d_name.len);
+-	if (IS_ERR(dentry)) {
+-		ret = PTR_ERR(dentry);
+-		goto out_err;
+-	}
+-
+-	if (dentry != child) {
+-		ret = -ESTALE;
+-		dput(dentry);
+-		goto out_err;
++	if (child->d_parent != parent) {
++		inode_unlock(d_inode(parent));
++		return -ENOENT;
+ 	}
+ 
+-	dput(dentry);
+ 	return 0;
+-out_err:
+-	inode_unlock(d_inode(parent));
+-	return ret;
+ }
+ 
+-int ksmbd_vfs_may_delete(struct mnt_idmap *idmap,
+-			 struct dentry *dentry)
++static int ksmbd_vfs_path_lookup_locked(struct ksmbd_share_config *share_conf,
++					char *pathname, unsigned int flags,
++					struct path *path)
+ {
+-	struct dentry *parent;
+-	int ret;
++	struct qstr last;
++	struct filename *filename;
++	struct path *root_share_path = &share_conf->vfs_path;
++	int err, type;
++	struct path parent_path;
++	struct dentry *d;
++
++	if (pathname[0] == '\0') {
++		pathname = share_conf->path;
++		root_share_path = NULL;
++	} else {
++		flags |= LOOKUP_BENEATH;
++	}
++
++	filename = getname_kernel(pathname);
++	if (IS_ERR(filename))
++		return PTR_ERR(filename);
++
++	err = vfs_path_parent_lookup(filename, flags,
++				     &parent_path, &last, &type,
++				     root_share_path);
++	if (err) {
++		putname(filename);
++		return err;
++	}
+ 
+-	parent = dget_parent(dentry);
+-	ret = ksmbd_vfs_lock_parent(idmap, parent, dentry);
+-	if (ret) {
+-		dput(parent);
+-		return ret;
++	if (unlikely(type != LAST_NORM)) {
++		path_put(&parent_path);
++		putname(filename);
++		return -ENOENT;
+ 	}
+ 
+-	ret = inode_permission(idmap, d_inode(parent),
+-			       MAY_EXEC | MAY_WRITE);
++	inode_lock_nested(parent_path.dentry->d_inode, I_MUTEX_PARENT);
++	d = lookup_one_qstr_excl(&last, parent_path.dentry, 0);
++	if (IS_ERR(d))
++		goto err_out;
+ 
+-	inode_unlock(d_inode(parent));
+-	dput(parent);
+-	return ret;
++	if (d_is_negative(d)) {
++		dput(d);
++		goto err_out;
++	}
++
++	path->dentry = d;
++	path->mnt = share_conf->vfs_path.mnt;
++	path_put(&parent_path);
++	putname(filename);
++
++	return 0;
++
++err_out:
++	inode_unlock(parent_path.dentry->d_inode);
++	path_put(&parent_path);
++	putname(filename);
++	return -ENOENT;
+ }
+ 
+ int ksmbd_vfs_query_maximal_access(struct mnt_idmap *idmap,
+ 				   struct dentry *dentry, __le32 *daccess)
+ {
+-	struct dentry *parent;
+ 	int ret = 0;
+ 
+ 	*daccess = cpu_to_le32(FILE_READ_ATTRIBUTES | READ_CONTROL);
+@@ -138,18 +140,9 @@ int ksmbd_vfs_query_maximal_access(struct mnt_idmap *idmap,
+ 	if (!inode_permission(idmap, d_inode(dentry), MAY_OPEN | MAY_EXEC))
+ 		*daccess |= FILE_EXECUTE_LE;
+ 
+-	parent = dget_parent(dentry);
+-	ret = ksmbd_vfs_lock_parent(idmap, parent, dentry);
+-	if (ret) {
+-		dput(parent);
+-		return ret;
+-	}
+-
+-	if (!inode_permission(idmap, d_inode(parent), MAY_EXEC | MAY_WRITE))
++	if (!inode_permission(idmap, d_inode(dentry->d_parent), MAY_EXEC | MAY_WRITE))
+ 		*daccess |= FILE_DELETE_LE;
+ 
+-	inode_unlock(d_inode(parent));
+-	dput(parent);
+ 	return ret;
+ }
+ 
+@@ -177,6 +170,10 @@ int ksmbd_vfs_create(struct ksmbd_work *work, const char *name, umode_t mode)
+ 		return err;
+ 	}
+ 
++	err = mnt_want_write(path.mnt);
++	if (err)
++		goto out_err;
++
+ 	mode |= S_IFREG;
+ 	err = vfs_create(mnt_idmap(path.mnt), d_inode(path.dentry),
+ 			 dentry, mode, true);
+@@ -186,6 +183,9 @@ int ksmbd_vfs_create(struct ksmbd_work *work, const char *name, umode_t mode)
+ 	} else {
+ 		pr_err("File(%s): creation failed (err:%d)\n", name, err);
+ 	}
++	mnt_drop_write(path.mnt);
++
++out_err:
+ 	done_path_create(&path, dentry);
+ 	return err;
+ }
+@@ -216,30 +216,35 @@ int ksmbd_vfs_mkdir(struct ksmbd_work *work, const char *name, umode_t mode)
+ 		return err;
+ 	}
+ 
++	err = mnt_want_write(path.mnt);
++	if (err)
++		goto out_err2;
++
+ 	idmap = mnt_idmap(path.mnt);
+ 	mode |= S_IFDIR;
+ 	err = vfs_mkdir(idmap, d_inode(path.dentry), dentry, mode);
+-	if (err) {
+-		goto out;
+-	} else if (d_unhashed(dentry)) {
++	if (!err && d_unhashed(dentry)) {
+ 		struct dentry *d;
+ 
+ 		d = lookup_one(idmap, dentry->d_name.name, dentry->d_parent,
+ 			       dentry->d_name.len);
+ 		if (IS_ERR(d)) {
+ 			err = PTR_ERR(d);
+-			goto out;
++			goto out_err1;
+ 		}
+ 		if (unlikely(d_is_negative(d))) {
+ 			dput(d);
+ 			err = -ENOENT;
+-			goto out;
++			goto out_err1;
+ 		}
+ 
+ 		ksmbd_vfs_inherit_owner(work, d_inode(path.dentry), d_inode(d));
+ 		dput(d);
+ 	}
+-out:
++
++out_err1:
++	mnt_drop_write(path.mnt);
++out_err2:
+ 	done_path_create(&path, dentry);
+ 	if (err)
+ 		pr_err("mkdir(%s): creation failed (err:%d)\n", name, err);
+@@ -450,7 +455,7 @@ static int ksmbd_vfs_stream_write(struct ksmbd_file *fp, char *buf, loff_t *pos,
+ 	memcpy(&stream_buf[*pos], buf, count);
+ 
+ 	err = ksmbd_vfs_setxattr(idmap,
+-				 fp->filp->f_path.dentry,
++				 &fp->filp->f_path,
+ 				 fp->stream.name,
+ 				 (void *)stream_buf,
+ 				 size,
+@@ -582,54 +587,37 @@ int ksmbd_vfs_fsync(struct ksmbd_work *work, u64 fid, u64 p_id)
+  *
+  * Return:	0 on success, otherwise error
+  */
+-int ksmbd_vfs_remove_file(struct ksmbd_work *work, char *name)
++int ksmbd_vfs_remove_file(struct ksmbd_work *work, const struct path *path)
+ {
+ 	struct mnt_idmap *idmap;
+-	struct path path;
+-	struct dentry *parent;
++	struct dentry *parent = path->dentry->d_parent;
+ 	int err;
+ 
+ 	if (ksmbd_override_fsids(work))
+ 		return -ENOMEM;
+ 
+-	err = ksmbd_vfs_kern_path(work, name, LOOKUP_NO_SYMLINKS, &path, false);
+-	if (err) {
+-		ksmbd_debug(VFS, "can't get %s, err %d\n", name, err);
+-		ksmbd_revert_fsids(work);
+-		return err;
+-	}
+-
+-	idmap = mnt_idmap(path.mnt);
+-	parent = dget_parent(path.dentry);
+-	err = ksmbd_vfs_lock_parent(idmap, parent, path.dentry);
+-	if (err) {
+-		dput(parent);
+-		path_put(&path);
+-		ksmbd_revert_fsids(work);
+-		return err;
+-	}
+-
+-	if (!d_inode(path.dentry)->i_nlink) {
++	if (!d_inode(path->dentry)->i_nlink) {
+ 		err = -ENOENT;
+ 		goto out_err;
+ 	}
+ 
+-	if (S_ISDIR(d_inode(path.dentry)->i_mode)) {
+-		err = vfs_rmdir(idmap, d_inode(parent), path.dentry);
++	err = mnt_want_write(path->mnt);
++	if (err)
++		goto out_err;
++
++	idmap = mnt_idmap(path->mnt);
++	if (S_ISDIR(d_inode(path->dentry)->i_mode)) {
++		err = vfs_rmdir(idmap, d_inode(parent), path->dentry);
+ 		if (err && err != -ENOTEMPTY)
+-			ksmbd_debug(VFS, "%s: rmdir failed, err %d\n", name,
+-				    err);
++			ksmbd_debug(VFS, "rmdir failed, err %d\n", err);
+ 	} else {
+-		err = vfs_unlink(idmap, d_inode(parent), path.dentry, NULL);
++		err = vfs_unlink(idmap, d_inode(parent), path->dentry, NULL);
+ 		if (err)
+-			ksmbd_debug(VFS, "%s: unlink failed, err %d\n", name,
+-				    err);
++			ksmbd_debug(VFS, "unlink failed, err %d\n", err);
+ 	}
++	mnt_drop_write(path->mnt);
+ 
+ out_err:
+-	inode_unlock(d_inode(parent));
+-	dput(parent);
+-	path_put(&path);
+ 	ksmbd_revert_fsids(work);
+ 	return err;
+ }
+@@ -673,11 +661,16 @@ int ksmbd_vfs_link(struct ksmbd_work *work, const char *oldname,
+ 		goto out3;
+ 	}
+ 
++	err = mnt_want_write(newpath.mnt);
++	if (err)
++		goto out3;
++
+ 	err = vfs_link(oldpath.dentry, mnt_idmap(newpath.mnt),
+ 		       d_inode(newpath.dentry),
+ 		       dentry, NULL);
+ 	if (err)
+ 		ksmbd_debug(VFS, "vfs_link failed err %d\n", err);
++	mnt_drop_write(newpath.mnt);
+ 
+ out3:
+ 	done_path_create(&newpath, dentry);
+@@ -688,149 +681,120 @@ out1:
+ 	return err;
+ }
+ 
+-static int ksmbd_validate_entry_in_use(struct dentry *src_dent)
++int ksmbd_vfs_rename(struct ksmbd_work *work, const struct path *old_path,
++		     char *newname, int flags)
+ {
+-	struct dentry *dst_dent;
++	struct dentry *old_parent, *new_dentry, *trap;
++	struct dentry *old_child = old_path->dentry;
++	struct path new_path;
++	struct qstr new_last;
++	struct renamedata rd;
++	struct filename *to;
++	struct ksmbd_share_config *share_conf = work->tcon->share_conf;
++	struct ksmbd_file *parent_fp;
++	int new_type;
++	int err, lookup_flags = LOOKUP_NO_SYMLINKS;
++
++	if (ksmbd_override_fsids(work))
++		return -ENOMEM;
+ 
+-	spin_lock(&src_dent->d_lock);
+-	list_for_each_entry(dst_dent, &src_dent->d_subdirs, d_child) {
+-		struct ksmbd_file *child_fp;
++	to = getname_kernel(newname);
++	if (IS_ERR(to)) {
++		err = PTR_ERR(to);
++		goto revert_fsids;
++	}
+ 
+-		if (d_really_is_negative(dst_dent))
+-			continue;
++retry:
++	err = vfs_path_parent_lookup(to, lookup_flags | LOOKUP_BENEATH,
++				     &new_path, &new_last, &new_type,
++				     &share_conf->vfs_path);
++	if (err)
++		goto out1;
+ 
+-		child_fp = ksmbd_lookup_fd_inode(d_inode(dst_dent));
+-		if (child_fp) {
+-			spin_unlock(&src_dent->d_lock);
+-			ksmbd_debug(VFS, "Forbid rename, sub file/dir is in use\n");
+-			return -EACCES;
+-		}
++	if (old_path->mnt != new_path.mnt) {
++		err = -EXDEV;
++		goto out2;
+ 	}
+-	spin_unlock(&src_dent->d_lock);
+ 
+-	return 0;
+-}
++	err = mnt_want_write(old_path->mnt);
++	if (err)
++		goto out2;
+ 
+-static int __ksmbd_vfs_rename(struct ksmbd_work *work,
+-			      struct mnt_idmap *src_idmap,
+-			      struct dentry *src_dent_parent,
+-			      struct dentry *src_dent,
+-			      struct mnt_idmap *dst_idmap,
+-			      struct dentry *dst_dent_parent,
+-			      struct dentry *trap_dent,
+-			      char *dst_name)
+-{
+-	struct dentry *dst_dent;
+-	int err;
++	trap = lock_rename_child(old_child, new_path.dentry);
+ 
+-	if (!work->tcon->posix_extensions) {
+-		err = ksmbd_validate_entry_in_use(src_dent);
+-		if (err)
+-			return err;
++	old_parent = dget(old_child->d_parent);
++	if (d_unhashed(old_child)) {
++		err = -EINVAL;
++		goto out3;
+ 	}
+ 
+-	if (d_really_is_negative(src_dent_parent))
+-		return -ENOENT;
+-	if (d_really_is_negative(dst_dent_parent))
+-		return -ENOENT;
+-	if (d_really_is_negative(src_dent))
+-		return -ENOENT;
+-	if (src_dent == trap_dent)
+-		return -EINVAL;
+-
+-	if (ksmbd_override_fsids(work))
+-		return -ENOMEM;
++	parent_fp = ksmbd_lookup_fd_inode(d_inode(old_child->d_parent));
++	if (parent_fp) {
++		if (parent_fp->daccess & FILE_DELETE_LE) {
++			pr_err("parent dir is opened with delete access\n");
++			err = -ESHARE;
++			ksmbd_fd_put(work, parent_fp);
++			goto out3;
++		}
++		ksmbd_fd_put(work, parent_fp);
++	}
+ 
+-	dst_dent = lookup_one(dst_idmap, dst_name,
+-			      dst_dent_parent, strlen(dst_name));
+-	err = PTR_ERR(dst_dent);
+-	if (IS_ERR(dst_dent)) {
+-		pr_err("lookup failed %s [%d]\n", dst_name, err);
+-		goto out;
++	new_dentry = lookup_one_qstr_excl(&new_last, new_path.dentry,
++					  lookup_flags | LOOKUP_RENAME_TARGET);
++	if (IS_ERR(new_dentry)) {
++		err = PTR_ERR(new_dentry);
++		goto out3;
+ 	}
+ 
+-	err = -ENOTEMPTY;
+-	if (dst_dent != trap_dent && !d_really_is_positive(dst_dent)) {
+-		struct renamedata rd = {
+-			.old_mnt_idmap	= src_idmap,
+-			.old_dir	= d_inode(src_dent_parent),
+-			.old_dentry	= src_dent,
+-			.new_mnt_idmap	= dst_idmap,
+-			.new_dir	= d_inode(dst_dent_parent),
+-			.new_dentry	= dst_dent,
+-		};
+-		err = vfs_rename(&rd);
++	if (d_is_symlink(new_dentry)) {
++		err = -EACCES;
++		goto out4;
+ 	}
+-	if (err)
+-		pr_err("vfs_rename failed err %d\n", err);
+-	if (dst_dent)
+-		dput(dst_dent);
+-out:
+-	ksmbd_revert_fsids(work);
+-	return err;
+-}
+ 
+-int ksmbd_vfs_fp_rename(struct ksmbd_work *work, struct ksmbd_file *fp,
+-			char *newname)
+-{
+-	struct mnt_idmap *idmap;
+-	struct path dst_path;
+-	struct dentry *src_dent_parent, *dst_dent_parent;
+-	struct dentry *src_dent, *trap_dent, *src_child;
+-	char *dst_name;
+-	int err;
++	if ((flags & RENAME_NOREPLACE) && d_is_positive(new_dentry)) {
++		err = -EEXIST;
++		goto out4;
++	}
+ 
+-	dst_name = extract_last_component(newname);
+-	if (!dst_name) {
+-		dst_name = newname;
+-		newname = "";
++	if (old_child == trap) {
++		err = -EINVAL;
++		goto out4;
+ 	}
+ 
+-	src_dent_parent = dget_parent(fp->filp->f_path.dentry);
+-	src_dent = fp->filp->f_path.dentry;
++	if (new_dentry == trap) {
++		err = -ENOTEMPTY;
++		goto out4;
++	}
++
++	rd.old_mnt_idmap	= mnt_idmap(old_path->mnt),
++	rd.old_dir		= d_inode(old_parent),
++	rd.old_dentry		= old_child,
++	rd.new_mnt_idmap	= mnt_idmap(new_path.mnt),
++	rd.new_dir		= new_path.dentry->d_inode,
++	rd.new_dentry		= new_dentry,
++	rd.flags		= flags,
++	rd.delegated_inode	= NULL,
++	err = vfs_rename(&rd);
++	if (err)
++		ksmbd_debug(VFS, "vfs_rename failed err %d\n", err);
+ 
+-	err = ksmbd_vfs_kern_path(work, newname,
+-				  LOOKUP_NO_SYMLINKS | LOOKUP_DIRECTORY,
+-				  &dst_path, false);
+-	if (err) {
+-		ksmbd_debug(VFS, "Cannot get path for %s [%d]\n", newname, err);
+-		goto out;
++out4:
++	dput(new_dentry);
++out3:
++	dput(old_parent);
++	unlock_rename(old_parent, new_path.dentry);
++	mnt_drop_write(old_path->mnt);
++out2:
++	path_put(&new_path);
++
++	if (retry_estale(err, lookup_flags)) {
++		lookup_flags |= LOOKUP_REVAL;
++		goto retry;
+ 	}
+-	dst_dent_parent = dst_path.dentry;
+-
+-	trap_dent = lock_rename(src_dent_parent, dst_dent_parent);
+-	dget(src_dent);
+-	dget(dst_dent_parent);
+-	idmap = file_mnt_idmap(fp->filp);
+-	src_child = lookup_one(idmap, src_dent->d_name.name, src_dent_parent,
+-			       src_dent->d_name.len);
+-	if (IS_ERR(src_child)) {
+-		err = PTR_ERR(src_child);
+-		goto out_lock;
+-	}
+-
+-	if (src_child != src_dent) {
+-		err = -ESTALE;
+-		dput(src_child);
+-		goto out_lock;
+-	}
+-	dput(src_child);
+-
+-	err = __ksmbd_vfs_rename(work,
+-				 idmap,
+-				 src_dent_parent,
+-				 src_dent,
+-				 mnt_idmap(dst_path.mnt),
+-				 dst_dent_parent,
+-				 trap_dent,
+-				 dst_name);
+-out_lock:
+-	dput(src_dent);
+-	dput(dst_dent_parent);
+-	unlock_rename(src_dent_parent, dst_dent_parent);
+-	path_put(&dst_path);
+-out:
+-	dput(src_dent_parent);
++out1:
++	putname(to);
++revert_fsids:
++	ksmbd_revert_fsids(work);
+ 	return err;
+ }
+ 
+@@ -960,19 +924,24 @@ ssize_t ksmbd_vfs_getxattr(struct mnt_idmap *idmap,
+  * Return:	0 on success, otherwise error
+  */
+ int ksmbd_vfs_setxattr(struct mnt_idmap *idmap,
+-		       struct dentry *dentry, const char *attr_name,
++		       const struct path *path, const char *attr_name,
+ 		       void *attr_value, size_t attr_size, int flags)
+ {
+ 	int err;
+ 
++	err = mnt_want_write(path->mnt);
++	if (err)
++		return err;
++
+ 	err = vfs_setxattr(idmap,
+-			   dentry,
++			   path->dentry,
+ 			   attr_name,
+ 			   attr_value,
+ 			   attr_size,
+ 			   flags);
+ 	if (err)
+ 		ksmbd_debug(VFS, "setxattr failed, err %d\n", err);
++	mnt_drop_write(path->mnt);
+ 	return err;
+ }
+ 
+@@ -1076,19 +1045,34 @@ int ksmbd_vfs_fqar_lseek(struct ksmbd_file *fp, loff_t start, loff_t length,
+ }
+ 
+ int ksmbd_vfs_remove_xattr(struct mnt_idmap *idmap,
+-			   struct dentry *dentry, char *attr_name)
++			   const struct path *path, char *attr_name)
+ {
+-	return vfs_removexattr(idmap, dentry, attr_name);
++	int err;
++
++	err = mnt_want_write(path->mnt);
++	if (err)
++		return err;
++
++	err = vfs_removexattr(idmap, path->dentry, attr_name);
++	mnt_drop_write(path->mnt);
++
++	return err;
+ }
+ 
+-int ksmbd_vfs_unlink(struct mnt_idmap *idmap,
+-		     struct dentry *dir, struct dentry *dentry)
++int ksmbd_vfs_unlink(struct file *filp)
+ {
+ 	int err = 0;
++	struct dentry *dir, *dentry = filp->f_path.dentry;
++	struct mnt_idmap *idmap = file_mnt_idmap(filp);
+ 
+-	err = ksmbd_vfs_lock_parent(idmap, dir, dentry);
++	err = mnt_want_write(filp->f_path.mnt);
+ 	if (err)
+ 		return err;
++
++	dir = dget_parent(dentry);
++	err = ksmbd_vfs_lock_parent(dir, dentry);
++	if (err)
++		goto out;
+ 	dget(dentry);
+ 
+ 	if (S_ISDIR(d_inode(dentry)->i_mode))
+@@ -1100,6 +1084,9 @@ int ksmbd_vfs_unlink(struct mnt_idmap *idmap,
+ 	inode_unlock(d_inode(dir));
+ 	if (err)
+ 		ksmbd_debug(VFS, "failed to delete, err %d\n", err);
++out:
++	dput(dir);
++	mnt_drop_write(filp->f_path.mnt);
+ 
+ 	return err;
+ }
+@@ -1202,7 +1189,7 @@ static int ksmbd_vfs_lookup_in_dir(const struct path *dir, char *name,
+ }
+ 
+ /**
+- * ksmbd_vfs_kern_path() - lookup a file and get path info
++ * ksmbd_vfs_kern_path_locked() - lookup a file and get path info
+  * @name:	file path that is relative to share
+  * @flags:	lookup flags
+  * @path:	if lookup succeed, return path info
+@@ -1210,24 +1197,20 @@ static int ksmbd_vfs_lookup_in_dir(const struct path *dir, char *name,
+  *
+  * Return:	0 on success, otherwise error
+  */
+-int ksmbd_vfs_kern_path(struct ksmbd_work *work, char *name,
+-			unsigned int flags, struct path *path, bool caseless)
++int ksmbd_vfs_kern_path_locked(struct ksmbd_work *work, char *name,
++			       unsigned int flags, struct path *path,
++			       bool caseless)
+ {
+ 	struct ksmbd_share_config *share_conf = work->tcon->share_conf;
+ 	int err;
++	struct path parent_path;
+ 
+-	flags |= LOOKUP_BENEATH;
+-	err = vfs_path_lookup(share_conf->vfs_path.dentry,
+-			      share_conf->vfs_path.mnt,
+-			      name,
+-			      flags,
+-			      path);
++	err = ksmbd_vfs_path_lookup_locked(share_conf, name, flags, path);
+ 	if (!err)
+-		return 0;
++		return err;
+ 
+ 	if (caseless) {
+ 		char *filepath;
+-		struct path parent;
+ 		size_t path_len, remain_len;
+ 
+ 		filepath = kstrdup(name, GFP_KERNEL);
+@@ -1237,10 +1220,10 @@ int ksmbd_vfs_kern_path(struct ksmbd_work *work, char *name,
+ 		path_len = strlen(filepath);
+ 		remain_len = path_len;
+ 
+-		parent = share_conf->vfs_path;
+-		path_get(&parent);
++		parent_path = share_conf->vfs_path;
++		path_get(&parent_path);
+ 
+-		while (d_can_lookup(parent.dentry)) {
++		while (d_can_lookup(parent_path.dentry)) {
+ 			char *filename = filepath + path_len - remain_len;
+ 			char *next = strchrnul(filename, '/');
+ 			size_t filename_len = next - filename;
+@@ -1249,12 +1232,11 @@ int ksmbd_vfs_kern_path(struct ksmbd_work *work, char *name,
+ 			if (filename_len == 0)
+ 				break;
+ 
+-			err = ksmbd_vfs_lookup_in_dir(&parent, filename,
++			err = ksmbd_vfs_lookup_in_dir(&parent_path, filename,
+ 						      filename_len,
+ 						      work->conn->um);
+-			path_put(&parent);
+ 			if (err)
+-				goto out;
++				goto out2;
+ 
+ 			next[0] = '\0';
+ 
+@@ -1262,23 +1244,31 @@ int ksmbd_vfs_kern_path(struct ksmbd_work *work, char *name,
+ 					      share_conf->vfs_path.mnt,
+ 					      filepath,
+ 					      flags,
+-					      &parent);
++					      path);
+ 			if (err)
+-				goto out;
+-			else if (is_last) {
+-				*path = parent;
+-				goto out;
+-			}
++				goto out2;
++			else if (is_last)
++				goto out1;
++			path_put(&parent_path);
++			parent_path = *path;
+ 
+ 			next[0] = '/';
+ 			remain_len -= filename_len + 1;
+ 		}
+ 
+-		path_put(&parent);
+ 		err = -EINVAL;
+-out:
++out2:
++		path_put(&parent_path);
++out1:
+ 		kfree(filepath);
+ 	}
++
++	if (!err) {
++		err = ksmbd_vfs_lock_parent(parent_path.dentry, path->dentry);
++		if (err)
++			dput(path->dentry);
++		path_put(&parent_path);
++	}
+ 	return err;
+ }
+ 
+@@ -1300,13 +1290,13 @@ struct dentry *ksmbd_vfs_kern_path_create(struct ksmbd_work *work,
+ }
+ 
+ int ksmbd_vfs_remove_acl_xattrs(struct mnt_idmap *idmap,
+-				struct dentry *dentry)
++				const struct path *path)
+ {
+ 	char *name, *xattr_list = NULL;
+ 	ssize_t xattr_list_len;
+ 	int err = 0;
+ 
+-	xattr_list_len = ksmbd_vfs_listxattr(dentry, &xattr_list);
++	xattr_list_len = ksmbd_vfs_listxattr(path->dentry, &xattr_list);
+ 	if (xattr_list_len < 0) {
+ 		goto out;
+ 	} else if (!xattr_list_len) {
+@@ -1314,6 +1304,10 @@ int ksmbd_vfs_remove_acl_xattrs(struct mnt_idmap *idmap,
+ 		goto out;
+ 	}
+ 
++	err = mnt_want_write(path->mnt);
++	if (err)
++		goto out;
++
+ 	for (name = xattr_list; name - xattr_list < xattr_list_len;
+ 	     name += strlen(name) + 1) {
+ 		ksmbd_debug(SMB, "%s, len %zd\n", name, strlen(name));
+@@ -1322,25 +1316,26 @@ int ksmbd_vfs_remove_acl_xattrs(struct mnt_idmap *idmap,
+ 			     sizeof(XATTR_NAME_POSIX_ACL_ACCESS) - 1) ||
+ 		    !strncmp(name, XATTR_NAME_POSIX_ACL_DEFAULT,
+ 			     sizeof(XATTR_NAME_POSIX_ACL_DEFAULT) - 1)) {
+-			err = vfs_remove_acl(idmap, dentry, name);
++			err = vfs_remove_acl(idmap, path->dentry, name);
+ 			if (err)
+ 				ksmbd_debug(SMB,
+ 					    "remove acl xattr failed : %s\n", name);
+ 		}
+ 	}
++	mnt_drop_write(path->mnt);
++
+ out:
+ 	kvfree(xattr_list);
+ 	return err;
+ }
+ 
+-int ksmbd_vfs_remove_sd_xattrs(struct mnt_idmap *idmap,
+-			       struct dentry *dentry)
++int ksmbd_vfs_remove_sd_xattrs(struct mnt_idmap *idmap, const struct path *path)
+ {
+ 	char *name, *xattr_list = NULL;
+ 	ssize_t xattr_list_len;
+ 	int err = 0;
+ 
+-	xattr_list_len = ksmbd_vfs_listxattr(dentry, &xattr_list);
++	xattr_list_len = ksmbd_vfs_listxattr(path->dentry, &xattr_list);
+ 	if (xattr_list_len < 0) {
+ 		goto out;
+ 	} else if (!xattr_list_len) {
+@@ -1353,7 +1348,7 @@ int ksmbd_vfs_remove_sd_xattrs(struct mnt_idmap *idmap,
+ 		ksmbd_debug(SMB, "%s, len %zd\n", name, strlen(name));
+ 
+ 		if (!strncmp(name, XATTR_NAME_SD, XATTR_NAME_SD_LEN)) {
+-			err = ksmbd_vfs_remove_xattr(idmap, dentry, name);
++			err = ksmbd_vfs_remove_xattr(idmap, path, name);
+ 			if (err)
+ 				ksmbd_debug(SMB, "remove xattr failed : %s\n", name);
+ 		}
+@@ -1430,13 +1425,14 @@ out:
+ 
+ int ksmbd_vfs_set_sd_xattr(struct ksmbd_conn *conn,
+ 			   struct mnt_idmap *idmap,
+-			   struct dentry *dentry,
++			   const struct path *path,
+ 			   struct smb_ntsd *pntsd, int len)
+ {
+ 	int rc;
+ 	struct ndr sd_ndr = {0}, acl_ndr = {0};
+ 	struct xattr_ntacl acl = {0};
+ 	struct xattr_smb_acl *smb_acl, *def_smb_acl = NULL;
++	struct dentry *dentry = path->dentry;
+ 	struct inode *inode = d_inode(dentry);
+ 
+ 	acl.version = 4;
+@@ -1488,7 +1484,7 @@ int ksmbd_vfs_set_sd_xattr(struct ksmbd_conn *conn,
+ 		goto out;
+ 	}
+ 
+-	rc = ksmbd_vfs_setxattr(idmap, dentry,
++	rc = ksmbd_vfs_setxattr(idmap, path,
+ 				XATTR_NAME_SD, sd_ndr.data,
+ 				sd_ndr.offset, 0);
+ 	if (rc < 0)
+@@ -1578,7 +1574,7 @@ free_n_data:
+ }
+ 
+ int ksmbd_vfs_set_dos_attrib_xattr(struct mnt_idmap *idmap,
+-				   struct dentry *dentry,
++				   const struct path *path,
+ 				   struct xattr_dos_attrib *da)
+ {
+ 	struct ndr n;
+@@ -1588,7 +1584,7 @@ int ksmbd_vfs_set_dos_attrib_xattr(struct mnt_idmap *idmap,
+ 	if (err)
+ 		return err;
+ 
+-	err = ksmbd_vfs_setxattr(idmap, dentry, XATTR_NAME_DOS_ATTRIBUTE,
++	err = ksmbd_vfs_setxattr(idmap, path, XATTR_NAME_DOS_ATTRIBUTE,
+ 				 (void *)n.data, n.offset, 0);
+ 	if (err)
+ 		ksmbd_debug(SMB, "failed to store dos attribute in xattr\n");
+@@ -1825,10 +1821,11 @@ void ksmbd_vfs_posix_lock_unblock(struct file_lock *flock)
+ }
+ 
+ int ksmbd_vfs_set_init_posix_acl(struct mnt_idmap *idmap,
+-				 struct dentry *dentry)
++				 struct path *path)
+ {
+ 	struct posix_acl_state acl_state;
+ 	struct posix_acl *acls;
++	struct dentry *dentry = path->dentry;
+ 	struct inode *inode = d_inode(dentry);
+ 	int rc;
+ 
+@@ -1858,6 +1855,11 @@ int ksmbd_vfs_set_init_posix_acl(struct mnt_idmap *idmap,
+ 		return -ENOMEM;
+ 	}
+ 	posix_state_to_acl(&acl_state, acls->a_entries);
++
++	rc = mnt_want_write(path->mnt);
++	if (rc)
++		goto out_err;
++
+ 	rc = set_posix_acl(idmap, dentry, ACL_TYPE_ACCESS, acls);
+ 	if (rc < 0)
+ 		ksmbd_debug(SMB, "Set posix acl(ACL_TYPE_ACCESS) failed, rc : %d\n",
+@@ -1869,16 +1871,20 @@ int ksmbd_vfs_set_init_posix_acl(struct mnt_idmap *idmap,
+ 			ksmbd_debug(SMB, "Set posix acl(ACL_TYPE_DEFAULT) failed, rc : %d\n",
+ 				    rc);
+ 	}
++	mnt_drop_write(path->mnt);
++
++out_err:
+ 	free_acl_state(&acl_state);
+ 	posix_acl_release(acls);
+ 	return rc;
+ }
+ 
+ int ksmbd_vfs_inherit_posix_acl(struct mnt_idmap *idmap,
+-				struct dentry *dentry, struct inode *parent_inode)
++				struct path *path, struct inode *parent_inode)
+ {
+ 	struct posix_acl *acls;
+ 	struct posix_acl_entry *pace;
++	struct dentry *dentry = path->dentry;
+ 	struct inode *inode = d_inode(dentry);
+ 	int rc, i;
+ 
+@@ -1897,6 +1903,10 @@ int ksmbd_vfs_inherit_posix_acl(struct mnt_idmap *idmap,
+ 		}
+ 	}
+ 
++	rc = mnt_want_write(path->mnt);
++	if (rc)
++		goto out_err;
++
+ 	rc = set_posix_acl(idmap, dentry, ACL_TYPE_ACCESS, acls);
+ 	if (rc < 0)
+ 		ksmbd_debug(SMB, "Set posix acl(ACL_TYPE_ACCESS) failed, rc : %d\n",
+@@ -1908,6 +1918,9 @@ int ksmbd_vfs_inherit_posix_acl(struct mnt_idmap *idmap,
+ 			ksmbd_debug(SMB, "Set posix acl(ACL_TYPE_DEFAULT) failed, rc : %d\n",
+ 				    rc);
+ 	}
++	mnt_drop_write(path->mnt);
++
++out_err:
+ 	posix_acl_release(acls);
+ 	return rc;
+ }
+diff --git a/fs/ksmbd/vfs.h b/fs/ksmbd/vfs.h
+index 9d676ab0cd25b..8c0931d4d5310 100644
+--- a/fs/ksmbd/vfs.h
++++ b/fs/ksmbd/vfs.h
+@@ -71,9 +71,7 @@ struct ksmbd_kstat {
+ 	__le32			file_attributes;
+ };
+ 
+-int ksmbd_vfs_lock_parent(struct mnt_idmap *idmap, struct dentry *parent,
+-			  struct dentry *child);
+-int ksmbd_vfs_may_delete(struct mnt_idmap *idmap, struct dentry *dentry);
++int ksmbd_vfs_lock_parent(struct dentry *parent, struct dentry *child);
+ int ksmbd_vfs_query_maximal_access(struct mnt_idmap *idmap,
+ 				   struct dentry *dentry, __le32 *daccess);
+ int ksmbd_vfs_create(struct ksmbd_work *work, const char *name, umode_t mode);
+@@ -84,12 +82,12 @@ int ksmbd_vfs_write(struct ksmbd_work *work, struct ksmbd_file *fp,
+ 		    char *buf, size_t count, loff_t *pos, bool sync,
+ 		    ssize_t *written);
+ int ksmbd_vfs_fsync(struct ksmbd_work *work, u64 fid, u64 p_id);
+-int ksmbd_vfs_remove_file(struct ksmbd_work *work, char *name);
++int ksmbd_vfs_remove_file(struct ksmbd_work *work, const struct path *path);
+ int ksmbd_vfs_link(struct ksmbd_work *work,
+ 		   const char *oldname, const char *newname);
+ int ksmbd_vfs_getattr(const struct path *path, struct kstat *stat);
+-int ksmbd_vfs_fp_rename(struct ksmbd_work *work, struct ksmbd_file *fp,
+-			char *newname);
++int ksmbd_vfs_rename(struct ksmbd_work *work, const struct path *old_path,
++		     char *newname, int flags);
+ int ksmbd_vfs_truncate(struct ksmbd_work *work,
+ 		       struct ksmbd_file *fp, loff_t size);
+ struct srv_copychunk;
+@@ -110,15 +108,15 @@ ssize_t ksmbd_vfs_casexattr_len(struct mnt_idmap *idmap,
+ 				struct dentry *dentry, char *attr_name,
+ 				int attr_name_len);
+ int ksmbd_vfs_setxattr(struct mnt_idmap *idmap,
+-		       struct dentry *dentry, const char *attr_name,
++		       const struct path *path, const char *attr_name,
+ 		       void *attr_value, size_t attr_size, int flags);
+ int ksmbd_vfs_xattr_stream_name(char *stream_name, char **xattr_stream_name,
+ 				size_t *xattr_stream_name_size, int s_type);
+ int ksmbd_vfs_remove_xattr(struct mnt_idmap *idmap,
+-			   struct dentry *dentry, char *attr_name);
+-int ksmbd_vfs_kern_path(struct ksmbd_work *work,
+-			char *name, unsigned int flags, struct path *path,
+-			bool caseless);
++			   const struct path *path, char *attr_name);
++int ksmbd_vfs_kern_path_locked(struct ksmbd_work *work, char *name,
++			       unsigned int flags, struct path *path,
++			       bool caseless);
+ struct dentry *ksmbd_vfs_kern_path_create(struct ksmbd_work *work,
+ 					  const char *name,
+ 					  unsigned int flags,
+@@ -131,8 +129,7 @@ struct file_allocated_range_buffer;
+ int ksmbd_vfs_fqar_lseek(struct ksmbd_file *fp, loff_t start, loff_t length,
+ 			 struct file_allocated_range_buffer *ranges,
+ 			 unsigned int in_count, unsigned int *out_count);
+-int ksmbd_vfs_unlink(struct mnt_idmap *idmap, struct dentry *dir,
+-		     struct dentry *dentry);
++int ksmbd_vfs_unlink(struct file *filp);
+ void *ksmbd_vfs_init_kstat(char **p, struct ksmbd_kstat *ksmbd_kstat);
+ int ksmbd_vfs_fill_dentry_attrs(struct ksmbd_work *work,
+ 				struct mnt_idmap *idmap,
+@@ -142,26 +139,25 @@ void ksmbd_vfs_posix_lock_wait(struct file_lock *flock);
+ int ksmbd_vfs_posix_lock_wait_timeout(struct file_lock *flock, long timeout);
+ void ksmbd_vfs_posix_lock_unblock(struct file_lock *flock);
+ int ksmbd_vfs_remove_acl_xattrs(struct mnt_idmap *idmap,
+-				struct dentry *dentry);
+-int ksmbd_vfs_remove_sd_xattrs(struct mnt_idmap *idmap,
+-			       struct dentry *dentry);
++				const struct path *path);
++int ksmbd_vfs_remove_sd_xattrs(struct mnt_idmap *idmap, const struct path *path);
+ int ksmbd_vfs_set_sd_xattr(struct ksmbd_conn *conn,
+ 			   struct mnt_idmap *idmap,
+-			   struct dentry *dentry,
++			   const struct path *path,
+ 			   struct smb_ntsd *pntsd, int len);
+ int ksmbd_vfs_get_sd_xattr(struct ksmbd_conn *conn,
+ 			   struct mnt_idmap *idmap,
+ 			   struct dentry *dentry,
+ 			   struct smb_ntsd **pntsd);
+ int ksmbd_vfs_set_dos_attrib_xattr(struct mnt_idmap *idmap,
+-				   struct dentry *dentry,
++				   const struct path *path,
+ 				   struct xattr_dos_attrib *da);
+ int ksmbd_vfs_get_dos_attrib_xattr(struct mnt_idmap *idmap,
+ 				   struct dentry *dentry,
+ 				   struct xattr_dos_attrib *da);
+ int ksmbd_vfs_set_init_posix_acl(struct mnt_idmap *idmap,
+-				 struct dentry *dentry);
++				 struct path *path);
+ int ksmbd_vfs_inherit_posix_acl(struct mnt_idmap *idmap,
+-				struct dentry *dentry,
++				struct path *path,
+ 				struct inode *parent_inode);
+ #endif /* __KSMBD_VFS_H__ */
+diff --git a/fs/ksmbd/vfs_cache.c b/fs/ksmbd/vfs_cache.c
+index 054a7d2e0f489..f41f8d6108ce9 100644
+--- a/fs/ksmbd/vfs_cache.c
++++ b/fs/ksmbd/vfs_cache.c
+@@ -244,7 +244,6 @@ void ksmbd_release_inode_hash(void)
+ 
+ static void __ksmbd_inode_close(struct ksmbd_file *fp)
+ {
+-	struct dentry *dir, *dentry;
+ 	struct ksmbd_inode *ci = fp->f_ci;
+ 	int err;
+ 	struct file *filp;
+@@ -253,7 +252,7 @@ static void __ksmbd_inode_close(struct ksmbd_file *fp)
+ 	if (ksmbd_stream_fd(fp) && (ci->m_flags & S_DEL_ON_CLS_STREAM)) {
+ 		ci->m_flags &= ~S_DEL_ON_CLS_STREAM;
+ 		err = ksmbd_vfs_remove_xattr(file_mnt_idmap(filp),
+-					     filp->f_path.dentry,
++					     &filp->f_path,
+ 					     fp->stream.name);
+ 		if (err)
+ 			pr_err("remove xattr failed : %s\n",
+@@ -263,11 +262,9 @@ static void __ksmbd_inode_close(struct ksmbd_file *fp)
+ 	if (atomic_dec_and_test(&ci->m_count)) {
+ 		write_lock(&ci->m_lock);
+ 		if (ci->m_flags & (S_DEL_ON_CLS | S_DEL_PENDING)) {
+-			dentry = filp->f_path.dentry;
+-			dir = dentry->d_parent;
+ 			ci->m_flags &= ~(S_DEL_ON_CLS | S_DEL_PENDING);
+ 			write_unlock(&ci->m_lock);
+-			ksmbd_vfs_unlink(file_mnt_idmap(filp), dir, dentry);
++			ksmbd_vfs_unlink(filp);
+ 			write_lock(&ci->m_lock);
+ 		}
+ 		write_unlock(&ci->m_lock);
+diff --git a/fs/namei.c b/fs/namei.c
+index edfedfbccaef4..c1d59ae5af83e 100644
+--- a/fs/namei.c
++++ b/fs/namei.c
+@@ -254,6 +254,7 @@ getname_kernel(const char * filename)
+ 
+ 	return result;
+ }
++EXPORT_SYMBOL(getname_kernel);
+ 
+ void putname(struct filename *name)
+ {
+@@ -271,6 +272,7 @@ void putname(struct filename *name)
+ 	} else
+ 		__putname(name);
+ }
++EXPORT_SYMBOL(putname);
+ 
+ /**
+  * check_acl - perform ACL permission checking
+@@ -1581,8 +1583,9 @@ static struct dentry *lookup_dcache(const struct qstr *name,
+  * when directory is guaranteed to have no in-lookup children
+  * at all.
+  */
+-static struct dentry *__lookup_hash(const struct qstr *name,
+-		struct dentry *base, unsigned int flags)
++struct dentry *lookup_one_qstr_excl(const struct qstr *name,
++				    struct dentry *base,
++				    unsigned int flags)
+ {
+ 	struct dentry *dentry = lookup_dcache(name, base, flags);
+ 	struct dentry *old;
+@@ -1606,6 +1609,7 @@ static struct dentry *__lookup_hash(const struct qstr *name,
+ 	}
+ 	return dentry;
+ }
++EXPORT_SYMBOL(lookup_one_qstr_excl);
+ 
+ static struct dentry *lookup_fast(struct nameidata *nd)
+ {
+@@ -2532,16 +2536,17 @@ static int path_parentat(struct nameidata *nd, unsigned flags,
+ }
+ 
+ /* Note: this does not consume "name" */
+-static int filename_parentat(int dfd, struct filename *name,
+-			     unsigned int flags, struct path *parent,
+-			     struct qstr *last, int *type)
++static int __filename_parentat(int dfd, struct filename *name,
++			       unsigned int flags, struct path *parent,
++			       struct qstr *last, int *type,
++			       const struct path *root)
+ {
+ 	int retval;
+ 	struct nameidata nd;
+ 
+ 	if (IS_ERR(name))
+ 		return PTR_ERR(name);
+-	set_nameidata(&nd, dfd, name, NULL);
++	set_nameidata(&nd, dfd, name, root);
+ 	retval = path_parentat(&nd, flags | LOOKUP_RCU, parent);
+ 	if (unlikely(retval == -ECHILD))
+ 		retval = path_parentat(&nd, flags, parent);
+@@ -2556,6 +2561,13 @@ static int filename_parentat(int dfd, struct filename *name,
+ 	return retval;
+ }
+ 
++static int filename_parentat(int dfd, struct filename *name,
++			     unsigned int flags, struct path *parent,
++			     struct qstr *last, int *type)
++{
++	return __filename_parentat(dfd, name, flags, parent, last, type, NULL);
++}
++
+ /* does lookup, returns the object with parent locked */
+ static struct dentry *__kern_path_locked(struct filename *name, struct path *path)
+ {
+@@ -2571,7 +2583,7 @@ static struct dentry *__kern_path_locked(struct filename *name, struct path *pat
+ 		return ERR_PTR(-EINVAL);
+ 	}
+ 	inode_lock_nested(path->dentry->d_inode, I_MUTEX_PARENT);
+-	d = __lookup_hash(&last, path->dentry, 0);
++	d = lookup_one_qstr_excl(&last, path->dentry, 0);
+ 	if (IS_ERR(d)) {
+ 		inode_unlock(path->dentry->d_inode);
+ 		path_put(path);
+@@ -2599,6 +2611,24 @@ int kern_path(const char *name, unsigned int flags, struct path *path)
+ }
+ EXPORT_SYMBOL(kern_path);
+ 
++/**
++ * vfs_path_parent_lookup - lookup a parent path relative to a dentry-vfsmount pair
++ * @filename: filename structure
++ * @flags: lookup flags
++ * @parent: pointer to struct path to fill
++ * @last: last component
++ * @type: type of the last component
++ * @root: pointer to struct path of the base directory
++ */
++int vfs_path_parent_lookup(struct filename *filename, unsigned int flags,
++			   struct path *parent, struct qstr *last, int *type,
++			   const struct path *root)
++{
++	return  __filename_parentat(AT_FDCWD, filename, flags, parent, last,
++				    type, root);
++}
++EXPORT_SYMBOL(vfs_path_parent_lookup);
++
+ /**
+  * vfs_path_lookup - lookup a file path relative to a dentry-vfsmount pair
+  * @dentry:  pointer to dentry of the base directory
+@@ -2980,20 +3010,10 @@ static inline int may_create(struct mnt_idmap *idmap,
+ 	return inode_permission(idmap, dir, MAY_WRITE | MAY_EXEC);
+ }
+ 
+-/*
+- * p1 and p2 should be directories on the same fs.
+- */
+-struct dentry *lock_rename(struct dentry *p1, struct dentry *p2)
++static struct dentry *lock_two_directories(struct dentry *p1, struct dentry *p2)
+ {
+ 	struct dentry *p;
+ 
+-	if (p1 == p2) {
+-		inode_lock_nested(p1->d_inode, I_MUTEX_PARENT);
+-		return NULL;
+-	}
+-
+-	mutex_lock(&p1->d_sb->s_vfs_rename_mutex);
+-
+ 	p = d_ancestor(p2, p1);
+ 	if (p) {
+ 		inode_lock_nested(p2->d_inode, I_MUTEX_PARENT);
+@@ -3012,8 +3032,64 @@ struct dentry *lock_rename(struct dentry *p1, struct dentry *p2)
+ 	inode_lock_nested(p2->d_inode, I_MUTEX_PARENT2);
+ 	return NULL;
+ }
++
++/*
++ * p1 and p2 should be directories on the same fs.
++ */
++struct dentry *lock_rename(struct dentry *p1, struct dentry *p2)
++{
++	if (p1 == p2) {
++		inode_lock_nested(p1->d_inode, I_MUTEX_PARENT);
++		return NULL;
++	}
++
++	mutex_lock(&p1->d_sb->s_vfs_rename_mutex);
++	return lock_two_directories(p1, p2);
++}
+ EXPORT_SYMBOL(lock_rename);
+ 
++/*
++ * c1 and p2 should be on the same fs.
++ */
++struct dentry *lock_rename_child(struct dentry *c1, struct dentry *p2)
++{
++	if (READ_ONCE(c1->d_parent) == p2) {
++		/*
++		 * hopefully won't need to touch ->s_vfs_rename_mutex at all.
++		 */
++		inode_lock_nested(p2->d_inode, I_MUTEX_PARENT);
++		/*
++		 * now that p2 is locked, nobody can move in or out of it,
++		 * so the test below is safe.
++		 */
++		if (likely(c1->d_parent == p2))
++			return NULL;
++
++		/*
++		 * c1 got moved out of p2 while we'd been taking locks;
++		 * unlock and fall back to slow case.
++		 */
++		inode_unlock(p2->d_inode);
++	}
++
++	mutex_lock(&c1->d_sb->s_vfs_rename_mutex);
++	/*
++	 * nobody can move out of any directories on this fs.
++	 */
++	if (likely(c1->d_parent != p2))
++		return lock_two_directories(c1->d_parent, p2);
++
++	/*
++	 * c1 got moved into p2 while we were taking locks;
++	 * we need p2 locked and ->s_vfs_rename_mutex unlocked,
++	 * for consistency with lock_rename().
++	 */
++	inode_lock_nested(p2->d_inode, I_MUTEX_PARENT);
++	mutex_unlock(&c1->d_sb->s_vfs_rename_mutex);
++	return NULL;
++}
++EXPORT_SYMBOL(lock_rename_child);
++
+ void unlock_rename(struct dentry *p1, struct dentry *p2)
+ {
+ 	inode_unlock(p1->d_inode);
+@@ -3806,7 +3882,8 @@ static struct dentry *filename_create(int dfd, struct filename *name,
+ 	if (last.name[last.len] && !want_dir)
+ 		create_flags = 0;
+ 	inode_lock_nested(path->dentry->d_inode, I_MUTEX_PARENT);
+-	dentry = __lookup_hash(&last, path->dentry, reval_flag | create_flags);
++	dentry = lookup_one_qstr_excl(&last, path->dentry,
++				      reval_flag | create_flags);
+ 	if (IS_ERR(dentry))
+ 		goto unlock;
+ 
+@@ -4166,7 +4243,7 @@ retry:
+ 		goto exit2;
+ 
+ 	inode_lock_nested(path.dentry->d_inode, I_MUTEX_PARENT);
+-	dentry = __lookup_hash(&last, path.dentry, lookup_flags);
++	dentry = lookup_one_qstr_excl(&last, path.dentry, lookup_flags);
+ 	error = PTR_ERR(dentry);
+ 	if (IS_ERR(dentry))
+ 		goto exit3;
+@@ -4299,7 +4376,7 @@ retry:
+ 		goto exit2;
+ retry_deleg:
+ 	inode_lock_nested(path.dentry->d_inode, I_MUTEX_PARENT);
+-	dentry = __lookup_hash(&last, path.dentry, lookup_flags);
++	dentry = lookup_one_qstr_excl(&last, path.dentry, lookup_flags);
+ 	error = PTR_ERR(dentry);
+ 	if (!IS_ERR(dentry)) {
+ 
+@@ -4863,7 +4940,8 @@ retry:
+ retry_deleg:
+ 	trap = lock_rename(new_path.dentry, old_path.dentry);
+ 
+-	old_dentry = __lookup_hash(&old_last, old_path.dentry, lookup_flags);
++	old_dentry = lookup_one_qstr_excl(&old_last, old_path.dentry,
++					  lookup_flags);
+ 	error = PTR_ERR(old_dentry);
+ 	if (IS_ERR(old_dentry))
+ 		goto exit3;
+@@ -4871,7 +4949,8 @@ retry_deleg:
+ 	error = -ENOENT;
+ 	if (d_is_negative(old_dentry))
+ 		goto exit4;
+-	new_dentry = __lookup_hash(&new_last, new_path.dentry, lookup_flags | target_flags);
++	new_dentry = lookup_one_qstr_excl(&new_last, new_path.dentry,
++					  lookup_flags | target_flags);
+ 	error = PTR_ERR(new_dentry);
+ 	if (IS_ERR(new_dentry))
+ 		goto exit4;
+diff --git a/fs/nilfs2/page.c b/fs/nilfs2/page.c
+index 41ccd43cd9797..afbf4bcefaf22 100644
+--- a/fs/nilfs2/page.c
++++ b/fs/nilfs2/page.c
+@@ -370,7 +370,15 @@ void nilfs_clear_dirty_pages(struct address_space *mapping, bool silent)
+ 			struct folio *folio = fbatch.folios[i];
+ 
+ 			folio_lock(folio);
+-			nilfs_clear_dirty_page(&folio->page, silent);
++
++			/*
++			 * This folio may have been removed from the address
++			 * space by truncation or invalidation when the lock
++			 * was acquired.  Skip processing in that case.
++			 */
++			if (likely(folio->mapping == mapping))
++				nilfs_clear_dirty_page(&folio->page, silent);
++
+ 			folio_unlock(folio);
+ 		}
+ 		folio_batch_release(&fbatch);
+diff --git a/fs/nilfs2/segbuf.c b/fs/nilfs2/segbuf.c
+index 1362ccb64ec7d..6e59dc19a7324 100644
+--- a/fs/nilfs2/segbuf.c
++++ b/fs/nilfs2/segbuf.c
+@@ -101,6 +101,12 @@ int nilfs_segbuf_extend_segsum(struct nilfs_segment_buffer *segbuf)
+ 	if (unlikely(!bh))
+ 		return -ENOMEM;
+ 
++	lock_buffer(bh);
++	if (!buffer_uptodate(bh)) {
++		memset(bh->b_data, 0, bh->b_size);
++		set_buffer_uptodate(bh);
++	}
++	unlock_buffer(bh);
+ 	nilfs_segbuf_add_segsum_buffer(segbuf, bh);
+ 	return 0;
+ }
+diff --git a/fs/nilfs2/segment.c b/fs/nilfs2/segment.c
+index ac949fd7603ff..c2553024bd25e 100644
+--- a/fs/nilfs2/segment.c
++++ b/fs/nilfs2/segment.c
+@@ -981,10 +981,13 @@ static void nilfs_segctor_fill_in_super_root(struct nilfs_sc_info *sci,
+ 	unsigned int isz, srsz;
+ 
+ 	bh_sr = NILFS_LAST_SEGBUF(&sci->sc_segbufs)->sb_super_root;
++
++	lock_buffer(bh_sr);
+ 	raw_sr = (struct nilfs_super_root *)bh_sr->b_data;
+ 	isz = nilfs->ns_inode_size;
+ 	srsz = NILFS_SR_BYTES(isz);
+ 
++	raw_sr->sr_sum = 0;  /* Ensure initialization within this update */
+ 	raw_sr->sr_bytes = cpu_to_le16(srsz);
+ 	raw_sr->sr_nongc_ctime
+ 		= cpu_to_le64(nilfs_doing_gc() ?
+@@ -998,6 +1001,8 @@ static void nilfs_segctor_fill_in_super_root(struct nilfs_sc_info *sci,
+ 	nilfs_write_inode_common(nilfs->ns_sufile, (void *)raw_sr +
+ 				 NILFS_SR_SUFILE_OFFSET(isz), 1);
+ 	memset((void *)raw_sr + srsz, 0, nilfs->ns_blocksize - srsz);
++	set_buffer_uptodate(bh_sr);
++	unlock_buffer(bh_sr);
+ }
+ 
+ static void nilfs_redirty_inodes(struct list_head *head)
+@@ -1780,6 +1785,7 @@ static void nilfs_abort_logs(struct list_head *logs, int err)
+ 	list_for_each_entry(segbuf, logs, sb_list) {
+ 		list_for_each_entry(bh, &segbuf->sb_segsum_buffers,
+ 				    b_assoc_buffers) {
++			clear_buffer_uptodate(bh);
+ 			if (bh->b_page != bd_page) {
+ 				if (bd_page)
+ 					end_page_writeback(bd_page);
+@@ -1791,6 +1797,7 @@ static void nilfs_abort_logs(struct list_head *logs, int err)
+ 				    b_assoc_buffers) {
+ 			clear_buffer_async_write(bh);
+ 			if (bh == segbuf->sb_super_root) {
++				clear_buffer_uptodate(bh);
+ 				if (bh->b_page != bd_page) {
+ 					end_page_writeback(bd_page);
+ 					bd_page = bh->b_page;
+diff --git a/fs/nilfs2/super.c b/fs/nilfs2/super.c
+index 77f1e5778d1c8..9ba4933087af0 100644
+--- a/fs/nilfs2/super.c
++++ b/fs/nilfs2/super.c
+@@ -372,10 +372,31 @@ static int nilfs_move_2nd_super(struct super_block *sb, loff_t sb2off)
+ 		goto out;
+ 	}
+ 	nsbp = (void *)nsbh->b_data + offset;
+-	memset(nsbp, 0, nilfs->ns_blocksize);
+ 
++	lock_buffer(nsbh);
+ 	if (sb2i >= 0) {
++		/*
++		 * The position of the second superblock only changes by 4KiB,
++		 * which is larger than the maximum superblock data size
++		 * (= 1KiB), so there is no need to use memmove() to allow
++		 * overlap between source and destination.
++		 */
+ 		memcpy(nsbp, nilfs->ns_sbp[sb2i], nilfs->ns_sbsize);
++
++		/*
++		 * Zero fill after copy to avoid overwriting in case of move
++		 * within the same block.
++		 */
++		memset(nsbh->b_data, 0, offset);
++		memset((void *)nsbp + nilfs->ns_sbsize, 0,
++		       nsbh->b_size - offset - nilfs->ns_sbsize);
++	} else {
++		memset(nsbh->b_data, 0, nsbh->b_size);
++	}
++	set_buffer_uptodate(nsbh);
++	unlock_buffer(nsbh);
++
++	if (sb2i >= 0) {
+ 		brelse(nilfs->ns_sbh[sb2i]);
+ 		nilfs->ns_sbh[sb2i] = nsbh;
+ 		nilfs->ns_sbp[sb2i] = nsbp;
+diff --git a/include/acpi/acpixf.h b/include/acpi/acpixf.h
+index 8e364cbdd14ab..651c0fe567570 100644
+--- a/include/acpi/acpixf.h
++++ b/include/acpi/acpixf.h
+@@ -761,6 +761,7 @@ ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status
+ 						     acpi_event_status
+ 						     *event_status))
+ ACPI_HW_DEPENDENT_RETURN_UINT32(u32 acpi_dispatch_gpe(acpi_handle gpe_device, u32 gpe_number))
++ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status acpi_hw_disable_all_gpes(void))
+ ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status acpi_disable_all_gpes(void))
+ ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status acpi_enable_all_runtime_gpes(void))
+ ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status acpi_enable_all_wakeup_gpes(void))
+diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
+index d1f57e4868ed3..7058b01e9f146 100644
+--- a/include/asm-generic/vmlinux.lds.h
++++ b/include/asm-generic/vmlinux.lds.h
+@@ -839,6 +839,9 @@
+ 
+ #ifdef CONFIG_UNWINDER_ORC
+ #define ORC_UNWIND_TABLE						\
++	.orc_header : AT(ADDR(.orc_header) - LOAD_OFFSET) {		\
++		BOUNDED_SECTION_BY(.orc_header, _orc_header)		\
++	}								\
+ 	. = ALIGN(4);							\
+ 	.orc_unwind_ip : AT(ADDR(.orc_unwind_ip) - LOAD_OFFSET) {	\
+ 		BOUNDED_SECTION_BY(.orc_unwind_ip, _orc_unwind_ip)	\
+diff --git a/include/linux/gpio/driver.h b/include/linux/gpio/driver.h
+index ccd8a512d8546..b769ff6b00fdd 100644
+--- a/include/linux/gpio/driver.h
++++ b/include/linux/gpio/driver.h
+@@ -244,6 +244,14 @@ struct gpio_irq_chip {
+ 	 */
+ 	bool initialized;
+ 
++	/**
++	 * @domain_is_allocated_externally:
++	 *
++	 * True it the irq_domain was allocated outside of gpiolib, in which
++	 * case gpiolib won't free the irq_domain itself.
++	 */
++	bool domain_is_allocated_externally;
++
+ 	/**
+ 	 * @init_hw: optional routine to initialize hardware before
+ 	 * an IRQ chip will be added. This is quite useful when
+diff --git a/include/linux/libata.h b/include/linux/libata.h
+index a759dfbdcc91a..bb5dda3ec2db5 100644
+--- a/include/linux/libata.h
++++ b/include/linux/libata.h
+@@ -836,7 +836,7 @@ struct ata_port {
+ 
+ 	struct mutex		scsi_scan_mutex;
+ 	struct delayed_work	hotplug_task;
+-	struct work_struct	scsi_rescan_task;
++	struct delayed_work	scsi_rescan_task;
+ 
+ 	unsigned int		hsm_task_state;
+ 
+diff --git a/include/linux/namei.h b/include/linux/namei.h
+index 0d797f3367cad..1463cbda48886 100644
+--- a/include/linux/namei.h
++++ b/include/linux/namei.h
+@@ -57,12 +57,20 @@ static inline int user_path_at(int dfd, const char __user *name, unsigned flags,
+ 	return user_path_at_empty(dfd, name, flags, path, NULL);
+ }
+ 
++struct dentry *lookup_one_qstr_excl(const struct qstr *name,
++				    struct dentry *base,
++				    unsigned int flags);
+ extern int kern_path(const char *, unsigned, struct path *);
+ 
+ extern struct dentry *kern_path_create(int, const char *, struct path *, unsigned int);
+ extern struct dentry *user_path_create(int, const char __user *, struct path *, unsigned int);
+ extern void done_path_create(struct path *, struct dentry *);
+ extern struct dentry *kern_path_locked(const char *, struct path *);
++int vfs_path_parent_lookup(struct filename *filename, unsigned int flags,
++			   struct path *parent, struct qstr *last, int *type,
++			   const struct path *root);
++int vfs_path_lookup(struct dentry *, struct vfsmount *, const char *,
++		    unsigned int, struct path *);
+ 
+ extern struct dentry *try_lookup_one_len(const char *, struct dentry *, int);
+ extern struct dentry *lookup_one_len(const char *, struct dentry *, int);
+@@ -81,6 +89,7 @@ extern int follow_down(struct path *path, unsigned int flags);
+ extern int follow_up(struct path *);
+ 
+ extern struct dentry *lock_rename(struct dentry *, struct dentry *);
++extern struct dentry *lock_rename_child(struct dentry *, struct dentry *);
+ extern void unlock_rename(struct dentry *, struct dentry *);
+ 
+ extern int __must_check nd_jump_link(const struct path *path);
+diff --git a/include/linux/regulator/pca9450.h b/include/linux/regulator/pca9450.h
+index 3c01c2bf84f53..505c908dbb817 100644
+--- a/include/linux/regulator/pca9450.h
++++ b/include/linux/regulator/pca9450.h
+@@ -196,11 +196,11 @@ enum {
+ 
+ /* PCA9450_REG_LDO3_VOLT bits */
+ #define LDO3_EN_MASK			0xC0
+-#define LDO3OUT_MASK			0x0F
++#define LDO3OUT_MASK			0x1F
+ 
+ /* PCA9450_REG_LDO4_VOLT bits */
+ #define LDO4_EN_MASK			0xC0
+-#define LDO4OUT_MASK			0x0F
++#define LDO4OUT_MASK			0x1F
+ 
+ /* PCA9450_REG_LDO5_VOLT bits */
+ #define LDO5L_EN_MASK			0xC0
+diff --git a/include/net/dsa.h b/include/net/dsa.h
+index a15f17a38eca6..def06ef676dd8 100644
+--- a/include/net/dsa.h
++++ b/include/net/dsa.h
+@@ -973,6 +973,14 @@ struct dsa_switch_ops {
+ 			       struct phy_device *phy);
+ 	void	(*port_disable)(struct dsa_switch *ds, int port);
+ 
++	/*
++	 * Compatibility between device trees defining multiple CPU ports and
++	 * drivers which are not OK to use by default the numerically smallest
++	 * CPU port of a switch for its local ports. This can return NULL,
++	 * meaning "don't know/don't care".
++	 */
++	struct dsa_port *(*preferred_default_local_cpu_port)(struct dsa_switch *ds);
++
+ 	/*
+ 	 * Port's MAC EEE settings
+ 	 */
+diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
+index 300bce4d67cec..730c63ceee567 100644
+--- a/include/net/netfilter/nf_tables.h
++++ b/include/net/netfilter/nf_tables.h
+@@ -472,7 +472,8 @@ struct nft_set_ops {
+ 	int				(*init)(const struct nft_set *set,
+ 						const struct nft_set_desc *desc,
+ 						const struct nlattr * const nla[]);
+-	void				(*destroy)(const struct nft_set *set);
++	void				(*destroy)(const struct nft_ctx *ctx,
++						   const struct nft_set *set);
+ 	void				(*gc_init)(const struct nft_set *set);
+ 
+ 	unsigned int			elemsize;
+@@ -809,6 +810,8 @@ int nft_set_elem_expr_clone(const struct nft_ctx *ctx, struct nft_set *set,
+ 			    struct nft_expr *expr_array[]);
+ void nft_set_elem_destroy(const struct nft_set *set, void *elem,
+ 			  bool destroy_expr);
++void nf_tables_set_elem_destroy(const struct nft_ctx *ctx,
++				const struct nft_set *set, void *elem);
+ 
+ /**
+  *	struct nft_set_gc_batch_head - nf_tables set garbage collection batch
+@@ -901,6 +904,7 @@ struct nft_expr_type {
+ 
+ enum nft_trans_phase {
+ 	NFT_TRANS_PREPARE,
++	NFT_TRANS_PREPARE_ERROR,
+ 	NFT_TRANS_ABORT,
+ 	NFT_TRANS_COMMIT,
+ 	NFT_TRANS_RELEASE
+@@ -1009,7 +1013,10 @@ static inline struct nft_userdata *nft_userdata(const struct nft_rule *rule)
+ 	return (void *)&rule->data[rule->dlen];
+ }
+ 
+-void nf_tables_rule_release(const struct nft_ctx *ctx, struct nft_rule *rule);
++void nft_rule_expr_activate(const struct nft_ctx *ctx, struct nft_rule *rule);
++void nft_rule_expr_deactivate(const struct nft_ctx *ctx, struct nft_rule *rule,
++			      enum nft_trans_phase phase);
++void nf_tables_rule_destroy(const struct nft_ctx *ctx, struct nft_rule *rule);
+ 
+ static inline void nft_set_elem_update_expr(const struct nft_set_ext *ext,
+ 					    struct nft_regs *regs,
+@@ -1092,6 +1099,8 @@ int nft_setelem_validate(const struct nft_ctx *ctx, struct nft_set *set,
+ 			 const struct nft_set_iter *iter,
+ 			 struct nft_set_elem *elem);
+ int nft_set_catchall_validate(const struct nft_ctx *ctx, struct nft_set *set);
++int nf_tables_bind_chain(const struct nft_ctx *ctx, struct nft_chain *chain);
++void nf_tables_unbind_chain(const struct nft_ctx *ctx, struct nft_chain *chain);
+ 
+ enum nft_chain_types {
+ 	NFT_CHAIN_T_DEFAULT = 0,
+@@ -1128,11 +1137,17 @@ int nft_chain_validate_dependency(const struct nft_chain *chain,
+ int nft_chain_validate_hooks(const struct nft_chain *chain,
+                              unsigned int hook_flags);
+ 
++static inline bool nft_chain_binding(const struct nft_chain *chain)
++{
++	return chain->flags & NFT_CHAIN_BINDING;
++}
++
+ static inline bool nft_chain_is_bound(struct nft_chain *chain)
+ {
+ 	return (chain->flags & NFT_CHAIN_BINDING) && chain->bound;
+ }
+ 
++int nft_chain_add(struct nft_table *table, struct nft_chain *chain);
+ void nft_chain_del(struct nft_chain *chain);
+ void nf_tables_chain_destroy(struct nft_ctx *ctx);
+ 
+@@ -1550,6 +1565,7 @@ static inline void nft_set_elem_clear_busy(struct nft_set_ext *ext)
+  *	struct nft_trans - nf_tables object update in transaction
+  *
+  *	@list: used internally
++ *	@binding_list: list of objects with possible bindings
+  *	@msg_type: message type
+  *	@put_net: ctx->net needs to be put
+  *	@ctx: transaction context
+@@ -1557,6 +1573,7 @@ static inline void nft_set_elem_clear_busy(struct nft_set_ext *ext)
+  */
+ struct nft_trans {
+ 	struct list_head		list;
++	struct list_head		binding_list;
+ 	int				msg_type;
+ 	bool				put_net;
+ 	struct nft_ctx			ctx;
+@@ -1567,6 +1584,7 @@ struct nft_trans_rule {
+ 	struct nft_rule			*rule;
+ 	struct nft_flow_rule		*flow;
+ 	u32				rule_id;
++	bool				bound;
+ };
+ 
+ #define nft_trans_rule(trans)	\
+@@ -1575,6 +1593,8 @@ struct nft_trans_rule {
+ 	(((struct nft_trans_rule *)trans->data)->flow)
+ #define nft_trans_rule_id(trans)	\
+ 	(((struct nft_trans_rule *)trans->data)->rule_id)
++#define nft_trans_rule_bound(trans)	\
++	(((struct nft_trans_rule *)trans->data)->bound)
+ 
+ struct nft_trans_set {
+ 	struct nft_set			*set;
+@@ -1599,15 +1619,19 @@ struct nft_trans_set {
+ 	(((struct nft_trans_set *)trans->data)->gc_int)
+ 
+ struct nft_trans_chain {
++	struct nft_chain		*chain;
+ 	bool				update;
+ 	char				*name;
+ 	struct nft_stats __percpu	*stats;
+ 	u8				policy;
++	bool				bound;
+ 	u32				chain_id;
+ 	struct nft_base_chain		*basechain;
+ 	struct list_head		hook_list;
+ };
+ 
++#define nft_trans_chain(trans)	\
++	(((struct nft_trans_chain *)trans->data)->chain)
+ #define nft_trans_chain_update(trans)	\
+ 	(((struct nft_trans_chain *)trans->data)->update)
+ #define nft_trans_chain_name(trans)	\
+@@ -1616,6 +1640,8 @@ struct nft_trans_chain {
+ 	(((struct nft_trans_chain *)trans->data)->stats)
+ #define nft_trans_chain_policy(trans)	\
+ 	(((struct nft_trans_chain *)trans->data)->policy)
++#define nft_trans_chain_bound(trans)	\
++	(((struct nft_trans_chain *)trans->data)->bound)
+ #define nft_trans_chain_id(trans)	\
+ 	(((struct nft_trans_chain *)trans->data)->chain_id)
+ #define nft_trans_basechain(trans)	\
+@@ -1692,6 +1718,7 @@ static inline int nft_request_module(struct net *net, const char *fmt, ...) { re
+ struct nftables_pernet {
+ 	struct list_head	tables;
+ 	struct list_head	commit_list;
++	struct list_head	binding_list;
+ 	struct list_head	module_list;
+ 	struct list_head	notify_list;
+ 	struct mutex		commit_mutex;
+diff --git a/include/net/xfrm.h b/include/net/xfrm.h
+index 3e1f70e8e4247..47ecf1d4719b0 100644
+--- a/include/net/xfrm.h
++++ b/include/net/xfrm.h
+@@ -1049,6 +1049,7 @@ struct xfrm_offload {
+ struct sec_path {
+ 	int			len;
+ 	int			olen;
++	int			verified_cnt;
+ 
+ 	struct xfrm_state	*xvec[XFRM_MAX_DEPTH];
+ 	struct xfrm_offload	ovec[XFRM_MAX_OFFLOAD_DEPTH];
+diff --git a/include/target/iscsi/iscsi_target_core.h b/include/target/iscsi/iscsi_target_core.h
+index 229118156a1f6..4c15420e8965d 100644
+--- a/include/target/iscsi/iscsi_target_core.h
++++ b/include/target/iscsi/iscsi_target_core.h
+@@ -562,12 +562,13 @@ struct iscsit_conn {
+ #define LOGIN_FLAGS_READ_ACTIVE		2
+ #define LOGIN_FLAGS_WRITE_ACTIVE	3
+ #define LOGIN_FLAGS_CLOSED		4
++#define LOGIN_FLAGS_WORKER_RUNNING	5
+ 	unsigned long		login_flags;
+ 	struct delayed_work	login_work;
+ 	struct iscsi_login	*login;
+ 	struct timer_list	nopin_timer;
+ 	struct timer_list	nopin_response_timer;
+-	struct timer_list	transport_timer;
++	struct timer_list	login_timer;
+ 	struct task_struct	*login_kworker;
+ 	/* Spinlock used for add/deleting cmd's from conn_cmd_list */
+ 	spinlock_t		cmd_lock;
+@@ -576,6 +577,8 @@ struct iscsit_conn {
+ 	spinlock_t		nopin_timer_lock;
+ 	spinlock_t		response_queue_lock;
+ 	spinlock_t		state_lock;
++	spinlock_t		login_timer_lock;
++	spinlock_t		login_worker_lock;
+ 	/* libcrypto RX and TX contexts for crc32c */
+ 	struct ahash_request	*conn_rx_hash;
+ 	struct ahash_request	*conn_tx_hash;
+@@ -792,7 +795,6 @@ struct iscsi_np {
+ 	enum np_thread_state_table np_thread_state;
+ 	bool                    enabled;
+ 	atomic_t		np_reset_count;
+-	enum iscsi_timer_flags_table np_login_timer_flags;
+ 	u32			np_exports;
+ 	enum np_flags_table	np_flags;
+ 	spinlock_t		np_thread_lock;
+@@ -800,7 +802,6 @@ struct iscsi_np {
+ 	struct socket		*np_socket;
+ 	struct sockaddr_storage np_sockaddr;
+ 	struct task_struct	*np_thread;
+-	struct timer_list	np_login_timer;
+ 	void			*np_context;
+ 	struct iscsit_transport *np_transport;
+ 	struct list_head	np_list;
+diff --git a/include/trace/events/writeback.h b/include/trace/events/writeback.h
+index 86b2a82da546a..54e353c9f919f 100644
+--- a/include/trace/events/writeback.h
++++ b/include/trace/events/writeback.h
+@@ -68,7 +68,7 @@ DECLARE_EVENT_CLASS(writeback_folio_template,
+ 		strscpy_pad(__entry->name,
+ 			    bdi_dev_name(mapping ? inode_to_bdi(mapping->host) :
+ 					 NULL), 32);
+-		__entry->ino = mapping ? mapping->host->i_ino : 0;
++		__entry->ino = (mapping && mapping->host) ? mapping->host->i_ino : 0;
+ 		__entry->index = folio->index;
+ 	),
+ 
+diff --git a/io_uring/net.c b/io_uring/net.c
+index 05fd285ac75ae..2cf671cef0b61 100644
+--- a/io_uring/net.c
++++ b/io_uring/net.c
+@@ -203,7 +203,7 @@ static int io_sendmsg_copy_hdr(struct io_kiocb *req,
+ 	ret = sendmsg_copy_msghdr(&iomsg->msg, sr->umsg, sr->msg_flags,
+ 					&iomsg->free_iov);
+ 	/* save msg_control as sys_sendmsg() overwrites it */
+-	sr->msg_control = iomsg->msg.msg_control;
++	sr->msg_control = iomsg->msg.msg_control_user;
+ 	return ret;
+ }
+ 
+@@ -302,7 +302,7 @@ int io_sendmsg(struct io_kiocb *req, unsigned int issue_flags)
+ 
+ 	if (req_has_async_data(req)) {
+ 		kmsg = req->async_data;
+-		kmsg->msg.msg_control = sr->msg_control;
++		kmsg->msg.msg_control_user = sr->msg_control;
+ 	} else {
+ 		ret = io_sendmsg_copy_hdr(req, &iomsg);
+ 		if (ret)
+@@ -326,6 +326,8 @@ int io_sendmsg(struct io_kiocb *req, unsigned int issue_flags)
+ 		if (ret == -EAGAIN && (issue_flags & IO_URING_F_NONBLOCK))
+ 			return io_setup_async_msg(req, kmsg, issue_flags);
+ 		if (ret > 0 && io_net_retry(sock, flags)) {
++			kmsg->msg.msg_controllen = 0;
++			kmsg->msg.msg_control = NULL;
+ 			sr->done_io += ret;
+ 			req->flags |= REQ_F_PARTIAL_IO;
+ 			return io_setup_async_msg(req, kmsg, issue_flags);
+@@ -787,16 +789,19 @@ retry_multishot:
+ 	flags = sr->msg_flags;
+ 	if (force_nonblock)
+ 		flags |= MSG_DONTWAIT;
+-	if (flags & MSG_WAITALL)
+-		min_ret = iov_iter_count(&kmsg->msg.msg_iter);
+ 
+ 	kmsg->msg.msg_get_inq = 1;
+-	if (req->flags & REQ_F_APOLL_MULTISHOT)
++	if (req->flags & REQ_F_APOLL_MULTISHOT) {
+ 		ret = io_recvmsg_multishot(sock, sr, kmsg, flags,
+ 					   &mshot_finished);
+-	else
++	} else {
++		/* disable partial retry for recvmsg with cmsg attached */
++		if (flags & MSG_WAITALL && !kmsg->msg.msg_controllen)
++			min_ret = iov_iter_count(&kmsg->msg.msg_iter);
++
+ 		ret = __sys_recvmsg_sock(sock, &kmsg->msg, sr->umsg,
+ 					 kmsg->uaddr, flags);
++	}
+ 
+ 	if (ret < min_ret) {
+ 		if (ret == -EAGAIN && force_nonblock) {
+diff --git a/io_uring/poll.c b/io_uring/poll.c
+index 55306e8010813..e346a518dfcf9 100644
+--- a/io_uring/poll.c
++++ b/io_uring/poll.c
+@@ -977,8 +977,9 @@ int io_poll_remove(struct io_kiocb *req, unsigned int issue_flags)
+ 	struct io_hash_bucket *bucket;
+ 	struct io_kiocb *preq;
+ 	int ret2, ret = 0;
+-	bool locked;
++	bool locked = true;
+ 
++	io_ring_submit_lock(ctx, issue_flags);
+ 	preq = io_poll_find(ctx, true, &cd, &ctx->cancel_table, &bucket);
+ 	ret2 = io_poll_disarm(preq);
+ 	if (bucket)
+@@ -990,12 +991,10 @@ int io_poll_remove(struct io_kiocb *req, unsigned int issue_flags)
+ 		goto out;
+ 	}
+ 
+-	io_ring_submit_lock(ctx, issue_flags);
+ 	preq = io_poll_find(ctx, true, &cd, &ctx->cancel_table_locked, &bucket);
+ 	ret2 = io_poll_disarm(preq);
+ 	if (bucket)
+ 		spin_unlock(&bucket->lock);
+-	io_ring_submit_unlock(ctx, issue_flags);
+ 	if (ret2) {
+ 		ret = ret2;
+ 		goto out;
+@@ -1019,7 +1018,7 @@ found:
+ 		if (poll_update->update_user_data)
+ 			preq->cqe.user_data = poll_update->new_user_data;
+ 
+-		ret2 = io_poll_add(preq, issue_flags);
++		ret2 = io_poll_add(preq, issue_flags & ~IO_URING_F_UNLOCKED);
+ 		/* successfully updated, don't complete poll request */
+ 		if (!ret2 || ret2 == -EIOCBQUEUED)
+ 			goto out;
+@@ -1027,9 +1026,9 @@ found:
+ 
+ 	req_set_fail(preq);
+ 	io_req_set_res(preq, -ECANCELED, 0);
+-	locked = !(issue_flags & IO_URING_F_UNLOCKED);
+ 	io_req_task_complete(preq, &locked);
+ out:
++	io_ring_submit_unlock(ctx, issue_flags);
+ 	if (ret < 0) {
+ 		req_set_fail(req);
+ 		return ret;
+diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
+index 46bce5577fb48..3cc3232f8c334 100644
+--- a/kernel/bpf/btf.c
++++ b/kernel/bpf/btf.c
+@@ -735,13 +735,12 @@ static bool btf_name_offset_valid(const struct btf *btf, u32 offset)
+ 	return offset < btf->hdr.str_len;
+ }
+ 
+-static bool __btf_name_char_ok(char c, bool first, bool dot_ok)
++static bool __btf_name_char_ok(char c, bool first)
+ {
+ 	if ((first ? !isalpha(c) :
+ 		     !isalnum(c)) &&
+ 	    c != '_' &&
+-	    ((c == '.' && !dot_ok) ||
+-	      c != '.'))
++	    c != '.')
+ 		return false;
+ 	return true;
+ }
+@@ -758,20 +757,20 @@ static const char *btf_str_by_offset(const struct btf *btf, u32 offset)
+ 	return NULL;
+ }
+ 
+-static bool __btf_name_valid(const struct btf *btf, u32 offset, bool dot_ok)
++static bool __btf_name_valid(const struct btf *btf, u32 offset)
+ {
+ 	/* offset must be valid */
+ 	const char *src = btf_str_by_offset(btf, offset);
+ 	const char *src_limit;
+ 
+-	if (!__btf_name_char_ok(*src, true, dot_ok))
++	if (!__btf_name_char_ok(*src, true))
+ 		return false;
+ 
+ 	/* set a limit on identifier length */
+ 	src_limit = src + KSYM_NAME_LEN;
+ 	src++;
+ 	while (*src && src < src_limit) {
+-		if (!__btf_name_char_ok(*src, false, dot_ok))
++		if (!__btf_name_char_ok(*src, false))
+ 			return false;
+ 		src++;
+ 	}
+@@ -779,17 +778,14 @@ static bool __btf_name_valid(const struct btf *btf, u32 offset, bool dot_ok)
+ 	return !*src;
+ }
+ 
+-/* Only C-style identifier is permitted. This can be relaxed if
+- * necessary.
+- */
+ static bool btf_name_valid_identifier(const struct btf *btf, u32 offset)
+ {
+-	return __btf_name_valid(btf, offset, false);
++	return __btf_name_valid(btf, offset);
+ }
+ 
+ static bool btf_name_valid_section(const struct btf *btf, u32 offset)
+ {
+-	return __btf_name_valid(btf, offset, true);
++	return __btf_name_valid(btf, offset);
+ }
+ 
+ static const char *__btf_name_by_offset(const struct btf *btf, u32 offset)
+@@ -4450,7 +4446,7 @@ static s32 btf_var_check_meta(struct btf_verifier_env *env,
+ 	}
+ 
+ 	if (!t->name_off ||
+-	    !__btf_name_valid(env->btf, t->name_off, true)) {
++	    !__btf_name_valid(env->btf, t->name_off)) {
+ 		btf_verifier_log_type(env, t, "Invalid name");
+ 		return -EINVAL;
+ 	}
+diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
+index adc83cb82f379..8bd5e3741d418 100644
+--- a/kernel/bpf/syscall.c
++++ b/kernel/bpf/syscall.c
+@@ -3412,6 +3412,11 @@ static int bpf_prog_attach_check_attach_type(const struct bpf_prog *prog,
+ 		return prog->enforce_expected_attach_type &&
+ 			prog->expected_attach_type != attach_type ?
+ 			-EINVAL : 0;
++	case BPF_PROG_TYPE_KPROBE:
++		if (prog->expected_attach_type == BPF_TRACE_KPROBE_MULTI &&
++		    attach_type != BPF_TRACE_KPROBE_MULTI)
++			return -EINVAL;
++		return 0;
+ 	default:
+ 		return 0;
+ 	}
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 8ed149cc9f648..b039990137444 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -3559,6 +3559,9 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
+ 				return err;
+ 		}
+ 		save_register_state(state, spi, reg, size);
++		/* Break the relation on a narrowing spill. */
++		if (fls64(reg->umax_value) > BITS_PER_BYTE * size)
++			state->stack[spi].spilled_ptr.id = 0;
+ 	} else if (!reg && !(off % BPF_REG_SIZE) && is_bpf_st_mem(insn) &&
+ 		   insn->imm != 0 && env->bpf_capable) {
+ 		struct bpf_reg_state fake_reg = {};
+@@ -16198,9 +16201,10 @@ static int jit_subprogs(struct bpf_verifier_env *env)
+ 	}
+ 
+ 	/* finally lock prog and jit images for all functions and
+-	 * populate kallsysm
++	 * populate kallsysm. Begin at the first subprogram, since
++	 * bpf_prog_load will add the kallsyms for the main program.
+ 	 */
+-	for (i = 0; i < env->subprog_cnt; i++) {
++	for (i = 1; i < env->subprog_cnt; i++) {
+ 		bpf_prog_lock_ro(func[i]);
+ 		bpf_prog_kallsyms_add(func[i]);
+ 	}
+@@ -16226,6 +16230,8 @@ static int jit_subprogs(struct bpf_verifier_env *env)
+ 	prog->jited = 1;
+ 	prog->bpf_func = func[0]->bpf_func;
+ 	prog->jited_len = func[0]->jited_len;
++	prog->aux->extable = func[0]->aux->extable;
++	prog->aux->num_exentries = func[0]->aux->num_exentries;
+ 	prog->aux->func = func;
+ 	prog->aux->func_cnt = env->subprog_cnt;
+ 	bpf_prog_jit_attempt_done(prog);
+diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
+index fd57c1dca1ccf..dad3eacf51265 100644
+--- a/kernel/cgroup/cgroup.c
++++ b/kernel/cgroup/cgroup.c
+@@ -1788,7 +1788,7 @@ int rebind_subsystems(struct cgroup_root *dst_root, u16 ss_mask)
+ {
+ 	struct cgroup *dcgrp = &dst_root->cgrp;
+ 	struct cgroup_subsys *ss;
+-	int ssid, i, ret;
++	int ssid, ret;
+ 	u16 dfl_disable_ss_mask = 0;
+ 
+ 	lockdep_assert_held(&cgroup_mutex);
+@@ -1832,7 +1832,8 @@ int rebind_subsystems(struct cgroup_root *dst_root, u16 ss_mask)
+ 		struct cgroup_root *src_root = ss->root;
+ 		struct cgroup *scgrp = &src_root->cgrp;
+ 		struct cgroup_subsys_state *css = cgroup_css(scgrp, ss);
+-		struct css_set *cset;
++		struct css_set *cset, *cset_pos;
++		struct css_task_iter *it;
+ 
+ 		WARN_ON(!css || cgroup_css(dcgrp, ss));
+ 
+@@ -1850,9 +1851,22 @@ int rebind_subsystems(struct cgroup_root *dst_root, u16 ss_mask)
+ 		css->cgroup = dcgrp;
+ 
+ 		spin_lock_irq(&css_set_lock);
+-		hash_for_each(css_set_table, i, cset, hlist)
++		WARN_ON(!list_empty(&dcgrp->e_csets[ss->id]));
++		list_for_each_entry_safe(cset, cset_pos, &scgrp->e_csets[ss->id],
++					 e_cset_node[ss->id]) {
+ 			list_move_tail(&cset->e_cset_node[ss->id],
+ 				       &dcgrp->e_csets[ss->id]);
++			/*
++			 * all css_sets of scgrp together in same order to dcgrp,
++			 * patch in-flight iterators to preserve correct iteration.
++			 * since the iterator is always advanced right away and
++			 * finished when it->cset_pos meets it->cset_head, so only
++			 * update it->cset_head is enough here.
++			 */
++			list_for_each_entry(it, &cset->task_iters, iters_node)
++				if (it->cset_head == &scgrp->e_csets[ss->id])
++					it->cset_head = &dcgrp->e_csets[ss->id];
++		}
+ 		spin_unlock_irq(&css_set_lock);
+ 
+ 		if (ss->css_rstat_flush) {
+diff --git a/kernel/cgroup/legacy_freezer.c b/kernel/cgroup/legacy_freezer.c
+index 936473203a6b5..122dacb3a4439 100644
+--- a/kernel/cgroup/legacy_freezer.c
++++ b/kernel/cgroup/legacy_freezer.c
+@@ -108,16 +108,18 @@ static int freezer_css_online(struct cgroup_subsys_state *css)
+ 	struct freezer *freezer = css_freezer(css);
+ 	struct freezer *parent = parent_freezer(freezer);
+ 
++	cpus_read_lock();
+ 	mutex_lock(&freezer_mutex);
+ 
+ 	freezer->state |= CGROUP_FREEZER_ONLINE;
+ 
+ 	if (parent && (parent->state & CGROUP_FREEZING)) {
+ 		freezer->state |= CGROUP_FREEZING_PARENT | CGROUP_FROZEN;
+-		static_branch_inc(&freezer_active);
++		static_branch_inc_cpuslocked(&freezer_active);
+ 	}
+ 
+ 	mutex_unlock(&freezer_mutex);
++	cpus_read_unlock();
+ 	return 0;
+ }
+ 
+@@ -132,14 +134,16 @@ static void freezer_css_offline(struct cgroup_subsys_state *css)
+ {
+ 	struct freezer *freezer = css_freezer(css);
+ 
++	cpus_read_lock();
+ 	mutex_lock(&freezer_mutex);
+ 
+ 	if (freezer->state & CGROUP_FREEZING)
+-		static_branch_dec(&freezer_active);
++		static_branch_dec_cpuslocked(&freezer_active);
+ 
+ 	freezer->state = 0;
+ 
+ 	mutex_unlock(&freezer_mutex);
++	cpus_read_unlock();
+ }
+ 
+ static void freezer_css_free(struct cgroup_subsys_state *css)
+diff --git a/kernel/time/tick-common.c b/kernel/time/tick-common.c
+index 65b8658da829e..e9138cd7a0f52 100644
+--- a/kernel/time/tick-common.c
++++ b/kernel/time/tick-common.c
+@@ -218,19 +218,8 @@ static void tick_setup_device(struct tick_device *td,
+ 		 * this cpu:
+ 		 */
+ 		if (tick_do_timer_cpu == TICK_DO_TIMER_BOOT) {
+-			ktime_t next_p;
+-			u32 rem;
+-
+ 			tick_do_timer_cpu = cpu;
+-
+-			next_p = ktime_get();
+-			div_u64_rem(next_p, TICK_NSEC, &rem);
+-			if (rem) {
+-				next_p -= rem;
+-				next_p += TICK_NSEC;
+-			}
+-
+-			tick_next_period = next_p;
++			tick_next_period = ktime_get();
+ #ifdef CONFIG_NO_HZ_FULL
+ 			/*
+ 			 * The boot CPU may be nohz_full, in which case set
+diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
+index a46506f7ec6d0..d6fb6a676bbbb 100644
+--- a/kernel/time/tick-sched.c
++++ b/kernel/time/tick-sched.c
+@@ -161,8 +161,19 @@ static ktime_t tick_init_jiffy_update(void)
+ 	raw_spin_lock(&jiffies_lock);
+ 	write_seqcount_begin(&jiffies_seq);
+ 	/* Did we start the jiffies update yet ? */
+-	if (last_jiffies_update == 0)
++	if (last_jiffies_update == 0) {
++		u32 rem;
++
++		/*
++		 * Ensure that the tick is aligned to a multiple of
++		 * TICK_NSEC.
++		 */
++		div_u64_rem(tick_next_period, TICK_NSEC, &rem);
++		if (rem)
++			tick_next_period += TICK_NSEC - rem;
++
+ 		last_jiffies_update = tick_next_period;
++	}
+ 	period = last_jiffies_update;
+ 	write_seqcount_end(&jiffies_seq);
+ 	raw_spin_unlock(&jiffies_lock);
+diff --git a/mm/maccess.c b/mm/maccess.c
+index 074f6b086671e..518a25667323e 100644
+--- a/mm/maccess.c
++++ b/mm/maccess.c
+@@ -5,6 +5,7 @@
+ #include <linux/export.h>
+ #include <linux/mm.h>
+ #include <linux/uaccess.h>
++#include <asm/tlb.h>
+ 
+ bool __weak copy_from_kernel_nofault_allowed(const void *unsafe_src,
+ 		size_t size)
+@@ -113,11 +114,16 @@ Efault:
+ long copy_from_user_nofault(void *dst, const void __user *src, size_t size)
+ {
+ 	long ret = -EFAULT;
+-	if (access_ok(src, size)) {
+-		pagefault_disable();
+-		ret = __copy_from_user_inatomic(dst, src, size);
+-		pagefault_enable();
+-	}
++
++	if (!__access_ok(src, size))
++		return ret;
++
++	if (!nmi_uaccess_okay())
++		return ret;
++
++	pagefault_disable();
++	ret = __copy_from_user_inatomic(dst, src, size);
++	pagefault_enable();
+ 
+ 	if (ret)
+ 		return -EFAULT;
+diff --git a/mm/memfd.c b/mm/memfd.c
+index a0a7a37e81771..0382f001e2a00 100644
+--- a/mm/memfd.c
++++ b/mm/memfd.c
+@@ -375,12 +375,15 @@ SYSCALL_DEFINE2(memfd_create,
+ 
+ 		inode->i_mode &= ~0111;
+ 		file_seals = memfd_file_seals_ptr(file);
+-		*file_seals &= ~F_SEAL_SEAL;
+-		*file_seals |= F_SEAL_EXEC;
++		if (file_seals) {
++			*file_seals &= ~F_SEAL_SEAL;
++			*file_seals |= F_SEAL_EXEC;
++		}
+ 	} else if (flags & MFD_ALLOW_SEALING) {
+ 		/* MFD_EXEC and MFD_ALLOW_SEALING are set */
+ 		file_seals = memfd_file_seals_ptr(file);
+-		*file_seals &= ~F_SEAL_SEAL;
++		if (file_seals)
++			*file_seals &= ~F_SEAL_SEAL;
+ 	}
+ 
+ 	fd_install(fd, file);
+diff --git a/mm/mprotect.c b/mm/mprotect.c
+index 36351a00c0e82..498de2c2f2c4f 100644
+--- a/mm/mprotect.c
++++ b/mm/mprotect.c
+@@ -838,7 +838,7 @@ static int do_mprotect_pkey(unsigned long start, size_t len,
+ 	}
+ 	tlb_finish_mmu(&tlb);
+ 
+-	if (!error && vma_iter_end(&vmi) < end)
++	if (!error && tmp < end)
+ 		error = -ENOMEM;
+ 
+ out:
+diff --git a/mm/usercopy.c b/mm/usercopy.c
+index 4c3164beacec0..83c164aba6e0f 100644
+--- a/mm/usercopy.c
++++ b/mm/usercopy.c
+@@ -173,7 +173,7 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
+ 		return;
+ 	}
+ 
+-	if (is_vmalloc_addr(ptr)) {
++	if (is_vmalloc_addr(ptr) && !pagefault_disabled()) {
+ 		struct vmap_area *area = find_vmap_area(addr);
+ 
+ 		if (!area)
+diff --git a/mm/vmalloc.c b/mm/vmalloc.c
+index 31ff782d368b0..f908abba4d4ae 100644
+--- a/mm/vmalloc.c
++++ b/mm/vmalloc.c
+@@ -3046,11 +3046,20 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
+ 	 * allocation request, free them via vfree() if any.
+ 	 */
+ 	if (area->nr_pages != nr_small_pages) {
+-		/* vm_area_alloc_pages() can also fail due to a fatal signal */
+-		if (!fatal_signal_pending(current))
++		/*
++		 * vm_area_alloc_pages() can fail due to insufficient memory but
++		 * also:-
++		 *
++		 * - a pending fatal signal
++		 * - insufficient huge page-order pages
++		 *
++		 * Since we always retry allocations at order-0 in the huge page
++		 * case a warning for either is spurious.
++		 */
++		if (!fatal_signal_pending(current) && page_order == 0)
+ 			warn_alloc(gfp_mask, NULL,
+-				"vmalloc error: size %lu, page order %u, failed to allocate pages",
+-				area->nr_pages * PAGE_SIZE, page_order);
++				"vmalloc error: size %lu, failed to allocate pages",
++				area->nr_pages * PAGE_SIZE);
+ 		goto fail;
+ 	}
+ 
+diff --git a/net/core/sock.c b/net/core/sock.c
+index 3fd71f343c9f2..b34c48f802e98 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -1362,12 +1362,6 @@ set_sndbuf:
+ 		__sock_set_mark(sk, val);
+ 		break;
+ 	case SO_RCVMARK:
+-		if (!sockopt_ns_capable(sock_net(sk)->user_ns, CAP_NET_RAW) &&
+-		    !sockopt_ns_capable(sock_net(sk)->user_ns, CAP_NET_ADMIN)) {
+-			ret = -EPERM;
+-			break;
+-		}
+-
+ 		sock_valbool_flag(sk, SOCK_RCVMARK, valbool);
+ 		break;
+ 
+diff --git a/net/dsa/dsa.c b/net/dsa/dsa.c
+index e5f156940c671..6cd8607a3928f 100644
+--- a/net/dsa/dsa.c
++++ b/net/dsa/dsa.c
+@@ -402,6 +402,24 @@ static int dsa_tree_setup_default_cpu(struct dsa_switch_tree *dst)
+ 	return 0;
+ }
+ 
++static struct dsa_port *
++dsa_switch_preferred_default_local_cpu_port(struct dsa_switch *ds)
++{
++	struct dsa_port *cpu_dp;
++
++	if (!ds->ops->preferred_default_local_cpu_port)
++		return NULL;
++
++	cpu_dp = ds->ops->preferred_default_local_cpu_port(ds);
++	if (!cpu_dp)
++		return NULL;
++
++	if (WARN_ON(!dsa_port_is_cpu(cpu_dp) || cpu_dp->ds != ds))
++		return NULL;
++
++	return cpu_dp;
++}
++
+ /* Perform initial assignment of CPU ports to user ports and DSA links in the
+  * fabric, giving preference to CPU ports local to each switch. Default to
+  * using the first CPU port in the switch tree if the port does not have a CPU
+@@ -409,12 +427,16 @@ static int dsa_tree_setup_default_cpu(struct dsa_switch_tree *dst)
+  */
+ static int dsa_tree_setup_cpu_ports(struct dsa_switch_tree *dst)
+ {
+-	struct dsa_port *cpu_dp, *dp;
++	struct dsa_port *preferred_cpu_dp, *cpu_dp, *dp;
+ 
+ 	list_for_each_entry(cpu_dp, &dst->ports, list) {
+ 		if (!dsa_port_is_cpu(cpu_dp))
+ 			continue;
+ 
++		preferred_cpu_dp = dsa_switch_preferred_default_local_cpu_port(cpu_dp->ds);
++		if (preferred_cpu_dp && preferred_cpu_dp != cpu_dp)
++			continue;
++
+ 		/* Prefer a local CPU port */
+ 		dsa_switch_for_each_port(dp, cpu_dp->ds) {
+ 			/* Prefer the first local CPU port found */
+diff --git a/net/ipv4/esp4_offload.c b/net/ipv4/esp4_offload.c
+index 3969fa805679c..ee848be59e65a 100644
+--- a/net/ipv4/esp4_offload.c
++++ b/net/ipv4/esp4_offload.c
+@@ -340,6 +340,9 @@ static int esp_xmit(struct xfrm_state *x, struct sk_buff *skb,  netdev_features_
+ 
+ 	secpath_reset(skb);
+ 
++	if (skb_needs_linearize(skb, skb->dev->features) &&
++	    __skb_linearize(skb))
++		return -ENOMEM;
+ 	return 0;
+ }
+ 
+diff --git a/net/ipv4/xfrm4_input.c b/net/ipv4/xfrm4_input.c
+index ad2afeef4f106..eac206a290d05 100644
+--- a/net/ipv4/xfrm4_input.c
++++ b/net/ipv4/xfrm4_input.c
+@@ -164,6 +164,7 @@ drop:
+ 	kfree_skb(skb);
+ 	return 0;
+ }
++EXPORT_SYMBOL(xfrm4_udp_encap_rcv);
+ 
+ int xfrm4_rcv(struct sk_buff *skb)
+ {
+diff --git a/net/ipv6/esp6_offload.c b/net/ipv6/esp6_offload.c
+index 75c02992c520f..7723402689973 100644
+--- a/net/ipv6/esp6_offload.c
++++ b/net/ipv6/esp6_offload.c
+@@ -374,6 +374,9 @@ static int esp6_xmit(struct xfrm_state *x, struct sk_buff *skb,  netdev_features
+ 
+ 	secpath_reset(skb);
+ 
++	if (skb_needs_linearize(skb, skb->dev->features) &&
++	    __skb_linearize(skb))
++		return -ENOMEM;
+ 	return 0;
+ }
+ 
+diff --git a/net/ipv6/xfrm6_input.c b/net/ipv6/xfrm6_input.c
+index 04cbeefd89828..4907ab241d6be 100644
+--- a/net/ipv6/xfrm6_input.c
++++ b/net/ipv6/xfrm6_input.c
+@@ -86,6 +86,9 @@ int xfrm6_udp_encap_rcv(struct sock *sk, struct sk_buff *skb)
+ 	__be32 *udpdata32;
+ 	__u16 encap_type = up->encap_type;
+ 
++	if (skb->protocol == htons(ETH_P_IP))
++		return xfrm4_udp_encap_rcv(sk, skb);
++
+ 	/* if this is not encapsulated socket, then just return now */
+ 	if (!encap_type)
+ 		return 1;
+diff --git a/net/mptcp/pm_netlink.c b/net/mptcp/pm_netlink.c
+index 5396907cc5963..7b185a064acca 100644
+--- a/net/mptcp/pm_netlink.c
++++ b/net/mptcp/pm_netlink.c
+@@ -1047,6 +1047,7 @@ static int mptcp_pm_nl_create_listen_socket(struct sock *sk,
+ 	if (err)
+ 		return err;
+ 
++	inet_sk_state_store(newsk, TCP_LISTEN);
+ 	err = kernel_listen(ssock, backlog);
+ 	if (err)
+ 		return err;
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index 0e7c194948cfe..a08b7419062c2 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -44,7 +44,7 @@ enum {
+ static struct percpu_counter mptcp_sockets_allocated ____cacheline_aligned_in_smp;
+ 
+ static void __mptcp_destroy_sock(struct sock *sk);
+-static void __mptcp_check_send_data_fin(struct sock *sk);
++static void mptcp_check_send_data_fin(struct sock *sk);
+ 
+ DEFINE_PER_CPU(struct mptcp_delegated_action, mptcp_delegated_actions);
+ static struct net_device mptcp_napi_dev;
+@@ -411,8 +411,7 @@ static bool mptcp_pending_data_fin_ack(struct sock *sk)
+ {
+ 	struct mptcp_sock *msk = mptcp_sk(sk);
+ 
+-	return !__mptcp_check_fallback(msk) &&
+-	       ((1 << sk->sk_state) &
++	return ((1 << sk->sk_state) &
+ 		(TCPF_FIN_WAIT1 | TCPF_CLOSING | TCPF_LAST_ACK)) &&
+ 	       msk->write_seq == READ_ONCE(msk->snd_una);
+ }
+@@ -570,9 +569,6 @@ static bool mptcp_check_data_fin(struct sock *sk)
+ 	u64 rcv_data_fin_seq;
+ 	bool ret = false;
+ 
+-	if (__mptcp_check_fallback(msk))
+-		return ret;
+-
+ 	/* Need to ack a DATA_FIN received from a peer while this side
+ 	 * of the connection is in ESTABLISHED, FIN_WAIT1, or FIN_WAIT2.
+ 	 * msk->rcv_data_fin was set when parsing the incoming options
+@@ -610,7 +606,8 @@ static bool mptcp_check_data_fin(struct sock *sk)
+ 		}
+ 
+ 		ret = true;
+-		mptcp_send_ack(msk);
++		if (!__mptcp_check_fallback(msk))
++			mptcp_send_ack(msk);
+ 		mptcp_close_wake_up(sk);
+ 	}
+ 	return ret;
+@@ -837,12 +834,12 @@ static bool __mptcp_finish_join(struct mptcp_sock *msk, struct sock *ssk)
+ 	return true;
+ }
+ 
+-static void __mptcp_flush_join_list(struct sock *sk)
++static void __mptcp_flush_join_list(struct sock *sk, struct list_head *join_list)
+ {
+ 	struct mptcp_subflow_context *tmp, *subflow;
+ 	struct mptcp_sock *msk = mptcp_sk(sk);
+ 
+-	list_for_each_entry_safe(subflow, tmp, &msk->join_list, node) {
++	list_for_each_entry_safe(subflow, tmp, join_list, node) {
+ 		struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
+ 		bool slow = lock_sock_fast(ssk);
+ 
+@@ -1596,7 +1593,7 @@ out:
+ 	if (!mptcp_timer_pending(sk))
+ 		mptcp_reset_timer(sk);
+ 	if (do_check_data_fin)
+-		__mptcp_check_send_data_fin(sk);
++		mptcp_check_send_data_fin(sk);
+ }
+ 
+ static void __mptcp_subflow_push_pending(struct sock *sk, struct sock *ssk, bool first)
+@@ -1696,7 +1693,13 @@ static int mptcp_sendmsg_fastopen(struct sock *sk, struct sock *ssk, struct msgh
+ 		if (ret && ret != -EINPROGRESS && ret != -ERESTARTSYS && ret != -EINTR)
+ 			*copied_syn = 0;
+ 	} else if (ret && ret != -EINPROGRESS) {
+-		mptcp_disconnect(sk, 0);
++		/* The disconnect() op called by tcp_sendmsg_fastopen()/
++		 * __inet_stream_connect() can fail, due to looking check,
++		 * see mptcp_disconnect().
++		 * Attempt it again outside the problematic scope.
++		 */
++		if (!mptcp_disconnect(sk, 0))
++			sk->sk_socket->state = SS_UNCONNECTED;
+ 	}
+ 
+ 	return ret;
+@@ -2360,7 +2363,10 @@ static void __mptcp_close_ssk(struct sock *sk, struct sock *ssk,
+ 
+ 	need_push = (flags & MPTCP_CF_PUSH) && __mptcp_retransmit_pending_data(sk);
+ 	if (!dispose_it) {
+-		tcp_disconnect(ssk, 0);
++		/* The MPTCP code never wait on the subflow sockets, TCP-level
++		 * disconnect should never fail
++		 */
++		WARN_ON_ONCE(tcp_disconnect(ssk, 0));
+ 		msk->subflow->state = SS_UNCONNECTED;
+ 		mptcp_subflow_ctx_reset(subflow);
+ 		release_sock(ssk);
+@@ -2379,13 +2385,6 @@ static void __mptcp_close_ssk(struct sock *sk, struct sock *ssk,
+ 		kfree_rcu(subflow, rcu);
+ 	} else {
+ 		/* otherwise tcp will dispose of the ssk and subflow ctx */
+-		if (ssk->sk_state == TCP_LISTEN) {
+-			tcp_set_state(ssk, TCP_CLOSE);
+-			mptcp_subflow_queue_clean(sk, ssk);
+-			inet_csk_listen_stop(ssk);
+-			mptcp_event_pm_listener(ssk, MPTCP_EVENT_LISTENER_CLOSED);
+-		}
+-
+ 		__tcp_close(ssk, 0);
+ 
+ 		/* close acquired an extra ref */
+@@ -2642,8 +2641,6 @@ static void mptcp_worker(struct work_struct *work)
+ 	if (unlikely((1 << state) & (TCPF_CLOSE | TCPF_LISTEN)))
+ 		goto unlock;
+ 
+-	mptcp_check_data_fin_ack(sk);
+-
+ 	mptcp_check_fastclose(msk);
+ 
+ 	mptcp_pm_nl_work(msk);
+@@ -2651,7 +2648,8 @@ static void mptcp_worker(struct work_struct *work)
+ 	if (test_and_clear_bit(MPTCP_WORK_EOF, &msk->flags))
+ 		mptcp_check_for_eof(msk);
+ 
+-	__mptcp_check_send_data_fin(sk);
++	mptcp_check_send_data_fin(sk);
++	mptcp_check_data_fin_ack(sk);
+ 	mptcp_check_data_fin(sk);
+ 
+ 	if (test_and_clear_bit(MPTCP_WORK_CLOSE_SUBFLOW, &msk->flags))
+@@ -2787,13 +2785,19 @@ void mptcp_subflow_shutdown(struct sock *sk, struct sock *ssk, int how)
+ 			break;
+ 		fallthrough;
+ 	case TCP_SYN_SENT:
+-		tcp_disconnect(ssk, O_NONBLOCK);
++		WARN_ON_ONCE(tcp_disconnect(ssk, O_NONBLOCK));
+ 		break;
+ 	default:
+ 		if (__mptcp_check_fallback(mptcp_sk(sk))) {
+ 			pr_debug("Fallback");
+ 			ssk->sk_shutdown |= how;
+ 			tcp_shutdown(ssk, how);
++
++			/* simulate the data_fin ack reception to let the state
++			 * machine move forward
++			 */
++			WRITE_ONCE(mptcp_sk(sk)->snd_una, mptcp_sk(sk)->snd_nxt);
++			mptcp_schedule_work(sk);
+ 		} else {
+ 			pr_debug("Sending DATA_FIN on subflow %p", ssk);
+ 			tcp_send_ack(ssk);
+@@ -2833,7 +2837,7 @@ static int mptcp_close_state(struct sock *sk)
+ 	return next & TCP_ACTION_FIN;
+ }
+ 
+-static void __mptcp_check_send_data_fin(struct sock *sk)
++static void mptcp_check_send_data_fin(struct sock *sk)
+ {
+ 	struct mptcp_subflow_context *subflow;
+ 	struct mptcp_sock *msk = mptcp_sk(sk);
+@@ -2851,19 +2855,6 @@ static void __mptcp_check_send_data_fin(struct sock *sk)
+ 
+ 	WRITE_ONCE(msk->snd_nxt, msk->write_seq);
+ 
+-	/* fallback socket will not get data_fin/ack, can move to the next
+-	 * state now
+-	 */
+-	if (__mptcp_check_fallback(msk)) {
+-		WRITE_ONCE(msk->snd_una, msk->write_seq);
+-		if ((1 << sk->sk_state) & (TCPF_CLOSING | TCPF_LAST_ACK)) {
+-			inet_sk_state_store(sk, TCP_CLOSE);
+-			mptcp_close_wake_up(sk);
+-		} else if (sk->sk_state == TCP_FIN_WAIT1) {
+-			inet_sk_state_store(sk, TCP_FIN_WAIT2);
+-		}
+-	}
+-
+ 	mptcp_for_each_subflow(msk, subflow) {
+ 		struct sock *tcp_sk = mptcp_subflow_tcp_sock(subflow);
+ 
+@@ -2883,7 +2874,7 @@ static void __mptcp_wr_shutdown(struct sock *sk)
+ 	WRITE_ONCE(msk->write_seq, msk->write_seq + 1);
+ 	WRITE_ONCE(msk->snd_data_fin_enable, 1);
+ 
+-	__mptcp_check_send_data_fin(sk);
++	mptcp_check_send_data_fin(sk);
+ }
+ 
+ static void __mptcp_destroy_sock(struct sock *sk)
+@@ -2928,10 +2919,24 @@ static __poll_t mptcp_check_readable(struct mptcp_sock *msk)
+ 	return EPOLLIN | EPOLLRDNORM;
+ }
+ 
+-static void mptcp_listen_inuse_dec(struct sock *sk)
++static void mptcp_check_listen_stop(struct sock *sk)
+ {
+-	if (inet_sk_state_load(sk) == TCP_LISTEN)
+-		sock_prot_inuse_add(sock_net(sk), sk->sk_prot, -1);
++	struct sock *ssk;
++
++	if (inet_sk_state_load(sk) != TCP_LISTEN)
++		return;
++
++	sock_prot_inuse_add(sock_net(sk), sk->sk_prot, -1);
++	ssk = mptcp_sk(sk)->first;
++	if (WARN_ON_ONCE(!ssk || inet_sk_state_load(ssk) != TCP_LISTEN))
++		return;
++
++	lock_sock_nested(ssk, SINGLE_DEPTH_NESTING);
++	mptcp_subflow_queue_clean(sk, ssk);
++	inet_csk_listen_stop(ssk);
++	mptcp_event_pm_listener(ssk, MPTCP_EVENT_LISTENER_CLOSED);
++	tcp_set_state(ssk, TCP_CLOSE);
++	release_sock(ssk);
+ }
+ 
+ bool __mptcp_close(struct sock *sk, long timeout)
+@@ -2944,7 +2949,7 @@ bool __mptcp_close(struct sock *sk, long timeout)
+ 	WRITE_ONCE(sk->sk_shutdown, SHUTDOWN_MASK);
+ 
+ 	if ((1 << sk->sk_state) & (TCPF_LISTEN | TCPF_CLOSE)) {
+-		mptcp_listen_inuse_dec(sk);
++		mptcp_check_listen_stop(sk);
+ 		inet_sk_state_store(sk, TCP_CLOSE);
+ 		goto cleanup;
+ 	}
+@@ -3045,15 +3050,20 @@ static int mptcp_disconnect(struct sock *sk, int flags)
+ {
+ 	struct mptcp_sock *msk = mptcp_sk(sk);
+ 
++	/* Deny disconnect if other threads are blocked in sk_wait_event()
++	 * or inet_wait_for_connect().
++	 */
++	if (sk->sk_wait_pending)
++		return -EBUSY;
++
+ 	/* We are on the fastopen error path. We can't call straight into the
+ 	 * subflows cleanup code due to lock nesting (we are already under
+-	 * msk->firstsocket lock). Do nothing and leave the cleanup to the
+-	 * caller.
++	 * msk->firstsocket lock).
+ 	 */
+ 	if (msk->fastopening)
+-		return 0;
++		return -EBUSY;
+ 
+-	mptcp_listen_inuse_dec(sk);
++	mptcp_check_listen_stop(sk);
+ 	inet_sk_state_store(sk, TCP_CLOSE);
+ 
+ 	mptcp_stop_timer(sk);
+@@ -3112,6 +3122,7 @@ struct sock *mptcp_sk_clone_init(const struct sock *sk,
+ 		inet_sk(nsk)->pinet6 = mptcp_inet6_sk(nsk);
+ #endif
+ 
++	nsk->sk_wait_pending = 0;
+ 	__mptcp_init_sock(nsk);
+ 
+ 	msk = mptcp_sk(nsk);
+@@ -3299,9 +3310,14 @@ static void mptcp_release_cb(struct sock *sk)
+ 	for (;;) {
+ 		unsigned long flags = (msk->cb_flags & MPTCP_FLAGS_PROCESS_CTX_NEED) |
+ 				      msk->push_pending;
++		struct list_head join_list;
++
+ 		if (!flags)
+ 			break;
+ 
++		INIT_LIST_HEAD(&join_list);
++		list_splice_init(&msk->join_list, &join_list);
++
+ 		/* the following actions acquire the subflow socket lock
+ 		 *
+ 		 * 1) can't be invoked in atomic scope
+@@ -3312,8 +3328,9 @@ static void mptcp_release_cb(struct sock *sk)
+ 		msk->push_pending = 0;
+ 		msk->cb_flags &= ~flags;
+ 		spin_unlock_bh(&sk->sk_lock.slock);
++
+ 		if (flags & BIT(MPTCP_FLUSH_JOIN_LIST))
+-			__mptcp_flush_join_list(sk);
++			__mptcp_flush_join_list(sk, &join_list);
+ 		if (flags & BIT(MPTCP_PUSH_PENDING))
+ 			__mptcp_push_pending(sk, 0);
+ 		if (flags & BIT(MPTCP_RETRANSMIT))
+diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
+index bb0301398d3b4..8e0c35c598289 100644
+--- a/net/mptcp/subflow.c
++++ b/net/mptcp/subflow.c
+@@ -1749,14 +1749,16 @@ static void subflow_state_change(struct sock *sk)
+ {
+ 	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(sk);
+ 	struct sock *parent = subflow->conn;
++	struct mptcp_sock *msk;
+ 
+ 	__subflow_state_change(sk);
+ 
++	msk = mptcp_sk(parent);
+ 	if (subflow_simultaneous_connect(sk)) {
+ 		mptcp_propagate_sndbuf(parent, sk);
+ 		mptcp_do_fallback(sk);
+-		mptcp_rcv_space_init(mptcp_sk(parent), sk);
+-		pr_fallback(mptcp_sk(parent));
++		mptcp_rcv_space_init(msk, sk);
++		pr_fallback(msk);
+ 		subflow->conn_finished = 1;
+ 		mptcp_set_connected(parent);
+ 	}
+@@ -1772,11 +1774,12 @@ static void subflow_state_change(struct sock *sk)
+ 
+ 	subflow_sched_work_if_closed(mptcp_sk(parent), sk);
+ 
+-	if (__mptcp_check_fallback(mptcp_sk(parent)) &&
+-	    !subflow->rx_eof && subflow_is_done(sk)) {
+-		subflow->rx_eof = 1;
+-		mptcp_subflow_eof(parent);
+-	}
++	/* when the fallback subflow closes the rx side, trigger a 'dummy'
++	 * ingress data fin, so that the msk state will follow along
++	 */
++	if (__mptcp_check_fallback(msk) && subflow_is_done(sk) && msk->first == sk &&
++	    mptcp_update_rcv_data_fin(msk, READ_ONCE(msk->ack_seq), true))
++		mptcp_schedule_work(parent);
+ }
+ 
+ void mptcp_subflow_queue_clean(struct sock *listener_sk, struct sock *listener_ssk)
+diff --git a/net/netfilter/ipvs/ip_vs_xmit.c b/net/netfilter/ipvs/ip_vs_xmit.c
+index 80448885c3d71..b452eb3ddcecb 100644
+--- a/net/netfilter/ipvs/ip_vs_xmit.c
++++ b/net/netfilter/ipvs/ip_vs_xmit.c
+@@ -1225,6 +1225,7 @@ ip_vs_tunnel_xmit(struct sk_buff *skb, struct ip_vs_conn *cp,
+ 	skb->transport_header = skb->network_header;
+ 
+ 	skb_set_inner_ipproto(skb, next_protocol);
++	skb_set_inner_mac_header(skb, skb_inner_network_offset(skb));
+ 
+ 	if (tun_type == IP_VS_CONN_F_TUNNEL_TYPE_GUE) {
+ 		bool check = false;
+@@ -1373,6 +1374,7 @@ ip_vs_tunnel_xmit_v6(struct sk_buff *skb, struct ip_vs_conn *cp,
+ 	skb->transport_header = skb->network_header;
+ 
+ 	skb_set_inner_ipproto(skb, next_protocol);
++	skb_set_inner_mac_header(skb, skb_inner_network_offset(skb));
+ 
+ 	if (tun_type == IP_VS_CONN_F_TUNNEL_TYPE_GUE) {
+ 		bool check = false;
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 8f63514656a17..398bb8fb30a52 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -153,6 +153,7 @@ static struct nft_trans *nft_trans_alloc_gfp(const struct nft_ctx *ctx,
+ 		return NULL;
+ 
+ 	INIT_LIST_HEAD(&trans->list);
++	INIT_LIST_HEAD(&trans->binding_list);
+ 	trans->msg_type = msg_type;
+ 	trans->ctx	= *ctx;
+ 
+@@ -165,13 +166,20 @@ static struct nft_trans *nft_trans_alloc(const struct nft_ctx *ctx,
+ 	return nft_trans_alloc_gfp(ctx, msg_type, size, GFP_KERNEL);
+ }
+ 
+-static void nft_trans_destroy(struct nft_trans *trans)
++static void nft_trans_list_del(struct nft_trans *trans)
+ {
+ 	list_del(&trans->list);
++	list_del(&trans->binding_list);
++}
++
++static void nft_trans_destroy(struct nft_trans *trans)
++{
++	nft_trans_list_del(trans);
+ 	kfree(trans);
+ }
+ 
+-static void nft_set_trans_bind(const struct nft_ctx *ctx, struct nft_set *set)
++static void __nft_set_trans_bind(const struct nft_ctx *ctx, struct nft_set *set,
++				 bool bind)
+ {
+ 	struct nftables_pernet *nft_net;
+ 	struct net *net = ctx->net;
+@@ -185,16 +193,80 @@ static void nft_set_trans_bind(const struct nft_ctx *ctx, struct nft_set *set)
+ 		switch (trans->msg_type) {
+ 		case NFT_MSG_NEWSET:
+ 			if (nft_trans_set(trans) == set)
+-				nft_trans_set_bound(trans) = true;
++				nft_trans_set_bound(trans) = bind;
+ 			break;
+ 		case NFT_MSG_NEWSETELEM:
+ 			if (nft_trans_elem_set(trans) == set)
+-				nft_trans_elem_set_bound(trans) = true;
++				nft_trans_elem_set_bound(trans) = bind;
+ 			break;
+ 		}
+ 	}
+ }
+ 
++static void nft_set_trans_bind(const struct nft_ctx *ctx, struct nft_set *set)
++{
++	return __nft_set_trans_bind(ctx, set, true);
++}
++
++static void nft_set_trans_unbind(const struct nft_ctx *ctx, struct nft_set *set)
++{
++	return __nft_set_trans_bind(ctx, set, false);
++}
++
++static void __nft_chain_trans_bind(const struct nft_ctx *ctx,
++				   struct nft_chain *chain, bool bind)
++{
++	struct nftables_pernet *nft_net;
++	struct net *net = ctx->net;
++	struct nft_trans *trans;
++
++	if (!nft_chain_binding(chain))
++		return;
++
++	nft_net = nft_pernet(net);
++	list_for_each_entry_reverse(trans, &nft_net->commit_list, list) {
++		switch (trans->msg_type) {
++		case NFT_MSG_NEWCHAIN:
++			if (nft_trans_chain(trans) == chain)
++				nft_trans_chain_bound(trans) = bind;
++			break;
++		case NFT_MSG_NEWRULE:
++			if (trans->ctx.chain == chain)
++				nft_trans_rule_bound(trans) = bind;
++			break;
++		}
++	}
++}
++
++static void nft_chain_trans_bind(const struct nft_ctx *ctx,
++				 struct nft_chain *chain)
++{
++	__nft_chain_trans_bind(ctx, chain, true);
++}
++
++int nf_tables_bind_chain(const struct nft_ctx *ctx, struct nft_chain *chain)
++{
++	if (!nft_chain_binding(chain))
++		return 0;
++
++	if (nft_chain_binding(ctx->chain))
++		return -EOPNOTSUPP;
++
++	if (chain->bound)
++		return -EBUSY;
++
++	chain->bound = true;
++	chain->use++;
++	nft_chain_trans_bind(ctx, chain);
++
++	return 0;
++}
++
++void nf_tables_unbind_chain(const struct nft_ctx *ctx, struct nft_chain *chain)
++{
++	__nft_chain_trans_bind(ctx, chain, false);
++}
++
+ static int nft_netdev_register_hooks(struct net *net,
+ 				     struct list_head *hook_list)
+ {
+@@ -294,6 +366,19 @@ static void nft_trans_commit_list_add_tail(struct net *net, struct nft_trans *tr
+ {
+ 	struct nftables_pernet *nft_net = nft_pernet(net);
+ 
++	switch (trans->msg_type) {
++	case NFT_MSG_NEWSET:
++		if (!nft_trans_set_update(trans) &&
++		    nft_set_is_anonymous(nft_trans_set(trans)))
++			list_add_tail(&trans->binding_list, &nft_net->binding_list);
++		break;
++	case NFT_MSG_NEWCHAIN:
++		if (!nft_trans_chain_update(trans) &&
++		    nft_chain_binding(nft_trans_chain(trans)))
++			list_add_tail(&trans->binding_list, &nft_net->binding_list);
++		break;
++	}
++
+ 	list_add_tail(&trans->list, &nft_net->commit_list);
+ }
+ 
+@@ -340,8 +425,9 @@ static struct nft_trans *nft_trans_chain_add(struct nft_ctx *ctx, int msg_type)
+ 				ntohl(nla_get_be32(ctx->nla[NFTA_CHAIN_ID]));
+ 		}
+ 	}
+-
++	nft_trans_chain(trans) = ctx->chain;
+ 	nft_trans_commit_list_add_tail(ctx->net, trans);
++
+ 	return trans;
+ }
+ 
+@@ -359,8 +445,7 @@ static int nft_delchain(struct nft_ctx *ctx)
+ 	return 0;
+ }
+ 
+-static void nft_rule_expr_activate(const struct nft_ctx *ctx,
+-				   struct nft_rule *rule)
++void nft_rule_expr_activate(const struct nft_ctx *ctx, struct nft_rule *rule)
+ {
+ 	struct nft_expr *expr;
+ 
+@@ -373,9 +458,8 @@ static void nft_rule_expr_activate(const struct nft_ctx *ctx,
+ 	}
+ }
+ 
+-static void nft_rule_expr_deactivate(const struct nft_ctx *ctx,
+-				     struct nft_rule *rule,
+-				     enum nft_trans_phase phase)
++void nft_rule_expr_deactivate(const struct nft_ctx *ctx, struct nft_rule *rule,
++			      enum nft_trans_phase phase)
+ {
+ 	struct nft_expr *expr;
+ 
+@@ -497,6 +581,58 @@ static int nft_trans_set_add(const struct nft_ctx *ctx, int msg_type,
+ 	return __nft_trans_set_add(ctx, msg_type, set, NULL);
+ }
+ 
++static void nft_setelem_data_deactivate(const struct net *net,
++					const struct nft_set *set,
++					struct nft_set_elem *elem);
++
++static int nft_mapelem_deactivate(const struct nft_ctx *ctx,
++				  struct nft_set *set,
++				  const struct nft_set_iter *iter,
++				  struct nft_set_elem *elem)
++{
++	nft_setelem_data_deactivate(ctx->net, set, elem);
++
++	return 0;
++}
++
++struct nft_set_elem_catchall {
++	struct list_head	list;
++	struct rcu_head		rcu;
++	void			*elem;
++};
++
++static void nft_map_catchall_deactivate(const struct nft_ctx *ctx,
++					struct nft_set *set)
++{
++	u8 genmask = nft_genmask_next(ctx->net);
++	struct nft_set_elem_catchall *catchall;
++	struct nft_set_elem elem;
++	struct nft_set_ext *ext;
++
++	list_for_each_entry(catchall, &set->catchall_list, list) {
++		ext = nft_set_elem_ext(set, catchall->elem);
++		if (!nft_set_elem_active(ext, genmask))
++			continue;
++
++		elem.priv = catchall->elem;
++		nft_setelem_data_deactivate(ctx->net, set, &elem);
++		break;
++	}
++}
++
++static void nft_map_deactivate(const struct nft_ctx *ctx, struct nft_set *set)
++{
++	struct nft_set_iter iter = {
++		.genmask	= nft_genmask_next(ctx->net),
++		.fn		= nft_mapelem_deactivate,
++	};
++
++	set->ops->walk(ctx, set, &iter);
++	WARN_ON_ONCE(iter.err);
++
++	nft_map_catchall_deactivate(ctx, set);
++}
++
+ static int nft_delset(const struct nft_ctx *ctx, struct nft_set *set)
+ {
+ 	int err;
+@@ -505,6 +641,9 @@ static int nft_delset(const struct nft_ctx *ctx, struct nft_set *set)
+ 	if (err < 0)
+ 		return err;
+ 
++	if (set->flags & (NFT_SET_MAP | NFT_SET_OBJECT))
++		nft_map_deactivate(ctx, set);
++
+ 	nft_deactivate_next(ctx->net, set);
+ 	ctx->table->use--;
+ 
+@@ -2221,7 +2360,7 @@ static int nft_basechain_init(struct nft_base_chain *basechain, u8 family,
+ 	return 0;
+ }
+ 
+-static int nft_chain_add(struct nft_table *table, struct nft_chain *chain)
++int nft_chain_add(struct nft_table *table, struct nft_chain *chain)
+ {
+ 	int err;
+ 
+@@ -2525,6 +2664,8 @@ static int nf_tables_updchain(struct nft_ctx *ctx, u8 genmask, u8 policy,
+ 	nft_trans_basechain(trans) = basechain;
+ 	INIT_LIST_HEAD(&nft_trans_chain_hooks(trans));
+ 	list_splice(&hook.list, &nft_trans_chain_hooks(trans));
++	if (nla[NFTA_CHAIN_HOOK])
++		module_put(hook.type->owner);
+ 
+ 	nft_trans_commit_list_add_tail(ctx->net, trans);
+ 
+@@ -3427,8 +3568,7 @@ err_fill_rule_info:
+ 	return err;
+ }
+ 
+-static void nf_tables_rule_destroy(const struct nft_ctx *ctx,
+-				   struct nft_rule *rule)
++void nf_tables_rule_destroy(const struct nft_ctx *ctx, struct nft_rule *rule)
+ {
+ 	struct nft_expr *expr, *next;
+ 
+@@ -3445,7 +3585,7 @@ static void nf_tables_rule_destroy(const struct nft_ctx *ctx,
+ 	kfree(rule);
+ }
+ 
+-void nf_tables_rule_release(const struct nft_ctx *ctx, struct nft_rule *rule)
++static void nf_tables_rule_release(const struct nft_ctx *ctx, struct nft_rule *rule)
+ {
+ 	nft_rule_expr_deactivate(ctx, rule, NFT_TRANS_RELEASE);
+ 	nf_tables_rule_destroy(ctx, rule);
+@@ -3533,12 +3673,6 @@ int nft_setelem_validate(const struct nft_ctx *ctx, struct nft_set *set,
+ 	return 0;
+ }
+ 
+-struct nft_set_elem_catchall {
+-	struct list_head	list;
+-	struct rcu_head		rcu;
+-	void			*elem;
+-};
+-
+ int nft_set_catchall_validate(const struct nft_ctx *ctx, struct nft_set *set)
+ {
+ 	u8 genmask = nft_genmask_next(ctx->net);
+@@ -3781,7 +3915,7 @@ err_destroy_flow_rule:
+ 	if (flow)
+ 		nft_flow_rule_destroy(flow);
+ err_release_rule:
+-	nft_rule_expr_deactivate(&ctx, rule, NFT_TRANS_PREPARE);
++	nft_rule_expr_deactivate(&ctx, rule, NFT_TRANS_PREPARE_ERROR);
+ 	nf_tables_rule_destroy(&ctx, rule);
+ err_release_expr:
+ 	for (i = 0; i < n; i++) {
+@@ -4762,6 +4896,9 @@ static int nf_tables_newset(struct sk_buff *skb, const struct nfnl_info *info,
+ 		if (info->nlh->nlmsg_flags & NLM_F_REPLACE)
+ 			return -EOPNOTSUPP;
+ 
++		if (nft_set_is_anonymous(set))
++			return -EOPNOTSUPP;
++
+ 		err = nft_set_expr_alloc(&ctx, set, nla, exprs, &num_exprs, flags);
+ 		if (err < 0)
+ 			return err;
+@@ -4865,7 +5002,7 @@ err_set_expr_alloc:
+ 	for (i = 0; i < set->num_exprs; i++)
+ 		nft_expr_destroy(&ctx, set->exprs[i]);
+ err_set_destroy:
+-	ops->destroy(set);
++	ops->destroy(&ctx, set);
+ err_set_init:
+ 	kfree(set->name);
+ err_set_name:
+@@ -4880,7 +5017,7 @@ static void nft_set_catchall_destroy(const struct nft_ctx *ctx,
+ 
+ 	list_for_each_entry_safe(catchall, next, &set->catchall_list, list) {
+ 		list_del_rcu(&catchall->list);
+-		nft_set_elem_destroy(set, catchall->elem, true);
++		nf_tables_set_elem_destroy(ctx, set, catchall->elem);
+ 		kfree_rcu(catchall, rcu);
+ 	}
+ }
+@@ -4895,7 +5032,7 @@ static void nft_set_destroy(const struct nft_ctx *ctx, struct nft_set *set)
+ 	for (i = 0; i < set->num_exprs; i++)
+ 		nft_expr_destroy(ctx, set->exprs[i]);
+ 
+-	set->ops->destroy(set);
++	set->ops->destroy(ctx, set);
+ 	nft_set_catchall_destroy(ctx, set);
+ 	kfree(set->name);
+ 	kvfree(set);
+@@ -5060,10 +5197,60 @@ static void nf_tables_unbind_set(const struct nft_ctx *ctx, struct nft_set *set,
+ 	}
+ }
+ 
++static void nft_setelem_data_activate(const struct net *net,
++				      const struct nft_set *set,
++				      struct nft_set_elem *elem);
++
++static int nft_mapelem_activate(const struct nft_ctx *ctx,
++				struct nft_set *set,
++				const struct nft_set_iter *iter,
++				struct nft_set_elem *elem)
++{
++	nft_setelem_data_activate(ctx->net, set, elem);
++
++	return 0;
++}
++
++static void nft_map_catchall_activate(const struct nft_ctx *ctx,
++				      struct nft_set *set)
++{
++	u8 genmask = nft_genmask_next(ctx->net);
++	struct nft_set_elem_catchall *catchall;
++	struct nft_set_elem elem;
++	struct nft_set_ext *ext;
++
++	list_for_each_entry(catchall, &set->catchall_list, list) {
++		ext = nft_set_elem_ext(set, catchall->elem);
++		if (!nft_set_elem_active(ext, genmask))
++			continue;
++
++		elem.priv = catchall->elem;
++		nft_setelem_data_activate(ctx->net, set, &elem);
++		break;
++	}
++}
++
++static void nft_map_activate(const struct nft_ctx *ctx, struct nft_set *set)
++{
++	struct nft_set_iter iter = {
++		.genmask	= nft_genmask_next(ctx->net),
++		.fn		= nft_mapelem_activate,
++	};
++
++	set->ops->walk(ctx, set, &iter);
++	WARN_ON_ONCE(iter.err);
++
++	nft_map_catchall_activate(ctx, set);
++}
++
+ void nf_tables_activate_set(const struct nft_ctx *ctx, struct nft_set *set)
+ {
+-	if (nft_set_is_anonymous(set))
++	if (nft_set_is_anonymous(set)) {
++		if (set->flags & (NFT_SET_MAP | NFT_SET_OBJECT))
++			nft_map_activate(ctx, set);
++
+ 		nft_clear(ctx->net, set);
++	}
+ 
+ 	set->use++;
+ }
+@@ -5074,14 +5261,28 @@ void nf_tables_deactivate_set(const struct nft_ctx *ctx, struct nft_set *set,
+ 			      enum nft_trans_phase phase)
+ {
+ 	switch (phase) {
+-	case NFT_TRANS_PREPARE:
++	case NFT_TRANS_PREPARE_ERROR:
++		nft_set_trans_unbind(ctx, set);
+ 		if (nft_set_is_anonymous(set))
+ 			nft_deactivate_next(ctx->net, set);
+ 
++		set->use--;
++		break;
++	case NFT_TRANS_PREPARE:
++		if (nft_set_is_anonymous(set)) {
++			if (set->flags & (NFT_SET_MAP | NFT_SET_OBJECT))
++				nft_map_deactivate(ctx, set);
++
++			nft_deactivate_next(ctx->net, set);
++		}
+ 		set->use--;
+ 		return;
+ 	case NFT_TRANS_ABORT:
+ 	case NFT_TRANS_RELEASE:
++		if (nft_set_is_anonymous(set) &&
++		    set->flags & (NFT_SET_MAP | NFT_SET_OBJECT))
++			nft_map_deactivate(ctx, set);
++
+ 		set->use--;
+ 		fallthrough;
+ 	default:
+@@ -5834,6 +6035,7 @@ static void nft_set_elem_expr_destroy(const struct nft_ctx *ctx,
+ 		__nft_set_elem_expr_destroy(ctx, expr);
+ }
+ 
++/* Drop references and destroy. Called from gc, dynset and abort path. */
+ void nft_set_elem_destroy(const struct nft_set *set, void *elem,
+ 			  bool destroy_expr)
+ {
+@@ -5855,11 +6057,11 @@ void nft_set_elem_destroy(const struct nft_set *set, void *elem,
+ }
+ EXPORT_SYMBOL_GPL(nft_set_elem_destroy);
+ 
+-/* Only called from commit path, nft_setelem_data_deactivate() already deals
+- * with the refcounting from the preparation phase.
++/* Destroy element. References have been already dropped in the preparation
++ * path via nft_setelem_data_deactivate().
+  */
+-static void nf_tables_set_elem_destroy(const struct nft_ctx *ctx,
+-				       const struct nft_set *set, void *elem)
++void nf_tables_set_elem_destroy(const struct nft_ctx *ctx,
++				const struct nft_set *set, void *elem)
+ {
+ 	struct nft_set_ext *ext = nft_set_elem_ext(set, elem);
+ 
+@@ -6492,7 +6694,7 @@ err_elem_free:
+ 	if (obj)
+ 		obj->use--;
+ err_elem_userdata:
+-	nf_tables_set_elem_destroy(ctx, set, elem.priv);
++	nft_set_elem_destroy(set, elem.priv, true);
+ err_parse_data:
+ 	if (nla[NFTA_SET_ELEM_DATA] != NULL)
+ 		nft_data_release(&elem.data.val, desc.type);
+@@ -6537,7 +6739,8 @@ static int nf_tables_newsetelem(struct sk_buff *skb,
+ 	if (IS_ERR(set))
+ 		return PTR_ERR(set);
+ 
+-	if (!list_empty(&set->bindings) && set->flags & NFT_SET_CONSTANT)
++	if (!list_empty(&set->bindings) &&
++	    (set->flags & (NFT_SET_CONSTANT | NFT_SET_ANONYMOUS)))
+ 		return -EBUSY;
+ 
+ 	nft_ctx_init(&ctx, net, skb, info->nlh, family, table, NULL, nla);
+@@ -6570,7 +6773,6 @@ static int nf_tables_newsetelem(struct sk_buff *skb,
+ void nft_data_hold(const struct nft_data *data, enum nft_data_types type)
+ {
+ 	struct nft_chain *chain;
+-	struct nft_rule *rule;
+ 
+ 	if (type == NFT_DATA_VERDICT) {
+ 		switch (data->verdict.code) {
+@@ -6578,15 +6780,6 @@ void nft_data_hold(const struct nft_data *data, enum nft_data_types type)
+ 		case NFT_GOTO:
+ 			chain = data->verdict.chain;
+ 			chain->use++;
+-
+-			if (!nft_chain_is_bound(chain))
+-				break;
+-
+-			chain->table->use++;
+-			list_for_each_entry(rule, &chain->rules, list)
+-				chain->use++;
+-
+-			nft_chain_add(chain->table, chain);
+ 			break;
+ 		}
+ 	}
+@@ -6821,7 +7014,9 @@ static int nf_tables_delsetelem(struct sk_buff *skb,
+ 	set = nft_set_lookup(table, nla[NFTA_SET_ELEM_LIST_SET], genmask);
+ 	if (IS_ERR(set))
+ 		return PTR_ERR(set);
+-	if (!list_empty(&set->bindings) && set->flags & NFT_SET_CONSTANT)
++
++	if (!list_empty(&set->bindings) &&
++	    (set->flags & (NFT_SET_CONSTANT | NFT_SET_ANONYMOUS)))
+ 		return -EBUSY;
+ 
+ 	nft_ctx_init(&ctx, net, skb, info->nlh, family, table, NULL, nla);
+@@ -7596,6 +7791,7 @@ void nf_tables_deactivate_flowtable(const struct nft_ctx *ctx,
+ 				    enum nft_trans_phase phase)
+ {
+ 	switch (phase) {
++	case NFT_TRANS_PREPARE_ERROR:
+ 	case NFT_TRANS_PREPARE:
+ 	case NFT_TRANS_ABORT:
+ 	case NFT_TRANS_RELEASE:
+@@ -8856,7 +9052,7 @@ static void nf_tables_trans_destroy_work(struct work_struct *w)
+ 	synchronize_rcu();
+ 
+ 	list_for_each_entry_safe(trans, next, &head, list) {
+-		list_del(&trans->list);
++		nft_trans_list_del(trans);
+ 		nft_commit_release(trans);
+ 	}
+ }
+@@ -9223,6 +9419,27 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
+ 		return 0;
+ 	}
+ 
++	list_for_each_entry(trans, &nft_net->binding_list, binding_list) {
++		switch (trans->msg_type) {
++		case NFT_MSG_NEWSET:
++			if (!nft_trans_set_update(trans) &&
++			    nft_set_is_anonymous(nft_trans_set(trans)) &&
++			    !nft_trans_set_bound(trans)) {
++				pr_warn_once("nftables ruleset with unbound set\n");
++				return -EINVAL;
++			}
++			break;
++		case NFT_MSG_NEWCHAIN:
++			if (!nft_trans_chain_update(trans) &&
++			    nft_chain_binding(nft_trans_chain(trans)) &&
++			    !nft_trans_chain_bound(trans)) {
++				pr_warn_once("nftables ruleset with unbound chain\n");
++				return -EINVAL;
++			}
++			break;
++		}
++	}
++
+ 	/* 0. Validate ruleset, otherwise roll back for error reporting. */
+ 	if (nf_tables_validate(net) < 0)
+ 		return -EAGAIN;
+@@ -9583,7 +9800,7 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
+ 				kfree(nft_trans_chain_name(trans));
+ 				nft_trans_destroy(trans);
+ 			} else {
+-				if (nft_chain_is_bound(trans->ctx.chain)) {
++				if (nft_trans_chain_bound(trans)) {
+ 					nft_trans_destroy(trans);
+ 					break;
+ 				}
+@@ -9601,6 +9818,10 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
+ 			nft_trans_destroy(trans);
+ 			break;
+ 		case NFT_MSG_NEWRULE:
++			if (nft_trans_rule_bound(trans)) {
++				nft_trans_destroy(trans);
++				break;
++			}
+ 			trans->ctx.chain->use--;
+ 			list_del_rcu(&nft_trans_rule(trans)->list);
+ 			nft_rule_expr_deactivate(&trans->ctx,
+@@ -9635,6 +9856,9 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
+ 		case NFT_MSG_DESTROYSET:
+ 			trans->ctx.table->use++;
+ 			nft_clear(trans->ctx.net, nft_trans_set(trans));
++			if (nft_trans_set(trans)->flags & (NFT_SET_MAP | NFT_SET_OBJECT))
++				nft_map_activate(&trans->ctx, nft_trans_set(trans));
++
+ 			nft_trans_destroy(trans);
+ 			break;
+ 		case NFT_MSG_NEWSETELEM:
+@@ -9715,7 +9939,7 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
+ 
+ 	list_for_each_entry_safe_reverse(trans, next,
+ 					 &nft_net->commit_list, list) {
+-		list_del(&trans->list);
++		nft_trans_list_del(trans);
+ 		nf_tables_abort_release(trans);
+ 	}
+ 
+@@ -10164,22 +10388,12 @@ static int nft_verdict_init(const struct nft_ctx *ctx, struct nft_data *data,
+ static void nft_verdict_uninit(const struct nft_data *data)
+ {
+ 	struct nft_chain *chain;
+-	struct nft_rule *rule;
+ 
+ 	switch (data->verdict.code) {
+ 	case NFT_JUMP:
+ 	case NFT_GOTO:
+ 		chain = data->verdict.chain;
+ 		chain->use--;
+-
+-		if (!nft_chain_is_bound(chain))
+-			break;
+-
+-		chain->table->use--;
+-		list_for_each_entry(rule, &chain->rules, list)
+-			chain->use--;
+-
+-		nft_chain_del(chain);
+ 		break;
+ 	}
+ }
+@@ -10414,6 +10628,9 @@ static void __nft_release_table(struct net *net, struct nft_table *table)
+ 	list_for_each_entry_safe(set, ns, &table->sets, list) {
+ 		list_del(&set->list);
+ 		table->use--;
++		if (set->flags & (NFT_SET_MAP | NFT_SET_OBJECT))
++			nft_map_deactivate(&ctx, set);
++
+ 		nft_set_destroy(&ctx, set);
+ 	}
+ 	list_for_each_entry_safe(obj, ne, &table->objects, list) {
+@@ -10498,6 +10715,7 @@ static int __net_init nf_tables_init_net(struct net *net)
+ 
+ 	INIT_LIST_HEAD(&nft_net->tables);
+ 	INIT_LIST_HEAD(&nft_net->commit_list);
++	INIT_LIST_HEAD(&nft_net->binding_list);
+ 	INIT_LIST_HEAD(&nft_net->module_list);
+ 	INIT_LIST_HEAD(&nft_net->notify_list);
+ 	mutex_init(&nft_net->commit_mutex);
+diff --git a/net/netfilter/nfnetlink_osf.c b/net/netfilter/nfnetlink_osf.c
+index ee6840bd59337..8f1bfa6ccc2d9 100644
+--- a/net/netfilter/nfnetlink_osf.c
++++ b/net/netfilter/nfnetlink_osf.c
+@@ -439,3 +439,4 @@ module_init(nfnl_osf_init);
+ module_exit(nfnl_osf_fini);
+ 
+ MODULE_LICENSE("GPL");
++MODULE_ALIAS_NFNL_SUBSYS(NFNL_SUBSYS_OSF);
+diff --git a/net/netfilter/nft_immediate.c b/net/netfilter/nft_immediate.c
+index c9d2f7c29f530..3d76ebfe8939b 100644
+--- a/net/netfilter/nft_immediate.c
++++ b/net/netfilter/nft_immediate.c
+@@ -76,11 +76,9 @@ static int nft_immediate_init(const struct nft_ctx *ctx,
+ 		switch (priv->data.verdict.code) {
+ 		case NFT_JUMP:
+ 		case NFT_GOTO:
+-			if (nft_chain_is_bound(chain)) {
+-				err = -EBUSY;
+-				goto err1;
+-			}
+-			chain->bound = true;
++			err = nf_tables_bind_chain(ctx, chain);
++			if (err < 0)
++				return err;
+ 			break;
+ 		default:
+ 			break;
+@@ -98,6 +96,31 @@ static void nft_immediate_activate(const struct nft_ctx *ctx,
+ 				   const struct nft_expr *expr)
+ {
+ 	const struct nft_immediate_expr *priv = nft_expr_priv(expr);
++	const struct nft_data *data = &priv->data;
++	struct nft_ctx chain_ctx;
++	struct nft_chain *chain;
++	struct nft_rule *rule;
++
++	if (priv->dreg == NFT_REG_VERDICT) {
++		switch (data->verdict.code) {
++		case NFT_JUMP:
++		case NFT_GOTO:
++			chain = data->verdict.chain;
++			if (!nft_chain_binding(chain))
++				break;
++
++			chain_ctx = *ctx;
++			chain_ctx.chain = chain;
++
++			list_for_each_entry(rule, &chain->rules, list)
++				nft_rule_expr_activate(&chain_ctx, rule);
++
++			nft_clear(ctx->net, chain);
++			break;
++		default:
++			break;
++		}
++	}
+ 
+ 	return nft_data_hold(&priv->data, nft_dreg_to_type(priv->dreg));
+ }
+@@ -107,6 +130,43 @@ static void nft_immediate_deactivate(const struct nft_ctx *ctx,
+ 				     enum nft_trans_phase phase)
+ {
+ 	const struct nft_immediate_expr *priv = nft_expr_priv(expr);
++	const struct nft_data *data = &priv->data;
++	struct nft_ctx chain_ctx;
++	struct nft_chain *chain;
++	struct nft_rule *rule;
++
++	if (priv->dreg == NFT_REG_VERDICT) {
++		switch (data->verdict.code) {
++		case NFT_JUMP:
++		case NFT_GOTO:
++			chain = data->verdict.chain;
++			if (!nft_chain_binding(chain))
++				break;
++
++			chain_ctx = *ctx;
++			chain_ctx.chain = chain;
++
++			list_for_each_entry(rule, &chain->rules, list)
++				nft_rule_expr_deactivate(&chain_ctx, rule, phase);
++
++			switch (phase) {
++			case NFT_TRANS_PREPARE_ERROR:
++				nf_tables_unbind_chain(ctx, chain);
++				fallthrough;
++			case NFT_TRANS_PREPARE:
++				nft_deactivate_next(ctx->net, chain);
++				break;
++			default:
++				nft_chain_del(chain);
++				chain->bound = false;
++				chain->table->use--;
++				break;
++			}
++			break;
++		default:
++			break;
++		}
++	}
+ 
+ 	if (phase == NFT_TRANS_COMMIT)
+ 		return;
+@@ -131,15 +191,27 @@ static void nft_immediate_destroy(const struct nft_ctx *ctx,
+ 	case NFT_GOTO:
+ 		chain = data->verdict.chain;
+ 
+-		if (!nft_chain_is_bound(chain))
++		if (!nft_chain_binding(chain))
++			break;
++
++		/* Rule construction failed, but chain is already bound:
++		 * let the transaction records release this chain and its rules.
++		 */
++		if (chain->bound) {
++			chain->use--;
+ 			break;
++		}
+ 
++		/* Rule has been deleted, release chain and its rules. */
+ 		chain_ctx = *ctx;
+ 		chain_ctx.chain = chain;
+ 
+-		list_for_each_entry_safe(rule, n, &chain->rules, list)
+-			nf_tables_rule_release(&chain_ctx, rule);
+-
++		chain->use--;
++		list_for_each_entry_safe(rule, n, &chain->rules, list) {
++			chain->use--;
++			list_del(&rule->list);
++			nf_tables_rule_destroy(&chain_ctx, rule);
++		}
+ 		nf_tables_chain_destroy(&chain_ctx);
+ 		break;
+ 	default:
+diff --git a/net/netfilter/nft_set_bitmap.c b/net/netfilter/nft_set_bitmap.c
+index 96081ac8d2b4c..1e5e7a181e0bc 100644
+--- a/net/netfilter/nft_set_bitmap.c
++++ b/net/netfilter/nft_set_bitmap.c
+@@ -271,13 +271,14 @@ static int nft_bitmap_init(const struct nft_set *set,
+ 	return 0;
+ }
+ 
+-static void nft_bitmap_destroy(const struct nft_set *set)
++static void nft_bitmap_destroy(const struct nft_ctx *ctx,
++			       const struct nft_set *set)
+ {
+ 	struct nft_bitmap *priv = nft_set_priv(set);
+ 	struct nft_bitmap_elem *be, *n;
+ 
+ 	list_for_each_entry_safe(be, n, &priv->list, head)
+-		nft_set_elem_destroy(set, be, true);
++		nf_tables_set_elem_destroy(ctx, set, be);
+ }
+ 
+ static bool nft_bitmap_estimate(const struct nft_set_desc *desc, u32 features,
+diff --git a/net/netfilter/nft_set_hash.c b/net/netfilter/nft_set_hash.c
+index 76de6c8d98655..0b73cb0e752f7 100644
+--- a/net/netfilter/nft_set_hash.c
++++ b/net/netfilter/nft_set_hash.c
+@@ -400,19 +400,31 @@ static int nft_rhash_init(const struct nft_set *set,
+ 	return 0;
+ }
+ 
++struct nft_rhash_ctx {
++	const struct nft_ctx	ctx;
++	const struct nft_set	*set;
++};
++
+ static void nft_rhash_elem_destroy(void *ptr, void *arg)
+ {
+-	nft_set_elem_destroy(arg, ptr, true);
++	struct nft_rhash_ctx *rhash_ctx = arg;
++
++	nf_tables_set_elem_destroy(&rhash_ctx->ctx, rhash_ctx->set, ptr);
+ }
+ 
+-static void nft_rhash_destroy(const struct nft_set *set)
++static void nft_rhash_destroy(const struct nft_ctx *ctx,
++			      const struct nft_set *set)
+ {
+ 	struct nft_rhash *priv = nft_set_priv(set);
++	struct nft_rhash_ctx rhash_ctx = {
++		.ctx	= *ctx,
++		.set	= set,
++	};
+ 
+ 	cancel_delayed_work_sync(&priv->gc_work);
+ 	rcu_barrier();
+ 	rhashtable_free_and_destroy(&priv->ht, nft_rhash_elem_destroy,
+-				    (void *)set);
++				    (void *)&rhash_ctx);
+ }
+ 
+ /* Number of buckets is stored in u32, so cap our result to 1U<<31 */
+@@ -643,7 +655,8 @@ static int nft_hash_init(const struct nft_set *set,
+ 	return 0;
+ }
+ 
+-static void nft_hash_destroy(const struct nft_set *set)
++static void nft_hash_destroy(const struct nft_ctx *ctx,
++			     const struct nft_set *set)
+ {
+ 	struct nft_hash *priv = nft_set_priv(set);
+ 	struct nft_hash_elem *he;
+@@ -653,7 +666,7 @@ static void nft_hash_destroy(const struct nft_set *set)
+ 	for (i = 0; i < priv->buckets; i++) {
+ 		hlist_for_each_entry_safe(he, next, &priv->table[i], node) {
+ 			hlist_del_rcu(&he->node);
+-			nft_set_elem_destroy(set, he, true);
++			nf_tables_set_elem_destroy(ctx, set, he);
+ 		}
+ 	}
+ }
+diff --git a/net/netfilter/nft_set_pipapo.c b/net/netfilter/nft_set_pipapo.c
+index 15e451dc3fc46..0452ee586c1cc 100644
+--- a/net/netfilter/nft_set_pipapo.c
++++ b/net/netfilter/nft_set_pipapo.c
+@@ -1974,12 +1974,16 @@ static void nft_pipapo_walk(const struct nft_ctx *ctx, struct nft_set *set,
+ 			    struct nft_set_iter *iter)
+ {
+ 	struct nft_pipapo *priv = nft_set_priv(set);
++	struct net *net = read_pnet(&set->net);
+ 	struct nft_pipapo_match *m;
+ 	struct nft_pipapo_field *f;
+ 	int i, r;
+ 
+ 	rcu_read_lock();
+-	m = rcu_dereference(priv->match);
++	if (iter->genmask == nft_genmask_cur(net))
++		m = rcu_dereference(priv->match);
++	else
++		m = priv->clone;
+ 
+ 	if (unlikely(!m))
+ 		goto out;
+@@ -2148,10 +2152,12 @@ out_scratch:
+ 
+ /**
+  * nft_set_pipapo_match_destroy() - Destroy elements from key mapping array
++ * @ctx:	context
+  * @set:	nftables API set representation
+  * @m:		matching data pointing to key mapping array
+  */
+-static void nft_set_pipapo_match_destroy(const struct nft_set *set,
++static void nft_set_pipapo_match_destroy(const struct nft_ctx *ctx,
++					 const struct nft_set *set,
+ 					 struct nft_pipapo_match *m)
+ {
+ 	struct nft_pipapo_field *f;
+@@ -2168,15 +2174,17 @@ static void nft_set_pipapo_match_destroy(const struct nft_set *set,
+ 
+ 		e = f->mt[r].e;
+ 
+-		nft_set_elem_destroy(set, e, true);
++		nf_tables_set_elem_destroy(ctx, set, e);
+ 	}
+ }
+ 
+ /**
+  * nft_pipapo_destroy() - Free private data for set and all committed elements
++ * @ctx:	context
+  * @set:	nftables API set representation
+  */
+-static void nft_pipapo_destroy(const struct nft_set *set)
++static void nft_pipapo_destroy(const struct nft_ctx *ctx,
++			       const struct nft_set *set)
+ {
+ 	struct nft_pipapo *priv = nft_set_priv(set);
+ 	struct nft_pipapo_match *m;
+@@ -2186,7 +2194,7 @@ static void nft_pipapo_destroy(const struct nft_set *set)
+ 	if (m) {
+ 		rcu_barrier();
+ 
+-		nft_set_pipapo_match_destroy(set, m);
++		nft_set_pipapo_match_destroy(ctx, set, m);
+ 
+ #ifdef NFT_PIPAPO_ALIGN
+ 		free_percpu(m->scratch_aligned);
+@@ -2203,7 +2211,7 @@ static void nft_pipapo_destroy(const struct nft_set *set)
+ 		m = priv->clone;
+ 
+ 		if (priv->dirty)
+-			nft_set_pipapo_match_destroy(set, m);
++			nft_set_pipapo_match_destroy(ctx, set, m);
+ 
+ #ifdef NFT_PIPAPO_ALIGN
+ 		free_percpu(priv->clone->scratch_aligned);
+diff --git a/net/netfilter/nft_set_rbtree.c b/net/netfilter/nft_set_rbtree.c
+index 2f114aa10f1a7..5c05c9b990fba 100644
+--- a/net/netfilter/nft_set_rbtree.c
++++ b/net/netfilter/nft_set_rbtree.c
+@@ -664,7 +664,8 @@ static int nft_rbtree_init(const struct nft_set *set,
+ 	return 0;
+ }
+ 
+-static void nft_rbtree_destroy(const struct nft_set *set)
++static void nft_rbtree_destroy(const struct nft_ctx *ctx,
++			       const struct nft_set *set)
+ {
+ 	struct nft_rbtree *priv = nft_set_priv(set);
+ 	struct nft_rbtree_elem *rbe;
+@@ -675,7 +676,7 @@ static void nft_rbtree_destroy(const struct nft_set *set)
+ 	while ((node = priv->root.rb_node) != NULL) {
+ 		rb_erase(node, &priv->root);
+ 		rbe = rb_entry(node, struct nft_rbtree_elem, node);
+-		nft_set_elem_destroy(set, rbe, true);
++		nf_tables_set_elem_destroy(ctx, set, rbe);
+ 	}
+ }
+ 
+diff --git a/net/netfilter/xt_osf.c b/net/netfilter/xt_osf.c
+index e1990baf3a3b7..dc9485854002a 100644
+--- a/net/netfilter/xt_osf.c
++++ b/net/netfilter/xt_osf.c
+@@ -71,4 +71,3 @@ MODULE_AUTHOR("Evgeniy Polyakov <zbr@ioremap.net>");
+ MODULE_DESCRIPTION("Passive OS fingerprint matching.");
+ MODULE_ALIAS("ipt_osf");
+ MODULE_ALIAS("ip6t_osf");
+-MODULE_ALIAS_NFNL_SUBSYS(NFNL_SUBSYS_OSF);
+diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
+index 3f7311529cc00..34c90c9c2fcad 100644
+--- a/net/sched/sch_api.c
++++ b/net/sched/sch_api.c
+@@ -2324,7 +2324,9 @@ static struct pernet_operations psched_net_ops = {
+ 	.exit = psched_net_exit,
+ };
+ 
++#if IS_ENABLED(CONFIG_RETPOLINE)
+ DEFINE_STATIC_KEY_FALSE(tc_skip_wrapper);
++#endif
+ 
+ static int __init pktsched_init(void)
+ {
+diff --git a/net/sched/sch_netem.c b/net/sched/sch_netem.c
+index 6ef3021e1169a..e79be1b3e74da 100644
+--- a/net/sched/sch_netem.c
++++ b/net/sched/sch_netem.c
+@@ -966,6 +966,7 @@ static int netem_change(struct Qdisc *sch, struct nlattr *opt,
+ 	if (ret < 0)
+ 		return ret;
+ 
++	sch_tree_lock(sch);
+ 	/* backup q->clg and q->loss_model */
+ 	old_clg = q->clg;
+ 	old_loss_model = q->loss_model;
+@@ -974,7 +975,7 @@ static int netem_change(struct Qdisc *sch, struct nlattr *opt,
+ 		ret = get_loss_clg(q, tb[TCA_NETEM_LOSS]);
+ 		if (ret) {
+ 			q->loss_model = old_loss_model;
+-			return ret;
++			goto unlock;
+ 		}
+ 	} else {
+ 		q->loss_model = CLG_RANDOM;
+@@ -1041,6 +1042,8 @@ static int netem_change(struct Qdisc *sch, struct nlattr *opt,
+ 	/* capping jitter to the range acceptable by tabledist() */
+ 	q->jitter = min_t(s64, abs(q->jitter), INT_MAX);
+ 
++unlock:
++	sch_tree_unlock(sch);
+ 	return ret;
+ 
+ get_table_failure:
+@@ -1050,7 +1053,8 @@ get_table_failure:
+ 	 */
+ 	q->clg = old_clg;
+ 	q->loss_model = old_loss_model;
+-	return ret;
++
++	goto unlock;
+ }
+ 
+ static int netem_init(struct Qdisc *sch, struct nlattr *opt,
+diff --git a/net/xfrm/xfrm_input.c b/net/xfrm/xfrm_input.c
+index 436d29640ac2c..9f294a20dcece 100644
+--- a/net/xfrm/xfrm_input.c
++++ b/net/xfrm/xfrm_input.c
+@@ -131,6 +131,7 @@ struct sec_path *secpath_set(struct sk_buff *skb)
+ 	memset(sp->ovec, 0, sizeof(sp->ovec));
+ 	sp->olen = 0;
+ 	sp->len = 0;
++	sp->verified_cnt = 0;
+ 
+ 	return sp;
+ }
+diff --git a/net/xfrm/xfrm_interface_core.c b/net/xfrm/xfrm_interface_core.c
+index 1f99dc4690271..35279c220bd78 100644
+--- a/net/xfrm/xfrm_interface_core.c
++++ b/net/xfrm/xfrm_interface_core.c
+@@ -310,6 +310,52 @@ static void xfrmi_scrub_packet(struct sk_buff *skb, bool xnet)
+ 	skb->mark = 0;
+ }
+ 
++static int xfrmi_input(struct sk_buff *skb, int nexthdr, __be32 spi,
++		       int encap_type, unsigned short family)
++{
++	struct sec_path *sp;
++
++	sp = skb_sec_path(skb);
++	if (sp && (sp->len || sp->olen) &&
++	    !xfrm_policy_check(NULL, XFRM_POLICY_IN, skb, family))
++		goto discard;
++
++	XFRM_SPI_SKB_CB(skb)->family = family;
++	if (family == AF_INET) {
++		XFRM_SPI_SKB_CB(skb)->daddroff = offsetof(struct iphdr, daddr);
++		XFRM_TUNNEL_SKB_CB(skb)->tunnel.ip4 = NULL;
++	} else {
++		XFRM_SPI_SKB_CB(skb)->daddroff = offsetof(struct ipv6hdr, daddr);
++		XFRM_TUNNEL_SKB_CB(skb)->tunnel.ip6 = NULL;
++	}
++
++	return xfrm_input(skb, nexthdr, spi, encap_type);
++discard:
++	kfree_skb(skb);
++	return 0;
++}
++
++static int xfrmi4_rcv(struct sk_buff *skb)
++{
++	return xfrmi_input(skb, ip_hdr(skb)->protocol, 0, 0, AF_INET);
++}
++
++static int xfrmi6_rcv(struct sk_buff *skb)
++{
++	return xfrmi_input(skb, skb_network_header(skb)[IP6CB(skb)->nhoff],
++			   0, 0, AF_INET6);
++}
++
++static int xfrmi4_input(struct sk_buff *skb, int nexthdr, __be32 spi, int encap_type)
++{
++	return xfrmi_input(skb, nexthdr, spi, encap_type, AF_INET);
++}
++
++static int xfrmi6_input(struct sk_buff *skb, int nexthdr, __be32 spi, int encap_type)
++{
++	return xfrmi_input(skb, nexthdr, spi, encap_type, AF_INET6);
++}
++
+ static int xfrmi_rcv_cb(struct sk_buff *skb, int err)
+ {
+ 	const struct xfrm_mode *inner_mode;
+@@ -945,8 +991,8 @@ static struct pernet_operations xfrmi_net_ops = {
+ };
+ 
+ static struct xfrm6_protocol xfrmi_esp6_protocol __read_mostly = {
+-	.handler	=	xfrm6_rcv,
+-	.input_handler	=	xfrm_input,
++	.handler	=	xfrmi6_rcv,
++	.input_handler	=	xfrmi6_input,
+ 	.cb_handler	=	xfrmi_rcv_cb,
+ 	.err_handler	=	xfrmi6_err,
+ 	.priority	=	10,
+@@ -996,8 +1042,8 @@ static struct xfrm6_tunnel xfrmi_ip6ip_handler __read_mostly = {
+ #endif
+ 
+ static struct xfrm4_protocol xfrmi_esp4_protocol __read_mostly = {
+-	.handler	=	xfrm4_rcv,
+-	.input_handler	=	xfrm_input,
++	.handler	=	xfrmi4_rcv,
++	.input_handler	=	xfrmi4_input,
+ 	.cb_handler	=	xfrmi_rcv_cb,
+ 	.err_handler	=	xfrmi4_err,
+ 	.priority	=	10,
+diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c
+index 6d15788b51231..e7617c9959c31 100644
+--- a/net/xfrm/xfrm_policy.c
++++ b/net/xfrm/xfrm_policy.c
+@@ -1831,6 +1831,7 @@ again:
+ 
+ 		__xfrm_policy_unlink(pol, dir);
+ 		spin_unlock_bh(&net->xfrm.xfrm_policy_lock);
++		xfrm_dev_policy_delete(pol);
+ 		cnt++;
+ 		xfrm_audit_policy_delete(pol, 1, task_valid);
+ 		xfrm_policy_kill(pol);
+@@ -1869,6 +1870,7 @@ again:
+ 
+ 		__xfrm_policy_unlink(pol, dir);
+ 		spin_unlock_bh(&net->xfrm.xfrm_policy_lock);
++		xfrm_dev_policy_delete(pol);
+ 		cnt++;
+ 		xfrm_audit_policy_delete(pol, 1, task_valid);
+ 		xfrm_policy_kill(pol);
+@@ -3349,6 +3351,13 @@ xfrm_policy_ok(const struct xfrm_tmpl *tmpl, const struct sec_path *sp, int star
+ 		if (xfrm_state_ok(tmpl, sp->xvec[idx], family, if_id))
+ 			return ++idx;
+ 		if (sp->xvec[idx]->props.mode != XFRM_MODE_TRANSPORT) {
++			if (idx < sp->verified_cnt) {
++				/* Secpath entry previously verified, consider optional and
++				 * continue searching
++				 */
++				continue;
++			}
++
+ 			if (start == -1)
+ 				start = -2-idx;
+ 			break;
+@@ -3723,6 +3732,9 @@ int __xfrm_policy_check(struct sock *sk, int dir, struct sk_buff *skb,
+ 		 * Order is _important_. Later we will implement
+ 		 * some barriers, but at the moment barriers
+ 		 * are implied between each two transformations.
++		 * Upon success, marks secpath entries as having been
++		 * verified to allow them to be skipped in future policy
++		 * checks (e.g. nested tunnels).
+ 		 */
+ 		for (i = xfrm_nr-1, k = 0; i >= 0; i--) {
+ 			k = xfrm_policy_ok(tpp[i], sp, k, family, if_id);
+@@ -3741,6 +3753,8 @@ int __xfrm_policy_check(struct sock *sk, int dir, struct sk_buff *skb,
+ 		}
+ 
+ 		xfrm_pols_put(pols, npols);
++		sp->verified_cnt = k;
++
+ 		return 1;
+ 	}
+ 	XFRM_INC_STATS(net, LINUX_MIB_XFRMINPOLBLOCK);
+diff --git a/scripts/gfp-translate b/scripts/gfp-translate
+index b2ce416d944b3..6c9aed17cf563 100755
+--- a/scripts/gfp-translate
++++ b/scripts/gfp-translate
+@@ -63,11 +63,11 @@ fi
+ 
+ # Extract GFP flags from the kernel source
+ TMPFILE=`mktemp -t gfptranslate-XXXXXX` || exit 1
+-grep -q ___GFP $SOURCE/include/linux/gfp.h
++grep -q ___GFP $SOURCE/include/linux/gfp_types.h
+ if [ $? -eq 0 ]; then
+-	grep "^#define ___GFP" $SOURCE/include/linux/gfp.h | sed -e 's/u$//' | grep -v GFP_BITS > $TMPFILE
++	grep "^#define ___GFP" $SOURCE/include/linux/gfp_types.h | sed -e 's/u$//' | grep -v GFP_BITS > $TMPFILE
+ else
+-	grep "^#define __GFP" $SOURCE/include/linux/gfp.h | sed -e 's/(__force gfp_t)//' | sed -e 's/u)/)/' | grep -v GFP_BITS | sed -e 's/)\//) \//' > $TMPFILE
++	grep "^#define __GFP" $SOURCE/include/linux/gfp_types.h | sed -e 's/(__force gfp_t)//' | sed -e 's/u)/)/' | grep -v GFP_BITS | sed -e 's/)\//) \//' > $TMPFILE
+ fi
+ 
+ # Parse the flags
+diff --git a/scripts/mod/modpost.c b/scripts/mod/modpost.c
+index 9466b6a2abae4..5b3964b39709f 100644
+--- a/scripts/mod/modpost.c
++++ b/scripts/mod/modpost.c
+@@ -1985,6 +1985,11 @@ static void add_header(struct buffer *b, struct module *mod)
+ 	buf_printf(b, "#include <linux/vermagic.h>\n");
+ 	buf_printf(b, "#include <linux/compiler.h>\n");
+ 	buf_printf(b, "\n");
++	buf_printf(b, "#ifdef CONFIG_UNWINDER_ORC\n");
++	buf_printf(b, "#include <asm/orc_header.h>\n");
++	buf_printf(b, "ORC_HEADER;\n");
++	buf_printf(b, "#endif\n");
++	buf_printf(b, "\n");
+ 	buf_printf(b, "BUILD_SALT;\n");
+ 	buf_printf(b, "BUILD_LTO_INFO;\n");
+ 	buf_printf(b, "\n");
+diff --git a/scripts/orc_hash.sh b/scripts/orc_hash.sh
+new file mode 100644
+index 0000000000000..466611aa0053f
+--- /dev/null
++++ b/scripts/orc_hash.sh
+@@ -0,0 +1,16 @@
++#!/bin/sh
++# SPDX-License-Identifier: GPL-2.0-or-later
++# Copyright (c) Meta Platforms, Inc. and affiliates.
++
++set -e
++
++printf '%s' '#define ORC_HASH '
++
++awk '
++/^#define ORC_(REG|TYPE)_/ { print }
++/^struct orc_entry {$/ { p=1 }
++p { print }
++/^}/ { p=0 }' |
++	sha1sum |
++	cut -d " " -f 1 |
++	sed 's/\([0-9a-f]\{2\}\)/0x\1,/g'
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 920e44ba998a5..eb049014f87ac 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -9594,6 +9594,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x10ec, 0x124c, "Intel Reference board", ALC295_FIXUP_CHROME_BOOK),
+ 	SND_PCI_QUIRK(0x10ec, 0x1252, "Intel Reference board", ALC295_FIXUP_CHROME_BOOK),
+ 	SND_PCI_QUIRK(0x10ec, 0x1254, "Intel Reference board", ALC295_FIXUP_CHROME_BOOK),
++	SND_PCI_QUIRK(0x10ec, 0x12cc, "Intel Reference board", ALC225_FIXUP_HEADSET_JACK),
+ 	SND_PCI_QUIRK(0x10f7, 0x8338, "Panasonic CF-SZ6", ALC269_FIXUP_HEADSET_MODE),
+ 	SND_PCI_QUIRK(0x144d, 0xc109, "Samsung Ativ book 9 (NP900X3G)", ALC269_FIXUP_INV_DMIC),
+ 	SND_PCI_QUIRK(0x144d, 0xc169, "Samsung Notebook 9 Pen (NP930SBE-K01US)", ALC298_FIXUP_SAMSUNG_AMP),
+@@ -9814,6 +9815,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x8086, 0x2074, "Intel NUC 8", ALC233_FIXUP_INTEL_NUC8_DMIC),
+ 	SND_PCI_QUIRK(0x8086, 0x2080, "Intel NUC 8 Rugged", ALC256_FIXUP_INTEL_NUC8_RUGGED),
+ 	SND_PCI_QUIRK(0x8086, 0x2081, "Intel NUC 10", ALC256_FIXUP_INTEL_NUC10),
++	SND_PCI_QUIRK(0x8086, 0x3038, "Intel NUC 13", ALC225_FIXUP_HEADSET_JACK),
+ 	SND_PCI_QUIRK(0xf111, 0x0001, "Framework Laptop", ALC295_FIXUP_FRAMEWORK_LAPTOP_MIC_NO_PRESENCE),
+ 
+ #if 0
+diff --git a/sound/soc/amd/yc/acp6x-mach.c b/sound/soc/amd/yc/acp6x-mach.c
+index 84b401b685f7f..c1ca3ceac5f2f 100644
+--- a/sound/soc/amd/yc/acp6x-mach.c
++++ b/sound/soc/amd/yc/acp6x-mach.c
+@@ -171,6 +171,13 @@ static const struct dmi_system_id yc_acp_quirk_table[] = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "21CL"),
+ 		}
+ 	},
++	{
++		.driver_data = &acp6x_card,
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "21EF"),
++		}
++	},
+ 	{
+ 		.driver_data = &acp6x_card,
+ 		.matches = {
+diff --git a/sound/soc/codecs/nau8824.c b/sound/soc/codecs/nau8824.c
+index 4f19fd9b65d11..5a4db8944d06a 100644
+--- a/sound/soc/codecs/nau8824.c
++++ b/sound/soc/codecs/nau8824.c
+@@ -1903,6 +1903,30 @@ static const struct dmi_system_id nau8824_quirk_table[] = {
+ 		},
+ 		.driver_data = (void *)(NAU8824_MONO_SPEAKER),
+ 	},
++	{
++		/* Positivo CW14Q01P */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Positivo Tecnologia SA"),
++			DMI_MATCH(DMI_BOARD_NAME, "CW14Q01P"),
++		},
++		.driver_data = (void *)(NAU8824_JD_ACTIVE_HIGH),
++	},
++	{
++		/* Positivo K1424G */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Positivo Tecnologia SA"),
++			DMI_MATCH(DMI_BOARD_NAME, "K1424G"),
++		},
++		.driver_data = (void *)(NAU8824_JD_ACTIVE_HIGH),
++	},
++	{
++		/* Positivo N14ZP74G */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Positivo Tecnologia SA"),
++			DMI_MATCH(DMI_BOARD_NAME, "N14ZP74G"),
++		},
++		.driver_data = (void *)(NAU8824_JD_ACTIVE_HIGH),
++	},
+ 	{}
+ };
+ 
+diff --git a/sound/soc/codecs/wcd938x-sdw.c b/sound/soc/codecs/wcd938x-sdw.c
+index 402286dfaea44..9c10200ff34b2 100644
+--- a/sound/soc/codecs/wcd938x-sdw.c
++++ b/sound/soc/codecs/wcd938x-sdw.c
+@@ -1190,7 +1190,6 @@ static const struct regmap_config wcd938x_regmap_config = {
+ 	.readable_reg = wcd938x_readable_register,
+ 	.writeable_reg = wcd938x_writeable_register,
+ 	.volatile_reg = wcd938x_volatile_register,
+-	.can_multi_write = true,
+ };
+ 
+ static const struct sdw_slave_ops wcd9380_slave_ops = {
+diff --git a/sound/soc/fsl/fsl_sai.c b/sound/soc/fsl/fsl_sai.c
+index 990bba0be1fb1..8a64f3c1d1556 100644
+--- a/sound/soc/fsl/fsl_sai.c
++++ b/sound/soc/fsl/fsl_sai.c
+@@ -491,14 +491,21 @@ static int fsl_sai_set_bclk(struct snd_soc_dai *dai, bool tx, u32 freq)
+ 	regmap_update_bits(sai->regmap, reg, FSL_SAI_CR2_MSEL_MASK,
+ 			   FSL_SAI_CR2_MSEL(sai->mclk_id[tx]));
+ 
+-	if (savediv == 1)
++	if (savediv == 1) {
+ 		regmap_update_bits(sai->regmap, reg,
+ 				   FSL_SAI_CR2_DIV_MASK | FSL_SAI_CR2_BYP,
+ 				   FSL_SAI_CR2_BYP);
+-	else
++		if (fsl_sai_dir_is_synced(sai, adir))
++			regmap_update_bits(sai->regmap, FSL_SAI_xCR2(tx, ofs),
++					   FSL_SAI_CR2_BCI, FSL_SAI_CR2_BCI);
++		else
++			regmap_update_bits(sai->regmap, FSL_SAI_xCR2(tx, ofs),
++					   FSL_SAI_CR2_BCI, 0);
++	} else {
+ 		regmap_update_bits(sai->regmap, reg,
+ 				   FSL_SAI_CR2_DIV_MASK | FSL_SAI_CR2_BYP,
+ 				   savediv / 2 - 1);
++	}
+ 
+ 	if (sai->soc_data->max_register >= FSL_SAI_MCTL) {
+ 		/* SAI is in master mode at this point, so enable MCLK */
+diff --git a/sound/soc/fsl/fsl_sai.h b/sound/soc/fsl/fsl_sai.h
+index 197748a888d5f..a53c4f0e25faf 100644
+--- a/sound/soc/fsl/fsl_sai.h
++++ b/sound/soc/fsl/fsl_sai.h
+@@ -116,6 +116,7 @@
+ 
+ /* SAI Transmit and Receive Configuration 2 Register */
+ #define FSL_SAI_CR2_SYNC	BIT(30)
++#define FSL_SAI_CR2_BCI		BIT(28)
+ #define FSL_SAI_CR2_MSEL_MASK	(0x3 << 26)
+ #define FSL_SAI_CR2_MSEL_BUS	0
+ #define FSL_SAI_CR2_MSEL_MCLK1	BIT(26)
+diff --git a/sound/soc/generic/simple-card.c b/sound/soc/generic/simple-card.c
+index e98932c167542..5f8468ff36562 100644
+--- a/sound/soc/generic/simple-card.c
++++ b/sound/soc/generic/simple-card.c
+@@ -416,6 +416,7 @@ static int __simple_for_each_link(struct asoc_simple_priv *priv,
+ 
+ 			if (ret < 0) {
+ 				of_node_put(codec);
++				of_node_put(plat);
+ 				of_node_put(np);
+ 				goto error;
+ 			}
+diff --git a/tools/testing/selftests/net/fcnal-test.sh b/tools/testing/selftests/net/fcnal-test.sh
+index 21ca91473c095..ee6880ac3e5ed 100755
+--- a/tools/testing/selftests/net/fcnal-test.sh
++++ b/tools/testing/selftests/net/fcnal-test.sh
+@@ -92,6 +92,13 @@ NSC_CMD="ip netns exec ${NSC}"
+ 
+ which ping6 > /dev/null 2>&1 && ping6=$(which ping6) || ping6=$(which ping)
+ 
++# Check if FIPS mode is enabled
++if [ -f /proc/sys/crypto/fips_enabled ]; then
++	fips_enabled=`cat /proc/sys/crypto/fips_enabled`
++else
++	fips_enabled=0
++fi
++
+ ################################################################################
+ # utilities
+ 
+@@ -1216,7 +1223,7 @@ ipv4_tcp_novrf()
+ 	run_cmd nettest -d ${NSA_DEV} -r ${a}
+ 	log_test_addr ${a} $? 1 "No server, device client, local conn"
+ 
+-	ipv4_tcp_md5_novrf
++	[ "$fips_enabled" = "1" ] || ipv4_tcp_md5_novrf
+ }
+ 
+ ipv4_tcp_vrf()
+@@ -1270,9 +1277,11 @@ ipv4_tcp_vrf()
+ 	log_test_addr ${a} $? 1 "Global server, local connection"
+ 
+ 	# run MD5 tests
+-	setup_vrf_dup
+-	ipv4_tcp_md5
+-	cleanup_vrf_dup
++	if [ "$fips_enabled" = "0" ]; then
++		setup_vrf_dup
++		ipv4_tcp_md5
++		cleanup_vrf_dup
++	fi
+ 
+ 	#
+ 	# enable VRF global server
+@@ -2772,7 +2781,7 @@ ipv6_tcp_novrf()
+ 		log_test_addr ${a} $? 1 "No server, device client, local conn"
+ 	done
+ 
+-	ipv6_tcp_md5_novrf
++	[ "$fips_enabled" = "1" ] || ipv6_tcp_md5_novrf
+ }
+ 
+ ipv6_tcp_vrf()
+@@ -2842,9 +2851,11 @@ ipv6_tcp_vrf()
+ 	log_test_addr ${a} $? 1 "Global server, local connection"
+ 
+ 	# run MD5 tests
+-	setup_vrf_dup
+-	ipv6_tcp_md5
+-	cleanup_vrf_dup
++	if [ "$fips_enabled" = "0" ]; then
++		setup_vrf_dup
++		ipv6_tcp_md5
++		cleanup_vrf_dup
++	fi
+ 
+ 	#
+ 	# enable VRF global server
+diff --git a/tools/testing/selftests/net/forwarding/mirror_gre_bridge_1d.sh b/tools/testing/selftests/net/forwarding/mirror_gre_bridge_1d.sh
+index c5095da7f6bf8..aec752a22e9ec 100755
+--- a/tools/testing/selftests/net/forwarding/mirror_gre_bridge_1d.sh
++++ b/tools/testing/selftests/net/forwarding/mirror_gre_bridge_1d.sh
+@@ -93,12 +93,16 @@ cleanup()
+ 
+ test_gretap()
+ {
++	ip neigh replace 192.0.2.130 lladdr $(mac_get $h3) \
++		 nud permanent dev br2
+ 	full_test_span_gre_dir gt4 ingress 8 0 "mirror to gretap"
+ 	full_test_span_gre_dir gt4 egress 0 8 "mirror to gretap"
+ }
+ 
+ test_ip6gretap()
+ {
++	ip neigh replace 2001:db8:2::2 lladdr $(mac_get $h3) \
++		nud permanent dev br2
+ 	full_test_span_gre_dir gt6 ingress 8 0 "mirror to ip6gretap"
+ 	full_test_span_gre_dir gt6 egress 0 8 "mirror to ip6gretap"
+ }
+diff --git a/tools/testing/selftests/net/forwarding/mirror_gre_bridge_1q.sh b/tools/testing/selftests/net/forwarding/mirror_gre_bridge_1q.sh
+index 9ff22f28032dd..0cf4c47a46f9b 100755
+--- a/tools/testing/selftests/net/forwarding/mirror_gre_bridge_1q.sh
++++ b/tools/testing/selftests/net/forwarding/mirror_gre_bridge_1q.sh
+@@ -90,12 +90,16 @@ cleanup()
+ 
+ test_gretap()
+ {
++	ip neigh replace 192.0.2.130 lladdr $(mac_get $h3) \
++		 nud permanent dev br1
+ 	full_test_span_gre_dir gt4 ingress 8 0 "mirror to gretap"
+ 	full_test_span_gre_dir gt4 egress 0 8 "mirror to gretap"
+ }
+ 
+ test_ip6gretap()
+ {
++	ip neigh replace 2001:db8:2::2 lladdr $(mac_get $h3) \
++		nud permanent dev br1
+ 	full_test_span_gre_dir gt6 ingress 8 0 "mirror to ip6gretap"
+ 	full_test_span_gre_dir gt6 egress 0 8 "mirror to ip6gretap"
+ }
+diff --git a/tools/testing/selftests/net/mptcp/config b/tools/testing/selftests/net/mptcp/config
+index 38021a0dd5276..6032f9b23c4c2 100644
+--- a/tools/testing/selftests/net/mptcp/config
++++ b/tools/testing/selftests/net/mptcp/config
+@@ -1,3 +1,4 @@
++CONFIG_KALLSYMS=y
+ CONFIG_MPTCP=y
+ CONFIG_IPV6=y
+ CONFIG_MPTCP_IPV6=y
+diff --git a/tools/testing/selftests/net/mptcp/diag.sh b/tools/testing/selftests/net/mptcp/diag.sh
+index 4eacdb1ab9628..fa9e09ad97d99 100755
+--- a/tools/testing/selftests/net/mptcp/diag.sh
++++ b/tools/testing/selftests/net/mptcp/diag.sh
+@@ -55,16 +55,20 @@ __chk_nr()
+ {
+ 	local command="$1"
+ 	local expected=$2
+-	local msg nr
++	local msg="$3"
++	local skip="${4:-SKIP}"
++	local nr
+ 
+-	shift 2
+-	msg=$*
+ 	nr=$(eval $command)
+ 
+ 	printf "%-50s" "$msg"
+ 	if [ $nr != $expected ]; then
+-		echo "[ fail ] expected $expected found $nr"
+-		ret=$test_cnt
++		if [ $nr = "$skip" ] && ! mptcp_lib_expect_all_features; then
++			echo "[ skip ] Feature probably not supported"
++		else
++			echo "[ fail ] expected $expected found $nr"
++			ret=$test_cnt
++		fi
+ 	else
+ 		echo "[  ok  ]"
+ 	fi
+@@ -76,12 +80,12 @@ __chk_msk_nr()
+ 	local condition=$1
+ 	shift 1
+ 
+-	__chk_nr "ss -inmHMN $ns | $condition" $*
++	__chk_nr "ss -inmHMN $ns | $condition" "$@"
+ }
+ 
+ chk_msk_nr()
+ {
+-	__chk_msk_nr "grep -c token:" $*
++	__chk_msk_nr "grep -c token:" "$@"
+ }
+ 
+ wait_msk_nr()
+@@ -119,37 +123,26 @@ wait_msk_nr()
+ 
+ chk_msk_fallback_nr()
+ {
+-		__chk_msk_nr "grep -c fallback" $*
++	__chk_msk_nr "grep -c fallback" "$@"
+ }
+ 
+ chk_msk_remote_key_nr()
+ {
+-		__chk_msk_nr "grep -c remote_key" $*
++	__chk_msk_nr "grep -c remote_key" "$@"
+ }
+ 
+ __chk_listen()
+ {
+ 	local filter="$1"
+ 	local expected=$2
++	local msg="$3"
+ 
+-	shift 2
+-	msg=$*
+-
+-	nr=$(ss -N $ns -Ml "$filter" | grep -c LISTEN)
+-	printf "%-50s" "$msg"
+-
+-	if [ $nr != $expected ]; then
+-		echo "[ fail ] expected $expected found $nr"
+-		ret=$test_cnt
+-	else
+-		echo "[  ok  ]"
+-	fi
++	__chk_nr "ss -N $ns -Ml '$filter' | grep -c LISTEN" "$expected" "$msg" 0
+ }
+ 
+ chk_msk_listen()
+ {
+ 	lport=$1
+-	local msg="check for listen socket"
+ 
+ 	# destination port search should always return empty list
+ 	__chk_listen "dport $lport" 0 "listen match for dport $lport"
+@@ -167,10 +160,9 @@ chk_msk_listen()
+ chk_msk_inuse()
+ {
+ 	local expected=$1
++	local msg="$2"
+ 	local listen_nr
+ 
+-	shift 1
+-
+ 	listen_nr=$(ss -N "${ns}" -Ml | grep -c LISTEN)
+ 	expected=$((expected + listen_nr))
+ 
+@@ -181,7 +173,7 @@ chk_msk_inuse()
+ 		sleep 0.1
+ 	done
+ 
+-	__chk_nr get_msk_inuse $expected $*
++	__chk_nr get_msk_inuse $expected "$msg" 0
+ }
+ 
+ # $1: ns, $2: port
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_connect.c b/tools/testing/selftests/net/mptcp/mptcp_connect.c
+index b25a31445ded3..c7f9ebeebc2c5 100644
+--- a/tools/testing/selftests/net/mptcp/mptcp_connect.c
++++ b/tools/testing/selftests/net/mptcp/mptcp_connect.c
+@@ -106,8 +106,8 @@ static struct cfg_sockopt_types cfg_sockopt_types;
+ static void die_usage(void)
+ {
+ 	fprintf(stderr, "Usage: mptcp_connect [-6] [-c cmsg] [-f offset] [-i file] [-I num] [-j] [-l] "
+-		"[-m mode] [-M mark] [-o option] [-p port] [-P mode] [-j] [-l] [-r num] "
+-		"[-s MPTCP|TCP] [-S num] [-r num] [-t num] [-T num] [-u] [-w sec] connect_address\n");
++		"[-m mode] [-M mark] [-o option] [-p port] [-P mode] [-r num] [-R num] "
++		"[-s MPTCP|TCP] [-S num] [-t num] [-T num] [-w sec] connect_address\n");
+ 	fprintf(stderr, "\t-6 use ipv6\n");
+ 	fprintf(stderr, "\t-c cmsg -- test cmsg type <cmsg>\n");
+ 	fprintf(stderr, "\t-f offset -- stop the I/O after receiving and sending the specified amount "
+@@ -126,13 +126,13 @@ static void die_usage(void)
+ 	fprintf(stderr, "\t-p num -- use port num\n");
+ 	fprintf(stderr,
+ 		"\t-P [saveWithPeek|saveAfterPeek] -- save data with/after MSG_PEEK form tcp socket\n");
+-	fprintf(stderr, "\t-t num -- set poll timeout to num\n");
+-	fprintf(stderr, "\t-T num -- set expected runtime to num ms\n");
+ 	fprintf(stderr, "\t-r num -- enable slow mode, limiting each write to num bytes "
+ 		"-- for remove addr tests\n");
+ 	fprintf(stderr, "\t-R num -- set SO_RCVBUF to num\n");
+ 	fprintf(stderr, "\t-s [MPTCP|TCP] -- use mptcp(default) or tcp sockets\n");
+ 	fprintf(stderr, "\t-S num -- set SO_SNDBUF to num\n");
++	fprintf(stderr, "\t-t num -- set poll timeout to num\n");
++	fprintf(stderr, "\t-T num -- set expected runtime to num ms\n");
+ 	fprintf(stderr, "\t-w num -- wait num sec before closing the socket\n");
+ 	exit(1);
+ }
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_connect.sh b/tools/testing/selftests/net/mptcp/mptcp_connect.sh
+index c1f7bac199423..773dd770a5670 100755
+--- a/tools/testing/selftests/net/mptcp/mptcp_connect.sh
++++ b/tools/testing/selftests/net/mptcp/mptcp_connect.sh
+@@ -144,6 +144,7 @@ cleanup()
+ }
+ 
+ mptcp_lib_check_mptcp
++mptcp_lib_check_kallsyms
+ 
+ ip -Version > /dev/null 2>&1
+ if [ $? -ne 0 ];then
+@@ -695,6 +696,15 @@ run_test_transparent()
+ 		return 0
+ 	fi
+ 
++	# IP(V6)_TRANSPARENT has been added after TOS support which came with
++	# the required infrastructure in MPTCP sockopt code. To support TOS, the
++	# following function has been exported (T). Not great but better than
++	# checking for a specific kernel version.
++	if ! mptcp_lib_kallsyms_has "T __ip_sock_set_tos$"; then
++		echo "INFO: ${msg} not supported by the kernel: SKIP"
++		return
++	fi
++
+ ip netns exec "$listener_ns" nft -f /dev/stdin <<"EOF"
+ flush ruleset
+ table inet mangle {
+@@ -767,6 +777,11 @@ run_tests_peekmode()
+ 
+ run_tests_mptfo()
+ {
++	if ! mptcp_lib_kallsyms_has "mptcp_fastopen_"; then
++		echo "INFO: TFO not supported by the kernel: SKIP"
++		return
++	fi
++
+ 	echo "INFO: with MPTFO start"
+ 	ip netns exec "$ns1" sysctl -q net.ipv4.tcp_fastopen=2
+ 	ip netns exec "$ns2" sysctl -q net.ipv4.tcp_fastopen=1
+@@ -787,6 +802,11 @@ run_tests_disconnect()
+ 	local old_cin=$cin
+ 	local old_sin=$sin
+ 
++	if ! mptcp_lib_kallsyms_has "mptcp_pm_data_reset$"; then
++		echo "INFO: Full disconnect not supported: SKIP"
++		return
++	fi
++
+ 	cat $cin $cin $cin > "$cin".disconnect
+ 
+ 	# force do_transfer to cope with the multiple tranmissions
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_join.sh b/tools/testing/selftests/net/mptcp/mptcp_join.sh
+index 4152370298bf0..11ea1406d79f4 100755
+--- a/tools/testing/selftests/net/mptcp/mptcp_join.sh
++++ b/tools/testing/selftests/net/mptcp/mptcp_join.sh
+@@ -8,6 +8,10 @@
+ 
+ . "$(dirname "${0}")/mptcp_lib.sh"
+ 
++# ShellCheck incorrectly believes that most of the code here is unreachable
++# because it's invoked by variable name, see how the "tests" array is used
++#shellcheck disable=SC2317
++
+ ret=0
+ sin=""
+ sinfail=""
+@@ -21,6 +25,8 @@ capout=""
+ ns1=""
+ ns2=""
+ ksft_skip=4
++iptables="iptables"
++ip6tables="ip6tables"
+ timeout_poll=30
+ timeout_test=$((timeout_poll * 2 + 1))
+ capture=0
+@@ -78,7 +84,7 @@ init_partial()
+ 		ip netns add $netns || exit $ksft_skip
+ 		ip -net $netns link set lo up
+ 		ip netns exec $netns sysctl -q net.mptcp.enabled=1
+-		ip netns exec $netns sysctl -q net.mptcp.pm_type=0
++		ip netns exec $netns sysctl -q net.mptcp.pm_type=0 2>/dev/null || true
+ 		ip netns exec $netns sysctl -q net.ipv4.conf.all.rp_filter=0
+ 		ip netns exec $netns sysctl -q net.ipv4.conf.default.rp_filter=0
+ 		if [ $checksum -eq 1 ]; then
+@@ -136,13 +142,18 @@ cleanup_partial()
+ check_tools()
+ {
+ 	mptcp_lib_check_mptcp
++	mptcp_lib_check_kallsyms
+ 
+ 	if ! ip -Version &> /dev/null; then
+ 		echo "SKIP: Could not run test without ip tool"
+ 		exit $ksft_skip
+ 	fi
+ 
+-	if ! iptables -V &> /dev/null; then
++	# Use the legacy version if available to support old kernel versions
++	if iptables-legacy -V &> /dev/null; then
++		iptables="iptables-legacy"
++		ip6tables="ip6tables-legacy"
++	elif ! iptables -V &> /dev/null; then
+ 		echo "SKIP: Could not run all tests without iptables tool"
+ 		exit $ksft_skip
+ 	fi
+@@ -181,6 +192,32 @@ cleanup()
+ 	cleanup_partial
+ }
+ 
++# $1: msg
++print_title()
++{
++	printf "%03u %-36s %s" "${TEST_COUNT}" "${TEST_NAME}" "${1}"
++}
++
++# [ $1: fail msg ]
++mark_as_skipped()
++{
++	local msg="${1:-"Feature not supported"}"
++
++	mptcp_lib_fail_if_expected_feature "${msg}"
++
++	print_title "[ skip ] ${msg}"
++	printf "\n"
++}
++
++# $@: condition
++continue_if()
++{
++	if ! "${@}"; then
++		mark_as_skipped
++		return 1
++	fi
++}
++
+ skip_test()
+ {
+ 	if [ "${#only_tests_ids[@]}" -eq 0 ] && [ "${#only_tests_names[@]}" -eq 0 ]; then
+@@ -224,6 +261,19 @@ reset()
+ 	return 0
+ }
+ 
++# $1: test name ; $2: counter to check
++reset_check_counter()
++{
++	reset "${1}" || return 1
++
++	local counter="${2}"
++
++	if ! nstat -asz "${counter}" | grep -wq "${counter}"; then
++		mark_as_skipped "counter '${counter}' is not available"
++		return 1
++	fi
++}
++
+ # $1: test name
+ reset_with_cookies()
+ {
+@@ -243,17 +293,21 @@ reset_with_add_addr_timeout()
+ 
+ 	reset "${1}" || return 1
+ 
+-	tables="iptables"
++	tables="${iptables}"
+ 	if [ $ip -eq 6 ]; then
+-		tables="ip6tables"
++		tables="${ip6tables}"
+ 	fi
+ 
+ 	ip netns exec $ns1 sysctl -q net.mptcp.add_addr_timeout=1
+-	ip netns exec $ns2 $tables -A OUTPUT -p tcp \
+-		-m tcp --tcp-option 30 \
+-		-m bpf --bytecode \
+-		"$CBPF_MPTCP_SUBOPTION_ADD_ADDR" \
+-		-j DROP
++
++	if ! ip netns exec $ns2 $tables -A OUTPUT -p tcp \
++			-m tcp --tcp-option 30 \
++			-m bpf --bytecode \
++			"$CBPF_MPTCP_SUBOPTION_ADD_ADDR" \
++			-j DROP; then
++		mark_as_skipped "unable to set the 'add addr' rule"
++		return 1
++	fi
+ }
+ 
+ # $1: test name
+@@ -297,22 +351,17 @@ reset_with_allow_join_id0()
+ #     tc action pedit offset 162 out of bounds
+ #
+ # Netfilter is used to mark packets with enough data.
+-reset_with_fail()
++setup_fail_rules()
+ {
+-	reset "${1}" || return 1
+-
+-	ip netns exec $ns1 sysctl -q net.mptcp.checksum_enabled=1
+-	ip netns exec $ns2 sysctl -q net.mptcp.checksum_enabled=1
+-
+ 	check_invert=1
+ 	validate_checksum=1
+-	local i="$2"
+-	local ip="${3:-4}"
++	local i="$1"
++	local ip="${2:-4}"
+ 	local tables
+ 
+-	tables="iptables"
++	tables="${iptables}"
+ 	if [ $ip -eq 6 ]; then
+-		tables="ip6tables"
++		tables="${ip6tables}"
+ 	fi
+ 
+ 	ip netns exec $ns2 $tables \
+@@ -322,15 +371,32 @@ reset_with_fail()
+ 		-p tcp \
+ 		-m length --length 150:9999 \
+ 		-m statistic --mode nth --packet 1 --every 99999 \
+-		-j MARK --set-mark 42 || exit 1
++		-j MARK --set-mark 42 || return ${ksft_skip}
+ 
+-	tc -n $ns2 qdisc add dev ns2eth$i clsact || exit 1
++	tc -n $ns2 qdisc add dev ns2eth$i clsact || return ${ksft_skip}
+ 	tc -n $ns2 filter add dev ns2eth$i egress \
+ 		protocol ip prio 1000 \
+ 		handle 42 fw \
+ 		action pedit munge offset 148 u8 invert \
+ 		pipe csum tcp \
+-		index 100 || exit 1
++		index 100 || return ${ksft_skip}
++}
++
++reset_with_fail()
++{
++	reset_check_counter "${1}" "MPTcpExtInfiniteMapTx" || return 1
++	shift
++
++	ip netns exec $ns1 sysctl -q net.mptcp.checksum_enabled=1
++	ip netns exec $ns2 sysctl -q net.mptcp.checksum_enabled=1
++
++	local rc=0
++	setup_fail_rules "${@}" || rc=$?
++
++	if [ ${rc} -eq ${ksft_skip} ]; then
++		mark_as_skipped "unable to set the 'fail' rules"
++		return 1
++	fi
+ }
+ 
+ reset_with_events()
+@@ -345,6 +411,25 @@ reset_with_events()
+ 	evts_ns2_pid=$!
+ }
+ 
++reset_with_tcp_filter()
++{
++	reset "${1}" || return 1
++	shift
++
++	local ns="${!1}"
++	local src="${2}"
++	local target="${3}"
++
++	if ! ip netns exec "${ns}" ${iptables} \
++			-A INPUT \
++			-s "${src}" \
++			-p tcp \
++			-j "${target}"; then
++		mark_as_skipped "unable to set the filter rules"
++		return 1
++	fi
++}
++
+ fail_test()
+ {
+ 	ret=1
+@@ -377,8 +462,9 @@ check_transfer()
+ 
+ 	local line
+ 	if [ -n "$bytes" ]; then
++		local out_size
+ 		# when truncating we must check the size explicitly
+-		local out_size=$(wc -c $out | awk '{print $1}')
++		out_size=$(wc -c $out | awk '{print $1}')
+ 		if [ $out_size -ne $bytes ]; then
+ 			echo "[ FAIL ] $what output file has wrong size ($out_size, $bytes)"
+ 			fail_test
+@@ -462,11 +548,25 @@ wait_local_port_listen()
+ 	done
+ }
+ 
+-rm_addr_count()
++# $1: ns ; $2: counter
++get_counter()
+ {
+-	local ns=${1}
++	local ns="${1}"
++	local counter="${2}"
++	local count
++
++	count=$(ip netns exec ${ns} nstat -asz "${counter}" | awk 'NR==1 {next} {print $2}')
++	if [ -z "${count}" ]; then
++		mptcp_lib_fail_if_expected_feature "${counter} counter"
++		return 1
++	fi
+ 
+-	ip netns exec ${ns} nstat -as | grep MPTcpExtRmAddr | awk '{print $2}'
++	echo "${count}"
++}
++
++rm_addr_count()
++{
++	get_counter "${1}" "MPTcpExtRmAddr"
+ }
+ 
+ # $1: ns, $2: old rm_addr counter in $ns
+@@ -489,11 +589,11 @@ wait_mpj()
+ 	local ns="${1}"
+ 	local cnt old_cnt
+ 
+-	old_cnt=$(ip netns exec ${ns} nstat -as | grep MPJoinAckRx | awk '{print $2}')
++	old_cnt=$(get_counter ${ns} "MPTcpExtMPJoinAckRx")
+ 
+ 	local i
+ 	for i in $(seq 10); do
+-		cnt=$(ip netns exec ${ns} nstat -as | grep MPJoinAckRx | awk '{print $2}')
++		cnt=$(get_counter ${ns} "MPTcpExtMPJoinAckRx")
+ 		[ "$cnt" = "${old_cnt}" ] || break
+ 		sleep 0.1
+ 	done
+@@ -513,6 +613,7 @@ kill_events_pids()
+ 
+ kill_tests_wait()
+ {
++	#shellcheck disable=SC2046
+ 	kill -SIGUSR1 $(ip netns pids $ns2) $(ip netns pids $ns1)
+ 	wait
+ }
+@@ -692,15 +793,6 @@ pm_nl_check_endpoint()
+ 	fi
+ }
+ 
+-filter_tcp_from()
+-{
+-	local ns="${1}"
+-	local src="${2}"
+-	local target="${3}"
+-
+-	ip netns exec "${ns}" iptables -A INPUT -s "${src}" -p tcp -j "${target}"
+-}
+-
+ do_transfer()
+ {
+ 	local listener_ns="$1"
+@@ -1151,12 +1243,13 @@ chk_csum_nr()
+ 	fi
+ 
+ 	printf "%-${nr_blank}s %s" " " "sum"
+-	count=$(ip netns exec $ns1 nstat -as | grep MPTcpExtDataCsumErr | awk '{print $2}')
+-	[ -z "$count" ] && count=0
++	count=$(get_counter ${ns1} "MPTcpExtDataCsumErr")
+ 	if [ "$count" != "$csum_ns1" ]; then
+ 		extra_msg="$extra_msg ns1=$count"
+ 	fi
+-	if { [ "$count" != $csum_ns1 ] && [ $allow_multi_errors_ns1 -eq 0 ]; } ||
++	if [ -z "$count" ]; then
++		echo -n "[skip]"
++	elif { [ "$count" != $csum_ns1 ] && [ $allow_multi_errors_ns1 -eq 0 ]; } ||
+ 	   { [ "$count" -lt $csum_ns1 ] && [ $allow_multi_errors_ns1 -eq 1 ]; }; then
+ 		echo "[fail] got $count data checksum error[s] expected $csum_ns1"
+ 		fail_test
+@@ -1165,12 +1258,13 @@ chk_csum_nr()
+ 		echo -n "[ ok ]"
+ 	fi
+ 	echo -n " - csum  "
+-	count=$(ip netns exec $ns2 nstat -as | grep MPTcpExtDataCsumErr | awk '{print $2}')
+-	[ -z "$count" ] && count=0
++	count=$(get_counter ${ns2} "MPTcpExtDataCsumErr")
+ 	if [ "$count" != "$csum_ns2" ]; then
+ 		extra_msg="$extra_msg ns2=$count"
+ 	fi
+-	if { [ "$count" != $csum_ns2 ] && [ $allow_multi_errors_ns2 -eq 0 ]; } ||
++	if [ -z "$count" ]; then
++		echo -n "[skip]"
++	elif { [ "$count" != $csum_ns2 ] && [ $allow_multi_errors_ns2 -eq 0 ]; } ||
+ 	   { [ "$count" -lt $csum_ns2 ] && [ $allow_multi_errors_ns2 -eq 1 ]; }; then
+ 		echo "[fail] got $count data checksum error[s] expected $csum_ns2"
+ 		fail_test
+@@ -1212,12 +1306,13 @@ chk_fail_nr()
+ 	fi
+ 
+ 	printf "%-${nr_blank}s %s" " " "ftx"
+-	count=$(ip netns exec $ns_tx nstat -as | grep MPTcpExtMPFailTx | awk '{print $2}')
+-	[ -z "$count" ] && count=0
++	count=$(get_counter ${ns_tx} "MPTcpExtMPFailTx")
+ 	if [ "$count" != "$fail_tx" ]; then
+ 		extra_msg="$extra_msg,tx=$count"
+ 	fi
+-	if { [ "$count" != "$fail_tx" ] && [ $allow_tx_lost -eq 0 ]; } ||
++	if [ -z "$count" ]; then
++		echo -n "[skip]"
++	elif { [ "$count" != "$fail_tx" ] && [ $allow_tx_lost -eq 0 ]; } ||
+ 	   { [ "$count" -gt "$fail_tx" ] && [ $allow_tx_lost -eq 1 ]; }; then
+ 		echo "[fail] got $count MP_FAIL[s] TX expected $fail_tx"
+ 		fail_test
+@@ -1227,12 +1322,13 @@ chk_fail_nr()
+ 	fi
+ 
+ 	echo -n " - failrx"
+-	count=$(ip netns exec $ns_rx nstat -as | grep MPTcpExtMPFailRx | awk '{print $2}')
+-	[ -z "$count" ] && count=0
++	count=$(get_counter ${ns_rx} "MPTcpExtMPFailRx")
+ 	if [ "$count" != "$fail_rx" ]; then
+ 		extra_msg="$extra_msg,rx=$count"
+ 	fi
+-	if { [ "$count" != "$fail_rx" ] && [ $allow_rx_lost -eq 0 ]; } ||
++	if [ -z "$count" ]; then
++		echo -n "[skip]"
++	elif { [ "$count" != "$fail_rx" ] && [ $allow_rx_lost -eq 0 ]; } ||
+ 	   { [ "$count" -gt "$fail_rx" ] && [ $allow_rx_lost -eq 1 ]; }; then
+ 		echo "[fail] got $count MP_FAIL[s] RX expected $fail_rx"
+ 		fail_test
+@@ -1264,10 +1360,11 @@ chk_fclose_nr()
+ 	fi
+ 
+ 	printf "%-${nr_blank}s %s" " " "ctx"
+-	count=$(ip netns exec $ns_tx nstat -as | grep MPTcpExtMPFastcloseTx | awk '{print $2}')
+-	[ -z "$count" ] && count=0
+-	[ "$count" != "$fclose_tx" ] && extra_msg="$extra_msg,tx=$count"
+-	if [ "$count" != "$fclose_tx" ]; then
++	count=$(get_counter ${ns_tx} "MPTcpExtMPFastcloseTx")
++	if [ -z "$count" ]; then
++		echo -n "[skip]"
++	elif [ "$count" != "$fclose_tx" ]; then
++		extra_msg="$extra_msg,tx=$count"
+ 		echo "[fail] got $count MP_FASTCLOSE[s] TX expected $fclose_tx"
+ 		fail_test
+ 		dump_stats=1
+@@ -1276,10 +1373,11 @@ chk_fclose_nr()
+ 	fi
+ 
+ 	echo -n " - fclzrx"
+-	count=$(ip netns exec $ns_rx nstat -as | grep MPTcpExtMPFastcloseRx | awk '{print $2}')
+-	[ -z "$count" ] && count=0
+-	[ "$count" != "$fclose_rx" ] && extra_msg="$extra_msg,rx=$count"
+-	if [ "$count" != "$fclose_rx" ]; then
++	count=$(get_counter ${ns_rx} "MPTcpExtMPFastcloseRx")
++	if [ -z "$count" ]; then
++		echo -n "[skip]"
++	elif [ "$count" != "$fclose_rx" ]; then
++		extra_msg="$extra_msg,rx=$count"
+ 		echo "[fail] got $count MP_FASTCLOSE[s] RX expected $fclose_rx"
+ 		fail_test
+ 		dump_stats=1
+@@ -1310,9 +1408,10 @@ chk_rst_nr()
+ 	fi
+ 
+ 	printf "%-${nr_blank}s %s" " " "rtx"
+-	count=$(ip netns exec $ns_tx nstat -as | grep MPTcpExtMPRstTx | awk '{print $2}')
+-	[ -z "$count" ] && count=0
+-	if [ $count -lt $rst_tx ]; then
++	count=$(get_counter ${ns_tx} "MPTcpExtMPRstTx")
++	if [ -z "$count" ]; then
++		echo -n "[skip]"
++	elif [ $count -lt $rst_tx ]; then
+ 		echo "[fail] got $count MP_RST[s] TX expected $rst_tx"
+ 		fail_test
+ 		dump_stats=1
+@@ -1321,9 +1420,10 @@ chk_rst_nr()
+ 	fi
+ 
+ 	echo -n " - rstrx "
+-	count=$(ip netns exec $ns_rx nstat -as | grep MPTcpExtMPRstRx | awk '{print $2}')
+-	[ -z "$count" ] && count=0
+-	if [ "$count" -lt "$rst_rx" ]; then
++	count=$(get_counter ${ns_rx} "MPTcpExtMPRstRx")
++	if [ -z "$count" ]; then
++		echo -n "[skip]"
++	elif [ "$count" -lt "$rst_rx" ]; then
+ 		echo "[fail] got $count MP_RST[s] RX expected $rst_rx"
+ 		fail_test
+ 		dump_stats=1
+@@ -1344,9 +1444,10 @@ chk_infi_nr()
+ 	local dump_stats
+ 
+ 	printf "%-${nr_blank}s %s" " " "itx"
+-	count=$(ip netns exec $ns2 nstat -as | grep InfiniteMapTx | awk '{print $2}')
+-	[ -z "$count" ] && count=0
+-	if [ "$count" != "$infi_tx" ]; then
++	count=$(get_counter ${ns2} "MPTcpExtInfiniteMapTx")
++	if [ -z "$count" ]; then
++		echo -n "[skip]"
++	elif [ "$count" != "$infi_tx" ]; then
+ 		echo "[fail] got $count infinite map[s] TX expected $infi_tx"
+ 		fail_test
+ 		dump_stats=1
+@@ -1355,9 +1456,10 @@ chk_infi_nr()
+ 	fi
+ 
+ 	echo -n " - infirx"
+-	count=$(ip netns exec $ns1 nstat -as | grep InfiniteMapRx | awk '{print $2}')
+-	[ -z "$count" ] && count=0
+-	if [ "$count" != "$infi_rx" ]; then
++	count=$(get_counter ${ns1} "MPTcpExtInfiniteMapRx")
++	if [ -z "$count" ]; then
++		echo "[skip]"
++	elif [ "$count" != "$infi_rx" ]; then
+ 		echo "[fail] got $count infinite map[s] RX expected $infi_rx"
+ 		fail_test
+ 		dump_stats=1
+@@ -1389,9 +1491,10 @@ chk_join_nr()
+ 	fi
+ 
+ 	printf "%03u %-36s %s" "${TEST_COUNT}" "${title}" "syn"
+-	count=$(ip netns exec $ns1 nstat -as | grep MPTcpExtMPJoinSynRx | awk '{print $2}')
+-	[ -z "$count" ] && count=0
+-	if [ "$count" != "$syn_nr" ]; then
++	count=$(get_counter ${ns1} "MPTcpExtMPJoinSynRx")
++	if [ -z "$count" ]; then
++		echo -n "[skip]"
++	elif [ "$count" != "$syn_nr" ]; then
+ 		echo "[fail] got $count JOIN[s] syn expected $syn_nr"
+ 		fail_test
+ 		dump_stats=1
+@@ -1401,9 +1504,10 @@ chk_join_nr()
+ 
+ 	echo -n " - synack"
+ 	with_cookie=$(ip netns exec $ns2 sysctl -n net.ipv4.tcp_syncookies)
+-	count=$(ip netns exec $ns2 nstat -as | grep MPTcpExtMPJoinSynAckRx | awk '{print $2}')
+-	[ -z "$count" ] && count=0
+-	if [ "$count" != "$syn_ack_nr" ]; then
++	count=$(get_counter ${ns2} "MPTcpExtMPJoinSynAckRx")
++	if [ -z "$count" ]; then
++		echo -n "[skip]"
++	elif [ "$count" != "$syn_ack_nr" ]; then
+ 		# simult connections exceeding the limit with cookie enabled could go up to
+ 		# synack validation as the conn limit can be enforced reliably only after
+ 		# the subflow creation
+@@ -1419,9 +1523,10 @@ chk_join_nr()
+ 	fi
+ 
+ 	echo -n " - ack"
+-	count=$(ip netns exec $ns1 nstat -as | grep MPTcpExtMPJoinAckRx | awk '{print $2}')
+-	[ -z "$count" ] && count=0
+-	if [ "$count" != "$ack_nr" ]; then
++	count=$(get_counter ${ns1} "MPTcpExtMPJoinAckRx")
++	if [ -z "$count" ]; then
++		echo "[skip]"
++	elif [ "$count" != "$ack_nr" ]; then
+ 		echo "[fail] got $count JOIN[s] ack expected $ack_nr"
+ 		fail_test
+ 		dump_stats=1
+@@ -1453,12 +1558,12 @@ chk_stale_nr()
+ 	local recover_nr
+ 
+ 	printf "%-${nr_blank}s %-18s" " " "stale"
+-	stale_nr=$(ip netns exec $ns nstat -as | grep MPTcpExtSubflowStale | awk '{print $2}')
+-	[ -z "$stale_nr" ] && stale_nr=0
+-	recover_nr=$(ip netns exec $ns nstat -as | grep MPTcpExtSubflowRecover | awk '{print $2}')
+-	[ -z "$recover_nr" ] && recover_nr=0
+ 
+-	if [ $stale_nr -lt $stale_min ] ||
++	stale_nr=$(get_counter ${ns} "MPTcpExtSubflowStale")
++	recover_nr=$(get_counter ${ns} "MPTcpExtSubflowRecover")
++	if [ -z "$stale_nr" ] || [ -z "$recover_nr" ]; then
++		echo "[skip]"
++	elif [ $stale_nr -lt $stale_min ] ||
+ 	   { [ $stale_max -gt 0 ] && [ $stale_nr -gt $stale_max ]; } ||
+ 	   [ $((stale_nr - recover_nr)) -ne $stale_delta ]; then
+ 		echo "[fail] got $stale_nr stale[s] $recover_nr recover[s], " \
+@@ -1494,12 +1599,12 @@ chk_add_nr()
+ 	timeout=$(ip netns exec $ns1 sysctl -n net.mptcp.add_addr_timeout)
+ 
+ 	printf "%-${nr_blank}s %s" " " "add"
+-	count=$(ip netns exec $ns2 nstat -as MPTcpExtAddAddr | grep MPTcpExtAddAddr | awk '{print $2}')
+-	[ -z "$count" ] && count=0
+-
++	count=$(get_counter ${ns2} "MPTcpExtAddAddr")
++	if [ -z "$count" ]; then
++		echo -n "[skip]"
+ 	# if the test configured a short timeout tolerate greater then expected
+ 	# add addrs options, due to retransmissions
+-	if [ "$count" != "$add_nr" ] && { [ "$timeout" -gt 1 ] || [ "$count" -lt "$add_nr" ]; }; then
++	elif [ "$count" != "$add_nr" ] && { [ "$timeout" -gt 1 ] || [ "$count" -lt "$add_nr" ]; }; then
+ 		echo "[fail] got $count ADD_ADDR[s] expected $add_nr"
+ 		fail_test
+ 		dump_stats=1
+@@ -1508,9 +1613,10 @@ chk_add_nr()
+ 	fi
+ 
+ 	echo -n " - echo  "
+-	count=$(ip netns exec $ns1 nstat -as | grep MPTcpExtEchoAdd | awk '{print $2}')
+-	[ -z "$count" ] && count=0
+-	if [ "$count" != "$echo_nr" ]; then
++	count=$(get_counter ${ns1} "MPTcpExtEchoAdd")
++	if [ -z "$count" ]; then
++		echo -n "[skip]"
++	elif [ "$count" != "$echo_nr" ]; then
+ 		echo "[fail] got $count ADD_ADDR echo[s] expected $echo_nr"
+ 		fail_test
+ 		dump_stats=1
+@@ -1520,9 +1626,10 @@ chk_add_nr()
+ 
+ 	if [ $port_nr -gt 0 ]; then
+ 		echo -n " - pt "
+-		count=$(ip netns exec $ns2 nstat -as | grep MPTcpExtPortAdd | awk '{print $2}')
+-		[ -z "$count" ] && count=0
+-		if [ "$count" != "$port_nr" ]; then
++		count=$(get_counter ${ns2} "MPTcpExtPortAdd")
++		if [ -z "$count" ]; then
++			echo "[skip]"
++		elif [ "$count" != "$port_nr" ]; then
+ 			echo "[fail] got $count ADD_ADDR[s] with a port-number expected $port_nr"
+ 			fail_test
+ 			dump_stats=1
+@@ -1531,10 +1638,10 @@ chk_add_nr()
+ 		fi
+ 
+ 		printf "%-${nr_blank}s %s" " " "syn"
+-		count=$(ip netns exec $ns1 nstat -as | grep MPTcpExtMPJoinPortSynRx |
+-			awk '{print $2}')
+-		[ -z "$count" ] && count=0
+-		if [ "$count" != "$syn_nr" ]; then
++		count=$(get_counter ${ns1} "MPTcpExtMPJoinPortSynRx")
++		if [ -z "$count" ]; then
++			echo -n "[skip]"
++		elif [ "$count" != "$syn_nr" ]; then
+ 			echo "[fail] got $count JOIN[s] syn with a different \
+ 				port-number expected $syn_nr"
+ 			fail_test
+@@ -1544,10 +1651,10 @@ chk_add_nr()
+ 		fi
+ 
+ 		echo -n " - synack"
+-		count=$(ip netns exec $ns2 nstat -as | grep MPTcpExtMPJoinPortSynAckRx |
+-			awk '{print $2}')
+-		[ -z "$count" ] && count=0
+-		if [ "$count" != "$syn_ack_nr" ]; then
++		count=$(get_counter ${ns2} "MPTcpExtMPJoinPortSynAckRx")
++		if [ -z "$count" ]; then
++			echo -n "[skip]"
++		elif [ "$count" != "$syn_ack_nr" ]; then
+ 			echo "[fail] got $count JOIN[s] synack with a different \
+ 				port-number expected $syn_ack_nr"
+ 			fail_test
+@@ -1557,10 +1664,10 @@ chk_add_nr()
+ 		fi
+ 
+ 		echo -n " - ack"
+-		count=$(ip netns exec $ns1 nstat -as | grep MPTcpExtMPJoinPortAckRx |
+-			awk '{print $2}')
+-		[ -z "$count" ] && count=0
+-		if [ "$count" != "$ack_nr" ]; then
++		count=$(get_counter ${ns1} "MPTcpExtMPJoinPortAckRx")
++		if [ -z "$count" ]; then
++			echo "[skip]"
++		elif [ "$count" != "$ack_nr" ]; then
+ 			echo "[fail] got $count JOIN[s] ack with a different \
+ 				port-number expected $ack_nr"
+ 			fail_test
+@@ -1570,10 +1677,10 @@ chk_add_nr()
+ 		fi
+ 
+ 		printf "%-${nr_blank}s %s" " " "syn"
+-		count=$(ip netns exec $ns1 nstat -as | grep MPTcpExtMismatchPortSynRx |
+-			awk '{print $2}')
+-		[ -z "$count" ] && count=0
+-		if [ "$count" != "$mis_syn_nr" ]; then
++		count=$(get_counter ${ns1} "MPTcpExtMismatchPortSynRx")
++		if [ -z "$count" ]; then
++			echo -n "[skip]"
++		elif [ "$count" != "$mis_syn_nr" ]; then
+ 			echo "[fail] got $count JOIN[s] syn with a mismatched \
+ 				port-number expected $mis_syn_nr"
+ 			fail_test
+@@ -1583,10 +1690,10 @@ chk_add_nr()
+ 		fi
+ 
+ 		echo -n " - ack   "
+-		count=$(ip netns exec $ns1 nstat -as | grep MPTcpExtMismatchPortAckRx |
+-			awk '{print $2}')
+-		[ -z "$count" ] && count=0
+-		if [ "$count" != "$mis_ack_nr" ]; then
++		count=$(get_counter ${ns1} "MPTcpExtMismatchPortAckRx")
++		if [ -z "$count" ]; then
++			echo "[skip]"
++		elif [ "$count" != "$mis_ack_nr" ]; then
+ 			echo "[fail] got $count JOIN[s] ack with a mismatched \
+ 				port-number expected $mis_ack_nr"
+ 			fail_test
+@@ -1630,9 +1737,10 @@ chk_rm_nr()
+ 	fi
+ 
+ 	printf "%-${nr_blank}s %s" " " "rm "
+-	count=$(ip netns exec $addr_ns nstat -as | grep MPTcpExtRmAddr | awk '{print $2}')
+-	[ -z "$count" ] && count=0
+-	if [ "$count" != "$rm_addr_nr" ]; then
++	count=$(get_counter ${addr_ns} "MPTcpExtRmAddr")
++	if [ -z "$count" ]; then
++		echo -n "[skip]"
++	elif [ "$count" != "$rm_addr_nr" ]; then
+ 		echo "[fail] got $count RM_ADDR[s] expected $rm_addr_nr"
+ 		fail_test
+ 		dump_stats=1
+@@ -1641,29 +1749,27 @@ chk_rm_nr()
+ 	fi
+ 
+ 	echo -n " - rmsf  "
+-	count=$(ip netns exec $subflow_ns nstat -as | grep MPTcpExtRmSubflow | awk '{print $2}')
+-	[ -z "$count" ] && count=0
+-	if [ -n "$simult" ]; then
++	count=$(get_counter ${subflow_ns} "MPTcpExtRmSubflow")
++	if [ -z "$count" ]; then
++		echo -n "[skip]"
++	elif [ -n "$simult" ]; then
+ 		local cnt suffix
+ 
+-		cnt=$(ip netns exec $addr_ns nstat -as | grep MPTcpExtRmSubflow | awk '{print $2}')
++		cnt=$(get_counter ${addr_ns} "MPTcpExtRmSubflow")
+ 
+ 		# in case of simult flush, the subflow removal count on each side is
+ 		# unreliable
+-		[ -z "$cnt" ] && cnt=0
+ 		count=$((count + cnt))
+ 		[ "$count" != "$rm_subflow_nr" ] && suffix="$count in [$rm_subflow_nr:$((rm_subflow_nr*2))]"
+ 		if [ $count -ge "$rm_subflow_nr" ] && \
+ 		   [ "$count" -le "$((rm_subflow_nr *2 ))" ]; then
+-			echo "[ ok ] $suffix"
++			echo -n "[ ok ] $suffix"
+ 		else
+ 			echo "[fail] got $count RM_SUBFLOW[s] expected in range [$rm_subflow_nr:$((rm_subflow_nr*2))]"
+ 			fail_test
+ 			dump_stats=1
+ 		fi
+-		return
+-	fi
+-	if [ "$count" != "$rm_subflow_nr" ]; then
++	elif [ "$count" != "$rm_subflow_nr" ]; then
+ 		echo "[fail] got $count RM_SUBFLOW[s] expected $rm_subflow_nr"
+ 		fail_test
+ 		dump_stats=1
+@@ -1684,9 +1790,10 @@ chk_prio_nr()
+ 	local dump_stats
+ 
+ 	printf "%-${nr_blank}s %s" " " "ptx"
+-	count=$(ip netns exec $ns1 nstat -as | grep MPTcpExtMPPrioTx | awk '{print $2}')
+-	[ -z "$count" ] && count=0
+-	if [ "$count" != "$mp_prio_nr_tx" ]; then
++	count=$(get_counter ${ns1} "MPTcpExtMPPrioTx")
++	if [ -z "$count" ]; then
++		echo -n "[skip]"
++	elif [ "$count" != "$mp_prio_nr_tx" ]; then
+ 		echo "[fail] got $count MP_PRIO[s] TX expected $mp_prio_nr_tx"
+ 		fail_test
+ 		dump_stats=1
+@@ -1695,9 +1802,10 @@ chk_prio_nr()
+ 	fi
+ 
+ 	echo -n " - prx   "
+-	count=$(ip netns exec $ns1 nstat -as | grep MPTcpExtMPPrioRx | awk '{print $2}')
+-	[ -z "$count" ] && count=0
+-	if [ "$count" != "$mp_prio_nr_rx" ]; then
++	count=$(get_counter ${ns1} "MPTcpExtMPPrioRx")
++	if [ -z "$count" ]; then
++		echo "[skip]"
++	elif [ "$count" != "$mp_prio_nr_rx" ]; then
+ 		echo "[fail] got $count MP_PRIO[s] RX expected $mp_prio_nr_rx"
+ 		fail_test
+ 		dump_stats=1
+@@ -1725,7 +1833,7 @@ chk_subflow_nr()
+ 
+ 	cnt1=$(ss -N $ns1 -tOni | grep -c token)
+ 	cnt2=$(ss -N $ns2 -tOni | grep -c token)
+-	if [ "$cnt1" != "$subflow_nr" -o "$cnt2" != "$subflow_nr" ]; then
++	if [ "$cnt1" != "$subflow_nr" ] || [ "$cnt2" != "$subflow_nr" ]; then
+ 		echo "[fail] got $cnt1:$cnt2 subflows expected $subflow_nr"
+ 		fail_test
+ 		dump_stats=1
+@@ -1773,7 +1881,7 @@ wait_attempt_fail()
+ 	while [ $time -lt $timeout_ms ]; do
+ 		local cnt
+ 
+-		cnt=$(ip netns exec $ns nstat -as TcpAttemptFails | grep TcpAttemptFails | awk '{print $2}')
++		cnt=$(get_counter ${ns} "TcpAttemptFails")
+ 
+ 		[ "$cnt" = 1 ] && return 1
+ 		time=$((time + 100))
+@@ -1866,23 +1974,23 @@ subflows_error_tests()
+ 	fi
+ 
+ 	# multiple subflows, with subflow creation error
+-	if reset "multi subflows, with failing subflow"; then
++	if reset_with_tcp_filter "multi subflows, with failing subflow" ns1 10.0.3.2 REJECT &&
++	   continue_if mptcp_lib_kallsyms_has "mptcp_pm_subflow_check_next$"; then
+ 		pm_nl_set_limits $ns1 0 2
+ 		pm_nl_set_limits $ns2 0 2
+ 		pm_nl_add_endpoint $ns2 10.0.3.2 flags subflow
+ 		pm_nl_add_endpoint $ns2 10.0.2.2 flags subflow
+-		filter_tcp_from $ns1 10.0.3.2 REJECT
+ 		run_tests $ns1 $ns2 10.0.1.1 0 0 0 slow
+ 		chk_join_nr 1 1 1
+ 	fi
+ 
+ 	# multiple subflows, with subflow timeout on MPJ
+-	if reset "multi subflows, with subflow timeout"; then
++	if reset_with_tcp_filter "multi subflows, with subflow timeout" ns1 10.0.3.2 DROP &&
++	   continue_if mptcp_lib_kallsyms_has "mptcp_pm_subflow_check_next$"; then
+ 		pm_nl_set_limits $ns1 0 2
+ 		pm_nl_set_limits $ns2 0 2
+ 		pm_nl_add_endpoint $ns2 10.0.3.2 flags subflow
+ 		pm_nl_add_endpoint $ns2 10.0.2.2 flags subflow
+-		filter_tcp_from $ns1 10.0.3.2 DROP
+ 		run_tests $ns1 $ns2 10.0.1.1 0 0 0 slow
+ 		chk_join_nr 1 1 1
+ 	fi
+@@ -1890,11 +1998,11 @@ subflows_error_tests()
+ 	# multiple subflows, check that the endpoint corresponding to
+ 	# closed subflow (due to reset) is not reused if additional
+ 	# subflows are added later
+-	if reset "multi subflows, fair usage on close"; then
++	if reset_with_tcp_filter "multi subflows, fair usage on close" ns1 10.0.3.2 REJECT &&
++	   continue_if mptcp_lib_kallsyms_has "mptcp_pm_subflow_check_next$"; then
+ 		pm_nl_set_limits $ns1 0 1
+ 		pm_nl_set_limits $ns2 0 1
+ 		pm_nl_add_endpoint $ns2 10.0.3.2 flags subflow
+-		filter_tcp_from $ns1 10.0.3.2 REJECT
+ 		run_tests $ns1 $ns2 10.0.1.1 0 0 0 slow &
+ 
+ 		# mpj subflow will be in TW after the reset
+@@ -1994,11 +2102,18 @@ signal_address_tests()
+ 		# the peer could possibly miss some addr notification, allow retransmission
+ 		ip netns exec $ns1 sysctl -q net.mptcp.add_addr_timeout=1
+ 		run_tests $ns1 $ns2 10.0.1.1 0 0 0 slow
+-		chk_join_nr 3 3 3
+ 
+-		# the server will not signal the address terminating
+-		# the MPC subflow
+-		chk_add_nr 3 3
++		# It is not directly linked to the commit introducing this
++		# symbol but for the parent one which is linked anyway.
++		if ! mptcp_lib_kallsyms_has "mptcp_pm_subflow_check_next$"; then
++			chk_join_nr 3 3 2
++			chk_add_nr 4 4
++		else
++			chk_join_nr 3 3 3
++			# the server will not signal the address terminating
++			# the MPC subflow
++			chk_add_nr 3 3
++		fi
+ 	fi
+ }
+ 
+@@ -2239,7 +2354,12 @@ remove_tests()
+ 		pm_nl_add_endpoint $ns2 10.0.4.2 flags subflow
+ 		run_tests $ns1 $ns2 10.0.1.1 0 -8 -8 slow
+ 		chk_join_nr 3 3 3
+-		chk_rm_nr 0 3 simult
++
++		if mptcp_lib_kversion_ge 5.18; then
++			chk_rm_nr 0 3 simult
++		else
++			chk_rm_nr 3 3
++		fi
+ 	fi
+ 
+ 	# addresses flush
+@@ -2477,7 +2597,8 @@ v4mapped_tests()
+ 
+ mixed_tests()
+ {
+-	if reset "IPv4 sockets do not use IPv6 addresses"; then
++	if reset "IPv4 sockets do not use IPv6 addresses" &&
++	   continue_if mptcp_lib_kversion_ge 6.3; then
+ 		pm_nl_set_limits $ns1 0 1
+ 		pm_nl_set_limits $ns2 1 1
+ 		pm_nl_add_endpoint $ns1 dead:beef:2::1 flags signal
+@@ -2486,7 +2607,8 @@ mixed_tests()
+ 	fi
+ 
+ 	# Need an IPv6 mptcp socket to allow subflows of both families
+-	if reset "simult IPv4 and IPv6 subflows"; then
++	if reset "simult IPv4 and IPv6 subflows" &&
++	   continue_if mptcp_lib_kversion_ge 6.3; then
+ 		pm_nl_set_limits $ns1 0 1
+ 		pm_nl_set_limits $ns2 1 1
+ 		pm_nl_add_endpoint $ns1 10.0.1.1 flags signal
+@@ -2495,7 +2617,8 @@ mixed_tests()
+ 	fi
+ 
+ 	# cross families subflows will not be created even in fullmesh mode
+-	if reset "simult IPv4 and IPv6 subflows, fullmesh 1x1"; then
++	if reset "simult IPv4 and IPv6 subflows, fullmesh 1x1" &&
++	   continue_if mptcp_lib_kversion_ge 6.3; then
+ 		pm_nl_set_limits $ns1 0 4
+ 		pm_nl_set_limits $ns2 1 4
+ 		pm_nl_add_endpoint $ns2 dead:beef:2::2 flags subflow,fullmesh
+@@ -2506,7 +2629,8 @@ mixed_tests()
+ 
+ 	# fullmesh still tries to create all the possibly subflows with
+ 	# matching family
+-	if reset "simult IPv4 and IPv6 subflows, fullmesh 2x2"; then
++	if reset "simult IPv4 and IPv6 subflows, fullmesh 2x2" &&
++	   continue_if mptcp_lib_kversion_ge 6.3; then
+ 		pm_nl_set_limits $ns1 0 4
+ 		pm_nl_set_limits $ns2 2 4
+ 		pm_nl_add_endpoint $ns1 10.0.2.1 flags signal
+@@ -2519,7 +2643,8 @@ mixed_tests()
+ backup_tests()
+ {
+ 	# single subflow, backup
+-	if reset "single subflow, backup"; then
++	if reset "single subflow, backup" &&
++	   continue_if mptcp_lib_kallsyms_has "subflow_rebuild_header$"; then
+ 		pm_nl_set_limits $ns1 0 1
+ 		pm_nl_set_limits $ns2 0 1
+ 		pm_nl_add_endpoint $ns2 10.0.3.2 flags subflow,backup
+@@ -2529,7 +2654,8 @@ backup_tests()
+ 	fi
+ 
+ 	# single address, backup
+-	if reset "single address, backup"; then
++	if reset "single address, backup" &&
++	   continue_if mptcp_lib_kallsyms_has "subflow_rebuild_header$"; then
+ 		pm_nl_set_limits $ns1 0 1
+ 		pm_nl_add_endpoint $ns1 10.0.2.1 flags signal
+ 		pm_nl_set_limits $ns2 1 1
+@@ -2540,7 +2666,8 @@ backup_tests()
+ 	fi
+ 
+ 	# single address with port, backup
+-	if reset "single address with port, backup"; then
++	if reset "single address with port, backup" &&
++	   continue_if mptcp_lib_kallsyms_has "subflow_rebuild_header$"; then
+ 		pm_nl_set_limits $ns1 0 1
+ 		pm_nl_add_endpoint $ns1 10.0.2.1 flags signal port 10100
+ 		pm_nl_set_limits $ns2 1 1
+@@ -2550,14 +2677,16 @@ backup_tests()
+ 		chk_prio_nr 1 1
+ 	fi
+ 
+-	if reset "mpc backup"; then
++	if reset "mpc backup" &&
++	   continue_if mptcp_lib_kallsyms_doesnt_have "mptcp_subflow_send_ack$"; then
+ 		pm_nl_add_endpoint $ns2 10.0.1.2 flags subflow,backup
+ 		run_tests $ns1 $ns2 10.0.1.1 0 0 0 slow
+ 		chk_join_nr 0 0 0
+ 		chk_prio_nr 0 1
+ 	fi
+ 
+-	if reset "mpc backup both sides"; then
++	if reset "mpc backup both sides" &&
++	   continue_if mptcp_lib_kallsyms_doesnt_have "mptcp_subflow_send_ack$"; then
+ 		pm_nl_add_endpoint $ns1 10.0.1.1 flags subflow,backup
+ 		pm_nl_add_endpoint $ns2 10.0.1.2 flags subflow,backup
+ 		run_tests $ns1 $ns2 10.0.1.1 0 0 0 slow
+@@ -2565,14 +2694,16 @@ backup_tests()
+ 		chk_prio_nr 1 1
+ 	fi
+ 
+-	if reset "mpc switch to backup"; then
++	if reset "mpc switch to backup" &&
++	   continue_if mptcp_lib_kallsyms_doesnt_have "mptcp_subflow_send_ack$"; then
+ 		pm_nl_add_endpoint $ns2 10.0.1.2 flags subflow
+ 		run_tests $ns1 $ns2 10.0.1.1 0 0 0 slow backup
+ 		chk_join_nr 0 0 0
+ 		chk_prio_nr 0 1
+ 	fi
+ 
+-	if reset "mpc switch to backup both sides"; then
++	if reset "mpc switch to backup both sides" &&
++	   continue_if mptcp_lib_kallsyms_doesnt_have "mptcp_subflow_send_ack$"; then
+ 		pm_nl_add_endpoint $ns1 10.0.1.1 flags subflow
+ 		pm_nl_add_endpoint $ns2 10.0.1.2 flags subflow
+ 		run_tests $ns1 $ns2 10.0.1.1 0 0 0 slow backup
+@@ -2598,38 +2729,41 @@ verify_listener_events()
+ 	local family
+ 	local saddr
+ 	local sport
++	local name
+ 
+ 	if [ $e_type = $LISTENER_CREATED ]; then
+-		stdbuf -o0 -e0 printf "\t\t\t\t\t CREATE_LISTENER %s:%s"\
+-			$e_saddr $e_sport
++		name="LISTENER_CREATED"
+ 	elif [ $e_type = $LISTENER_CLOSED ]; then
+-		stdbuf -o0 -e0 printf "\t\t\t\t\t CLOSE_LISTENER %s:%s "\
+-			$e_saddr $e_sport
++		name="LISTENER_CLOSED"
++	else
++		name="$e_type"
+ 	fi
+ 
+-	type=$(grep "type:$e_type," $evt |
+-	       sed --unbuffered -n 's/.*\(type:\)\([[:digit:]]*\).*$/\2/p;q')
+-	family=$(grep "type:$e_type," $evt |
+-		 sed --unbuffered -n 's/.*\(family:\)\([[:digit:]]*\).*$/\2/p;q')
+-	sport=$(grep "type:$e_type," $evt |
+-		sed --unbuffered -n 's/.*\(sport:\)\([[:digit:]]*\).*$/\2/p;q')
++	printf "%-${nr_blank}s %s %s:%s " " " "$name" "$e_saddr" "$e_sport"
++
++	if ! mptcp_lib_kallsyms_has "mptcp_event_pm_listener$"; then
++		printf "[skip]: event not supported\n"
++		return
++	fi
++
++	type=$(grep "type:$e_type," $evt | sed -n 's/.*\(type:\)\([[:digit:]]*\).*$/\2/p;q')
++	family=$(grep "type:$e_type," $evt | sed -n 's/.*\(family:\)\([[:digit:]]*\).*$/\2/p;q')
++	sport=$(grep "type:$e_type," $evt | sed -n 's/.*\(sport:\)\([[:digit:]]*\).*$/\2/p;q')
+ 	if [ $family ] && [ $family = $AF_INET6 ]; then
+-		saddr=$(grep "type:$e_type," $evt |
+-			sed --unbuffered -n 's/.*\(saddr6:\)\([0-9a-f:.]*\).*$/\2/p;q')
++		saddr=$(grep "type:$e_type," $evt | sed -n 's/.*\(saddr6:\)\([0-9a-f:.]*\).*$/\2/p;q')
+ 	else
+-		saddr=$(grep "type:$e_type," $evt |
+-			sed --unbuffered -n 's/.*\(saddr4:\)\([0-9.]*\).*$/\2/p;q')
++		saddr=$(grep "type:$e_type," $evt | sed -n 's/.*\(saddr4:\)\([0-9.]*\).*$/\2/p;q')
+ 	fi
+ 
+ 	if [ $type ] && [ $type = $e_type ] &&
+ 	   [ $family ] && [ $family = $e_family ] &&
+ 	   [ $saddr ] && [ $saddr = $e_saddr ] &&
+ 	   [ $sport ] && [ $sport = $e_sport ]; then
+-		stdbuf -o0 -e0 printf "[ ok ]\n"
++		echo "[ ok ]"
+ 		return 0
+ 	fi
+ 	fail_test
+-	stdbuf -o0 -e0 printf "[fail]\n"
++	echo "[fail]"
+ }
+ 
+ add_addr_ports_tests()
+@@ -2935,7 +3069,8 @@ fullmesh_tests()
+ 	fi
+ 
+ 	# set fullmesh flag
+-	if reset "set fullmesh flag test"; then
++	if reset "set fullmesh flag test" &&
++	   continue_if mptcp_lib_kversion_ge 5.18; then
+ 		pm_nl_set_limits $ns1 4 4
+ 		pm_nl_add_endpoint $ns1 10.0.2.1 flags subflow
+ 		pm_nl_set_limits $ns2 4 4
+@@ -2945,7 +3080,8 @@ fullmesh_tests()
+ 	fi
+ 
+ 	# set nofullmesh flag
+-	if reset "set nofullmesh flag test"; then
++	if reset "set nofullmesh flag test" &&
++	   continue_if mptcp_lib_kversion_ge 5.18; then
+ 		pm_nl_set_limits $ns1 4 4
+ 		pm_nl_add_endpoint $ns1 10.0.2.1 flags subflow,fullmesh
+ 		pm_nl_set_limits $ns2 4 4
+@@ -2955,7 +3091,8 @@ fullmesh_tests()
+ 	fi
+ 
+ 	# set backup,fullmesh flags
+-	if reset "set backup,fullmesh flags test"; then
++	if reset "set backup,fullmesh flags test" &&
++	   continue_if mptcp_lib_kversion_ge 5.18; then
+ 		pm_nl_set_limits $ns1 4 4
+ 		pm_nl_add_endpoint $ns1 10.0.2.1 flags subflow
+ 		pm_nl_set_limits $ns2 4 4
+@@ -2966,7 +3103,8 @@ fullmesh_tests()
+ 	fi
+ 
+ 	# set nobackup,nofullmesh flags
+-	if reset "set nobackup,nofullmesh flags test"; then
++	if reset "set nobackup,nofullmesh flags test" &&
++	   continue_if mptcp_lib_kversion_ge 5.18; then
+ 		pm_nl_set_limits $ns1 4 4
+ 		pm_nl_set_limits $ns2 4 4
+ 		pm_nl_add_endpoint $ns2 10.0.2.2 flags subflow,backup,fullmesh
+@@ -2979,14 +3117,14 @@ fullmesh_tests()
+ 
+ fastclose_tests()
+ {
+-	if reset "fastclose test"; then
++	if reset_check_counter "fastclose test" "MPTcpExtMPFastcloseTx"; then
+ 		run_tests $ns1 $ns2 10.0.1.1 1024 0 fastclose_client
+ 		chk_join_nr 0 0 0
+ 		chk_fclose_nr 1 1
+ 		chk_rst_nr 1 1 invert
+ 	fi
+ 
+-	if reset "fastclose server test"; then
++	if reset_check_counter "fastclose server test" "MPTcpExtMPFastcloseRx"; then
+ 		run_tests $ns1 $ns2 10.0.1.1 1024 0 fastclose_server
+ 		chk_join_nr 0 0 0
+ 		chk_fclose_nr 1 1 invert
+@@ -3024,7 +3162,8 @@ fail_tests()
+ userspace_tests()
+ {
+ 	# userspace pm type prevents add_addr
+-	if reset "userspace pm type prevents add_addr"; then
++	if reset "userspace pm type prevents add_addr" &&
++	   continue_if mptcp_lib_has_file '/proc/sys/net/mptcp/pm_type'; then
+ 		set_userspace_pm $ns1
+ 		pm_nl_set_limits $ns1 0 2
+ 		pm_nl_set_limits $ns2 0 2
+@@ -3035,7 +3174,8 @@ userspace_tests()
+ 	fi
+ 
+ 	# userspace pm type does not echo add_addr without daemon
+-	if reset "userspace pm no echo w/o daemon"; then
++	if reset "userspace pm no echo w/o daemon" &&
++	   continue_if mptcp_lib_has_file '/proc/sys/net/mptcp/pm_type'; then
+ 		set_userspace_pm $ns2
+ 		pm_nl_set_limits $ns1 0 2
+ 		pm_nl_set_limits $ns2 0 2
+@@ -3046,7 +3186,8 @@ userspace_tests()
+ 	fi
+ 
+ 	# userspace pm type rejects join
+-	if reset "userspace pm type rejects join"; then
++	if reset "userspace pm type rejects join" &&
++	   continue_if mptcp_lib_has_file '/proc/sys/net/mptcp/pm_type'; then
+ 		set_userspace_pm $ns1
+ 		pm_nl_set_limits $ns1 1 1
+ 		pm_nl_set_limits $ns2 1 1
+@@ -3056,7 +3197,8 @@ userspace_tests()
+ 	fi
+ 
+ 	# userspace pm type does not send join
+-	if reset "userspace pm type does not send join"; then
++	if reset "userspace pm type does not send join" &&
++	   continue_if mptcp_lib_has_file '/proc/sys/net/mptcp/pm_type'; then
+ 		set_userspace_pm $ns2
+ 		pm_nl_set_limits $ns1 1 1
+ 		pm_nl_set_limits $ns2 1 1
+@@ -3066,7 +3208,8 @@ userspace_tests()
+ 	fi
+ 
+ 	# userspace pm type prevents mp_prio
+-	if reset "userspace pm type prevents mp_prio"; then
++	if reset "userspace pm type prevents mp_prio" &&
++	   continue_if mptcp_lib_has_file '/proc/sys/net/mptcp/pm_type'; then
+ 		set_userspace_pm $ns1
+ 		pm_nl_set_limits $ns1 1 1
+ 		pm_nl_set_limits $ns2 1 1
+@@ -3077,7 +3220,8 @@ userspace_tests()
+ 	fi
+ 
+ 	# userspace pm type prevents rm_addr
+-	if reset "userspace pm type prevents rm_addr"; then
++	if reset "userspace pm type prevents rm_addr" &&
++	   continue_if mptcp_lib_has_file '/proc/sys/net/mptcp/pm_type'; then
+ 		set_userspace_pm $ns1
+ 		set_userspace_pm $ns2
+ 		pm_nl_set_limits $ns1 0 1
+@@ -3089,7 +3233,8 @@ userspace_tests()
+ 	fi
+ 
+ 	# userspace pm add & remove address
+-	if reset_with_events "userspace pm add & remove address"; then
++	if reset_with_events "userspace pm add & remove address" &&
++	   continue_if mptcp_lib_has_file '/proc/sys/net/mptcp/pm_type'; then
+ 		set_userspace_pm $ns1
+ 		pm_nl_set_limits $ns2 1 1
+ 		run_tests $ns1 $ns2 10.0.1.1 0 userspace_1 0 slow
+@@ -3100,7 +3245,8 @@ userspace_tests()
+ 	fi
+ 
+ 	# userspace pm create destroy subflow
+-	if reset_with_events "userspace pm create destroy subflow"; then
++	if reset_with_events "userspace pm create destroy subflow" &&
++	   continue_if mptcp_lib_has_file '/proc/sys/net/mptcp/pm_type'; then
+ 		set_userspace_pm $ns2
+ 		pm_nl_set_limits $ns1 0 1
+ 		run_tests $ns1 $ns2 10.0.1.1 0 0 userspace_1 slow
+@@ -3112,8 +3258,10 @@ userspace_tests()
+ 
+ endpoint_tests()
+ {
++	# subflow_rebuild_header is needed to support the implicit flag
+ 	# userspace pm type prevents add_addr
+-	if reset "implicit EP"; then
++	if reset "implicit EP" &&
++	   mptcp_lib_kallsyms_has "subflow_rebuild_header$"; then
+ 		pm_nl_set_limits $ns1 2 2
+ 		pm_nl_set_limits $ns2 2 2
+ 		pm_nl_add_endpoint $ns1 10.0.2.1 flags signal
+@@ -3133,7 +3281,8 @@ endpoint_tests()
+ 		kill_tests_wait
+ 	fi
+ 
+-	if reset "delete and re-add"; then
++	if reset "delete and re-add" &&
++	   mptcp_lib_kallsyms_has "subflow_rebuild_header$"; then
+ 		pm_nl_set_limits $ns1 1 1
+ 		pm_nl_set_limits $ns2 1 1
+ 		pm_nl_add_endpoint $ns2 10.0.2.2 id 2 dev ns2eth2 flags subflow
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_lib.sh b/tools/testing/selftests/net/mptcp/mptcp_lib.sh
+index 3286536b79d55..f32045b23b893 100644
+--- a/tools/testing/selftests/net/mptcp/mptcp_lib.sh
++++ b/tools/testing/selftests/net/mptcp/mptcp_lib.sh
+@@ -38,3 +38,67 @@ mptcp_lib_check_mptcp() {
+ 		exit ${KSFT_SKIP}
+ 	fi
+ }
++
++mptcp_lib_check_kallsyms() {
++	if ! mptcp_lib_has_file "/proc/kallsyms"; then
++		echo "SKIP: CONFIG_KALLSYMS is missing"
++		exit ${KSFT_SKIP}
++	fi
++}
++
++# Internal: use mptcp_lib_kallsyms_has() instead
++__mptcp_lib_kallsyms_has() {
++	local sym="${1}"
++
++	mptcp_lib_check_kallsyms
++
++	grep -q " ${sym}" /proc/kallsyms
++}
++
++# $1: part of a symbol to look at, add '$' at the end for full name
++mptcp_lib_kallsyms_has() {
++	local sym="${1}"
++
++	if __mptcp_lib_kallsyms_has "${sym}"; then
++		return 0
++	fi
++
++	mptcp_lib_fail_if_expected_feature "${sym} symbol not found"
++}
++
++# $1: part of a symbol to look at, add '$' at the end for full name
++mptcp_lib_kallsyms_doesnt_have() {
++	local sym="${1}"
++
++	if ! __mptcp_lib_kallsyms_has "${sym}"; then
++		return 0
++	fi
++
++	mptcp_lib_fail_if_expected_feature "${sym} symbol has been found"
++}
++
++# !!!AVOID USING THIS!!!
++# Features might not land in the expected version and features can be backported
++#
++# $1: kernel version, e.g. 6.3
++mptcp_lib_kversion_ge() {
++	local exp_maj="${1%.*}"
++	local exp_min="${1#*.}"
++	local v maj min
++
++	# If the kernel has backported features, set this env var to 1:
++	if [ "${SELFTESTS_MPTCP_LIB_NO_KVERSION_CHECK:-}" = "1" ]; then
++		return 0
++	fi
++
++	v=$(uname -r | cut -d'.' -f1,2)
++	maj=${v%.*}
++	min=${v#*.}
++
++	if   [ "${maj}" -gt "${exp_maj}" ] ||
++	   { [ "${maj}" -eq "${exp_maj}" ] && [ "${min}" -ge "${exp_min}" ]; }; then
++		return 0
++	fi
++
++	mptcp_lib_fail_if_expected_feature "kernel version ${1} lower than ${v}"
++}
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_sockopt.c b/tools/testing/selftests/net/mptcp/mptcp_sockopt.c
+index ae61f39556ca8..b35148edbf024 100644
+--- a/tools/testing/selftests/net/mptcp/mptcp_sockopt.c
++++ b/tools/testing/selftests/net/mptcp/mptcp_sockopt.c
+@@ -87,6 +87,10 @@ struct so_state {
+ 	uint64_t tcpi_rcv_delta;
+ };
+ 
++#ifndef MIN
++#define MIN(a, b) ((a) < (b) ? (a) : (b))
++#endif
++
+ static void die_perror(const char *msg)
+ {
+ 	perror(msg);
+@@ -349,13 +353,14 @@ static void do_getsockopt_tcp_info(struct so_state *s, int fd, size_t r, size_t
+ 			xerror("getsockopt MPTCP_TCPINFO (tries %d, %m)");
+ 
+ 		assert(olen <= sizeof(ti));
+-		assert(ti.d.size_user == ti.d.size_kernel);
+-		assert(ti.d.size_user == sizeof(struct tcp_info));
++		assert(ti.d.size_kernel > 0);
++		assert(ti.d.size_user ==
++		       MIN(ti.d.size_kernel, sizeof(struct tcp_info)));
+ 		assert(ti.d.num_subflows == 1);
+ 
+ 		assert(olen > (socklen_t)sizeof(struct mptcp_subflow_data));
+ 		olen -= sizeof(struct mptcp_subflow_data);
+-		assert(olen == sizeof(struct tcp_info));
++		assert(olen == ti.d.size_user);
+ 
+ 		if (ti.ti[0].tcpi_bytes_sent == w &&
+ 		    ti.ti[0].tcpi_bytes_received == r)
+@@ -401,13 +406,14 @@ static void do_getsockopt_subflow_addrs(int fd)
+ 		die_perror("getsockopt MPTCP_SUBFLOW_ADDRS");
+ 
+ 	assert(olen <= sizeof(addrs));
+-	assert(addrs.d.size_user == addrs.d.size_kernel);
+-	assert(addrs.d.size_user == sizeof(struct mptcp_subflow_addrs));
++	assert(addrs.d.size_kernel > 0);
++	assert(addrs.d.size_user ==
++	       MIN(addrs.d.size_kernel, sizeof(struct mptcp_subflow_addrs)));
+ 	assert(addrs.d.num_subflows == 1);
+ 
+ 	assert(olen > (socklen_t)sizeof(struct mptcp_subflow_data));
+ 	olen -= sizeof(struct mptcp_subflow_data);
+-	assert(olen == sizeof(struct mptcp_subflow_addrs));
++	assert(olen == addrs.d.size_user);
+ 
+ 	llen = sizeof(local);
+ 	ret = getsockname(fd, (struct sockaddr *)&local, &llen);
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_sockopt.sh b/tools/testing/selftests/net/mptcp/mptcp_sockopt.sh
+index ff5adbb9c7f2b..f295a371ff148 100755
+--- a/tools/testing/selftests/net/mptcp/mptcp_sockopt.sh
++++ b/tools/testing/selftests/net/mptcp/mptcp_sockopt.sh
+@@ -87,6 +87,7 @@ cleanup()
+ }
+ 
+ mptcp_lib_check_mptcp
++mptcp_lib_check_kallsyms
+ 
+ ip -Version > /dev/null 2>&1
+ if [ $? -ne 0 ];then
+@@ -186,9 +187,14 @@ do_transfer()
+ 		local_addr="0.0.0.0"
+ 	fi
+ 
++	cmsg="TIMESTAMPNS"
++	if mptcp_lib_kallsyms_has "mptcp_ioctl$"; then
++		cmsg+=",TCPINQ"
++	fi
++
+ 	timeout ${timeout_test} \
+ 		ip netns exec ${listener_ns} \
+-			$mptcp_connect -t ${timeout_poll} -l -M 1 -p $port -s ${srv_proto} -c TIMESTAMPNS,TCPINQ \
++			$mptcp_connect -t ${timeout_poll} -l -M 1 -p $port -s ${srv_proto} -c "${cmsg}" \
+ 				${local_addr} < "$sin" > "$sout" &
+ 	local spid=$!
+ 
+@@ -196,7 +202,7 @@ do_transfer()
+ 
+ 	timeout ${timeout_test} \
+ 		ip netns exec ${connector_ns} \
+-			$mptcp_connect -t ${timeout_poll} -M 2 -p $port -s ${cl_proto} -c TIMESTAMPNS,TCPINQ \
++			$mptcp_connect -t ${timeout_poll} -M 2 -p $port -s ${cl_proto} -c "${cmsg}" \
+ 				$connect_addr < "$cin" > "$cout" &
+ 
+ 	local cpid=$!
+@@ -253,6 +259,11 @@ do_mptcp_sockopt_tests()
+ {
+ 	local lret=0
+ 
++	if ! mptcp_lib_kallsyms_has "mptcp_diag_fill_info$"; then
++		echo "INFO: MPTCP sockopt not supported: SKIP"
++		return
++	fi
++
+ 	ip netns exec "$ns_sbox" ./mptcp_sockopt
+ 	lret=$?
+ 
+@@ -307,6 +318,11 @@ do_tcpinq_tests()
+ {
+ 	local lret=0
+ 
++	if ! mptcp_lib_kallsyms_has "mptcp_ioctl$"; then
++		echo "INFO: TCP_INQ not supported: SKIP"
++		return
++	fi
++
+ 	local args
+ 	for args in "-t tcp" "-r tcp"; do
+ 		do_tcpinq_test $args
+diff --git a/tools/testing/selftests/net/mptcp/pm_netlink.sh b/tools/testing/selftests/net/mptcp/pm_netlink.sh
+index 32f7533e0919a..d02e0d63a8f91 100755
+--- a/tools/testing/selftests/net/mptcp/pm_netlink.sh
++++ b/tools/testing/selftests/net/mptcp/pm_netlink.sh
+@@ -73,8 +73,12 @@ check()
+ }
+ 
+ check "ip netns exec $ns1 ./pm_nl_ctl dump" "" "defaults addr list"
+-check "ip netns exec $ns1 ./pm_nl_ctl limits" "accept 0
++
++default_limits="$(ip netns exec $ns1 ./pm_nl_ctl limits)"
++if mptcp_lib_expect_all_features; then
++	check "ip netns exec $ns1 ./pm_nl_ctl limits" "accept 0
+ subflows 2" "defaults limits"
++fi
+ 
+ ip netns exec $ns1 ./pm_nl_ctl add 10.0.1.1
+ ip netns exec $ns1 ./pm_nl_ctl add 10.0.1.2 flags subflow dev lo
+@@ -121,12 +125,10 @@ ip netns exec $ns1 ./pm_nl_ctl flush
+ check "ip netns exec $ns1 ./pm_nl_ctl dump" "" "flush addrs"
+ 
+ ip netns exec $ns1 ./pm_nl_ctl limits 9 1
+-check "ip netns exec $ns1 ./pm_nl_ctl limits" "accept 0
+-subflows 2" "rcv addrs above hard limit"
++check "ip netns exec $ns1 ./pm_nl_ctl limits" "$default_limits" "rcv addrs above hard limit"
+ 
+ ip netns exec $ns1 ./pm_nl_ctl limits 1 9
+-check "ip netns exec $ns1 ./pm_nl_ctl limits" "accept 0
+-subflows 2" "subflows above hard limit"
++check "ip netns exec $ns1 ./pm_nl_ctl limits" "$default_limits" "subflows above hard limit"
+ 
+ ip netns exec $ns1 ./pm_nl_ctl limits 8 8
+ check "ip netns exec $ns1 ./pm_nl_ctl limits" "accept 8
+@@ -176,14 +178,19 @@ subflow,backup 10.0.1.1" "set flags (backup)"
+ ip netns exec $ns1 ./pm_nl_ctl set 10.0.1.1 flags nobackup
+ check "ip netns exec $ns1 ./pm_nl_ctl dump" "id 1 flags \
+ subflow 10.0.1.1" "          (nobackup)"
++
++# fullmesh support has been added later
+ ip netns exec $ns1 ./pm_nl_ctl set id 1 flags fullmesh
+-check "ip netns exec $ns1 ./pm_nl_ctl dump" "id 1 flags \
++if ip netns exec $ns1 ./pm_nl_ctl dump | grep -q "fullmesh" ||
++   mptcp_lib_expect_all_features; then
++	check "ip netns exec $ns1 ./pm_nl_ctl dump" "id 1 flags \
+ subflow,fullmesh 10.0.1.1" "          (fullmesh)"
+-ip netns exec $ns1 ./pm_nl_ctl set id 1 flags nofullmesh
+-check "ip netns exec $ns1 ./pm_nl_ctl dump" "id 1 flags \
++	ip netns exec $ns1 ./pm_nl_ctl set id 1 flags nofullmesh
++	check "ip netns exec $ns1 ./pm_nl_ctl dump" "id 1 flags \
+ subflow 10.0.1.1" "          (nofullmesh)"
+-ip netns exec $ns1 ./pm_nl_ctl set id 1 flags backup,fullmesh
+-check "ip netns exec $ns1 ./pm_nl_ctl dump" "id 1 flags \
++	ip netns exec $ns1 ./pm_nl_ctl set id 1 flags backup,fullmesh
++	check "ip netns exec $ns1 ./pm_nl_ctl dump" "id 1 flags \
+ subflow,backup,fullmesh 10.0.1.1" "          (backup,fullmesh)"
++fi
+ 
+ exit $ret
+diff --git a/tools/testing/selftests/net/mptcp/userspace_pm.sh b/tools/testing/selftests/net/mptcp/userspace_pm.sh
+index 8092399d911f1..98d9e4d2d3fc2 100755
+--- a/tools/testing/selftests/net/mptcp/userspace_pm.sh
++++ b/tools/testing/selftests/net/mptcp/userspace_pm.sh
+@@ -4,11 +4,17 @@
+ . "$(dirname "${0}")/mptcp_lib.sh"
+ 
+ mptcp_lib_check_mptcp
++mptcp_lib_check_kallsyms
++
++if ! mptcp_lib_has_file '/proc/sys/net/mptcp/pm_type'; then
++	echo "userspace pm tests are not supported by the kernel: SKIP"
++	exit ${KSFT_SKIP}
++fi
+ 
+ ip -Version > /dev/null 2>&1
+ if [ $? -ne 0 ];then
+ 	echo "SKIP: Cannot not run test without ip tool"
+-	exit 1
++	exit ${KSFT_SKIP}
+ fi
+ 
+ ANNOUNCED=6        # MPTCP_EVENT_ANNOUNCED
+@@ -909,6 +915,11 @@ test_listener()
+ {
+ 	print_title "Listener tests"
+ 
++	if ! mptcp_lib_kallsyms_has "mptcp_event_pm_listener$"; then
++		stdbuf -o0 -e0 printf "LISTENER events                                            \t[SKIP] Not supported\n"
++		return
++	fi
++
+ 	# Capture events on the network namespace running the client
+ 	:>$client_evts
+ 
+diff --git a/tools/testing/selftests/net/tls.c b/tools/testing/selftests/net/tls.c
+index 2cbb12736596d..c0ad8385441f2 100644
+--- a/tools/testing/selftests/net/tls.c
++++ b/tools/testing/selftests/net/tls.c
+@@ -25,6 +25,8 @@
+ #define TLS_PAYLOAD_MAX_LEN 16384
+ #define SOL_TLS 282
+ 
++static int fips_enabled;
++
+ struct tls_crypto_info_keys {
+ 	union {
+ 		struct tls12_crypto_info_aes_gcm_128 aes128;
+@@ -235,7 +237,7 @@ FIXTURE_VARIANT(tls)
+ {
+ 	uint16_t tls_version;
+ 	uint16_t cipher_type;
+-	bool nopad;
++	bool nopad, fips_non_compliant;
+ };
+ 
+ FIXTURE_VARIANT_ADD(tls, 12_aes_gcm)
+@@ -254,24 +256,28 @@ FIXTURE_VARIANT_ADD(tls, 12_chacha)
+ {
+ 	.tls_version = TLS_1_2_VERSION,
+ 	.cipher_type = TLS_CIPHER_CHACHA20_POLY1305,
++	.fips_non_compliant = true,
+ };
+ 
+ FIXTURE_VARIANT_ADD(tls, 13_chacha)
+ {
+ 	.tls_version = TLS_1_3_VERSION,
+ 	.cipher_type = TLS_CIPHER_CHACHA20_POLY1305,
++	.fips_non_compliant = true,
+ };
+ 
+ FIXTURE_VARIANT_ADD(tls, 13_sm4_gcm)
+ {
+ 	.tls_version = TLS_1_3_VERSION,
+ 	.cipher_type = TLS_CIPHER_SM4_GCM,
++	.fips_non_compliant = true,
+ };
+ 
+ FIXTURE_VARIANT_ADD(tls, 13_sm4_ccm)
+ {
+ 	.tls_version = TLS_1_3_VERSION,
+ 	.cipher_type = TLS_CIPHER_SM4_CCM,
++	.fips_non_compliant = true,
+ };
+ 
+ FIXTURE_VARIANT_ADD(tls, 12_aes_ccm)
+@@ -311,6 +317,9 @@ FIXTURE_SETUP(tls)
+ 	int one = 1;
+ 	int ret;
+ 
++	if (fips_enabled && variant->fips_non_compliant)
++		SKIP(return, "Unsupported cipher in FIPS mode");
++
+ 	tls_crypto_info_init(variant->tls_version, variant->cipher_type,
+ 			     &tls12);
+ 
+@@ -1820,4 +1829,17 @@ TEST(tls_v6ops) {
+ 	close(sfd);
+ }
+ 
++static void __attribute__((constructor)) fips_check(void) {
++	int res;
++	FILE *f;
++
++	f = fopen("/proc/sys/crypto/fips_enabled", "r");
++	if (f) {
++		res = fscanf(f, "%d", &fips_enabled);
++		if (res != 1)
++			ksft_print_msg("ERROR: Couldn't read /proc/sys/crypto/fips_enabled\n");
++		fclose(f);
++	}
++}
++
+ TEST_HARNESS_MAIN
+diff --git a/tools/testing/selftests/net/vrf-xfrm-tests.sh b/tools/testing/selftests/net/vrf-xfrm-tests.sh
+index 184da81f554ff..452638ae8aed8 100755
+--- a/tools/testing/selftests/net/vrf-xfrm-tests.sh
++++ b/tools/testing/selftests/net/vrf-xfrm-tests.sh
+@@ -264,60 +264,60 @@ setup_xfrm()
+ 	ip -netns host1 xfrm state add src ${HOST1_4} dst ${HOST2_4} \
+ 	    proto esp spi ${SPI_1} reqid 0 mode tunnel \
+ 	    replay-window 4 replay-oseq 0x4 \
+-	    auth-trunc 'hmac(md5)' ${AUTH_1} 96 \
+-	    enc 'cbc(des3_ede)' ${ENC_1} \
++	    auth-trunc 'hmac(sha1)' ${AUTH_1} 96 \
++	    enc 'cbc(aes)' ${ENC_1} \
+ 	    sel src ${h1_4} dst ${h2_4} ${devarg}
+ 
+ 	ip -netns host2 xfrm state add src ${HOST1_4} dst ${HOST2_4} \
+ 	    proto esp spi ${SPI_1} reqid 0 mode tunnel \
+ 	    replay-window 4 replay-oseq 0x4 \
+-	    auth-trunc 'hmac(md5)' ${AUTH_1} 96 \
+-	    enc 'cbc(des3_ede)' ${ENC_1} \
++	    auth-trunc 'hmac(sha1)' ${AUTH_1} 96 \
++	    enc 'cbc(aes)' ${ENC_1} \
+ 	    sel src ${h1_4} dst ${h2_4}
+ 
+ 
+ 	ip -netns host1 xfrm state add src ${HOST2_4} dst ${HOST1_4} \
+ 	    proto esp spi ${SPI_2} reqid 0 mode tunnel \
+ 	    replay-window 4 replay-oseq 0x4 \
+-	    auth-trunc 'hmac(md5)' ${AUTH_2} 96 \
+-	    enc 'cbc(des3_ede)' ${ENC_2} \
++	    auth-trunc 'hmac(sha1)' ${AUTH_2} 96 \
++	    enc 'cbc(aes)' ${ENC_2} \
+ 	    sel src ${h2_4} dst ${h1_4} ${devarg}
+ 
+ 	ip -netns host2 xfrm state add src ${HOST2_4} dst ${HOST1_4} \
+ 	    proto esp spi ${SPI_2} reqid 0 mode tunnel \
+ 	    replay-window 4 replay-oseq 0x4 \
+-	    auth-trunc 'hmac(md5)' ${AUTH_2} 96 \
+-	    enc 'cbc(des3_ede)' ${ENC_2} \
++	    auth-trunc 'hmac(sha1)' ${AUTH_2} 96 \
++	    enc 'cbc(aes)' ${ENC_2} \
+ 	    sel src ${h2_4} dst ${h1_4}
+ 
+ 
+ 	ip -6 -netns host1 xfrm state add src ${HOST1_6} dst ${HOST2_6} \
+ 	    proto esp spi ${SPI_1} reqid 0 mode tunnel \
+ 	    replay-window 4 replay-oseq 0x4 \
+-	    auth-trunc 'hmac(md5)' ${AUTH_1} 96 \
+-	    enc 'cbc(des3_ede)' ${ENC_1} \
++	    auth-trunc 'hmac(sha1)' ${AUTH_1} 96 \
++	    enc 'cbc(aes)' ${ENC_1} \
+ 	    sel src ${h1_6} dst ${h2_6} ${devarg}
+ 
+ 	ip -6 -netns host2 xfrm state add src ${HOST1_6} dst ${HOST2_6} \
+ 	    proto esp spi ${SPI_1} reqid 0 mode tunnel \
+ 	    replay-window 4 replay-oseq 0x4 \
+-	    auth-trunc 'hmac(md5)' ${AUTH_1} 96 \
+-	    enc 'cbc(des3_ede)' ${ENC_1} \
++	    auth-trunc 'hmac(sha1)' ${AUTH_1} 96 \
++	    enc 'cbc(aes)' ${ENC_1} \
+ 	    sel src ${h1_6} dst ${h2_6}
+ 
+ 
+ 	ip -6 -netns host1 xfrm state add src ${HOST2_6} dst ${HOST1_6} \
+ 	    proto esp spi ${SPI_2} reqid 0 mode tunnel \
+ 	    replay-window 4 replay-oseq 0x4 \
+-	    auth-trunc 'hmac(md5)' ${AUTH_2} 96 \
+-	    enc 'cbc(des3_ede)' ${ENC_2} \
++	    auth-trunc 'hmac(sha1)' ${AUTH_2} 96 \
++	    enc 'cbc(aes)' ${ENC_2} \
+ 	    sel src ${h2_6} dst ${h1_6} ${devarg}
+ 
+ 	ip -6 -netns host2 xfrm state add src ${HOST2_6} dst ${HOST1_6} \
+ 	    proto esp spi ${SPI_2} reqid 0 mode tunnel \
+ 	    replay-window 4 replay-oseq 0x4 \
+-	    auth-trunc 'hmac(md5)' ${AUTH_2} 96 \
+-	    enc 'cbc(des3_ede)' ${ENC_2} \
++	    auth-trunc 'hmac(sha1)' ${AUTH_2} 96 \
++	    enc 'cbc(aes)' ${ENC_2} \
+ 	    sel src ${h2_6} dst ${h1_6}
+ }
+ 
+diff --git a/tools/virtio/ringtest/main.h b/tools/virtio/ringtest/main.h
+index b68920d527503..d18dd317e27f9 100644
+--- a/tools/virtio/ringtest/main.h
++++ b/tools/virtio/ringtest/main.h
+@@ -8,6 +8,7 @@
+ #ifndef MAIN_H
+ #define MAIN_H
+ 
++#include <assert.h>
+ #include <stdbool.h>
+ 
+ extern int param;
+@@ -95,6 +96,8 @@ extern unsigned ring_size;
+ #define cpu_relax() asm ("rep; nop" ::: "memory")
+ #elif defined(__s390x__)
+ #define cpu_relax() barrier()
++#elif defined(__aarch64__)
++#define cpu_relax() asm ("yield" ::: "memory")
+ #else
+ #define cpu_relax() assert(0)
+ #endif
+@@ -112,6 +115,8 @@ static inline void busy_wait(void)
+ 
+ #if defined(__x86_64__) || defined(__i386__)
+ #define smp_mb()     asm volatile("lock; addl $0,-132(%%rsp)" ::: "memory", "cc")
++#elif defined(__aarch64__)
++#define smp_mb()     asm volatile("dmb ish" ::: "memory")
+ #else
+ /*
+  * Not using __ATOMIC_SEQ_CST since gcc docs say they are only synchronized
+@@ -136,10 +141,16 @@ static inline void busy_wait(void)
+ 
+ #if defined(__i386__) || defined(__x86_64__) || defined(__s390x__)
+ #define smp_wmb() barrier()
++#elif defined(__aarch64__)
++#define smp_wmb() asm volatile("dmb ishst" ::: "memory")
+ #else
+ #define smp_wmb() smp_release()
+ #endif
+ 
++#ifndef __always_inline
++#define __always_inline inline __attribute__((always_inline))
++#endif
++
+ static __always_inline
+ void __read_once_size(const volatile void *p, void *res, int size)
+ {
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index 4ec97f72f8ce8..9386e415e6b1d 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -683,6 +683,24 @@ static __always_inline int kvm_handle_hva_range_no_flush(struct mmu_notifier *mn
+ 
+ 	return __kvm_handle_hva_range(kvm, &range);
+ }
++
++static bool kvm_change_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
++{
++	/*
++	 * Skipping invalid memslots is correct if and only change_pte() is
++	 * surrounded by invalidate_range_{start,end}(), which is currently
++	 * guaranteed by the primary MMU.  If that ever changes, KVM needs to
++	 * unmap the memslot instead of skipping the memslot to ensure that KVM
++	 * doesn't hold references to the old PFN.
++	 */
++	WARN_ON_ONCE(!READ_ONCE(kvm->mn_active_invalidate_count));
++
++	if (range->slot->flags & KVM_MEMSLOT_INVALID)
++		return false;
++
++	return kvm_set_spte_gfn(kvm, range);
++}
++
+ static void kvm_mmu_notifier_change_pte(struct mmu_notifier *mn,
+ 					struct mm_struct *mm,
+ 					unsigned long address,
+@@ -704,7 +722,7 @@ static void kvm_mmu_notifier_change_pte(struct mmu_notifier *mn,
+ 	if (!READ_ONCE(kvm->mmu_invalidate_in_progress))
+ 		return;
+ 
+-	kvm_handle_hva_range(mn, address, address + 1, pte, kvm_set_spte_gfn);
++	kvm_handle_hva_range(mn, address, address + 1, pte, kvm_change_spte_gfn);
+ }
+ 
+ void kvm_mmu_invalidate_begin(struct kvm *kvm, unsigned long start,


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [gentoo-commits] proj/linux-patches:6.3 commit in: /
@ 2023-07-01 18:21 Mike Pagano
  0 siblings, 0 replies; 23+ messages in thread
From: Mike Pagano @ 2023-07-01 18:21 UTC (permalink / raw
  To: gentoo-commits

commit:     593914a53adb564b0f550d37c24eedaf39d8aeff
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Jul  1 18:21:03 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Jul  1 18:21:03 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=593914a5

Linux patch 6.3.11

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1010_linux-6.3.11.patch | 2476 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2480 insertions(+)

diff --git a/0000_README b/0000_README
index 79a95e0e..80499d11 100644
--- a/0000_README
+++ b/0000_README
@@ -83,6 +83,10 @@ Patch:  1009_linux-6.3.10.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.3.10
 
+Patch:  1010_linux-6.3.11.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.3.11
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1010_linux-6.3.11.patch b/1010_linux-6.3.11.patch
new file mode 100644
index 00000000..3797a28d
--- /dev/null
+++ b/1010_linux-6.3.11.patch
@@ -0,0 +1,2476 @@
+diff --git a/Makefile b/Makefile
+index 47253ac6c85c1..34349623a76a7 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 3
+-SUBLEVEL = 10
++SUBLEVEL = 11
+ EXTRAVERSION =
+ NAME = Hurr durr I'ma ninja sloth
+ 
+diff --git a/arch/alpha/Kconfig b/arch/alpha/Kconfig
+index 780d4673c3ca7..a40a61583b335 100644
+--- a/arch/alpha/Kconfig
++++ b/arch/alpha/Kconfig
+@@ -29,6 +29,7 @@ config ALPHA
+ 	select GENERIC_SMP_IDLE_THREAD
+ 	select HAVE_ARCH_AUDITSYSCALL
+ 	select HAVE_MOD_ARCH_SPECIFIC
++	select LOCK_MM_AND_FIND_VMA
+ 	select MODULES_USE_ELF_RELA
+ 	select ODD_RT_SIGACTION
+ 	select OLD_SIGSUSPEND
+diff --git a/arch/alpha/mm/fault.c b/arch/alpha/mm/fault.c
+index 7b01ae4f3bc6c..8c9850437e674 100644
+--- a/arch/alpha/mm/fault.c
++++ b/arch/alpha/mm/fault.c
+@@ -119,20 +119,12 @@ do_page_fault(unsigned long address, unsigned long mmcsr,
+ 		flags |= FAULT_FLAG_USER;
+ 	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
+ retry:
+-	mmap_read_lock(mm);
+-	vma = find_vma(mm, address);
++	vma = lock_mm_and_find_vma(mm, address, regs);
+ 	if (!vma)
+-		goto bad_area;
+-	if (vma->vm_start <= address)
+-		goto good_area;
+-	if (!(vma->vm_flags & VM_GROWSDOWN))
+-		goto bad_area;
+-	if (expand_stack(vma, address))
+-		goto bad_area;
++		goto bad_area_nosemaphore;
+ 
+ 	/* Ok, we have a good vm_area for this memory access, so
+ 	   we can handle it.  */
+- good_area:
+ 	si_code = SEGV_ACCERR;
+ 	if (cause < 0) {
+ 		if (!(vma->vm_flags & VM_EXEC))
+@@ -192,6 +184,7 @@ retry:
+  bad_area:
+ 	mmap_read_unlock(mm);
+ 
++ bad_area_nosemaphore:
+ 	if (user_mode(regs))
+ 		goto do_sigsegv;
+ 
+diff --git a/arch/arc/Kconfig b/arch/arc/Kconfig
+index d9a13ccf89a3a..cb1074f74c3f1 100644
+--- a/arch/arc/Kconfig
++++ b/arch/arc/Kconfig
+@@ -41,6 +41,7 @@ config ARC
+ 	select HAVE_PERF_EVENTS
+ 	select HAVE_SYSCALL_TRACEPOINTS
+ 	select IRQ_DOMAIN
++	select LOCK_MM_AND_FIND_VMA
+ 	select MODULES_USE_ELF_RELA
+ 	select OF
+ 	select OF_EARLY_FLATTREE
+diff --git a/arch/arc/mm/fault.c b/arch/arc/mm/fault.c
+index 5ca59a482632a..f59e722d147f9 100644
+--- a/arch/arc/mm/fault.c
++++ b/arch/arc/mm/fault.c
+@@ -113,15 +113,9 @@ void do_page_fault(unsigned long address, struct pt_regs *regs)
+ 
+ 	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
+ retry:
+-	mmap_read_lock(mm);
+-
+-	vma = find_vma(mm, address);
++	vma = lock_mm_and_find_vma(mm, address, regs);
+ 	if (!vma)
+-		goto bad_area;
+-	if (unlikely(address < vma->vm_start)) {
+-		if (!(vma->vm_flags & VM_GROWSDOWN) || expand_stack(vma, address))
+-			goto bad_area;
+-	}
++		goto bad_area_nosemaphore;
+ 
+ 	/*
+ 	 * vm_area is good, now check permissions for this memory access
+@@ -161,6 +155,7 @@ retry:
+ bad_area:
+ 	mmap_read_unlock(mm);
+ 
++bad_area_nosemaphore:
+ 	/*
+ 	 * Major/minor page fault accounting
+ 	 * (in case of retry we only land here once)
+diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
+index e24a9820e12fa..a04bb151a9028 100644
+--- a/arch/arm/Kconfig
++++ b/arch/arm/Kconfig
+@@ -125,6 +125,7 @@ config ARM
+ 	select HAVE_UID16
+ 	select HAVE_VIRT_CPU_ACCOUNTING_GEN
+ 	select IRQ_FORCED_THREADING
++	select LOCK_MM_AND_FIND_VMA
+ 	select MODULES_USE_ELF_REL
+ 	select NEED_DMA_MAP_STATE
+ 	select OF_EARLY_FLATTREE if OF
+diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
+index 2418f1efabd87..0860eeba8bd34 100644
+--- a/arch/arm/mm/fault.c
++++ b/arch/arm/mm/fault.c
+@@ -232,37 +232,11 @@ static inline bool is_permission_fault(unsigned int fsr)
+ 	return false;
+ }
+ 
+-static vm_fault_t __kprobes
+-__do_page_fault(struct mm_struct *mm, unsigned long addr, unsigned int flags,
+-		unsigned long vma_flags, struct pt_regs *regs)
+-{
+-	struct vm_area_struct *vma = find_vma(mm, addr);
+-	if (unlikely(!vma))
+-		return VM_FAULT_BADMAP;
+-
+-	if (unlikely(vma->vm_start > addr)) {
+-		if (!(vma->vm_flags & VM_GROWSDOWN))
+-			return VM_FAULT_BADMAP;
+-		if (addr < FIRST_USER_ADDRESS)
+-			return VM_FAULT_BADMAP;
+-		if (expand_stack(vma, addr))
+-			return VM_FAULT_BADMAP;
+-	}
+-
+-	/*
+-	 * ok, we have a good vm_area for this memory access, check the
+-	 * permissions on the VMA allow for the fault which occurred.
+-	 */
+-	if (!(vma->vm_flags & vma_flags))
+-		return VM_FAULT_BADACCESS;
+-
+-	return handle_mm_fault(vma, addr & PAGE_MASK, flags, regs);
+-}
+-
+ static int __kprobes
+ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
+ {
+ 	struct mm_struct *mm = current->mm;
++	struct vm_area_struct *vma;
+ 	int sig, code;
+ 	vm_fault_t fault;
+ 	unsigned int flags = FAULT_FLAG_DEFAULT;
+@@ -301,31 +275,21 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
+ 
+ 	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, addr);
+ 
+-	/*
+-	 * As per x86, we may deadlock here.  However, since the kernel only
+-	 * validly references user space from well defined areas of the code,
+-	 * we can bug out early if this is from code which shouldn't.
+-	 */
+-	if (!mmap_read_trylock(mm)) {
+-		if (!user_mode(regs) && !search_exception_tables(regs->ARM_pc))
+-			goto no_context;
+ retry:
+-		mmap_read_lock(mm);
+-	} else {
+-		/*
+-		 * The above down_read_trylock() might have succeeded in
+-		 * which case, we'll have missed the might_sleep() from
+-		 * down_read()
+-		 */
+-		might_sleep();
+-#ifdef CONFIG_DEBUG_VM
+-		if (!user_mode(regs) &&
+-		    !search_exception_tables(regs->ARM_pc))
+-			goto no_context;
+-#endif
++	vma = lock_mm_and_find_vma(mm, addr, regs);
++	if (unlikely(!vma)) {
++		fault = VM_FAULT_BADMAP;
++		goto bad_area;
+ 	}
+ 
+-	fault = __do_page_fault(mm, addr, flags, vm_flags, regs);
++	/*
++	 * ok, we have a good vm_area for this memory access, check the
++	 * permissions on the VMA allow for the fault which occurred.
++	 */
++	if (!(vma->vm_flags & vm_flags))
++		fault = VM_FAULT_BADACCESS;
++	else
++		fault = handle_mm_fault(vma, addr & PAGE_MASK, flags, regs);
+ 
+ 	/* If we need to retry but a fatal signal is pending, handle the
+ 	 * signal first. We do not need to release the mmap_lock because
+@@ -356,6 +320,7 @@ retry:
+ 	if (likely(!(fault & (VM_FAULT_ERROR | VM_FAULT_BADMAP | VM_FAULT_BADACCESS))))
+ 		return 0;
+ 
++bad_area:
+ 	/*
+ 	 * If we are in kernel mode at this point, we
+ 	 * have no context to handle this fault with.
+diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
+index 1023e896d46b8..15e097206e1a3 100644
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -219,6 +219,7 @@ config ARM64
+ 	select IRQ_DOMAIN
+ 	select IRQ_FORCED_THREADING
+ 	select KASAN_VMALLOC if KASAN
++	select LOCK_MM_AND_FIND_VMA
+ 	select MODULES_USE_ELF_RELA
+ 	select NEED_DMA_MAP_STATE
+ 	select NEED_SG_DMA_LENGTH
+diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
+index d1136259b7b85..473575a75f557 100644
+--- a/arch/arm64/mm/fault.c
++++ b/arch/arm64/mm/fault.c
+@@ -483,27 +483,14 @@ static void do_bad_area(unsigned long far, unsigned long esr,
+ #define VM_FAULT_BADMAP		((__force vm_fault_t)0x010000)
+ #define VM_FAULT_BADACCESS	((__force vm_fault_t)0x020000)
+ 
+-static vm_fault_t __do_page_fault(struct mm_struct *mm, unsigned long addr,
++static vm_fault_t __do_page_fault(struct mm_struct *mm,
++				  struct vm_area_struct *vma, unsigned long addr,
+ 				  unsigned int mm_flags, unsigned long vm_flags,
+ 				  struct pt_regs *regs)
+ {
+-	struct vm_area_struct *vma = find_vma(mm, addr);
+-
+-	if (unlikely(!vma))
+-		return VM_FAULT_BADMAP;
+-
+ 	/*
+ 	 * Ok, we have a good vm_area for this memory access, so we can handle
+ 	 * it.
+-	 */
+-	if (unlikely(vma->vm_start > addr)) {
+-		if (!(vma->vm_flags & VM_GROWSDOWN))
+-			return VM_FAULT_BADMAP;
+-		if (expand_stack(vma, addr))
+-			return VM_FAULT_BADMAP;
+-	}
+-
+-	/*
+ 	 * Check that the permissions on the VMA allow for the fault which
+ 	 * occurred.
+ 	 */
+@@ -535,6 +522,7 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
+ 	unsigned long vm_flags;
+ 	unsigned int mm_flags = FAULT_FLAG_DEFAULT;
+ 	unsigned long addr = untagged_addr(far);
++	struct vm_area_struct *vma;
+ 
+ 	if (kprobe_page_fault(regs, esr))
+ 		return 0;
+@@ -585,31 +573,14 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
+ 
+ 	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, addr);
+ 
+-	/*
+-	 * As per x86, we may deadlock here. However, since the kernel only
+-	 * validly references user space from well defined areas of the code,
+-	 * we can bug out early if this is from code which shouldn't.
+-	 */
+-	if (!mmap_read_trylock(mm)) {
+-		if (!user_mode(regs) && !search_exception_tables(regs->pc))
+-			goto no_context;
+ retry:
+-		mmap_read_lock(mm);
+-	} else {
+-		/*
+-		 * The above mmap_read_trylock() might have succeeded in which
+-		 * case, we'll have missed the might_sleep() from down_read().
+-		 */
+-		might_sleep();
+-#ifdef CONFIG_DEBUG_VM
+-		if (!user_mode(regs) && !search_exception_tables(regs->pc)) {
+-			mmap_read_unlock(mm);
+-			goto no_context;
+-		}
+-#endif
++	vma = lock_mm_and_find_vma(mm, addr, regs);
++	if (unlikely(!vma)) {
++		fault = VM_FAULT_BADMAP;
++		goto done;
+ 	}
+ 
+-	fault = __do_page_fault(mm, addr, mm_flags, vm_flags, regs);
++	fault = __do_page_fault(mm, vma, addr, mm_flags, vm_flags, regs);
+ 
+ 	/* Quick path to respond to signals */
+ 	if (fault_signal_pending(fault, regs)) {
+@@ -628,6 +599,7 @@ retry:
+ 	}
+ 	mmap_read_unlock(mm);
+ 
++done:
+ 	/*
+ 	 * Handle the "normal" (no error) case first.
+ 	 */
+diff --git a/arch/csky/Kconfig b/arch/csky/Kconfig
+index dba02da6fa344..225df1674b33a 100644
+--- a/arch/csky/Kconfig
++++ b/arch/csky/Kconfig
+@@ -96,6 +96,7 @@ config CSKY
+ 	select HAVE_REGS_AND_STACK_ACCESS_API
+ 	select HAVE_STACKPROTECTOR
+ 	select HAVE_SYSCALL_TRACEPOINTS
++	select LOCK_MM_AND_FIND_VMA
+ 	select MAY_HAVE_SPARSE_IRQ
+ 	select MODULES_USE_ELF_RELA if MODULES
+ 	select OF
+diff --git a/arch/csky/mm/fault.c b/arch/csky/mm/fault.c
+index e15f736cca4b4..a885518ce1dd2 100644
+--- a/arch/csky/mm/fault.c
++++ b/arch/csky/mm/fault.c
+@@ -97,13 +97,12 @@ static inline void mm_fault_error(struct pt_regs *regs, unsigned long addr, vm_f
+ 	BUG();
+ }
+ 
+-static inline void bad_area(struct pt_regs *regs, struct mm_struct *mm, int code, unsigned long addr)
++static inline void bad_area_nosemaphore(struct pt_regs *regs, struct mm_struct *mm, int code, unsigned long addr)
+ {
+ 	/*
+ 	 * Something tried to access memory that isn't in our memory map.
+ 	 * Fix it, but check if it's kernel or user first.
+ 	 */
+-	mmap_read_unlock(mm);
+ 	/* User mode accesses just cause a SIGSEGV */
+ 	if (user_mode(regs)) {
+ 		do_trap(regs, SIGSEGV, code, addr);
+@@ -238,20 +237,9 @@ asmlinkage void do_page_fault(struct pt_regs *regs)
+ 	if (is_write(regs))
+ 		flags |= FAULT_FLAG_WRITE;
+ retry:
+-	mmap_read_lock(mm);
+-	vma = find_vma(mm, addr);
++	vma = lock_mm_and_find_vma(mm, addr, regs);
+ 	if (unlikely(!vma)) {
+-		bad_area(regs, mm, code, addr);
+-		return;
+-	}
+-	if (likely(vma->vm_start <= addr))
+-		goto good_area;
+-	if (unlikely(!(vma->vm_flags & VM_GROWSDOWN))) {
+-		bad_area(regs, mm, code, addr);
+-		return;
+-	}
+-	if (unlikely(expand_stack(vma, addr))) {
+-		bad_area(regs, mm, code, addr);
++		bad_area_nosemaphore(regs, mm, code, addr);
+ 		return;
+ 	}
+ 
+@@ -259,11 +247,11 @@ retry:
+ 	 * Ok, we have a good vm_area for this memory access, so
+ 	 * we can handle it.
+ 	 */
+-good_area:
+ 	code = SEGV_ACCERR;
+ 
+ 	if (unlikely(access_error(regs, vma))) {
+-		bad_area(regs, mm, code, addr);
++		mmap_read_unlock(mm);
++		bad_area_nosemaphore(regs, mm, code, addr);
+ 		return;
+ 	}
+ 
+diff --git a/arch/hexagon/Kconfig b/arch/hexagon/Kconfig
+index 54eadf2651786..6726f4941015f 100644
+--- a/arch/hexagon/Kconfig
++++ b/arch/hexagon/Kconfig
+@@ -28,6 +28,7 @@ config HEXAGON
+ 	select GENERIC_SMP_IDLE_THREAD
+ 	select STACKTRACE_SUPPORT
+ 	select GENERIC_CLOCKEVENTS_BROADCAST
++	select LOCK_MM_AND_FIND_VMA
+ 	select MODULES_USE_ELF_RELA
+ 	select GENERIC_CPU_DEVICES
+ 	select ARCH_WANT_LD_ORPHAN_WARN
+diff --git a/arch/hexagon/mm/vm_fault.c b/arch/hexagon/mm/vm_fault.c
+index 4b578d02fd01a..7295ea3f8cc8d 100644
+--- a/arch/hexagon/mm/vm_fault.c
++++ b/arch/hexagon/mm/vm_fault.c
+@@ -57,21 +57,10 @@ void do_page_fault(unsigned long address, long cause, struct pt_regs *regs)
+ 
+ 	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
+ retry:
+-	mmap_read_lock(mm);
+-	vma = find_vma(mm, address);
+-	if (!vma)
+-		goto bad_area;
++	vma = lock_mm_and_find_vma(mm, address, regs);
++	if (unlikely(!vma))
++		goto bad_area_nosemaphore;
+ 
+-	if (vma->vm_start <= address)
+-		goto good_area;
+-
+-	if (!(vma->vm_flags & VM_GROWSDOWN))
+-		goto bad_area;
+-
+-	if (expand_stack(vma, address))
+-		goto bad_area;
+-
+-good_area:
+ 	/* Address space is OK.  Now check access rights. */
+ 	si_code = SEGV_ACCERR;
+ 
+@@ -143,6 +132,7 @@ good_area:
+ bad_area:
+ 	mmap_read_unlock(mm);
+ 
++bad_area_nosemaphore:
+ 	if (user_mode(regs)) {
+ 		force_sig_fault(SIGSEGV, si_code, (void __user *)address);
+ 		return;
+diff --git a/arch/ia64/mm/fault.c b/arch/ia64/mm/fault.c
+index 85c4d9ac8686d..5458b52b40099 100644
+--- a/arch/ia64/mm/fault.c
++++ b/arch/ia64/mm/fault.c
+@@ -110,10 +110,12 @@ retry:
+          * register backing store that needs to expand upwards, in
+          * this case vma will be null, but prev_vma will ne non-null
+          */
+-        if (( !vma && prev_vma ) || (address < vma->vm_start) )
+-		goto check_expansion;
++        if (( !vma && prev_vma ) || (address < vma->vm_start) ) {
++		vma = expand_stack(mm, address);
++		if (!vma)
++			goto bad_area_nosemaphore;
++	}
+ 
+-  good_area:
+ 	code = SEGV_ACCERR;
+ 
+ 	/* OK, we've got a good vm_area for this memory area.  Check the access permissions: */
+@@ -177,35 +179,9 @@ retry:
+ 	mmap_read_unlock(mm);
+ 	return;
+ 
+-  check_expansion:
+-	if (!(prev_vma && (prev_vma->vm_flags & VM_GROWSUP) && (address == prev_vma->vm_end))) {
+-		if (!vma)
+-			goto bad_area;
+-		if (!(vma->vm_flags & VM_GROWSDOWN))
+-			goto bad_area;
+-		if (REGION_NUMBER(address) != REGION_NUMBER(vma->vm_start)
+-		    || REGION_OFFSET(address) >= RGN_MAP_LIMIT)
+-			goto bad_area;
+-		if (expand_stack(vma, address))
+-			goto bad_area;
+-	} else {
+-		vma = prev_vma;
+-		if (REGION_NUMBER(address) != REGION_NUMBER(vma->vm_start)
+-		    || REGION_OFFSET(address) >= RGN_MAP_LIMIT)
+-			goto bad_area;
+-		/*
+-		 * Since the register backing store is accessed sequentially,
+-		 * we disallow growing it by more than a page at a time.
+-		 */
+-		if (address > vma->vm_end + PAGE_SIZE - sizeof(long))
+-			goto bad_area;
+-		if (expand_upwards(vma, address))
+-			goto bad_area;
+-	}
+-	goto good_area;
+-
+   bad_area:
+ 	mmap_read_unlock(mm);
++  bad_area_nosemaphore:
+ 	if ((isr & IA64_ISR_SP)
+ 	    || ((isr & IA64_ISR_NA) && (isr & IA64_ISR_CODE_MASK) == IA64_ISR_CODE_LFETCH))
+ 	{
+diff --git a/arch/loongarch/Kconfig b/arch/loongarch/Kconfig
+index 3e5d6acbf2409..085d87d7e91d9 100644
+--- a/arch/loongarch/Kconfig
++++ b/arch/loongarch/Kconfig
+@@ -125,6 +125,7 @@ config LOONGARCH
+ 	select HAVE_VIRT_CPU_ACCOUNTING_GEN if !SMP
+ 	select IRQ_FORCED_THREADING
+ 	select IRQ_LOONGARCH_CPU
++	select LOCK_MM_AND_FIND_VMA
+ 	select MMU_GATHER_MERGE_VMAS if MMU
+ 	select MODULES_USE_ELF_RELA if MODULES
+ 	select NEED_PER_CPU_EMBED_FIRST_CHUNK
+diff --git a/arch/loongarch/mm/fault.c b/arch/loongarch/mm/fault.c
+index 449087bd589d3..da5b6d518cdb1 100644
+--- a/arch/loongarch/mm/fault.c
++++ b/arch/loongarch/mm/fault.c
+@@ -169,22 +169,18 @@ static void __kprobes __do_page_fault(struct pt_regs *regs,
+ 
+ 	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
+ retry:
+-	mmap_read_lock(mm);
+-	vma = find_vma(mm, address);
+-	if (!vma)
+-		goto bad_area;
+-	if (vma->vm_start <= address)
+-		goto good_area;
+-	if (!(vma->vm_flags & VM_GROWSDOWN))
+-		goto bad_area;
+-	if (!expand_stack(vma, address))
+-		goto good_area;
++	vma = lock_mm_and_find_vma(mm, address, regs);
++	if (unlikely(!vma))
++		goto bad_area_nosemaphore;
++	goto good_area;
++
+ /*
+  * Something tried to access memory that isn't in our memory map..
+  * Fix it, but check if it's kernel or user first..
+  */
+ bad_area:
+ 	mmap_read_unlock(mm);
++bad_area_nosemaphore:
+ 	do_sigsegv(regs, write, address, si_code);
+ 	return;
+ 
+diff --git a/arch/m68k/mm/fault.c b/arch/m68k/mm/fault.c
+index 228128e45c673..c290c5c0cfb93 100644
+--- a/arch/m68k/mm/fault.c
++++ b/arch/m68k/mm/fault.c
+@@ -105,8 +105,9 @@ retry:
+ 		if (address + 256 < rdusp())
+ 			goto map_err;
+ 	}
+-	if (expand_stack(vma, address))
+-		goto map_err;
++	vma = expand_stack(mm, address);
++	if (!vma)
++		goto map_err_nosemaphore;
+ 
+ /*
+  * Ok, we have a good vm_area for this memory access, so
+@@ -196,10 +197,12 @@ bus_err:
+ 	goto send_sig;
+ 
+ map_err:
++	mmap_read_unlock(mm);
++map_err_nosemaphore:
+ 	current->thread.signo = SIGSEGV;
+ 	current->thread.code = SEGV_MAPERR;
+ 	current->thread.faddr = address;
+-	goto send_sig;
++	return send_fault_sig(regs);
+ 
+ acc_err:
+ 	current->thread.signo = SIGSEGV;
+diff --git a/arch/microblaze/mm/fault.c b/arch/microblaze/mm/fault.c
+index 687714db6f4d0..d3c3c33b73a6e 100644
+--- a/arch/microblaze/mm/fault.c
++++ b/arch/microblaze/mm/fault.c
+@@ -192,8 +192,9 @@ retry:
+ 			&& (kernel_mode(regs) || !store_updates_sp(regs)))
+ 				goto bad_area;
+ 	}
+-	if (expand_stack(vma, address))
+-		goto bad_area;
++	vma = expand_stack(mm, address);
++	if (!vma)
++		goto bad_area_nosemaphore;
+ 
+ good_area:
+ 	code = SEGV_ACCERR;
+diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
+index 5b3f1f1dfd164..2cc139f10c1d8 100644
+--- a/arch/mips/Kconfig
++++ b/arch/mips/Kconfig
+@@ -94,6 +94,7 @@ config MIPS
+ 	select HAVE_VIRT_CPU_ACCOUNTING_GEN if 64BIT || !SMP
+ 	select IRQ_FORCED_THREADING
+ 	select ISA if EISA
++	select LOCK_MM_AND_FIND_VMA
+ 	select MODULES_USE_ELF_REL if MODULES
+ 	select MODULES_USE_ELF_RELA if MODULES && 64BIT
+ 	select PERF_USE_VMALLOC
+diff --git a/arch/mips/mm/fault.c b/arch/mips/mm/fault.c
+index a27045f5a556d..d7878208bd3fa 100644
+--- a/arch/mips/mm/fault.c
++++ b/arch/mips/mm/fault.c
+@@ -99,21 +99,13 @@ static void __do_page_fault(struct pt_regs *regs, unsigned long write,
+ 
+ 	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
+ retry:
+-	mmap_read_lock(mm);
+-	vma = find_vma(mm, address);
++	vma = lock_mm_and_find_vma(mm, address, regs);
+ 	if (!vma)
+-		goto bad_area;
+-	if (vma->vm_start <= address)
+-		goto good_area;
+-	if (!(vma->vm_flags & VM_GROWSDOWN))
+-		goto bad_area;
+-	if (expand_stack(vma, address))
+-		goto bad_area;
++		goto bad_area_nosemaphore;
+ /*
+  * Ok, we have a good vm_area for this memory access, so
+  * we can handle it..
+  */
+-good_area:
+ 	si_code = SEGV_ACCERR;
+ 
+ 	if (write) {
+diff --git a/arch/nios2/Kconfig b/arch/nios2/Kconfig
+index a582f72104f39..1fb78865a4593 100644
+--- a/arch/nios2/Kconfig
++++ b/arch/nios2/Kconfig
+@@ -16,6 +16,7 @@ config NIOS2
+ 	select HAVE_ARCH_TRACEHOOK
+ 	select HAVE_ARCH_KGDB
+ 	select IRQ_DOMAIN
++	select LOCK_MM_AND_FIND_VMA
+ 	select MODULES_USE_ELF_RELA
+ 	select OF
+ 	select OF_EARLY_FLATTREE
+diff --git a/arch/nios2/mm/fault.c b/arch/nios2/mm/fault.c
+index ca64eccea5511..e3fa9c15181df 100644
+--- a/arch/nios2/mm/fault.c
++++ b/arch/nios2/mm/fault.c
+@@ -86,27 +86,14 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long cause,
+ 
+ 	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
+ 
+-	if (!mmap_read_trylock(mm)) {
+-		if (!user_mode(regs) && !search_exception_tables(regs->ea))
+-			goto bad_area_nosemaphore;
+ retry:
+-		mmap_read_lock(mm);
+-	}
+-
+-	vma = find_vma(mm, address);
++	vma = lock_mm_and_find_vma(mm, address, regs);
+ 	if (!vma)
+-		goto bad_area;
+-	if (vma->vm_start <= address)
+-		goto good_area;
+-	if (!(vma->vm_flags & VM_GROWSDOWN))
+-		goto bad_area;
+-	if (expand_stack(vma, address))
+-		goto bad_area;
++		goto bad_area_nosemaphore;
+ /*
+  * Ok, we have a good vm_area for this memory access, so
+  * we can handle it..
+  */
+-good_area:
+ 	code = SEGV_ACCERR;
+ 
+ 	switch (cause) {
+diff --git a/arch/openrisc/mm/fault.c b/arch/openrisc/mm/fault.c
+index 6734fee3134f4..a9dcd4381d1a1 100644
+--- a/arch/openrisc/mm/fault.c
++++ b/arch/openrisc/mm/fault.c
+@@ -127,8 +127,9 @@ retry:
+ 		if (address + PAGE_SIZE < regs->sp)
+ 			goto bad_area;
+ 	}
+-	if (expand_stack(vma, address))
+-		goto bad_area;
++	vma = expand_stack(mm, address);
++	if (!vma)
++		goto bad_area_nosemaphore;
+ 
+ 	/*
+ 	 * Ok, we have a good vm_area for this memory access, so
+diff --git a/arch/parisc/mm/fault.c b/arch/parisc/mm/fault.c
+index 6941fdbf25173..a4c7c7630f48b 100644
+--- a/arch/parisc/mm/fault.c
++++ b/arch/parisc/mm/fault.c
+@@ -288,15 +288,19 @@ void do_page_fault(struct pt_regs *regs, unsigned long code,
+ retry:
+ 	mmap_read_lock(mm);
+ 	vma = find_vma_prev(mm, address, &prev_vma);
+-	if (!vma || address < vma->vm_start)
+-		goto check_expansion;
++	if (!vma || address < vma->vm_start) {
++		if (!prev_vma || !(prev_vma->vm_flags & VM_GROWSUP))
++			goto bad_area;
++		vma = expand_stack(mm, address);
++		if (!vma)
++			goto bad_area_nosemaphore;
++	}
++
+ /*
+  * Ok, we have a good vm_area for this memory access. We still need to
+  * check the access permissions.
+  */
+ 
+-good_area:
+-
+ 	if ((vma->vm_flags & acc_type) != acc_type)
+ 		goto bad_area;
+ 
+@@ -347,17 +351,13 @@ good_area:
+ 	mmap_read_unlock(mm);
+ 	return;
+ 
+-check_expansion:
+-	vma = prev_vma;
+-	if (vma && (expand_stack(vma, address) == 0))
+-		goto good_area;
+-
+ /*
+  * Something tried to access memory that isn't in our memory map..
+  */
+ bad_area:
+ 	mmap_read_unlock(mm);
+ 
++bad_area_nosemaphore:
+ 	if (user_mode(regs)) {
+ 		int signo, si_code;
+ 
+@@ -449,7 +449,7 @@ handle_nadtlb_fault(struct pt_regs *regs)
+ {
+ 	unsigned long insn = regs->iir;
+ 	int breg, treg, xreg, val = 0;
+-	struct vm_area_struct *vma, *prev_vma;
++	struct vm_area_struct *vma;
+ 	struct task_struct *tsk;
+ 	struct mm_struct *mm;
+ 	unsigned long address;
+@@ -485,7 +485,7 @@ handle_nadtlb_fault(struct pt_regs *regs)
+ 				/* Search for VMA */
+ 				address = regs->ior;
+ 				mmap_read_lock(mm);
+-				vma = find_vma_prev(mm, address, &prev_vma);
++				vma = vma_lookup(mm, address);
+ 				mmap_read_unlock(mm);
+ 
+ 				/*
+@@ -494,7 +494,6 @@ handle_nadtlb_fault(struct pt_regs *regs)
+ 				 */
+ 				acc_type = (insn & 0x40) ? VM_WRITE : VM_READ;
+ 				if (vma
+-				    && address >= vma->vm_start
+ 				    && (vma->vm_flags & acc_type) == acc_type)
+ 					val = 1;
+ 			}
+diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
+index a6c4407d3ec83..4ab9b44e6c1c6 100644
+--- a/arch/powerpc/Kconfig
++++ b/arch/powerpc/Kconfig
+@@ -263,6 +263,7 @@ config PPC
+ 	select IRQ_DOMAIN
+ 	select IRQ_FORCED_THREADING
+ 	select KASAN_VMALLOC			if KASAN && MODULES
++	select LOCK_MM_AND_FIND_VMA
+ 	select MMU_GATHER_PAGE_SIZE
+ 	select MMU_GATHER_RCU_TABLE_FREE
+ 	select MMU_GATHER_MERGE_VMAS
+diff --git a/arch/powerpc/mm/copro_fault.c b/arch/powerpc/mm/copro_fault.c
+index 7c507fb48182b..f49fd873df8da 100644
+--- a/arch/powerpc/mm/copro_fault.c
++++ b/arch/powerpc/mm/copro_fault.c
+@@ -33,19 +33,11 @@ int copro_handle_mm_fault(struct mm_struct *mm, unsigned long ea,
+ 	if (mm->pgd == NULL)
+ 		return -EFAULT;
+ 
+-	mmap_read_lock(mm);
+-	ret = -EFAULT;
+-	vma = find_vma(mm, ea);
++	vma = lock_mm_and_find_vma(mm, ea, NULL);
+ 	if (!vma)
+-		goto out_unlock;
+-
+-	if (ea < vma->vm_start) {
+-		if (!(vma->vm_flags & VM_GROWSDOWN))
+-			goto out_unlock;
+-		if (expand_stack(vma, ea))
+-			goto out_unlock;
+-	}
++		return -EFAULT;
+ 
++	ret = -EFAULT;
+ 	is_write = dsisr & DSISR_ISSTORE;
+ 	if (is_write) {
+ 		if (!(vma->vm_flags & VM_WRITE))
+diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
+index af46aa88422bf..644e4ec6ce99d 100644
+--- a/arch/powerpc/mm/fault.c
++++ b/arch/powerpc/mm/fault.c
+@@ -84,11 +84,6 @@ static int __bad_area(struct pt_regs *regs, unsigned long address, int si_code)
+ 	return __bad_area_nosemaphore(regs, address, si_code);
+ }
+ 
+-static noinline int bad_area(struct pt_regs *regs, unsigned long address)
+-{
+-	return __bad_area(regs, address, SEGV_MAPERR);
+-}
+-
+ static noinline int bad_access_pkey(struct pt_regs *regs, unsigned long address,
+ 				    struct vm_area_struct *vma)
+ {
+@@ -481,40 +476,12 @@ static int ___do_page_fault(struct pt_regs *regs, unsigned long address,
+ 	 * we will deadlock attempting to validate the fault against the
+ 	 * address space.  Luckily the kernel only validly references user
+ 	 * space from well defined areas of code, which are listed in the
+-	 * exceptions table.
+-	 *
+-	 * As the vast majority of faults will be valid we will only perform
+-	 * the source reference check when there is a possibility of a deadlock.
+-	 * Attempt to lock the address space, if we cannot we then validate the
+-	 * source.  If this is invalid we can skip the address space check,
+-	 * thus avoiding the deadlock.
++	 * exceptions table. lock_mm_and_find_vma() handles that logic.
+ 	 */
+-	if (unlikely(!mmap_read_trylock(mm))) {
+-		if (!is_user && !search_exception_tables(regs->nip))
+-			return bad_area_nosemaphore(regs, address);
+-
+ retry:
+-		mmap_read_lock(mm);
+-	} else {
+-		/*
+-		 * The above down_read_trylock() might have succeeded in
+-		 * which case we'll have missed the might_sleep() from
+-		 * down_read():
+-		 */
+-		might_sleep();
+-	}
+-
+-	vma = find_vma(mm, address);
++	vma = lock_mm_and_find_vma(mm, address, regs);
+ 	if (unlikely(!vma))
+-		return bad_area(regs, address);
+-
+-	if (unlikely(vma->vm_start > address)) {
+-		if (unlikely(!(vma->vm_flags & VM_GROWSDOWN)))
+-			return bad_area(regs, address);
+-
+-		if (unlikely(expand_stack(vma, address)))
+-			return bad_area(regs, address);
+-	}
++		return bad_area_nosemaphore(regs, address);
+ 
+ 	if (unlikely(access_pkey_error(is_write, is_exec,
+ 				       (error_code & DSISR_KEYFAULT), vma)))
+diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
+index b462ed7d41fe1..73671af406dca 100644
+--- a/arch/riscv/Kconfig
++++ b/arch/riscv/Kconfig
+@@ -119,6 +119,7 @@ config RISCV
+ 	select HAVE_SYSCALL_TRACEPOINTS
+ 	select IRQ_DOMAIN
+ 	select IRQ_FORCED_THREADING
++	select LOCK_MM_AND_FIND_VMA
+ 	select MODULES_USE_ELF_RELA if MODULES
+ 	select MODULE_SECTIONS if MODULES
+ 	select OF
+diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
+index d5f3e501dffb3..89226458eab89 100644
+--- a/arch/riscv/mm/fault.c
++++ b/arch/riscv/mm/fault.c
+@@ -83,13 +83,13 @@ static inline void mm_fault_error(struct pt_regs *regs, unsigned long addr, vm_f
+ 	BUG();
+ }
+ 
+-static inline void bad_area(struct pt_regs *regs, struct mm_struct *mm, int code, unsigned long addr)
++static inline void
++bad_area_nosemaphore(struct pt_regs *regs, int code, unsigned long addr)
+ {
+ 	/*
+ 	 * Something tried to access memory that isn't in our memory map.
+ 	 * Fix it, but check if it's kernel or user first.
+ 	 */
+-	mmap_read_unlock(mm);
+ 	/* User mode accesses just cause a SIGSEGV */
+ 	if (user_mode(regs)) {
+ 		do_trap(regs, SIGSEGV, code, addr);
+@@ -99,6 +99,15 @@ static inline void bad_area(struct pt_regs *regs, struct mm_struct *mm, int code
+ 	no_context(regs, addr);
+ }
+ 
++static inline void
++bad_area(struct pt_regs *regs, struct mm_struct *mm, int code,
++	 unsigned long addr)
++{
++	mmap_read_unlock(mm);
++
++	bad_area_nosemaphore(regs, code, addr);
++}
++
+ static inline void vmalloc_fault(struct pt_regs *regs, int code, unsigned long addr)
+ {
+ 	pgd_t *pgd, *pgd_k;
+@@ -286,23 +295,10 @@ asmlinkage void do_page_fault(struct pt_regs *regs)
+ 	else if (cause == EXC_INST_PAGE_FAULT)
+ 		flags |= FAULT_FLAG_INSTRUCTION;
+ retry:
+-	mmap_read_lock(mm);
+-	vma = find_vma(mm, addr);
++	vma = lock_mm_and_find_vma(mm, addr, regs);
+ 	if (unlikely(!vma)) {
+ 		tsk->thread.bad_cause = cause;
+-		bad_area(regs, mm, code, addr);
+-		return;
+-	}
+-	if (likely(vma->vm_start <= addr))
+-		goto good_area;
+-	if (unlikely(!(vma->vm_flags & VM_GROWSDOWN))) {
+-		tsk->thread.bad_cause = cause;
+-		bad_area(regs, mm, code, addr);
+-		return;
+-	}
+-	if (unlikely(expand_stack(vma, addr))) {
+-		tsk->thread.bad_cause = cause;
+-		bad_area(regs, mm, code, addr);
++		bad_area_nosemaphore(regs, code, addr);
+ 		return;
+ 	}
+ 
+@@ -310,7 +306,6 @@ retry:
+ 	 * Ok, we have a good vm_area for this memory access, so
+ 	 * we can handle it.
+ 	 */
+-good_area:
+ 	code = SEGV_ACCERR;
+ 
+ 	if (unlikely(access_error(cause, vma))) {
+diff --git a/arch/s390/mm/fault.c b/arch/s390/mm/fault.c
+index a2632fd97d007..94e3970c48393 100644
+--- a/arch/s390/mm/fault.c
++++ b/arch/s390/mm/fault.c
+@@ -433,8 +433,9 @@ retry:
+ 	if (unlikely(vma->vm_start > address)) {
+ 		if (!(vma->vm_flags & VM_GROWSDOWN))
+ 			goto out_up;
+-		if (expand_stack(vma, address))
+-			goto out_up;
++		vma = expand_stack(mm, address);
++		if (!vma)
++			goto out;
+ 	}
+ 
+ 	/*
+diff --git a/arch/sh/Kconfig b/arch/sh/Kconfig
+index 0665ac0add0b4..101a0d094a667 100644
+--- a/arch/sh/Kconfig
++++ b/arch/sh/Kconfig
+@@ -56,6 +56,7 @@ config SUPERH
+ 	select HAVE_STACKPROTECTOR
+ 	select HAVE_SYSCALL_TRACEPOINTS
+ 	select IRQ_FORCED_THREADING
++	select LOCK_MM_AND_FIND_VMA
+ 	select MODULES_USE_ELF_RELA
+ 	select NEED_SG_DMA_LENGTH
+ 	select NO_DMA if !MMU && !DMA_COHERENT
+diff --git a/arch/sh/mm/fault.c b/arch/sh/mm/fault.c
+index acd2f5e50bfcd..06e6b49529245 100644
+--- a/arch/sh/mm/fault.c
++++ b/arch/sh/mm/fault.c
+@@ -439,21 +439,9 @@ asmlinkage void __kprobes do_page_fault(struct pt_regs *regs,
+ 	}
+ 
+ retry:
+-	mmap_read_lock(mm);
+-
+-	vma = find_vma(mm, address);
++	vma = lock_mm_and_find_vma(mm, address, regs);
+ 	if (unlikely(!vma)) {
+-		bad_area(regs, error_code, address);
+-		return;
+-	}
+-	if (likely(vma->vm_start <= address))
+-		goto good_area;
+-	if (unlikely(!(vma->vm_flags & VM_GROWSDOWN))) {
+-		bad_area(regs, error_code, address);
+-		return;
+-	}
+-	if (unlikely(expand_stack(vma, address))) {
+-		bad_area(regs, error_code, address);
++		bad_area_nosemaphore(regs, error_code, address);
+ 		return;
+ 	}
+ 
+@@ -461,7 +449,6 @@ retry:
+ 	 * Ok, we have a good vm_area for this memory access, so
+ 	 * we can handle it..
+ 	 */
+-good_area:
+ 	if (unlikely(access_error(error_code, vma))) {
+ 		bad_area_access_error(regs, error_code, address);
+ 		return;
+diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
+index 84437a4c65454..dbb1760cbe8c9 100644
+--- a/arch/sparc/Kconfig
++++ b/arch/sparc/Kconfig
+@@ -56,6 +56,7 @@ config SPARC32
+ 	select DMA_DIRECT_REMAP
+ 	select GENERIC_ATOMIC64
+ 	select HAVE_UID16
++	select LOCK_MM_AND_FIND_VMA
+ 	select OLD_SIGACTION
+ 	select ZONE_DMA
+ 
+diff --git a/arch/sparc/mm/fault_32.c b/arch/sparc/mm/fault_32.c
+index 179295b14664a..86a831ebd8c8e 100644
+--- a/arch/sparc/mm/fault_32.c
++++ b/arch/sparc/mm/fault_32.c
+@@ -143,28 +143,19 @@ asmlinkage void do_sparc_fault(struct pt_regs *regs, int text_fault, int write,
+ 	if (pagefault_disabled() || !mm)
+ 		goto no_context;
+ 
++	if (!from_user && address >= PAGE_OFFSET)
++		goto no_context;
++
+ 	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
+ 
+ retry:
+-	mmap_read_lock(mm);
+-
+-	if (!from_user && address >= PAGE_OFFSET)
+-		goto bad_area;
+-
+-	vma = find_vma(mm, address);
++	vma = lock_mm_and_find_vma(mm, address, regs);
+ 	if (!vma)
+-		goto bad_area;
+-	if (vma->vm_start <= address)
+-		goto good_area;
+-	if (!(vma->vm_flags & VM_GROWSDOWN))
+-		goto bad_area;
+-	if (expand_stack(vma, address))
+-		goto bad_area;
++		goto bad_area_nosemaphore;
+ 	/*
+ 	 * Ok, we have a good vm_area for this memory access, so
+ 	 * we can handle it..
+ 	 */
+-good_area:
+ 	code = SEGV_ACCERR;
+ 	if (write) {
+ 		if (!(vma->vm_flags & VM_WRITE))
+@@ -321,17 +312,9 @@ static void force_user_fault(unsigned long address, int write)
+ 
+ 	code = SEGV_MAPERR;
+ 
+-	mmap_read_lock(mm);
+-	vma = find_vma(mm, address);
++	vma = lock_mm_and_find_vma(mm, address, NULL);
+ 	if (!vma)
+-		goto bad_area;
+-	if (vma->vm_start <= address)
+-		goto good_area;
+-	if (!(vma->vm_flags & VM_GROWSDOWN))
+-		goto bad_area;
+-	if (expand_stack(vma, address))
+-		goto bad_area;
+-good_area:
++		goto bad_area_nosemaphore;
+ 	code = SEGV_ACCERR;
+ 	if (write) {
+ 		if (!(vma->vm_flags & VM_WRITE))
+@@ -350,6 +333,7 @@ good_area:
+ 	return;
+ bad_area:
+ 	mmap_read_unlock(mm);
++bad_area_nosemaphore:
+ 	__do_fault_siginfo(code, SIGSEGV, tsk->thread.kregs, address);
+ 	return;
+ 
+diff --git a/arch/sparc/mm/fault_64.c b/arch/sparc/mm/fault_64.c
+index d91305de694c5..69ff07bc6c07d 100644
+--- a/arch/sparc/mm/fault_64.c
++++ b/arch/sparc/mm/fault_64.c
+@@ -383,8 +383,9 @@ continue_fault:
+ 				goto bad_area;
+ 		}
+ 	}
+-	if (expand_stack(vma, address))
+-		goto bad_area;
++	vma = expand_stack(mm, address);
++	if (!vma)
++		goto bad_area_nosemaphore;
+ 	/*
+ 	 * Ok, we have a good vm_area for this memory access, so
+ 	 * we can handle it..
+@@ -487,8 +488,9 @@ exit_exception:
+ 	 * Fix it, but check if it's kernel or user first..
+ 	 */
+ bad_area:
+-	insn = get_fault_insn(regs, insn);
+ 	mmap_read_unlock(mm);
++bad_area_nosemaphore:
++	insn = get_fault_insn(regs, insn);
+ 
+ handle_kernel_fault:
+ 	do_kernel_fault(regs, si_code, fault_code, insn, address);
+diff --git a/arch/um/kernel/trap.c b/arch/um/kernel/trap.c
+index d3ce21c4ca32a..6d8ae86ae978f 100644
+--- a/arch/um/kernel/trap.c
++++ b/arch/um/kernel/trap.c
+@@ -47,14 +47,15 @@ retry:
+ 	vma = find_vma(mm, address);
+ 	if (!vma)
+ 		goto out;
+-	else if (vma->vm_start <= address)
++	if (vma->vm_start <= address)
+ 		goto good_area;
+-	else if (!(vma->vm_flags & VM_GROWSDOWN))
++	if (!(vma->vm_flags & VM_GROWSDOWN))
+ 		goto out;
+-	else if (is_user && !ARCH_IS_STACKGROW(address))
+-		goto out;
+-	else if (expand_stack(vma, address))
++	if (is_user && !ARCH_IS_STACKGROW(address))
+ 		goto out;
++	vma = expand_stack(mm, address);
++	if (!vma)
++		goto out_nosemaphore;
+ 
+ good_area:
+ 	*code_out = SEGV_ACCERR;
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index a825bf031f495..8e6db5262614e 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -274,6 +274,7 @@ config X86
+ 	select HAVE_GENERIC_VDSO
+ 	select HOTPLUG_SMT			if SMP
+ 	select IRQ_FORCED_THREADING
++	select LOCK_MM_AND_FIND_VMA
+ 	select NEED_PER_CPU_EMBED_FIRST_CHUNK
+ 	select NEED_PER_CPU_PAGE_FIRST_CHUNK
+ 	select NEED_SG_DMA_LENGTH
+diff --git a/arch/x86/include/asm/cpu.h b/arch/x86/include/asm/cpu.h
+index 78796b98a5449..9ba3c3dec6f30 100644
+--- a/arch/x86/include/asm/cpu.h
++++ b/arch/x86/include/asm/cpu.h
+@@ -98,4 +98,6 @@ extern u64 x86_read_arch_cap_msr(void);
+ int intel_find_matching_signature(void *mc, unsigned int csig, int cpf);
+ int intel_microcode_sanity_check(void *mc, bool print_err, int hdr_type);
+ 
++extern struct cpumask cpus_stop_mask;
++
+ #endif /* _ASM_X86_CPU_H */
+diff --git a/arch/x86/include/asm/smp.h b/arch/x86/include/asm/smp.h
+index b4dbb20dab1a1..4e8e64541395e 100644
+--- a/arch/x86/include/asm/smp.h
++++ b/arch/x86/include/asm/smp.h
+@@ -131,6 +131,8 @@ void wbinvd_on_cpu(int cpu);
+ int wbinvd_on_all_cpus(void);
+ void cond_wakeup_cpu0(void);
+ 
++void smp_kick_mwait_play_dead(void);
++
+ void native_smp_send_reschedule(int cpu);
+ void native_send_call_func_ipi(const struct cpumask *mask);
+ void native_send_call_func_single_ipi(int cpu);
+diff --git a/arch/x86/kernel/cpu/microcode/amd.c b/arch/x86/kernel/cpu/microcode/amd.c
+index 9eb457b103413..cedecba551630 100644
+--- a/arch/x86/kernel/cpu/microcode/amd.c
++++ b/arch/x86/kernel/cpu/microcode/amd.c
+@@ -705,7 +705,7 @@ static enum ucode_state apply_microcode_amd(int cpu)
+ 	rdmsr(MSR_AMD64_PATCH_LEVEL, rev, dummy);
+ 
+ 	/* need to apply patch? */
+-	if (rev >= mc_amd->hdr.patch_id) {
++	if (rev > mc_amd->hdr.patch_id) {
+ 		ret = UCODE_OK;
+ 		goto out;
+ 	}
+diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
+index b650cde3f64db..2fed8aafb5214 100644
+--- a/arch/x86/kernel/process.c
++++ b/arch/x86/kernel/process.c
+@@ -752,15 +752,26 @@ bool xen_set_default_idle(void)
+ }
+ #endif
+ 
++struct cpumask cpus_stop_mask;
++
+ void __noreturn stop_this_cpu(void *dummy)
+ {
++	struct cpuinfo_x86 *c = this_cpu_ptr(&cpu_info);
++	unsigned int cpu = smp_processor_id();
++
+ 	local_irq_disable();
++
+ 	/*
+-	 * Remove this CPU:
++	 * Remove this CPU from the online mask and disable it
++	 * unconditionally. This might be redundant in case that the reboot
++	 * vector was handled late and stop_other_cpus() sent an NMI.
++	 *
++	 * According to SDM and APM NMIs can be accepted even after soft
++	 * disabling the local APIC.
+ 	 */
+-	set_cpu_online(smp_processor_id(), false);
++	set_cpu_online(cpu, false);
+ 	disable_local_APIC();
+-	mcheck_cpu_clear(this_cpu_ptr(&cpu_info));
++	mcheck_cpu_clear(c);
+ 
+ 	/*
+ 	 * Use wbinvd on processors that support SME. This provides support
+@@ -774,8 +785,17 @@ void __noreturn stop_this_cpu(void *dummy)
+ 	 * Test the CPUID bit directly because the machine might've cleared
+ 	 * X86_FEATURE_SME due to cmdline options.
+ 	 */
+-	if (cpuid_eax(0x8000001f) & BIT(0))
++	if (c->extended_cpuid_level >= 0x8000001f && (cpuid_eax(0x8000001f) & BIT(0)))
+ 		native_wbinvd();
++
++	/*
++	 * This brings a cache line back and dirties it, but
++	 * native_stop_other_cpus() will overwrite cpus_stop_mask after it
++	 * observed that all CPUs reported stop. This write will invalidate
++	 * the related cache line on this CPU.
++	 */
++	cpumask_clear_cpu(cpu, &cpus_stop_mask);
++
+ 	for (;;) {
+ 		/*
+ 		 * Use native_halt() so that memory contents don't change
+diff --git a/arch/x86/kernel/smp.c b/arch/x86/kernel/smp.c
+index 375b33ecafa27..174d6232b87fd 100644
+--- a/arch/x86/kernel/smp.c
++++ b/arch/x86/kernel/smp.c
+@@ -21,12 +21,14 @@
+ #include <linux/interrupt.h>
+ #include <linux/cpu.h>
+ #include <linux/gfp.h>
++#include <linux/kexec.h>
+ 
+ #include <asm/mtrr.h>
+ #include <asm/tlbflush.h>
+ #include <asm/mmu_context.h>
+ #include <asm/proto.h>
+ #include <asm/apic.h>
++#include <asm/cpu.h>
+ #include <asm/idtentry.h>
+ #include <asm/nmi.h>
+ #include <asm/mce.h>
+@@ -146,34 +148,47 @@ static int register_stop_handler(void)
+ 
+ static void native_stop_other_cpus(int wait)
+ {
+-	unsigned long flags;
+-	unsigned long timeout;
++	unsigned int cpu = smp_processor_id();
++	unsigned long flags, timeout;
+ 
+ 	if (reboot_force)
+ 		return;
+ 
+-	/*
+-	 * Use an own vector here because smp_call_function
+-	 * does lots of things not suitable in a panic situation.
+-	 */
++	/* Only proceed if this is the first CPU to reach this code */
++	if (atomic_cmpxchg(&stopping_cpu, -1, cpu) != -1)
++		return;
++
++	/* For kexec, ensure that offline CPUs are out of MWAIT and in HLT */
++	if (kexec_in_progress)
++		smp_kick_mwait_play_dead();
+ 
+ 	/*
+-	 * We start by using the REBOOT_VECTOR irq.
+-	 * The irq is treated as a sync point to allow critical
+-	 * regions of code on other cpus to release their spin locks
+-	 * and re-enable irqs.  Jumping straight to an NMI might
+-	 * accidentally cause deadlocks with further shutdown/panic
+-	 * code.  By syncing, we give the cpus up to one second to
+-	 * finish their work before we force them off with the NMI.
++	 * 1) Send an IPI on the reboot vector to all other CPUs.
++	 *
++	 *    The other CPUs should react on it after leaving critical
++	 *    sections and re-enabling interrupts. They might still hold
++	 *    locks, but there is nothing which can be done about that.
++	 *
++	 * 2) Wait for all other CPUs to report that they reached the
++	 *    HLT loop in stop_this_cpu()
++	 *
++	 * 3) If #2 timed out send an NMI to the CPUs which did not
++	 *    yet report
++	 *
++	 * 4) Wait for all other CPUs to report that they reached the
++	 *    HLT loop in stop_this_cpu()
++	 *
++	 * #3 can obviously race against a CPU reaching the HLT loop late.
++	 * That CPU will have reported already and the "have all CPUs
++	 * reached HLT" condition will be true despite the fact that the
++	 * other CPU is still handling the NMI. Again, there is no
++	 * protection against that as "disabled" APICs still respond to
++	 * NMIs.
+ 	 */
+-	if (num_online_cpus() > 1) {
+-		/* did someone beat us here? */
+-		if (atomic_cmpxchg(&stopping_cpu, -1, safe_smp_processor_id()) != -1)
+-			return;
+-
+-		/* sync above data before sending IRQ */
+-		wmb();
++	cpumask_copy(&cpus_stop_mask, cpu_online_mask);
++	cpumask_clear_cpu(cpu, &cpus_stop_mask);
+ 
++	if (!cpumask_empty(&cpus_stop_mask)) {
+ 		apic_send_IPI_allbutself(REBOOT_VECTOR);
+ 
+ 		/*
+@@ -183,24 +198,22 @@ static void native_stop_other_cpus(int wait)
+ 		 * CPUs reach shutdown state.
+ 		 */
+ 		timeout = USEC_PER_SEC;
+-		while (num_online_cpus() > 1 && timeout--)
++		while (!cpumask_empty(&cpus_stop_mask) && timeout--)
+ 			udelay(1);
+ 	}
+ 
+ 	/* if the REBOOT_VECTOR didn't work, try with the NMI */
+-	if (num_online_cpus() > 1) {
++	if (!cpumask_empty(&cpus_stop_mask)) {
+ 		/*
+ 		 * If NMI IPI is enabled, try to register the stop handler
+ 		 * and send the IPI. In any case try to wait for the other
+ 		 * CPUs to stop.
+ 		 */
+ 		if (!smp_no_nmi_ipi && !register_stop_handler()) {
+-			/* Sync above data before sending IRQ */
+-			wmb();
+-
+ 			pr_emerg("Shutting down cpus with NMI\n");
+ 
+-			apic_send_IPI_allbutself(NMI_VECTOR);
++			for_each_cpu(cpu, &cpus_stop_mask)
++				apic->send_IPI(cpu, NMI_VECTOR);
+ 		}
+ 		/*
+ 		 * Don't wait longer than 10 ms if the caller didn't
+@@ -208,7 +221,7 @@ static void native_stop_other_cpus(int wait)
+ 		 * one or more CPUs do not reach shutdown state.
+ 		 */
+ 		timeout = USEC_PER_MSEC * 10;
+-		while (num_online_cpus() > 1 && (wait || timeout--))
++		while (!cpumask_empty(&cpus_stop_mask) && (wait || timeout--))
+ 			udelay(1);
+ 	}
+ 
+@@ -216,6 +229,12 @@ static void native_stop_other_cpus(int wait)
+ 	disable_local_APIC();
+ 	mcheck_cpu_clear(this_cpu_ptr(&cpu_info));
+ 	local_irq_restore(flags);
++
++	/*
++	 * Ensure that the cpus_stop_mask cache lines are invalidated on
++	 * the other CPUs. See comment vs. SME in stop_this_cpu().
++	 */
++	cpumask_clear(&cpus_stop_mask);
+ }
+ 
+ /*
+diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
+index 9013bb28255a1..144c6f2c42306 100644
+--- a/arch/x86/kernel/smpboot.c
++++ b/arch/x86/kernel/smpboot.c
+@@ -53,6 +53,7 @@
+ #include <linux/tboot.h>
+ #include <linux/gfp.h>
+ #include <linux/cpuidle.h>
++#include <linux/kexec.h>
+ #include <linux/numa.h>
+ #include <linux/pgtable.h>
+ #include <linux/overflow.h>
+@@ -101,6 +102,20 @@ EXPORT_PER_CPU_SYMBOL(cpu_die_map);
+ DEFINE_PER_CPU_READ_MOSTLY(struct cpuinfo_x86, cpu_info);
+ EXPORT_PER_CPU_SYMBOL(cpu_info);
+ 
++struct mwait_cpu_dead {
++	unsigned int	control;
++	unsigned int	status;
++};
++
++#define CPUDEAD_MWAIT_WAIT	0xDEADBEEF
++#define CPUDEAD_MWAIT_KEXEC_HLT	0x4A17DEAD
++
++/*
++ * Cache line aligned data for mwait_play_dead(). Separate on purpose so
++ * that it's unlikely to be touched by other CPUs.
++ */
++static DEFINE_PER_CPU_ALIGNED(struct mwait_cpu_dead, mwait_cpu_dead);
++
+ /* Logical package management. We might want to allocate that dynamically */
+ unsigned int __max_logical_packages __read_mostly;
+ EXPORT_SYMBOL(__max_logical_packages);
+@@ -157,6 +172,10 @@ static void smp_callin(void)
+ {
+ 	int cpuid;
+ 
++	/* Mop up eventual mwait_play_dead() wreckage */
++	this_cpu_write(mwait_cpu_dead.status, 0);
++	this_cpu_write(mwait_cpu_dead.control, 0);
++
+ 	/*
+ 	 * If waken up by an INIT in an 82489DX configuration
+ 	 * cpu_callout_mask guarantees we don't get here before
+@@ -1750,10 +1769,10 @@ EXPORT_SYMBOL_GPL(cond_wakeup_cpu0);
+  */
+ static inline void mwait_play_dead(void)
+ {
++	struct mwait_cpu_dead *md = this_cpu_ptr(&mwait_cpu_dead);
+ 	unsigned int eax, ebx, ecx, edx;
+ 	unsigned int highest_cstate = 0;
+ 	unsigned int highest_subcstate = 0;
+-	void *mwait_ptr;
+ 	int i;
+ 
+ 	if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
+@@ -1788,12 +1807,9 @@ static inline void mwait_play_dead(void)
+ 			(highest_subcstate - 1);
+ 	}
+ 
+-	/*
+-	 * This should be a memory location in a cache line which is
+-	 * unlikely to be touched by other processors.  The actual
+-	 * content is immaterial as it is not actually modified in any way.
+-	 */
+-	mwait_ptr = &current_thread_info()->flags;
++	/* Set up state for the kexec() hack below */
++	md->status = CPUDEAD_MWAIT_WAIT;
++	md->control = CPUDEAD_MWAIT_WAIT;
+ 
+ 	wbinvd();
+ 
+@@ -1806,16 +1822,63 @@ static inline void mwait_play_dead(void)
+ 		 * case where we return around the loop.
+ 		 */
+ 		mb();
+-		clflush(mwait_ptr);
++		clflush(md);
+ 		mb();
+-		__monitor(mwait_ptr, 0, 0);
++		__monitor(md, 0, 0);
+ 		mb();
+ 		__mwait(eax, 0);
+ 
++		if (READ_ONCE(md->control) == CPUDEAD_MWAIT_KEXEC_HLT) {
++			/*
++			 * Kexec is about to happen. Don't go back into mwait() as
++			 * the kexec kernel might overwrite text and data including
++			 * page tables and stack. So mwait() would resume when the
++			 * monitor cache line is written to and then the CPU goes
++			 * south due to overwritten text, page tables and stack.
++			 *
++			 * Note: This does _NOT_ protect against a stray MCE, NMI,
++			 * SMI. They will resume execution at the instruction
++			 * following the HLT instruction and run into the problem
++			 * which this is trying to prevent.
++			 */
++			WRITE_ONCE(md->status, CPUDEAD_MWAIT_KEXEC_HLT);
++			while(1)
++				native_halt();
++		}
++
+ 		cond_wakeup_cpu0();
+ 	}
+ }
+ 
++/*
++ * Kick all "offline" CPUs out of mwait on kexec(). See comment in
++ * mwait_play_dead().
++ */
++void smp_kick_mwait_play_dead(void)
++{
++	u32 newstate = CPUDEAD_MWAIT_KEXEC_HLT;
++	struct mwait_cpu_dead *md;
++	unsigned int cpu, i;
++
++	for_each_cpu_andnot(cpu, cpu_present_mask, cpu_online_mask) {
++		md = per_cpu_ptr(&mwait_cpu_dead, cpu);
++
++		/* Does it sit in mwait_play_dead() ? */
++		if (READ_ONCE(md->status) != CPUDEAD_MWAIT_WAIT)
++			continue;
++
++		/* Wait up to 5ms */
++		for (i = 0; READ_ONCE(md->status) != newstate && i < 1000; i++) {
++			/* Bring it out of mwait */
++			WRITE_ONCE(md->control, newstate);
++			udelay(5);
++		}
++
++		if (READ_ONCE(md->status) != newstate)
++			pr_err_once("CPU%u is stuck in mwait_play_dead()\n", cpu);
++	}
++}
++
+ void hlt_play_dead(void)
+ {
+ 	if (__this_cpu_read(cpu_info.x86) >= 4)
+diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
+index a498ae1fbe665..67954abb9fdcb 100644
+--- a/arch/x86/mm/fault.c
++++ b/arch/x86/mm/fault.c
+@@ -879,12 +879,6 @@ __bad_area(struct pt_regs *regs, unsigned long error_code,
+ 	__bad_area_nosemaphore(regs, error_code, address, pkey, si_code);
+ }
+ 
+-static noinline void
+-bad_area(struct pt_regs *regs, unsigned long error_code, unsigned long address)
+-{
+-	__bad_area(regs, error_code, address, 0, SEGV_MAPERR);
+-}
+-
+ static inline bool bad_area_access_from_pkeys(unsigned long error_code,
+ 		struct vm_area_struct *vma)
+ {
+@@ -1333,51 +1327,10 @@ void do_user_addr_fault(struct pt_regs *regs,
+ 	}
+ #endif
+ 
+-	/*
+-	 * Kernel-mode access to the user address space should only occur
+-	 * on well-defined single instructions listed in the exception
+-	 * tables.  But, an erroneous kernel fault occurring outside one of
+-	 * those areas which also holds mmap_lock might deadlock attempting
+-	 * to validate the fault against the address space.
+-	 *
+-	 * Only do the expensive exception table search when we might be at
+-	 * risk of a deadlock.  This happens if we
+-	 * 1. Failed to acquire mmap_lock, and
+-	 * 2. The access did not originate in userspace.
+-	 */
+-	if (unlikely(!mmap_read_trylock(mm))) {
+-		if (!user_mode(regs) && !search_exception_tables(regs->ip)) {
+-			/*
+-			 * Fault from code in kernel from
+-			 * which we do not expect faults.
+-			 */
+-			bad_area_nosemaphore(regs, error_code, address);
+-			return;
+-		}
+ retry:
+-		mmap_read_lock(mm);
+-	} else {
+-		/*
+-		 * The above down_read_trylock() might have succeeded in
+-		 * which case we'll have missed the might_sleep() from
+-		 * down_read():
+-		 */
+-		might_sleep();
+-	}
+-
+-	vma = find_vma(mm, address);
++	vma = lock_mm_and_find_vma(mm, address, regs);
+ 	if (unlikely(!vma)) {
+-		bad_area(regs, error_code, address);
+-		return;
+-	}
+-	if (likely(vma->vm_start <= address))
+-		goto good_area;
+-	if (unlikely(!(vma->vm_flags & VM_GROWSDOWN))) {
+-		bad_area(regs, error_code, address);
+-		return;
+-	}
+-	if (unlikely(expand_stack(vma, address))) {
+-		bad_area(regs, error_code, address);
++		bad_area_nosemaphore(regs, error_code, address);
+ 		return;
+ 	}
+ 
+@@ -1385,7 +1338,6 @@ retry:
+ 	 * Ok, we have a good vm_area for this memory access, so
+ 	 * we can handle it..
+ 	 */
+-good_area:
+ 	if (unlikely(access_error(error_code, vma))) {
+ 		bad_area_access_error(regs, error_code, address, vma);
+ 		return;
+diff --git a/arch/xtensa/Kconfig b/arch/xtensa/Kconfig
+index bcb0c5d2abc2f..6d3c9257aa133 100644
+--- a/arch/xtensa/Kconfig
++++ b/arch/xtensa/Kconfig
+@@ -49,6 +49,7 @@ config XTENSA
+ 	select HAVE_SYSCALL_TRACEPOINTS
+ 	select HAVE_VIRT_CPU_ACCOUNTING_GEN
+ 	select IRQ_DOMAIN
++	select LOCK_MM_AND_FIND_VMA
+ 	select MODULES_USE_ELF_RELA
+ 	select PERF_USE_VMALLOC
+ 	select TRACE_IRQFLAGS_SUPPORT
+diff --git a/arch/xtensa/mm/fault.c b/arch/xtensa/mm/fault.c
+index faf7cf35a0ee3..d1eb8d6c5b826 100644
+--- a/arch/xtensa/mm/fault.c
++++ b/arch/xtensa/mm/fault.c
+@@ -130,23 +130,14 @@ void do_page_fault(struct pt_regs *regs)
+ 	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
+ 
+ retry:
+-	mmap_read_lock(mm);
+-	vma = find_vma(mm, address);
+-
++	vma = lock_mm_and_find_vma(mm, address, regs);
+ 	if (!vma)
+-		goto bad_area;
+-	if (vma->vm_start <= address)
+-		goto good_area;
+-	if (!(vma->vm_flags & VM_GROWSDOWN))
+-		goto bad_area;
+-	if (expand_stack(vma, address))
+-		goto bad_area;
++		goto bad_area_nosemaphore;
+ 
+ 	/* Ok, we have a good vm_area for this memory access, so
+ 	 * we can handle it..
+ 	 */
+ 
+-good_area:
+ 	code = SEGV_ACCERR;
+ 
+ 	if (is_write) {
+@@ -205,6 +196,7 @@ good_area:
+ 	 */
+ bad_area:
+ 	mmap_read_unlock(mm);
++bad_area_nosemaphore:
+ 	if (user_mode(regs)) {
+ 		force_sig_fault(SIGSEGV, code, (void *) address);
+ 		return;
+diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
+index a7eb25066c274..48543d29050a2 100644
+--- a/drivers/cpufreq/amd-pstate.c
++++ b/drivers/cpufreq/amd-pstate.c
+@@ -1272,7 +1272,7 @@ static struct cpufreq_driver amd_pstate_epp_driver = {
+ 	.online		= amd_pstate_epp_cpu_online,
+ 	.suspend	= amd_pstate_epp_suspend,
+ 	.resume		= amd_pstate_epp_resume,
+-	.name		= "amd_pstate_epp",
++	.name		= "amd-pstate-epp",
+ 	.attr		= amd_pstate_epp_attr,
+ };
+ 
+diff --git a/drivers/hid/hid-logitech-hidpp.c b/drivers/hid/hid-logitech-hidpp.c
+index 14bce1fdec9fe..3e1c30c653a50 100644
+--- a/drivers/hid/hid-logitech-hidpp.c
++++ b/drivers/hid/hid-logitech-hidpp.c
+@@ -4364,7 +4364,7 @@ static const struct hid_device_id hidpp_devices[] = {
+ 	{ /* wireless touchpad T651 */
+ 	  HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH,
+ 		USB_DEVICE_ID_LOGITECH_T651),
+-	  .driver_data = HIDPP_QUIRK_CLASS_WTP },
++	  .driver_data = HIDPP_QUIRK_CLASS_WTP | HIDPP_QUIRK_DELAYED_INIT },
+ 	{ /* Mouse Logitech Anywhere MX */
+ 	  LDJ_DEVICE(0x1017), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_1P0 },
+ 	{ /* Mouse logitech M560 */
+diff --git a/drivers/hid/hidraw.c b/drivers/hid/hidraw.c
+index 197b1e7bf029e..b617aada50b06 100644
+--- a/drivers/hid/hidraw.c
++++ b/drivers/hid/hidraw.c
+@@ -272,7 +272,12 @@ static int hidraw_open(struct inode *inode, struct file *file)
+ 		goto out;
+ 	}
+ 
+-	down_read(&minors_rwsem);
++	/*
++	 * Technically not writing to the hidraw_table but a write lock is
++	 * required to protect the device refcount. This is symmetrical to
++	 * hidraw_release().
++	 */
++	down_write(&minors_rwsem);
+ 	if (!hidraw_table[minor] || !hidraw_table[minor]->exist) {
+ 		err = -ENODEV;
+ 		goto out_unlock;
+@@ -301,7 +306,7 @@ static int hidraw_open(struct inode *inode, struct file *file)
+ 	spin_unlock_irqrestore(&hidraw_table[minor]->list_lock, flags);
+ 	file->private_data = list;
+ out_unlock:
+-	up_read(&minors_rwsem);
++	up_write(&minors_rwsem);
+ out:
+ 	if (err < 0)
+ 		kfree(list);
+diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
+index 9c30dd30537af..15cd0cabee2a9 100644
+--- a/drivers/hid/wacom_wac.c
++++ b/drivers/hid/wacom_wac.c
+@@ -1309,7 +1309,7 @@ static void wacom_intuos_pro2_bt_pen(struct wacom_wac *wacom)
+ 	struct input_dev *pen_input = wacom->pen_input;
+ 	unsigned char *data = wacom->data;
+ 	int number_of_valid_frames = 0;
+-	int time_interval = 15000000;
++	ktime_t time_interval = 15000000;
+ 	ktime_t time_packet_received = ktime_get();
+ 	int i;
+ 
+@@ -1343,7 +1343,7 @@ static void wacom_intuos_pro2_bt_pen(struct wacom_wac *wacom)
+ 	if (number_of_valid_frames) {
+ 		if (wacom->hid_data.time_delayed)
+ 			time_interval = ktime_get() - wacom->hid_data.time_delayed;
+-		time_interval /= number_of_valid_frames;
++		time_interval = div_u64(time_interval, number_of_valid_frames);
+ 		wacom->hid_data.time_delayed = time_packet_received;
+ 	}
+ 
+@@ -1354,7 +1354,7 @@ static void wacom_intuos_pro2_bt_pen(struct wacom_wac *wacom)
+ 		bool range = frame[0] & 0x20;
+ 		bool invert = frame[0] & 0x10;
+ 		int frames_number_reversed = number_of_valid_frames - i - 1;
+-		int event_timestamp = time_packet_received - frames_number_reversed * time_interval;
++		ktime_t event_timestamp = time_packet_received - frames_number_reversed * time_interval;
+ 
+ 		if (!valid)
+ 			continue;
+diff --git a/drivers/hid/wacom_wac.h b/drivers/hid/wacom_wac.h
+index 1a40bb8c5810c..ee21bb260f22f 100644
+--- a/drivers/hid/wacom_wac.h
++++ b/drivers/hid/wacom_wac.h
+@@ -324,7 +324,7 @@ struct hid_data {
+ 	int ps_connected;
+ 	bool pad_input_event_flag;
+ 	unsigned short sequence_number;
+-	int time_delayed;
++	ktime_t time_delayed;
+ };
+ 
+ struct wacom_remote_data {
+diff --git a/drivers/iommu/amd/iommu_v2.c b/drivers/iommu/amd/iommu_v2.c
+index 864e4ffb6aa94..261352a232716 100644
+--- a/drivers/iommu/amd/iommu_v2.c
++++ b/drivers/iommu/amd/iommu_v2.c
+@@ -485,8 +485,8 @@ static void do_fault(struct work_struct *work)
+ 	flags |= FAULT_FLAG_REMOTE;
+ 
+ 	mmap_read_lock(mm);
+-	vma = find_extend_vma(mm, address);
+-	if (!vma || address < vma->vm_start)
++	vma = vma_lookup(mm, address);
++	if (!vma)
+ 		/* failed to get a vma in the right range */
+ 		goto out;
+ 
+diff --git a/drivers/iommu/iommu-sva.c b/drivers/iommu/iommu-sva.c
+index 24bf9b2b58aa6..ed5f11eb92e56 100644
+--- a/drivers/iommu/iommu-sva.c
++++ b/drivers/iommu/iommu-sva.c
+@@ -203,7 +203,7 @@ iommu_sva_handle_iopf(struct iommu_fault *fault, void *data)
+ 
+ 	mmap_read_lock(mm);
+ 
+-	vma = find_extend_vma(mm, prm->addr);
++	vma = vma_lookup(mm, prm->addr);
+ 	if (!vma)
+ 		/* Unmapped area */
+ 		goto out_put_mm;
+diff --git a/drivers/thermal/mediatek/auxadc_thermal.c b/drivers/thermal/mediatek/auxadc_thermal.c
+index 3372f7c29626c..ab730f9552d0e 100644
+--- a/drivers/thermal/mediatek/auxadc_thermal.c
++++ b/drivers/thermal/mediatek/auxadc_thermal.c
+@@ -1142,12 +1142,7 @@ static int mtk_thermal_probe(struct platform_device *pdev)
+ 		return -ENODEV;
+ 	}
+ 
+-	auxadc_base = devm_of_iomap(&pdev->dev, auxadc, 0, NULL);
+-	if (IS_ERR(auxadc_base)) {
+-		of_node_put(auxadc);
+-		return PTR_ERR(auxadc_base);
+-	}
+-
++	auxadc_base = of_iomap(auxadc, 0);
+ 	auxadc_phys_base = of_get_phys_base(auxadc);
+ 
+ 	of_node_put(auxadc);
+@@ -1163,12 +1158,7 @@ static int mtk_thermal_probe(struct platform_device *pdev)
+ 		return -ENODEV;
+ 	}
+ 
+-	apmixed_base = devm_of_iomap(&pdev->dev, apmixedsys, 0, NULL);
+-	if (IS_ERR(apmixed_base)) {
+-		of_node_put(apmixedsys);
+-		return PTR_ERR(apmixed_base);
+-	}
+-
++	apmixed_base = of_iomap(apmixedsys, 0);
+ 	apmixed_phys_base = of_get_phys_base(apmixedsys);
+ 
+ 	of_node_put(apmixedsys);
+diff --git a/drivers/video/fbdev/core/sysimgblt.c b/drivers/video/fbdev/core/sysimgblt.c
+index 335e92b813fc4..665ef7a0a2495 100644
+--- a/drivers/video/fbdev/core/sysimgblt.c
++++ b/drivers/video/fbdev/core/sysimgblt.c
+@@ -189,7 +189,7 @@ static void fast_imageblit(const struct fb_image *image, struct fb_info *p,
+ 	u32 fgx = fgcolor, bgx = bgcolor, bpp = p->var.bits_per_pixel;
+ 	u32 ppw = 32/bpp, spitch = (image->width + 7)/8;
+ 	u32 bit_mask, eorx, shift;
+-	const char *s = image->data, *src;
++	const u8 *s = image->data, *src;
+ 	u32 *dst;
+ 	const u32 *tab;
+ 	size_t tablen;
+diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
+index 8a884e795f6a7..3f6e7e5b7ead6 100644
+--- a/fs/binfmt_elf.c
++++ b/fs/binfmt_elf.c
+@@ -320,10 +320,10 @@ create_elf_tables(struct linux_binprm *bprm, const struct elfhdr *exec,
+ 	 * Grow the stack manually; some architectures have a limit on how
+ 	 * far ahead a user-space access may be in order to grow the stack.
+ 	 */
+-	if (mmap_read_lock_killable(mm))
++	if (mmap_write_lock_killable(mm))
+ 		return -EINTR;
+-	vma = find_extend_vma(mm, bprm->p);
+-	mmap_read_unlock(mm);
++	vma = find_extend_vma_locked(mm, bprm->p);
++	mmap_write_unlock(mm);
+ 	if (!vma)
+ 		return -EFAULT;
+ 
+diff --git a/fs/exec.c b/fs/exec.c
+index 7c44d0c65b1b4..a0188e5c9ee20 100644
+--- a/fs/exec.c
++++ b/fs/exec.c
+@@ -199,33 +199,39 @@ static struct page *get_arg_page(struct linux_binprm *bprm, unsigned long pos,
+ 		int write)
+ {
+ 	struct page *page;
++	struct vm_area_struct *vma = bprm->vma;
++	struct mm_struct *mm = bprm->mm;
+ 	int ret;
+-	unsigned int gup_flags = 0;
+ 
+-#ifdef CONFIG_STACK_GROWSUP
+-	if (write) {
+-		ret = expand_downwards(bprm->vma, pos);
+-		if (ret < 0)
++	/*
++	 * Avoid relying on expanding the stack down in GUP (which
++	 * does not work for STACK_GROWSUP anyway), and just do it
++	 * by hand ahead of time.
++	 */
++	if (write && pos < vma->vm_start) {
++		mmap_write_lock(mm);
++		ret = expand_downwards(vma, pos);
++		if (unlikely(ret < 0)) {
++			mmap_write_unlock(mm);
+ 			return NULL;
+-	}
+-#endif
+-
+-	if (write)
+-		gup_flags |= FOLL_WRITE;
++		}
++		mmap_write_downgrade(mm);
++	} else
++		mmap_read_lock(mm);
+ 
+ 	/*
+ 	 * We are doing an exec().  'current' is the process
+-	 * doing the exec and bprm->mm is the new process's mm.
++	 * doing the exec and 'mm' is the new process's mm.
+ 	 */
+-	mmap_read_lock(bprm->mm);
+-	ret = get_user_pages_remote(bprm->mm, pos, 1, gup_flags,
++	ret = get_user_pages_remote(mm, pos, 1,
++			write ? FOLL_WRITE : 0,
+ 			&page, NULL, NULL);
+-	mmap_read_unlock(bprm->mm);
++	mmap_read_unlock(mm);
+ 	if (ret <= 0)
+ 		return NULL;
+ 
+ 	if (write)
+-		acct_arg_size(bprm, vma_pages(bprm->vma));
++		acct_arg_size(bprm, vma_pages(vma));
+ 
+ 	return page;
+ }
+@@ -852,7 +858,7 @@ int setup_arg_pages(struct linux_binprm *bprm,
+ 	stack_base = vma->vm_end - stack_expand;
+ #endif
+ 	current->mm->start_stack = bprm->p;
+-	ret = expand_stack(vma, stack_base);
++	ret = expand_stack_locked(vma, stack_base);
+ 	if (ret)
+ 		ret = -EFAULT;
+ 
+diff --git a/include/linux/mm.h b/include/linux/mm.h
+index ced82b9c18e57..53bec6d4297bb 100644
+--- a/include/linux/mm.h
++++ b/include/linux/mm.h
+@@ -2179,6 +2179,9 @@ void pagecache_isize_extended(struct inode *inode, loff_t from, loff_t to);
+ void truncate_pagecache_range(struct inode *inode, loff_t offset, loff_t end);
+ int generic_error_remove_page(struct address_space *mapping, struct page *page);
+ 
++struct vm_area_struct *lock_mm_and_find_vma(struct mm_struct *mm,
++		unsigned long address, struct pt_regs *regs);
++
+ #ifdef CONFIG_MMU
+ extern vm_fault_t handle_mm_fault(struct vm_area_struct *vma,
+ 				  unsigned long address, unsigned int flags,
+@@ -3063,16 +3066,11 @@ extern vm_fault_t filemap_page_mkwrite(struct vm_fault *vmf);
+ 
+ extern unsigned long stack_guard_gap;
+ /* Generic expand stack which grows the stack according to GROWS{UP,DOWN} */
+-extern int expand_stack(struct vm_area_struct *vma, unsigned long address);
++int expand_stack_locked(struct vm_area_struct *vma, unsigned long address);
++struct vm_area_struct *expand_stack(struct mm_struct * mm, unsigned long addr);
+ 
+ /* CONFIG_STACK_GROWSUP still needs to grow downwards at some places */
+-extern int expand_downwards(struct vm_area_struct *vma,
+-		unsigned long address);
+-#if VM_GROWSUP
+-extern int expand_upwards(struct vm_area_struct *vma, unsigned long address);
+-#else
+-  #define expand_upwards(vma, address) (0)
+-#endif
++int expand_downwards(struct vm_area_struct *vma, unsigned long address);
+ 
+ /* Look up the first VMA which satisfies  addr < vm_end,  NULL if none. */
+ extern struct vm_area_struct * find_vma(struct mm_struct * mm, unsigned long addr);
+@@ -3167,7 +3165,8 @@ unsigned long change_prot_numa(struct vm_area_struct *vma,
+ 			unsigned long start, unsigned long end);
+ #endif
+ 
+-struct vm_area_struct *find_extend_vma(struct mm_struct *, unsigned long addr);
++struct vm_area_struct *find_extend_vma_locked(struct mm_struct *,
++		unsigned long addr);
+ int remap_pfn_range(struct vm_area_struct *, unsigned long addr,
+ 			unsigned long pfn, unsigned long size, pgprot_t);
+ int remap_pfn_range_notrack(struct vm_area_struct *vma, unsigned long addr,
+diff --git a/lib/maple_tree.c b/lib/maple_tree.c
+index a614129d43394..468d37bf512da 100644
+--- a/lib/maple_tree.c
++++ b/lib/maple_tree.c
+@@ -4287,11 +4287,13 @@ done:
+ 
+ static inline void mas_wr_end_piv(struct ma_wr_state *wr_mas)
+ {
+-	while ((wr_mas->mas->last > wr_mas->end_piv) &&
+-	       (wr_mas->offset_end < wr_mas->node_end))
+-		wr_mas->end_piv = wr_mas->pivots[++wr_mas->offset_end];
++	while ((wr_mas->offset_end < wr_mas->node_end) &&
++	       (wr_mas->mas->last > wr_mas->pivots[wr_mas->offset_end]))
++		wr_mas->offset_end++;
+ 
+-	if (wr_mas->mas->last > wr_mas->end_piv)
++	if (wr_mas->offset_end < wr_mas->node_end)
++		wr_mas->end_piv = wr_mas->pivots[wr_mas->offset_end];
++	else
+ 		wr_mas->end_piv = wr_mas->mas->max;
+ }
+ 
+@@ -4448,7 +4450,6 @@ static inline void *mas_wr_store_entry(struct ma_wr_state *wr_mas)
+ 	}
+ 
+ 	/* At this point, we are at the leaf node that needs to be altered. */
+-	wr_mas->end_piv = wr_mas->r_max;
+ 	mas_wr_end_piv(wr_mas);
+ 
+ 	if (!wr_mas->entry)
+diff --git a/mm/Kconfig b/mm/Kconfig
+index 4751031f3f052..ae3003a98d130 100644
+--- a/mm/Kconfig
++++ b/mm/Kconfig
+@@ -1202,6 +1202,10 @@ config LRU_GEN_STATS
+ 	  This option has a per-memcg and per-node memory overhead.
+ # }
+ 
++config LOCK_MM_AND_FIND_VMA
++	bool
++	depends on !STACK_GROWSUP
++
+ source "mm/damon/Kconfig"
+ 
+ endmenu
+diff --git a/mm/gup.c b/mm/gup.c
+index eab18ba045dbe..eb0b96985c3c4 100644
+--- a/mm/gup.c
++++ b/mm/gup.c
+@@ -1096,7 +1096,11 @@ static long __get_user_pages(struct mm_struct *mm,
+ 
+ 		/* first iteration or cross vma bound */
+ 		if (!vma || start >= vma->vm_end) {
+-			vma = find_extend_vma(mm, start);
++			vma = find_vma(mm, start);
++			if (vma && (start < vma->vm_start)) {
++				WARN_ON_ONCE(vma->vm_flags & VM_GROWSDOWN);
++				vma = NULL;
++			}
+ 			if (!vma && in_gate_area(mm, start)) {
+ 				ret = get_gate_page(mm, start & PAGE_MASK,
+ 						gup_flags, &vma,
+@@ -1265,9 +1269,13 @@ int fixup_user_fault(struct mm_struct *mm,
+ 		fault_flags |= FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;
+ 
+ retry:
+-	vma = find_extend_vma(mm, address);
+-	if (!vma || address < vma->vm_start)
++	vma = find_vma(mm, address);
++	if (!vma)
++		return -EFAULT;
++	if (address < vma->vm_start ) {
++		WARN_ON_ONCE(vma->vm_flags & VM_GROWSDOWN);
+ 		return -EFAULT;
++	}
+ 
+ 	if (!vma_permits_fault(vma, fault_flags))
+ 		return -EFAULT;
+diff --git a/mm/memory.c b/mm/memory.c
+index 01a23ad48a042..2c2caae6fb3b6 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -5230,6 +5230,125 @@ vm_fault_t handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
+ }
+ EXPORT_SYMBOL_GPL(handle_mm_fault);
+ 
++#ifdef CONFIG_LOCK_MM_AND_FIND_VMA
++#include <linux/extable.h>
++
++static inline bool get_mmap_lock_carefully(struct mm_struct *mm, struct pt_regs *regs)
++{
++	/* Even if this succeeds, make it clear we *might* have slept */
++	if (likely(mmap_read_trylock(mm))) {
++		might_sleep();
++		return true;
++	}
++
++	if (regs && !user_mode(regs)) {
++		unsigned long ip = instruction_pointer(regs);
++		if (!search_exception_tables(ip))
++			return false;
++	}
++
++	return !mmap_read_lock_killable(mm);
++}
++
++static inline bool mmap_upgrade_trylock(struct mm_struct *mm)
++{
++	/*
++	 * We don't have this operation yet.
++	 *
++	 * It should be easy enough to do: it's basically a
++	 *    atomic_long_try_cmpxchg_acquire()
++	 * from RWSEM_READER_BIAS -> RWSEM_WRITER_LOCKED, but
++	 * it also needs the proper lockdep magic etc.
++	 */
++	return false;
++}
++
++static inline bool upgrade_mmap_lock_carefully(struct mm_struct *mm, struct pt_regs *regs)
++{
++	mmap_read_unlock(mm);
++	if (regs && !user_mode(regs)) {
++		unsigned long ip = instruction_pointer(regs);
++		if (!search_exception_tables(ip))
++			return false;
++	}
++	return !mmap_write_lock_killable(mm);
++}
++
++/*
++ * Helper for page fault handling.
++ *
++ * This is kind of equivalend to "mmap_read_lock()" followed
++ * by "find_extend_vma()", except it's a lot more careful about
++ * the locking (and will drop the lock on failure).
++ *
++ * For example, if we have a kernel bug that causes a page
++ * fault, we don't want to just use mmap_read_lock() to get
++ * the mm lock, because that would deadlock if the bug were
++ * to happen while we're holding the mm lock for writing.
++ *
++ * So this checks the exception tables on kernel faults in
++ * order to only do this all for instructions that are actually
++ * expected to fault.
++ *
++ * We can also actually take the mm lock for writing if we
++ * need to extend the vma, which helps the VM layer a lot.
++ */
++struct vm_area_struct *lock_mm_and_find_vma(struct mm_struct *mm,
++			unsigned long addr, struct pt_regs *regs)
++{
++	struct vm_area_struct *vma;
++
++	if (!get_mmap_lock_carefully(mm, regs))
++		return NULL;
++
++	vma = find_vma(mm, addr);
++	if (likely(vma && (vma->vm_start <= addr)))
++		return vma;
++
++	/*
++	 * Well, dang. We might still be successful, but only
++	 * if we can extend a vma to do so.
++	 */
++	if (!vma || !(vma->vm_flags & VM_GROWSDOWN)) {
++		mmap_read_unlock(mm);
++		return NULL;
++	}
++
++	/*
++	 * We can try to upgrade the mmap lock atomically,
++	 * in which case we can continue to use the vma
++	 * we already looked up.
++	 *
++	 * Otherwise we'll have to drop the mmap lock and
++	 * re-take it, and also look up the vma again,
++	 * re-checking it.
++	 */
++	if (!mmap_upgrade_trylock(mm)) {
++		if (!upgrade_mmap_lock_carefully(mm, regs))
++			return NULL;
++
++		vma = find_vma(mm, addr);
++		if (!vma)
++			goto fail;
++		if (vma->vm_start <= addr)
++			goto success;
++		if (!(vma->vm_flags & VM_GROWSDOWN))
++			goto fail;
++	}
++
++	if (expand_stack_locked(vma, addr))
++		goto fail;
++
++success:
++	mmap_write_downgrade(mm);
++	return vma;
++
++fail:
++	mmap_write_unlock(mm);
++	return NULL;
++}
++#endif
++
+ #ifndef __PAGETABLE_P4D_FOLDED
+ /*
+  * Allocate p4d page table.
+@@ -5501,6 +5620,14 @@ int __access_remote_vm(struct mm_struct *mm, unsigned long addr, void *buf,
+ 	if (mmap_read_lock_killable(mm))
+ 		return 0;
+ 
++	/* We might need to expand the stack to access it */
++	vma = vma_lookup(mm, addr);
++	if (!vma) {
++		vma = expand_stack(mm, addr);
++		if (!vma)
++			return 0;
++	}
++
+ 	/* ignore errors, just check how much was successfully transferred */
+ 	while (len) {
+ 		int bytes, ret, offset;
+diff --git a/mm/mmap.c b/mm/mmap.c
+index eefa6f0cda28e..c9328a6c7d43d 100644
+--- a/mm/mmap.c
++++ b/mm/mmap.c
+@@ -1898,7 +1898,7 @@ static int acct_stack_growth(struct vm_area_struct *vma,
+  * PA-RISC uses this for its stack; IA64 for its Register Backing Store.
+  * vma is the last one with address > vma->vm_end.  Have to extend vma.
+  */
+-int expand_upwards(struct vm_area_struct *vma, unsigned long address)
++static int expand_upwards(struct vm_area_struct *vma, unsigned long address)
+ {
+ 	struct mm_struct *mm = vma->vm_mm;
+ 	struct vm_area_struct *next;
+@@ -1990,6 +1990,7 @@ int expand_upwards(struct vm_area_struct *vma, unsigned long address)
+ 
+ /*
+  * vma is the first one with address < vma->vm_start.  Have to extend vma.
++ * mmap_lock held for writing.
+  */
+ int expand_downwards(struct vm_area_struct *vma, unsigned long address)
+ {
+@@ -1998,16 +1999,20 @@ int expand_downwards(struct vm_area_struct *vma, unsigned long address)
+ 	struct vm_area_struct *prev;
+ 	int error = 0;
+ 
++	if (!(vma->vm_flags & VM_GROWSDOWN))
++		return -EFAULT;
++
+ 	address &= PAGE_MASK;
+-	if (address < mmap_min_addr)
++	if (address < mmap_min_addr || address < FIRST_USER_ADDRESS)
+ 		return -EPERM;
+ 
+ 	/* Enforce stack_guard_gap */
+ 	prev = mas_prev(&mas, 0);
+ 	/* Check that both stack segments have the same anon_vma? */
+-	if (prev && !(prev->vm_flags & VM_GROWSDOWN) &&
+-			vma_is_accessible(prev)) {
+-		if (address - prev->vm_end < stack_guard_gap)
++	if (prev) {
++		if (!(prev->vm_flags & VM_GROWSDOWN) &&
++		    vma_is_accessible(prev) &&
++		    (address - prev->vm_end < stack_guard_gap))
+ 			return -ENOMEM;
+ 	}
+ 
+@@ -2087,13 +2092,12 @@ static int __init cmdline_parse_stack_guard_gap(char *p)
+ __setup("stack_guard_gap=", cmdline_parse_stack_guard_gap);
+ 
+ #ifdef CONFIG_STACK_GROWSUP
+-int expand_stack(struct vm_area_struct *vma, unsigned long address)
++int expand_stack_locked(struct vm_area_struct *vma, unsigned long address)
+ {
+ 	return expand_upwards(vma, address);
+ }
+ 
+-struct vm_area_struct *
+-find_extend_vma(struct mm_struct *mm, unsigned long addr)
++struct vm_area_struct *find_extend_vma_locked(struct mm_struct *mm, unsigned long addr)
+ {
+ 	struct vm_area_struct *vma, *prev;
+ 
+@@ -2101,20 +2105,23 @@ find_extend_vma(struct mm_struct *mm, unsigned long addr)
+ 	vma = find_vma_prev(mm, addr, &prev);
+ 	if (vma && (vma->vm_start <= addr))
+ 		return vma;
+-	if (!prev || expand_stack(prev, addr))
++	if (!prev)
++		return NULL;
++	if (expand_stack_locked(prev, addr))
+ 		return NULL;
+ 	if (prev->vm_flags & VM_LOCKED)
+ 		populate_vma_page_range(prev, addr, prev->vm_end, NULL);
+ 	return prev;
+ }
+ #else
+-int expand_stack(struct vm_area_struct *vma, unsigned long address)
++int expand_stack_locked(struct vm_area_struct *vma, unsigned long address)
+ {
++	if (unlikely(!(vma->vm_flags & VM_GROWSDOWN)))
++		return -EINVAL;
+ 	return expand_downwards(vma, address);
+ }
+ 
+-struct vm_area_struct *
+-find_extend_vma(struct mm_struct *mm, unsigned long addr)
++struct vm_area_struct *find_extend_vma_locked(struct mm_struct *mm, unsigned long addr)
+ {
+ 	struct vm_area_struct *vma;
+ 	unsigned long start;
+@@ -2125,10 +2132,8 @@ find_extend_vma(struct mm_struct *mm, unsigned long addr)
+ 		return NULL;
+ 	if (vma->vm_start <= addr)
+ 		return vma;
+-	if (!(vma->vm_flags & VM_GROWSDOWN))
+-		return NULL;
+ 	start = vma->vm_start;
+-	if (expand_stack(vma, addr))
++	if (expand_stack_locked(vma, addr))
+ 		return NULL;
+ 	if (vma->vm_flags & VM_LOCKED)
+ 		populate_vma_page_range(vma, addr, start, NULL);
+@@ -2136,7 +2141,91 @@ find_extend_vma(struct mm_struct *mm, unsigned long addr)
+ }
+ #endif
+ 
+-EXPORT_SYMBOL_GPL(find_extend_vma);
++/*
++ * IA64 has some horrid mapping rules: it can expand both up and down,
++ * but with various special rules.
++ *
++ * We'll get rid of this architecture eventually, so the ugliness is
++ * temporary.
++ */
++#ifdef CONFIG_IA64
++static inline bool vma_expand_ok(struct vm_area_struct *vma, unsigned long addr)
++{
++	return REGION_NUMBER(addr) == REGION_NUMBER(vma->vm_start) &&
++		REGION_OFFSET(addr) < RGN_MAP_LIMIT;
++}
++
++/*
++ * IA64 stacks grow down, but there's a special register backing store
++ * that can grow up. Only sequentially, though, so the new address must
++ * match vm_end.
++ */
++static inline int vma_expand_up(struct vm_area_struct *vma, unsigned long addr)
++{
++	if (!vma_expand_ok(vma, addr))
++		return -EFAULT;
++	if (vma->vm_end != (addr & PAGE_MASK))
++		return -EFAULT;
++	return expand_upwards(vma, addr);
++}
++
++static inline bool vma_expand_down(struct vm_area_struct *vma, unsigned long addr)
++{
++	if (!vma_expand_ok(vma, addr))
++		return -EFAULT;
++	return expand_downwards(vma, addr);
++}
++
++#elif defined(CONFIG_STACK_GROWSUP)
++
++#define vma_expand_up(vma,addr) expand_upwards(vma, addr)
++#define vma_expand_down(vma, addr) (-EFAULT)
++
++#else
++
++#define vma_expand_up(vma,addr) (-EFAULT)
++#define vma_expand_down(vma, addr) expand_downwards(vma, addr)
++
++#endif
++
++/*
++ * expand_stack(): legacy interface for page faulting. Don't use unless
++ * you have to.
++ *
++ * This is called with the mm locked for reading, drops the lock, takes
++ * the lock for writing, tries to look up a vma again, expands it if
++ * necessary, and downgrades the lock to reading again.
++ *
++ * If no vma is found or it can't be expanded, it returns NULL and has
++ * dropped the lock.
++ */
++struct vm_area_struct *expand_stack(struct mm_struct *mm, unsigned long addr)
++{
++	struct vm_area_struct *vma, *prev;
++
++	mmap_read_unlock(mm);
++	if (mmap_write_lock_killable(mm))
++		return NULL;
++
++	vma = find_vma_prev(mm, addr, &prev);
++	if (vma && vma->vm_start <= addr)
++		goto success;
++
++	if (prev && !vma_expand_up(prev, addr)) {
++		vma = prev;
++		goto success;
++	}
++
++	if (vma && !vma_expand_down(vma, addr))
++		goto success;
++
++	mmap_write_unlock(mm);
++	return NULL;
++
++success:
++	mmap_write_downgrade(mm);
++	return vma;
++}
+ 
+ /*
+  * Ok - we have the memory areas we should free on a maple tree so release them,
+@@ -2280,19 +2369,6 @@ int split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,
+ 	return __split_vma(vmi, vma, addr, new_below);
+ }
+ 
+-static inline int munmap_sidetree(struct vm_area_struct *vma,
+-				   struct ma_state *mas_detach)
+-{
+-	mas_set_range(mas_detach, vma->vm_start, vma->vm_end - 1);
+-	if (mas_store_gfp(mas_detach, vma, GFP_KERNEL))
+-		return -ENOMEM;
+-
+-	if (vma->vm_flags & VM_LOCKED)
+-		vma->vm_mm->locked_vm -= vma_pages(vma);
+-
+-	return 0;
+-}
+-
+ /*
+  * do_vmi_align_munmap() - munmap the aligned region from @start to @end.
+  * @vmi: The vma iterator
+@@ -2314,6 +2390,7 @@ do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
+ 	struct maple_tree mt_detach;
+ 	int count = 0;
+ 	int error = -ENOMEM;
++	unsigned long locked_vm = 0;
+ 	MA_STATE(mas_detach, &mt_detach, 0, 0);
+ 	mt_init_flags(&mt_detach, vmi->mas.tree->ma_flags & MT_FLAGS_LOCK_MASK);
+ 	mt_set_external_lock(&mt_detach, &mm->mmap_lock);
+@@ -2359,9 +2436,12 @@ do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
+ 			if (error)
+ 				goto end_split_failed;
+ 		}
+-		error = munmap_sidetree(next, &mas_detach);
++		mas_set_range(&mas_detach, next->vm_start, next->vm_end - 1);
++		error = mas_store_gfp(&mas_detach, next, GFP_KERNEL);
+ 		if (error)
+-			goto munmap_sidetree_failed;
++			goto munmap_gather_failed;
++		if (next->vm_flags & VM_LOCKED)
++			locked_vm += vma_pages(next);
+ 
+ 		count++;
+ #ifdef CONFIG_DEBUG_VM_MAPLE_TREE
+@@ -2406,11 +2486,13 @@ do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
+ 		BUG_ON(count != test_count);
+ 	}
+ #endif
+-	/* Point of no return */
+ 	vma_iter_set(vmi, start);
+-	if (vma_iter_clear_gfp(vmi, start, end, GFP_KERNEL))
+-		return -ENOMEM;
++	error = vma_iter_clear_gfp(vmi, start, end, GFP_KERNEL);
++	if (error)
++		goto clear_tree_failed;
+ 
++	/* Point of no return */
++	mm->locked_vm -= locked_vm;
+ 	mm->map_count -= count;
+ 	/*
+ 	 * Do not downgrade mmap_lock if we are next to VM_GROWSDOWN or
+@@ -2440,8 +2522,9 @@ do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
+ 	validate_mm(mm);
+ 	return downgrade ? 1 : 0;
+ 
++clear_tree_failed:
+ userfaultfd_error:
+-munmap_sidetree_failed:
++munmap_gather_failed:
+ end_split_failed:
+ 	__mt_destroy(&mt_detach);
+ start_split_failed:
+diff --git a/mm/nommu.c b/mm/nommu.c
+index 57ba243c6a37f..07a3af6a94ea8 100644
+--- a/mm/nommu.c
++++ b/mm/nommu.c
+@@ -631,23 +631,31 @@ struct vm_area_struct *find_vma(struct mm_struct *mm, unsigned long addr)
+ EXPORT_SYMBOL(find_vma);
+ 
+ /*
+- * find a VMA
+- * - we don't extend stack VMAs under NOMMU conditions
++ * At least xtensa ends up having protection faults even with no
++ * MMU.. No stack expansion, at least.
+  */
+-struct vm_area_struct *find_extend_vma(struct mm_struct *mm, unsigned long addr)
++struct vm_area_struct *lock_mm_and_find_vma(struct mm_struct *mm,
++			unsigned long addr, struct pt_regs *regs)
+ {
+-	return find_vma(mm, addr);
++	mmap_read_lock(mm);
++	return vma_lookup(mm, addr);
+ }
+ 
+ /*
+  * expand a stack to a given address
+  * - not supported under NOMMU conditions
+  */
+-int expand_stack(struct vm_area_struct *vma, unsigned long address)
++int expand_stack_locked(struct vm_area_struct *vma, unsigned long addr)
+ {
+ 	return -ENOMEM;
+ }
+ 
++struct vm_area_struct *expand_stack(struct mm_struct *mm, unsigned long addr)
++{
++	mmap_read_unlock(mm);
++	return NULL;
++}
++
+ /*
+  * look up the first VMA exactly that exactly matches addr
+  * - should be called with mm->mmap_lock at least held readlocked
+diff --git a/net/can/isotp.c b/net/can/isotp.c
+index 1af623839bffa..b3c2a49b189cc 100644
+--- a/net/can/isotp.c
++++ b/net/can/isotp.c
+@@ -1079,8 +1079,9 @@ wait_free_buffer:
+ 		if (err)
+ 			goto err_event_drop;
+ 
+-		if (sk->sk_err)
+-			return -sk->sk_err;
++		err = sock_error(sk);
++		if (err)
++			return err;
+ 	}
+ 
+ 	return size;


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [gentoo-commits] proj/linux-patches:6.3 commit in: /
@ 2023-07-04 13:06 Mike Pagano
  0 siblings, 0 replies; 23+ messages in thread
From: Mike Pagano @ 2023-07-04 13:06 UTC (permalink / raw
  To: gentoo-commits

commit:     6dd30a46a4609e1a119a362fd999f5b1d9e25817
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Jul  4 13:05:49 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Jul  4 13:05:49 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=6dd30a46

mm: disable CONFIG_PER_VMA_LOCK by default until its fixed

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                     |  4 ++
 1800_mm-execve-mark-stack-as-growing-down.patch | 82 +++++++++++++++++++++++++
 2 files changed, 86 insertions(+)

diff --git a/0000_README b/0000_README
index 80499d11..ac24f1fa 100644
--- a/0000_README
+++ b/0000_README
@@ -99,6 +99,10 @@ Patch:  1700_sparc-address-warray-bound-warnings.patch
 From:		https://github.com/KSPP/linux/issues/109
 Desc:		Address -Warray-bounds warnings 
 
+Patch:  1800_mm-execve-mark-stack-as-growing-down.patch
+From:   https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git
+Desc:   execve: always mark stack as growing down during early stack setup
+
 Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
 From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
 Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758

diff --git a/1800_mm-execve-mark-stack-as-growing-down.patch b/1800_mm-execve-mark-stack-as-growing-down.patch
new file mode 100644
index 00000000..f3da01d8
--- /dev/null
+++ b/1800_mm-execve-mark-stack-as-growing-down.patch
@@ -0,0 +1,82 @@
+From 53a70ffa22715ab23903ef9fa4f67a21ce10a759 Mon Sep 17 00:00:00 2001
+From: Linus Torvalds <torvalds@linux-foundation.org>
+Date: Sun, 2 Jul 2023 23:20:17 -0700
+Subject: execve: always mark stack as growing down during early stack setup
+
+commit f66066bc5136f25e36a2daff4896c768f18c211e upstream.
+
+While our user stacks can grow either down (all common architectures) or
+up (parisc and the ia64 register stack), the initial stack setup when we
+copy the argument and environment strings to the new stack at execve()
+time is always done by extending the stack downwards.
+
+But it turns out that in commit 8d7071af8907 ("mm: always expand the
+stack with the mmap write lock held"), as part of making the stack
+growing code more robust, 'expand_downwards()' was now made to actually
+check the vma flags:
+
+	if (!(vma->vm_flags & VM_GROWSDOWN))
+		return -EFAULT;
+
+and that meant that this execve-time stack expansion started failing on
+parisc, because on that architecture, the stack flags do not contain the
+VM_GROWSDOWN bit.
+
+At the same time the new check in expand_downwards() is clearly correct,
+and simplified the callers, so let's not remove it.
+
+The solution is instead to just codify the fact that yes, during
+execve(), the stack grows down.  This not only matches reality, it ends
+up being particularly simple: we already have special execve-time flags
+for the stack (VM_STACK_INCOMPLETE_SETUP) and use those flags to avoid
+page migration during this setup time (see vma_is_temporary_stack() and
+invalid_migration_vma()).
+
+So just add VM_GROWSDOWN to that set of temporary flags, and now our
+stack flags automatically match reality, and the parisc stack expansion
+works again.
+
+Note that the VM_STACK_INCOMPLETE_SETUP bits will be cleared when the
+stack is finalized, so we only add the extra VM_GROWSDOWN bit on
+CONFIG_STACK_GROWSUP architectures (ie parisc) rather than adding it in
+general.
+
+Link: https://lore.kernel.org/all/612eaa53-6904-6e16-67fc-394f4faa0e16@bell.net/
+Link: https://lore.kernel.org/all/5fd98a09-4792-1433-752d-029ae3545168@gmx.de/
+Fixes: 8d7071af8907 ("mm: always expand the stack with the mmap write lock held")
+Reported-by: John David Anglin <dave.anglin@bell.net>
+Reported-and-tested-by: Helge Deller <deller@gmx.de>
+Reported-and-tested-by: Guenter Roeck <linux@roeck-us.net>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ include/linux/mm.h | 4 +++-
+ 1 file changed, 3 insertions(+), 1 deletion(-)
+
+diff --git a/include/linux/mm.h b/include/linux/mm.h
+index 53bec6d4297bb..e9cf8dcd4b83d 100644
+--- a/include/linux/mm.h
++++ b/include/linux/mm.h
+@@ -384,7 +384,7 @@ extern unsigned int kobjsize(const void *objp);
+ #endif /* CONFIG_HAVE_ARCH_USERFAULTFD_MINOR */
+ 
+ /* Bits set in the VMA until the stack is in its final location */
+-#define VM_STACK_INCOMPLETE_SETUP	(VM_RAND_READ | VM_SEQ_READ)
++#define VM_STACK_INCOMPLETE_SETUP (VM_RAND_READ | VM_SEQ_READ | VM_STACK_EARLY)
+ 
+ #define TASK_EXEC ((current->personality & READ_IMPLIES_EXEC) ? VM_EXEC : 0)
+ 
+@@ -406,8 +406,10 @@ extern unsigned int kobjsize(const void *objp);
+ 
+ #ifdef CONFIG_STACK_GROWSUP
+ #define VM_STACK	VM_GROWSUP
++#define VM_STACK_EARLY	VM_GROWSDOWN
+ #else
+ #define VM_STACK	VM_GROWSDOWN
++#define VM_STACK_EARLY	0
+ #endif
+ 
+ #define VM_STACK_FLAGS	(VM_STACK | VM_STACK_DEFAULT_FLAGS | VM_ACCOUNT)
+-- 
+cgit 
+


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [gentoo-commits] proj/linux-patches:6.3 commit in: /
@ 2023-07-04 14:15 Mike Pagano
  0 siblings, 0 replies; 23+ messages in thread
From: Mike Pagano @ 2023-07-04 14:15 UTC (permalink / raw
  To: gentoo-commits

commit:     44b6839b58243736b0dd9059fd421a747e11e66c
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Jul  4 14:10:41 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Jul  4 14:10:41 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=44b6839b

wireguard: queueing: use saner cpu selection wrapping

Bug: https://bugs.gentoo.org/909066

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                        |   4 +
 2400_wireguard-queueing-cpu-sel-wrapping-fix.patch | 116 +++++++++++++++++++++
 2 files changed, 120 insertions(+)

diff --git a/0000_README b/0000_README
index ac24f1fa..5a5a55c1 100644
--- a/0000_README
+++ b/0000_README
@@ -107,6 +107,10 @@ Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
 From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
 Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
 
+Patch:  2400_wireguard-queueing-cpu-sel-wrapping-fix.patch
+From:   https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git/commit/?id=7387943fa35516f6f8017a3b0e9ce48a3bef9faa
+Desc:   wireguard: queueing: use saner cpu selection wrapping
+
 Patch:  2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
 From:   https://bugs.gentoo.org/710790
 Desc:   tmp513 requies REGMAP_I2C to build.  Select it by default in Kconfig. See bug #710790. Thanks to Phil Stracchino

diff --git a/2400_wireguard-queueing-cpu-sel-wrapping-fix.patch b/2400_wireguard-queueing-cpu-sel-wrapping-fix.patch
new file mode 100644
index 00000000..fa199039
--- /dev/null
+++ b/2400_wireguard-queueing-cpu-sel-wrapping-fix.patch
@@ -0,0 +1,116 @@
+From 7387943fa35516f6f8017a3b0e9ce48a3bef9faa Mon Sep 17 00:00:00 2001
+From: "Jason A. Donenfeld" <Jason@zx2c4.com>
+Date: Mon, 3 Jul 2023 03:27:04 +0200
+Subject: wireguard: queueing: use saner cpu selection wrapping
+
+Using `% nr_cpumask_bits` is slow and complicated, and not totally
+robust toward dynamic changes to CPU topologies. Rather than storing the
+next CPU in the round-robin, just store the last one, and also return
+that value. This simplifies the loop drastically into a much more common
+pattern.
+
+Fixes: e7096c131e51 ("net: WireGuard secure network tunnel")
+Cc: stable@vger.kernel.org
+Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
+Tested-by: Manuel Leiner <manuel.leiner@gmx.de>
+Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+---
+ drivers/net/wireguard/queueing.c |  1 +
+ drivers/net/wireguard/queueing.h | 25 +++++++++++--------------
+ drivers/net/wireguard/receive.c  |  2 +-
+ drivers/net/wireguard/send.c     |  2 +-
+ 4 files changed, 14 insertions(+), 16 deletions(-)
+
+diff --git a/drivers/net/wireguard/queueing.c b/drivers/net/wireguard/queueing.c
+index 8084e7408c0ae..26d235d152352 100644
+--- a/drivers/net/wireguard/queueing.c
++++ b/drivers/net/wireguard/queueing.c
+@@ -28,6 +28,7 @@ int wg_packet_queue_init(struct crypt_queue *queue, work_func_t function,
+ 	int ret;
+ 
+ 	memset(queue, 0, sizeof(*queue));
++	queue->last_cpu = -1;
+ 	ret = ptr_ring_init(&queue->ring, len, GFP_KERNEL);
+ 	if (ret)
+ 		return ret;
+diff --git a/drivers/net/wireguard/queueing.h b/drivers/net/wireguard/queueing.h
+index 125284b346a77..1ea4f874e367e 100644
+--- a/drivers/net/wireguard/queueing.h
++++ b/drivers/net/wireguard/queueing.h
+@@ -117,20 +117,17 @@ static inline int wg_cpumask_choose_online(int *stored_cpu, unsigned int id)
+ 	return cpu;
+ }
+ 
+-/* This function is racy, in the sense that next is unlocked, so it could return
+- * the same CPU twice. A race-free version of this would be to instead store an
+- * atomic sequence number, do an increment-and-return, and then iterate through
+- * every possible CPU until we get to that index -- choose_cpu. However that's
+- * a bit slower, and it doesn't seem like this potential race actually
+- * introduces any performance loss, so we live with it.
++/* This function is racy, in the sense that it's called while last_cpu is
++ * unlocked, so it could return the same CPU twice. Adding locking or using
++ * atomic sequence numbers is slower though, and the consequences of racing are
++ * harmless, so live with it.
+  */
+-static inline int wg_cpumask_next_online(int *next)
++static inline int wg_cpumask_next_online(int *last_cpu)
+ {
+-	int cpu = *next;
+-
+-	while (unlikely(!cpumask_test_cpu(cpu, cpu_online_mask)))
+-		cpu = cpumask_next(cpu, cpu_online_mask) % nr_cpumask_bits;
+-	*next = cpumask_next(cpu, cpu_online_mask) % nr_cpumask_bits;
++	int cpu = cpumask_next(*last_cpu, cpu_online_mask);
++	if (cpu >= nr_cpu_ids)
++		cpu = cpumask_first(cpu_online_mask);
++	*last_cpu = cpu;
+ 	return cpu;
+ }
+ 
+@@ -159,7 +156,7 @@ static inline void wg_prev_queue_drop_peeked(struct prev_queue *queue)
+ 
+ static inline int wg_queue_enqueue_per_device_and_peer(
+ 	struct crypt_queue *device_queue, struct prev_queue *peer_queue,
+-	struct sk_buff *skb, struct workqueue_struct *wq, int *next_cpu)
++	struct sk_buff *skb, struct workqueue_struct *wq)
+ {
+ 	int cpu;
+ 
+@@ -173,7 +170,7 @@ static inline int wg_queue_enqueue_per_device_and_peer(
+ 	/* Then we queue it up in the device queue, which consumes the
+ 	 * packet as soon as it can.
+ 	 */
+-	cpu = wg_cpumask_next_online(next_cpu);
++	cpu = wg_cpumask_next_online(&device_queue->last_cpu);
+ 	if (unlikely(ptr_ring_produce_bh(&device_queue->ring, skb)))
+ 		return -EPIPE;
+ 	queue_work_on(cpu, wq, &per_cpu_ptr(device_queue->worker, cpu)->work);
+diff --git a/drivers/net/wireguard/receive.c b/drivers/net/wireguard/receive.c
+index 7135d51d2d872..0b3f0c8435509 100644
+--- a/drivers/net/wireguard/receive.c
++++ b/drivers/net/wireguard/receive.c
+@@ -524,7 +524,7 @@ static void wg_packet_consume_data(struct wg_device *wg, struct sk_buff *skb)
+ 		goto err;
+ 
+ 	ret = wg_queue_enqueue_per_device_and_peer(&wg->decrypt_queue, &peer->rx_queue, skb,
+-						   wg->packet_crypt_wq, &wg->decrypt_queue.last_cpu);
++						   wg->packet_crypt_wq);
+ 	if (unlikely(ret == -EPIPE))
+ 		wg_queue_enqueue_per_peer_rx(skb, PACKET_STATE_DEAD);
+ 	if (likely(!ret || ret == -EPIPE)) {
+diff --git a/drivers/net/wireguard/send.c b/drivers/net/wireguard/send.c
+index 5368f7c35b4bf..95c853b59e1da 100644
+--- a/drivers/net/wireguard/send.c
++++ b/drivers/net/wireguard/send.c
+@@ -318,7 +318,7 @@ static void wg_packet_create_data(struct wg_peer *peer, struct sk_buff *first)
+ 		goto err;
+ 
+ 	ret = wg_queue_enqueue_per_device_and_peer(&wg->encrypt_queue, &peer->tx_queue, first,
+-						   wg->packet_crypt_wq, &wg->encrypt_queue.last_cpu);
++						   wg->packet_crypt_wq);
+ 	if (unlikely(ret == -EPIPE))
+ 		wg_queue_enqueue_per_peer_tx(first, PACKET_STATE_DEAD);
+ err:
+-- 
+cgit 


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [gentoo-commits] proj/linux-patches:6.3 commit in: /
@ 2023-07-05 20:27 Mike Pagano
  0 siblings, 0 replies; 23+ messages in thread
From: Mike Pagano @ 2023-07-05 20:27 UTC (permalink / raw
  To: gentoo-commits

commit:     f9808122979b9ca97159dc7754d68f1da8d1375e
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Jul  5 20:27:20 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Jul  5 20:27:20 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=f9808122

Linux patch 6.3.12

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |   4 +
 1011_linux-6.3.12.patch | 559 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 563 insertions(+)

diff --git a/0000_README b/0000_README
index 5a5a55c1..2dbf6665 100644
--- a/0000_README
+++ b/0000_README
@@ -87,6 +87,10 @@ Patch:  1010_linux-6.3.11.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.3.11
 
+Patch:  1011_linux-6.3.12.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.3.12
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1011_linux-6.3.12.patch b/1011_linux-6.3.12.patch
new file mode 100644
index 00000000..705c0652
--- /dev/null
+++ b/1011_linux-6.3.12.patch
@@ -0,0 +1,559 @@
+diff --git a/Documentation/process/changes.rst b/Documentation/process/changes.rst
+index ef540865ad22e..a9ef00509c9b1 100644
+--- a/Documentation/process/changes.rst
++++ b/Documentation/process/changes.rst
+@@ -60,6 +60,7 @@ openssl & libcrypto    1.0.0            openssl version
+ bc                     1.06.95          bc --version
+ Sphinx\ [#f1]_         1.7              sphinx-build --version
+ cpio                   any              cpio --version
++gtags (optional)       6.6.5            gtags --version
+ ====================== ===============  ========================================
+ 
+ .. [#f1] Sphinx is needed only to build the Kernel documentation
+@@ -174,6 +175,12 @@ You will need openssl to build kernels 3.7 and higher if module signing is
+ enabled.  You will also need openssl development packages to build kernels 4.3
+ and higher.
+ 
++gtags / GNU GLOBAL (optional)
++-----------------------------
++
++The kernel build requires GNU GLOBAL version 6.6.5 or later to generate
++tag files through ``make gtags``.  This is due to its use of the gtags
++``-C (--directory)`` flag.
+ 
+ System utilities
+ ****************
+diff --git a/Makefile b/Makefile
+index 34349623a76a7..7b6c66b7b0041 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 3
+-SUBLEVEL = 11
++SUBLEVEL = 12
+ EXTRAVERSION =
+ NAME = Hurr durr I'ma ninja sloth
+ 
+diff --git a/drivers/cxl/core/pci.c b/drivers/cxl/core/pci.c
+index 2055d0b9d4af1..730edbd363118 100644
+--- a/drivers/cxl/core/pci.c
++++ b/drivers/cxl/core/pci.c
+@@ -308,36 +308,17 @@ static void disable_hdm(void *_cxlhdm)
+ 	       hdm + CXL_HDM_DECODER_CTRL_OFFSET);
+ }
+ 
+-int devm_cxl_enable_hdm(struct cxl_port *port, struct cxl_hdm *cxlhdm)
++static int devm_cxl_enable_hdm(struct device *host, struct cxl_hdm *cxlhdm)
+ {
+-	void __iomem *hdm;
++	void __iomem *hdm = cxlhdm->regs.hdm_decoder;
+ 	u32 global_ctrl;
+ 
+-	/*
+-	 * If the hdm capability was not mapped there is nothing to enable and
+-	 * the caller is responsible for what happens next.  For example,
+-	 * emulate a passthrough decoder.
+-	 */
+-	if (IS_ERR(cxlhdm))
+-		return 0;
+-
+-	hdm = cxlhdm->regs.hdm_decoder;
+ 	global_ctrl = readl(hdm + CXL_HDM_DECODER_CTRL_OFFSET);
+-
+-	/*
+-	 * If the HDM decoder capability was enabled on entry, skip
+-	 * registering disable_hdm() since this decode capability may be
+-	 * owned by platform firmware.
+-	 */
+-	if (global_ctrl & CXL_HDM_DECODER_ENABLE)
+-		return 0;
+-
+ 	writel(global_ctrl | CXL_HDM_DECODER_ENABLE,
+ 	       hdm + CXL_HDM_DECODER_CTRL_OFFSET);
+ 
+-	return devm_add_action_or_reset(&port->dev, disable_hdm, cxlhdm);
++	return devm_add_action_or_reset(host, disable_hdm, cxlhdm);
+ }
+-EXPORT_SYMBOL_NS_GPL(devm_cxl_enable_hdm, CXL);
+ 
+ int cxl_dvsec_rr_decode(struct device *dev, int d,
+ 			struct cxl_endpoint_dvsec_info *info)
+@@ -511,7 +492,7 @@ int cxl_hdm_decode_init(struct cxl_dev_state *cxlds, struct cxl_hdm *cxlhdm,
+ 	if (info->mem_enabled)
+ 		return 0;
+ 
+-	rc = devm_cxl_enable_hdm(port, cxlhdm);
++	rc = devm_cxl_enable_hdm(&port->dev, cxlhdm);
+ 	if (rc)
+ 		return rc;
+ 
+diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
+index f93a285389621..044a92d9813e2 100644
+--- a/drivers/cxl/cxl.h
++++ b/drivers/cxl/cxl.h
+@@ -710,7 +710,6 @@ struct cxl_endpoint_dvsec_info {
+ struct cxl_hdm;
+ struct cxl_hdm *devm_cxl_setup_hdm(struct cxl_port *port,
+ 				   struct cxl_endpoint_dvsec_info *info);
+-int devm_cxl_enable_hdm(struct cxl_port *port, struct cxl_hdm *cxlhdm);
+ int devm_cxl_enumerate_decoders(struct cxl_hdm *cxlhdm,
+ 				struct cxl_endpoint_dvsec_info *info);
+ int devm_cxl_add_passthrough_decoder(struct cxl_port *port);
+diff --git a/drivers/cxl/port.c b/drivers/cxl/port.c
+index c23b6164e1c0f..07c5ac598da1c 100644
+--- a/drivers/cxl/port.c
++++ b/drivers/cxl/port.c
+@@ -60,17 +60,13 @@ static int discover_region(struct device *dev, void *root)
+ static int cxl_switch_port_probe(struct cxl_port *port)
+ {
+ 	struct cxl_hdm *cxlhdm;
+-	int rc, nr_dports;
+-
+-	nr_dports = devm_cxl_port_enumerate_dports(port);
+-	if (nr_dports < 0)
+-		return nr_dports;
++	int rc;
+ 
+-	cxlhdm = devm_cxl_setup_hdm(port, NULL);
+-	rc = devm_cxl_enable_hdm(port, cxlhdm);
+-	if (rc)
++	rc = devm_cxl_port_enumerate_dports(port);
++	if (rc < 0)
+ 		return rc;
+ 
++	cxlhdm = devm_cxl_setup_hdm(port, NULL);
+ 	if (!IS_ERR(cxlhdm))
+ 		return devm_cxl_enumerate_decoders(cxlhdm, NULL);
+ 
+@@ -79,7 +75,7 @@ static int cxl_switch_port_probe(struct cxl_port *port)
+ 		return PTR_ERR(cxlhdm);
+ 	}
+ 
+-	if (nr_dports == 1) {
++	if (rc == 1) {
+ 		dev_dbg(&port->dev, "Fallback to passthrough decoder\n");
+ 		return devm_cxl_add_passthrough_decoder(port);
+ 	}
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+index b9441ab457ea7..587879f3ac2e6 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+@@ -2371,6 +2371,10 @@ int amdgpu_vm_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
+ 	struct amdgpu_fpriv *fpriv = filp->driver_priv;
+ 	int r;
+ 
++	/* No valid flags defined yet */
++	if (args->in.flags)
++		return -EINVAL;
++
+ 	switch (args->in.op) {
+ 	case AMDGPU_VM_OP_RESERVE_VMID:
+ 		/* We only have requirement to reserve vmid from gfxhub */
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index eab53d6317c9f..9ec0a343efadb 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -400,6 +400,14 @@ bool dc_stream_adjust_vmin_vmax(struct dc *dc,
+ {
+ 	int i;
+ 
++	/*
++	 * Don't adjust DRR while there's bandwidth optimizations pending to
++	 * avoid conflicting with firmware updates.
++	 */
++	if (dc->ctx->dce_version > DCE_VERSION_MAX)
++		if (dc->optimized_required || dc->wm_optimized_required)
++			return false;
++
+ 	stream->adjust.v_total_max = adjust->v_total_max;
+ 	stream->adjust.v_total_mid = adjust->v_total_mid;
+ 	stream->adjust.v_total_mid_frame_num = adjust->v_total_mid_frame_num;
+@@ -2201,27 +2209,33 @@ void dc_post_update_surfaces_to_stream(struct dc *dc)
+ 
+ 	post_surface_trace(dc);
+ 
+-	if (dc->ctx->dce_version >= DCE_VERSION_MAX)
+-		TRACE_DCN_CLOCK_STATE(&context->bw_ctx.bw.dcn.clk);
+-	else
++	/*
++	 * Only relevant for DCN behavior where we can guarantee the optimization
++	 * is safe to apply - retain the legacy behavior for DCE.
++	 */
++
++	if (dc->ctx->dce_version < DCE_VERSION_MAX)
+ 		TRACE_DCE_CLOCK_STATE(&context->bw_ctx.bw.dce);
++	else {
++		TRACE_DCN_CLOCK_STATE(&context->bw_ctx.bw.dcn.clk);
+ 
+-	if (is_flip_pending_in_pipes(dc, context))
+-		return;
++		if (is_flip_pending_in_pipes(dc, context))
++			return;
+ 
+-	for (i = 0; i < dc->res_pool->pipe_count; i++)
+-		if (context->res_ctx.pipe_ctx[i].stream == NULL ||
+-		    context->res_ctx.pipe_ctx[i].plane_state == NULL) {
+-			context->res_ctx.pipe_ctx[i].pipe_idx = i;
+-			dc->hwss.disable_plane(dc, &context->res_ctx.pipe_ctx[i]);
+-		}
++		for (i = 0; i < dc->res_pool->pipe_count; i++)
++			if (context->res_ctx.pipe_ctx[i].stream == NULL ||
++					context->res_ctx.pipe_ctx[i].plane_state == NULL) {
++				context->res_ctx.pipe_ctx[i].pipe_idx = i;
++				dc->hwss.disable_plane(dc, &context->res_ctx.pipe_ctx[i]);
++			}
+ 
+-	process_deferred_updates(dc);
++		process_deferred_updates(dc);
+ 
+-	dc->hwss.optimize_bandwidth(dc, context);
++		dc->hwss.optimize_bandwidth(dc, context);
+ 
+-	if (dc->debug.enable_double_buffered_dsc_pg_support)
+-		dc->hwss.update_dsc_pg(dc, context, true);
++		if (dc->debug.enable_double_buffered_dsc_pg_support)
++			dc->hwss.update_dsc_pg(dc, context, true);
++	}
+ 
+ 	dc->optimized_required = false;
+ 	dc->wm_optimized_required = false;
+@@ -4203,12 +4217,9 @@ void dc_commit_updates_for_stream(struct dc *dc,
+ 			if (new_pipe->plane_state && new_pipe->plane_state != old_pipe->plane_state)
+ 				new_pipe->plane_state->force_full_update = true;
+ 		}
+-	} else if (update_type == UPDATE_TYPE_FAST && dc_ctx->dce_version >= DCE_VERSION_MAX) {
++	} else if (update_type == UPDATE_TYPE_FAST) {
+ 		/*
+ 		 * Previous frame finished and HW is ready for optimization.
+-		 *
+-		 * Only relevant for DCN behavior where we can guarantee the optimization
+-		 * is safe to apply - retain the legacy behavior for DCE.
+ 		 */
+ 		dc_post_update_surfaces_to_stream(dc);
+ 	}
+diff --git a/drivers/md/dm-ioctl.c b/drivers/md/dm-ioctl.c
+index 7d5c9c582ed2d..0d2fa7f86a544 100644
+--- a/drivers/md/dm-ioctl.c
++++ b/drivers/md/dm-ioctl.c
+@@ -1830,30 +1830,36 @@ static ioctl_fn lookup_ioctl(unsigned int cmd, int *ioctl_flags)
+  * As well as checking the version compatibility this always
+  * copies the kernel interface version out.
+  */
+-static int check_version(unsigned int cmd, struct dm_ioctl __user *user)
++static int check_version(unsigned int cmd, struct dm_ioctl __user *user,
++			 struct dm_ioctl *kernel_params)
+ {
+-	uint32_t version[3];
+ 	int r = 0;
+ 
+-	if (copy_from_user(version, user->version, sizeof(version)))
++	/* Make certain version is first member of dm_ioctl struct */
++	BUILD_BUG_ON(offsetof(struct dm_ioctl, version) != 0);
++
++	if (copy_from_user(kernel_params->version, user->version, sizeof(kernel_params->version)))
+ 		return -EFAULT;
+ 
+-	if ((version[0] != DM_VERSION_MAJOR) ||
+-	    (version[1] > DM_VERSION_MINOR)) {
++	if ((kernel_params->version[0] != DM_VERSION_MAJOR) ||
++	    (kernel_params->version[1] > DM_VERSION_MINOR)) {
+ 		DMERR("ioctl interface mismatch: kernel(%u.%u.%u), user(%u.%u.%u), cmd(%d)",
+ 		      DM_VERSION_MAJOR, DM_VERSION_MINOR,
+ 		      DM_VERSION_PATCHLEVEL,
+-		      version[0], version[1], version[2], cmd);
++		      kernel_params->version[0],
++		      kernel_params->version[1],
++		      kernel_params->version[2],
++		      cmd);
+ 		r = -EINVAL;
+ 	}
+ 
+ 	/*
+ 	 * Fill in the kernel version.
+ 	 */
+-	version[0] = DM_VERSION_MAJOR;
+-	version[1] = DM_VERSION_MINOR;
+-	version[2] = DM_VERSION_PATCHLEVEL;
+-	if (copy_to_user(user->version, version, sizeof(version)))
++	kernel_params->version[0] = DM_VERSION_MAJOR;
++	kernel_params->version[1] = DM_VERSION_MINOR;
++	kernel_params->version[2] = DM_VERSION_PATCHLEVEL;
++	if (copy_to_user(user->version, kernel_params->version, sizeof(kernel_params->version)))
+ 		return -EFAULT;
+ 
+ 	return r;
+@@ -1879,7 +1885,10 @@ static int copy_params(struct dm_ioctl __user *user, struct dm_ioctl *param_kern
+ 	const size_t minimum_data_size = offsetof(struct dm_ioctl, data);
+ 	unsigned int noio_flag;
+ 
+-	if (copy_from_user(param_kernel, user, minimum_data_size))
++	/* check_version() already copied version from userspace, avoid TOCTOU */
++	if (copy_from_user((char *)param_kernel + sizeof(param_kernel->version),
++			   (char __user *)user + sizeof(param_kernel->version),
++			   minimum_data_size - sizeof(param_kernel->version)))
+ 		return -EFAULT;
+ 
+ 	if (param_kernel->data_size < minimum_data_size) {
+@@ -1991,7 +2000,7 @@ static int ctl_ioctl(struct file *file, uint command, struct dm_ioctl __user *us
+ 	 * Check the interface version passed in.  This also
+ 	 * writes out the kernel's interface version.
+ 	 */
+-	r = check_version(cmd, user);
++	r = check_version(cmd, user, &param_kernel);
+ 	if (r)
+ 		return r;
+ 
+diff --git a/drivers/nubus/proc.c b/drivers/nubus/proc.c
+index 1fd667852271f..cd4bd06cf3094 100644
+--- a/drivers/nubus/proc.c
++++ b/drivers/nubus/proc.c
+@@ -137,6 +137,18 @@ static int nubus_proc_rsrc_show(struct seq_file *m, void *v)
+ 	return 0;
+ }
+ 
++static int nubus_rsrc_proc_open(struct inode *inode, struct file *file)
++{
++	return single_open(file, nubus_proc_rsrc_show, inode);
++}
++
++static const struct proc_ops nubus_rsrc_proc_ops = {
++	.proc_open	= nubus_rsrc_proc_open,
++	.proc_read	= seq_read,
++	.proc_lseek	= seq_lseek,
++	.proc_release	= single_release,
++};
++
+ void nubus_proc_add_rsrc_mem(struct proc_dir_entry *procdir,
+ 			     const struct nubus_dirent *ent,
+ 			     unsigned int size)
+@@ -152,8 +164,8 @@ void nubus_proc_add_rsrc_mem(struct proc_dir_entry *procdir,
+ 		pded = nubus_proc_alloc_pde_data(nubus_dirptr(ent), size);
+ 	else
+ 		pded = NULL;
+-	proc_create_single_data(name, S_IFREG | 0444, procdir,
+-			nubus_proc_rsrc_show, pded);
++	proc_create_data(name, S_IFREG | 0444, procdir,
++			 &nubus_rsrc_proc_ops, pded);
+ }
+ 
+ void nubus_proc_add_rsrc(struct proc_dir_entry *procdir,
+@@ -166,9 +178,9 @@ void nubus_proc_add_rsrc(struct proc_dir_entry *procdir,
+ 		return;
+ 
+ 	snprintf(name, sizeof(name), "%x", ent->type);
+-	proc_create_single_data(name, S_IFREG | 0444, procdir,
+-			nubus_proc_rsrc_show,
+-			nubus_proc_alloc_pde_data(data, 0));
++	proc_create_data(name, S_IFREG | 0444, procdir,
++			 &nubus_rsrc_proc_ops,
++			 nubus_proc_alloc_pde_data(data, 0));
+ }
+ 
+ /*
+diff --git a/drivers/pci/pci-acpi.c b/drivers/pci/pci-acpi.c
+index 052a611081ecd..a05350a4e49cb 100644
+--- a/drivers/pci/pci-acpi.c
++++ b/drivers/pci/pci-acpi.c
+@@ -1043,6 +1043,16 @@ bool acpi_pci_bridge_d3(struct pci_dev *dev)
+ 	return false;
+ }
+ 
++static void acpi_pci_config_space_access(struct pci_dev *dev, bool enable)
++{
++	int val = enable ? ACPI_REG_CONNECT : ACPI_REG_DISCONNECT;
++	int ret = acpi_evaluate_reg(ACPI_HANDLE(&dev->dev),
++				    ACPI_ADR_SPACE_PCI_CONFIG, val);
++	if (ret)
++		pci_dbg(dev, "ACPI _REG %s evaluation failed (%d)\n",
++			enable ? "connect" : "disconnect", ret);
++}
++
+ int acpi_pci_set_power_state(struct pci_dev *dev, pci_power_t state)
+ {
+ 	struct acpi_device *adev = ACPI_COMPANION(&dev->dev);
+@@ -1053,32 +1063,49 @@ int acpi_pci_set_power_state(struct pci_dev *dev, pci_power_t state)
+ 		[PCI_D3hot] = ACPI_STATE_D3_HOT,
+ 		[PCI_D3cold] = ACPI_STATE_D3_COLD,
+ 	};
+-	int error = -EINVAL;
++	int error;
+ 
+ 	/* If the ACPI device has _EJ0, ignore the device */
+ 	if (!adev || acpi_has_method(adev->handle, "_EJ0"))
+ 		return -ENODEV;
+ 
+ 	switch (state) {
+-	case PCI_D3cold:
+-		if (dev_pm_qos_flags(&dev->dev, PM_QOS_FLAG_NO_POWER_OFF) ==
+-				PM_QOS_FLAGS_ALL) {
+-			error = -EBUSY;
+-			break;
+-		}
+-		fallthrough;
+ 	case PCI_D0:
+ 	case PCI_D1:
+ 	case PCI_D2:
+ 	case PCI_D3hot:
+-		error = acpi_device_set_power(adev, state_conv[state]);
++	case PCI_D3cold:
++		break;
++	default:
++		return -EINVAL;
++	}
++
++	if (state == PCI_D3cold) {
++		if (dev_pm_qos_flags(&dev->dev, PM_QOS_FLAG_NO_POWER_OFF) ==
++				PM_QOS_FLAGS_ALL)
++			return -EBUSY;
++
++		/* Notify AML lack of PCI config space availability */
++		acpi_pci_config_space_access(dev, false);
+ 	}
+ 
+-	if (!error)
+-		pci_dbg(dev, "power state changed by ACPI to %s\n",
+-		        acpi_power_state_string(adev->power.state));
++	error = acpi_device_set_power(adev, state_conv[state]);
++	if (error)
++		return error;
+ 
+-	return error;
++	pci_dbg(dev, "power state changed by ACPI to %s\n",
++	        acpi_power_state_string(adev->power.state));
++
++	/*
++	 * Notify AML of PCI config space availability.  Config space is
++	 * accessible in all states except D3cold; the only transitions
++	 * that change availability are transitions to D3cold and from
++	 * D3cold to D0.
++	 */
++	if (state == PCI_D0)
++		acpi_pci_config_space_access(dev, true);
++
++	return 0;
+ }
+ 
+ pci_power_t acpi_pci_get_power_state(struct pci_dev *dev)
+diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c
+index 222a28320e1c2..83851078ce46c 100644
+--- a/fs/nfs/inode.c
++++ b/fs/nfs/inode.c
+@@ -845,7 +845,7 @@ int nfs_getattr(struct mnt_idmap *idmap, const struct path *path,
+ 
+ 	request_mask &= STATX_TYPE | STATX_MODE | STATX_NLINK | STATX_UID |
+ 			STATX_GID | STATX_ATIME | STATX_MTIME | STATX_CTIME |
+-			STATX_INO | STATX_SIZE | STATX_BLOCKS | STATX_BTIME |
++			STATX_INO | STATX_SIZE | STATX_BLOCKS |
+ 			STATX_CHANGE_COOKIE;
+ 
+ 	if ((query_flags & AT_STATX_DONT_SYNC) && !force_sync) {
+diff --git a/include/linux/mm.h b/include/linux/mm.h
+index 53bec6d4297bb..e9cf8dcd4b83d 100644
+--- a/include/linux/mm.h
++++ b/include/linux/mm.h
+@@ -384,7 +384,7 @@ extern unsigned int kobjsize(const void *objp);
+ #endif /* CONFIG_HAVE_ARCH_USERFAULTFD_MINOR */
+ 
+ /* Bits set in the VMA until the stack is in its final location */
+-#define VM_STACK_INCOMPLETE_SETUP	(VM_RAND_READ | VM_SEQ_READ)
++#define VM_STACK_INCOMPLETE_SETUP (VM_RAND_READ | VM_SEQ_READ | VM_STACK_EARLY)
+ 
+ #define TASK_EXEC ((current->personality & READ_IMPLIES_EXEC) ? VM_EXEC : 0)
+ 
+@@ -406,8 +406,10 @@ extern unsigned int kobjsize(const void *objp);
+ 
+ #ifdef CONFIG_STACK_GROWSUP
+ #define VM_STACK	VM_GROWSUP
++#define VM_STACK_EARLY	VM_GROWSDOWN
+ #else
+ #define VM_STACK	VM_GROWSDOWN
++#define VM_STACK_EARLY	0
+ #endif
+ 
+ #define VM_STACK_FLAGS	(VM_STACK | VM_STACK_DEFAULT_FLAGS | VM_ACCOUNT)
+diff --git a/mm/nommu.c b/mm/nommu.c
+index 07a3af6a94ea8..4e0c28644ffa0 100644
+--- a/mm/nommu.c
++++ b/mm/nommu.c
+@@ -637,8 +637,13 @@ EXPORT_SYMBOL(find_vma);
+ struct vm_area_struct *lock_mm_and_find_vma(struct mm_struct *mm,
+ 			unsigned long addr, struct pt_regs *regs)
+ {
++	struct vm_area_struct *vma;
++
+ 	mmap_read_lock(mm);
+-	return vma_lookup(mm, addr);
++	vma = vma_lookup(mm, addr);
++	if (!vma)
++		mmap_read_unlock(mm);
++	return vma;
+ }
+ 
+ /*
+diff --git a/scripts/tags.sh b/scripts/tags.sh
+index ea31640b26715..f6b3c7cd39c7c 100755
+--- a/scripts/tags.sh
++++ b/scripts/tags.sh
+@@ -32,6 +32,13 @@ else
+ 	tree=${srctree}/
+ fi
+ 
++# gtags(1) refuses to index any file outside of its current working dir.
++# If gtags indexing is requested and the build output directory is not
++# the kernel source tree, index all files in absolute-path form.
++if [[ "$1" == "gtags" && -n "${tree}" ]]; then
++	tree=$(realpath "$tree")/
++fi
++
+ # Detect if ALLSOURCE_ARCHS is set. If not, we assume SRCARCH
+ if [ "${ALLSOURCE_ARCHS}" = "" ]; then
+ 	ALLSOURCE_ARCHS=${SRCARCH}
+@@ -131,7 +138,7 @@ docscope()
+ 
+ dogtags()
+ {
+-	all_target_sources | gtags -i -f -
++	all_target_sources | gtags -i -C "${tree:-.}" -f - "$PWD"
+ }
+ 
+ # Basic regular expressions with an optional /kind-spec/ for ctags and
+diff --git a/tools/testing/cxl/Kbuild b/tools/testing/cxl/Kbuild
+index 6f9347ade82cd..fba7bec96acd1 100644
+--- a/tools/testing/cxl/Kbuild
++++ b/tools/testing/cxl/Kbuild
+@@ -6,7 +6,6 @@ ldflags-y += --wrap=acpi_pci_find_root
+ ldflags-y += --wrap=nvdimm_bus_register
+ ldflags-y += --wrap=devm_cxl_port_enumerate_dports
+ ldflags-y += --wrap=devm_cxl_setup_hdm
+-ldflags-y += --wrap=devm_cxl_enable_hdm
+ ldflags-y += --wrap=devm_cxl_add_passthrough_decoder
+ ldflags-y += --wrap=devm_cxl_enumerate_decoders
+ ldflags-y += --wrap=cxl_await_media_ready
+diff --git a/tools/testing/cxl/test/mock.c b/tools/testing/cxl/test/mock.c
+index 652b7dae1feba..c4e53f22e4215 100644
+--- a/tools/testing/cxl/test/mock.c
++++ b/tools/testing/cxl/test/mock.c
+@@ -149,21 +149,6 @@ struct cxl_hdm *__wrap_devm_cxl_setup_hdm(struct cxl_port *port,
+ }
+ EXPORT_SYMBOL_NS_GPL(__wrap_devm_cxl_setup_hdm, CXL);
+ 
+-int __wrap_devm_cxl_enable_hdm(struct cxl_port *port, struct cxl_hdm *cxlhdm)
+-{
+-	int index, rc;
+-	struct cxl_mock_ops *ops = get_cxl_mock_ops(&index);
+-
+-	if (ops && ops->is_mock_port(port->uport))
+-		rc = 0;
+-	else
+-		rc = devm_cxl_enable_hdm(port, cxlhdm);
+-	put_cxl_mock_ops(index);
+-
+-	return rc;
+-}
+-EXPORT_SYMBOL_NS_GPL(__wrap_devm_cxl_enable_hdm, CXL);
+-
+ int __wrap_devm_cxl_add_passthrough_decoder(struct cxl_port *port)
+ {
+ 	int rc, index;


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [gentoo-commits] proj/linux-patches:6.3 commit in: /
@ 2023-07-05 20:37 Mike Pagano
  0 siblings, 0 replies; 23+ messages in thread
From: Mike Pagano @ 2023-07-05 20:37 UTC (permalink / raw
  To: gentoo-commits

commit:     2eca20c9ae88619bff0150d0683c45715b7a97d7
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Jul  5 20:37:19 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Jul  5 20:37:19 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=2eca20c9

Remove redundant patch

Removed:
1800_mm-execve-mark-stack-as-growing-down.patch

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                     |  4 --
 1800_mm-execve-mark-stack-as-growing-down.patch | 82 -------------------------
 2 files changed, 86 deletions(-)

diff --git a/0000_README b/0000_README
index 2dbf6665..db990ac3 100644
--- a/0000_README
+++ b/0000_README
@@ -103,10 +103,6 @@ Patch:  1700_sparc-address-warray-bound-warnings.patch
 From:		https://github.com/KSPP/linux/issues/109
 Desc:		Address -Warray-bounds warnings 
 
-Patch:  1800_mm-execve-mark-stack-as-growing-down.patch
-From:   https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git
-Desc:   execve: always mark stack as growing down during early stack setup
-
 Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
 From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
 Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758

diff --git a/1800_mm-execve-mark-stack-as-growing-down.patch b/1800_mm-execve-mark-stack-as-growing-down.patch
deleted file mode 100644
index f3da01d8..00000000
--- a/1800_mm-execve-mark-stack-as-growing-down.patch
+++ /dev/null
@@ -1,82 +0,0 @@
-From 53a70ffa22715ab23903ef9fa4f67a21ce10a759 Mon Sep 17 00:00:00 2001
-From: Linus Torvalds <torvalds@linux-foundation.org>
-Date: Sun, 2 Jul 2023 23:20:17 -0700
-Subject: execve: always mark stack as growing down during early stack setup
-
-commit f66066bc5136f25e36a2daff4896c768f18c211e upstream.
-
-While our user stacks can grow either down (all common architectures) or
-up (parisc and the ia64 register stack), the initial stack setup when we
-copy the argument and environment strings to the new stack at execve()
-time is always done by extending the stack downwards.
-
-But it turns out that in commit 8d7071af8907 ("mm: always expand the
-stack with the mmap write lock held"), as part of making the stack
-growing code more robust, 'expand_downwards()' was now made to actually
-check the vma flags:
-
-	if (!(vma->vm_flags & VM_GROWSDOWN))
-		return -EFAULT;
-
-and that meant that this execve-time stack expansion started failing on
-parisc, because on that architecture, the stack flags do not contain the
-VM_GROWSDOWN bit.
-
-At the same time the new check in expand_downwards() is clearly correct,
-and simplified the callers, so let's not remove it.
-
-The solution is instead to just codify the fact that yes, during
-execve(), the stack grows down.  This not only matches reality, it ends
-up being particularly simple: we already have special execve-time flags
-for the stack (VM_STACK_INCOMPLETE_SETUP) and use those flags to avoid
-page migration during this setup time (see vma_is_temporary_stack() and
-invalid_migration_vma()).
-
-So just add VM_GROWSDOWN to that set of temporary flags, and now our
-stack flags automatically match reality, and the parisc stack expansion
-works again.
-
-Note that the VM_STACK_INCOMPLETE_SETUP bits will be cleared when the
-stack is finalized, so we only add the extra VM_GROWSDOWN bit on
-CONFIG_STACK_GROWSUP architectures (ie parisc) rather than adding it in
-general.
-
-Link: https://lore.kernel.org/all/612eaa53-6904-6e16-67fc-394f4faa0e16@bell.net/
-Link: https://lore.kernel.org/all/5fd98a09-4792-1433-752d-029ae3545168@gmx.de/
-Fixes: 8d7071af8907 ("mm: always expand the stack with the mmap write lock held")
-Reported-by: John David Anglin <dave.anglin@bell.net>
-Reported-and-tested-by: Helge Deller <deller@gmx.de>
-Reported-and-tested-by: Guenter Roeck <linux@roeck-us.net>
-Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
----
- include/linux/mm.h | 4 +++-
- 1 file changed, 3 insertions(+), 1 deletion(-)
-
-diff --git a/include/linux/mm.h b/include/linux/mm.h
-index 53bec6d4297bb..e9cf8dcd4b83d 100644
---- a/include/linux/mm.h
-+++ b/include/linux/mm.h
-@@ -384,7 +384,7 @@ extern unsigned int kobjsize(const void *objp);
- #endif /* CONFIG_HAVE_ARCH_USERFAULTFD_MINOR */
- 
- /* Bits set in the VMA until the stack is in its final location */
--#define VM_STACK_INCOMPLETE_SETUP	(VM_RAND_READ | VM_SEQ_READ)
-+#define VM_STACK_INCOMPLETE_SETUP (VM_RAND_READ | VM_SEQ_READ | VM_STACK_EARLY)
- 
- #define TASK_EXEC ((current->personality & READ_IMPLIES_EXEC) ? VM_EXEC : 0)
- 
-@@ -406,8 +406,10 @@ extern unsigned int kobjsize(const void *objp);
- 
- #ifdef CONFIG_STACK_GROWSUP
- #define VM_STACK	VM_GROWSUP
-+#define VM_STACK_EARLY	VM_GROWSDOWN
- #else
- #define VM_STACK	VM_GROWSDOWN
-+#define VM_STACK_EARLY	0
- #endif
- 
- #define VM_STACK_FLAGS	(VM_STACK | VM_STACK_DEFAULT_FLAGS | VM_ACCOUNT)
--- 
-cgit 
-


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [gentoo-commits] proj/linux-patches:6.3 commit in: /
@ 2023-07-11 18:38 Mike Pagano
  0 siblings, 0 replies; 23+ messages in thread
From: Mike Pagano @ 2023-07-11 18:38 UTC (permalink / raw
  To: gentoo-commits

commit:     eab0d513c7679ac29d6350f6a8e086e9b47d9a27
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Jul 11 18:38:03 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Jul 11 18:38:03 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=eab0d513

Linux patch 6.3.13

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |     4 +
 1012_linux-6.3.13.patch | 18968 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 18972 insertions(+)

diff --git a/0000_README b/0000_README
index db990ac3..a55d4916 100644
--- a/0000_README
+++ b/0000_README
@@ -91,6 +91,10 @@ Patch:  1011_linux-6.3.12.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.3.12
 
+Patch:  1012_linux-6.3.13.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.3.13
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1012_linux-6.3.13.patch b/1012_linux-6.3.13.patch
new file mode 100644
index 00000000..c71951c0
--- /dev/null
+++ b/1012_linux-6.3.13.patch
@@ -0,0 +1,18968 @@
+diff --git a/Documentation/devicetree/bindings/sound/mediatek,mt8188-afe.yaml b/Documentation/devicetree/bindings/sound/mediatek,mt8188-afe.yaml
+index 82ccb32f08f27..9e877f0d19fbb 100644
+--- a/Documentation/devicetree/bindings/sound/mediatek,mt8188-afe.yaml
++++ b/Documentation/devicetree/bindings/sound/mediatek,mt8188-afe.yaml
+@@ -63,15 +63,15 @@ properties:
+       - const: apll12_div2
+       - const: apll12_div3
+       - const: apll12_div9
+-      - const: a1sys_hp_sel
+-      - const: aud_intbus_sel
+-      - const: audio_h_sel
+-      - const: audio_local_bus_sel
+-      - const: dptx_m_sel
+-      - const: i2so1_m_sel
+-      - const: i2so2_m_sel
+-      - const: i2si1_m_sel
+-      - const: i2si2_m_sel
++      - const: top_a1sys_hp
++      - const: top_aud_intbus
++      - const: top_audio_h
++      - const: top_audio_local_bus
++      - const: top_dptx
++      - const: top_i2so1
++      - const: top_i2so2
++      - const: top_i2si1
++      - const: top_i2si2
+       - const: adsp_audio_26m
+ 
+   mediatek,etdm-in1-cowork-source:
+@@ -193,15 +193,15 @@ examples:
+                       "apll12_div2",
+                       "apll12_div3",
+                       "apll12_div9",
+-                      "a1sys_hp_sel",
+-                      "aud_intbus_sel",
+-                      "audio_h_sel",
+-                      "audio_local_bus_sel",
+-                      "dptx_m_sel",
+-                      "i2so1_m_sel",
+-                      "i2so2_m_sel",
+-                      "i2si1_m_sel",
+-                      "i2si2_m_sel",
++                      "top_a1sys_hp",
++                      "top_aud_intbus",
++                      "top_audio_h",
++                      "top_audio_local_bus",
++                      "top_dptx",
++                      "top_i2so1",
++                      "top_i2so2",
++                      "top_i2si1",
++                      "top_i2si2",
+                       "adsp_audio_26m";
+     };
+ 
+diff --git a/Makefile b/Makefile
+index 7b6c66b7b0041..df591a8efd8a6 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 3
+-SUBLEVEL = 12
++SUBLEVEL = 13
+ EXTRAVERSION =
+ NAME = Hurr durr I'ma ninja sloth
+ 
+diff --git a/arch/arc/include/asm/linkage.h b/arch/arc/include/asm/linkage.h
+index c9434ff3aa4ce..8a3fb71e9cfad 100644
+--- a/arch/arc/include/asm/linkage.h
++++ b/arch/arc/include/asm/linkage.h
+@@ -8,6 +8,10 @@
+ 
+ #include <asm/dwarf.h>
+ 
++#define ASM_NL		 `	/* use '`' to mark new line in macro */
++#define __ALIGN		.align 4
++#define __ALIGN_STR	__stringify(__ALIGN)
++
+ #ifdef __ASSEMBLY__
+ 
+ .macro ST2 e, o, off
+@@ -28,10 +32,6 @@
+ #endif
+ .endm
+ 
+-#define ASM_NL		 `	/* use '`' to mark new line in macro */
+-#define __ALIGN		.align 4
+-#define __ALIGN_STR	__stringify(__ALIGN)
+-
+ /* annotation for data we want in DCCM - if enabled in .config */
+ .macro ARCFP_DATA nm
+ #ifdef CONFIG_ARC_HAS_DCCM
+diff --git a/arch/arm/boot/dts/bcm53015-meraki-mr26.dts b/arch/arm/boot/dts/bcm53015-meraki-mr26.dts
+index 14f58033efeb9..ca2266b936ee2 100644
+--- a/arch/arm/boot/dts/bcm53015-meraki-mr26.dts
++++ b/arch/arm/boot/dts/bcm53015-meraki-mr26.dts
+@@ -128,7 +128,7 @@
+ 
+ 			fixed-link {
+ 				speed = <1000>;
+-				duplex-full;
++				full-duplex;
+ 			};
+ 		};
+ 	};
+diff --git a/arch/arm/boot/dts/bcm53016-meraki-mr32.dts b/arch/arm/boot/dts/bcm53016-meraki-mr32.dts
+index 46c2c93b01d88..a34e1746a6c59 100644
+--- a/arch/arm/boot/dts/bcm53016-meraki-mr32.dts
++++ b/arch/arm/boot/dts/bcm53016-meraki-mr32.dts
+@@ -187,7 +187,7 @@
+ 
+ 			fixed-link {
+ 				speed = <1000>;
+-				duplex-full;
++				full-duplex;
+ 			};
+ 		};
+ 	};
+diff --git a/arch/arm/boot/dts/bcm5301x.dtsi b/arch/arm/boot/dts/bcm5301x.dtsi
+index 5fc1b847f4aa5..787a0dd8216b7 100644
+--- a/arch/arm/boot/dts/bcm5301x.dtsi
++++ b/arch/arm/boot/dts/bcm5301x.dtsi
+@@ -542,7 +542,6 @@
+ 				  "spi_lr_session_done",
+ 				  "spi_lr_overread";
+ 		clocks = <&iprocmed>;
+-		clock-names = "iprocmed";
+ 		num-cs = <2>;
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+diff --git a/arch/arm/boot/dts/iwg20d-q7-common.dtsi b/arch/arm/boot/dts/iwg20d-q7-common.dtsi
+index 03caea6fc6ffa..4351c5a02fa59 100644
+--- a/arch/arm/boot/dts/iwg20d-q7-common.dtsi
++++ b/arch/arm/boot/dts/iwg20d-q7-common.dtsi
+@@ -49,7 +49,7 @@
+ 	lcd_backlight: backlight {
+ 		compatible = "pwm-backlight";
+ 
+-		pwms = <&pwm3 0 5000000 0>;
++		pwms = <&pwm3 0 5000000>;
+ 		brightness-levels = <0 4 8 16 32 64 128 255>;
+ 		default-brightness-level = <7>;
+ 		enable-gpios = <&gpio5 14 GPIO_ACTIVE_HIGH>;
+diff --git a/arch/arm/boot/dts/lan966x-kontron-kswitch-d10-mmt.dtsi b/arch/arm/boot/dts/lan966x-kontron-kswitch-d10-mmt.dtsi
+index 0097e72e3fb22..f4df4cc1dfa5e 100644
+--- a/arch/arm/boot/dts/lan966x-kontron-kswitch-d10-mmt.dtsi
++++ b/arch/arm/boot/dts/lan966x-kontron-kswitch-d10-mmt.dtsi
+@@ -18,6 +18,8 @@
+ 
+ 	gpio-restart {
+ 		compatible = "gpio-restart";
++		pinctrl-0 = <&reset_pins>;
++		pinctrl-names = "default";
+ 		gpios = <&gpio 56 GPIO_ACTIVE_LOW>;
+ 		priority = <200>;
+ 	};
+@@ -39,7 +41,7 @@
+ 	status = "okay";
+ 
+ 	spi3: spi@400 {
+-		pinctrl-0 = <&fc3_b_pins>;
++		pinctrl-0 = <&fc3_b_pins>, <&spi3_cs_pins>;
+ 		pinctrl-names = "default";
+ 		status = "okay";
+ 		cs-gpios = <&gpio 46 GPIO_ACTIVE_LOW>;
+@@ -59,6 +61,12 @@
+ 		function = "miim_c";
+ 	};
+ 
++	reset_pins: reset-pins {
++		/* SYS_RST# */
++		pins = "GPIO_56";
++		function = "gpio";
++	};
++
+ 	sgpio_a_pins: sgpio-a-pins {
+ 		/* SCK, D0, D1 */
+ 		pins = "GPIO_32", "GPIO_33", "GPIO_34";
+@@ -71,6 +79,12 @@
+ 		function = "sgpio_b";
+ 	};
+ 
++	spi3_cs_pins: spi3-cs-pins {
++		/* CS# */
++		pins = "GPIO_46";
++		function = "gpio";
++	};
++
+ 	usart0_pins: usart0-pins {
+ 		/* RXD, TXD */
+ 		pins = "GPIO_25", "GPIO_26";
+diff --git a/arch/arm/boot/dts/meson8.dtsi b/arch/arm/boot/dts/meson8.dtsi
+index 21eb59041a7d9..8432efd48f610 100644
+--- a/arch/arm/boot/dts/meson8.dtsi
++++ b/arch/arm/boot/dts/meson8.dtsi
+@@ -752,13 +752,13 @@
+ 
+ &uart_B {
+ 	compatible = "amlogic,meson8-uart";
+-	clocks = <&xtal>, <&clkc CLKID_UART0>, <&clkc CLKID_CLK81>;
++	clocks = <&xtal>, <&clkc CLKID_UART1>, <&clkc CLKID_CLK81>;
+ 	clock-names = "xtal", "pclk", "baud";
+ };
+ 
+ &uart_C {
+ 	compatible = "amlogic,meson8-uart";
+-	clocks = <&xtal>, <&clkc CLKID_UART0>, <&clkc CLKID_CLK81>;
++	clocks = <&xtal>, <&clkc CLKID_UART2>, <&clkc CLKID_CLK81>;
+ 	clock-names = "xtal", "pclk", "baud";
+ };
+ 
+diff --git a/arch/arm/boot/dts/meson8b.dtsi b/arch/arm/boot/dts/meson8b.dtsi
+index d5a3fe21e8e7e..25f7c985f9ea1 100644
+--- a/arch/arm/boot/dts/meson8b.dtsi
++++ b/arch/arm/boot/dts/meson8b.dtsi
+@@ -740,13 +740,13 @@
+ 
+ &uart_B {
+ 	compatible = "amlogic,meson8b-uart";
+-	clocks = <&xtal>, <&clkc CLKID_UART0>, <&clkc CLKID_CLK81>;
++	clocks = <&xtal>, <&clkc CLKID_UART1>, <&clkc CLKID_CLK81>;
+ 	clock-names = "xtal", "pclk", "baud";
+ };
+ 
+ &uart_C {
+ 	compatible = "amlogic,meson8b-uart";
+-	clocks = <&xtal>, <&clkc CLKID_UART0>, <&clkc CLKID_CLK81>;
++	clocks = <&xtal>, <&clkc CLKID_UART2>, <&clkc CLKID_CLK81>;
+ 	clock-names = "xtal", "pclk", "baud";
+ };
+ 
+diff --git a/arch/arm/boot/dts/omap3-gta04a5one.dts b/arch/arm/boot/dts/omap3-gta04a5one.dts
+index 9db9fe67cd63b..95df45cc70c09 100644
+--- a/arch/arm/boot/dts/omap3-gta04a5one.dts
++++ b/arch/arm/boot/dts/omap3-gta04a5one.dts
+@@ -5,9 +5,11 @@
+ 
+ #include "omap3-gta04a5.dts"
+ 
+-&omap3_pmx_core {
++/ {
+ 	model = "Goldelico GTA04A5/Letux 2804 with OneNAND";
++};
+ 
++&omap3_pmx_core {
+ 	gpmc_pins: pinmux_gpmc_pins {
+ 		pinctrl-single,pins = <
+ 
+diff --git a/arch/arm/boot/dts/qcom-apq8074-dragonboard.dts b/arch/arm/boot/dts/qcom-apq8074-dragonboard.dts
+index 1345df7cbd002..6b047c6793707 100644
+--- a/arch/arm/boot/dts/qcom-apq8074-dragonboard.dts
++++ b/arch/arm/boot/dts/qcom-apq8074-dragonboard.dts
+@@ -23,6 +23,10 @@
+ 	status = "okay";
+ };
+ 
++&blsp2_dma {
++	qcom,controlled-remotely;
++};
++
+ &blsp2_i2c5 {
+ 	status = "okay";
+ 	clock-frequency = <200000>;
+diff --git a/arch/arm/boot/dts/qcom-msm8974.dtsi b/arch/arm/boot/dts/qcom-msm8974.dtsi
+index 834ad95515b17..1c3d36701b8e5 100644
+--- a/arch/arm/boot/dts/qcom-msm8974.dtsi
++++ b/arch/arm/boot/dts/qcom-msm8974.dtsi
+@@ -300,7 +300,7 @@
+ 			qcom,ipc = <&apcs 8 0>;
+ 			qcom,smd-edge = <15>;
+ 
+-			rpm_requests: rpm_requests {
++			rpm_requests: rpm-requests {
+ 				compatible = "qcom,rpm-msm8974";
+ 				qcom,smd-channels = "rpm_requests";
+ 
+diff --git a/arch/arm/boot/dts/stm32mp15xx-dhcom-pdk2.dtsi b/arch/arm/boot/dts/stm32mp15xx-dhcom-pdk2.dtsi
+index 4709677151aac..46b87a27d8b37 100644
+--- a/arch/arm/boot/dts/stm32mp15xx-dhcom-pdk2.dtsi
++++ b/arch/arm/boot/dts/stm32mp15xx-dhcom-pdk2.dtsi
+@@ -137,10 +137,13 @@
+ 
+ 	sound {
+ 		compatible = "audio-graph-card";
+-		routing =
+-			"MIC_IN", "Capture",
+-			"Capture", "Mic Bias",
+-			"Playback", "HP_OUT";
++		widgets = "Headphone", "Headphone Jack",
++			  "Line", "Line In Jack",
++			  "Microphone", "Microphone Jack";
++		routing = "Headphone Jack", "HP_OUT",
++			  "LINE_IN", "Line In Jack",
++			  "MIC_IN", "Microphone Jack",
++			  "Microphone Jack", "Mic Bias";
+ 		dais = <&sai2a_port &sai2b_port>;
+ 		status = "okay";
+ 	};
+diff --git a/arch/arm/boot/dts/stm32mp15xx-dhcor-avenger96.dtsi b/arch/arm/boot/dts/stm32mp15xx-dhcor-avenger96.dtsi
+index 50af4a27d6be4..7d5d6d4360385 100644
+--- a/arch/arm/boot/dts/stm32mp15xx-dhcor-avenger96.dtsi
++++ b/arch/arm/boot/dts/stm32mp15xx-dhcor-avenger96.dtsi
+@@ -87,7 +87,7 @@
+ 
+ 	sound {
+ 		compatible = "audio-graph-card";
+-		label = "STM32MP1-AV96-HDMI";
++		label = "STM32-AV96-HDMI";
+ 		dais = <&sai2a_port>;
+ 		status = "okay";
+ 	};
+@@ -321,6 +321,12 @@
+ 			};
+ 		};
+ 	};
++
++	dh_mac_eeprom: eeprom@53 {
++		compatible = "atmel,24c02";
++		reg = <0x53>;
++		pagesize = <16>;
++	};
+ };
+ 
+ &ltdc {
+diff --git a/arch/arm/boot/dts/stm32mp15xx-dhcor-drc-compact.dtsi b/arch/arm/boot/dts/stm32mp15xx-dhcor-drc-compact.dtsi
+index c32c160f97f20..39af79dc654cc 100644
+--- a/arch/arm/boot/dts/stm32mp15xx-dhcor-drc-compact.dtsi
++++ b/arch/arm/boot/dts/stm32mp15xx-dhcor-drc-compact.dtsi
+@@ -192,6 +192,12 @@
+ 		reg = <0x50>;
+ 		pagesize = <16>;
+ 	};
++
++	dh_mac_eeprom: eeprom@53 {
++		compatible = "atmel,24c02";
++		reg = <0x53>;
++		pagesize = <16>;
++	};
+ };
+ 
+ &sdmmc1 {	/* MicroSD */
+diff --git a/arch/arm/boot/dts/stm32mp15xx-dhcor-som.dtsi b/arch/arm/boot/dts/stm32mp15xx-dhcor-som.dtsi
+index bb40fb46da81d..bba19f21e5277 100644
+--- a/arch/arm/boot/dts/stm32mp15xx-dhcor-som.dtsi
++++ b/arch/arm/boot/dts/stm32mp15xx-dhcor-som.dtsi
+@@ -213,12 +213,6 @@
+ 			status = "disabled";
+ 		};
+ 	};
+-
+-	eeprom@53 {
+-		compatible = "atmel,24c02";
+-		reg = <0x53>;
+-		pagesize = <16>;
+-	};
+ };
+ 
+ &ipcc {
+diff --git a/arch/arm/boot/dts/stm32mp15xx-dhcor-testbench.dtsi b/arch/arm/boot/dts/stm32mp15xx-dhcor-testbench.dtsi
+index 5fdb74b652aca..faed31b6d84a1 100644
+--- a/arch/arm/boot/dts/stm32mp15xx-dhcor-testbench.dtsi
++++ b/arch/arm/boot/dts/stm32mp15xx-dhcor-testbench.dtsi
+@@ -90,6 +90,14 @@
+ 	};
+ };
+ 
++&i2c4 {
++	dh_mac_eeprom: eeprom@53 {
++		compatible = "atmel,24c02";
++		reg = <0x53>;
++		pagesize = <16>;
++	};
++};
++
+ &sdmmc1 {
+ 	pinctrl-names = "default", "opendrain", "sleep";
+ 	pinctrl-0 = <&sdmmc1_b4_pins_a &sdmmc1_dir_pins_b>;
+diff --git a/arch/arm/boot/dts/stm32mp15xx-dkx.dtsi b/arch/arm/boot/dts/stm32mp15xx-dkx.dtsi
+index 11370ae0d868b..030b7ace63f1e 100644
+--- a/arch/arm/boot/dts/stm32mp15xx-dkx.dtsi
++++ b/arch/arm/boot/dts/stm32mp15xx-dkx.dtsi
+@@ -438,7 +438,7 @@
+ 	i2s2_port: port {
+ 		i2s2_endpoint: endpoint {
+ 			remote-endpoint = <&sii9022_tx_endpoint>;
+-			format = "i2s";
++			dai-format = "i2s";
+ 			mclk-fs = <256>;
+ 		};
+ 	};
+diff --git a/arch/arm/include/asm/assembler.h b/arch/arm/include/asm/assembler.h
+index 505a306e0271a..aebe2c8f6a686 100644
+--- a/arch/arm/include/asm/assembler.h
++++ b/arch/arm/include/asm/assembler.h
+@@ -394,6 +394,23 @@ ALT_UP_B(.L0_\@)
+ #endif
+ 	.endm
+ 
++/*
++ * Raw SMP data memory barrier
++ */
++	.macro	__smp_dmb mode
++#if __LINUX_ARM_ARCH__ >= 7
++	.ifeqs "\mode","arm"
++	dmb	ish
++	.else
++	W(dmb)	ish
++	.endif
++#elif __LINUX_ARM_ARCH__ == 6
++	mcr	p15, 0, r0, c7, c10, 5	@ dmb
++#else
++	.error "Incompatible SMP platform"
++#endif
++	.endm
++
+ #if defined(CONFIG_CPU_V7M)
+ 	/*
+ 	 * setmode is used to assert to be in svc mode during boot. For v7-M
+diff --git a/arch/arm/include/asm/sync_bitops.h b/arch/arm/include/asm/sync_bitops.h
+index 6f5d627c44a3c..f46b3c570f92e 100644
+--- a/arch/arm/include/asm/sync_bitops.h
++++ b/arch/arm/include/asm/sync_bitops.h
+@@ -14,14 +14,35 @@
+  * ops which are SMP safe even on a UP kernel.
+  */
+ 
++/*
++ * Unordered
++ */
++
+ #define sync_set_bit(nr, p)		_set_bit(nr, p)
+ #define sync_clear_bit(nr, p)		_clear_bit(nr, p)
+ #define sync_change_bit(nr, p)		_change_bit(nr, p)
+-#define sync_test_and_set_bit(nr, p)	_test_and_set_bit(nr, p)
+-#define sync_test_and_clear_bit(nr, p)	_test_and_clear_bit(nr, p)
+-#define sync_test_and_change_bit(nr, p)	_test_and_change_bit(nr, p)
+ #define sync_test_bit(nr, addr)		test_bit(nr, addr)
+-#define arch_sync_cmpxchg		arch_cmpxchg
+ 
++/*
++ * Fully ordered
++ */
++
++int _sync_test_and_set_bit(int nr, volatile unsigned long * p);
++#define sync_test_and_set_bit(nr, p)	_sync_test_and_set_bit(nr, p)
++
++int _sync_test_and_clear_bit(int nr, volatile unsigned long * p);
++#define sync_test_and_clear_bit(nr, p)	_sync_test_and_clear_bit(nr, p)
++
++int _sync_test_and_change_bit(int nr, volatile unsigned long * p);
++#define sync_test_and_change_bit(nr, p)	_sync_test_and_change_bit(nr, p)
++
++#define arch_sync_cmpxchg(ptr, old, new)				\
++({									\
++	__typeof__(*(ptr)) __ret;					\
++	__smp_mb__before_atomic();					\
++	__ret = arch_cmpxchg_relaxed((ptr), (old), (new));		\
++	__smp_mb__after_atomic();					\
++	__ret;								\
++})
+ 
+ #endif
+diff --git a/arch/arm/lib/bitops.h b/arch/arm/lib/bitops.h
+index 95bd359912889..f069d1b2318e6 100644
+--- a/arch/arm/lib/bitops.h
++++ b/arch/arm/lib/bitops.h
+@@ -28,7 +28,7 @@ UNWIND(	.fnend		)
+ ENDPROC(\name		)
+ 	.endm
+ 
+-	.macro	testop, name, instr, store
++	.macro	__testop, name, instr, store, barrier
+ ENTRY(	\name		)
+ UNWIND(	.fnstart	)
+ 	ands	ip, r1, #3
+@@ -38,7 +38,7 @@ UNWIND(	.fnstart	)
+ 	mov	r0, r0, lsr #5
+ 	add	r1, r1, r0, lsl #2	@ Get word offset
+ 	mov	r3, r2, lsl r3		@ create mask
+-	smp_dmb
++	\barrier
+ #if __LINUX_ARM_ARCH__ >= 7 && defined(CONFIG_SMP)
+ 	.arch_extension	mp
+ 	ALT_SMP(W(pldw)	[r1])
+@@ -50,13 +50,21 @@ UNWIND(	.fnstart	)
+ 	strex	ip, r2, [r1]
+ 	cmp	ip, #0
+ 	bne	1b
+-	smp_dmb
++	\barrier
+ 	cmp	r0, #0
+ 	movne	r0, #1
+ 2:	bx	lr
+ UNWIND(	.fnend		)
+ ENDPROC(\name		)
+ 	.endm
++
++	.macro	testop, name, instr, store
++	__testop \name, \instr, \store, smp_dmb
++	.endm
++
++	.macro	sync_testop, name, instr, store
++	__testop \name, \instr, \store, __smp_dmb
++	.endm
+ #else
+ 	.macro	bitop, name, instr
+ ENTRY(	\name		)
+diff --git a/arch/arm/lib/testchangebit.S b/arch/arm/lib/testchangebit.S
+index 4ebecc67e6e04..f13fe9bc2399a 100644
+--- a/arch/arm/lib/testchangebit.S
++++ b/arch/arm/lib/testchangebit.S
+@@ -10,3 +10,7 @@
+                 .text
+ 
+ testop	_test_and_change_bit, eor, str
++
++#if __LINUX_ARM_ARCH__ >= 6
++sync_testop	_sync_test_and_change_bit, eor, str
++#endif
+diff --git a/arch/arm/lib/testclearbit.S b/arch/arm/lib/testclearbit.S
+index 009afa0f5b4a7..4d2c5ca620ebf 100644
+--- a/arch/arm/lib/testclearbit.S
++++ b/arch/arm/lib/testclearbit.S
+@@ -10,3 +10,7 @@
+                 .text
+ 
+ testop	_test_and_clear_bit, bicne, strne
++
++#if __LINUX_ARM_ARCH__ >= 6
++sync_testop	_sync_test_and_clear_bit, bicne, strne
++#endif
+diff --git a/arch/arm/lib/testsetbit.S b/arch/arm/lib/testsetbit.S
+index f3192e55acc87..649dbab65d8d0 100644
+--- a/arch/arm/lib/testsetbit.S
++++ b/arch/arm/lib/testsetbit.S
+@@ -10,3 +10,7 @@
+                 .text
+ 
+ testop	_test_and_set_bit, orreq, streq
++
++#if __LINUX_ARM_ARCH__ >= 6
++sync_testop	_sync_test_and_set_bit, orreq, streq
++#endif
+diff --git a/arch/arm/mach-ep93xx/timer-ep93xx.c b/arch/arm/mach-ep93xx/timer-ep93xx.c
+index dd4b164d18317..a9efa7bc2fa12 100644
+--- a/arch/arm/mach-ep93xx/timer-ep93xx.c
++++ b/arch/arm/mach-ep93xx/timer-ep93xx.c
+@@ -9,6 +9,7 @@
+ #include <linux/io.h>
+ #include <asm/mach/time.h>
+ #include "soc.h"
++#include "platform.h"
+ 
+ /*************************************************************************
+  * Timer handling for EP93xx
+@@ -60,7 +61,7 @@ static u64 notrace ep93xx_read_sched_clock(void)
+ 	return ret;
+ }
+ 
+-u64 ep93xx_clocksource_read(struct clocksource *c)
++static u64 ep93xx_clocksource_read(struct clocksource *c)
+ {
+ 	u64 ret;
+ 
+diff --git a/arch/arm/mach-omap1/board-osk.c b/arch/arm/mach-omap1/board-osk.c
+index df758c1f92373..a8ca8d427182d 100644
+--- a/arch/arm/mach-omap1/board-osk.c
++++ b/arch/arm/mach-omap1/board-osk.c
+@@ -25,7 +25,8 @@
+  * with this program; if not, write  to the Free Software Foundation, Inc.,
+  * 675 Mass Ave, Cambridge, MA 02139, USA.
+  */
+-#include <linux/gpio.h>
++#include <linux/gpio/consumer.h>
++#include <linux/gpio/driver.h>
+ #include <linux/gpio/machine.h>
+ #include <linux/kernel.h>
+ #include <linux/init.h>
+@@ -64,13 +65,12 @@
+ /* TPS65010 has four GPIOs.  nPG and LED2 can be treated like GPIOs with
+  * alternate pin configurations for hardware-controlled blinking.
+  */
+-#define OSK_TPS_GPIO_BASE		(OMAP_MAX_GPIO_LINES + 16 /* MPUIO */)
+-#	define OSK_TPS_GPIO_USB_PWR_EN	(OSK_TPS_GPIO_BASE + 0)
+-#	define OSK_TPS_GPIO_LED_D3	(OSK_TPS_GPIO_BASE + 1)
+-#	define OSK_TPS_GPIO_LAN_RESET	(OSK_TPS_GPIO_BASE + 2)
+-#	define OSK_TPS_GPIO_DSP_PWR_EN	(OSK_TPS_GPIO_BASE + 3)
+-#	define OSK_TPS_GPIO_LED_D9	(OSK_TPS_GPIO_BASE + 4)
+-#	define OSK_TPS_GPIO_LED_D2	(OSK_TPS_GPIO_BASE + 5)
++#define OSK_TPS_GPIO_USB_PWR_EN	0
++#define OSK_TPS_GPIO_LED_D3	1
++#define OSK_TPS_GPIO_LAN_RESET	2
++#define OSK_TPS_GPIO_DSP_PWR_EN	3
++#define OSK_TPS_GPIO_LED_D9	4
++#define OSK_TPS_GPIO_LED_D2	5
+ 
+ static struct mtd_partition osk_partitions[] = {
+ 	/* bootloader (U-Boot, etc) in first sector */
+@@ -174,11 +174,20 @@ static const struct gpio_led tps_leds[] = {
+ 	/* NOTE:  D9 and D2 have hardware blink support.
+ 	 * Also, D9 requires non-battery power.
+ 	 */
+-	{ .gpio = OSK_TPS_GPIO_LED_D9, .name = "d9",
+-			.default_trigger = "disk-activity", },
+-	{ .gpio = OSK_TPS_GPIO_LED_D2, .name = "d2", },
+-	{ .gpio = OSK_TPS_GPIO_LED_D3, .name = "d3", .active_low = 1,
+-			.default_trigger = "heartbeat", },
++	{ .name = "d9", .default_trigger = "disk-activity", },
++	{ .name = "d2", },
++	{ .name = "d3", .default_trigger = "heartbeat", },
++};
++
++static struct gpiod_lookup_table tps_leds_gpio_table = {
++	.dev_id = "leds-gpio",
++	.table = {
++		/* Use local offsets on TPS65010 */
++		GPIO_LOOKUP_IDX("tps65010", OSK_TPS_GPIO_LED_D9, NULL, 0, GPIO_ACTIVE_HIGH),
++		GPIO_LOOKUP_IDX("tps65010", OSK_TPS_GPIO_LED_D2, NULL, 1, GPIO_ACTIVE_HIGH),
++		GPIO_LOOKUP_IDX("tps65010", OSK_TPS_GPIO_LED_D3, NULL, 2, GPIO_ACTIVE_LOW),
++		{ }
++	},
+ };
+ 
+ static struct gpio_led_platform_data tps_leds_data = {
+@@ -192,29 +201,34 @@ static struct platform_device osk5912_tps_leds = {
+ 	.dev.platform_data	= &tps_leds_data,
+ };
+ 
+-static int osk_tps_setup(struct i2c_client *client, void *context)
++/* The board just hold these GPIOs hogged from setup to teardown */
++static struct gpio_desc *eth_reset;
++static struct gpio_desc *vdd_dsp;
++
++static int osk_tps_setup(struct i2c_client *client, struct gpio_chip *gc)
+ {
++	struct gpio_desc *d;
+ 	if (!IS_BUILTIN(CONFIG_TPS65010))
+ 		return -ENOSYS;
+ 
+ 	/* Set GPIO 1 HIGH to disable VBUS power supply;
+ 	 * OHCI driver powers it up/down as needed.
+ 	 */
+-	gpio_request(OSK_TPS_GPIO_USB_PWR_EN, "n_vbus_en");
+-	gpio_direction_output(OSK_TPS_GPIO_USB_PWR_EN, 1);
++	d = gpiochip_request_own_desc(gc, OSK_TPS_GPIO_USB_PWR_EN, "n_vbus_en",
++				      GPIO_ACTIVE_HIGH, GPIOD_OUT_HIGH);
+ 	/* Free the GPIO again as the driver will request it */
+-	gpio_free(OSK_TPS_GPIO_USB_PWR_EN);
++	gpiochip_free_own_desc(d);
+ 
+ 	/* Set GPIO 2 high so LED D3 is off by default */
+ 	tps65010_set_gpio_out_value(GPIO2, HIGH);
+ 
+ 	/* Set GPIO 3 low to take ethernet out of reset */
+-	gpio_request(OSK_TPS_GPIO_LAN_RESET, "smc_reset");
+-	gpio_direction_output(OSK_TPS_GPIO_LAN_RESET, 0);
++	eth_reset = gpiochip_request_own_desc(gc, OSK_TPS_GPIO_LAN_RESET, "smc_reset",
++					      GPIO_ACTIVE_HIGH, GPIOD_OUT_LOW);
+ 
+ 	/* GPIO4 is VDD_DSP */
+-	gpio_request(OSK_TPS_GPIO_DSP_PWR_EN, "dsp_power");
+-	gpio_direction_output(OSK_TPS_GPIO_DSP_PWR_EN, 1);
++	vdd_dsp = gpiochip_request_own_desc(gc, OSK_TPS_GPIO_DSP_PWR_EN, "dsp_power",
++					    GPIO_ACTIVE_HIGH, GPIOD_OUT_HIGH);
+ 	/* REVISIT if DSP support isn't configured, power it off ... */
+ 
+ 	/* Let LED1 (D9) blink; leds-gpio may override it */
+@@ -232,15 +246,22 @@ static int osk_tps_setup(struct i2c_client *client, void *context)
+ 
+ 	/* register these three LEDs */
+ 	osk5912_tps_leds.dev.parent = &client->dev;
++	gpiod_add_lookup_table(&tps_leds_gpio_table);
+ 	platform_device_register(&osk5912_tps_leds);
+ 
+ 	return 0;
+ }
+ 
++static void osk_tps_teardown(struct i2c_client *client, struct gpio_chip *gc)
++{
++	gpiochip_free_own_desc(eth_reset);
++	gpiochip_free_own_desc(vdd_dsp);
++}
++
+ static struct tps65010_board tps_board = {
+-	.base		= OSK_TPS_GPIO_BASE,
+ 	.outmask	= 0x0f,
+ 	.setup		= osk_tps_setup,
++	.teardown	= osk_tps_teardown,
+ };
+ 
+ static struct i2c_board_info __initdata osk_i2c_board_info[] = {
+@@ -263,11 +284,6 @@ static void __init osk_init_smc91x(void)
+ {
+ 	u32 l;
+ 
+-	if ((gpio_request(0, "smc_irq")) < 0) {
+-		printk("Error requesting gpio 0 for smc91x irq\n");
+-		return;
+-	}
+-
+ 	/* Check EMIFS wait states to fix errors with SMC_GET_PKT_HDR */
+ 	l = omap_readl(EMIFS_CCS(1));
+ 	l |= 0x3;
+@@ -279,10 +295,6 @@ static void __init osk_init_cf(int seg)
+ 	struct resource *res = &osk5912_cf_resources[1];
+ 
+ 	omap_cfg_reg(M7_1610_GPIO62);
+-	if ((gpio_request(62, "cf_irq")) < 0) {
+-		printk("Error requesting gpio 62 for CF irq\n");
+-		return;
+-	}
+ 
+ 	switch (seg) {
+ 	/* NOTE: CS0 could be configured too ... */
+@@ -308,18 +320,17 @@ static void __init osk_init_cf(int seg)
+ 		seg, omap_readl(EMIFS_CCS(seg)), omap_readl(EMIFS_ACS(seg)));
+ 	omap_writel(0x0004a1b3, EMIFS_CCS(seg));	/* synch mode 4 etc */
+ 	omap_writel(0x00000000, EMIFS_ACS(seg));	/* OE hold/setup */
+-
+-	/* the CF I/O IRQ is really active-low */
+-	irq_set_irq_type(gpio_to_irq(62), IRQ_TYPE_EDGE_FALLING);
+ }
+ 
+ static struct gpiod_lookup_table osk_usb_gpio_table = {
+ 	.dev_id = "ohci",
+ 	.table = {
+ 		/* Power GPIO on the I2C-attached TPS65010 */
+-		GPIO_LOOKUP("tps65010", 0, "power", GPIO_ACTIVE_HIGH),
++		GPIO_LOOKUP("tps65010", OSK_TPS_GPIO_USB_PWR_EN, "power",
++			    GPIO_ACTIVE_HIGH),
+ 		GPIO_LOOKUP(OMAP_GPIO_LABEL, 9, "overcurrent",
+ 			    GPIO_ACTIVE_HIGH),
++		{ }
+ 	},
+ };
+ 
+@@ -341,8 +352,25 @@ static struct omap_usb_config osk_usb_config __initdata = {
+ 
+ #define EMIFS_CS3_VAL	(0x88013141)
+ 
++static struct gpiod_lookup_table osk_irq_gpio_table = {
++	.dev_id = NULL,
++	.table = {
++		/* GPIO used for SMC91x IRQ */
++		GPIO_LOOKUP(OMAP_GPIO_LABEL, 0, "smc_irq",
++			    GPIO_ACTIVE_HIGH),
++		/* GPIO used for CF IRQ */
++		GPIO_LOOKUP("gpio-48-63", 14, "cf_irq",
++			    GPIO_ACTIVE_HIGH),
++		/* GPIO used by the TPS65010 chip */
++		GPIO_LOOKUP("mpuio", 1, "tps65010",
++			    GPIO_ACTIVE_HIGH),
++		{ }
++	},
++};
++
+ static void __init osk_init(void)
+ {
++	struct gpio_desc *d;
+ 	u32 l;
+ 
+ 	osk_init_smc91x();
+@@ -359,10 +387,31 @@ static void __init osk_init(void)
+ 
+ 	osk_flash_resource.end = osk_flash_resource.start = omap_cs3_phys();
+ 	osk_flash_resource.end += SZ_32M - 1;
+-	osk5912_smc91x_resources[1].start = gpio_to_irq(0);
+-	osk5912_smc91x_resources[1].end = gpio_to_irq(0);
+-	osk5912_cf_resources[0].start = gpio_to_irq(62);
+-	osk5912_cf_resources[0].end = gpio_to_irq(62);
++
++	/*
++	 * Add the GPIOs to be used as IRQs and immediately look them up
++	 * to be passed as an IRQ resource. This is ugly but should work
++	 * until the day we convert to device tree.
++	 */
++	gpiod_add_lookup_table(&osk_irq_gpio_table);
++
++	d = gpiod_get(NULL, "smc_irq", GPIOD_IN);
++	if (IS_ERR(d)) {
++		pr_err("Unable to get SMC IRQ GPIO descriptor\n");
++	} else {
++		irq_set_irq_type(gpiod_to_irq(d), IRQ_TYPE_EDGE_RISING);
++		osk5912_smc91x_resources[1] = DEFINE_RES_IRQ(gpiod_to_irq(d));
++	}
++
++	d = gpiod_get(NULL, "cf_irq", GPIOD_IN);
++	if (IS_ERR(d)) {
++		pr_err("Unable to get CF IRQ GPIO descriptor\n");
++	} else {
++		/* the CF I/O IRQ is really active-low */
++		irq_set_irq_type(gpiod_to_irq(d), IRQ_TYPE_EDGE_FALLING);
++		osk5912_cf_resources[0] = DEFINE_RES_IRQ(gpiod_to_irq(d));
++	}
++
+ 	platform_add_devices(osk5912_devices, ARRAY_SIZE(osk5912_devices));
+ 
+ 	l = omap_readl(USB_TRANSCEIVER_CTRL);
+@@ -372,13 +421,15 @@ static void __init osk_init(void)
+ 	gpiod_add_lookup_table(&osk_usb_gpio_table);
+ 	omap1_usb_init(&osk_usb_config);
+ 
++	omap_serial_init();
++
+ 	/* irq for tps65010 chip */
+ 	/* bootloader effectively does:  omap_cfg_reg(U19_1610_MPUIO1); */
+-	if (gpio_request(OMAP_MPUIO(1), "tps65010") == 0)
+-		gpio_direction_input(OMAP_MPUIO(1));
+-
+-	omap_serial_init();
+-	osk_i2c_board_info[0].irq = gpio_to_irq(OMAP_MPUIO(1));
++	d = gpiod_get(NULL, "tps65010", GPIOD_IN);
++	if (IS_ERR(d))
++		pr_err("Unable to get TPS65010 IRQ GPIO descriptor\n");
++	else
++		osk_i2c_board_info[0].irq = gpiod_to_irq(d);
+ 	omap_register_i2c_bus(1, 400, osk_i2c_board_info,
+ 			      ARRAY_SIZE(osk_i2c_board_info));
+ }
+diff --git a/arch/arm/mach-omap2/board-generic.c b/arch/arm/mach-omap2/board-generic.c
+index 1610c567a6a3a..10d2f078e4a8e 100644
+--- a/arch/arm/mach-omap2/board-generic.c
++++ b/arch/arm/mach-omap2/board-generic.c
+@@ -13,6 +13,7 @@
+ #include <linux/of_platform.h>
+ #include <linux/irqdomain.h>
+ #include <linux/clocksource.h>
++#include <linux/clockchips.h>
+ 
+ #include <asm/setup.h>
+ #include <asm/mach/arch.h>
+diff --git a/arch/arm/probes/kprobes/checkers-common.c b/arch/arm/probes/kprobes/checkers-common.c
+index 4d720990cf2a3..eba7ac4725c02 100644
+--- a/arch/arm/probes/kprobes/checkers-common.c
++++ b/arch/arm/probes/kprobes/checkers-common.c
+@@ -40,7 +40,7 @@ enum probes_insn checker_stack_use_imm_0xx(probes_opcode_t insn,
+  * Different from other insn uses imm8, the real addressing offset of
+  * STRD in T32 encoding should be imm8 * 4. See ARMARM description.
+  */
+-enum probes_insn checker_stack_use_t32strd(probes_opcode_t insn,
++static enum probes_insn checker_stack_use_t32strd(probes_opcode_t insn,
+ 		struct arch_probes_insn *asi,
+ 		const struct decode_header *h)
+ {
+diff --git a/arch/arm/probes/kprobes/core.c b/arch/arm/probes/kprobes/core.c
+index 9090c3a74dcce..d8238da095df7 100644
+--- a/arch/arm/probes/kprobes/core.c
++++ b/arch/arm/probes/kprobes/core.c
+@@ -233,7 +233,7 @@ singlestep(struct kprobe *p, struct pt_regs *regs, struct kprobe_ctlblk *kcb)
+  * kprobe, and that level is reserved for user kprobe handlers, so we can't
+  * risk encountering a new kprobe in an interrupt handler.
+  */
+-void __kprobes kprobe_handler(struct pt_regs *regs)
++static void __kprobes kprobe_handler(struct pt_regs *regs)
+ {
+ 	struct kprobe *p, *cur;
+ 	struct kprobe_ctlblk *kcb;
+diff --git a/arch/arm/probes/kprobes/opt-arm.c b/arch/arm/probes/kprobes/opt-arm.c
+index dbef34ed933f2..7f65048380ca5 100644
+--- a/arch/arm/probes/kprobes/opt-arm.c
++++ b/arch/arm/probes/kprobes/opt-arm.c
+@@ -145,8 +145,6 @@ __arch_remove_optimized_kprobe(struct optimized_kprobe *op, int dirty)
+ 	}
+ }
+ 
+-extern void kprobe_handler(struct pt_regs *regs);
+-
+ static void
+ optimized_callback(struct optimized_kprobe *op, struct pt_regs *regs)
+ {
+diff --git a/arch/arm/probes/kprobes/test-core.c b/arch/arm/probes/kprobes/test-core.c
+index c562832b86272..171c7076b89f4 100644
+--- a/arch/arm/probes/kprobes/test-core.c
++++ b/arch/arm/probes/kprobes/test-core.c
+@@ -720,7 +720,7 @@ static const char coverage_register_lookup[16] = {
+ 	[REG_TYPE_NOSPPCX]	= COVERAGE_ANY_REG | COVERAGE_SP,
+ };
+ 
+-unsigned coverage_start_registers(const struct decode_header *h)
++static unsigned coverage_start_registers(const struct decode_header *h)
+ {
+ 	unsigned regs = 0;
+ 	int i;
+diff --git a/arch/arm/probes/kprobes/test-core.h b/arch/arm/probes/kprobes/test-core.h
+index 56ad3c0aaeeac..c7297037c1623 100644
+--- a/arch/arm/probes/kprobes/test-core.h
++++ b/arch/arm/probes/kprobes/test-core.h
+@@ -454,3 +454,7 @@ void kprobe_thumb32_test_cases(void);
+ #else
+ void kprobe_arm_test_cases(void);
+ #endif
++
++void __kprobes_test_case_start(void);
++void __kprobes_test_case_end_16(void);
++void __kprobes_test_case_end_32(void);
+diff --git a/arch/arm64/boot/dts/mediatek/mt7986a-bananapi-bpi-r3-nand.dtso b/arch/arm64/boot/dts/mediatek/mt7986a-bananapi-bpi-r3-nand.dtso
+index 15ee8c568f3c3..543c13385d6e3 100644
+--- a/arch/arm64/boot/dts/mediatek/mt7986a-bananapi-bpi-r3-nand.dtso
++++ b/arch/arm64/boot/dts/mediatek/mt7986a-bananapi-bpi-r3-nand.dtso
+@@ -29,13 +29,13 @@
+ 
+ 					partition@0 {
+ 						label = "bl2";
+-						reg = <0x0 0x80000>;
++						reg = <0x0 0x100000>;
+ 						read-only;
+ 					};
+ 
+-					partition@80000 {
++					partition@100000 {
+ 						label = "reserved";
+-						reg = <0x80000 0x300000>;
++						reg = <0x100000 0x280000>;
+ 					};
+ 
+ 					partition@380000 {
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui.dtsi b/arch/arm64/boot/dts/mediatek/mt8183-kukui.dtsi
+index fbe14b13051a6..4836ad55fd4ae 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183-kukui.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui.dtsi
+@@ -292,6 +292,10 @@
+ 	};
+ };
+ 
++&gic {
++	mediatek,broken-save-restore-fw;
++};
++
+ &gpu {
+ 	mali-supply = <&mt6358_vgpu_reg>;
+ 	sram-supply = <&mt6358_vsram_gpu_reg>;
+diff --git a/arch/arm64/boot/dts/mediatek/mt8192.dtsi b/arch/arm64/boot/dts/mediatek/mt8192.dtsi
+index 87b91c8feaf96..a3a2b7de54a7c 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8192.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8192.dtsi
+@@ -70,7 +70,8 @@
+ 			d-cache-line-size = <64>;
+ 			d-cache-sets = <128>;
+ 			next-level-cache = <&l2_0>;
+-			capacity-dmips-mhz = <530>;
++			performance-domains = <&performance 0>;
++			capacity-dmips-mhz = <427>;
+ 		};
+ 
+ 		cpu1: cpu@100 {
+@@ -87,7 +88,8 @@
+ 			d-cache-line-size = <64>;
+ 			d-cache-sets = <128>;
+ 			next-level-cache = <&l2_0>;
+-			capacity-dmips-mhz = <530>;
++			performance-domains = <&performance 0>;
++			capacity-dmips-mhz = <427>;
+ 		};
+ 
+ 		cpu2: cpu@200 {
+@@ -104,7 +106,8 @@
+ 			d-cache-line-size = <64>;
+ 			d-cache-sets = <128>;
+ 			next-level-cache = <&l2_0>;
+-			capacity-dmips-mhz = <530>;
++			performance-domains = <&performance 0>;
++			capacity-dmips-mhz = <427>;
+ 		};
+ 
+ 		cpu3: cpu@300 {
+@@ -121,7 +124,8 @@
+ 			d-cache-line-size = <64>;
+ 			d-cache-sets = <128>;
+ 			next-level-cache = <&l2_0>;
+-			capacity-dmips-mhz = <530>;
++			performance-domains = <&performance 0>;
++			capacity-dmips-mhz = <427>;
+ 		};
+ 
+ 		cpu4: cpu@400 {
+@@ -138,6 +142,7 @@
+ 			d-cache-line-size = <64>;
+ 			d-cache-sets = <256>;
+ 			next-level-cache = <&l2_1>;
++			performance-domains = <&performance 1>;
+ 			capacity-dmips-mhz = <1024>;
+ 		};
+ 
+@@ -155,6 +160,7 @@
+ 			d-cache-line-size = <64>;
+ 			d-cache-sets = <256>;
+ 			next-level-cache = <&l2_1>;
++			performance-domains = <&performance 1>;
+ 			capacity-dmips-mhz = <1024>;
+ 		};
+ 
+@@ -172,6 +178,7 @@
+ 			d-cache-line-size = <64>;
+ 			d-cache-sets = <256>;
+ 			next-level-cache = <&l2_1>;
++			performance-domains = <&performance 1>;
+ 			capacity-dmips-mhz = <1024>;
+ 		};
+ 
+@@ -189,6 +196,7 @@
+ 			d-cache-line-size = <64>;
+ 			d-cache-sets = <256>;
+ 			next-level-cache = <&l2_1>;
++			performance-domains = <&performance 1>;
+ 			capacity-dmips-mhz = <1024>;
+ 		};
+ 
+@@ -318,6 +326,12 @@
+ 		compatible = "simple-bus";
+ 		ranges;
+ 
++		performance: performance-controller@11bc10 {
++			compatible = "mediatek,cpufreq-hw";
++			reg = <0 0x0011bc10 0 0x120>, <0 0x0011bd30 0 0x120>;
++			#performance-domain-cells = <1>;
++		};
++
+ 		gic: interrupt-controller@c000000 {
+ 			compatible = "arm,gic-v3";
+ 			#interrupt-cells = <4>;
+diff --git a/arch/arm64/boot/dts/microchip/sparx5.dtsi b/arch/arm64/boot/dts/microchip/sparx5.dtsi
+index 0367a00a269b3..5eae6e7fd248e 100644
+--- a/arch/arm64/boot/dts/microchip/sparx5.dtsi
++++ b/arch/arm64/boot/dts/microchip/sparx5.dtsi
+@@ -61,7 +61,7 @@
+ 		interrupt-affinity = <&cpu0>, <&cpu1>;
+ 	};
+ 
+-	psci {
++	psci: psci {
+ 		compatible = "arm,psci-0.2";
+ 		method = "smc";
+ 	};
+diff --git a/arch/arm64/boot/dts/microchip/sparx5_pcb_common.dtsi b/arch/arm64/boot/dts/microchip/sparx5_pcb_common.dtsi
+index 9d1a082de3e29..32bb76b3202a0 100644
+--- a/arch/arm64/boot/dts/microchip/sparx5_pcb_common.dtsi
++++ b/arch/arm64/boot/dts/microchip/sparx5_pcb_common.dtsi
+@@ -6,6 +6,18 @@
+ /dts-v1/;
+ #include "sparx5.dtsi"
+ 
++&psci {
++	status = "disabled";
++};
++
++&cpu0 {
++	enable-method = "spin-table";
++};
++
++&cpu1 {
++	enable-method = "spin-table";
++};
++
+ &uart0 {
+ 	status = "okay";
+ };
+diff --git a/arch/arm64/boot/dts/qcom/apq8016-sbc.dts b/arch/arm64/boot/dts/qcom/apq8016-sbc.dts
+index c52d79a55d80c..dbdb8077857ef 100644
+--- a/arch/arm64/boot/dts/qcom/apq8016-sbc.dts
++++ b/arch/arm64/boot/dts/qcom/apq8016-sbc.dts
+@@ -325,12 +325,6 @@
+ 	linux,code = <KEY_VOLUMEDOWN>;
+ };
+ 
+-&pronto {
+-	status = "okay";
+-
+-	firmware-name = "qcom/apq8016/wcnss.mbn";
+-};
+-
+ &sdhc_1 {
+ 	status = "okay";
+ 
+@@ -411,10 +405,19 @@
+ 	qcom,mbhc-vthreshold-high = <75 150 237 450 500>;
+ };
+ 
++&wcnss {
++	status = "okay";
++	firmware-name = "qcom/apq8016/wcnss.mbn";
++};
++
+ &wcnss_ctrl {
+ 	firmware-name = "qcom/apq8016/WCNSS_qcom_wlan_nv_sbc.bin";
+ };
+ 
++&wcnss_iris {
++	compatible = "qcom,wcn3620";
++};
++
+ /* Enable CoreSight */
+ &cti0 { status = "okay"; };
+ &cti1 { status = "okay"; };
+@@ -444,21 +447,21 @@
+ 	vdd_l7-supply = <&pm8916_s4>;
+ 
+ 	s3 {
+-		regulator-min-microvolt = <375000>;
+-		regulator-max-microvolt = <1562000>;
++		regulator-min-microvolt = <1250000>;
++		regulator-max-microvolt = <1350000>;
+ 	};
+ 
+ 	s4 {
+-		regulator-min-microvolt = <1800000>;
+-		regulator-max-microvolt = <1800000>;
++		regulator-min-microvolt = <1850000>;
++		regulator-max-microvolt = <2150000>;
+ 
+ 		regulator-always-on;
+ 		regulator-boot-on;
+ 	};
+ 
+ 	l1 {
+-		regulator-min-microvolt = <375000>;
+-		regulator-max-microvolt = <1525000>;
++		regulator-min-microvolt = <1225000>;
++		regulator-max-microvolt = <1225000>;
+ 	};
+ 
+ 	l2 {
+@@ -467,13 +470,13 @@
+ 	};
+ 
+ 	l4 {
+-		regulator-min-microvolt = <1750000>;
+-		regulator-max-microvolt = <3337000>;
++		regulator-min-microvolt = <2050000>;
++		regulator-max-microvolt = <2050000>;
+ 	};
+ 
+ 	l5 {
+-		regulator-min-microvolt = <1750000>;
+-		regulator-max-microvolt = <3337000>;
++		regulator-min-microvolt = <1800000>;
++		regulator-max-microvolt = <1800000>;
+ 	};
+ 
+ 	l6 {
+@@ -482,60 +485,68 @@
+ 	};
+ 
+ 	l7 {
+-		regulator-min-microvolt = <1750000>;
+-		regulator-max-microvolt = <3337000>;
++		regulator-min-microvolt = <1800000>;
++		regulator-max-microvolt = <1800000>;
+ 	};
+ 
+ 	l8 {
+-		regulator-min-microvolt = <1750000>;
+-		regulator-max-microvolt = <3337000>;
++		regulator-min-microvolt = <2900000>;
++		regulator-max-microvolt = <2900000>;
+ 	};
+ 
+ 	l9 {
+-		regulator-min-microvolt = <1750000>;
+-		regulator-max-microvolt = <3337000>;
++		regulator-min-microvolt = <3300000>;
++		regulator-max-microvolt = <3300000>;
+ 	};
+ 
+ 	l10 {
+-		regulator-min-microvolt = <1750000>;
+-		regulator-max-microvolt = <3337000>;
++		regulator-min-microvolt = <2800000>;
++		regulator-max-microvolt = <2800000>;
+ 	};
+ 
+ 	l11 {
+-		regulator-min-microvolt = <1750000>;
+-		regulator-max-microvolt = <3337000>;
++		regulator-min-microvolt = <2950000>;
++		regulator-max-microvolt = <2950000>;
+ 		regulator-allow-set-load;
+ 		regulator-system-load = <200000>;
+ 	};
+ 
+ 	l12 {
+-		regulator-min-microvolt = <1750000>;
+-		regulator-max-microvolt = <3337000>;
++		regulator-min-microvolt = <1800000>;
++		regulator-max-microvolt = <2950000>;
+ 	};
+ 
+ 	l13 {
+-		regulator-min-microvolt = <1750000>;
+-		regulator-max-microvolt = <3337000>;
++		regulator-min-microvolt = <3075000>;
++		regulator-max-microvolt = <3075000>;
+ 	};
+ 
+ 	l14 {
+-		regulator-min-microvolt = <1750000>;
+-		regulator-max-microvolt = <3337000>;
++		regulator-min-microvolt = <1800000>;
++		regulator-max-microvolt = <3300000>;
+ 	};
+ 
+-	/**
+-	 * 1.8v required on LS expansion
+-	 * for mezzanine boards
++	/*
++	 * The 96Boards specification expects a 1.8V power rail on the low-speed
++	 * expansion connector that is able to provide at least 0.18W / 100 mA.
++	 * L15/L16 are connected in parallel to provide 55 mA each. A minimum load
++	 * must be specified to ensure the regulators are not put in LPM where they
++	 * would only provide 5 mA.
+ 	 */
+ 	l15 {
+-		regulator-min-microvolt = <1750000>;
+-		regulator-max-microvolt = <3337000>;
++		regulator-min-microvolt = <1800000>;
++		regulator-max-microvolt = <1800000>;
++		regulator-system-load = <50000>;
++		regulator-allow-set-load;
+ 		regulator-always-on;
+ 	};
+ 
+ 	l16 {
+-		regulator-min-microvolt = <1750000>;
+-		regulator-max-microvolt = <3337000>;
++		regulator-min-microvolt = <1800000>;
++		regulator-max-microvolt = <1800000>;
++		regulator-system-load = <50000>;
++		regulator-allow-set-load;
++		regulator-always-on;
+ 	};
+ 
+ 	l17 {
+@@ -544,8 +555,8 @@
+ 	};
+ 
+ 	l18 {
+-		regulator-min-microvolt = <1750000>;
+-		regulator-max-microvolt = <3337000>;
++		regulator-min-microvolt = <2700000>;
++		regulator-max-microvolt = <2700000>;
+ 	};
+ };
+ 
+diff --git a/arch/arm64/boot/dts/qcom/apq8096-ifc6640.dts b/arch/arm64/boot/dts/qcom/apq8096-ifc6640.dts
+index 71e0a500599c8..ed2e2f6c6775a 100644
+--- a/arch/arm64/boot/dts/qcom/apq8096-ifc6640.dts
++++ b/arch/arm64/boot/dts/qcom/apq8096-ifc6640.dts
+@@ -26,7 +26,7 @@
+ 
+ 	v1p05: v1p05-regulator {
+ 		compatible = "regulator-fixed";
+-		reglator-name = "v1p05";
++		regulator-name = "v1p05";
+ 		regulator-always-on;
+ 		regulator-boot-on;
+ 
+@@ -38,7 +38,7 @@
+ 
+ 	v12_poe: v12-poe-regulator {
+ 		compatible = "regulator-fixed";
+-		reglator-name = "v12_poe";
++		regulator-name = "v12_poe";
+ 		regulator-always-on;
+ 		regulator-boot-on;
+ 
+diff --git a/arch/arm64/boot/dts/qcom/ipq6018.dtsi b/arch/arm64/boot/dts/qcom/ipq6018.dtsi
+index 9ff4e9d45065b..8ec9e282b412c 100644
+--- a/arch/arm64/boot/dts/qcom/ipq6018.dtsi
++++ b/arch/arm64/boot/dts/qcom/ipq6018.dtsi
+@@ -301,7 +301,7 @@
+ 			status = "disabled";
+ 		};
+ 
+-		prng: qrng@e1000 {
++		prng: qrng@e3000 {
+ 			compatible = "qcom,prng-ee";
+ 			reg = <0x0 0x000e3000 0x0 0x1000>;
+ 			clocks = <&gcc GCC_PRNG_AHB_CLK>;
+diff --git a/arch/arm64/boot/dts/qcom/msm8916-acer-a1-724.dts b/arch/arm64/boot/dts/qcom/msm8916-acer-a1-724.dts
+index ed3fa7b3575b7..13cd9ad167df7 100644
+--- a/arch/arm64/boot/dts/qcom/msm8916-acer-a1-724.dts
++++ b/arch/arm64/boot/dts/qcom/msm8916-acer-a1-724.dts
+@@ -118,10 +118,6 @@
+ 	status = "okay";
+ };
+ 
+-&pronto {
+-	status = "okay";
+-};
+-
+ &sdhc_1 {
+ 	pinctrl-names = "default", "sleep";
+ 	pinctrl-0 = <&sdc1_clk_on &sdc1_cmd_on &sdc1_data_on>;
+@@ -149,6 +145,14 @@
+ 	extcon = <&usb_id>;
+ };
+ 
++&wcnss {
++	status = "okay";
++};
++
++&wcnss_iris {
++	compatible = "qcom,wcn3620";
++};
++
+ &smd_rpm_regulators {
+ 	vdd_l1_l2_l3-supply = <&pm8916_s3>;
+ 	vdd_l4_l5_l6-supply = <&pm8916_s4>;
+diff --git a/arch/arm64/boot/dts/qcom/msm8916-alcatel-idol347.dts b/arch/arm64/boot/dts/qcom/msm8916-alcatel-idol347.dts
+index 701a5585d77e4..fecb69944cfa3 100644
+--- a/arch/arm64/boot/dts/qcom/msm8916-alcatel-idol347.dts
++++ b/arch/arm64/boot/dts/qcom/msm8916-alcatel-idol347.dts
+@@ -160,10 +160,6 @@
+ 	status = "okay";
+ };
+ 
+-&pronto {
+-	status = "okay";
+-};
+-
+ &sdhc_1 {
+ 	status = "okay";
+ 
+@@ -191,6 +187,14 @@
+ 	extcon = <&usb_id>;
+ };
+ 
++&wcnss {
++	status = "okay";
++};
++
++&wcnss_iris {
++	compatible = "qcom,wcn3620";
++};
++
+ &smd_rpm_regulators {
+ 	vdd_l1_l2_l3-supply = <&pm8916_s3>;
+ 	vdd_l4_l5_l6-supply = <&pm8916_s4>;
+diff --git a/arch/arm64/boot/dts/qcom/msm8916-asus-z00l.dts b/arch/arm64/boot/dts/qcom/msm8916-asus-z00l.dts
+index 3618704a53309..91284a1d0966f 100644
+--- a/arch/arm64/boot/dts/qcom/msm8916-asus-z00l.dts
++++ b/arch/arm64/boot/dts/qcom/msm8916-asus-z00l.dts
+@@ -128,10 +128,6 @@
+ 	status = "okay";
+ };
+ 
+-&pronto {
+-	status = "okay";
+-};
+-
+ &sdhc_1 {
+ 	status = "okay";
+ 
+@@ -159,6 +155,14 @@
+ 	extcon = <&usb_id>;
+ };
+ 
++&wcnss {
++	status = "okay";
++};
++
++&wcnss_iris {
++	compatible = "qcom,wcn3620";
++};
++
+ &smd_rpm_regulators {
+ 	vdd_l1_l2_l3-supply = <&pm8916_s3>;
+ 	vdd_l4_l5_l6-supply = <&pm8916_s4>;
+diff --git a/arch/arm64/boot/dts/qcom/msm8916-gplus-fl8005a.dts b/arch/arm64/boot/dts/qcom/msm8916-gplus-fl8005a.dts
+index a0e520edde029..525ec76efeeb7 100644
+--- a/arch/arm64/boot/dts/qcom/msm8916-gplus-fl8005a.dts
++++ b/arch/arm64/boot/dts/qcom/msm8916-gplus-fl8005a.dts
+@@ -118,10 +118,6 @@
+ 	status = "okay";
+ };
+ 
+-&pronto {
+-	status = "okay";
+-};
+-
+ &sdhc_1 {
+ 	pinctrl-0 = <&sdc1_clk_on &sdc1_cmd_on &sdc1_data_on>;
+ 	pinctrl-1 = <&sdc1_clk_off &sdc1_cmd_off &sdc1_data_off>;
+@@ -149,6 +145,14 @@
+ 	extcon = <&usb_id>;
+ };
+ 
++&wcnss {
++	status = "okay";
++};
++
++&wcnss_iris {
++	compatible = "qcom,wcn3620";
++};
++
+ &smd_rpm_regulators {
+ 	vdd_l1_l2_l3-supply = <&pm8916_s3>;
+ 	vdd_l4_l5_l6-supply = <&pm8916_s4>;
+diff --git a/arch/arm64/boot/dts/qcom/msm8916-huawei-g7.dts b/arch/arm64/boot/dts/qcom/msm8916-huawei-g7.dts
+index 8c07eca900d3f..5b1bac8f51220 100644
+--- a/arch/arm64/boot/dts/qcom/msm8916-huawei-g7.dts
++++ b/arch/arm64/boot/dts/qcom/msm8916-huawei-g7.dts
+@@ -227,10 +227,6 @@
+ 	status = "okay";
+ };
+ 
+-&pronto {
+-	status = "okay";
+-};
+-
+ &sdhc_1 {
+ 	status = "okay";
+ 
+@@ -312,6 +308,14 @@
+ 	qcom,hphl-jack-type-normally-open;
+ };
+ 
++&wcnss {
++	status = "okay";
++};
++
++&wcnss_iris {
++	compatible = "qcom,wcn3620";
++};
++
+ &smd_rpm_regulators {
+ 	vdd_l1_l2_l3-supply = <&pm8916_s3>;
+ 	vdd_l4_l5_l6-supply = <&pm8916_s4>;
+diff --git a/arch/arm64/boot/dts/qcom/msm8916-longcheer-l8150.dts b/arch/arm64/boot/dts/qcom/msm8916-longcheer-l8150.dts
+index d1e8cf2f50c0d..f1dd625e18227 100644
+--- a/arch/arm64/boot/dts/qcom/msm8916-longcheer-l8150.dts
++++ b/arch/arm64/boot/dts/qcom/msm8916-longcheer-l8150.dts
+@@ -231,10 +231,6 @@
+ 	status = "okay";
+ };
+ 
+-&pronto {
+-	status = "okay";
+-};
+-
+ &sdhc_1 {
+ 	status = "okay";
+ 
+@@ -263,6 +259,14 @@
+ 	extcon = <&pm8916_usbin>;
+ };
+ 
++&wcnss {
++	status = "okay";
++};
++
++&wcnss_iris {
++	compatible = "qcom,wcn3620";
++};
++
+ &smd_rpm_regulators {
+ 	vdd_l1_l2_l3-supply = <&pm8916_s3>;
+ 	vdd_l4_l5_l6-supply = <&pm8916_s4>;
+diff --git a/arch/arm64/boot/dts/qcom/msm8916-longcheer-l8910.dts b/arch/arm64/boot/dts/qcom/msm8916-longcheer-l8910.dts
+index 3899e11b9843b..b79e80913af9f 100644
+--- a/arch/arm64/boot/dts/qcom/msm8916-longcheer-l8910.dts
++++ b/arch/arm64/boot/dts/qcom/msm8916-longcheer-l8910.dts
+@@ -99,10 +99,6 @@
+ 	status = "okay";
+ };
+ 
+-&pronto {
+-	status = "okay";
+-};
+-
+ &sdhc_1 {
+ 	status = "okay";
+ 
+@@ -130,6 +126,14 @@
+ 	extcon = <&usb_id>;
+ };
+ 
++&wcnss {
++	status = "okay";
++};
++
++&wcnss_iris {
++	compatible = "qcom,wcn3620";
++};
++
+ &smd_rpm_regulators {
+ 	vdd_l1_l2_l3-supply = <&pm8916_s3>;
+ 	vdd_l4_l5_l6-supply = <&pm8916_s4>;
+diff --git a/arch/arm64/boot/dts/qcom/msm8916-pm8916.dtsi b/arch/arm64/boot/dts/qcom/msm8916-pm8916.dtsi
+index 8cac23b5240c6..6eb5e0a395100 100644
+--- a/arch/arm64/boot/dts/qcom/msm8916-pm8916.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8916-pm8916.dtsi
+@@ -20,17 +20,6 @@
+ 	pll-supply = <&pm8916_l7>;
+ };
+ 
+-&pronto {
+-	vddpx-supply = <&pm8916_l7>;
+-
+-	iris {
+-		vddxo-supply = <&pm8916_l7>;
+-		vddrfa-supply = <&pm8916_s3>;
+-		vddpa-supply = <&pm8916_l9>;
+-		vdddig-supply = <&pm8916_l5>;
+-	};
+-};
+-
+ &sdhc_1 {
+ 	vmmc-supply = <&pm8916_l8>;
+ 	vqmmc-supply = <&pm8916_l5>;
+@@ -46,6 +35,17 @@
+ 	v3p3-supply = <&pm8916_l13>;
+ };
+ 
++&wcnss {
++	vddpx-supply = <&pm8916_l7>;
++};
++
++&wcnss_iris {
++	vddxo-supply = <&pm8916_l7>;
++	vddrfa-supply = <&pm8916_s3>;
++	vddpa-supply = <&pm8916_l9>;
++	vdddig-supply = <&pm8916_l5>;
++};
++
+ &rpm_requests {
+ 	smd_rpm_regulators: regulators {
+ 		compatible = "qcom,rpm-pm8916-regulators";
+diff --git a/arch/arm64/boot/dts/qcom/msm8916-samsung-a2015-common.dtsi b/arch/arm64/boot/dts/qcom/msm8916-samsung-a2015-common.dtsi
+index a2ed7bdbf528f..16d67749960e0 100644
+--- a/arch/arm64/boot/dts/qcom/msm8916-samsung-a2015-common.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8916-samsung-a2015-common.dtsi
+@@ -252,10 +252,6 @@
+ 	linux,code = <KEY_VOLUMEDOWN>;
+ };
+ 
+-&pronto {
+-	status = "okay";
+-};
+-
+ &sdhc_1 {
+ 	status = "okay";
+ 
+diff --git a/arch/arm64/boot/dts/qcom/msm8916-samsung-a3u-eur.dts b/arch/arm64/boot/dts/qcom/msm8916-samsung-a3u-eur.dts
+index c691cca2eb459..a1ca4d8834201 100644
+--- a/arch/arm64/boot/dts/qcom/msm8916-samsung-a3u-eur.dts
++++ b/arch/arm64/boot/dts/qcom/msm8916-samsung-a3u-eur.dts
+@@ -112,6 +112,14 @@
+ 	status = "okay";
+ };
+ 
++&wcnss {
++	status = "okay";
++};
++
++&wcnss_iris {
++	compatible = "qcom,wcn3620";
++};
++
+ &msmgpio {
+ 	panel_vdd3_default: panel-vdd3-default-state {
+ 		pins = "gpio9";
+diff --git a/arch/arm64/boot/dts/qcom/msm8916-samsung-a5u-eur.dts b/arch/arm64/boot/dts/qcom/msm8916-samsung-a5u-eur.dts
+index 3dd819458785d..4e10b8a5e9f9c 100644
+--- a/arch/arm64/boot/dts/qcom/msm8916-samsung-a5u-eur.dts
++++ b/arch/arm64/boot/dts/qcom/msm8916-samsung-a5u-eur.dts
+@@ -54,12 +54,6 @@
+ 	status = "okay";
+ };
+ 
+-&pronto {
+-	iris {
+-		compatible = "qcom,wcn3660b";
+-	};
+-};
+-
+ &touchkey {
+ 	vcc-supply = <&reg_touch_key>;
+ 	vdd-supply = <&reg_touch_key>;
+@@ -69,6 +63,14 @@
+ 	status = "okay";
+ };
+ 
++&wcnss {
++	status = "okay";
++};
++
++&wcnss_iris {
++	compatible = "qcom,wcn3660b";
++};
++
+ &msmgpio {
+ 	tkey_en_default: tkey-en-default-state {
+ 		pins = "gpio97";
+diff --git a/arch/arm64/boot/dts/qcom/msm8916-samsung-e2015-common.dtsi b/arch/arm64/boot/dts/qcom/msm8916-samsung-e2015-common.dtsi
+index c95f0b4bc61f3..f6c4a011fdfd2 100644
+--- a/arch/arm64/boot/dts/qcom/msm8916-samsung-e2015-common.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8916-samsung-e2015-common.dtsi
+@@ -58,6 +58,14 @@
+ 	vdd-supply = <&reg_touch_key>;
+ };
+ 
++&wcnss {
++	status = "okay";
++};
++
++&wcnss_iris {
++	compatible = "qcom,wcn3620";
++};
++
+ &msmgpio {
+ 	tkey_en_default: tkey-en-default-state {
+ 		pins = "gpio97";
+diff --git a/arch/arm64/boot/dts/qcom/msm8916-samsung-gt5-common.dtsi b/arch/arm64/boot/dts/qcom/msm8916-samsung-gt5-common.dtsi
+index d920b7247d823..74ffd04db8d84 100644
+--- a/arch/arm64/boot/dts/qcom/msm8916-samsung-gt5-common.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8916-samsung-gt5-common.dtsi
+@@ -125,14 +125,6 @@
+ 	status = "okay";
+ };
+ 
+-&pronto {
+-	status = "okay";
+-
+-	iris {
+-		compatible = "qcom,wcn3660b";
+-	};
+-};
+-
+ &sdhc_1 {
+ 	pinctrl-0 = <&sdc1_clk_on &sdc1_cmd_on &sdc1_data_on>;
+ 	pinctrl-1 = <&sdc1_clk_off &sdc1_cmd_off &sdc1_data_off>;
+@@ -162,6 +154,14 @@
+ 	extcon = <&pm8916_usbin>;
+ };
+ 
++&wcnss {
++	status = "okay";
++};
++
++&wcnss_iris {
++	compatible = "qcom,wcn3660b";
++};
++
+ &smd_rpm_regulators {
+ 	vdd_l1_l2_l3-supply = <&pm8916_s3>;
+ 	vdd_l4_l5_l6-supply = <&pm8916_s4>;
+diff --git a/arch/arm64/boot/dts/qcom/msm8916-samsung-j5-common.dtsi b/arch/arm64/boot/dts/qcom/msm8916-samsung-j5-common.dtsi
+index f3b81b6f0a2f1..adeee0830e768 100644
+--- a/arch/arm64/boot/dts/qcom/msm8916-samsung-j5-common.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8916-samsung-j5-common.dtsi
+@@ -93,10 +93,6 @@
+ 	linux,code = <KEY_VOLUMEDOWN>;
+ };
+ 
+-&pronto {
+-	status = "okay";
+-};
+-
+ &sdhc_1 {
+ 	status = "okay";
+ 
+@@ -124,6 +120,14 @@
+ 	extcon = <&muic>;
+ };
+ 
++&wcnss {
++	status = "okay";
++};
++
++&wcnss_iris {
++	compatible = "qcom,wcn3620";
++};
++
+ &smd_rpm_regulators {
+ 	vdd_l1_l2_l3-supply = <&pm8916_s3>;
+ 	vdd_l4_l5_l6-supply = <&pm8916_s4>;
+diff --git a/arch/arm64/boot/dts/qcom/msm8916-samsung-serranove.dts b/arch/arm64/boot/dts/qcom/msm8916-samsung-serranove.dts
+index d4984b3af8023..1a41a4db874da 100644
+--- a/arch/arm64/boot/dts/qcom/msm8916-samsung-serranove.dts
++++ b/arch/arm64/boot/dts/qcom/msm8916-samsung-serranove.dts
+@@ -272,14 +272,6 @@
+ 	status = "okay";
+ };
+ 
+-&pronto {
+-	status = "okay";
+-
+-	iris {
+-		compatible = "qcom,wcn3660b";
+-	};
+-};
+-
+ &sdhc_1 {
+ 	status = "okay";
+ 
+@@ -320,6 +312,14 @@
+ 	extcon = <&muic>;
+ };
+ 
++&wcnss {
++	status = "okay";
++};
++
++&wcnss_iris {
++	compatible = "qcom,wcn3660b";
++};
++
+ &smd_rpm_regulators {
+ 	vdd_l1_l2_l3-supply = <&pm8916_s3>;
+ 	vdd_l4_l5_l6-supply = <&pm8916_s4>;
+diff --git a/arch/arm64/boot/dts/qcom/msm8916-ufi.dtsi b/arch/arm64/boot/dts/qcom/msm8916-ufi.dtsi
+index cdf34b74fa8fa..50bae6f214f1f 100644
+--- a/arch/arm64/boot/dts/qcom/msm8916-ufi.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8916-ufi.dtsi
+@@ -99,10 +99,6 @@
+ 	status = "okay";
+ };
+ 
+-&pronto {
+-	status = "okay";
+-};
+-
+ &sdhc_1 {
+ 	pinctrl-0 = <&sdc1_clk_on &sdc1_cmd_on &sdc1_data_on>;
+ 	pinctrl-1 = <&sdc1_clk_off &sdc1_cmd_off &sdc1_data_off>;
+@@ -122,6 +118,14 @@
+ 	extcon = <&pm8916_usbin>;
+ };
+ 
++&wcnss {
++	status = "okay";
++};
++
++&wcnss_iris {
++	compatible = "qcom,wcn3620";
++};
++
+ &smd_rpm_regulators {
+ 	vdd_l1_l2_l3-supply = <&pm8916_s3>;
+ 	vdd_l4_l5_l6-supply = <&pm8916_s4>;
+diff --git a/arch/arm64/boot/dts/qcom/msm8916-wingtech-wt88047.dts b/arch/arm64/boot/dts/qcom/msm8916-wingtech-wt88047.dts
+index a87be1d95b14b..ac56c7595f78a 100644
+--- a/arch/arm64/boot/dts/qcom/msm8916-wingtech-wt88047.dts
++++ b/arch/arm64/boot/dts/qcom/msm8916-wingtech-wt88047.dts
+@@ -153,10 +153,6 @@
+ 	status = "okay";
+ };
+ 
+-&pronto {
+-	status = "okay";
+-};
+-
+ &sdhc_1 {
+ 	status = "okay";
+ 
+@@ -184,6 +180,14 @@
+ 	extcon = <&usb_id>;
+ };
+ 
++&wcnss {
++	status = "okay";
++};
++
++&wcnss_iris {
++	compatible = "qcom,wcn3620";
++};
++
+ &smd_rpm_regulators {
+ 	vdd_l1_l2_l3-supply = <&pm8916_s3>;
+ 	vdd_l4_l5_l6-supply = <&pm8916_s4>;
+diff --git a/arch/arm64/boot/dts/qcom/msm8916.dtsi b/arch/arm64/boot/dts/qcom/msm8916.dtsi
+index 0d5283805f42c..7cc3d0d92cb9e 100644
+--- a/arch/arm64/boot/dts/qcom/msm8916.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8916.dtsi
+@@ -1161,7 +1161,7 @@
+ 			};
+ 		};
+ 
+-		camss: camss@1b00000 {
++		camss: camss@1b0ac00 {
+ 			compatible = "qcom,msm8916-camss";
+ 			reg = <0x01b0ac00 0x200>,
+ 				<0x01b00030 0x4>,
+@@ -1553,7 +1553,7 @@
+ 			#sound-dai-cells = <1>;
+ 		};
+ 
+-		sdhc_1: mmc@7824000 {
++		sdhc_1: mmc@7824900 {
+ 			compatible = "qcom,msm8916-sdhci", "qcom,sdhci-msm-v4";
+ 			reg = <0x07824900 0x11c>, <0x07824000 0x800>;
+ 			reg-names = "hc", "core";
+@@ -1571,7 +1571,7 @@
+ 			status = "disabled";
+ 		};
+ 
+-		sdhc_2: mmc@7864000 {
++		sdhc_2: mmc@7864900 {
+ 			compatible = "qcom,msm8916-sdhci", "qcom,sdhci-msm-v4";
+ 			reg = <0x07864900 0x11c>, <0x07864000 0x800>;
+ 			reg-names = "hc", "core";
+@@ -1870,7 +1870,7 @@
+ 			};
+ 		};
+ 
+-		pronto: remoteproc@a21b000 {
++		wcnss: remoteproc@a204000 {
+ 			compatible = "qcom,pronto-v2-pil", "qcom,pronto";
+ 			reg = <0x0a204000 0x2000>, <0x0a202000 0x1000>, <0x0a21b000 0x3000>;
+ 			reg-names = "ccu", "dxe", "pmu";
+@@ -1896,9 +1896,8 @@
+ 
+ 			status = "disabled";
+ 
+-			iris {
+-				compatible = "qcom,wcn3620";
+-
++			wcnss_iris: iris {
++				/* Separate chip, compatible is board-specific */
+ 				clocks = <&rpmcc RPM_SMD_RF_CLK2>;
+ 				clock-names = "xo";
+ 			};
+@@ -1916,13 +1915,13 @@
+ 					compatible = "qcom,wcnss";
+ 					qcom,smd-channels = "WCNSS_CTRL";
+ 
+-					qcom,mmio = <&pronto>;
++					qcom,mmio = <&wcnss>;
+ 
+-					bluetooth {
++					wcnss_bt: bluetooth {
+ 						compatible = "qcom,wcnss-bt";
+ 					};
+ 
+-					wifi {
++					wcnss_wifi: wifi {
+ 						compatible = "qcom,wcnss-wlan";
+ 
+ 						interrupts = <GIC_SPI 145 IRQ_TYPE_LEVEL_HIGH>,
+diff --git a/arch/arm64/boot/dts/qcom/msm8953.dtsi b/arch/arm64/boot/dts/qcom/msm8953.dtsi
+index 610f3e3fc0c22..7001f6b0b9f9a 100644
+--- a/arch/arm64/boot/dts/qcom/msm8953.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8953.dtsi
+@@ -878,7 +878,7 @@
+ 			};
+ 		};
+ 
+-		apps_iommu: iommu@1e00000 {
++		apps_iommu: iommu@1e20000 {
+ 			compatible = "qcom,msm8953-iommu", "qcom,msm-iommu-v1";
+ 			ranges  = <0 0x1e20000 0x20000>;
+ 
+diff --git a/arch/arm64/boot/dts/qcom/msm8976.dtsi b/arch/arm64/boot/dts/qcom/msm8976.dtsi
+index e55baafd9efd0..e1617b9a73df2 100644
+--- a/arch/arm64/boot/dts/qcom/msm8976.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8976.dtsi
+@@ -821,7 +821,7 @@
+ 			cell-index = <0>;
+ 		};
+ 
+-		sdhc_1: mmc@7824000 {
++		sdhc_1: mmc@7824900 {
+ 			compatible = "qcom,msm8976-sdhci", "qcom,sdhci-msm-v4";
+ 			reg = <0x07824900 0x500>, <0x07824000 0x800>;
+ 			reg-names = "hc", "core";
+@@ -837,7 +837,7 @@
+ 			status = "disabled";
+ 		};
+ 
+-		sdhc_2: mmc@7864000 {
++		sdhc_2: mmc@7864900 {
+ 			compatible = "qcom,msm8976-sdhci", "qcom,sdhci-msm-v4";
+ 			reg = <0x07864900 0x11c>, <0x07864000 0x800>;
+ 			reg-names = "hc", "core";
+@@ -956,7 +956,7 @@
+ 			#reset-cells = <1>;
+ 		};
+ 
+-		sdhc_3: mmc@7a24000 {
++		sdhc_3: mmc@7a24900 {
+ 			compatible = "qcom,msm8976-sdhci", "qcom,sdhci-msm-v4";
+ 			reg = <0x07a24900 0x11c>, <0x07a24000 0x800>;
+ 			reg-names = "hc", "core";
+diff --git a/arch/arm64/boot/dts/qcom/msm8994.dtsi b/arch/arm64/boot/dts/qcom/msm8994.dtsi
+index 24c3fced8df71..5ce23306fd844 100644
+--- a/arch/arm64/boot/dts/qcom/msm8994.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8994.dtsi
+@@ -745,7 +745,7 @@
+ 			reg = <0xfc4ab000 0x4>;
+ 		};
+ 
+-		spmi_bus: spmi@fc4c0000 {
++		spmi_bus: spmi@fc4cf000 {
+ 			compatible = "qcom,spmi-pmic-arb";
+ 			reg = <0xfc4cf000 0x1000>,
+ 			      <0xfc4cb000 0x1000>,
+diff --git a/arch/arm64/boot/dts/qcom/msm8996.dtsi b/arch/arm64/boot/dts/qcom/msm8996.dtsi
+index 73da1a4d52462..771db923b2e33 100644
+--- a/arch/arm64/boot/dts/qcom/msm8996.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8996.dtsi
+@@ -2069,7 +2069,7 @@
+ 			};
+ 		};
+ 
+-		camss: camss@a00000 {
++		camss: camss@a34000 {
+ 			compatible = "qcom,msm8996-camss";
+ 			reg = <0x00a34000 0x1000>,
+ 			      <0x00a00030 0x4>,
+diff --git a/arch/arm64/boot/dts/qcom/pm7250b.dtsi b/arch/arm64/boot/dts/qcom/pm7250b.dtsi
+index d709d955a2f5a..daa6f1d30efa0 100644
+--- a/arch/arm64/boot/dts/qcom/pm7250b.dtsi
++++ b/arch/arm64/boot/dts/qcom/pm7250b.dtsi
+@@ -3,6 +3,7 @@
+  * Copyright (C) 2022 Luca Weiss <luca.weiss@fairphone.com>
+  */
+ 
++#include <dt-bindings/iio/qcom,spmi-vadc.h>
+ #include <dt-bindings/interrupt-controller/irq.h>
+ #include <dt-bindings/spmi/spmi.h>
+ 
+diff --git a/arch/arm64/boot/dts/qcom/pm8998.dtsi b/arch/arm64/boot/dts/qcom/pm8998.dtsi
+index adbba9f4089ab..13925ac44669d 100644
+--- a/arch/arm64/boot/dts/qcom/pm8998.dtsi
++++ b/arch/arm64/boot/dts/qcom/pm8998.dtsi
+@@ -55,7 +55,7 @@
+ 
+ 			pm8998_resin: resin {
+ 				compatible = "qcom,pm8941-resin";
+-				interrupts = <GIC_SPI 0x8 1 IRQ_TYPE_EDGE_BOTH>;
++				interrupts = <0x0 0x8 1 IRQ_TYPE_EDGE_BOTH>;
+ 				debounce = <15625>;
+ 				bias-pull-up;
+ 				status = "disabled";
+diff --git a/arch/arm64/boot/dts/qcom/qdu1000.dtsi b/arch/arm64/boot/dts/qcom/qdu1000.dtsi
+index c72a51c32a300..eeb4e51b31cfc 100644
+--- a/arch/arm64/boot/dts/qcom/qdu1000.dtsi
++++ b/arch/arm64/boot/dts/qcom/qdu1000.dtsi
+@@ -1238,6 +1238,7 @@
+ 			qcom,tcs-config = <ACTIVE_TCS    2>, <SLEEP_TCS     3>,
+ 					  <WAKE_TCS      3>, <CONTROL_TCS   0>;
+ 			label = "apps_rsc";
++			power-domains = <&CLUSTER_PD>;
+ 
+ 			apps_bcm_voter: bcm-voter {
+ 				compatible = "qcom,bcm-voter";
+diff --git a/arch/arm64/boot/dts/qcom/sdm630.dtsi b/arch/arm64/boot/dts/qcom/sdm630.dtsi
+index 5827cda270a0e..07c720c5721d4 100644
+--- a/arch/arm64/boot/dts/qcom/sdm630.dtsi
++++ b/arch/arm64/boot/dts/qcom/sdm630.dtsi
+@@ -1893,7 +1893,7 @@
+ 			};
+ 		};
+ 
+-		camss: camss@ca00000 {
++		camss: camss@ca00020 {
+ 			compatible = "qcom,sdm660-camss";
+ 			reg = <0x0ca00020 0x10>,
+ 			      <0x0ca30000 0x100>,
+diff --git a/arch/arm64/boot/dts/qcom/sdm670.dtsi b/arch/arm64/boot/dts/qcom/sdm670.dtsi
+index 02f14692dd9da..50dd050eb132d 100644
+--- a/arch/arm64/boot/dts/qcom/sdm670.dtsi
++++ b/arch/arm64/boot/dts/qcom/sdm670.dtsi
+@@ -1155,6 +1155,7 @@
+ 					  <SLEEP_TCS   3>,
+ 					  <WAKE_TCS    3>,
+ 					  <CONTROL_TCS 1>;
++			power-domains = <&CLUSTER_PD>;
+ 
+ 			apps_bcm_voter: bcm-voter {
+ 				compatible = "qcom,bcm-voter";
+diff --git a/arch/arm64/boot/dts/qcom/sdm845-xiaomi-polaris.dts b/arch/arm64/boot/dts/qcom/sdm845-xiaomi-polaris.dts
+index 56f2d855df78d..406f0224581a7 100644
+--- a/arch/arm64/boot/dts/qcom/sdm845-xiaomi-polaris.dts
++++ b/arch/arm64/boot/dts/qcom/sdm845-xiaomi-polaris.dts
+@@ -483,6 +483,7 @@
+ 		};
+ 
+ 		rmi4-f12@12 {
++			reg = <0x12>;
+ 			syna,rezero-wait-ms = <0xc8>;
+ 			syna,clip-x-high = <0x438>;
+ 			syna,clip-y-high = <0x870>;
+diff --git a/arch/arm64/boot/dts/qcom/sdm845.dtsi b/arch/arm64/boot/dts/qcom/sdm845.dtsi
+index c5e92851a4f08..a818a81408dfb 100644
+--- a/arch/arm64/boot/dts/qcom/sdm845.dtsi
++++ b/arch/arm64/boot/dts/qcom/sdm845.dtsi
+@@ -4158,7 +4158,7 @@
+ 			#reset-cells = <1>;
+ 		};
+ 
+-		camss: camss@a00000 {
++		camss: camss@acb3000 {
+ 			compatible = "qcom,sdm845-camss";
+ 
+ 			reg = <0 0x0acb3000 0 0x1000>,
+@@ -5058,6 +5058,7 @@
+ 					  <SLEEP_TCS   3>,
+ 					  <WAKE_TCS    3>,
+ 					  <CONTROL_TCS 1>;
++			power-domains = <&CLUSTER_PD>;
+ 
+ 			apps_bcm_voter: bcm-voter {
+ 				compatible = "qcom,bcm-voter";
+diff --git a/arch/arm64/boot/dts/qcom/sm6115.dtsi b/arch/arm64/boot/dts/qcom/sm6115.dtsi
+index fbd67d2c8d781..62b1c1674a68d 100644
+--- a/arch/arm64/boot/dts/qcom/sm6115.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm6115.dtsi
+@@ -693,7 +693,7 @@
+ 			#interrupt-cells = <4>;
+ 		};
+ 
+-		tsens0: thermal-sensor@4410000 {
++		tsens0: thermal-sensor@4411000 {
+ 			compatible = "qcom,sm6115-tsens", "qcom,tsens-v2";
+ 			reg = <0x0 0x04411000 0x0 0x1ff>, /* TM */
+ 			      <0x0 0x04410000 0x0 0x8>; /* SROT */
+diff --git a/arch/arm64/boot/dts/qcom/sm8250-sony-xperia-edo.dtsi b/arch/arm64/boot/dts/qcom/sm8250-sony-xperia-edo.dtsi
+index b9c982a059dfb..c0f22a3bea5ce 100644
+--- a/arch/arm64/boot/dts/qcom/sm8250-sony-xperia-edo.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8250-sony-xperia-edo.dtsi
+@@ -26,9 +26,10 @@
+ 		framebuffer: framebuffer@9c000000 {
+ 			compatible = "simple-framebuffer";
+ 			reg = <0 0x9c000000 0 0x2300000>;
+-			width = <1644>;
+-			height = <3840>;
+-			stride = <(1644 * 4)>;
++			/* pdx203 BL initializes in 2.5k mode, not 4k */
++			width = <1096>;
++			height = <2560>;
++			stride = <(1096 * 4)>;
+ 			format = "a8r8g8b8";
+ 			/*
+ 			 * That's a lot of clocks, but it's necessary due
+diff --git a/arch/arm64/boot/dts/qcom/sm8350.dtsi b/arch/arm64/boot/dts/qcom/sm8350.dtsi
+index 9cb52d7efdd8d..1a258a5461acf 100644
+--- a/arch/arm64/boot/dts/qcom/sm8350.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8350.dtsi
+@@ -892,7 +892,7 @@
+ 			};
+ 		};
+ 
+-		gpi_dma0: dma-controller@900000 {
++		gpi_dma0: dma-controller@9800000 {
+ 			compatible = "qcom,sm8350-gpi-dma", "qcom,sm6350-gpi-dma";
+ 			reg = <0 0x09800000 0 0x60000>;
+ 			interrupts = <GIC_SPI 244 IRQ_TYPE_LEVEL_HIGH>,
+@@ -1625,7 +1625,7 @@
+ 			status = "disabled";
+ 		};
+ 
+-		pcie1_phy: phy@1c0f000 {
++		pcie1_phy: phy@1c0e000 {
+ 			compatible = "qcom,sm8350-qmp-gen3x2-pcie-phy";
+ 			reg = <0 0x01c0e000 0 0x2000>;
+ 			clocks = <&gcc GCC_PCIE_1_AUX_CLK>,
+diff --git a/arch/arm64/boot/dts/qcom/sm8550.dtsi b/arch/arm64/boot/dts/qcom/sm8550.dtsi
+index 90e3fb11e6e70..75bd5f2ae681f 100644
+--- a/arch/arm64/boot/dts/qcom/sm8550.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8550.dtsi
+@@ -1843,8 +1843,8 @@
+ 				 <&apps_smmu 0x481 0x0>;
+ 		};
+ 
+-		crypto: crypto@1de0000 {
+-			compatible = "qcom,sm8550-qce";
++		crypto: crypto@1dfa000 {
++			compatible = "qcom,sm8550-qce", "qcom,sm8150-qce", "qcom,qce";
+ 			reg = <0x0 0x01dfa000 0x0 0x6000>;
+ 			dmas = <&cryptobam 4>, <&cryptobam 5>;
+ 			dma-names = "rx", "tx";
+@@ -2441,6 +2441,10 @@
+ 
+ 			resets = <&gcc GCC_USB30_PRIM_BCR>;
+ 
++			interconnects = <&aggre1_noc MASTER_USB3_0 0 &mc_virt SLAVE_EBI1 0>,
++					<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_USB3_0 0>;
++			interconnect-names = "usb-ddr", "apps-usb";
++
+ 			status = "disabled";
+ 
+ 			usb_1_dwc3: usb@a600000 {
+@@ -2536,7 +2540,7 @@
+ 			#interrupt-cells = <4>;
+ 		};
+ 
+-		tlmm: pinctrl@f000000 {
++		tlmm: pinctrl@f100000 {
+ 			compatible = "qcom,sm8550-tlmm";
+ 			reg = <0 0x0f100000 0 0x300000>;
+ 			interrupts = <GIC_SPI 208 IRQ_TYPE_LEVEL_HIGH>;
+@@ -3250,6 +3254,7 @@
+ 			qcom,drv-id = <2>;
+ 			qcom,tcs-config = <ACTIVE_TCS    3>, <SLEEP_TCS     2>,
+ 					  <WAKE_TCS      2>, <CONTROL_TCS   0>;
++			power-domains = <&CLUSTER_PD>;
+ 
+ 			apps_bcm_voter: bcm-voter {
+ 				compatible = "qcom,bcm-voter";
+diff --git a/arch/arm64/boot/dts/renesas/ulcb-kf.dtsi b/arch/arm64/boot/dts/renesas/ulcb-kf.dtsi
+index efc80960380f4..c78b7a5c2e2aa 100644
+--- a/arch/arm64/boot/dts/renesas/ulcb-kf.dtsi
++++ b/arch/arm64/boot/dts/renesas/ulcb-kf.dtsi
+@@ -367,7 +367,7 @@
+ 	};
+ 
+ 	scif1_pins: scif1 {
+-		groups = "scif1_data_b", "scif1_ctrl";
++		groups = "scif1_data_b";
+ 		function = "scif1";
+ 	};
+ 
+@@ -397,7 +397,6 @@
+ &scif1 {
+ 	pinctrl-0 = <&scif1_pins>;
+ 	pinctrl-names = "default";
+-	uart-has-rtscts;
+ 
+ 	status = "okay";
+ };
+diff --git a/arch/arm64/boot/dts/ti/k3-am69-sk.dts b/arch/arm64/boot/dts/ti/k3-am69-sk.dts
+index bc49ba534790e..f364b7803115d 100644
+--- a/arch/arm64/boot/dts/ti/k3-am69-sk.dts
++++ b/arch/arm64/boot/dts/ti/k3-am69-sk.dts
+@@ -23,7 +23,7 @@
+ 	aliases {
+ 		serial2 = &main_uart8;
+ 		mmc1 = &main_sdhci1;
+-		i2c0 = &main_i2c0;
++		i2c3 = &main_i2c0;
+ 	};
+ 
+ 	memory@80000000 {
+diff --git a/arch/arm64/boot/dts/ti/k3-j7200-common-proc-board.dts b/arch/arm64/boot/dts/ti/k3-j7200-common-proc-board.dts
+index 0d39d6b8cc0ca..63633e4f6c59f 100644
+--- a/arch/arm64/boot/dts/ti/k3-j7200-common-proc-board.dts
++++ b/arch/arm64/boot/dts/ti/k3-j7200-common-proc-board.dts
+@@ -83,25 +83,25 @@
+ &wkup_pmx2 {
+ 	mcu_cpsw_pins_default: mcu-cpsw-pins-default {
+ 		pinctrl-single,pins = <
+-			J721E_WKUP_IOPAD(0x0068, PIN_OUTPUT, 0) /* MCU_RGMII1_TX_CTL */
+-			J721E_WKUP_IOPAD(0x006c, PIN_INPUT, 0) /* MCU_RGMII1_RX_CTL */
+-			J721E_WKUP_IOPAD(0x0070, PIN_OUTPUT, 0) /* MCU_RGMII1_TD3 */
+-			J721E_WKUP_IOPAD(0x0074, PIN_OUTPUT, 0) /* MCU_RGMII1_TD2 */
+-			J721E_WKUP_IOPAD(0x0078, PIN_OUTPUT, 0) /* MCU_RGMII1_TD1 */
+-			J721E_WKUP_IOPAD(0x007c, PIN_OUTPUT, 0) /* MCU_RGMII1_TD0 */
+-			J721E_WKUP_IOPAD(0x0088, PIN_INPUT, 0) /* MCU_RGMII1_RD3 */
+-			J721E_WKUP_IOPAD(0x008c, PIN_INPUT, 0) /* MCU_RGMII1_RD2 */
+-			J721E_WKUP_IOPAD(0x0090, PIN_INPUT, 0) /* MCU_RGMII1_RD1 */
+-			J721E_WKUP_IOPAD(0x0094, PIN_INPUT, 0) /* MCU_RGMII1_RD0 */
+-			J721E_WKUP_IOPAD(0x0080, PIN_OUTPUT, 0) /* MCU_RGMII1_TXC */
+-			J721E_WKUP_IOPAD(0x0084, PIN_INPUT, 0) /* MCU_RGMII1_RXC */
++			J721E_WKUP_IOPAD(0x0000, PIN_OUTPUT, 0) /* MCU_RGMII1_TX_CTL */
++			J721E_WKUP_IOPAD(0x0004, PIN_INPUT, 0) /* MCU_RGMII1_RX_CTL */
++			J721E_WKUP_IOPAD(0x0008, PIN_OUTPUT, 0) /* MCU_RGMII1_TD3 */
++			J721E_WKUP_IOPAD(0x000c, PIN_OUTPUT, 0) /* MCU_RGMII1_TD2 */
++			J721E_WKUP_IOPAD(0x0010, PIN_OUTPUT, 0) /* MCU_RGMII1_TD1 */
++			J721E_WKUP_IOPAD(0x0014, PIN_OUTPUT, 0) /* MCU_RGMII1_TD0 */
++			J721E_WKUP_IOPAD(0x0020, PIN_INPUT, 0) /* MCU_RGMII1_RD3 */
++			J721E_WKUP_IOPAD(0x0024, PIN_INPUT, 0) /* MCU_RGMII1_RD2 */
++			J721E_WKUP_IOPAD(0x0028, PIN_INPUT, 0) /* MCU_RGMII1_RD1 */
++			J721E_WKUP_IOPAD(0x002c, PIN_INPUT, 0) /* MCU_RGMII1_RD0 */
++			J721E_WKUP_IOPAD(0x0018, PIN_OUTPUT, 0) /* MCU_RGMII1_TXC */
++			J721E_WKUP_IOPAD(0x001c, PIN_INPUT, 0) /* MCU_RGMII1_RXC */
+ 		>;
+ 	};
+ 
+ 	mcu_mdio_pins_default: mcu-mdio1-pins-default {
+ 		pinctrl-single,pins = <
+-			J721E_WKUP_IOPAD(0x009c, PIN_OUTPUT, 0) /* (L1) MCU_MDIO0_MDC */
+-			J721E_WKUP_IOPAD(0x0098, PIN_INPUT, 0) /* (L4) MCU_MDIO0_MDIO */
++			J721E_WKUP_IOPAD(0x0034, PIN_OUTPUT, 0) /* (L1) MCU_MDIO0_MDC */
++			J721E_WKUP_IOPAD(0x0030, PIN_INPUT, 0) /* (L4) MCU_MDIO0_MDIO */
+ 		>;
+ 	};
+ };
+diff --git a/arch/arm64/boot/dts/ti/k3-j721e-beagleboneai64.dts b/arch/arm64/boot/dts/ti/k3-j721e-beagleboneai64.dts
+index 37c24b077b6aa..8a62ac263b89a 100644
+--- a/arch/arm64/boot/dts/ti/k3-j721e-beagleboneai64.dts
++++ b/arch/arm64/boot/dts/ti/k3-j721e-beagleboneai64.dts
+@@ -936,6 +936,7 @@
+ };
+ 
+ &mailbox0_cluster0 {
++	status = "okay";
+ 	interrupts = <436>;
+ 
+ 	mbox_mcu_r5fss0_core0: mbox-mcu-r5fss0-core0 {
+@@ -950,6 +951,7 @@
+ };
+ 
+ &mailbox0_cluster1 {
++	status = "okay";
+ 	interrupts = <432>;
+ 
+ 	mbox_main_r5fss0_core0: mbox-main-r5fss0-core0 {
+@@ -964,6 +966,7 @@
+ };
+ 
+ &mailbox0_cluster2 {
++	status = "okay";
+ 	interrupts = <428>;
+ 
+ 	mbox_main_r5fss1_core0: mbox-main-r5fss1-core0 {
+@@ -978,6 +981,7 @@
+ };
+ 
+ &mailbox0_cluster3 {
++	status = "okay";
+ 	interrupts = <424>;
+ 
+ 	mbox_c66_0: mbox-c66-0 {
+@@ -992,6 +996,7 @@
+ };
+ 
+ &mailbox0_cluster4 {
++	status = "okay";
+ 	interrupts = <420>;
+ 
+ 	mbox_c71_0: mbox-c71-0 {
+diff --git a/arch/arm64/boot/dts/ti/k3-j784s4-evm.dts b/arch/arm64/boot/dts/ti/k3-j784s4-evm.dts
+index 8cd4a7ecc121e..ed575f17935b6 100644
+--- a/arch/arm64/boot/dts/ti/k3-j784s4-evm.dts
++++ b/arch/arm64/boot/dts/ti/k3-j784s4-evm.dts
+@@ -22,7 +22,7 @@
+ 	aliases {
+ 		serial2 = &main_uart8;
+ 		mmc1 = &main_sdhci1;
+-		i2c0 = &main_i2c0;
++		i2c3 = &main_i2c0;
+ 	};
+ 
+ 	memory@80000000 {
+@@ -140,6 +140,32 @@
+ 	};
+ };
+ 
++&wkup_pmx2 {
++	mcu_cpsw_pins_default: mcu-cpsw-pins-default {
++		pinctrl-single,pins = <
++			J784S4_WKUP_IOPAD(0x02c, PIN_INPUT, 0) /* (A35) MCU_RGMII1_RD0 */
++			J784S4_WKUP_IOPAD(0x028, PIN_INPUT, 0) /* (B36) MCU_RGMII1_RD1 */
++			J784S4_WKUP_IOPAD(0x024, PIN_INPUT, 0) /* (C36) MCU_RGMII1_RD2 */
++			J784S4_WKUP_IOPAD(0x020, PIN_INPUT, 0) /* (D36) MCU_RGMII1_RD3 */
++			J784S4_WKUP_IOPAD(0x01c, PIN_INPUT, 0) /* (B37) MCU_RGMII1_RXC */
++			J784S4_WKUP_IOPAD(0x004, PIN_INPUT, 0) /* (C37) MCU_RGMII1_RX_CTL */
++			J784S4_WKUP_IOPAD(0x014, PIN_OUTPUT, 0) /* (D37) MCU_RGMII1_TD0 */
++			J784S4_WKUP_IOPAD(0x010, PIN_OUTPUT, 0) /* (D38) MCU_RGMII1_TD1 */
++			J784S4_WKUP_IOPAD(0x00c, PIN_OUTPUT, 0) /* (E37) MCU_RGMII1_TD2 */
++			J784S4_WKUP_IOPAD(0x008, PIN_OUTPUT, 0) /* (E38) MCU_RGMII1_TD3 */
++			J784S4_WKUP_IOPAD(0x018, PIN_OUTPUT, 0) /* (E36) MCU_RGMII1_TXC */
++			J784S4_WKUP_IOPAD(0x000, PIN_OUTPUT, 0) /* (C38) MCU_RGMII1_TX_CTL */
++		>;
++	};
++
++	mcu_mdio_pins_default: mcu-mdio-pins-default {
++		pinctrl-single,pins = <
++			J784S4_WKUP_IOPAD(0x034, PIN_OUTPUT, 0) /* (A36) MCU_MDIO0_MDC */
++			J784S4_WKUP_IOPAD(0x030, PIN_INPUT, 0) /* (B35) MCU_MDIO0_MDIO */
++		>;
++	};
++};
++
+ &main_uart8 {
+ 	status = "okay";
+ 	pinctrl-names = "default";
+@@ -194,3 +220,27 @@
+ &main_gpio0 {
+ 	status = "okay";
+ };
++
++&mcu_cpsw {
++	status = "okay";
++	pinctrl-names = "default";
++	pinctrl-0 = <&mcu_cpsw_pins_default>;
++};
++
++&davinci_mdio {
++	pinctrl-names = "default";
++	pinctrl-0 = <&mcu_mdio_pins_default>;
++
++	mcu_phy0: ethernet-phy@0 {
++		reg = <0>;
++		ti,rx-internal-delay = <DP83867_RGMIIDCTL_2_00_NS>;
++		ti,fifo-depth = <DP83867_PHYCR_FIFO_DEPTH_4_B_NIB>;
++		ti,min-output-impedance;
++	};
++};
++
++&mcu_cpsw_port1 {
++	status = "okay";
++	phy-mode = "rgmii-rxid";
++	phy-handle = <&mcu_phy0>;
++};
+diff --git a/arch/arm64/boot/dts/ti/k3-j784s4-mcu-wakeup.dtsi b/arch/arm64/boot/dts/ti/k3-j784s4-mcu-wakeup.dtsi
+index 64bd3dee14aa6..8a2350f2c82d0 100644
+--- a/arch/arm64/boot/dts/ti/k3-j784s4-mcu-wakeup.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j784s4-mcu-wakeup.dtsi
+@@ -50,7 +50,34 @@
+ 	wkup_pmx0: pinctrl@4301c000 {
+ 		compatible = "pinctrl-single";
+ 		/* Proxy 0 addressing */
+-		reg = <0x00 0x4301c000 0x00 0x178>;
++		reg = <0x00 0x4301c000 0x00 0x034>;
++		#pinctrl-cells = <1>;
++		pinctrl-single,register-width = <32>;
++		pinctrl-single,function-mask = <0xffffffff>;
++	};
++
++	wkup_pmx1: pinctrl@4301c038 {
++		compatible = "pinctrl-single";
++		/* Proxy 0 addressing */
++		reg = <0x00 0x4301c038 0x00 0x02c>;
++		#pinctrl-cells = <1>;
++		pinctrl-single,register-width = <32>;
++		pinctrl-single,function-mask = <0xffffffff>;
++	};
++
++	wkup_pmx2: pinctrl@4301c068 {
++		compatible = "pinctrl-single";
++		/* Proxy 0 addressing */
++		reg = <0x00 0x4301c068 0x00 0x120>;
++		#pinctrl-cells = <1>;
++		pinctrl-single,register-width = <32>;
++		pinctrl-single,function-mask = <0xffffffff>;
++	};
++
++	wkup_pmx3: pinctrl@4301c190 {
++		compatible = "pinctrl-single";
++		/* Proxy 0 addressing */
++		reg = <0x00 0x4301c190 0x00 0x004>;
+ 		#pinctrl-cells = <1>;
+ 		pinctrl-single,register-width = <32>;
+ 		pinctrl-single,function-mask = <0xffffffff>;
+diff --git a/arch/arm64/include/asm/fpsimdmacros.h b/arch/arm64/include/asm/fpsimdmacros.h
+index cd03819a3b686..cdf6a35e39944 100644
+--- a/arch/arm64/include/asm/fpsimdmacros.h
++++ b/arch/arm64/include/asm/fpsimdmacros.h
+@@ -316,12 +316,12 @@
+  _for n, 0, 15,	_sve_str_p	\n, \nxbase, \n - 16
+ 		cbz		\save_ffr, 921f
+ 		_sve_rdffr	0
+-		_sve_str_p	0, \nxbase
+-		_sve_ldr_p	0, \nxbase, -16
+ 		b		922f
+ 921:
+-		str		xzr, [x\nxbase]		// Zero out FFR
++		_sve_pfalse	0			// Zero out FFR
+ 922:
++		_sve_str_p	0, \nxbase
++		_sve_ldr_p	0, \nxbase, -16
+ 		mrs		x\nxtmp, fpsr
+ 		str		w\nxtmp, [\xpfpsr]
+ 		mrs		x\nxtmp, fpcr
+diff --git a/arch/mips/boot/dts/ingenic/ci20.dts b/arch/mips/boot/dts/ingenic/ci20.dts
+index 239c4537484d0..2b1284c6c64a6 100644
+--- a/arch/mips/boot/dts/ingenic/ci20.dts
++++ b/arch/mips/boot/dts/ingenic/ci20.dts
+@@ -237,59 +237,49 @@
+ 	act8600: act8600@5a {
+ 		compatible = "active-semi,act8600";
+ 		reg = <0x5a>;
+-		status = "okay";
+ 
+ 		regulators {
+-			vddcore: SUDCDC1 {
+-				regulator-name = "DCDC_REG1";
++			vddcore: DCDC1 {
+ 				regulator-min-microvolt = <1100000>;
+ 				regulator-max-microvolt = <1100000>;
+ 				regulator-always-on;
+ 			};
+-			vddmem: SUDCDC2 {
+-				regulator-name = "DCDC_REG2";
++			vddmem: DCDC2 {
+ 				regulator-min-microvolt = <1500000>;
+ 				regulator-max-microvolt = <1500000>;
+ 				regulator-always-on;
+ 			};
+-			vcc_33: SUDCDC3 {
+-				regulator-name = "DCDC_REG3";
++			vcc_33: DCDC3 {
+ 				regulator-min-microvolt = <3300000>;
+ 				regulator-max-microvolt = <3300000>;
+ 				regulator-always-on;
+ 			};
+-			vcc_50: SUDCDC4 {
+-				regulator-name = "SUDCDC_REG4";
++			vcc_50: SUDCDC_REG4 {
+ 				regulator-min-microvolt = <5000000>;
+ 				regulator-max-microvolt = <5000000>;
+ 				regulator-always-on;
+ 			};
+-			vcc_25: LDO_REG5 {
+-				regulator-name = "LDO_REG5";
++			vcc_25: LDO5 {
+ 				regulator-min-microvolt = <2500000>;
+ 				regulator-max-microvolt = <2500000>;
+ 				regulator-always-on;
+ 			};
+-			wifi_io: LDO_REG6 {
+-				regulator-name = "LDO_REG6";
++			wifi_io: LDO6 {
+ 				regulator-min-microvolt = <2500000>;
+ 				regulator-max-microvolt = <2500000>;
+ 				regulator-always-on;
+ 			};
+-			vcc_28: LDO_REG7 {
+-				regulator-name = "LDO_REG7";
++			cim_io_28: LDO7 {
+ 				regulator-min-microvolt = <2800000>;
+ 				regulator-max-microvolt = <2800000>;
+ 				regulator-always-on;
+ 			};
+-			vcc_15: LDO_REG8 {
+-				regulator-name = "LDO_REG8";
++			cim_io_15: LDO8 {
+ 				regulator-min-microvolt = <1500000>;
+ 				regulator-max-microvolt = <1500000>;
+ 				regulator-always-on;
+ 			};
+ 			vrtc_18: LDO_REG9 {
+-				regulator-name = "LDO_REG9";
+ 				/* Despite the datasheet stating 3.3V
+ 				 * for REG9 and the driver expecting that,
+ 				 * REG9 outputs 1.8V.
+@@ -303,7 +293,6 @@
+ 				regulator-always-on;
+ 			};
+ 			vcc_11: LDO_REG10 {
+-				regulator-name = "LDO_REG10";
+ 				regulator-min-microvolt = <1200000>;
+ 				regulator-max-microvolt = <1200000>;
+ 				regulator-always-on;
+diff --git a/arch/powerpc/kernel/interrupt.c b/arch/powerpc/kernel/interrupt.c
+index 0ec1581619db5..cf770d86c03c6 100644
+--- a/arch/powerpc/kernel/interrupt.c
++++ b/arch/powerpc/kernel/interrupt.c
+@@ -368,7 +368,6 @@ void preempt_schedule_irq(void);
+ 
+ notrace unsigned long interrupt_exit_kernel_prepare(struct pt_regs *regs)
+ {
+-	unsigned long flags;
+ 	unsigned long ret = 0;
+ 	unsigned long kuap;
+ 	bool stack_store = read_thread_flags() & _TIF_EMULATE_STACK_STORE;
+@@ -392,7 +391,7 @@ notrace unsigned long interrupt_exit_kernel_prepare(struct pt_regs *regs)
+ 
+ 	kuap = kuap_get_and_assert_locked();
+ 
+-	local_irq_save(flags);
++	local_irq_disable();
+ 
+ 	if (!arch_irq_disabled_regs(regs)) {
+ 		/* Returning to a kernel context with local irqs enabled. */
+diff --git a/arch/powerpc/kernel/ppc_save_regs.S b/arch/powerpc/kernel/ppc_save_regs.S
+index 49813f9824681..a9b9c32d0c1ff 100644
+--- a/arch/powerpc/kernel/ppc_save_regs.S
++++ b/arch/powerpc/kernel/ppc_save_regs.S
+@@ -31,10 +31,10 @@ _GLOBAL(ppc_save_regs)
+ 	lbz	r0,PACAIRQSOFTMASK(r13)
+ 	PPC_STL	r0,SOFTE(r3)
+ #endif
+-	/* go up one stack frame for SP */
+-	PPC_LL	r4,0(r1)
+-	PPC_STL	r4,GPR1(r3)
++	/* store current SP */
++	PPC_STL	r1,GPR1(r3)
+ 	/* get caller's LR */
++	PPC_LL	r4,0(r1)
+ 	PPC_LL	r0,LRSAVE(r4)
+ 	PPC_STL	r0,_LINK(r3)
+ 	mflr	r0
+diff --git a/arch/powerpc/kernel/signal_32.c b/arch/powerpc/kernel/signal_32.c
+index c114c7f25645c..7a718ed32b277 100644
+--- a/arch/powerpc/kernel/signal_32.c
++++ b/arch/powerpc/kernel/signal_32.c
+@@ -264,8 +264,9 @@ static void prepare_save_user_regs(int ctx_has_vsx_region)
+ #endif
+ }
+ 
+-static int __unsafe_save_user_regs(struct pt_regs *regs, struct mcontext __user *frame,
+-				   struct mcontext __user *tm_frame, int ctx_has_vsx_region)
++static __always_inline int
++__unsafe_save_user_regs(struct pt_regs *regs, struct mcontext __user *frame,
++			struct mcontext __user *tm_frame, int ctx_has_vsx_region)
+ {
+ 	unsigned long msr = regs->msr;
+ 
+@@ -364,8 +365,9 @@ static void prepare_save_tm_user_regs(void)
+ 		current->thread.ckvrsave = mfspr(SPRN_VRSAVE);
+ }
+ 
+-static int save_tm_user_regs_unsafe(struct pt_regs *regs, struct mcontext __user *frame,
+-				    struct mcontext __user *tm_frame, unsigned long msr)
++static __always_inline int
++save_tm_user_regs_unsafe(struct pt_regs *regs, struct mcontext __user *frame,
++			 struct mcontext __user *tm_frame, unsigned long msr)
+ {
+ 	/* Save both sets of general registers */
+ 	unsafe_save_general_regs(&current->thread.ckpt_regs, frame, failed);
+@@ -444,8 +446,9 @@ failed:
+ #else
+ static void prepare_save_tm_user_regs(void) { }
+ 
+-static int save_tm_user_regs_unsafe(struct pt_regs *regs, struct mcontext __user *frame,
+-				    struct mcontext __user *tm_frame, unsigned long msr)
++static __always_inline int
++save_tm_user_regs_unsafe(struct pt_regs *regs, struct mcontext __user *frame,
++			 struct mcontext __user *tm_frame, unsigned long msr)
+ {
+ 	return 0;
+ }
+diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
+index 2297aa764ecdb..e8db8c8efe359 100644
+--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
++++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
+@@ -745,9 +745,9 @@ static void free_pud_table(pud_t *pud_start, p4d_t *p4d)
+ }
+ 
+ static void remove_pte_table(pte_t *pte_start, unsigned long addr,
+-			     unsigned long end)
++			     unsigned long end, bool direct)
+ {
+-	unsigned long next;
++	unsigned long next, pages = 0;
+ 	pte_t *pte;
+ 
+ 	pte = pte_start + pte_index(addr);
+@@ -769,13 +769,16 @@ static void remove_pte_table(pte_t *pte_start, unsigned long addr,
+ 		}
+ 
+ 		pte_clear(&init_mm, addr, pte);
++		pages++;
+ 	}
++	if (direct)
++		update_page_count(mmu_virtual_psize, -pages);
+ }
+ 
+ static void __meminit remove_pmd_table(pmd_t *pmd_start, unsigned long addr,
+-			     unsigned long end)
++				       unsigned long end, bool direct)
+ {
+-	unsigned long next;
++	unsigned long next, pages = 0;
+ 	pte_t *pte_base;
+ 	pmd_t *pmd;
+ 
+@@ -793,19 +796,22 @@ static void __meminit remove_pmd_table(pmd_t *pmd_start, unsigned long addr,
+ 				continue;
+ 			}
+ 			pte_clear(&init_mm, addr, (pte_t *)pmd);
++			pages++;
+ 			continue;
+ 		}
+ 
+ 		pte_base = (pte_t *)pmd_page_vaddr(*pmd);
+-		remove_pte_table(pte_base, addr, next);
++		remove_pte_table(pte_base, addr, next, direct);
+ 		free_pte_table(pte_base, pmd);
+ 	}
++	if (direct)
++		update_page_count(MMU_PAGE_2M, -pages);
+ }
+ 
+ static void __meminit remove_pud_table(pud_t *pud_start, unsigned long addr,
+-			     unsigned long end)
++				       unsigned long end, bool direct)
+ {
+-	unsigned long next;
++	unsigned long next, pages = 0;
+ 	pmd_t *pmd_base;
+ 	pud_t *pud;
+ 
+@@ -823,16 +829,20 @@ static void __meminit remove_pud_table(pud_t *pud_start, unsigned long addr,
+ 				continue;
+ 			}
+ 			pte_clear(&init_mm, addr, (pte_t *)pud);
++			pages++;
+ 			continue;
+ 		}
+ 
+ 		pmd_base = pud_pgtable(*pud);
+-		remove_pmd_table(pmd_base, addr, next);
++		remove_pmd_table(pmd_base, addr, next, direct);
+ 		free_pmd_table(pmd_base, pud);
+ 	}
++	if (direct)
++		update_page_count(MMU_PAGE_1G, -pages);
+ }
+ 
+-static void __meminit remove_pagetable(unsigned long start, unsigned long end)
++static void __meminit remove_pagetable(unsigned long start, unsigned long end,
++				       bool direct)
+ {
+ 	unsigned long addr, next;
+ 	pud_t *pud_base;
+@@ -861,7 +871,7 @@ static void __meminit remove_pagetable(unsigned long start, unsigned long end)
+ 		}
+ 
+ 		pud_base = p4d_pgtable(*p4d);
+-		remove_pud_table(pud_base, addr, next);
++		remove_pud_table(pud_base, addr, next, direct);
+ 		free_pud_table(pud_base, p4d);
+ 	}
+ 
+@@ -884,7 +894,7 @@ int __meminit radix__create_section_mapping(unsigned long start,
+ 
+ int __meminit radix__remove_section_mapping(unsigned long start, unsigned long end)
+ {
+-	remove_pagetable(start, end);
++	remove_pagetable(start, end, true);
+ 	return 0;
+ }
+ #endif /* CONFIG_MEMORY_HOTPLUG */
+@@ -920,7 +930,7 @@ int __meminit radix__vmemmap_create_mapping(unsigned long start,
+ #ifdef CONFIG_MEMORY_HOTPLUG
+ void __meminit radix__vmemmap_remove_mapping(unsigned long start, unsigned long page_size)
+ {
+-	remove_pagetable(start, start + page_size);
++	remove_pagetable(start, start + page_size, false);
+ }
+ #endif
+ #endif
+diff --git a/arch/powerpc/mm/init_64.c b/arch/powerpc/mm/init_64.c
+index 05b0d584e50b8..fe1b83020e0df 100644
+--- a/arch/powerpc/mm/init_64.c
++++ b/arch/powerpc/mm/init_64.c
+@@ -189,7 +189,7 @@ static bool altmap_cross_boundary(struct vmem_altmap *altmap, unsigned long star
+ 	unsigned long nr_pfn = page_size / sizeof(struct page);
+ 	unsigned long start_pfn = page_to_pfn((struct page *)start);
+ 
+-	if ((start_pfn + nr_pfn) > altmap->end_pfn)
++	if ((start_pfn + nr_pfn - 1) > altmap->end_pfn)
+ 		return true;
+ 
+ 	if (start_pfn < altmap->base_pfn)
+diff --git a/arch/powerpc/platforms/powernv/pci-sriov.c b/arch/powerpc/platforms/powernv/pci-sriov.c
+index 7195133b26bb9..59882da3e7425 100644
+--- a/arch/powerpc/platforms/powernv/pci-sriov.c
++++ b/arch/powerpc/platforms/powernv/pci-sriov.c
+@@ -594,12 +594,12 @@ static void pnv_pci_sriov_disable(struct pci_dev *pdev)
+ 	struct pnv_iov_data   *iov;
+ 
+ 	iov = pnv_iov_get(pdev);
+-	num_vfs = iov->num_vfs;
+-	base_pe = iov->vf_pe_arr[0].pe_number;
+-
+ 	if (WARN_ON(!iov))
+ 		return;
+ 
++	num_vfs = iov->num_vfs;
++	base_pe = iov->vf_pe_arr[0].pe_number;
++
+ 	/* Release VF PEs */
+ 	pnv_ioda_release_vf_PE(pdev);
+ 
+diff --git a/arch/powerpc/platforms/powernv/vas-window.c b/arch/powerpc/platforms/powernv/vas-window.c
+index 0072682531d80..b664838008c12 100644
+--- a/arch/powerpc/platforms/powernv/vas-window.c
++++ b/arch/powerpc/platforms/powernv/vas-window.c
+@@ -1310,8 +1310,8 @@ int vas_win_close(struct vas_window *vwin)
+ 	/* if send window, drop reference to matching receive window */
+ 	if (window->tx_win) {
+ 		if (window->user_win) {
+-			put_vas_user_win_ref(&vwin->task_ref);
+ 			mm_context_remove_vas_window(vwin->task_ref.mm);
++			put_vas_user_win_ref(&vwin->task_ref);
+ 		}
+ 		put_rx_win(window->rxwin);
+ 	}
+diff --git a/arch/powerpc/platforms/pseries/vas.c b/arch/powerpc/platforms/pseries/vas.c
+index 513180467562b..9a44a98ba3420 100644
+--- a/arch/powerpc/platforms/pseries/vas.c
++++ b/arch/powerpc/platforms/pseries/vas.c
+@@ -507,8 +507,8 @@ static int vas_deallocate_window(struct vas_window *vwin)
+ 	vascaps[win->win_type].nr_open_windows--;
+ 	mutex_unlock(&vas_pseries_mutex);
+ 
+-	put_vas_user_win_ref(&vwin->task_ref);
+ 	mm_context_remove_vas_window(vwin->task_ref.mm);
++	put_vas_user_win_ref(&vwin->task_ref);
+ 
+ 	kfree(win);
+ 	return 0;
+diff --git a/arch/riscv/kernel/probes/uprobes.c b/arch/riscv/kernel/probes/uprobes.c
+index c976a21cd4bd5..194f166b2cc40 100644
+--- a/arch/riscv/kernel/probes/uprobes.c
++++ b/arch/riscv/kernel/probes/uprobes.c
+@@ -67,6 +67,7 @@ int arch_uprobe_post_xol(struct arch_uprobe *auprobe, struct pt_regs *regs)
+ 	struct uprobe_task *utask = current->utask;
+ 
+ 	WARN_ON_ONCE(current->thread.bad_cause != UPROBE_TRAP_NR);
++	current->thread.bad_cause = utask->autask.saved_cause;
+ 
+ 	instruction_pointer_set(regs, utask->vaddr + auprobe->insn_size);
+ 
+@@ -102,6 +103,7 @@ void arch_uprobe_abort_xol(struct arch_uprobe *auprobe, struct pt_regs *regs)
+ {
+ 	struct uprobe_task *utask = current->utask;
+ 
++	current->thread.bad_cause = utask->autask.saved_cause;
+ 	/*
+ 	 * Task has received a fatal signal, so reset back to probbed
+ 	 * address.
+diff --git a/arch/x86/coco/tdx/tdx.c b/arch/x86/coco/tdx/tdx.c
+index 055300e08fb38..3d191ec036fb7 100644
+--- a/arch/x86/coco/tdx/tdx.c
++++ b/arch/x86/coco/tdx/tdx.c
+@@ -840,6 +840,30 @@ static bool tdx_enc_status_changed(unsigned long vaddr, int numpages, bool enc)
+ 	return true;
+ }
+ 
++static bool tdx_enc_status_change_prepare(unsigned long vaddr, int numpages,
++					  bool enc)
++{
++	/*
++	 * Only handle shared->private conversion here.
++	 * See the comment in tdx_early_init().
++	 */
++	if (enc)
++		return tdx_enc_status_changed(vaddr, numpages, enc);
++	return true;
++}
++
++static bool tdx_enc_status_change_finish(unsigned long vaddr, int numpages,
++					 bool enc)
++{
++	/*
++	 * Only handle private->shared conversion here.
++	 * See the comment in tdx_early_init().
++	 */
++	if (!enc)
++		return tdx_enc_status_changed(vaddr, numpages, enc);
++	return true;
++}
++
+ void __init tdx_early_init(void)
+ {
+ 	u64 cc_mask;
+@@ -867,9 +891,30 @@ void __init tdx_early_init(void)
+ 	 */
+ 	physical_mask &= cc_mask - 1;
+ 
+-	x86_platform.guest.enc_cache_flush_required = tdx_cache_flush_required;
+-	x86_platform.guest.enc_tlb_flush_required   = tdx_tlb_flush_required;
+-	x86_platform.guest.enc_status_change_finish = tdx_enc_status_changed;
++	/*
++	 * The kernel mapping should match the TDX metadata for the page.
++	 * load_unaligned_zeropad() can touch memory *adjacent* to that which is
++	 * owned by the caller and can catch even _momentary_ mismatches.  Bad
++	 * things happen on mismatch:
++	 *
++	 *   - Private mapping => Shared Page  == Guest shutdown
++         *   - Shared mapping  => Private Page == Recoverable #VE
++	 *
++	 * guest.enc_status_change_prepare() converts the page from
++	 * shared=>private before the mapping becomes private.
++	 *
++	 * guest.enc_status_change_finish() converts the page from
++	 * private=>shared after the mapping becomes private.
++	 *
++	 * In both cases there is a temporary shared mapping to a private page,
++	 * which can result in a #VE.  But, there is never a private mapping to
++	 * a shared page.
++	 */
++	x86_platform.guest.enc_status_change_prepare = tdx_enc_status_change_prepare;
++	x86_platform.guest.enc_status_change_finish  = tdx_enc_status_change_finish;
++
++	x86_platform.guest.enc_cache_flush_required  = tdx_cache_flush_required;
++	x86_platform.guest.enc_tlb_flush_required    = tdx_tlb_flush_required;
+ 
+ 	pr_info("Guest detected\n");
+ }
+diff --git a/arch/x86/events/amd/core.c b/arch/x86/events/amd/core.c
+index bccea57dee81e..abadd5f234254 100644
+--- a/arch/x86/events/amd/core.c
++++ b/arch/x86/events/amd/core.c
+@@ -374,7 +374,7 @@ static int amd_pmu_hw_config(struct perf_event *event)
+ 
+ 	/* pass precise event sampling to ibs: */
+ 	if (event->attr.precise_ip && get_ibs_caps())
+-		return -ENOENT;
++		return forward_event_to_ibs(event);
+ 
+ 	if (has_branch_stack(event) && !x86_pmu.lbr_nr)
+ 		return -EOPNOTSUPP;
+diff --git a/arch/x86/events/amd/ibs.c b/arch/x86/events/amd/ibs.c
+index 64582954b5f67..3710148021916 100644
+--- a/arch/x86/events/amd/ibs.c
++++ b/arch/x86/events/amd/ibs.c
+@@ -190,7 +190,7 @@ static struct perf_ibs *get_ibs_pmu(int type)
+ }
+ 
+ /*
+- * Use IBS for precise event sampling:
++ * core pmu config -> IBS config
+  *
+  *  perf record -a -e cpu-cycles:p ...    # use ibs op counting cycle count
+  *  perf record -a -e r076:p ...          # same as -e cpu-cycles:p
+@@ -199,25 +199,9 @@ static struct perf_ibs *get_ibs_pmu(int type)
+  * IbsOpCntCtl (bit 19) of IBS Execution Control Register (IbsOpCtl,
+  * MSRC001_1033) is used to select either cycle or micro-ops counting
+  * mode.
+- *
+- * The rip of IBS samples has skid 0. Thus, IBS supports precise
+- * levels 1 and 2 and the PERF_EFLAGS_EXACT is set. In rare cases the
+- * rip is invalid when IBS was not able to record the rip correctly.
+- * We clear PERF_EFLAGS_EXACT and take the rip from pt_regs then.
+- *
+  */
+-static int perf_ibs_precise_event(struct perf_event *event, u64 *config)
++static int core_pmu_ibs_config(struct perf_event *event, u64 *config)
+ {
+-	switch (event->attr.precise_ip) {
+-	case 0:
+-		return -ENOENT;
+-	case 1:
+-	case 2:
+-		break;
+-	default:
+-		return -EOPNOTSUPP;
+-	}
+-
+ 	switch (event->attr.type) {
+ 	case PERF_TYPE_HARDWARE:
+ 		switch (event->attr.config) {
+@@ -243,22 +227,37 @@ static int perf_ibs_precise_event(struct perf_event *event, u64 *config)
+ 	return -EOPNOTSUPP;
+ }
+ 
++/*
++ * The rip of IBS samples has skid 0. Thus, IBS supports precise
++ * levels 1 and 2 and the PERF_EFLAGS_EXACT is set. In rare cases the
++ * rip is invalid when IBS was not able to record the rip correctly.
++ * We clear PERF_EFLAGS_EXACT and take the rip from pt_regs then.
++ */
++int forward_event_to_ibs(struct perf_event *event)
++{
++	u64 config = 0;
++
++	if (!event->attr.precise_ip || event->attr.precise_ip > 2)
++		return -EOPNOTSUPP;
++
++	if (!core_pmu_ibs_config(event, &config)) {
++		event->attr.type = perf_ibs_op.pmu.type;
++		event->attr.config = config;
++	}
++	return -ENOENT;
++}
++
+ static int perf_ibs_init(struct perf_event *event)
+ {
+ 	struct hw_perf_event *hwc = &event->hw;
+ 	struct perf_ibs *perf_ibs;
+ 	u64 max_cnt, config;
+-	int ret;
+ 
+ 	perf_ibs = get_ibs_pmu(event->attr.type);
+-	if (perf_ibs) {
+-		config = event->attr.config;
+-	} else {
+-		perf_ibs = &perf_ibs_op;
+-		ret = perf_ibs_precise_event(event, &config);
+-		if (ret)
+-			return ret;
+-	}
++	if (!perf_ibs)
++		return -ENOENT;
++
++	config = event->attr.config;
+ 
+ 	if (event->pmu != &perf_ibs->pmu)
+ 		return -ENOENT;
+diff --git a/arch/x86/include/asm/mtrr.h b/arch/x86/include/asm/mtrr.h
+index f0eeaf6e5f5f7..1bae790a553a5 100644
+--- a/arch/x86/include/asm/mtrr.h
++++ b/arch/x86/include/asm/mtrr.h
+@@ -23,14 +23,43 @@
+ #ifndef _ASM_X86_MTRR_H
+ #define _ASM_X86_MTRR_H
+ 
++#include <linux/bits.h>
+ #include <uapi/asm/mtrr.h>
+ 
++/* Defines for hardware MTRR registers. */
++#define MTRR_CAP_VCNT		GENMASK(7, 0)
++#define MTRR_CAP_FIX		BIT_MASK(8)
++#define MTRR_CAP_WC		BIT_MASK(10)
++
++#define MTRR_DEF_TYPE_TYPE	GENMASK(7, 0)
++#define MTRR_DEF_TYPE_FE	BIT_MASK(10)
++#define MTRR_DEF_TYPE_E		BIT_MASK(11)
++
++#define MTRR_DEF_TYPE_ENABLE	(MTRR_DEF_TYPE_FE | MTRR_DEF_TYPE_E)
++#define MTRR_DEF_TYPE_DISABLE	~(MTRR_DEF_TYPE_TYPE | MTRR_DEF_TYPE_ENABLE)
++
++#define MTRR_PHYSBASE_TYPE	GENMASK(7, 0)
++#define MTRR_PHYSBASE_RSVD	GENMASK(11, 8)
++
++#define MTRR_PHYSMASK_RSVD	GENMASK(10, 0)
++#define MTRR_PHYSMASK_V		BIT_MASK(11)
++
++struct mtrr_state_type {
++	struct mtrr_var_range var_ranges[MTRR_MAX_VAR_RANGES];
++	mtrr_type fixed_ranges[MTRR_NUM_FIXED_RANGES];
++	unsigned char enabled;
++	bool have_fixed;
++	mtrr_type def_type;
++};
++
+ /*
+  * The following functions are for use by other drivers that cannot use
+  * arch_phys_wc_add and arch_phys_wc_del.
+  */
+ # ifdef CONFIG_MTRR
+ void mtrr_bp_init(void);
++void mtrr_overwrite_state(struct mtrr_var_range *var, unsigned int num_var,
++			  mtrr_type def_type);
+ extern u8 mtrr_type_lookup(u64 addr, u64 end, u8 *uniform);
+ extern void mtrr_save_fixed_ranges(void *);
+ extern void mtrr_save_state(void);
+@@ -48,6 +77,12 @@ void mtrr_disable(void);
+ void mtrr_enable(void);
+ void mtrr_generic_set_state(void);
+ #  else
++static inline void mtrr_overwrite_state(struct mtrr_var_range *var,
++					unsigned int num_var,
++					mtrr_type def_type)
++{
++}
++
+ static inline u8 mtrr_type_lookup(u64 addr, u64 end, u8 *uniform)
+ {
+ 	/*
+@@ -121,7 +156,8 @@ struct mtrr_gentry32 {
+ #endif /* CONFIG_COMPAT */
+ 
+ /* Bit fields for enabled in struct mtrr_state_type */
+-#define MTRR_STATE_MTRR_FIXED_ENABLED	0x01
+-#define MTRR_STATE_MTRR_ENABLED		0x02
++#define MTRR_STATE_SHIFT		10
++#define MTRR_STATE_MTRR_FIXED_ENABLED	(MTRR_DEF_TYPE_FE >> MTRR_STATE_SHIFT)
++#define MTRR_STATE_MTRR_ENABLED		(MTRR_DEF_TYPE_E >> MTRR_STATE_SHIFT)
+ 
+ #endif /* _ASM_X86_MTRR_H */
+diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
+index abf09882f58b6..f1a46500a2753 100644
+--- a/arch/x86/include/asm/perf_event.h
++++ b/arch/x86/include/asm/perf_event.h
+@@ -478,8 +478,10 @@ struct pebs_xmm {
+ 
+ #ifdef CONFIG_X86_LOCAL_APIC
+ extern u32 get_ibs_caps(void);
++extern int forward_event_to_ibs(struct perf_event *event);
+ #else
+ static inline u32 get_ibs_caps(void) { return 0; }
++static inline int forward_event_to_ibs(struct perf_event *event) { return -ENOENT; }
+ #endif
+ 
+ #ifdef CONFIG_PERF_EVENTS
+diff --git a/arch/x86/include/asm/pgtable_64.h b/arch/x86/include/asm/pgtable_64.h
+index 7929327abe009..a629b1b9f65a6 100644
+--- a/arch/x86/include/asm/pgtable_64.h
++++ b/arch/x86/include/asm/pgtable_64.h
+@@ -237,8 +237,8 @@ static inline void native_pgd_clear(pgd_t *pgd)
+ 
+ #define __pte_to_swp_entry(pte)		((swp_entry_t) { pte_val((pte)) })
+ #define __pmd_to_swp_entry(pmd)		((swp_entry_t) { pmd_val((pmd)) })
+-#define __swp_entry_to_pte(x)		((pte_t) { .pte = (x).val })
+-#define __swp_entry_to_pmd(x)		((pmd_t) { .pmd = (x).val })
++#define __swp_entry_to_pte(x)		(__pte((x).val))
++#define __swp_entry_to_pmd(x)		(__pmd((x).val))
+ 
+ extern void cleanup_highmap(void);
+ 
+diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h
+index ebc271bb6d8ed..a0a58c4122ec3 100644
+--- a/arch/x86/include/asm/sev.h
++++ b/arch/x86/include/asm/sev.h
+@@ -187,12 +187,12 @@ static inline int pvalidate(unsigned long vaddr, bool rmp_psize, bool validate)
+ }
+ void setup_ghcb(void);
+ void __init early_snp_set_memory_private(unsigned long vaddr, unsigned long paddr,
+-					 unsigned int npages);
++					 unsigned long npages);
+ void __init early_snp_set_memory_shared(unsigned long vaddr, unsigned long paddr,
+-					unsigned int npages);
++					unsigned long npages);
+ void __init snp_prep_memory(unsigned long paddr, unsigned int sz, enum psc_op op);
+-void snp_set_memory_shared(unsigned long vaddr, unsigned int npages);
+-void snp_set_memory_private(unsigned long vaddr, unsigned int npages);
++void snp_set_memory_shared(unsigned long vaddr, unsigned long npages);
++void snp_set_memory_private(unsigned long vaddr, unsigned long npages);
+ void snp_set_wakeup_secondary_cpu(void);
+ bool snp_init(struct boot_params *bp);
+ void __init __noreturn snp_abort(void);
+@@ -207,12 +207,12 @@ static inline int pvalidate(unsigned long vaddr, bool rmp_psize, bool validate)
+ static inline int rmpadjust(unsigned long vaddr, bool rmp_psize, unsigned long attrs) { return 0; }
+ static inline void setup_ghcb(void) { }
+ static inline void __init
+-early_snp_set_memory_private(unsigned long vaddr, unsigned long paddr, unsigned int npages) { }
++early_snp_set_memory_private(unsigned long vaddr, unsigned long paddr, unsigned long npages) { }
+ static inline void __init
+-early_snp_set_memory_shared(unsigned long vaddr, unsigned long paddr, unsigned int npages) { }
++early_snp_set_memory_shared(unsigned long vaddr, unsigned long paddr, unsigned long npages) { }
+ static inline void __init snp_prep_memory(unsigned long paddr, unsigned int sz, enum psc_op op) { }
+-static inline void snp_set_memory_shared(unsigned long vaddr, unsigned int npages) { }
+-static inline void snp_set_memory_private(unsigned long vaddr, unsigned int npages) { }
++static inline void snp_set_memory_shared(unsigned long vaddr, unsigned long npages) { }
++static inline void snp_set_memory_private(unsigned long vaddr, unsigned long npages) { }
+ static inline void snp_set_wakeup_secondary_cpu(void) { }
+ static inline bool snp_init(struct boot_params *bp) { return false; }
+ static inline void snp_abort(void) { }
+diff --git a/arch/x86/include/asm/x86_init.h b/arch/x86/include/asm/x86_init.h
+index c1c8c581759d6..034e62838b284 100644
+--- a/arch/x86/include/asm/x86_init.h
++++ b/arch/x86/include/asm/x86_init.h
+@@ -150,7 +150,7 @@ struct x86_init_acpi {
+  * @enc_cache_flush_required	Returns true if a cache flush is needed before changing page encryption status
+  */
+ struct x86_guest {
+-	void (*enc_status_change_prepare)(unsigned long vaddr, int npages, bool enc);
++	bool (*enc_status_change_prepare)(unsigned long vaddr, int npages, bool enc);
+ 	bool (*enc_status_change_finish)(unsigned long vaddr, int npages, bool enc);
+ 	bool (*enc_tlb_flush_required)(bool enc);
+ 	bool (*enc_cache_flush_required)(void);
+diff --git a/arch/x86/include/uapi/asm/mtrr.h b/arch/x86/include/uapi/asm/mtrr.h
+index 376563f2bac1f..ab194c8316259 100644
+--- a/arch/x86/include/uapi/asm/mtrr.h
++++ b/arch/x86/include/uapi/asm/mtrr.h
+@@ -81,14 +81,6 @@ typedef __u8 mtrr_type;
+ #define MTRR_NUM_FIXED_RANGES 88
+ #define MTRR_MAX_VAR_RANGES 256
+ 
+-struct mtrr_state_type {
+-	struct mtrr_var_range var_ranges[MTRR_MAX_VAR_RANGES];
+-	mtrr_type fixed_ranges[MTRR_NUM_FIXED_RANGES];
+-	unsigned char enabled;
+-	unsigned char have_fixed;
+-	mtrr_type def_type;
+-};
+-
+ #define MTRRphysBase_MSR(reg) (0x200 + 2 * (reg))
+ #define MTRRphysMask_MSR(reg) (0x200 + 2 * (reg) + 1)
+ 
+diff --git a/arch/x86/kernel/cpu/mtrr/cleanup.c b/arch/x86/kernel/cpu/mtrr/cleanup.c
+index b5f43049fa5f7..ca2d567e729e2 100644
+--- a/arch/x86/kernel/cpu/mtrr/cleanup.c
++++ b/arch/x86/kernel/cpu/mtrr/cleanup.c
+@@ -173,7 +173,7 @@ early_param("mtrr_cleanup_debug", mtrr_cleanup_debug_setup);
+ 
+ static void __init
+ set_var_mtrr(unsigned int reg, unsigned long basek, unsigned long sizek,
+-	     unsigned char type, unsigned int address_bits)
++	     unsigned char type)
+ {
+ 	u32 base_lo, base_hi, mask_lo, mask_hi;
+ 	u64 base, mask;
+@@ -183,7 +183,7 @@ set_var_mtrr(unsigned int reg, unsigned long basek, unsigned long sizek,
+ 		return;
+ 	}
+ 
+-	mask = (1ULL << address_bits) - 1;
++	mask = (1ULL << boot_cpu_data.x86_phys_bits) - 1;
+ 	mask &= ~((((u64)sizek) << 10) - 1);
+ 
+ 	base = ((u64)basek) << 10;
+@@ -209,7 +209,7 @@ save_var_mtrr(unsigned int reg, unsigned long basek, unsigned long sizek,
+ 	range_state[reg].type = type;
+ }
+ 
+-static void __init set_var_mtrr_all(unsigned int address_bits)
++static void __init set_var_mtrr_all(void)
+ {
+ 	unsigned long basek, sizek;
+ 	unsigned char type;
+@@ -220,7 +220,7 @@ static void __init set_var_mtrr_all(unsigned int address_bits)
+ 		sizek = range_state[reg].size_pfn << (PAGE_SHIFT - 10);
+ 		type = range_state[reg].type;
+ 
+-		set_var_mtrr(reg, basek, sizek, type, address_bits);
++		set_var_mtrr(reg, basek, sizek, type);
+ 	}
+ }
+ 
+@@ -680,7 +680,7 @@ static int __init mtrr_search_optimal_index(void)
+ 	return index_good;
+ }
+ 
+-int __init mtrr_cleanup(unsigned address_bits)
++int __init mtrr_cleanup(void)
+ {
+ 	unsigned long x_remove_base, x_remove_size;
+ 	unsigned long base, size, def, dummy;
+@@ -742,7 +742,7 @@ int __init mtrr_cleanup(unsigned address_bits)
+ 		mtrr_print_out_one_result(i);
+ 
+ 		if (!result[i].bad) {
+-			set_var_mtrr_all(address_bits);
++			set_var_mtrr_all();
+ 			pr_debug("New variable MTRRs\n");
+ 			print_out_mtrr_range_state();
+ 			return 1;
+@@ -786,7 +786,7 @@ int __init mtrr_cleanup(unsigned address_bits)
+ 		gran_size = result[i].gran_sizek;
+ 		gran_size <<= 10;
+ 		x86_setup_var_mtrrs(range, nr_range, chunk_size, gran_size);
+-		set_var_mtrr_all(address_bits);
++		set_var_mtrr_all();
+ 		pr_debug("New variable MTRRs\n");
+ 		print_out_mtrr_range_state();
+ 		return 1;
+@@ -802,7 +802,7 @@ int __init mtrr_cleanup(unsigned address_bits)
+ 	return 0;
+ }
+ #else
+-int __init mtrr_cleanup(unsigned address_bits)
++int __init mtrr_cleanup(void)
+ {
+ 	return 0;
+ }
+@@ -890,7 +890,7 @@ int __init mtrr_trim_uncached_memory(unsigned long end_pfn)
+ 		return 0;
+ 
+ 	rdmsr(MSR_MTRRdefType, def, dummy);
+-	def &= 0xff;
++	def &= MTRR_DEF_TYPE_TYPE;
+ 	if (def != MTRR_TYPE_UNCACHABLE)
+ 		return 0;
+ 
+diff --git a/arch/x86/kernel/cpu/mtrr/generic.c b/arch/x86/kernel/cpu/mtrr/generic.c
+index ee09d359e08f0..e81d832475a1f 100644
+--- a/arch/x86/kernel/cpu/mtrr/generic.c
++++ b/arch/x86/kernel/cpu/mtrr/generic.c
+@@ -8,10 +8,12 @@
+ #include <linux/init.h>
+ #include <linux/io.h>
+ #include <linux/mm.h>
+-
++#include <linux/cc_platform.h>
+ #include <asm/processor-flags.h>
+ #include <asm/cacheinfo.h>
+ #include <asm/cpufeature.h>
++#include <asm/hypervisor.h>
++#include <asm/mshyperv.h>
+ #include <asm/tlbflush.h>
+ #include <asm/mtrr.h>
+ #include <asm/msr.h>
+@@ -38,6 +40,9 @@ u64 mtrr_tom2;
+ struct mtrr_state_type mtrr_state;
+ EXPORT_SYMBOL_GPL(mtrr_state);
+ 
++/* Reserved bits in the high portion of the MTRRphysBaseN MSR. */
++u32 phys_hi_rsvd;
++
+ /*
+  * BIOS is expected to clear MtrrFixDramModEn bit, see for example
+  * "BIOS and Kernel Developer's Guide for the AMD Athlon 64 and AMD
+@@ -69,10 +74,9 @@ static u64 get_mtrr_size(u64 mask)
+ {
+ 	u64 size;
+ 
+-	mask >>= PAGE_SHIFT;
+-	mask |= size_or_mask;
++	mask |= (u64)phys_hi_rsvd << 32;
+ 	size = -mask;
+-	size <<= PAGE_SHIFT;
++
+ 	return size;
+ }
+ 
+@@ -171,7 +175,7 @@ static u8 mtrr_type_lookup_variable(u64 start, u64 end, u64 *partial_end,
+ 	for (i = 0; i < num_var_ranges; ++i) {
+ 		unsigned short start_state, end_state, inclusive;
+ 
+-		if (!(mtrr_state.var_ranges[i].mask_lo & (1 << 11)))
++		if (!(mtrr_state.var_ranges[i].mask_lo & MTRR_PHYSMASK_V))
+ 			continue;
+ 
+ 		base = (((u64)mtrr_state.var_ranges[i].base_hi) << 32) +
+@@ -223,7 +227,7 @@ static u8 mtrr_type_lookup_variable(u64 start, u64 end, u64 *partial_end,
+ 		if ((start & mask) != (base & mask))
+ 			continue;
+ 
+-		curr_match = mtrr_state.var_ranges[i].base_lo & 0xff;
++		curr_match = mtrr_state.var_ranges[i].base_lo & MTRR_PHYSBASE_TYPE;
+ 		if (prev_match == MTRR_TYPE_INVALID) {
+ 			prev_match = curr_match;
+ 			continue;
+@@ -240,6 +244,62 @@ static u8 mtrr_type_lookup_variable(u64 start, u64 end, u64 *partial_end,
+ 	return mtrr_state.def_type;
+ }
+ 
++/**
++ * mtrr_overwrite_state - set static MTRR state
++ *
++ * Used to set MTRR state via different means (e.g. with data obtained from
++ * a hypervisor).
++ * Is allowed only for special cases when running virtualized. Must be called
++ * from the x86_init.hyper.init_platform() hook.  It can be called only once.
++ * The MTRR state can't be changed afterwards.  To ensure that, X86_FEATURE_MTRR
++ * is cleared.
++ */
++void mtrr_overwrite_state(struct mtrr_var_range *var, unsigned int num_var,
++			  mtrr_type def_type)
++{
++	unsigned int i;
++
++	/* Only allowed to be called once before mtrr_bp_init(). */
++	if (WARN_ON_ONCE(mtrr_state_set))
++		return;
++
++	/* Only allowed when running virtualized. */
++	if (!cpu_feature_enabled(X86_FEATURE_HYPERVISOR))
++		return;
++
++	/*
++	 * Only allowed for special virtualization cases:
++	 * - when running as Hyper-V, SEV-SNP guest using vTOM
++	 * - when running as Xen PV guest
++	 * - when running as SEV-SNP or TDX guest to avoid unnecessary
++	 *   VMM communication/Virtualization exceptions (#VC, #VE)
++	 */
++	if (!cc_platform_has(CC_ATTR_GUEST_SEV_SNP) &&
++	    !hv_is_isolation_supported() &&
++	    !cpu_feature_enabled(X86_FEATURE_XENPV) &&
++	    !cpu_feature_enabled(X86_FEATURE_TDX_GUEST))
++		return;
++
++	/* Disable MTRR in order to disable MTRR modifications. */
++	setup_clear_cpu_cap(X86_FEATURE_MTRR);
++
++	if (var) {
++		if (num_var > MTRR_MAX_VAR_RANGES) {
++			pr_warn("Trying to overwrite MTRR state with %u variable entries\n",
++				num_var);
++			num_var = MTRR_MAX_VAR_RANGES;
++		}
++		for (i = 0; i < num_var; i++)
++			mtrr_state.var_ranges[i] = var[i];
++		num_var_ranges = num_var;
++	}
++
++	mtrr_state.def_type = def_type;
++	mtrr_state.enabled |= MTRR_STATE_MTRR_ENABLED;
++
++	mtrr_state_set = 1;
++}
++
+ /**
+  * mtrr_type_lookup - look up memory type in MTRR
+  *
+@@ -422,10 +482,10 @@ static void __init print_mtrr_state(void)
+ 	}
+ 	pr_debug("MTRR variable ranges %sabled:\n",
+ 		 mtrr_state.enabled & MTRR_STATE_MTRR_ENABLED ? "en" : "dis");
+-	high_width = (__ffs64(size_or_mask) - (32 - PAGE_SHIFT) + 3) / 4;
++	high_width = (boot_cpu_data.x86_phys_bits - (32 - PAGE_SHIFT) + 3) / 4;
+ 
+ 	for (i = 0; i < num_var_ranges; ++i) {
+-		if (mtrr_state.var_ranges[i].mask_lo & (1 << 11))
++		if (mtrr_state.var_ranges[i].mask_lo & MTRR_PHYSMASK_V)
+ 			pr_debug("  %u base %0*X%05X000 mask %0*X%05X000 %s\n",
+ 				 i,
+ 				 high_width,
+@@ -434,7 +494,8 @@ static void __init print_mtrr_state(void)
+ 				 high_width,
+ 				 mtrr_state.var_ranges[i].mask_hi,
+ 				 mtrr_state.var_ranges[i].mask_lo >> 12,
+-				 mtrr_attrib_to_str(mtrr_state.var_ranges[i].base_lo & 0xff));
++				 mtrr_attrib_to_str(mtrr_state.var_ranges[i].base_lo &
++						    MTRR_PHYSBASE_TYPE));
+ 		else
+ 			pr_debug("  %u disabled\n", i);
+ 	}
+@@ -452,7 +513,7 @@ bool __init get_mtrr_state(void)
+ 	vrs = mtrr_state.var_ranges;
+ 
+ 	rdmsr(MSR_MTRRcap, lo, dummy);
+-	mtrr_state.have_fixed = (lo >> 8) & 1;
++	mtrr_state.have_fixed = lo & MTRR_CAP_FIX;
+ 
+ 	for (i = 0; i < num_var_ranges; i++)
+ 		get_mtrr_var_range(i, &vrs[i]);
+@@ -460,8 +521,8 @@ bool __init get_mtrr_state(void)
+ 		get_fixed_ranges(mtrr_state.fixed_ranges);
+ 
+ 	rdmsr(MSR_MTRRdefType, lo, dummy);
+-	mtrr_state.def_type = (lo & 0xff);
+-	mtrr_state.enabled = (lo & 0xc00) >> 10;
++	mtrr_state.def_type = lo & MTRR_DEF_TYPE_TYPE;
++	mtrr_state.enabled = (lo & MTRR_DEF_TYPE_ENABLE) >> MTRR_STATE_SHIFT;
+ 
+ 	if (amd_special_default_mtrr()) {
+ 		unsigned low, high;
+@@ -574,7 +635,7 @@ static void generic_get_mtrr(unsigned int reg, unsigned long *base,
+ 
+ 	rdmsr(MTRRphysMask_MSR(reg), mask_lo, mask_hi);
+ 
+-	if ((mask_lo & 0x800) == 0) {
++	if (!(mask_lo & MTRR_PHYSMASK_V)) {
+ 		/*  Invalid (i.e. free) range */
+ 		*base = 0;
+ 		*size = 0;
+@@ -585,8 +646,8 @@ static void generic_get_mtrr(unsigned int reg, unsigned long *base,
+ 	rdmsr(MTRRphysBase_MSR(reg), base_lo, base_hi);
+ 
+ 	/* Work out the shifted address mask: */
+-	tmp = (u64)mask_hi << (32 - PAGE_SHIFT) | mask_lo >> PAGE_SHIFT;
+-	mask = size_or_mask | tmp;
++	tmp = (u64)mask_hi << 32 | (mask_lo & PAGE_MASK);
++	mask = (u64)phys_hi_rsvd << 32 | tmp;
+ 
+ 	/* Expand tmp with high bits to all 1s: */
+ 	hi = fls64(tmp);
+@@ -604,9 +665,9 @@ static void generic_get_mtrr(unsigned int reg, unsigned long *base,
+ 	 * This works correctly if size is a power of two, i.e. a
+ 	 * contiguous range:
+ 	 */
+-	*size = -mask;
++	*size = -mask >> PAGE_SHIFT;
+ 	*base = (u64)base_hi << (32 - PAGE_SHIFT) | base_lo >> PAGE_SHIFT;
+-	*type = base_lo & 0xff;
++	*type = base_lo & MTRR_PHYSBASE_TYPE;
+ 
+ out_put_cpu:
+ 	put_cpu();
+@@ -644,9 +705,8 @@ static bool set_mtrr_var_ranges(unsigned int index, struct mtrr_var_range *vr)
+ 	bool changed = false;
+ 
+ 	rdmsr(MTRRphysBase_MSR(index), lo, hi);
+-	if ((vr->base_lo & 0xfffff0ffUL) != (lo & 0xfffff0ffUL)
+-	    || (vr->base_hi & (size_and_mask >> (32 - PAGE_SHIFT))) !=
+-		(hi & (size_and_mask >> (32 - PAGE_SHIFT)))) {
++	if ((vr->base_lo & ~MTRR_PHYSBASE_RSVD) != (lo & ~MTRR_PHYSBASE_RSVD)
++	    || (vr->base_hi & ~phys_hi_rsvd) != (hi & ~phys_hi_rsvd)) {
+ 
+ 		mtrr_wrmsr(MTRRphysBase_MSR(index), vr->base_lo, vr->base_hi);
+ 		changed = true;
+@@ -654,9 +714,8 @@ static bool set_mtrr_var_ranges(unsigned int index, struct mtrr_var_range *vr)
+ 
+ 	rdmsr(MTRRphysMask_MSR(index), lo, hi);
+ 
+-	if ((vr->mask_lo & 0xfffff800UL) != (lo & 0xfffff800UL)
+-	    || (vr->mask_hi & (size_and_mask >> (32 - PAGE_SHIFT))) !=
+-		(hi & (size_and_mask >> (32 - PAGE_SHIFT)))) {
++	if ((vr->mask_lo & ~MTRR_PHYSMASK_RSVD) != (lo & ~MTRR_PHYSMASK_RSVD)
++	    || (vr->mask_hi & ~phys_hi_rsvd) != (hi & ~phys_hi_rsvd)) {
+ 		mtrr_wrmsr(MTRRphysMask_MSR(index), vr->mask_lo, vr->mask_hi);
+ 		changed = true;
+ 	}
+@@ -691,11 +750,12 @@ static unsigned long set_mtrr_state(void)
+ 	 * Set_mtrr_restore restores the old value of MTRRdefType,
+ 	 * so to set it we fiddle with the saved value:
+ 	 */
+-	if ((deftype_lo & 0xff) != mtrr_state.def_type
+-	    || ((deftype_lo & 0xc00) >> 10) != mtrr_state.enabled) {
++	if ((deftype_lo & MTRR_DEF_TYPE_TYPE) != mtrr_state.def_type ||
++	    ((deftype_lo & MTRR_DEF_TYPE_ENABLE) >> MTRR_STATE_SHIFT) != mtrr_state.enabled) {
+ 
+-		deftype_lo = (deftype_lo & ~0xcff) | mtrr_state.def_type |
+-			     (mtrr_state.enabled << 10);
++		deftype_lo = (deftype_lo & MTRR_DEF_TYPE_DISABLE) |
++			     mtrr_state.def_type |
++			     (mtrr_state.enabled << MTRR_STATE_SHIFT);
+ 		change_mask |= MTRR_CHANGE_MASK_DEFTYPE;
+ 	}
+ 
+@@ -708,7 +768,7 @@ void mtrr_disable(void)
+ 	rdmsr(MSR_MTRRdefType, deftype_lo, deftype_hi);
+ 
+ 	/* Disable MTRRs, and set the default type to uncached */
+-	mtrr_wrmsr(MSR_MTRRdefType, deftype_lo & ~0xcff, deftype_hi);
++	mtrr_wrmsr(MSR_MTRRdefType, deftype_lo & MTRR_DEF_TYPE_DISABLE, deftype_hi);
+ }
+ 
+ void mtrr_enable(void)
+@@ -762,9 +822,9 @@ static void generic_set_mtrr(unsigned int reg, unsigned long base,
+ 		memset(vr, 0, sizeof(struct mtrr_var_range));
+ 	} else {
+ 		vr->base_lo = base << PAGE_SHIFT | type;
+-		vr->base_hi = (base & size_and_mask) >> (32 - PAGE_SHIFT);
+-		vr->mask_lo = -size << PAGE_SHIFT | 0x800;
+-		vr->mask_hi = (-size & size_and_mask) >> (32 - PAGE_SHIFT);
++		vr->base_hi = (base >> (32 - PAGE_SHIFT)) & ~phys_hi_rsvd;
++		vr->mask_lo = -size << PAGE_SHIFT | MTRR_PHYSMASK_V;
++		vr->mask_hi = (-size >> (32 - PAGE_SHIFT)) & ~phys_hi_rsvd;
+ 
+ 		mtrr_wrmsr(MTRRphysBase_MSR(reg), vr->base_lo, vr->base_hi);
+ 		mtrr_wrmsr(MTRRphysMask_MSR(reg), vr->mask_lo, vr->mask_hi);
+@@ -817,7 +877,7 @@ static int generic_have_wrcomb(void)
+ {
+ 	unsigned long config, dummy;
+ 	rdmsr(MSR_MTRRcap, config, dummy);
+-	return config & (1 << 10);
++	return config & MTRR_CAP_WC;
+ }
+ 
+ int positive_have_wrcomb(void)
+diff --git a/arch/x86/kernel/cpu/mtrr/mtrr.c b/arch/x86/kernel/cpu/mtrr/mtrr.c
+index 783f3210d5827..be35a0b09604d 100644
+--- a/arch/x86/kernel/cpu/mtrr/mtrr.c
++++ b/arch/x86/kernel/cpu/mtrr/mtrr.c
+@@ -67,8 +67,6 @@ static bool mtrr_enabled(void)
+ unsigned int mtrr_usage_table[MTRR_MAX_VAR_RANGES];
+ static DEFINE_MUTEX(mtrr_mutex);
+ 
+-u64 size_or_mask, size_and_mask;
+-
+ const struct mtrr_ops *mtrr_if;
+ 
+ /*  Returns non-zero if we have the write-combining memory type  */
+@@ -117,7 +115,7 @@ static void __init set_num_var_ranges(bool use_generic)
+ 	else if (is_cpu(CYRIX) || is_cpu(CENTAUR))
+ 		config = 8;
+ 
+-	num_var_ranges = config & 0xff;
++	num_var_ranges = config & MTRR_CAP_VCNT;
+ }
+ 
+ static void __init init_table(void)
+@@ -619,77 +617,46 @@ static struct syscore_ops mtrr_syscore_ops = {
+ 
+ int __initdata changed_by_mtrr_cleanup;
+ 
+-#define SIZE_OR_MASK_BITS(n)  (~((1ULL << ((n) - PAGE_SHIFT)) - 1))
+ /**
+- * mtrr_bp_init - initialize mtrrs on the boot CPU
++ * mtrr_bp_init - initialize MTRRs on the boot CPU
+  *
+  * This needs to be called early; before any of the other CPUs are
+  * initialized (i.e. before smp_init()).
+- *
+  */
+ void __init mtrr_bp_init(void)
+ {
++	bool generic_mtrrs = cpu_feature_enabled(X86_FEATURE_MTRR);
+ 	const char *why = "(not available)";
+-	u32 phys_addr;
+ 
+-	phys_addr = 32;
+-
+-	if (boot_cpu_has(X86_FEATURE_MTRR)) {
+-		mtrr_if = &generic_mtrr_ops;
+-		size_or_mask = SIZE_OR_MASK_BITS(36);
+-		size_and_mask = 0x00f00000;
+-		phys_addr = 36;
++	phys_hi_rsvd = GENMASK(31, boot_cpu_data.x86_phys_bits - 32);
+ 
++	if (!generic_mtrrs && mtrr_state.enabled) {
+ 		/*
+-		 * This is an AMD specific MSR, but we assume(hope?) that
+-		 * Intel will implement it too when they extend the address
+-		 * bus of the Xeon.
++		 * Software overwrite of MTRR state, only for generic case.
++		 * Note that X86_FEATURE_MTRR has been reset in this case.
+ 		 */
+-		if (cpuid_eax(0x80000000) >= 0x80000008) {
+-			phys_addr = cpuid_eax(0x80000008) & 0xff;
+-			/* CPUID workaround for Intel 0F33/0F34 CPU */
+-			if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL &&
+-			    boot_cpu_data.x86 == 0xF &&
+-			    boot_cpu_data.x86_model == 0x3 &&
+-			    (boot_cpu_data.x86_stepping == 0x3 ||
+-			     boot_cpu_data.x86_stepping == 0x4))
+-				phys_addr = 36;
+-
+-			size_or_mask = SIZE_OR_MASK_BITS(phys_addr);
+-			size_and_mask = ~size_or_mask & 0xfffff00000ULL;
+-		} else if (boot_cpu_data.x86_vendor == X86_VENDOR_CENTAUR &&
+-			   boot_cpu_data.x86 == 6) {
+-			/*
+-			 * VIA C* family have Intel style MTRRs,
+-			 * but don't support PAE
+-			 */
+-			size_or_mask = SIZE_OR_MASK_BITS(32);
+-			size_and_mask = 0;
+-			phys_addr = 32;
+-		}
++		init_table();
++		pr_info("MTRRs set to read-only\n");
++
++		return;
++	}
++
++	if (generic_mtrrs) {
++		mtrr_if = &generic_mtrr_ops;
+ 	} else {
+ 		switch (boot_cpu_data.x86_vendor) {
+ 		case X86_VENDOR_AMD:
+-			if (cpu_feature_enabled(X86_FEATURE_K6_MTRR)) {
+-				/* Pre-Athlon (K6) AMD CPU MTRRs */
++			/* Pre-Athlon (K6) AMD CPU MTRRs */
++			if (cpu_feature_enabled(X86_FEATURE_K6_MTRR))
+ 				mtrr_if = &amd_mtrr_ops;
+-				size_or_mask = SIZE_OR_MASK_BITS(32);
+-				size_and_mask = 0;
+-			}
+ 			break;
+ 		case X86_VENDOR_CENTAUR:
+-			if (cpu_feature_enabled(X86_FEATURE_CENTAUR_MCR)) {
++			if (cpu_feature_enabled(X86_FEATURE_CENTAUR_MCR))
+ 				mtrr_if = &centaur_mtrr_ops;
+-				size_or_mask = SIZE_OR_MASK_BITS(32);
+-				size_and_mask = 0;
+-			}
+ 			break;
+ 		case X86_VENDOR_CYRIX:
+-			if (cpu_feature_enabled(X86_FEATURE_CYRIX_ARR)) {
++			if (cpu_feature_enabled(X86_FEATURE_CYRIX_ARR))
+ 				mtrr_if = &cyrix_mtrr_ops;
+-				size_or_mask = SIZE_OR_MASK_BITS(32);
+-				size_and_mask = 0;
+-			}
+ 			break;
+ 		default:
+ 			break;
+@@ -703,7 +670,7 @@ void __init mtrr_bp_init(void)
+ 			/* BIOS may override */
+ 			if (get_mtrr_state()) {
+ 				memory_caching_control |= CACHE_MTRR;
+-				changed_by_mtrr_cleanup = mtrr_cleanup(phys_addr);
++				changed_by_mtrr_cleanup = mtrr_cleanup();
+ 			} else {
+ 				mtrr_if = NULL;
+ 				why = "by BIOS";
+diff --git a/arch/x86/kernel/cpu/mtrr/mtrr.h b/arch/x86/kernel/cpu/mtrr/mtrr.h
+index 02eb5871492d0..59e8fb26bf9dd 100644
+--- a/arch/x86/kernel/cpu/mtrr/mtrr.h
++++ b/arch/x86/kernel/cpu/mtrr/mtrr.h
+@@ -51,7 +51,6 @@ void fill_mtrr_var_range(unsigned int index,
+ 		u32 base_lo, u32 base_hi, u32 mask_lo, u32 mask_hi);
+ bool get_mtrr_state(void);
+ 
+-extern u64 size_or_mask, size_and_mask;
+ extern const struct mtrr_ops *mtrr_if;
+ 
+ #define is_cpu(vnd)	(mtrr_if && mtrr_if->vendor == X86_VENDOR_##vnd)
+@@ -59,6 +58,7 @@ extern const struct mtrr_ops *mtrr_if;
+ extern unsigned int num_var_ranges;
+ extern u64 mtrr_tom2;
+ extern struct mtrr_state_type mtrr_state;
++extern u32 phys_hi_rsvd;
+ 
+ void mtrr_state_warn(void);
+ const char *mtrr_attrib_to_str(int x);
+@@ -70,4 +70,4 @@ extern const struct mtrr_ops cyrix_mtrr_ops;
+ extern const struct mtrr_ops centaur_mtrr_ops;
+ 
+ extern int changed_by_mtrr_cleanup;
+-extern int mtrr_cleanup(unsigned address_bits);
++extern int mtrr_cleanup(void);
+diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+index 6ad33f355861f..61cdd9b1bb6d8 100644
+--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
++++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+@@ -726,11 +726,15 @@ unlock:
+ static void show_rdt_tasks(struct rdtgroup *r, struct seq_file *s)
+ {
+ 	struct task_struct *p, *t;
++	pid_t pid;
+ 
+ 	rcu_read_lock();
+ 	for_each_process_thread(p, t) {
+-		if (is_closid_match(t, r) || is_rmid_match(t, r))
+-			seq_printf(s, "%d\n", t->pid);
++		if (is_closid_match(t, r) || is_rmid_match(t, r)) {
++			pid = task_pid_vnr(t);
++			if (pid)
++				seq_printf(s, "%d\n", pid);
++		}
+ 	}
+ 	rcu_read_unlock();
+ }
+diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
+index 16babff771bdf..0cccfeb67c3ad 100644
+--- a/arch/x86/kernel/setup.c
++++ b/arch/x86/kernel/setup.c
+@@ -1037,6 +1037,8 @@ void __init setup_arch(char **cmdline_p)
+ 	/*
+ 	 * VMware detection requires dmi to be available, so this
+ 	 * needs to be done after dmi_setup(), for the boot CPU.
++	 * For some guest types (Xen PV, SEV-SNP, TDX) it is required to be
++	 * called before cache_bp_init() for setting up MTRR state.
+ 	 */
+ 	init_hypervisor_platform();
+ 
+diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
+index 3f664ab277c49..45ef3926381f8 100644
+--- a/arch/x86/kernel/sev.c
++++ b/arch/x86/kernel/sev.c
+@@ -643,7 +643,7 @@ static u64 __init get_jump_table_addr(void)
+ 	return ret;
+ }
+ 
+-static void pvalidate_pages(unsigned long vaddr, unsigned int npages, bool validate)
++static void pvalidate_pages(unsigned long vaddr, unsigned long npages, bool validate)
+ {
+ 	unsigned long vaddr_end;
+ 	int rc;
+@@ -660,7 +660,7 @@ static void pvalidate_pages(unsigned long vaddr, unsigned int npages, bool valid
+ 	}
+ }
+ 
+-static void __init early_set_pages_state(unsigned long paddr, unsigned int npages, enum psc_op op)
++static void __init early_set_pages_state(unsigned long paddr, unsigned long npages, enum psc_op op)
+ {
+ 	unsigned long paddr_end;
+ 	u64 val;
+@@ -699,7 +699,7 @@ e_term:
+ }
+ 
+ void __init early_snp_set_memory_private(unsigned long vaddr, unsigned long paddr,
+-					 unsigned int npages)
++					 unsigned long npages)
+ {
+ 	/*
+ 	 * This can be invoked in early boot while running identity mapped, so
+@@ -721,7 +721,7 @@ void __init early_snp_set_memory_private(unsigned long vaddr, unsigned long padd
+ }
+ 
+ void __init early_snp_set_memory_shared(unsigned long vaddr, unsigned long paddr,
+-					unsigned int npages)
++					unsigned long npages)
+ {
+ 	/*
+ 	 * This can be invoked in early boot while running identity mapped, so
+@@ -877,7 +877,7 @@ static void __set_pages_state(struct snp_psc_desc *data, unsigned long vaddr,
+ 		sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_PSC);
+ }
+ 
+-static void set_pages_state(unsigned long vaddr, unsigned int npages, int op)
++static void set_pages_state(unsigned long vaddr, unsigned long npages, int op)
+ {
+ 	unsigned long vaddr_end, next_vaddr;
+ 	struct snp_psc_desc *desc;
+@@ -902,7 +902,7 @@ static void set_pages_state(unsigned long vaddr, unsigned int npages, int op)
+ 	kfree(desc);
+ }
+ 
+-void snp_set_memory_shared(unsigned long vaddr, unsigned int npages)
++void snp_set_memory_shared(unsigned long vaddr, unsigned long npages)
+ {
+ 	if (!cc_platform_has(CC_ATTR_GUEST_SEV_SNP))
+ 		return;
+@@ -912,7 +912,7 @@ void snp_set_memory_shared(unsigned long vaddr, unsigned int npages)
+ 	set_pages_state(vaddr, npages, SNP_PAGE_STATE_SHARED);
+ }
+ 
+-void snp_set_memory_private(unsigned long vaddr, unsigned int npages)
++void snp_set_memory_private(unsigned long vaddr, unsigned long npages)
+ {
+ 	if (!cc_platform_has(CC_ATTR_GUEST_SEV_SNP))
+ 		return;
+diff --git a/arch/x86/kernel/x86_init.c b/arch/x86/kernel/x86_init.c
+index 10622cf2b30f4..41e5b4cb898c3 100644
+--- a/arch/x86/kernel/x86_init.c
++++ b/arch/x86/kernel/x86_init.c
+@@ -130,7 +130,7 @@ struct x86_cpuinit_ops x86_cpuinit = {
+ 
+ static void default_nmi_init(void) { };
+ 
+-static void enc_status_change_prepare_noop(unsigned long vaddr, int npages, bool enc) { }
++static bool enc_status_change_prepare_noop(unsigned long vaddr, int npages, bool enc) { return true; }
+ static bool enc_status_change_finish_noop(unsigned long vaddr, int npages, bool enc) { return false; }
+ static bool enc_tlb_flush_required_noop(bool enc) { return false; }
+ static bool enc_cache_flush_required_noop(void) { return false; }
+diff --git a/arch/x86/mm/mem_encrypt_amd.c b/arch/x86/mm/mem_encrypt_amd.c
+index 9c4d8dbcb1296..ff6c0462beee7 100644
+--- a/arch/x86/mm/mem_encrypt_amd.c
++++ b/arch/x86/mm/mem_encrypt_amd.c
+@@ -319,7 +319,7 @@ static void enc_dec_hypercall(unsigned long vaddr, int npages, bool enc)
+ #endif
+ }
+ 
+-static void amd_enc_status_change_prepare(unsigned long vaddr, int npages, bool enc)
++static bool amd_enc_status_change_prepare(unsigned long vaddr, int npages, bool enc)
+ {
+ 	/*
+ 	 * To maintain the security guarantees of SEV-SNP guests, make sure
+@@ -327,6 +327,8 @@ static void amd_enc_status_change_prepare(unsigned long vaddr, int npages, bool
+ 	 */
+ 	if (cc_platform_has(CC_ATTR_GUEST_SEV_SNP) && !enc)
+ 		snp_set_memory_shared(vaddr, npages);
++
++	return true;
+ }
+ 
+ /* Return true unconditionally: return value doesn't matter for the SEV side */
+diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
+index 356758b7d4b47..6a167290a1fd1 100644
+--- a/arch/x86/mm/pat/set_memory.c
++++ b/arch/x86/mm/pat/set_memory.c
+@@ -2151,7 +2151,8 @@ static int __set_memory_enc_pgtable(unsigned long addr, int numpages, bool enc)
+ 		cpa_flush(&cpa, x86_platform.guest.enc_cache_flush_required());
+ 
+ 	/* Notify hypervisor that we are about to set/clr encryption attribute. */
+-	x86_platform.guest.enc_status_change_prepare(addr, numpages, enc);
++	if (!x86_platform.guest.enc_status_change_prepare(addr, numpages, enc))
++		return -EIO;
+ 
+ 	ret = __change_page_attr_set_clr(&cpa, 1);
+ 
+diff --git a/arch/x86/platform/efi/efi_64.c b/arch/x86/platform/efi/efi_64.c
+index 232acf418cfbe..77f7ac3668cb4 100644
+--- a/arch/x86/platform/efi/efi_64.c
++++ b/arch/x86/platform/efi/efi_64.c
+@@ -853,9 +853,9 @@ efi_set_virtual_address_map(unsigned long memory_map_size,
+ 
+ 	/* Disable interrupts around EFI calls: */
+ 	local_irq_save(flags);
+-	status = efi_call(efi.runtime->set_virtual_address_map,
+-			  memory_map_size, descriptor_size,
+-			  descriptor_version, virtual_map);
++	status = arch_efi_call_virt(efi.runtime, set_virtual_address_map,
++				    memory_map_size, descriptor_size,
++				    descriptor_version, virtual_map);
+ 	local_irq_restore(flags);
+ 
+ 	efi_fpu_end();
+diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
+index 093b78c8bbec0..8732b85d56505 100644
+--- a/arch/x86/xen/enlighten_pv.c
++++ b/arch/x86/xen/enlighten_pv.c
+@@ -68,6 +68,7 @@
+ #include <asm/reboot.h>
+ #include <asm/hypervisor.h>
+ #include <asm/mach_traps.h>
++#include <asm/mtrr.h>
+ #include <asm/mwait.h>
+ #include <asm/pci_x86.h>
+ #include <asm/cpu.h>
+@@ -119,6 +120,54 @@ static int __init parse_xen_msr_safe(char *str)
+ }
+ early_param("xen_msr_safe", parse_xen_msr_safe);
+ 
++/* Get MTRR settings from Xen and put them into mtrr_state. */
++static void __init xen_set_mtrr_data(void)
++{
++#ifdef CONFIG_MTRR
++	struct xen_platform_op op = {
++		.cmd = XENPF_read_memtype,
++		.interface_version = XENPF_INTERFACE_VERSION,
++	};
++	unsigned int reg;
++	unsigned long mask;
++	uint32_t eax, width;
++	static struct mtrr_var_range var[MTRR_MAX_VAR_RANGES] __initdata;
++
++	/* Get physical address width (only 64-bit cpus supported). */
++	width = 36;
++	eax = cpuid_eax(0x80000000);
++	if ((eax >> 16) == 0x8000 && eax >= 0x80000008) {
++		eax = cpuid_eax(0x80000008);
++		width = eax & 0xff;
++	}
++
++	for (reg = 0; reg < MTRR_MAX_VAR_RANGES; reg++) {
++		op.u.read_memtype.reg = reg;
++		if (HYPERVISOR_platform_op(&op))
++			break;
++
++		/*
++		 * Only called in dom0, which has all RAM PFNs mapped at
++		 * RAM MFNs, and all PCI space etc. is identity mapped.
++		 * This means we can treat MFN == PFN regarding MTRR settings.
++		 */
++		var[reg].base_lo = op.u.read_memtype.type;
++		var[reg].base_lo |= op.u.read_memtype.mfn << PAGE_SHIFT;
++		var[reg].base_hi = op.u.read_memtype.mfn >> (32 - PAGE_SHIFT);
++		mask = ~((op.u.read_memtype.nr_mfns << PAGE_SHIFT) - 1);
++		mask &= (1UL << width) - 1;
++		if (mask)
++			mask |= MTRR_PHYSMASK_V;
++		var[reg].mask_lo = mask;
++		var[reg].mask_hi = mask >> 32;
++	}
++
++	/* Only overwrite MTRR state if any MTRR could be got from Xen. */
++	if (reg)
++		mtrr_overwrite_state(var, reg, MTRR_TYPE_UNCACHABLE);
++#endif
++}
++
+ static void __init xen_pv_init_platform(void)
+ {
+ 	/* PV guests can't operate virtio devices without grants. */
+@@ -135,6 +184,9 @@ static void __init xen_pv_init_platform(void)
+ 
+ 	/* pvclock is in shared info area */
+ 	xen_init_time_ops();
++
++	if (xen_initial_domain())
++		xen_set_mtrr_data();
+ }
+ 
+ static void __init xen_pv_guest_late_init(void)
+diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
+index 581aa08e34a8e..757c10cf26dda 100644
+--- a/block/blk-cgroup.c
++++ b/block/blk-cgroup.c
+@@ -585,8 +585,13 @@ static int blkcg_reset_stats(struct cgroup_subsys_state *css,
+ 			struct blkg_iostat_set *bis =
+ 				per_cpu_ptr(blkg->iostat_cpu, cpu);
+ 			memset(bis, 0, sizeof(*bis));
++
++			/* Re-initialize the cleared blkg_iostat_set */
++			u64_stats_init(&bis->sync);
++			bis->blkg = blkg;
+ 		}
+ 		memset(&blkg->iostat, 0, sizeof(blkg->iostat));
++		u64_stats_init(&blkg->iostat.sync);
+ 
+ 		for (i = 0; i < BLKCG_MAX_POLS; i++) {
+ 			struct blkcg_policy *pol = blkcg_policy[i];
+diff --git a/block/blk-iocost.c b/block/blk-iocost.c
+index 4442c7a851125..b29429090d3f4 100644
+--- a/block/blk-iocost.c
++++ b/block/blk-iocost.c
+@@ -2455,6 +2455,7 @@ static u64 adjust_inuse_and_calc_cost(struct ioc_gq *iocg, u64 vtime,
+ 	u32 hwi, adj_step;
+ 	s64 margin;
+ 	u64 cost, new_inuse;
++	unsigned long flags;
+ 
+ 	current_hweight(iocg, NULL, &hwi);
+ 	old_hwi = hwi;
+@@ -2473,11 +2474,11 @@ static u64 adjust_inuse_and_calc_cost(struct ioc_gq *iocg, u64 vtime,
+ 	    iocg->inuse == iocg->active)
+ 		return cost;
+ 
+-	spin_lock_irq(&ioc->lock);
++	spin_lock_irqsave(&ioc->lock, flags);
+ 
+ 	/* we own inuse only when @iocg is in the normal active state */
+ 	if (iocg->abs_vdebt || list_empty(&iocg->active_list)) {
+-		spin_unlock_irq(&ioc->lock);
++		spin_unlock_irqrestore(&ioc->lock, flags);
+ 		return cost;
+ 	}
+ 
+@@ -2498,7 +2499,7 @@ static u64 adjust_inuse_and_calc_cost(struct ioc_gq *iocg, u64 vtime,
+ 	} while (time_after64(vtime + cost, now->vnow) &&
+ 		 iocg->inuse != iocg->active);
+ 
+-	spin_unlock_irq(&ioc->lock);
++	spin_unlock_irqrestore(&ioc->lock, flags);
+ 
+ 	TRACE_IOCG_PATH(inuse_adjust, iocg, now,
+ 			old_inuse, iocg->inuse, old_hwi, hwi);
+diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
+index b01818f8e216e..c152276736832 100644
+--- a/block/blk-mq-debugfs.c
++++ b/block/blk-mq-debugfs.c
+@@ -427,7 +427,7 @@ static void blk_mq_debugfs_tags_show(struct seq_file *m,
+ 	seq_printf(m, "nr_tags=%u\n", tags->nr_tags);
+ 	seq_printf(m, "nr_reserved_tags=%u\n", tags->nr_reserved_tags);
+ 	seq_printf(m, "active_queues=%d\n",
+-		   atomic_read(&tags->active_queues));
++		   READ_ONCE(tags->active_queues));
+ 
+ 	seq_puts(m, "\nbitmap_tags:\n");
+ 	sbitmap_queue_show(&tags->bitmap_tags, m);
+diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
+index a80d7c62bdfe6..100889c276c3f 100644
+--- a/block/blk-mq-tag.c
++++ b/block/blk-mq-tag.c
+@@ -40,6 +40,7 @@ static void blk_mq_update_wake_batch(struct blk_mq_tags *tags,
+ void __blk_mq_tag_busy(struct blk_mq_hw_ctx *hctx)
+ {
+ 	unsigned int users;
++	struct blk_mq_tags *tags = hctx->tags;
+ 
+ 	/*
+ 	 * calling test_bit() prior to test_and_set_bit() is intentional,
+@@ -57,9 +58,11 @@ void __blk_mq_tag_busy(struct blk_mq_hw_ctx *hctx)
+ 			return;
+ 	}
+ 
+-	users = atomic_inc_return(&hctx->tags->active_queues);
+-
+-	blk_mq_update_wake_batch(hctx->tags, users);
++	spin_lock_irq(&tags->lock);
++	users = tags->active_queues + 1;
++	WRITE_ONCE(tags->active_queues, users);
++	blk_mq_update_wake_batch(tags, users);
++	spin_unlock_irq(&tags->lock);
+ }
+ 
+ /*
+@@ -92,9 +95,11 @@ void __blk_mq_tag_idle(struct blk_mq_hw_ctx *hctx)
+ 			return;
+ 	}
+ 
+-	users = atomic_dec_return(&tags->active_queues);
+-
++	spin_lock_irq(&tags->lock);
++	users = tags->active_queues - 1;
++	WRITE_ONCE(tags->active_queues, users);
+ 	blk_mq_update_wake_batch(tags, users);
++	spin_unlock_irq(&tags->lock);
+ 
+ 	blk_mq_tag_wakeup_all(tags, false);
+ }
+diff --git a/block/blk-mq.h b/block/blk-mq.h
+index a7482d2cc82e7..4542308c8e62f 100644
+--- a/block/blk-mq.h
++++ b/block/blk-mq.h
+@@ -362,8 +362,7 @@ static inline bool hctx_may_queue(struct blk_mq_hw_ctx *hctx,
+ 			return true;
+ 	}
+ 
+-	users = atomic_read(&hctx->tags->active_queues);
+-
++	users = READ_ONCE(hctx->tags->active_queues);
+ 	if (!users)
+ 		return true;
+ 
+diff --git a/block/genhd.c b/block/genhd.c
+index 7f874737af682..c5a35e1b462fa 100644
+--- a/block/genhd.c
++++ b/block/genhd.c
+@@ -25,8 +25,9 @@
+ #include <linux/pm_runtime.h>
+ #include <linux/badblocks.h>
+ #include <linux/part_stat.h>
+-#include "blk-throttle.h"
++#include <linux/blktrace_api.h>
+ 
++#include "blk-throttle.h"
+ #include "blk.h"
+ #include "blk-mq-sched.h"
+ #include "blk-rq-qos.h"
+@@ -1183,6 +1184,8 @@ static void disk_release(struct device *dev)
+ 	might_sleep();
+ 	WARN_ON_ONCE(disk_live(disk));
+ 
++	blk_trace_remove(disk->queue);
++
+ 	/*
+ 	 * To undo the all initialization from blk_mq_init_allocated_queue in
+ 	 * case of a probe failure where add_disk is never called we have to
+diff --git a/crypto/jitterentropy.c b/crypto/jitterentropy.c
+index 22f48bf4c6f57..227cedfa4f0ae 100644
+--- a/crypto/jitterentropy.c
++++ b/crypto/jitterentropy.c
+@@ -117,7 +117,6 @@ struct rand_data {
+ 				   * zero). */
+ #define JENT_ESTUCK		8 /* Too many stuck results during init. */
+ #define JENT_EHEALTH		9 /* Health test failed during initialization */
+-#define JENT_ERCT		10 /* RCT failed during initialization */
+ 
+ /*
+  * The output n bits can receive more than n bits of min entropy, of course,
+@@ -762,14 +761,12 @@ int jent_entropy_init(void)
+ 			if ((nonstuck % JENT_APT_WINDOW_SIZE) == 0) {
+ 				jent_apt_reset(&ec,
+ 					       delta & JENT_APT_WORD_MASK);
+-				if (jent_health_failure(&ec))
+-					return JENT_EHEALTH;
+ 			}
+ 		}
+ 
+-		/* Validate RCT */
+-		if (jent_rct_failure(&ec))
+-			return JENT_ERCT;
++		/* Validate health test result */
++		if (jent_health_failure(&ec))
++			return JENT_EHEALTH;
+ 
+ 		/* test whether we have an increasing timer */
+ 		if (!(time2 > time))
+diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
+index 34ad071a64e96..4382fe13ee3e4 100644
+--- a/drivers/acpi/apei/ghes.c
++++ b/drivers/acpi/apei/ghes.c
+@@ -1544,6 +1544,8 @@ struct list_head *ghes_get_devices(void)
+ 
+ 			pr_warn_once("Force-loading ghes_edac on an unsupported platform. You're on your own!\n");
+ 		}
++	} else if (list_empty(&ghes_devs)) {
++		return NULL;
+ 	}
+ 
+ 	return &ghes_devs;
+diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
+index 32084e38b73d0..5cb2023581d4d 100644
+--- a/drivers/base/power/domain.c
++++ b/drivers/base/power/domain.c
+@@ -1632,9 +1632,6 @@ static int genpd_add_device(struct generic_pm_domain *genpd, struct device *dev,
+ 
+ 	dev_dbg(dev, "%s()\n", __func__);
+ 
+-	if (IS_ERR_OR_NULL(genpd) || IS_ERR_OR_NULL(dev))
+-		return -EINVAL;
+-
+ 	gpd_data = genpd_alloc_dev_data(dev, gd);
+ 	if (IS_ERR(gpd_data))
+ 		return PTR_ERR(gpd_data);
+@@ -1676,6 +1673,9 @@ int pm_genpd_add_device(struct generic_pm_domain *genpd, struct device *dev)
+ {
+ 	int ret;
+ 
++	if (!genpd || !dev)
++		return -EINVAL;
++
+ 	mutex_lock(&gpd_list_lock);
+ 	ret = genpd_add_device(genpd, dev, dev);
+ 	mutex_unlock(&gpd_list_lock);
+@@ -2523,6 +2523,9 @@ int of_genpd_add_device(struct of_phandle_args *genpdspec, struct device *dev)
+ 	struct generic_pm_domain *genpd;
+ 	int ret;
+ 
++	if (!dev)
++		return -EINVAL;
++
+ 	mutex_lock(&gpd_list_lock);
+ 
+ 	genpd = genpd_get_from_provider(genpdspec);
+@@ -2939,10 +2942,10 @@ static int genpd_parse_state(struct genpd_power_state *genpd_state,
+ 
+ 	err = of_property_read_u32(state_node, "min-residency-us", &residency);
+ 	if (!err)
+-		genpd_state->residency_ns = 1000 * residency;
++		genpd_state->residency_ns = 1000LL * residency;
+ 
+-	genpd_state->power_on_latency_ns = 1000 * exit_latency;
+-	genpd_state->power_off_latency_ns = 1000 * entry_latency;
++	genpd_state->power_on_latency_ns = 1000LL * exit_latency;
++	genpd_state->power_off_latency_ns = 1000LL * entry_latency;
+ 	genpd_state->fwnode = &state_node->fwnode;
+ 
+ 	return 0;
+diff --git a/drivers/bus/ti-sysc.c b/drivers/bus/ti-sysc.c
+index 6afae98978434..b5ceec2b2d84f 100644
+--- a/drivers/bus/ti-sysc.c
++++ b/drivers/bus/ti-sysc.c
+@@ -1814,7 +1814,7 @@ static u32 sysc_quirk_dispc(struct sysc *ddata, int dispc_offset,
+ 	if (!ddata->module_va)
+ 		return -EIO;
+ 
+-	/* DISP_CONTROL */
++	/* DISP_CONTROL, shut down lcd and digit on disable if enabled */
+ 	val = sysc_read(ddata, dispc_offset + 0x40);
+ 	lcd_en = val & lcd_en_mask;
+ 	digit_en = val & digit_en_mask;
+@@ -1826,7 +1826,7 @@ static u32 sysc_quirk_dispc(struct sysc *ddata, int dispc_offset,
+ 		else
+ 			irq_mask |= BIT(2) | BIT(3);	/* EVSYNC bits */
+ 	}
+-	if (disable & (lcd_en | digit_en))
++	if (disable && (lcd_en || digit_en))
+ 		sysc_write(ddata, dispc_offset + 0x40,
+ 			   val & ~(lcd_en_mask | digit_en_mask));
+ 
+diff --git a/drivers/char/hw_random/st-rng.c b/drivers/char/hw_random/st-rng.c
+index 15ba1e6fae4d2..6e9dfac9fc9f4 100644
+--- a/drivers/char/hw_random/st-rng.c
++++ b/drivers/char/hw_random/st-rng.c
+@@ -42,7 +42,6 @@
+ 
+ struct st_rng_data {
+ 	void __iomem	*base;
+-	struct clk	*clk;
+ 	struct hwrng	ops;
+ };
+ 
+@@ -85,26 +84,18 @@ static int st_rng_probe(struct platform_device *pdev)
+ 	if (IS_ERR(base))
+ 		return PTR_ERR(base);
+ 
+-	clk = devm_clk_get(&pdev->dev, NULL);
++	clk = devm_clk_get_enabled(&pdev->dev, NULL);
+ 	if (IS_ERR(clk))
+ 		return PTR_ERR(clk);
+ 
+-	ret = clk_prepare_enable(clk);
+-	if (ret)
+-		return ret;
+-
+ 	ddata->ops.priv	= (unsigned long)ddata;
+ 	ddata->ops.read	= st_rng_read;
+ 	ddata->ops.name	= pdev->name;
+ 	ddata->base	= base;
+-	ddata->clk	= clk;
+-
+-	dev_set_drvdata(&pdev->dev, ddata);
+ 
+ 	ret = devm_hwrng_register(&pdev->dev, &ddata->ops);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "Failed to register HW RNG\n");
+-		clk_disable_unprepare(clk);
+ 		return ret;
+ 	}
+ 
+@@ -113,15 +104,6 @@ static int st_rng_probe(struct platform_device *pdev)
+ 	return 0;
+ }
+ 
+-static int st_rng_remove(struct platform_device *pdev)
+-{
+-	struct st_rng_data *ddata = dev_get_drvdata(&pdev->dev);
+-
+-	clk_disable_unprepare(ddata->clk);
+-
+-	return 0;
+-}
+-
+ static const struct of_device_id st_rng_match[] __maybe_unused = {
+ 	{ .compatible = "st,rng" },
+ 	{},
+@@ -134,7 +116,6 @@ static struct platform_driver st_rng_driver = {
+ 		.of_match_table = of_match_ptr(st_rng_match),
+ 	},
+ 	.probe = st_rng_probe,
+-	.remove = st_rng_remove
+ };
+ 
+ module_platform_driver(st_rng_driver);
+diff --git a/drivers/char/hw_random/virtio-rng.c b/drivers/char/hw_random/virtio-rng.c
+index f7690e0f92ede..e41a84e6b4b56 100644
+--- a/drivers/char/hw_random/virtio-rng.c
++++ b/drivers/char/hw_random/virtio-rng.c
+@@ -4,6 +4,7 @@
+  *  Copyright (C) 2007, 2008 Rusty Russell IBM Corporation
+  */
+ 
++#include <asm/barrier.h>
+ #include <linux/err.h>
+ #include <linux/hw_random.h>
+ #include <linux/scatterlist.h>
+@@ -37,13 +38,13 @@ struct virtrng_info {
+ static void random_recv_done(struct virtqueue *vq)
+ {
+ 	struct virtrng_info *vi = vq->vdev->priv;
++	unsigned int len;
+ 
+ 	/* We can get spurious callbacks, e.g. shared IRQs + virtio_pci. */
+-	if (!virtqueue_get_buf(vi->vq, &vi->data_avail))
++	if (!virtqueue_get_buf(vi->vq, &len))
+ 		return;
+ 
+-	vi->data_idx = 0;
+-
++	smp_store_release(&vi->data_avail, len);
+ 	complete(&vi->have_data);
+ }
+ 
+@@ -52,7 +53,6 @@ static void request_entropy(struct virtrng_info *vi)
+ 	struct scatterlist sg;
+ 
+ 	reinit_completion(&vi->have_data);
+-	vi->data_avail = 0;
+ 	vi->data_idx = 0;
+ 
+ 	sg_init_one(&sg, vi->data, sizeof(vi->data));
+@@ -88,7 +88,7 @@ static int virtio_read(struct hwrng *rng, void *buf, size_t size, bool wait)
+ 	read = 0;
+ 
+ 	/* copy available data */
+-	if (vi->data_avail) {
++	if (smp_load_acquire(&vi->data_avail)) {
+ 		chunk = copy_data(vi, buf, size);
+ 		size -= chunk;
+ 		read += chunk;
+diff --git a/drivers/clk/bcm/clk-raspberrypi.c b/drivers/clk/bcm/clk-raspberrypi.c
+index ce2f934797369..5df19f571a474 100644
+--- a/drivers/clk/bcm/clk-raspberrypi.c
++++ b/drivers/clk/bcm/clk-raspberrypi.c
+@@ -356,9 +356,9 @@ static int raspberrypi_discover_clocks(struct raspberrypi_clk *rpi,
+ 	while (clks->id) {
+ 		struct raspberrypi_clk_variant *variant;
+ 
+-		if (clks->id > RPI_FIRMWARE_NUM_CLK_ID) {
++		if (clks->id >= RPI_FIRMWARE_NUM_CLK_ID) {
+ 			dev_err(rpi->dev, "Unknown clock id: %u (max: %u)\n",
+-					   clks->id, RPI_FIRMWARE_NUM_CLK_ID);
++					   clks->id, RPI_FIRMWARE_NUM_CLK_ID - 1);
+ 			return -EINVAL;
+ 		}
+ 
+diff --git a/drivers/clk/clk-cdce925.c b/drivers/clk/clk-cdce925.c
+index 6350682f7e6d2..87890669297d8 100644
+--- a/drivers/clk/clk-cdce925.c
++++ b/drivers/clk/clk-cdce925.c
+@@ -701,6 +701,10 @@ static int cdce925_probe(struct i2c_client *client)
+ 	for (i = 0; i < data->chip_info->num_plls; ++i) {
+ 		pll_clk_name[i] = kasprintf(GFP_KERNEL, "%pOFn.pll%d",
+ 			client->dev.of_node, i);
++		if (!pll_clk_name[i]) {
++			err = -ENOMEM;
++			goto error;
++		}
+ 		init.name = pll_clk_name[i];
+ 		data->pll[i].chip = data;
+ 		data->pll[i].hw.init = &init;
+@@ -742,6 +746,10 @@ static int cdce925_probe(struct i2c_client *client)
+ 	init.num_parents = 1;
+ 	init.parent_names = &parent_name; /* Mux Y1 to input */
+ 	init.name = kasprintf(GFP_KERNEL, "%pOFn.Y1", client->dev.of_node);
++	if (!init.name) {
++		err = -ENOMEM;
++		goto error;
++	}
+ 	data->clk[0].chip = data;
+ 	data->clk[0].hw.init = &init;
+ 	data->clk[0].index = 0;
+@@ -760,6 +768,10 @@ static int cdce925_probe(struct i2c_client *client)
+ 	for (i = 1; i < data->chip_info->num_outputs; ++i) {
+ 		init.name = kasprintf(GFP_KERNEL, "%pOFn.Y%d",
+ 			client->dev.of_node, i+1);
++		if (!init.name) {
++			err = -ENOMEM;
++			goto error;
++		}
+ 		data->clk[i].chip = data;
+ 		data->clk[i].hw.init = &init;
+ 		data->clk[i].index = i;
+diff --git a/drivers/clk/clk-renesas-pcie.c b/drivers/clk/clk-renesas-pcie.c
+index ff3a52d484790..6060cafe1aa22 100644
+--- a/drivers/clk/clk-renesas-pcie.c
++++ b/drivers/clk/clk-renesas-pcie.c
+@@ -6,6 +6,7 @@
+  *   - 9FGV/9DBV/9DMV/9FGL/9DML/9QXL/9SQ
+  * Currently supported:
+  *   - 9FGV0241
++ *   - 9FGV0441
+  *
+  * Copyright (C) 2022 Marek Vasut <marex@denx.de>
+  */
+@@ -18,7 +19,6 @@
+ #include <linux/regmap.h>
+ 
+ #define RS9_REG_OE				0x0
+-#define RS9_REG_OE_DIF_OE(n)			BIT((n) + 1)
+ #define RS9_REG_SS				0x1
+ #define RS9_REG_SS_AMP_0V6			0x0
+ #define RS9_REG_SS_AMP_0V7			0x1
+@@ -31,9 +31,6 @@
+ #define RS9_REG_SS_SSC_MASK			(3 << 3)
+ #define RS9_REG_SS_SSC_LOCK			BIT(5)
+ #define RS9_REG_SR				0x2
+-#define RS9_REG_SR_2V0_DIF(n)			0
+-#define RS9_REG_SR_3V0_DIF(n)			BIT((n) + 1)
+-#define RS9_REG_SR_DIF_MASK(n)		BIT((n) + 1)
+ #define RS9_REG_REF				0x3
+ #define RS9_REG_REF_OE				BIT(4)
+ #define RS9_REG_REF_OD				BIT(5)
+@@ -45,22 +42,31 @@
+ #define RS9_REG_DID				0x6
+ #define RS9_REG_BCP				0x7
+ 
++#define RS9_REG_VID_IDT				0x01
++
++#define RS9_REG_DID_TYPE_FGV			(0x0 << RS9_REG_DID_TYPE_SHIFT)
++#define RS9_REG_DID_TYPE_DBV			(0x1 << RS9_REG_DID_TYPE_SHIFT)
++#define RS9_REG_DID_TYPE_DMV			(0x2 << RS9_REG_DID_TYPE_SHIFT)
++#define RS9_REG_DID_TYPE_SHIFT			0x6
++
+ /* Supported Renesas 9-series models. */
+ enum rs9_model {
+ 	RENESAS_9FGV0241,
++	RENESAS_9FGV0441,
+ };
+ 
+ /* Structure to describe features of a particular 9-series model */
+ struct rs9_chip_info {
+ 	const enum rs9_model	model;
+ 	unsigned int		num_clks;
++	u8			did;
+ };
+ 
+ struct rs9_driver_data {
+ 	struct i2c_client	*client;
+ 	struct regmap		*regmap;
+ 	const struct rs9_chip_info *chip_info;
+-	struct clk_hw		*clk_dif[2];
++	struct clk_hw		*clk_dif[4];
+ 	u8			pll_amplitude;
+ 	u8			pll_ssc;
+ 	u8			clk_dif_sr;
+@@ -152,17 +158,29 @@ static const struct regmap_config rs9_regmap_config = {
+ 	.reg_read = rs9_regmap_i2c_read,
+ };
+ 
++static u8 rs9_calc_dif(const struct rs9_driver_data *rs9, int idx)
++{
++	enum rs9_model model = rs9->chip_info->model;
++
++	if (model == RENESAS_9FGV0241)
++		return BIT(idx) + 1;
++	else if (model == RENESAS_9FGV0441)
++		return BIT(idx);
++
++	return 0;
++}
++
+ static int rs9_get_output_config(struct rs9_driver_data *rs9, int idx)
+ {
+ 	struct i2c_client *client = rs9->client;
++	u8 dif = rs9_calc_dif(rs9, idx);
+ 	unsigned char name[5] = "DIF0";
+ 	struct device_node *np;
+ 	int ret;
+ 	u32 sr;
+ 
+ 	/* Set defaults */
+-	rs9->clk_dif_sr &= ~RS9_REG_SR_DIF_MASK(idx);
+-	rs9->clk_dif_sr |= RS9_REG_SR_3V0_DIF(idx);
++	rs9->clk_dif_sr |= dif;
+ 
+ 	snprintf(name, 5, "DIF%d", idx);
+ 	np = of_get_child_by_name(client->dev.of_node, name);
+@@ -174,11 +192,9 @@ static int rs9_get_output_config(struct rs9_driver_data *rs9, int idx)
+ 	of_node_put(np);
+ 	if (!ret) {
+ 		if (sr == 2000000) {		/* 2V/ns */
+-			rs9->clk_dif_sr &= ~RS9_REG_SR_DIF_MASK(idx);
+-			rs9->clk_dif_sr |= RS9_REG_SR_2V0_DIF(idx);
++			rs9->clk_dif_sr &= ~dif;
+ 		} else if (sr == 3000000) {	/* 3V/ns (default) */
+-			rs9->clk_dif_sr &= ~RS9_REG_SR_DIF_MASK(idx);
+-			rs9->clk_dif_sr |= RS9_REG_SR_3V0_DIF(idx);
++			rs9->clk_dif_sr |= dif;
+ 		} else
+ 			ret = dev_err_probe(&client->dev, -EINVAL,
+ 					    "Invalid renesas,slew-rate value\n");
+@@ -249,11 +265,13 @@ static void rs9_update_config(struct rs9_driver_data *rs9)
+ 	}
+ 
+ 	for (i = 0; i < rs9->chip_info->num_clks; i++) {
+-		if (rs9->clk_dif_sr & RS9_REG_SR_3V0_DIF(i))
++		u8 dif = rs9_calc_dif(rs9, i);
++
++		if (rs9->clk_dif_sr & dif)
+ 			continue;
+ 
+-		regmap_update_bits(rs9->regmap, RS9_REG_SR, RS9_REG_SR_3V0_DIF(i),
+-				   rs9->clk_dif_sr & RS9_REG_SR_3V0_DIF(i));
++		regmap_update_bits(rs9->regmap, RS9_REG_SR, dif,
++				   rs9->clk_dif_sr & dif);
+ 	}
+ }
+ 
+@@ -270,6 +288,7 @@ static int rs9_probe(struct i2c_client *client)
+ {
+ 	unsigned char name[5] = "DIF0";
+ 	struct rs9_driver_data *rs9;
++	unsigned int vid, did;
+ 	struct clk_hw *hw;
+ 	int i, ret;
+ 
+@@ -306,6 +325,20 @@ static int rs9_probe(struct i2c_client *client)
+ 	if (ret < 0)
+ 		return ret;
+ 
++	ret = regmap_read(rs9->regmap, RS9_REG_VID, &vid);
++	if (ret < 0)
++		return ret;
++
++	ret = regmap_read(rs9->regmap, RS9_REG_DID, &did);
++	if (ret < 0)
++		return ret;
++
++	if (vid != RS9_REG_VID_IDT || did != rs9->chip_info->did)
++		return dev_err_probe(&client->dev, -ENODEV,
++				     "Incorrect VID/DID: %#02x, %#02x. Expected %#02x, %#02x\n",
++				     vid, did, RS9_REG_VID_IDT,
++				     rs9->chip_info->did);
++
+ 	/* Register clock */
+ 	for (i = 0; i < rs9->chip_info->num_clks; i++) {
+ 		snprintf(name, 5, "DIF%d", i);
+@@ -349,16 +382,25 @@ static int __maybe_unused rs9_resume(struct device *dev)
+ static const struct rs9_chip_info renesas_9fgv0241_info = {
+ 	.model		= RENESAS_9FGV0241,
+ 	.num_clks	= 2,
++	.did		= RS9_REG_DID_TYPE_FGV | 0x02,
++};
++
++static const struct rs9_chip_info renesas_9fgv0441_info = {
++	.model		= RENESAS_9FGV0441,
++	.num_clks	= 4,
++	.did		= RS9_REG_DID_TYPE_FGV | 0x04,
+ };
+ 
+ static const struct i2c_device_id rs9_id[] = {
+-	{ "9fgv0241", .driver_data = RENESAS_9FGV0241 },
++	{ "9fgv0241", .driver_data = (kernel_ulong_t)&renesas_9fgv0241_info },
++	{ "9fgv0441", .driver_data = (kernel_ulong_t)&renesas_9fgv0441_info },
+ 	{ }
+ };
+ MODULE_DEVICE_TABLE(i2c, rs9_id);
+ 
+ static const struct of_device_id clk_rs9_of_match[] = {
+ 	{ .compatible = "renesas,9fgv0241", .data = &renesas_9fgv0241_info },
++	{ .compatible = "renesas,9fgv0441", .data = &renesas_9fgv0441_info },
+ 	{ }
+ };
+ MODULE_DEVICE_TABLE(of, clk_rs9_of_match);
+diff --git a/drivers/clk/clk-si5341.c b/drivers/clk/clk-si5341.c
+index 0e528d7ba656e..c7d8cbd22bacc 100644
+--- a/drivers/clk/clk-si5341.c
++++ b/drivers/clk/clk-si5341.c
+@@ -1553,7 +1553,7 @@ static int si5341_probe(struct i2c_client *client)
+ 	struct clk_init_data init;
+ 	struct clk *input;
+ 	const char *root_clock_name;
+-	const char *synth_clock_names[SI5341_NUM_SYNTH];
++	const char *synth_clock_names[SI5341_NUM_SYNTH] = { NULL };
+ 	int err;
+ 	unsigned int i;
+ 	struct clk_si5341_output_config config[SI5341_MAX_NUM_OUTPUTS];
+@@ -1697,6 +1697,10 @@ static int si5341_probe(struct i2c_client *client)
+ 	for (i = 0; i < data->num_synth; ++i) {
+ 		synth_clock_names[i] = devm_kasprintf(&client->dev, GFP_KERNEL,
+ 				"%s.N%u", client->dev.of_node->name, i);
++		if (!synth_clock_names[i]) {
++			err = -ENOMEM;
++			goto free_clk_names;
++		}
+ 		init.name = synth_clock_names[i];
+ 		data->synth[i].index = i;
+ 		data->synth[i].data = data;
+@@ -1705,6 +1709,7 @@ static int si5341_probe(struct i2c_client *client)
+ 		if (err) {
+ 			dev_err(&client->dev,
+ 				"synth N%u registration failed\n", i);
++			goto free_clk_names;
+ 		}
+ 	}
+ 
+@@ -1714,6 +1719,10 @@ static int si5341_probe(struct i2c_client *client)
+ 	for (i = 0; i < data->num_outputs; ++i) {
+ 		init.name = kasprintf(GFP_KERNEL, "%s.%d",
+ 			client->dev.of_node->name, i);
++		if (!init.name) {
++			err = -ENOMEM;
++			goto free_clk_names;
++		}
+ 		init.flags = config[i].synth_master ? CLK_SET_RATE_PARENT : 0;
+ 		data->clk[i].index = i;
+ 		data->clk[i].data = data;
+@@ -1735,7 +1744,7 @@ static int si5341_probe(struct i2c_client *client)
+ 		if (err) {
+ 			dev_err(&client->dev,
+ 				"output %u registration failed\n", i);
+-			goto cleanup;
++			goto free_clk_names;
+ 		}
+ 		if (config[i].always_on)
+ 			clk_prepare(data->clk[i].hw.clk);
+@@ -1745,7 +1754,7 @@ static int si5341_probe(struct i2c_client *client)
+ 			data);
+ 	if (err) {
+ 		dev_err(&client->dev, "unable to add clk provider\n");
+-		goto cleanup;
++		goto free_clk_names;
+ 	}
+ 
+ 	if (initialization_required) {
+@@ -1753,11 +1762,11 @@ static int si5341_probe(struct i2c_client *client)
+ 		regcache_cache_only(data->regmap, false);
+ 		err = regcache_sync(data->regmap);
+ 		if (err < 0)
+-			goto cleanup;
++			goto free_clk_names;
+ 
+ 		err = si5341_finalize_defaults(data);
+ 		if (err < 0)
+-			goto cleanup;
++			goto free_clk_names;
+ 	}
+ 
+ 	/* wait for device to report input clock present and PLL lock */
+@@ -1766,32 +1775,31 @@ static int si5341_probe(struct i2c_client *client)
+ 	       10000, 250000);
+ 	if (err) {
+ 		dev_err(&client->dev, "Error waiting for input clock or PLL lock\n");
+-		goto cleanup;
++		goto free_clk_names;
+ 	}
+ 
+ 	/* clear sticky alarm bits from initialization */
+ 	err = regmap_write(data->regmap, SI5341_STATUS_STICKY, 0);
+ 	if (err) {
+ 		dev_err(&client->dev, "unable to clear sticky status\n");
+-		goto cleanup;
++		goto free_clk_names;
+ 	}
+ 
+ 	err = sysfs_create_files(&client->dev.kobj, si5341_attributes);
+-	if (err) {
++	if (err)
+ 		dev_err(&client->dev, "unable to create sysfs files\n");
+-		goto cleanup;
+-	}
+ 
++free_clk_names:
+ 	/* Free the names, clk framework makes copies */
+ 	for (i = 0; i < data->num_synth; ++i)
+ 		 devm_kfree(&client->dev, (void *)synth_clock_names[i]);
+ 
+-	return 0;
+-
+ cleanup:
+-	for (i = 0; i < SI5341_MAX_NUM_OUTPUTS; ++i) {
+-		if (data->clk[i].vddo_reg)
+-			regulator_disable(data->clk[i].vddo_reg);
++	if (err) {
++		for (i = 0; i < SI5341_MAX_NUM_OUTPUTS; ++i) {
++			if (data->clk[i].vddo_reg)
++				regulator_disable(data->clk[i].vddo_reg);
++		}
+ 	}
+ 	return err;
+ }
+diff --git a/drivers/clk/clk-versaclock5.c b/drivers/clk/clk-versaclock5.c
+index fa71a57875ce8..e9a7f3c91ae0e 100644
+--- a/drivers/clk/clk-versaclock5.c
++++ b/drivers/clk/clk-versaclock5.c
+@@ -1028,6 +1028,11 @@ static int vc5_probe(struct i2c_client *client)
+ 	}
+ 
+ 	init.name = kasprintf(GFP_KERNEL, "%pOFn.mux", client->dev.of_node);
++	if (!init.name) {
++		ret = -ENOMEM;
++		goto err_clk;
++	}
++
+ 	init.ops = &vc5_mux_ops;
+ 	init.flags = 0;
+ 	init.parent_names = parent_names;
+@@ -1042,6 +1047,10 @@ static int vc5_probe(struct i2c_client *client)
+ 		memset(&init, 0, sizeof(init));
+ 		init.name = kasprintf(GFP_KERNEL, "%pOFn.dbl",
+ 				      client->dev.of_node);
++		if (!init.name) {
++			ret = -ENOMEM;
++			goto err_clk;
++		}
+ 		init.ops = &vc5_dbl_ops;
+ 		init.flags = CLK_SET_RATE_PARENT;
+ 		init.parent_names = parent_names;
+@@ -1057,6 +1066,10 @@ static int vc5_probe(struct i2c_client *client)
+ 	/* Register PFD */
+ 	memset(&init, 0, sizeof(init));
+ 	init.name = kasprintf(GFP_KERNEL, "%pOFn.pfd", client->dev.of_node);
++	if (!init.name) {
++		ret = -ENOMEM;
++		goto err_clk;
++	}
+ 	init.ops = &vc5_pfd_ops;
+ 	init.flags = CLK_SET_RATE_PARENT;
+ 	init.parent_names = parent_names;
+@@ -1074,6 +1087,10 @@ static int vc5_probe(struct i2c_client *client)
+ 	/* Register PLL */
+ 	memset(&init, 0, sizeof(init));
+ 	init.name = kasprintf(GFP_KERNEL, "%pOFn.pll", client->dev.of_node);
++	if (!init.name) {
++		ret = -ENOMEM;
++		goto err_clk;
++	}
+ 	init.ops = &vc5_pll_ops;
+ 	init.flags = CLK_SET_RATE_PARENT;
+ 	init.parent_names = parent_names;
+@@ -1093,6 +1110,10 @@ static int vc5_probe(struct i2c_client *client)
+ 		memset(&init, 0, sizeof(init));
+ 		init.name = kasprintf(GFP_KERNEL, "%pOFn.fod%d",
+ 				      client->dev.of_node, idx);
++		if (!init.name) {
++			ret = -ENOMEM;
++			goto err_clk;
++		}
+ 		init.ops = &vc5_fod_ops;
+ 		init.flags = CLK_SET_RATE_PARENT;
+ 		init.parent_names = parent_names;
+@@ -1111,6 +1132,10 @@ static int vc5_probe(struct i2c_client *client)
+ 	memset(&init, 0, sizeof(init));
+ 	init.name = kasprintf(GFP_KERNEL, "%pOFn.out0_sel_i2cb",
+ 			      client->dev.of_node);
++	if (!init.name) {
++		ret = -ENOMEM;
++		goto err_clk;
++	}
+ 	init.ops = &vc5_clk_out_ops;
+ 	init.flags = CLK_SET_RATE_PARENT;
+ 	init.parent_names = parent_names;
+@@ -1137,6 +1162,10 @@ static int vc5_probe(struct i2c_client *client)
+ 		memset(&init, 0, sizeof(init));
+ 		init.name = kasprintf(GFP_KERNEL, "%pOFn.out%d",
+ 				      client->dev.of_node, idx + 1);
++		if (!init.name) {
++			ret = -ENOMEM;
++			goto err_clk;
++		}
+ 		init.ops = &vc5_clk_out_ops;
+ 		init.flags = CLK_SET_RATE_PARENT;
+ 		init.parent_names = parent_names;
+@@ -1271,14 +1300,14 @@ static const struct vc5_chip_info idt_5p49v6975_info = {
+ };
+ 
+ static const struct i2c_device_id vc5_id[] = {
+-	{ "5p49v5923", .driver_data = IDT_VC5_5P49V5923 },
+-	{ "5p49v5925", .driver_data = IDT_VC5_5P49V5925 },
+-	{ "5p49v5933", .driver_data = IDT_VC5_5P49V5933 },
+-	{ "5p49v5935", .driver_data = IDT_VC5_5P49V5935 },
+-	{ "5p49v60", .driver_data = IDT_VC6_5P49V60 },
+-	{ "5p49v6901", .driver_data = IDT_VC6_5P49V6901 },
+-	{ "5p49v6965", .driver_data = IDT_VC6_5P49V6965 },
+-	{ "5p49v6975", .driver_data = IDT_VC6_5P49V6975 },
++	{ "5p49v5923", .driver_data = (kernel_ulong_t)&idt_5p49v5923_info },
++	{ "5p49v5925", .driver_data = (kernel_ulong_t)&idt_5p49v5925_info },
++	{ "5p49v5933", .driver_data = (kernel_ulong_t)&idt_5p49v5933_info },
++	{ "5p49v5935", .driver_data = (kernel_ulong_t)&idt_5p49v5935_info },
++	{ "5p49v60", .driver_data = (kernel_ulong_t)&idt_5p49v60_info },
++	{ "5p49v6901", .driver_data = (kernel_ulong_t)&idt_5p49v6901_info },
++	{ "5p49v6965", .driver_data = (kernel_ulong_t)&idt_5p49v6965_info },
++	{ "5p49v6975", .driver_data = (kernel_ulong_t)&idt_5p49v6975_info },
+ 	{ }
+ };
+ MODULE_DEVICE_TABLE(i2c, vc5_id);
+diff --git a/drivers/clk/clk-versaclock7.c b/drivers/clk/clk-versaclock7.c
+index 8e4f86e852aa0..0ae191f50b4b2 100644
+--- a/drivers/clk/clk-versaclock7.c
++++ b/drivers/clk/clk-versaclock7.c
+@@ -1282,7 +1282,7 @@ static const struct regmap_config vc7_regmap_config = {
+ };
+ 
+ static const struct i2c_device_id vc7_i2c_id[] = {
+-	{ "rc21008a", VC7_RC21008A },
++	{ "rc21008a", .driver_data = (kernel_ulong_t)&vc7_rc21008a_info },
+ 	{}
+ };
+ MODULE_DEVICE_TABLE(i2c, vc7_i2c_id);
+diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
+index ae07685c7588b..15a405a5582bb 100644
+--- a/drivers/clk/clk.c
++++ b/drivers/clk/clk.c
+@@ -1547,6 +1547,7 @@ void clk_hw_forward_rate_request(const struct clk_hw *hw,
+ 				  parent->core, req,
+ 				  parent_rate);
+ }
++EXPORT_SYMBOL_GPL(clk_hw_forward_rate_request);
+ 
+ static bool clk_core_can_round(struct clk_core * const core)
+ {
+@@ -4693,6 +4694,7 @@ int devm_clk_notifier_register(struct device *dev, struct clk *clk,
+ 	if (!ret) {
+ 		devres->clk = clk;
+ 		devres->nb = nb;
++		devres_add(dev, devres);
+ 	} else {
+ 		devres_free(devres);
+ 	}
+diff --git a/drivers/clk/imx/clk-imx8mn.c b/drivers/clk/imx/clk-imx8mn.c
+index a042ed3a9d6c2..569b2abf40525 100644
+--- a/drivers/clk/imx/clk-imx8mn.c
++++ b/drivers/clk/imx/clk-imx8mn.c
+@@ -323,7 +323,7 @@ static int imx8mn_clocks_probe(struct platform_device *pdev)
+ 	void __iomem *base;
+ 	int ret;
+ 
+-	clk_hw_data = kzalloc(struct_size(clk_hw_data, hws,
++	clk_hw_data = devm_kzalloc(dev, struct_size(clk_hw_data, hws,
+ 					  IMX8MN_CLK_END), GFP_KERNEL);
+ 	if (WARN_ON(!clk_hw_data))
+ 		return -ENOMEM;
+@@ -340,10 +340,10 @@ static int imx8mn_clocks_probe(struct platform_device *pdev)
+ 	hws[IMX8MN_CLK_EXT4] = imx_get_clk_hw_by_name(np, "clk_ext4");
+ 
+ 	np = of_find_compatible_node(NULL, NULL, "fsl,imx8mn-anatop");
+-	base = of_iomap(np, 0);
++	base = devm_of_iomap(dev, np, 0, NULL);
+ 	of_node_put(np);
+-	if (WARN_ON(!base)) {
+-		ret = -ENOMEM;
++	if (WARN_ON(IS_ERR(base))) {
++		ret = PTR_ERR(base);
+ 		goto unregister_hws;
+ 	}
+ 
+diff --git a/drivers/clk/imx/clk-imx8mp.c b/drivers/clk/imx/clk-imx8mp.c
+index 3253589851ffb..de7d2d2176bea 100644
+--- a/drivers/clk/imx/clk-imx8mp.c
++++ b/drivers/clk/imx/clk-imx8mp.c
+@@ -414,25 +414,22 @@ static int imx8mp_clocks_probe(struct platform_device *pdev)
+ 	struct device *dev = &pdev->dev;
+ 	struct device_node *np;
+ 	void __iomem *anatop_base, *ccm_base;
++	int err;
+ 
+ 	np = of_find_compatible_node(NULL, NULL, "fsl,imx8mp-anatop");
+-	anatop_base = of_iomap(np, 0);
++	anatop_base = devm_of_iomap(dev, np, 0, NULL);
+ 	of_node_put(np);
+-	if (WARN_ON(!anatop_base))
+-		return -ENOMEM;
++	if (WARN_ON(IS_ERR(anatop_base)))
++		return PTR_ERR(anatop_base);
+ 
+ 	np = dev->of_node;
+ 	ccm_base = devm_platform_ioremap_resource(pdev, 0);
+-	if (WARN_ON(IS_ERR(ccm_base))) {
+-		iounmap(anatop_base);
++	if (WARN_ON(IS_ERR(ccm_base)))
+ 		return PTR_ERR(ccm_base);
+-	}
+ 
+-	clk_hw_data = kzalloc(struct_size(clk_hw_data, hws, IMX8MP_CLK_END), GFP_KERNEL);
+-	if (WARN_ON(!clk_hw_data)) {
+-		iounmap(anatop_base);
++	clk_hw_data = devm_kzalloc(dev, struct_size(clk_hw_data, hws, IMX8MP_CLK_END), GFP_KERNEL);
++	if (WARN_ON(!clk_hw_data))
+ 		return -ENOMEM;
+-	}
+ 
+ 	clk_hw_data->num = IMX8MP_CLK_END;
+ 	hws = clk_hw_data->hws;
+@@ -721,7 +718,12 @@ static int imx8mp_clocks_probe(struct platform_device *pdev)
+ 
+ 	imx_check_clk_hws(hws, IMX8MP_CLK_END);
+ 
+-	of_clk_add_hw_provider(np, of_clk_hw_onecell_get, clk_hw_data);
++	err = of_clk_add_hw_provider(np, of_clk_hw_onecell_get, clk_hw_data);
++	if (err < 0) {
++		dev_err(dev, "failed to register hws for i.MX8MP\n");
++		imx_unregister_hw_clocks(hws, IMX8MP_CLK_END);
++		return err;
++	}
+ 
+ 	imx_register_uart_clocks();
+ 
+diff --git a/drivers/clk/imx/clk-imx93.c b/drivers/clk/imx/clk-imx93.c
+index 8d0974db6bfd8..face30012b7dd 100644
+--- a/drivers/clk/imx/clk-imx93.c
++++ b/drivers/clk/imx/clk-imx93.c
+@@ -262,7 +262,7 @@ static int imx93_clocks_probe(struct platform_device *pdev)
+ 	void __iomem *base, *anatop_base;
+ 	int i, ret;
+ 
+-	clk_hw_data = kzalloc(struct_size(clk_hw_data, hws,
++	clk_hw_data = devm_kzalloc(dev, struct_size(clk_hw_data, hws,
+ 					  IMX93_CLK_END), GFP_KERNEL);
+ 	if (!clk_hw_data)
+ 		return -ENOMEM;
+@@ -286,10 +286,12 @@ static int imx93_clocks_probe(struct platform_device *pdev)
+ 								    "sys_pll_pfd2", 1, 2);
+ 
+ 	np = of_find_compatible_node(NULL, NULL, "fsl,imx93-anatop");
+-	anatop_base = of_iomap(np, 0);
++	anatop_base = devm_of_iomap(dev, np, 0, NULL);
+ 	of_node_put(np);
+-	if (WARN_ON(!anatop_base))
+-		return -ENOMEM;
++	if (WARN_ON(IS_ERR(anatop_base))) {
++		ret = PTR_ERR(base);
++		goto unregister_hws;
++	}
+ 
+ 	clks[IMX93_CLK_AUDIO_PLL] = imx_clk_fracn_gppll("audio_pll", "osc_24m", anatop_base + 0x1200,
+ 							&imx_fracn_gppll);
+@@ -299,8 +301,8 @@ static int imx93_clocks_probe(struct platform_device *pdev)
+ 	np = dev->of_node;
+ 	base = devm_platform_ioremap_resource(pdev, 0);
+ 	if (WARN_ON(IS_ERR(base))) {
+-		iounmap(anatop_base);
+-		return PTR_ERR(base);
++		ret = PTR_ERR(base);
++		goto unregister_hws;
+ 	}
+ 
+ 	for (i = 0; i < ARRAY_SIZE(root_array); i++) {
+@@ -332,7 +334,6 @@ static int imx93_clocks_probe(struct platform_device *pdev)
+ 
+ unregister_hws:
+ 	imx_unregister_hw_clocks(clks, IMX93_CLK_END);
+-	iounmap(anatop_base);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/clk/imx/clk-imxrt1050.c b/drivers/clk/imx/clk-imxrt1050.c
+index fd5c51fc92c0e..08d155feb035a 100644
+--- a/drivers/clk/imx/clk-imxrt1050.c
++++ b/drivers/clk/imx/clk-imxrt1050.c
+@@ -42,7 +42,7 @@ static int imxrt1050_clocks_probe(struct platform_device *pdev)
+ 	struct device_node *anp;
+ 	int ret;
+ 
+-	clk_hw_data = kzalloc(struct_size(clk_hw_data, hws,
++	clk_hw_data = devm_kzalloc(dev, struct_size(clk_hw_data, hws,
+ 					  IMXRT1050_CLK_END), GFP_KERNEL);
+ 	if (WARN_ON(!clk_hw_data))
+ 		return -ENOMEM;
+@@ -53,10 +53,12 @@ static int imxrt1050_clocks_probe(struct platform_device *pdev)
+ 	hws[IMXRT1050_CLK_OSC] = imx_get_clk_hw_by_name(np, "osc");
+ 
+ 	anp = of_find_compatible_node(NULL, NULL, "fsl,imxrt-anatop");
+-	pll_base = of_iomap(anp, 0);
++	pll_base = devm_of_iomap(dev, anp, 0, NULL);
+ 	of_node_put(anp);
+-	if (WARN_ON(!pll_base))
+-		return -ENOMEM;
++	if (WARN_ON(IS_ERR(pll_base))) {
++		ret = PTR_ERR(pll_base);
++		goto unregister_hws;
++	}
+ 
+ 	/* Anatop clocks */
+ 	hws[IMXRT1050_CLK_DUMMY] = imx_clk_hw_fixed("dummy", 0UL);
+@@ -104,8 +106,10 @@ static int imxrt1050_clocks_probe(struct platform_device *pdev)
+ 
+ 	/* CCM clocks */
+ 	ccm_base = devm_platform_ioremap_resource(pdev, 0);
+-	if (WARN_ON(IS_ERR(ccm_base)))
+-		return PTR_ERR(ccm_base);
++	if (WARN_ON(IS_ERR(ccm_base))) {
++		ret = PTR_ERR(ccm_base);
++		goto unregister_hws;
++	}
+ 
+ 	hws[IMXRT1050_CLK_ARM_PODF] = imx_clk_hw_divider("arm_podf", "pll1_arm", ccm_base + 0x10, 0, 3);
+ 	hws[IMXRT1050_CLK_PRE_PERIPH_SEL] = imx_clk_hw_mux("pre_periph_sel", ccm_base + 0x18, 18, 2,
+@@ -149,8 +153,12 @@ static int imxrt1050_clocks_probe(struct platform_device *pdev)
+ 	ret = of_clk_add_hw_provider(np, of_clk_hw_onecell_get, clk_hw_data);
+ 	if (ret < 0) {
+ 		dev_err(dev, "Failed to register clks for i.MXRT1050.\n");
+-		imx_unregister_hw_clocks(hws, IMXRT1050_CLK_END);
++		goto unregister_hws;
+ 	}
++	return 0;
++
++unregister_hws:
++	imx_unregister_hw_clocks(hws, IMXRT1050_CLK_END);
+ 	return ret;
+ }
+ static const struct of_device_id imxrt1050_clk_of_match[] = {
+diff --git a/drivers/clk/imx/clk-scu.c b/drivers/clk/imx/clk-scu.c
+index 1e6870f3671f6..db307890e4c16 100644
+--- a/drivers/clk/imx/clk-scu.c
++++ b/drivers/clk/imx/clk-scu.c
+@@ -707,11 +707,11 @@ struct clk_hw *imx_clk_scu_alloc_dev(const char *name,
+ 
+ void imx_clk_scu_unregister(void)
+ {
+-	struct imx_scu_clk_node *clk;
++	struct imx_scu_clk_node *clk, *n;
+ 	int i;
+ 
+ 	for (i = 0; i < IMX_SC_R_LAST; i++) {
+-		list_for_each_entry(clk, &imx_scu_clks[i], node) {
++		list_for_each_entry_safe(clk, n, &imx_scu_clks[i], node) {
+ 			clk_hw_unregister(clk->hw);
+ 			kfree(clk);
+ 		}
+diff --git a/drivers/clk/keystone/sci-clk.c b/drivers/clk/keystone/sci-clk.c
+index d4b4e74e22da6..254f2cf24be21 100644
+--- a/drivers/clk/keystone/sci-clk.c
++++ b/drivers/clk/keystone/sci-clk.c
+@@ -294,6 +294,8 @@ static int _sci_clk_build(struct sci_clk_provider *provider,
+ 
+ 	name = kasprintf(GFP_KERNEL, "clk:%d:%d", sci_clk->dev_id,
+ 			 sci_clk->clk_id);
++	if (!name)
++		return -ENOMEM;
+ 
+ 	init.name = name;
+ 
+diff --git a/drivers/clk/mediatek/clk-mt8173-apmixedsys.c b/drivers/clk/mediatek/clk-mt8173-apmixedsys.c
+index a56c5845d07a5..0b95d14c18042 100644
+--- a/drivers/clk/mediatek/clk-mt8173-apmixedsys.c
++++ b/drivers/clk/mediatek/clk-mt8173-apmixedsys.c
+@@ -92,11 +92,13 @@ static int clk_mt8173_apmixed_probe(struct platform_device *pdev)
+ 
+ 	base = of_iomap(node, 0);
+ 	if (!base)
+-		return PTR_ERR(base);
++		return -ENOMEM;
+ 
+ 	clk_data = mtk_alloc_clk_data(CLK_APMIXED_NR_CLK);
+-	if (IS_ERR_OR_NULL(clk_data))
++	if (IS_ERR_OR_NULL(clk_data)) {
++		iounmap(base);
+ 		return -ENOMEM;
++	}
+ 
+ 	r = mtk_clk_register_plls(node, plls, ARRAY_SIZE(plls), clk_data);
+ 	if (r)
+@@ -127,6 +129,7 @@ unregister_plls:
+ 	mtk_clk_unregister_plls(plls, ARRAY_SIZE(plls), clk_data);
+ free_clk_data:
+ 	mtk_free_clk_data(clk_data);
++	iounmap(base);
+ 	return r;
+ }
+ 
+diff --git a/drivers/clk/mediatek/clk-mtk.c b/drivers/clk/mediatek/clk-mtk.c
+index 14e8b64a32a3c..b93fb1d80878c 100644
+--- a/drivers/clk/mediatek/clk-mtk.c
++++ b/drivers/clk/mediatek/clk-mtk.c
+@@ -492,8 +492,10 @@ int mtk_clk_simple_probe(struct platform_device *pdev)
+ 	num_clks += mcd->num_mux_clks;
+ 
+ 	clk_data = mtk_alloc_clk_data(num_clks);
+-	if (!clk_data)
+-		return -ENOMEM;
++	if (!clk_data) {
++		r = -ENOMEM;
++		goto free_base;
++	}
+ 
+ 	if (mcd->fixed_clks) {
+ 		r = mtk_clk_register_fixed_clks(mcd->fixed_clks,
+@@ -578,6 +580,7 @@ unregister_fixed_clks:
+ 					      mcd->num_fixed_clks, clk_data);
+ free_data:
+ 	mtk_free_clk_data(clk_data);
++free_base:
+ 	if (mcd->shared_io && base)
+ 		iounmap(base);
+ 	return r;
+diff --git a/drivers/clk/renesas/rzg2l-cpg.c b/drivers/clk/renesas/rzg2l-cpg.c
+index 4bf40f6ccd1d1..22ed543fe6b06 100644
+--- a/drivers/clk/renesas/rzg2l-cpg.c
++++ b/drivers/clk/renesas/rzg2l-cpg.c
+@@ -603,10 +603,8 @@ static int rzg2l_cpg_sipll5_set_rate(struct clk_hw *hw,
+ 	}
+ 
+ 	/* Output clock setting 1 */
+-	writel(CPG_SIPLL5_CLK1_POSTDIV1_WEN | CPG_SIPLL5_CLK1_POSTDIV2_WEN |
+-	       CPG_SIPLL5_CLK1_REFDIV_WEN  | (params.pl5_postdiv1 << 0) |
+-	       (params.pl5_postdiv2 << 4) | (params.pl5_refdiv << 8),
+-	       priv->base + CPG_SIPLL5_CLK1);
++	writel((params.pl5_postdiv1 << 0) | (params.pl5_postdiv2 << 4) |
++	       (params.pl5_refdiv << 8), priv->base + CPG_SIPLL5_CLK1);
+ 
+ 	/* Output clock setting, SSCG modulation value setting 3 */
+ 	writel((params.pl5_fracin << 8), priv->base + CPG_SIPLL5_CLK3);
+diff --git a/drivers/clk/renesas/rzg2l-cpg.h b/drivers/clk/renesas/rzg2l-cpg.h
+index eee780276a9e2..6cee9e56acc72 100644
+--- a/drivers/clk/renesas/rzg2l-cpg.h
++++ b/drivers/clk/renesas/rzg2l-cpg.h
+@@ -32,9 +32,6 @@
+ #define CPG_SIPLL5_STBY_RESETB_WEN	BIT(16)
+ #define CPG_SIPLL5_STBY_SSCG_EN_WEN	BIT(18)
+ #define CPG_SIPLL5_STBY_DOWNSPREAD_WEN	BIT(20)
+-#define CPG_SIPLL5_CLK1_POSTDIV1_WEN	BIT(16)
+-#define CPG_SIPLL5_CLK1_POSTDIV2_WEN	BIT(20)
+-#define CPG_SIPLL5_CLK1_REFDIV_WEN	BIT(24)
+ #define CPG_SIPLL5_CLK4_RESV_LSB	(0xFF)
+ #define CPG_SIPLL5_MON_PLL5_LOCK	BIT(4)
+ 
+diff --git a/drivers/clk/tegra/clk-tegra124-emc.c b/drivers/clk/tegra/clk-tegra124-emc.c
+index 219c80653dbdb..2a6db04342815 100644
+--- a/drivers/clk/tegra/clk-tegra124-emc.c
++++ b/drivers/clk/tegra/clk-tegra124-emc.c
+@@ -464,6 +464,7 @@ static int load_timings_from_dt(struct tegra_clk_emc *tegra,
+ 		err = load_one_timing_from_dt(tegra, timing, child);
+ 		if (err) {
+ 			of_node_put(child);
++			kfree(tegra->timings);
+ 			return err;
+ 		}
+ 
+@@ -515,6 +516,7 @@ struct clk *tegra124_clk_register_emc(void __iomem *base, struct device_node *np
+ 		err = load_timings_from_dt(tegra, node, node_ram_code);
+ 		if (err) {
+ 			of_node_put(node);
++			kfree(tegra);
+ 			return ERR_PTR(err);
+ 		}
+ 	}
+diff --git a/drivers/clk/ti/clkctrl.c b/drivers/clk/ti/clkctrl.c
+index f73f402ff7de9..87e5624789ef6 100644
+--- a/drivers/clk/ti/clkctrl.c
++++ b/drivers/clk/ti/clkctrl.c
+@@ -258,6 +258,9 @@ static const char * __init clkctrl_get_clock_name(struct device_node *np,
+ 	if (clkctrl_name && !legacy_naming) {
+ 		clock_name = kasprintf(GFP_KERNEL, "%s-clkctrl:%04x:%d",
+ 				       clkctrl_name, offset, index);
++		if (!clock_name)
++			return NULL;
++
+ 		strreplace(clock_name, '_', '-');
+ 
+ 		return clock_name;
+@@ -586,6 +589,10 @@ static void __init _ti_omap4_clkctrl_setup(struct device_node *node)
+ 	if (clkctrl_name) {
+ 		provider->clkdm_name = kasprintf(GFP_KERNEL,
+ 						 "%s_clkdm", clkctrl_name);
++		if (!provider->clkdm_name) {
++			kfree(provider);
++			return;
++		}
+ 		goto clkdm_found;
+ 	}
+ 
+diff --git a/drivers/clk/xilinx/clk-xlnx-clock-wizard.c b/drivers/clk/xilinx/clk-xlnx-clock-wizard.c
+index eb1dfe7ecc1b4..4a23583933bcc 100644
+--- a/drivers/clk/xilinx/clk-xlnx-clock-wizard.c
++++ b/drivers/clk/xilinx/clk-xlnx-clock-wizard.c
+@@ -354,7 +354,7 @@ static struct clk *clk_wzrd_register_divider(struct device *dev,
+ 	hw = &div->hw;
+ 	ret = devm_clk_hw_register(dev, hw);
+ 	if (ret)
+-		hw = ERR_PTR(ret);
++		return ERR_PTR(ret);
+ 
+ 	return hw->clk;
+ }
+diff --git a/drivers/clocksource/timer-cadence-ttc.c b/drivers/clocksource/timer-cadence-ttc.c
+index 4efd0cf3b602d..0d52e28fea4de 100644
+--- a/drivers/clocksource/timer-cadence-ttc.c
++++ b/drivers/clocksource/timer-cadence-ttc.c
+@@ -486,10 +486,10 @@ static int __init ttc_timer_probe(struct platform_device *pdev)
+ 	 * and use it. Note that the event timer uses the interrupt and it's the
+ 	 * 2nd TTC hence the irq_of_parse_and_map(,1)
+ 	 */
+-	timer_baseaddr = of_iomap(timer, 0);
+-	if (!timer_baseaddr) {
++	timer_baseaddr = devm_of_iomap(&pdev->dev, timer, 0, NULL);
++	if (IS_ERR(timer_baseaddr)) {
+ 		pr_err("ERROR: invalid timer base address\n");
+-		return -ENXIO;
++		return PTR_ERR(timer_baseaddr);
+ 	}
+ 
+ 	irq = irq_of_parse_and_map(timer, 1);
+@@ -513,20 +513,27 @@ static int __init ttc_timer_probe(struct platform_device *pdev)
+ 	clk_ce = of_clk_get(timer, clksel);
+ 	if (IS_ERR(clk_ce)) {
+ 		pr_err("ERROR: timer input clock not found\n");
+-		return PTR_ERR(clk_ce);
++		ret = PTR_ERR(clk_ce);
++		goto put_clk_cs;
+ 	}
+ 
+ 	ret = ttc_setup_clocksource(clk_cs, timer_baseaddr, timer_width);
+ 	if (ret)
+-		return ret;
++		goto put_clk_ce;
+ 
+ 	ret = ttc_setup_clockevent(clk_ce, timer_baseaddr + 4, irq);
+ 	if (ret)
+-		return ret;
++		goto put_clk_ce;
+ 
+ 	pr_info("%pOFn #0 at %p, irq=%d\n", timer, timer_baseaddr, irq);
+ 
+ 	return 0;
++
++put_clk_ce:
++	clk_put(clk_ce);
++put_clk_cs:
++	clk_put(clk_cs);
++	return ret;
+ }
+ 
+ static const struct of_device_id ttc_timer_of_match[] = {
+diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
+index 48a4613cef1e1..ee9e96a1893c6 100644
+--- a/drivers/cpufreq/intel_pstate.c
++++ b/drivers/cpufreq/intel_pstate.c
+@@ -824,6 +824,8 @@ static ssize_t store_energy_performance_preference(
+ 			err = cpufreq_start_governor(policy);
+ 			if (!ret)
+ 				ret = err;
++		} else {
++			ret = 0;
+ 		}
+ 	}
+ 
+diff --git a/drivers/crypto/marvell/cesa/cipher.c b/drivers/crypto/marvell/cesa/cipher.c
+index c6f2fa753b7c0..0f37dfd42d850 100644
+--- a/drivers/crypto/marvell/cesa/cipher.c
++++ b/drivers/crypto/marvell/cesa/cipher.c
+@@ -297,7 +297,7 @@ static int mv_cesa_des_setkey(struct crypto_skcipher *cipher, const u8 *key,
+ static int mv_cesa_des3_ede_setkey(struct crypto_skcipher *cipher,
+ 				   const u8 *key, unsigned int len)
+ {
+-	struct mv_cesa_des_ctx *ctx = crypto_skcipher_ctx(cipher);
++	struct mv_cesa_des3_ctx *ctx = crypto_skcipher_ctx(cipher);
+ 	int err;
+ 
+ 	err = verify_skcipher_des3_key(cipher, key);
+diff --git a/drivers/crypto/nx/Makefile b/drivers/crypto/nx/Makefile
+index d00181a26dd65..483cef62acee8 100644
+--- a/drivers/crypto/nx/Makefile
++++ b/drivers/crypto/nx/Makefile
+@@ -1,7 +1,6 @@
+ # SPDX-License-Identifier: GPL-2.0
+ obj-$(CONFIG_CRYPTO_DEV_NX_ENCRYPT) += nx-crypto.o
+ nx-crypto-objs := nx.o \
+-		  nx_debugfs.o \
+ 		  nx-aes-cbc.o \
+ 		  nx-aes-ecb.o \
+ 		  nx-aes-gcm.o \
+@@ -11,6 +10,7 @@ nx-crypto-objs := nx.o \
+ 		  nx-sha256.o \
+ 		  nx-sha512.o
+ 
++nx-crypto-$(CONFIG_DEBUG_FS) += nx_debugfs.o
+ obj-$(CONFIG_CRYPTO_DEV_NX_COMPRESS_PSERIES) += nx-compress-pseries.o nx-compress.o
+ obj-$(CONFIG_CRYPTO_DEV_NX_COMPRESS_POWERNV) += nx-compress-powernv.o nx-compress.o
+ nx-compress-objs := nx-842.o
+diff --git a/drivers/crypto/nx/nx.h b/drivers/crypto/nx/nx.h
+index c6233173c612e..2697baebb6a35 100644
+--- a/drivers/crypto/nx/nx.h
++++ b/drivers/crypto/nx/nx.h
+@@ -170,8 +170,8 @@ struct nx_sg *nx_walk_and_build(struct nx_sg *, unsigned int,
+ void nx_debugfs_init(struct nx_crypto_driver *);
+ void nx_debugfs_fini(struct nx_crypto_driver *);
+ #else
+-#define NX_DEBUGFS_INIT(drv)	(0)
+-#define NX_DEBUGFS_FINI(drv)	(0)
++#define NX_DEBUGFS_INIT(drv)	do {} while (0)
++#define NX_DEBUGFS_FINI(drv)	do {} while (0)
+ #endif
+ 
+ #define NX_PAGE_NUM(x)		((u64)(x) & 0xfffffffffffff000ULL)
+diff --git a/drivers/crypto/qat/qat_common/qat_asym_algs.c b/drivers/crypto/qat/qat_common/qat_asym_algs.c
+index 935a7e012946e..4128200a90329 100644
+--- a/drivers/crypto/qat/qat_common/qat_asym_algs.c
++++ b/drivers/crypto/qat/qat_common/qat_asym_algs.c
+@@ -170,15 +170,14 @@ static void qat_dh_cb(struct icp_qat_fw_pke_resp *resp)
+ 	}
+ 
+ 	areq->dst_len = req->ctx.dh->p_size;
++	dma_unmap_single(dev, req->out.dh.r, req->ctx.dh->p_size,
++			 DMA_FROM_DEVICE);
+ 	if (req->dst_align) {
+ 		scatterwalk_map_and_copy(req->dst_align, areq->dst, 0,
+ 					 areq->dst_len, 1);
+ 		kfree_sensitive(req->dst_align);
+ 	}
+ 
+-	dma_unmap_single(dev, req->out.dh.r, req->ctx.dh->p_size,
+-			 DMA_FROM_DEVICE);
+-
+ 	dma_unmap_single(dev, req->phy_in, sizeof(struct qat_dh_input_params),
+ 			 DMA_TO_DEVICE);
+ 	dma_unmap_single(dev, req->phy_out,
+@@ -521,12 +520,14 @@ static void qat_rsa_cb(struct icp_qat_fw_pke_resp *resp)
+ 
+ 	err = (err == ICP_QAT_FW_COMN_STATUS_FLAG_OK) ? 0 : -EINVAL;
+ 
+-	kfree_sensitive(req->src_align);
+-
+ 	dma_unmap_single(dev, req->in.rsa.enc.m, req->ctx.rsa->key_sz,
+ 			 DMA_TO_DEVICE);
+ 
++	kfree_sensitive(req->src_align);
++
+ 	areq->dst_len = req->ctx.rsa->key_sz;
++	dma_unmap_single(dev, req->out.rsa.enc.c, req->ctx.rsa->key_sz,
++			 DMA_FROM_DEVICE);
+ 	if (req->dst_align) {
+ 		scatterwalk_map_and_copy(req->dst_align, areq->dst, 0,
+ 					 areq->dst_len, 1);
+@@ -534,9 +535,6 @@ static void qat_rsa_cb(struct icp_qat_fw_pke_resp *resp)
+ 		kfree_sensitive(req->dst_align);
+ 	}
+ 
+-	dma_unmap_single(dev, req->out.rsa.enc.c, req->ctx.rsa->key_sz,
+-			 DMA_FROM_DEVICE);
+-
+ 	dma_unmap_single(dev, req->phy_in, sizeof(struct qat_rsa_input_params),
+ 			 DMA_TO_DEVICE);
+ 	dma_unmap_single(dev, req->phy_out,
+diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
+index b2fd67fcebfb5..1997bc1bf64aa 100644
+--- a/drivers/cxl/core/region.c
++++ b/drivers/cxl/core/region.c
+@@ -125,10 +125,38 @@ static struct cxl_region_ref *cxl_rr_load(struct cxl_port *port,
+ 	return xa_load(&port->regions, (unsigned long)cxlr);
+ }
+ 
++static int cxl_region_invalidate_memregion(struct cxl_region *cxlr)
++{
++	if (!cpu_cache_has_invalidate_memregion()) {
++		if (IS_ENABLED(CONFIG_CXL_REGION_INVALIDATION_TEST)) {
++			dev_warn_once(
++				&cxlr->dev,
++				"Bypassing cpu_cache_invalidate_memregion() for testing!\n");
++			return 0;
++		} else {
++			dev_err(&cxlr->dev,
++				"Failed to synchronize CPU cache state\n");
++			return -ENXIO;
++		}
++	}
++
++	cpu_cache_invalidate_memregion(IORES_DESC_CXL);
++	return 0;
++}
++
+ static int cxl_region_decode_reset(struct cxl_region *cxlr, int count)
+ {
+ 	struct cxl_region_params *p = &cxlr->params;
+-	int i;
++	int i, rc = 0;
++
++	/*
++	 * Before region teardown attempt to flush, and if the flush
++	 * fails cancel the region teardown for data consistency
++	 * concerns
++	 */
++	rc = cxl_region_invalidate_memregion(cxlr);
++	if (rc)
++		return rc;
+ 
+ 	for (i = count - 1; i >= 0; i--) {
+ 		struct cxl_endpoint_decoder *cxled = p->targets[i];
+@@ -136,7 +164,6 @@ static int cxl_region_decode_reset(struct cxl_region *cxlr, int count)
+ 		struct cxl_port *iter = cxled_to_port(cxled);
+ 		struct cxl_dev_state *cxlds = cxlmd->cxlds;
+ 		struct cxl_ep *ep;
+-		int rc = 0;
+ 
+ 		if (cxlds->rcd)
+ 			goto endpoint_reset;
+@@ -155,14 +182,19 @@ static int cxl_region_decode_reset(struct cxl_region *cxlr, int count)
+ 				rc = cxld->reset(cxld);
+ 			if (rc)
+ 				return rc;
++			set_bit(CXL_REGION_F_NEEDS_RESET, &cxlr->flags);
+ 		}
+ 
+ endpoint_reset:
+ 		rc = cxled->cxld.reset(&cxled->cxld);
+ 		if (rc)
+ 			return rc;
++		set_bit(CXL_REGION_F_NEEDS_RESET, &cxlr->flags);
+ 	}
+ 
++	/* all decoders associated with this region have been torn down */
++	clear_bit(CXL_REGION_F_NEEDS_RESET, &cxlr->flags);
++
+ 	return 0;
+ }
+ 
+@@ -256,9 +288,19 @@ static ssize_t commit_store(struct device *dev, struct device_attribute *attr,
+ 		goto out;
+ 	}
+ 
+-	if (commit)
++	/*
++	 * Invalidate caches before region setup to drop any speculative
++	 * consumption of this address space
++	 */
++	rc = cxl_region_invalidate_memregion(cxlr);
++	if (rc)
++		return rc;
++
++	if (commit) {
+ 		rc = cxl_region_decode_commit(cxlr);
+-	else {
++		if (rc == 0)
++			p->state = CXL_CONFIG_COMMIT;
++	} else {
+ 		p->state = CXL_CONFIG_RESET_PENDING;
+ 		up_write(&cxl_region_rwsem);
+ 		device_release_driver(&cxlr->dev);
+@@ -268,18 +310,20 @@ static ssize_t commit_store(struct device *dev, struct device_attribute *attr,
+ 		 * The lock was dropped, so need to revalidate that the reset is
+ 		 * still pending.
+ 		 */
+-		if (p->state == CXL_CONFIG_RESET_PENDING)
++		if (p->state == CXL_CONFIG_RESET_PENDING) {
+ 			rc = cxl_region_decode_reset(cxlr, p->interleave_ways);
++			/*
++			 * Revert to committed since there may still be active
++			 * decoders associated with this region, or move forward
++			 * to active to mark the reset successful
++			 */
++			if (rc)
++				p->state = CXL_CONFIG_COMMIT;
++			else
++				p->state = CXL_CONFIG_ACTIVE;
++		}
+ 	}
+ 
+-	if (rc)
+-		goto out;
+-
+-	if (commit)
+-		p->state = CXL_CONFIG_COMMIT;
+-	else if (p->state == CXL_CONFIG_RESET_PENDING)
+-		p->state = CXL_CONFIG_ACTIVE;
+-
+ out:
+ 	up_write(&cxl_region_rwsem);
+ 
+@@ -1674,7 +1718,6 @@ static int cxl_region_attach(struct cxl_region *cxlr,
+ 		if (rc)
+ 			goto err_decrement;
+ 		p->state = CXL_CONFIG_ACTIVE;
+-		set_bit(CXL_REGION_F_INCOHERENT, &cxlr->flags);
+ 	}
+ 
+ 	cxled->cxld.interleave_ways = p->interleave_ways;
+@@ -2679,30 +2722,6 @@ out:
+ }
+ EXPORT_SYMBOL_NS_GPL(cxl_add_to_region, CXL);
+ 
+-static int cxl_region_invalidate_memregion(struct cxl_region *cxlr)
+-{
+-	if (!test_bit(CXL_REGION_F_INCOHERENT, &cxlr->flags))
+-		return 0;
+-
+-	if (!cpu_cache_has_invalidate_memregion()) {
+-		if (IS_ENABLED(CONFIG_CXL_REGION_INVALIDATION_TEST)) {
+-			dev_warn_once(
+-				&cxlr->dev,
+-				"Bypassing cpu_cache_invalidate_memregion() for testing!\n");
+-			clear_bit(CXL_REGION_F_INCOHERENT, &cxlr->flags);
+-			return 0;
+-		} else {
+-			dev_err(&cxlr->dev,
+-				"Failed to synchronize CPU cache state\n");
+-			return -ENXIO;
+-		}
+-	}
+-
+-	cpu_cache_invalidate_memregion(IORES_DESC_CXL);
+-	clear_bit(CXL_REGION_F_INCOHERENT, &cxlr->flags);
+-	return 0;
+-}
+-
+ static int is_system_ram(struct resource *res, void *arg)
+ {
+ 	struct cxl_region *cxlr = arg;
+@@ -2730,7 +2749,12 @@ static int cxl_region_probe(struct device *dev)
+ 		goto out;
+ 	}
+ 
+-	rc = cxl_region_invalidate_memregion(cxlr);
++	if (test_bit(CXL_REGION_F_NEEDS_RESET, &cxlr->flags)) {
++		dev_err(&cxlr->dev,
++			"failed to activate, re-commit region and retry\n");
++		rc = -ENXIO;
++		goto out;
++	}
+ 
+ 	/*
+ 	 * From this point on any path that changes the region's state away from
+diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
+index 044a92d9813e2..dcebe48bb5bb5 100644
+--- a/drivers/cxl/cxl.h
++++ b/drivers/cxl/cxl.h
+@@ -462,18 +462,20 @@ struct cxl_region_params {
+ 	int nr_targets;
+ };
+ 
+-/*
+- * Flag whether this region needs to have its HPA span synchronized with
+- * CPU cache state at region activation time.
+- */
+-#define CXL_REGION_F_INCOHERENT 0
+-
+ /*
+  * Indicate whether this region has been assembled by autodetection or
+  * userspace assembly. Prevent endpoint decoders outside of automatic
+  * detection from being added to the region.
+  */
+-#define CXL_REGION_F_AUTO 1
++#define CXL_REGION_F_AUTO 0
++
++/*
++ * Require that a committed region successfully complete a teardown once
++ * any of its associated decoders have been torn down. This maintains
++ * the commit state for the region since there are committed decoders,
++ * but blocks cxl_region_probe().
++ */
++#define CXL_REGION_F_NEEDS_RESET 1
+ 
+ /**
+  * struct cxl_region - CXL region
+diff --git a/drivers/dax/bus.c b/drivers/dax/bus.c
+index 227800053309f..e7c61358564e1 100644
+--- a/drivers/dax/bus.c
++++ b/drivers/dax/bus.c
+@@ -446,18 +446,34 @@ static void unregister_dev_dax(void *dev)
+ 	put_device(dev);
+ }
+ 
++static void dax_region_free(struct kref *kref)
++{
++	struct dax_region *dax_region;
++
++	dax_region = container_of(kref, struct dax_region, kref);
++	kfree(dax_region);
++}
++
++void dax_region_put(struct dax_region *dax_region)
++{
++	kref_put(&dax_region->kref, dax_region_free);
++}
++EXPORT_SYMBOL_GPL(dax_region_put);
++
+ /* a return value >= 0 indicates this invocation invalidated the id */
+ static int __free_dev_dax_id(struct dev_dax *dev_dax)
+ {
+-	struct dax_region *dax_region = dev_dax->region;
+ 	struct device *dev = &dev_dax->dev;
++	struct dax_region *dax_region;
+ 	int rc = dev_dax->id;
+ 
+ 	device_lock_assert(dev);
+ 
+-	if (is_static(dax_region) || dev_dax->id < 0)
++	if (!dev_dax->dyn_id || dev_dax->id < 0)
+ 		return -1;
++	dax_region = dev_dax->region;
+ 	ida_free(&dax_region->ida, dev_dax->id);
++	dax_region_put(dax_region);
+ 	dev_dax->id = -1;
+ 	return rc;
+ }
+@@ -473,6 +489,20 @@ static int free_dev_dax_id(struct dev_dax *dev_dax)
+ 	return rc;
+ }
+ 
++static int alloc_dev_dax_id(struct dev_dax *dev_dax)
++{
++	struct dax_region *dax_region = dev_dax->region;
++	int id;
++
++	id = ida_alloc(&dax_region->ida, GFP_KERNEL);
++	if (id < 0)
++		return id;
++	kref_get(&dax_region->kref);
++	dev_dax->dyn_id = true;
++	dev_dax->id = id;
++	return id;
++}
++
+ static ssize_t delete_store(struct device *dev, struct device_attribute *attr,
+ 		const char *buf, size_t len)
+ {
+@@ -560,20 +590,6 @@ static const struct attribute_group *dax_region_attribute_groups[] = {
+ 	NULL,
+ };
+ 
+-static void dax_region_free(struct kref *kref)
+-{
+-	struct dax_region *dax_region;
+-
+-	dax_region = container_of(kref, struct dax_region, kref);
+-	kfree(dax_region);
+-}
+-
+-void dax_region_put(struct dax_region *dax_region)
+-{
+-	kref_put(&dax_region->kref, dax_region_free);
+-}
+-EXPORT_SYMBOL_GPL(dax_region_put);
+-
+ static void dax_region_unregister(void *region)
+ {
+ 	struct dax_region *dax_region = region;
+@@ -635,10 +651,12 @@ EXPORT_SYMBOL_GPL(alloc_dax_region);
+ static void dax_mapping_release(struct device *dev)
+ {
+ 	struct dax_mapping *mapping = to_dax_mapping(dev);
+-	struct dev_dax *dev_dax = to_dev_dax(dev->parent);
++	struct device *parent = dev->parent;
++	struct dev_dax *dev_dax = to_dev_dax(parent);
+ 
+ 	ida_free(&dev_dax->ida, mapping->id);
+ 	kfree(mapping);
++	put_device(parent);
+ }
+ 
+ static void unregister_dax_mapping(void *data)
+@@ -778,6 +796,7 @@ static int devm_register_dax_mapping(struct dev_dax *dev_dax, int range_id)
+ 	dev = &mapping->dev;
+ 	device_initialize(dev);
+ 	dev->parent = &dev_dax->dev;
++	get_device(dev->parent);
+ 	dev->type = &dax_mapping_type;
+ 	dev_set_name(dev, "mapping%d", mapping->id);
+ 	rc = device_add(dev);
+@@ -1295,12 +1314,10 @@ static const struct attribute_group *dax_attribute_groups[] = {
+ static void dev_dax_release(struct device *dev)
+ {
+ 	struct dev_dax *dev_dax = to_dev_dax(dev);
+-	struct dax_region *dax_region = dev_dax->region;
+ 	struct dax_device *dax_dev = dev_dax->dax_dev;
+ 
+ 	put_dax(dax_dev);
+ 	free_dev_dax_id(dev_dax);
+-	dax_region_put(dax_region);
+ 	kfree(dev_dax->pgmap);
+ 	kfree(dev_dax);
+ }
+@@ -1324,6 +1341,7 @@ struct dev_dax *devm_create_dev_dax(struct dev_dax_data *data)
+ 	if (!dev_dax)
+ 		return ERR_PTR(-ENOMEM);
+ 
++	dev_dax->region = dax_region;
+ 	if (is_static(dax_region)) {
+ 		if (dev_WARN_ONCE(parent, data->id < 0,
+ 				"dynamic id specified to static region\n")) {
+@@ -1339,13 +1357,11 @@ struct dev_dax *devm_create_dev_dax(struct dev_dax_data *data)
+ 			goto err_id;
+ 		}
+ 
+-		rc = ida_alloc(&dax_region->ida, GFP_KERNEL);
++		rc = alloc_dev_dax_id(dev_dax);
+ 		if (rc < 0)
+ 			goto err_id;
+-		dev_dax->id = rc;
+ 	}
+ 
+-	dev_dax->region = dax_region;
+ 	dev = &dev_dax->dev;
+ 	device_initialize(dev);
+ 	dev_set_name(dev, "dax%d.%d", dax_region->id, dev_dax->id);
+@@ -1386,7 +1402,6 @@ struct dev_dax *devm_create_dev_dax(struct dev_dax_data *data)
+ 	dev_dax->target_node = dax_region->target_node;
+ 	dev_dax->align = dax_region->align;
+ 	ida_init(&dev_dax->ida);
+-	kref_get(&dax_region->kref);
+ 
+ 	inode = dax_inode(dax_dev);
+ 	dev->devt = inode->i_rdev;
+diff --git a/drivers/dax/dax-private.h b/drivers/dax/dax-private.h
+index 1c974b7caae6e..afcada6fd2eda 100644
+--- a/drivers/dax/dax-private.h
++++ b/drivers/dax/dax-private.h
+@@ -52,7 +52,8 @@ struct dax_mapping {
+  * @region - parent region
+  * @dax_dev - core dax functionality
+  * @target_node: effective numa node if dev_dax memory range is onlined
+- * @id: ida allocated id
++ * @dyn_id: is this a dynamic or statically created instance
++ * @id: ida allocated id when the dax_region is not static
+  * @ida: mapping id allocator
+  * @dev - device core
+  * @pgmap - pgmap for memmap setup / lifetime (driver owned)
+@@ -64,6 +65,7 @@ struct dev_dax {
+ 	struct dax_device *dax_dev;
+ 	unsigned int align;
+ 	int target_node;
++	bool dyn_id;
+ 	int id;
+ 	struct ida ida;
+ 	struct device dev;
+diff --git a/drivers/dax/kmem.c b/drivers/dax/kmem.c
+index 7b36db6f1cbdc..898ca95057547 100644
+--- a/drivers/dax/kmem.c
++++ b/drivers/dax/kmem.c
+@@ -99,7 +99,7 @@ static int dev_dax_kmem_probe(struct dev_dax *dev_dax)
+ 	if (!data->res_name)
+ 		goto err_res_name;
+ 
+-	rc = memory_group_register_static(numa_node, total_len);
++	rc = memory_group_register_static(numa_node, PFN_UP(total_len));
+ 	if (rc < 0)
+ 		goto err_reg_mgid;
+ 	data->mgid = rc;
+diff --git a/drivers/firmware/efi/libstub/efi-stub-helper.c b/drivers/firmware/efi/libstub/efi-stub-helper.c
+index 1e0203d74691f..732984295295f 100644
+--- a/drivers/firmware/efi/libstub/efi-stub-helper.c
++++ b/drivers/firmware/efi/libstub/efi-stub-helper.c
+@@ -378,6 +378,9 @@ efi_status_t efi_exit_boot_services(void *handle, void *priv,
+ 	struct efi_boot_memmap *map;
+ 	efi_status_t status;
+ 
++	if (efi_disable_pci_dma)
++		efi_pci_disable_bridge_busmaster();
++
+ 	status = efi_get_memory_map(&map, true);
+ 	if (status != EFI_SUCCESS)
+ 		return status;
+@@ -388,9 +391,6 @@ efi_status_t efi_exit_boot_services(void *handle, void *priv,
+ 		return status;
+ 	}
+ 
+-	if (efi_disable_pci_dma)
+-		efi_pci_disable_bridge_busmaster();
+-
+ 	status = efi_bs_call(exit_boot_services, handle, map->map_key);
+ 
+ 	if (status == EFI_INVALID_PARAMETER) {
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
+index 63dfcc98152d5..b3daca6372a90 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
+@@ -170,8 +170,7 @@ static int amdgpu_reserve_page_direct(struct amdgpu_device *adev, uint64_t addre
+ 
+ 	memset(&err_rec, 0x0, sizeof(struct eeprom_table_record));
+ 	err_data.err_addr = &err_rec;
+-	amdgpu_umc_fill_error_record(&err_data, address,
+-			(address >> AMDGPU_GPU_PAGE_SHIFT), 0, 0);
++	amdgpu_umc_fill_error_record(&err_data, address, address, 0, 0);
+ 
+ 	if (amdgpu_bad_page_threshold != 0) {
+ 		amdgpu_ras_add_bad_pages(adev, err_data.err_addr,
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+index 587879f3ac2e6..30c0c49b37105 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+@@ -1436,14 +1436,14 @@ int amdgpu_vm_bo_map(struct amdgpu_device *adev,
+ 	uint64_t eaddr;
+ 
+ 	/* validate the parameters */
+-	if (saddr & ~PAGE_MASK || offset & ~PAGE_MASK ||
+-	    size == 0 || size & ~PAGE_MASK)
++	if (saddr & ~PAGE_MASK || offset & ~PAGE_MASK || size & ~PAGE_MASK)
++		return -EINVAL;
++	if (saddr + size <= saddr || offset + size <= offset)
+ 		return -EINVAL;
+ 
+ 	/* make sure object fit at this offset */
+ 	eaddr = saddr + size - 1;
+-	if (saddr >= eaddr ||
+-	    (bo && offset + size > amdgpu_bo_size(bo)) ||
++	if ((bo && offset + size > amdgpu_bo_size(bo)) ||
+ 	    (eaddr >= adev->vm_manager.max_pfn << AMDGPU_GPU_PAGE_SHIFT))
+ 		return -EINVAL;
+ 
+@@ -1502,14 +1502,14 @@ int amdgpu_vm_bo_replace_map(struct amdgpu_device *adev,
+ 	int r;
+ 
+ 	/* validate the parameters */
+-	if (saddr & ~PAGE_MASK || offset & ~PAGE_MASK ||
+-	    size == 0 || size & ~PAGE_MASK)
++	if (saddr & ~PAGE_MASK || offset & ~PAGE_MASK || size & ~PAGE_MASK)
++		return -EINVAL;
++	if (saddr + size <= saddr || offset + size <= offset)
+ 		return -EINVAL;
+ 
+ 	/* make sure object fit at this offset */
+ 	eaddr = saddr + size - 1;
+-	if (saddr >= eaddr ||
+-	    (bo && offset + size > amdgpu_bo_size(bo)) ||
++	if ((bo && offset + size > amdgpu_bo_size(bo)) ||
+ 	    (eaddr >= adev->vm_manager.max_pfn << AMDGPU_GPU_PAGE_SHIFT))
+ 		return -EINVAL;
+ 
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c
+index 0778e587a2d68..eaf084acb706f 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c
+@@ -115,18 +115,19 @@ static struct kfd_mem_obj *allocate_mqd(struct kfd_dev *kfd,
+ 			&(mqd_mem_obj->gtt_mem),
+ 			&(mqd_mem_obj->gpu_addr),
+ 			(void *)&(mqd_mem_obj->cpu_ptr), true);
++
++		if (retval) {
++			kfree(mqd_mem_obj);
++			return NULL;
++		}
+ 	} else {
+ 		retval = kfd_gtt_sa_allocate(kfd, sizeof(struct v9_mqd),
+ 				&mqd_mem_obj);
+-	}
+-
+-	if (retval) {
+-		kfree(mqd_mem_obj);
+-		return NULL;
++		if (retval)
++			return NULL;
+ 	}
+ 
+ 	return mqd_mem_obj;
+-
+ }
+ 
+ static void init_mqd(struct mqd_manager *mm, void **mqd,
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 2cbd6949804f5..261dbd417c2f8 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -9276,6 +9276,8 @@ static int dm_update_crtc_state(struct amdgpu_display_manager *dm,
+ 
+ 		/* Now check if we should set freesync video mode */
+ 		if (amdgpu_freesync_vid_mode && dm_new_crtc_state->stream &&
++		    dc_is_stream_unchanged(new_stream, dm_old_crtc_state->stream) &&
++		    dc_is_stream_scaling_unchanged(new_stream, dm_old_crtc_state->stream) &&
+ 		    is_timing_unchanged_for_freesync(new_crtc_state,
+ 						     old_crtc_state)) {
+ 			new_crtc_state->mode_changed = false;
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+index 3da519957f6c8..0096614f2a8be 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+@@ -48,8 +48,7 @@
+ #endif
+ 
+ #include "dc/dcn20/dcn20_resource.h"
+-bool is_timing_changed(struct dc_stream_state *cur_stream,
+-		       struct dc_stream_state *new_stream);
++
+ #define PEAK_FACTOR_X1000 1006
+ 
+ static ssize_t dm_dp_aux_transfer(struct drm_dp_aux *aux,
+@@ -1426,7 +1425,7 @@ int pre_validate_dsc(struct drm_atomic_state *state,
+ 		struct dc_stream_state *stream = dm_state->context->streams[i];
+ 
+ 		if (local_dc_state->streams[i] &&
+-		    is_timing_changed(stream, local_dc_state->streams[i])) {
++		    dc_is_timing_changed(stream, local_dc_state->streams[i])) {
+ 			DRM_INFO_ONCE("crtc[%d] needs mode_changed\n", i);
+ 		} else {
+ 			int ind = find_crtc_index_in_state_by_stream(state, stream);
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn30/dcn30_clk_mgr_smu_msg.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn30/dcn30_clk_mgr_smu_msg.c
+index 1fbf1c105dc12..bdbf183066981 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn30/dcn30_clk_mgr_smu_msg.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn30/dcn30_clk_mgr_smu_msg.c
+@@ -312,6 +312,9 @@ void dcn30_smu_set_display_refresh_from_mall(struct clk_mgr_internal *clk_mgr, b
+ 	/* bits 8:7 for cache timer scale, bits 6:1 for cache timer delay, bit 0 = 1 for enable, = 0 for disable */
+ 	uint32_t param = (cache_timer_scale << 7) | (cache_timer_delay << 1) | (enable ? 1 : 0);
+ 
++	smu_print("SMU Set display refresh from mall: enable = %d, cache_timer_delay = %d, cache_timer_scale = %d\n",
++		enable, cache_timer_delay, cache_timer_scale);
++
+ 	dcn30_smu_send_msg_with_param(clk_mgr,
+ 			DALSMC_MSG_SetDisplayRefreshFromMall, param, NULL);
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index 9ec0a343efadb..e6e26fe1be0f8 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -2540,9 +2540,6 @@ static enum surface_update_type det_surface_update(const struct dc *dc,
+ 	enum surface_update_type overall_type = UPDATE_TYPE_FAST;
+ 	union surface_update_flags *update_flags = &u->surface->update_flags;
+ 
+-	if (u->flip_addr)
+-		update_flags->bits.addr_update = 1;
+-
+ 	if (!is_surface_in_context(context, u->surface) || u->surface->force_full_update) {
+ 		update_flags->raw = 0xFFFFFFFF;
+ 		return UPDATE_TYPE_FULL;
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+index 986de684b078e..7b0fd0dc31b34 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+@@ -1878,7 +1878,7 @@ bool dc_add_all_planes_for_stream(
+ 	return add_all_planes_for_stream(dc, stream, &set, 1, context);
+ }
+ 
+-bool is_timing_changed(struct dc_stream_state *cur_stream,
++bool dc_is_timing_changed(struct dc_stream_state *cur_stream,
+ 		       struct dc_stream_state *new_stream)
+ {
+ 	if (cur_stream == NULL)
+@@ -1903,7 +1903,7 @@ static bool are_stream_backends_same(
+ 	if (stream_a == NULL || stream_b == NULL)
+ 		return false;
+ 
+-	if (is_timing_changed(stream_a, stream_b))
++	if (dc_is_timing_changed(stream_a, stream_b))
+ 		return false;
+ 
+ 	if (stream_a->signal != stream_b->signal)
+@@ -3527,7 +3527,7 @@ bool pipe_need_reprogram(
+ 	if (pipe_ctx_old->stream_res.stream_enc != pipe_ctx->stream_res.stream_enc)
+ 		return true;
+ 
+-	if (is_timing_changed(pipe_ctx_old->stream, pipe_ctx->stream))
++	if (dc_is_timing_changed(pipe_ctx_old->stream, pipe_ctx->stream))
+ 		return true;
+ 
+ 	if (pipe_ctx_old->stream->dpms_off != pipe_ctx->stream->dpms_off)
+diff --git a/drivers/gpu/drm/amd/display/dc/dc.h b/drivers/gpu/drm/amd/display/dc/dc.h
+index 3fb868f2f6f5b..9307442dc2258 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc.h
++++ b/drivers/gpu/drm/amd/display/dc/dc.h
+@@ -2223,4 +2223,7 @@ void dc_process_dmub_dpia_hpd_int_enable(const struct dc *dc,
+ /* Disable acc mode Interfaces */
+ void dc_disable_accelerated_mode(struct dc *dc);
+ 
++bool dc_is_timing_changed(struct dc_stream_state *cur_stream,
++		       struct dc_stream_state *new_stream);
++
+ #endif /* DC_INTERFACE_H_ */
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn21/display_mode_vba_21.c b/drivers/gpu/drm/amd/display/dc/dml/dcn21/display_mode_vba_21.c
+index b7c2844d0cbee..f294f2f8c75bc 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn21/display_mode_vba_21.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn21/display_mode_vba_21.c
+@@ -810,7 +810,7 @@ static bool CalculatePrefetchSchedule(
+ 			*swath_width_chroma_ub = dml_ceil(SwathWidthY / 2 - 1, myPipe->BlockWidth256BytesC) + myPipe->BlockWidth256BytesC;
+ 	} else {
+ 		*swath_width_luma_ub = dml_ceil(SwathWidthY - 1, myPipe->BlockHeight256BytesY) + myPipe->BlockHeight256BytesY;
+-		if (myPipe->BlockWidth256BytesC > 0)
++		if (myPipe->BlockHeight256BytesC > 0)
+ 			*swath_width_chroma_ub = dml_ceil(SwathWidthY / 2 - 1, myPipe->BlockHeight256BytesC) + myPipe->BlockHeight256BytesC;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn32/display_rq_dlg_calc_32.c b/drivers/gpu/drm/amd/display/dc/dml/dcn32/display_rq_dlg_calc_32.c
+index 395ae8761980f..9ba6cb67655f4 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn32/display_rq_dlg_calc_32.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn32/display_rq_dlg_calc_32.c
+@@ -116,7 +116,7 @@ void dml32_rq_dlg_get_rq_reg(display_rq_regs_st *rq_regs,
+ 	else
+ 		rq_regs->rq_regs_l.min_meta_chunk_size = dml_log2(min_meta_chunk_bytes) - 6 + 1;
+ 
+-	if (min_meta_chunk_bytes == 0)
++	if (p1_min_meta_chunk_bytes == 0)
+ 		rq_regs->rq_regs_c.min_meta_chunk_size = 0;
+ 	else
+ 		rq_regs->rq_regs_c.min_meta_chunk_size = dml_log2(p1_min_meta_chunk_bytes) - 6 + 1;
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
+index 85d53597eb07a..f7ed3e655e397 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
+@@ -431,7 +431,13 @@ static int sienna_cichlid_append_powerplay_table(struct smu_context *smu)
+ {
+ 	struct atom_smc_dpm_info_v4_9 *smc_dpm_table;
+ 	int index, ret;
+-	I2cControllerConfig_t *table_member;
++	PPTable_beige_goby_t *ppt_beige_goby;
++	PPTable_t *ppt;
++
++	if (smu->adev->ip_versions[MP1_HWIP][0] == IP_VERSION(11, 0, 13))
++		ppt_beige_goby = smu->smu_table.driver_pptable;
++	else
++		ppt = smu->smu_table.driver_pptable;
+ 
+ 	index = get_index_into_master_table(atom_master_list_of_data_tables_v2_1,
+ 					    smc_dpm_info);
+@@ -440,9 +446,13 @@ static int sienna_cichlid_append_powerplay_table(struct smu_context *smu)
+ 				      (uint8_t **)&smc_dpm_table);
+ 	if (ret)
+ 		return ret;
+-	GET_PPTABLE_MEMBER(I2cControllers, &table_member);
+-	memcpy(table_member, smc_dpm_table->I2cControllers,
+-			sizeof(*smc_dpm_table) - sizeof(smc_dpm_table->table_header));
++
++	if (smu->adev->ip_versions[MP1_HWIP][0] == IP_VERSION(11, 0, 13))
++		smu_memcpy_trailing(ppt_beige_goby, I2cControllers, BoardReserved,
++				    smc_dpm_table, I2cControllers);
++	else
++		smu_memcpy_trailing(ppt, I2cControllers, BoardReserved,
++				    smc_dpm_table, I2cControllers);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/bridge/analogix/anx7625.c b/drivers/gpu/drm/bridge/analogix/anx7625.c
+index 6846199a2ee14..9e387c3e9b696 100644
+--- a/drivers/gpu/drm/bridge/analogix/anx7625.c
++++ b/drivers/gpu/drm/bridge/analogix/anx7625.c
+@@ -1687,6 +1687,14 @@ static int anx7625_parse_dt(struct device *dev,
+ 	if (of_property_read_bool(np, "analogix,audio-enable"))
+ 		pdata->audio_en = 1;
+ 
++	return 0;
++}
++
++static int anx7625_parse_dt_panel(struct device *dev,
++				  struct anx7625_platform_data *pdata)
++{
++	struct device_node *np = dev->of_node;
++
+ 	pdata->panel_bridge = devm_drm_of_get_bridge(dev, np, 1, 0);
+ 	if (IS_ERR(pdata->panel_bridge)) {
+ 		if (PTR_ERR(pdata->panel_bridge) == -ENODEV) {
+@@ -2032,7 +2040,7 @@ static int anx7625_register_audio(struct device *dev, struct anx7625_data *ctx)
+ 	return 0;
+ }
+ 
+-static int anx7625_attach_dsi(struct anx7625_data *ctx)
++static int anx7625_setup_dsi_device(struct anx7625_data *ctx)
+ {
+ 	struct mipi_dsi_device *dsi;
+ 	struct device *dev = &ctx->client->dev;
+@@ -2042,9 +2050,6 @@ static int anx7625_attach_dsi(struct anx7625_data *ctx)
+ 		.channel = 0,
+ 		.node = NULL,
+ 	};
+-	int ret;
+-
+-	DRM_DEV_DEBUG_DRIVER(dev, "attach dsi\n");
+ 
+ 	host = of_find_mipi_dsi_host_by_node(ctx->pdata.mipi_host_node);
+ 	if (!host) {
+@@ -2065,14 +2070,24 @@ static int anx7625_attach_dsi(struct anx7625_data *ctx)
+ 		MIPI_DSI_MODE_VIDEO_HSE	|
+ 		MIPI_DSI_HS_PKT_END_ALIGNED;
+ 
+-	ret = devm_mipi_dsi_attach(dev, dsi);
++	ctx->dsi = dsi;
++
++	return 0;
++}
++
++static int anx7625_attach_dsi(struct anx7625_data *ctx)
++{
++	struct device *dev = &ctx->client->dev;
++	int ret;
++
++	DRM_DEV_DEBUG_DRIVER(dev, "attach dsi\n");
++
++	ret = devm_mipi_dsi_attach(dev, ctx->dsi);
+ 	if (ret) {
+ 		DRM_DEV_ERROR(dev, "fail to attach dsi to host.\n");
+ 		return ret;
+ 	}
+ 
+-	ctx->dsi = dsi;
+-
+ 	DRM_DEV_DEBUG_DRIVER(dev, "attach dsi succeeded.\n");
+ 
+ 	return 0;
+@@ -2560,6 +2575,40 @@ static void anx7625_runtime_disable(void *data)
+ 	pm_runtime_disable(data);
+ }
+ 
++static int anx7625_link_bridge(struct drm_dp_aux *aux)
++{
++	struct anx7625_data *platform = container_of(aux, struct anx7625_data, aux);
++	struct device *dev = aux->dev;
++	int ret;
++
++	ret = anx7625_parse_dt_panel(dev, &platform->pdata);
++	if (ret) {
++		DRM_DEV_ERROR(dev, "fail to parse DT for panel : %d\n", ret);
++		return ret;
++	}
++
++	platform->bridge.funcs = &anx7625_bridge_funcs;
++	platform->bridge.of_node = dev->of_node;
++	if (!anx7625_of_panel_on_aux_bus(dev))
++		platform->bridge.ops |= DRM_BRIDGE_OP_EDID;
++	if (!platform->pdata.panel_bridge)
++		platform->bridge.ops |= DRM_BRIDGE_OP_HPD |
++					DRM_BRIDGE_OP_DETECT;
++	platform->bridge.type = platform->pdata.panel_bridge ?
++				    DRM_MODE_CONNECTOR_eDP :
++				    DRM_MODE_CONNECTOR_DisplayPort;
++
++	drm_bridge_add(&platform->bridge);
++
++	if (!platform->pdata.is_dpi) {
++		ret = anx7625_attach_dsi(platform);
++		if (ret)
++			drm_bridge_remove(&platform->bridge);
++	}
++
++	return ret;
++}
++
+ static int anx7625_i2c_probe(struct i2c_client *client)
+ {
+ 	struct anx7625_data *platform;
+@@ -2634,6 +2683,24 @@ static int anx7625_i2c_probe(struct i2c_client *client)
+ 	platform->aux.wait_hpd_asserted = anx7625_wait_hpd_asserted;
+ 	drm_dp_aux_init(&platform->aux);
+ 
++	ret = anx7625_parse_dt(dev, pdata);
++	if (ret) {
++		if (ret != -EPROBE_DEFER)
++			DRM_DEV_ERROR(dev, "fail to parse DT : %d\n", ret);
++		goto free_wq;
++	}
++
++	if (!platform->pdata.is_dpi) {
++		ret = anx7625_setup_dsi_device(platform);
++		if (ret < 0)
++			goto free_wq;
++	}
++
++	/*
++	 * Registering the i2c devices will retrigger deferred probe, so it
++	 * needs to be done after calls that might return EPROBE_DEFER,
++	 * otherwise we can get an infinite loop.
++	 */
+ 	if (anx7625_register_i2c_dummy_clients(platform, client) != 0) {
+ 		ret = -ENOMEM;
+ 		DRM_DEV_ERROR(dev, "fail to reserve I2C bus.\n");
+@@ -2648,13 +2715,21 @@ static int anx7625_i2c_probe(struct i2c_client *client)
+ 	if (ret)
+ 		goto free_wq;
+ 
+-	devm_of_dp_aux_populate_ep_devices(&platform->aux);
+-
+-	ret = anx7625_parse_dt(dev, pdata);
++	/*
++	 * Populating the aux bus will retrigger deferred probe, so it needs to
++	 * be done after calls that might return EPROBE_DEFER, otherwise we can
++	 * get an infinite loop.
++	 */
++	ret = devm_of_dp_aux_populate_bus(&platform->aux, anx7625_link_bridge);
+ 	if (ret) {
+-		if (ret != -EPROBE_DEFER)
+-			DRM_DEV_ERROR(dev, "fail to parse DT : %d\n", ret);
+-		goto free_wq;
++		if (ret != -ENODEV) {
++			DRM_DEV_ERROR(dev, "failed to populate aux bus : %d\n", ret);
++			goto free_wq;
++		}
++
++		ret = anx7625_link_bridge(&platform->aux);
++		if (ret)
++			goto free_wq;
+ 	}
+ 
+ 	if (!platform->pdata.low_power_mode) {
+@@ -2667,27 +2742,6 @@ static int anx7625_i2c_probe(struct i2c_client *client)
+ 	if (platform->pdata.intp_irq)
+ 		queue_work(platform->workqueue, &platform->work);
+ 
+-	platform->bridge.funcs = &anx7625_bridge_funcs;
+-	platform->bridge.of_node = client->dev.of_node;
+-	if (!anx7625_of_panel_on_aux_bus(&client->dev))
+-		platform->bridge.ops |= DRM_BRIDGE_OP_EDID;
+-	if (!platform->pdata.panel_bridge)
+-		platform->bridge.ops |= DRM_BRIDGE_OP_HPD |
+-					DRM_BRIDGE_OP_DETECT;
+-	platform->bridge.type = platform->pdata.panel_bridge ?
+-				    DRM_MODE_CONNECTOR_eDP :
+-				    DRM_MODE_CONNECTOR_DisplayPort;
+-
+-	drm_bridge_add(&platform->bridge);
+-
+-	if (!platform->pdata.is_dpi) {
+-		ret = anx7625_attach_dsi(platform);
+-		if (ret) {
+-			DRM_DEV_ERROR(dev, "Fail to attach to dsi : %d\n", ret);
+-			goto unregister_bridge;
+-		}
+-	}
+-
+ 	if (platform->pdata.audio_en)
+ 		anx7625_register_audio(dev, platform);
+ 
+@@ -2695,12 +2749,6 @@ static int anx7625_i2c_probe(struct i2c_client *client)
+ 
+ 	return 0;
+ 
+-unregister_bridge:
+-	drm_bridge_remove(&platform->bridge);
+-
+-	if (!platform->pdata.low_power_mode)
+-		pm_runtime_put_sync_suspend(&client->dev);
+-
+ free_wq:
+ 	if (platform->workqueue)
+ 		destroy_workqueue(platform->workqueue);
+diff --git a/drivers/gpu/drm/bridge/ite-it6505.c b/drivers/gpu/drm/bridge/ite-it6505.c
+index bc451b2a77c28..32ea61b79965e 100644
+--- a/drivers/gpu/drm/bridge/ite-it6505.c
++++ b/drivers/gpu/drm/bridge/ite-it6505.c
+@@ -3195,7 +3195,7 @@ static ssize_t receive_timing_debugfs_show(struct file *file, char __user *buf,
+ 					   size_t len, loff_t *ppos)
+ {
+ 	struct it6505 *it6505 = file->private_data;
+-	struct drm_display_mode *vid = &it6505->video_info;
++	struct drm_display_mode *vid;
+ 	u8 read_buf[READ_BUFFER_SIZE];
+ 	u8 *str = read_buf, *end = read_buf + READ_BUFFER_SIZE;
+ 	ssize_t ret, count;
+@@ -3204,6 +3204,7 @@ static ssize_t receive_timing_debugfs_show(struct file *file, char __user *buf,
+ 		return -ENODEV;
+ 
+ 	it6505_calc_video_info(it6505);
++	vid = &it6505->video_info;
+ 	str += scnprintf(str, end - str, "---video timing---\n");
+ 	str += scnprintf(str, end - str, "PCLK:%d.%03dMHz\n",
+ 			 vid->clock / 1000, vid->clock % 1000);
+diff --git a/drivers/gpu/drm/bridge/tc358767.c b/drivers/gpu/drm/bridge/tc358767.c
+index 6d16ec45ea614..232e23a1bfcc0 100644
+--- a/drivers/gpu/drm/bridge/tc358767.c
++++ b/drivers/gpu/drm/bridge/tc358767.c
+@@ -1890,7 +1890,7 @@ static int tc_mipi_dsi_host_attach(struct tc_data *tc)
+ 	if (dsi_lanes < 0)
+ 		return dsi_lanes;
+ 
+-	dsi = mipi_dsi_device_register_full(host, &info);
++	dsi = devm_mipi_dsi_device_register_full(dev, host, &info);
+ 	if (IS_ERR(dsi))
+ 		return dev_err_probe(dev, PTR_ERR(dsi),
+ 				     "failed to create dsi device\n");
+@@ -1901,7 +1901,7 @@ static int tc_mipi_dsi_host_attach(struct tc_data *tc)
+ 	dsi->format = MIPI_DSI_FMT_RGB888;
+ 	dsi->mode_flags = MIPI_DSI_MODE_VIDEO | MIPI_DSI_MODE_VIDEO_SYNC_PULSE;
+ 
+-	ret = mipi_dsi_attach(dsi);
++	ret = devm_mipi_dsi_attach(dev, dsi);
+ 	if (ret < 0) {
+ 		dev_err(dev, "failed to attach dsi to host: %d\n", ret);
+ 		return ret;
+diff --git a/drivers/gpu/drm/bridge/tc358768.c b/drivers/gpu/drm/bridge/tc358768.c
+index 7c0cbe84611b9..966a25cb0b108 100644
+--- a/drivers/gpu/drm/bridge/tc358768.c
++++ b/drivers/gpu/drm/bridge/tc358768.c
+@@ -9,6 +9,8 @@
+ #include <linux/gpio/consumer.h>
+ #include <linux/i2c.h>
+ #include <linux/kernel.h>
++#include <linux/media-bus-format.h>
++#include <linux/minmax.h>
+ #include <linux/module.h>
+ #include <linux/regmap.h>
+ #include <linux/regulator/consumer.h>
+@@ -146,6 +148,7 @@ struct tc358768_priv {
+ 
+ 	u32 pd_lines; /* number of Parallel Port Input Data Lines */
+ 	u32 dsi_lanes; /* number of DSI Lanes */
++	u32 dsi_bpp; /* number of Bits Per Pixel over DSI */
+ 
+ 	/* Parameters for PLL programming */
+ 	u32 fbd;	/* PLL feedback divider */
+@@ -284,12 +287,12 @@ static void tc358768_hw_disable(struct tc358768_priv *priv)
+ 
+ static u32 tc358768_pll_to_pclk(struct tc358768_priv *priv, u32 pll_clk)
+ {
+-	return (u32)div_u64((u64)pll_clk * priv->dsi_lanes, priv->pd_lines);
++	return (u32)div_u64((u64)pll_clk * priv->dsi_lanes, priv->dsi_bpp);
+ }
+ 
+ static u32 tc358768_pclk_to_pll(struct tc358768_priv *priv, u32 pclk)
+ {
+-	return (u32)div_u64((u64)pclk * priv->pd_lines, priv->dsi_lanes);
++	return (u32)div_u64((u64)pclk * priv->dsi_bpp, priv->dsi_lanes);
+ }
+ 
+ static int tc358768_calc_pll(struct tc358768_priv *priv,
+@@ -334,13 +337,17 @@ static int tc358768_calc_pll(struct tc358768_priv *priv,
+ 		u32 fbd;
+ 
+ 		for (fbd = 0; fbd < 512; ++fbd) {
+-			u32 pll, diff;
++			u32 pll, diff, pll_in;
+ 
+ 			pll = (u32)div_u64((u64)refclk * (fbd + 1), divisor);
+ 
+ 			if (pll >= max_pll || pll < min_pll)
+ 				continue;
+ 
++			pll_in = (u32)div_u64((u64)refclk, prd + 1);
++			if (pll_in < 4000000)
++				continue;
++
+ 			diff = max(pll, target_pll) - min(pll, target_pll);
+ 
+ 			if (diff < best_diff) {
+@@ -422,6 +429,7 @@ static int tc358768_dsi_host_attach(struct mipi_dsi_host *host,
+ 	priv->output.panel = panel;
+ 
+ 	priv->dsi_lanes = dev->lanes;
++	priv->dsi_bpp = mipi_dsi_pixel_format_to_bpp(dev->format);
+ 
+ 	/* get input ep (port0/endpoint0) */
+ 	ret = -EINVAL;
+@@ -433,7 +441,7 @@ static int tc358768_dsi_host_attach(struct mipi_dsi_host *host,
+ 	}
+ 
+ 	if (ret)
+-		priv->pd_lines = mipi_dsi_pixel_format_to_bpp(dev->format);
++		priv->pd_lines = priv->dsi_bpp;
+ 
+ 	drm_bridge_add(&priv->bridge);
+ 
+@@ -632,6 +640,7 @@ static void tc358768_bridge_pre_enable(struct drm_bridge *bridge)
+ 	struct mipi_dsi_device *dsi_dev = priv->output.dev;
+ 	unsigned long mode_flags = dsi_dev->mode_flags;
+ 	u32 val, val2, lptxcnt, hact, data_type;
++	s32 raw_val;
+ 	const struct drm_display_mode *mode;
+ 	u32 dsibclk_nsk, dsiclk_nsk, ui_nsk, phy_delay_nsk;
+ 	u32 dsiclk, dsibclk, video_start;
+@@ -736,25 +745,26 @@ static void tc358768_bridge_pre_enable(struct drm_bridge *bridge)
+ 
+ 	/* 38ns < TCLK_PREPARE < 95ns */
+ 	val = tc358768_ns_to_cnt(65, dsibclk_nsk) - 1;
+-	/* TCLK_PREPARE > 300ns */
+-	val2 = tc358768_ns_to_cnt(300 + tc358768_to_ns(3 * ui_nsk),
+-				  dsibclk_nsk);
+-	val |= (val2 - tc358768_to_ns(phy_delay_nsk - dsibclk_nsk)) << 8;
++	/* TCLK_PREPARE + TCLK_ZERO > 300ns */
++	val2 = tc358768_ns_to_cnt(300 - tc358768_to_ns(2 * ui_nsk),
++				  dsibclk_nsk) - 2;
++	val |= val2 << 8;
+ 	dev_dbg(priv->dev, "TCLK_HEADERCNT: 0x%x\n", val);
+ 	tc358768_write(priv, TC358768_TCLK_HEADERCNT, val);
+ 
+-	/* TCLK_TRAIL > 60ns + 3*UI */
+-	val = 60 + tc358768_to_ns(3 * ui_nsk);
+-	val = tc358768_ns_to_cnt(val, dsibclk_nsk) - 5;
++	/* TCLK_TRAIL > 60ns AND TEOT <= 105 ns + 12*UI */
++	raw_val = tc358768_ns_to_cnt(60 + tc358768_to_ns(2 * ui_nsk), dsibclk_nsk) - 5;
++	val = clamp(raw_val, 0, 127);
+ 	dev_dbg(priv->dev, "TCLK_TRAILCNT: 0x%x\n", val);
+ 	tc358768_write(priv, TC358768_TCLK_TRAILCNT, val);
+ 
+ 	/* 40ns + 4*UI < THS_PREPARE < 85ns + 6*UI */
+ 	val = 50 + tc358768_to_ns(4 * ui_nsk);
+ 	val = tc358768_ns_to_cnt(val, dsibclk_nsk) - 1;
+-	/* THS_ZERO > 145ns + 10*UI */
+-	val2 = tc358768_ns_to_cnt(145 - tc358768_to_ns(ui_nsk), dsibclk_nsk);
+-	val |= (val2 - tc358768_to_ns(phy_delay_nsk)) << 8;
++	/* THS_PREPARE + THS_ZERO > 145ns + 10*UI */
++	raw_val = tc358768_ns_to_cnt(145 - tc358768_to_ns(3 * ui_nsk), dsibclk_nsk) - 10;
++	val2 = clamp(raw_val, 0, 127);
++	val |= val2 << 8;
+ 	dev_dbg(priv->dev, "THS_HEADERCNT: 0x%x\n", val);
+ 	tc358768_write(priv, TC358768_THS_HEADERCNT, val);
+ 
+@@ -770,9 +780,10 @@ static void tc358768_bridge_pre_enable(struct drm_bridge *bridge)
+ 	dev_dbg(priv->dev, "TCLK_POSTCNT: 0x%x\n", val);
+ 	tc358768_write(priv, TC358768_TCLK_POSTCNT, val);
+ 
+-	/* 60ns + 4*UI < THS_PREPARE < 105ns + 12*UI */
+-	val = tc358768_ns_to_cnt(60 + tc358768_to_ns(15 * ui_nsk),
+-				 dsibclk_nsk) - 5;
++	/* max(60ns + 4*UI, 8*UI) < THS_TRAILCNT < 105ns + 12*UI */
++	raw_val = tc358768_ns_to_cnt(60 + tc358768_to_ns(18 * ui_nsk),
++				     dsibclk_nsk) - 4;
++	val = clamp(raw_val, 0, 15);
+ 	dev_dbg(priv->dev, "THS_TRAILCNT: 0x%x\n", val);
+ 	tc358768_write(priv, TC358768_THS_TRAILCNT, val);
+ 
+@@ -786,7 +797,7 @@ static void tc358768_bridge_pre_enable(struct drm_bridge *bridge)
+ 
+ 	/* TXTAGOCNT[26:16] RXTASURECNT[10:0] */
+ 	val = tc358768_to_ns((lptxcnt + 1) * dsibclk_nsk * 4);
+-	val = tc358768_ns_to_cnt(val, dsibclk_nsk) - 1;
++	val = tc358768_ns_to_cnt(val, dsibclk_nsk) / 4 - 1;
+ 	val2 = tc358768_ns_to_cnt(tc358768_to_ns((lptxcnt + 1) * dsibclk_nsk),
+ 				  dsibclk_nsk) - 2;
+ 	val = val << 16 | val2;
+@@ -866,8 +877,7 @@ static void tc358768_bridge_pre_enable(struct drm_bridge *bridge)
+ 	val = TC358768_DSI_CONFW_MODE_SET | TC358768_DSI_CONFW_ADDR_DSI_CONTROL;
+ 	val |= (dsi_dev->lanes - 1) << 1;
+ 
+-	if (!(dsi_dev->mode_flags & MIPI_DSI_MODE_LPM))
+-		val |= TC358768_DSI_CONTROL_TXMD;
++	val |= TC358768_DSI_CONTROL_TXMD;
+ 
+ 	if (!(mode_flags & MIPI_DSI_CLOCK_NON_CONTINUOUS))
+ 		val |= TC358768_DSI_CONTROL_HSCKMD;
+@@ -913,6 +923,44 @@ static void tc358768_bridge_enable(struct drm_bridge *bridge)
+ 	}
+ }
+ 
++#define MAX_INPUT_SEL_FORMATS	1
++
++static u32 *
++tc358768_atomic_get_input_bus_fmts(struct drm_bridge *bridge,
++				   struct drm_bridge_state *bridge_state,
++				   struct drm_crtc_state *crtc_state,
++				   struct drm_connector_state *conn_state,
++				   u32 output_fmt,
++				   unsigned int *num_input_fmts)
++{
++	struct tc358768_priv *priv = bridge_to_tc358768(bridge);
++	u32 *input_fmts;
++
++	*num_input_fmts = 0;
++
++	input_fmts = kcalloc(MAX_INPUT_SEL_FORMATS, sizeof(*input_fmts),
++			     GFP_KERNEL);
++	if (!input_fmts)
++		return NULL;
++
++	switch (priv->pd_lines) {
++	case 16:
++		input_fmts[0] = MEDIA_BUS_FMT_RGB565_1X16;
++		break;
++	case 18:
++		input_fmts[0] = MEDIA_BUS_FMT_RGB666_1X18;
++		break;
++	default:
++	case 24:
++		input_fmts[0] = MEDIA_BUS_FMT_RGB888_1X24;
++		break;
++	};
++
++	*num_input_fmts = MAX_INPUT_SEL_FORMATS;
++
++	return input_fmts;
++}
++
+ static const struct drm_bridge_funcs tc358768_bridge_funcs = {
+ 	.attach = tc358768_bridge_attach,
+ 	.mode_valid = tc358768_bridge_mode_valid,
+@@ -920,6 +968,11 @@ static const struct drm_bridge_funcs tc358768_bridge_funcs = {
+ 	.enable = tc358768_bridge_enable,
+ 	.disable = tc358768_bridge_disable,
+ 	.post_disable = tc358768_bridge_post_disable,
++
++	.atomic_duplicate_state = drm_atomic_helper_bridge_duplicate_state,
++	.atomic_destroy_state = drm_atomic_helper_bridge_destroy_state,
++	.atomic_reset = drm_atomic_helper_bridge_reset,
++	.atomic_get_input_bus_fmts = tc358768_atomic_get_input_bus_fmts,
+ };
+ 
+ static const struct drm_bridge_timings default_tc358768_timings = {
+diff --git a/drivers/gpu/drm/bridge/ti-sn65dsi83.c b/drivers/gpu/drm/bridge/ti-sn65dsi83.c
+index 91ecfbe45bf90..e4ee2904d0893 100644
+--- a/drivers/gpu/drm/bridge/ti-sn65dsi83.c
++++ b/drivers/gpu/drm/bridge/ti-sn65dsi83.c
+@@ -321,8 +321,8 @@ static u8 sn65dsi83_get_dsi_div(struct sn65dsi83 *ctx)
+ 	return dsi_div - 1;
+ }
+ 
+-static void sn65dsi83_atomic_enable(struct drm_bridge *bridge,
+-				    struct drm_bridge_state *old_bridge_state)
++static void sn65dsi83_atomic_pre_enable(struct drm_bridge *bridge,
++					struct drm_bridge_state *old_bridge_state)
+ {
+ 	struct sn65dsi83 *ctx = bridge_to_sn65dsi83(bridge);
+ 	struct drm_atomic_state *state = old_bridge_state->base.state;
+@@ -478,17 +478,29 @@ static void sn65dsi83_atomic_enable(struct drm_bridge *bridge,
+ 		dev_err(ctx->dev, "failed to lock PLL, ret=%i\n", ret);
+ 		/* On failure, disable PLL again and exit. */
+ 		regmap_write(ctx->regmap, REG_RC_PLL_EN, 0x00);
++		regulator_disable(ctx->vcc);
+ 		return;
+ 	}
+ 
+ 	/* Trigger reset after CSR register update. */
+ 	regmap_write(ctx->regmap, REG_RC_RESET, REG_RC_RESET_SOFT_RESET);
+ 
++	/* Wait for 10ms after soft reset as specified in datasheet */
++	usleep_range(10000, 12000);
++}
++
++static void sn65dsi83_atomic_enable(struct drm_bridge *bridge,
++				    struct drm_bridge_state *old_bridge_state)
++{
++	struct sn65dsi83 *ctx = bridge_to_sn65dsi83(bridge);
++	unsigned int pval;
++
+ 	/* Clear all errors that got asserted during initialization. */
+ 	regmap_read(ctx->regmap, REG_IRQ_STAT, &pval);
+ 	regmap_write(ctx->regmap, REG_IRQ_STAT, pval);
+ 
+-	usleep_range(10000, 12000);
++	/* Wait for 1ms and check for errors in status register */
++	usleep_range(1000, 1100);
+ 	regmap_read(ctx->regmap, REG_IRQ_STAT, &pval);
+ 	if (pval)
+ 		dev_err(ctx->dev, "Unexpected link status 0x%02x\n", pval);
+@@ -555,6 +567,7 @@ static const struct drm_bridge_funcs sn65dsi83_funcs = {
+ 	.attach			= sn65dsi83_attach,
+ 	.detach			= sn65dsi83_detach,
+ 	.atomic_enable		= sn65dsi83_atomic_enable,
++	.atomic_pre_enable	= sn65dsi83_atomic_pre_enable,
+ 	.atomic_disable		= sn65dsi83_atomic_disable,
+ 	.mode_valid		= sn65dsi83_mode_valid,
+ 
+@@ -695,6 +708,7 @@ static int sn65dsi83_probe(struct i2c_client *client)
+ 
+ 	ctx->bridge.funcs = &sn65dsi83_funcs;
+ 	ctx->bridge.of_node = dev->of_node;
++	ctx->bridge.pre_enable_prev_first = true;
+ 	drm_bridge_add(&ctx->bridge);
+ 
+ 	ret = sn65dsi83_host_attach(ctx);
+diff --git a/drivers/gpu/drm/display/drm_dp_mst_topology.c b/drivers/gpu/drm/display/drm_dp_mst_topology.c
+index 38dab76ae69ea..e2e21ce79510e 100644
+--- a/drivers/gpu/drm/display/drm_dp_mst_topology.c
++++ b/drivers/gpu/drm/display/drm_dp_mst_topology.c
+@@ -3404,7 +3404,7 @@ int drm_dp_add_payload_part2(struct drm_dp_mst_topology_mgr *mgr,
+ 
+ 	/* Skip failed payloads */
+ 	if (payload->vc_start_slot == -1) {
+-		drm_dbg_kms(state->dev, "Part 1 of payload creation for %s failed, skipping part 2\n",
++		drm_dbg_kms(mgr->dev, "Part 1 of payload creation for %s failed, skipping part 2\n",
+ 			    payload->port->connector->name);
+ 		return -EIO;
+ 	}
+diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c
+index d40b3edb52d07..f1539d4448c69 100644
+--- a/drivers/gpu/drm/drm_gem_vram_helper.c
++++ b/drivers/gpu/drm/drm_gem_vram_helper.c
+@@ -45,7 +45,7 @@ static const struct drm_gem_object_funcs drm_gem_vram_object_funcs;
+  * the frame's scanout buffer or the cursor image. If there's no more space
+  * left in VRAM, inactive GEM objects can be moved to system memory.
+  *
+- * To initialize the VRAM helper library call drmm_vram_helper_alloc_mm().
++ * To initialize the VRAM helper library call drmm_vram_helper_init().
+  * The function allocates and initializes an instance of &struct drm_vram_mm
+  * in &struct drm_device.vram_mm . Use &DRM_GEM_VRAM_DRIVER to initialize
+  * &struct drm_driver and  &DRM_VRAM_MM_FILE_OPERATIONS to initialize
+@@ -73,7 +73,7 @@ static const struct drm_gem_object_funcs drm_gem_vram_object_funcs;
+  *		// setup device, vram base and size
+  *		// ...
+  *
+- *		ret = drmm_vram_helper_alloc_mm(dev, vram_base, vram_size);
++ *		ret = drmm_vram_helper_init(dev, vram_base, vram_size);
+  *		if (ret)
+  *			return ret;
+  *		return 0;
+@@ -86,7 +86,7 @@ static const struct drm_gem_object_funcs drm_gem_vram_object_funcs;
+  * to userspace.
+  *
+  * You don't have to clean up the instance of VRAM MM.
+- * drmm_vram_helper_alloc_mm() is a managed interface that installs a
++ * drmm_vram_helper_init() is a managed interface that installs a
+  * clean-up handler to run during the DRM device's release.
+  *
+  * For drawing or scanout operations, rsp. buffer objects have to be pinned
+diff --git a/drivers/gpu/drm/i915/gt/intel_gt_sysfs_pm.c b/drivers/gpu/drm/i915/gt/intel_gt_sysfs_pm.c
+index 28f27091cd3b7..ee2b44f896a27 100644
+--- a/drivers/gpu/drm/i915/gt/intel_gt_sysfs_pm.c
++++ b/drivers/gpu/drm/i915/gt/intel_gt_sysfs_pm.c
+@@ -451,6 +451,33 @@ static ssize_t punit_req_freq_mhz_show(struct kobject *kobj,
+ 	return sysfs_emit(buff, "%u\n", preq);
+ }
+ 
++static ssize_t slpc_ignore_eff_freq_show(struct kobject *kobj,
++					 struct kobj_attribute *attr,
++					 char *buff)
++{
++	struct intel_gt *gt = intel_gt_sysfs_get_drvdata(kobj, attr->attr.name);
++	struct intel_guc_slpc *slpc = &gt->uc.guc.slpc;
++
++	return sysfs_emit(buff, "%u\n", slpc->ignore_eff_freq);
++}
++
++static ssize_t slpc_ignore_eff_freq_store(struct kobject *kobj,
++					  struct kobj_attribute *attr,
++					  const char *buff, size_t count)
++{
++	struct intel_gt *gt = intel_gt_sysfs_get_drvdata(kobj, attr->attr.name);
++	struct intel_guc_slpc *slpc = &gt->uc.guc.slpc;
++	int err;
++	u32 val;
++
++	err = kstrtou32(buff, 0, &val);
++	if (err)
++		return err;
++
++	err = intel_guc_slpc_set_ignore_eff_freq(slpc, val);
++	return err ?: count;
++}
++
+ struct intel_gt_bool_throttle_attr {
+ 	struct attribute attr;
+ 	ssize_t (*show)(struct kobject *kobj, struct kobj_attribute *attr,
+@@ -663,6 +690,8 @@ static struct kobj_attribute attr_media_freq_factor_scale =
+ INTEL_GT_ATTR_RO(media_RP0_freq_mhz);
+ INTEL_GT_ATTR_RO(media_RPn_freq_mhz);
+ 
++INTEL_GT_ATTR_RW(slpc_ignore_eff_freq);
++
+ static const struct attribute *media_perf_power_attrs[] = {
+ 	&attr_media_freq_factor.attr,
+ 	&attr_media_freq_factor_scale.attr,
+@@ -744,6 +773,12 @@ void intel_gt_sysfs_pm_init(struct intel_gt *gt, struct kobject *kobj)
+ 	if (ret)
+ 		gt_warn(gt, "failed to create punit_req_freq_mhz sysfs (%pe)", ERR_PTR(ret));
+ 
++	if (intel_uc_uses_guc_slpc(&gt->uc)) {
++		ret = sysfs_create_file(kobj, &attr_slpc_ignore_eff_freq.attr);
++		if (ret)
++			gt_warn(gt, "failed to create ignore_eff_freq sysfs (%pe)", ERR_PTR(ret));
++	}
++
+ 	if (i915_mmio_reg_valid(intel_gt_perf_limit_reasons_reg(gt))) {
+ 		ret = sysfs_create_files(kobj, throttle_reason_attrs);
+ 		if (ret)
+diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_rc.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_rc.c
+index 8f8dd05835c5a..1adec6de223c7 100644
+--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_rc.c
++++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_rc.c
+@@ -6,6 +6,7 @@
+ #include <linux/string_helpers.h>
+ 
+ #include "intel_guc_rc.h"
++#include "intel_guc_print.h"
+ #include "gt/intel_gt.h"
+ #include "i915_drv.h"
+ 
+@@ -59,13 +60,12 @@ static int __guc_rc_control(struct intel_guc *guc, bool enable)
+ 
+ 	ret = guc_action_control_gucrc(guc, enable);
+ 	if (ret) {
+-		i915_probe_error(guc_to_gt(guc)->i915, "Failed to %s GuC RC (%pe)\n",
+-				 str_enable_disable(enable), ERR_PTR(ret));
++		guc_probe_error(guc, "Failed to %s RC (%pe)\n",
++				str_enable_disable(enable), ERR_PTR(ret));
+ 		return ret;
+ 	}
+ 
+-	drm_info(&gt->i915->drm, "GuC RC: %s\n",
+-		 str_enabled_disabled(enable));
++	guc_info(guc, "RC %s\n", str_enabled_disabled(enable));
+ 
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
+index 63464933cbceb..56dbba1ef6684 100644
+--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
++++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
+@@ -9,6 +9,7 @@
+ #include "i915_drv.h"
+ #include "i915_reg.h"
+ #include "intel_guc_slpc.h"
++#include "intel_guc_print.h"
+ #include "intel_mchbar_regs.h"
+ #include "gt/intel_gt.h"
+ #include "gt/intel_gt_regs.h"
+@@ -171,14 +172,12 @@ static int guc_action_slpc_query(struct intel_guc *guc, u32 offset)
+ static int slpc_query_task_state(struct intel_guc_slpc *slpc)
+ {
+ 	struct intel_guc *guc = slpc_to_guc(slpc);
+-	struct drm_i915_private *i915 = slpc_to_i915(slpc);
+ 	u32 offset = intel_guc_ggtt_offset(guc, slpc->vma);
+ 	int ret;
+ 
+ 	ret = guc_action_slpc_query(guc, offset);
+ 	if (unlikely(ret))
+-		i915_probe_error(i915, "Failed to query task state (%pe)\n",
+-				 ERR_PTR(ret));
++		guc_probe_error(guc, "Failed to query task state: %pe\n", ERR_PTR(ret));
+ 
+ 	drm_clflush_virt_range(slpc->vaddr, SLPC_PAGE_SIZE_BYTES);
+ 
+@@ -188,15 +187,14 @@ static int slpc_query_task_state(struct intel_guc_slpc *slpc)
+ static int slpc_set_param(struct intel_guc_slpc *slpc, u8 id, u32 value)
+ {
+ 	struct intel_guc *guc = slpc_to_guc(slpc);
+-	struct drm_i915_private *i915 = slpc_to_i915(slpc);
+ 	int ret;
+ 
+ 	GEM_BUG_ON(id >= SLPC_MAX_PARAM);
+ 
+ 	ret = guc_action_slpc_set_param(guc, id, value);
+ 	if (ret)
+-		i915_probe_error(i915, "Failed to set param %d to %u (%pe)\n",
+-				 id, value, ERR_PTR(ret));
++		guc_probe_error(guc, "Failed to set param %d to %u: %pe\n",
++				id, value, ERR_PTR(ret));
+ 
+ 	return ret;
+ }
+@@ -212,8 +210,8 @@ static int slpc_unset_param(struct intel_guc_slpc *slpc, u8 id)
+ 
+ static int slpc_force_min_freq(struct intel_guc_slpc *slpc, u32 freq)
+ {
+-	struct drm_i915_private *i915 = slpc_to_i915(slpc);
+ 	struct intel_guc *guc = slpc_to_guc(slpc);
++	struct drm_i915_private *i915 = slpc_to_i915(slpc);
+ 	intel_wakeref_t wakeref;
+ 	int ret = 0;
+ 
+@@ -236,9 +234,8 @@ static int slpc_force_min_freq(struct intel_guc_slpc *slpc, u32 freq)
+ 					SLPC_PARAM_GLOBAL_MIN_GT_UNSLICE_FREQ_MHZ,
+ 					freq);
+ 		if (ret)
+-			drm_notice(&i915->drm,
+-				   "Failed to send set_param for min freq(%d): (%d)\n",
+-				   freq, ret);
++			guc_notice(guc, "Failed to send set_param for min freq(%d): %pe\n",
++				   freq, ERR_PTR(ret));
+ 	}
+ 
+ 	return ret;
+@@ -267,7 +264,6 @@ static void slpc_boost_work(struct work_struct *work)
+ int intel_guc_slpc_init(struct intel_guc_slpc *slpc)
+ {
+ 	struct intel_guc *guc = slpc_to_guc(slpc);
+-	struct drm_i915_private *i915 = slpc_to_i915(slpc);
+ 	u32 size = PAGE_ALIGN(sizeof(struct slpc_shared_data));
+ 	int err;
+ 
+@@ -275,14 +271,13 @@ int intel_guc_slpc_init(struct intel_guc_slpc *slpc)
+ 
+ 	err = intel_guc_allocate_and_map_vma(guc, size, &slpc->vma, (void **)&slpc->vaddr);
+ 	if (unlikely(err)) {
+-		i915_probe_error(i915,
+-				 "Failed to allocate SLPC struct (err=%pe)\n",
+-				 ERR_PTR(err));
++		guc_probe_error(guc, "Failed to allocate SLPC struct: %pe\n", ERR_PTR(err));
+ 		return err;
+ 	}
+ 
+ 	slpc->max_freq_softlimit = 0;
+ 	slpc->min_freq_softlimit = 0;
++	slpc->ignore_eff_freq = false;
+ 	slpc->min_is_rpmax = false;
+ 
+ 	slpc->boost_freq = 0;
+@@ -338,7 +333,6 @@ static int guc_action_slpc_reset(struct intel_guc *guc, u32 offset)
+ 
+ static int slpc_reset(struct intel_guc_slpc *slpc)
+ {
+-	struct drm_i915_private *i915 = slpc_to_i915(slpc);
+ 	struct intel_guc *guc = slpc_to_guc(slpc);
+ 	u32 offset = intel_guc_ggtt_offset(guc, slpc->vma);
+ 	int ret;
+@@ -346,15 +340,14 @@ static int slpc_reset(struct intel_guc_slpc *slpc)
+ 	ret = guc_action_slpc_reset(guc, offset);
+ 
+ 	if (unlikely(ret < 0)) {
+-		i915_probe_error(i915, "SLPC reset action failed (%pe)\n",
+-				 ERR_PTR(ret));
++		guc_probe_error(guc, "SLPC reset action failed: %pe\n", ERR_PTR(ret));
+ 		return ret;
+ 	}
+ 
+ 	if (!ret) {
+ 		if (wait_for(slpc_is_running(slpc), SLPC_RESET_TIMEOUT_MS)) {
+-			i915_probe_error(i915, "SLPC not enabled! State = %s\n",
+-					 slpc_get_state_string(slpc));
++			guc_probe_error(guc, "SLPC not enabled! State = %s\n",
++					slpc_get_state_string(slpc));
+ 			return -EIO;
+ 		}
+ 	}
+@@ -465,6 +458,29 @@ int intel_guc_slpc_get_max_freq(struct intel_guc_slpc *slpc, u32 *val)
+ 	return ret;
+ }
+ 
++int intel_guc_slpc_set_ignore_eff_freq(struct intel_guc_slpc *slpc, bool val)
++{
++	struct drm_i915_private *i915 = slpc_to_i915(slpc);
++	intel_wakeref_t wakeref;
++	int ret;
++
++	mutex_lock(&slpc->lock);
++	wakeref = intel_runtime_pm_get(&i915->runtime_pm);
++
++	ret = slpc_set_param(slpc,
++			     SLPC_PARAM_IGNORE_EFFICIENT_FREQUENCY,
++			     val);
++	if (ret)
++		guc_probe_error(slpc_to_guc(slpc), "Failed to set efficient freq(%d): %pe\n",
++				val, ERR_PTR(ret));
++	else
++		slpc->ignore_eff_freq = val;
++
++	intel_runtime_pm_put(&i915->runtime_pm, wakeref);
++	mutex_unlock(&slpc->lock);
++	return ret;
++}
++
+ /**
+  * intel_guc_slpc_set_min_freq() - Set min frequency limit for SLPC.
+  * @slpc: pointer to intel_guc_slpc.
+@@ -490,16 +506,6 @@ int intel_guc_slpc_set_min_freq(struct intel_guc_slpc *slpc, u32 val)
+ 	mutex_lock(&slpc->lock);
+ 	wakeref = intel_runtime_pm_get(&i915->runtime_pm);
+ 
+-	/* Ignore efficient freq if lower min freq is requested */
+-	ret = slpc_set_param(slpc,
+-			     SLPC_PARAM_IGNORE_EFFICIENT_FREQUENCY,
+-			     val < slpc->rp1_freq);
+-	if (ret) {
+-		i915_probe_error(i915, "Failed to toggle efficient freq (%pe)\n",
+-				 ERR_PTR(ret));
+-		goto out;
+-	}
+-
+ 	ret = slpc_set_param(slpc,
+ 			     SLPC_PARAM_GLOBAL_MIN_GT_UNSLICE_FREQ_MHZ,
+ 			     val);
+@@ -507,7 +513,6 @@ int intel_guc_slpc_set_min_freq(struct intel_guc_slpc *slpc, u32 val)
+ 	if (!ret)
+ 		slpc->min_freq_softlimit = val;
+ 
+-out:
+ 	intel_runtime_pm_put(&i915->runtime_pm, wakeref);
+ 	mutex_unlock(&slpc->lock);
+ 
+@@ -611,15 +616,12 @@ static int slpc_set_softlimits(struct intel_guc_slpc *slpc)
+ 
+ static bool is_slpc_min_freq_rpmax(struct intel_guc_slpc *slpc)
+ {
+-	struct drm_i915_private *i915 = slpc_to_i915(slpc);
+ 	int slpc_min_freq;
+ 	int ret;
+ 
+ 	ret = intel_guc_slpc_get_min_freq(slpc, &slpc_min_freq);
+ 	if (ret) {
+-		drm_err(&i915->drm,
+-			"Failed to get min freq: (%d)\n",
+-			ret);
++		guc_err(slpc_to_guc(slpc), "Failed to get min freq: %pe\n", ERR_PTR(ret));
+ 		return false;
+ 	}
+ 
+@@ -685,9 +687,8 @@ int intel_guc_slpc_override_gucrc_mode(struct intel_guc_slpc *slpc, u32 mode)
+ 	with_intel_runtime_pm(&i915->runtime_pm, wakeref) {
+ 		ret = slpc_set_param(slpc, SLPC_PARAM_PWRGATE_RC_MODE, mode);
+ 		if (ret)
+-			drm_err(&i915->drm,
+-				"Override gucrc mode %d failed %d\n",
+-				mode, ret);
++			guc_err(slpc_to_guc(slpc), "Override RC mode %d failed: %pe\n",
++				mode, ERR_PTR(ret));
+ 	}
+ 
+ 	return ret;
+@@ -702,9 +703,7 @@ int intel_guc_slpc_unset_gucrc_mode(struct intel_guc_slpc *slpc)
+ 	with_intel_runtime_pm(&i915->runtime_pm, wakeref) {
+ 		ret = slpc_unset_param(slpc, SLPC_PARAM_PWRGATE_RC_MODE);
+ 		if (ret)
+-			drm_err(&i915->drm,
+-				"Unsetting gucrc mode failed %d\n",
+-				ret);
++			guc_err(slpc_to_guc(slpc), "Unsetting RC mode failed: %pe\n", ERR_PTR(ret));
+ 	}
+ 
+ 	return ret;
+@@ -725,7 +724,7 @@ int intel_guc_slpc_unset_gucrc_mode(struct intel_guc_slpc *slpc)
+  */
+ int intel_guc_slpc_enable(struct intel_guc_slpc *slpc)
+ {
+-	struct drm_i915_private *i915 = slpc_to_i915(slpc);
++	struct intel_guc *guc = slpc_to_guc(slpc);
+ 	int ret;
+ 
+ 	GEM_BUG_ON(!slpc->vma);
+@@ -734,8 +733,7 @@ int intel_guc_slpc_enable(struct intel_guc_slpc *slpc)
+ 
+ 	ret = slpc_reset(slpc);
+ 	if (unlikely(ret < 0)) {
+-		i915_probe_error(i915, "SLPC Reset event returned (%pe)\n",
+-				 ERR_PTR(ret));
++		guc_probe_error(guc, "SLPC Reset event returned: %pe\n", ERR_PTR(ret));
+ 		return ret;
+ 	}
+ 
+@@ -743,7 +741,7 @@ int intel_guc_slpc_enable(struct intel_guc_slpc *slpc)
+ 	if (unlikely(ret < 0))
+ 		return ret;
+ 
+-	intel_guc_pm_intrmsk_enable(to_gt(i915));
++	intel_guc_pm_intrmsk_enable(slpc_to_gt(slpc));
+ 
+ 	slpc_get_rp_values(slpc);
+ 
+@@ -753,22 +751,23 @@ int intel_guc_slpc_enable(struct intel_guc_slpc *slpc)
+ 	/* Set SLPC max limit to RP0 */
+ 	ret = slpc_use_fused_rp0(slpc);
+ 	if (unlikely(ret)) {
+-		i915_probe_error(i915, "Failed to set SLPC max to RP0 (%pe)\n",
+-				 ERR_PTR(ret));
++		guc_probe_error(guc, "Failed to set SLPC max to RP0: %pe\n", ERR_PTR(ret));
+ 		return ret;
+ 	}
+ 
+ 	/* Revert SLPC min/max to softlimits if necessary */
+ 	ret = slpc_set_softlimits(slpc);
+ 	if (unlikely(ret)) {
+-		i915_probe_error(i915, "Failed to set SLPC softlimits (%pe)\n",
+-				 ERR_PTR(ret));
++		guc_probe_error(guc, "Failed to set SLPC softlimits: %pe\n", ERR_PTR(ret));
+ 		return ret;
+ 	}
+ 
+ 	/* Set cached media freq ratio mode */
+ 	intel_guc_slpc_set_media_ratio_mode(slpc, slpc->media_ratio_mode);
+ 
++	/* Set cached value of ignore efficient freq */
++	intel_guc_slpc_set_ignore_eff_freq(slpc, slpc->ignore_eff_freq);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
+index 17ed515f6a852..597eb5413ddf2 100644
+--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
++++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
+@@ -46,5 +46,6 @@ void intel_guc_slpc_boost(struct intel_guc_slpc *slpc);
+ void intel_guc_slpc_dec_waiters(struct intel_guc_slpc *slpc);
+ int intel_guc_slpc_unset_gucrc_mode(struct intel_guc_slpc *slpc);
+ int intel_guc_slpc_override_gucrc_mode(struct intel_guc_slpc *slpc, u32 mode);
++int intel_guc_slpc_set_ignore_eff_freq(struct intel_guc_slpc *slpc, bool val);
+ 
+ #endif
+diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc_types.h b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc_types.h
+index a6ef53b04e047..a886513314977 100644
+--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc_types.h
++++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc_types.h
+@@ -31,6 +31,7 @@ struct intel_guc_slpc {
+ 	/* frequency softlimits */
+ 	u32 min_freq_softlimit;
+ 	u32 max_freq_softlimit;
++	bool ignore_eff_freq;
+ 
+ 	/* cached media ratio mode */
+ 	u32 media_ratio_mode;
+diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+index 0372f89082022..660c830c68764 100644
+--- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+@@ -1740,6 +1740,7 @@ struct msm_gpu *a5xx_gpu_init(struct drm_device *dev)
+ {
+ 	struct msm_drm_private *priv = dev->dev_private;
+ 	struct platform_device *pdev = priv->gpu_pdev;
++	struct adreno_platform_config *config = pdev->dev.platform_data;
+ 	struct a5xx_gpu *a5xx_gpu = NULL;
+ 	struct adreno_gpu *adreno_gpu;
+ 	struct msm_gpu *gpu;
+@@ -1766,7 +1767,7 @@ struct msm_gpu *a5xx_gpu_init(struct drm_device *dev)
+ 
+ 	nr_rings = 4;
+ 
+-	if (adreno_is_a510(adreno_gpu))
++	if (adreno_cmp_rev(ADRENO_REV(5, 1, 0, ANY_ID), config->rev))
+ 		nr_rings = 1;
+ 
+ 	ret = adreno_gpu_init(dev, pdev, adreno_gpu, &funcs, nr_rings);
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+index 2942d2548ce69..f74495dcbd966 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+@@ -1793,7 +1793,8 @@ a6xx_create_address_space(struct msm_gpu *gpu, struct platform_device *pdev)
+ 	 * This allows GPU to set the bus attributes required to use system
+ 	 * cache on behalf of the iommu page table walker.
+ 	 */
+-	if (!IS_ERR_OR_NULL(a6xx_gpu->htw_llc_slice))
++	if (!IS_ERR_OR_NULL(a6xx_gpu->htw_llc_slice) &&
++	    !device_iommu_capable(&pdev->dev, IOMMU_CAP_CACHE_COHERENCY))
+ 		quirks |= IO_PGTABLE_QUIRK_ARM_OUTER_WBWA;
+ 
+ 	return adreno_iommu_create_address_space(gpu, pdev, quirks);
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
+index f29a339a37050..ce188452cd56a 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
+@@ -1575,6 +1575,8 @@ static const struct drm_crtc_helper_funcs dpu_crtc_helper_funcs = {
+ struct drm_crtc *dpu_crtc_init(struct drm_device *dev, struct drm_plane *plane,
+ 				struct drm_plane *cursor)
+ {
++	struct msm_drm_private *priv = dev->dev_private;
++	struct dpu_kms *dpu_kms = to_dpu_kms(priv->kms);
+ 	struct drm_crtc *crtc = NULL;
+ 	struct dpu_crtc *dpu_crtc = NULL;
+ 	int i;
+@@ -1606,7 +1608,8 @@ struct drm_crtc *dpu_crtc_init(struct drm_device *dev, struct drm_plane *plane,
+ 
+ 	drm_crtc_helper_add(crtc, &dpu_crtc_helper_funcs);
+ 
+-	drm_crtc_enable_color_mgmt(crtc, 0, true, 0);
++	if (dpu_kms->catalog->dspp_count)
++		drm_crtc_enable_color_mgmt(crtc, 0, true, 0);
+ 
+ 	/* save user friendly CRTC name for later */
+ 	snprintf(dpu_crtc->name, DPU_CRTC_NAME_SIZE, "crtc%u", crtc->base.id);
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
+index f7214c4401e19..c2462d58b67d6 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
+@@ -94,9 +94,13 @@
+ 
+ #define INTF_SDM845_MASK (0)
+ 
+-#define INTF_SC7180_MASK BIT(DPU_INTF_INPUT_CTRL) | BIT(DPU_INTF_TE)
++#define INTF_SC7180_MASK \
++	(BIT(DPU_INTF_INPUT_CTRL) | \
++	 BIT(DPU_INTF_TE) | \
++	 BIT(DPU_INTF_STATUS_SUPPORTED) | \
++	 BIT(DPU_DATA_HCTL_EN))
+ 
+-#define INTF_SC7280_MASK INTF_SC7180_MASK | BIT(DPU_DATA_HCTL_EN)
++#define INTF_SC7280_MASK (INTF_SC7180_MASK)
+ 
+ #define IRQ_SDM845_MASK (BIT(MDP_SSPP_TOP0_INTR) | \
+ 			 BIT(MDP_SSPP_TOP0_INTR2) | \
+@@ -1562,7 +1566,7 @@ static struct dpu_pingpong_cfg qcm2290_pp[] = {
+ #define MERGE_3D_BLK(_name, _id, _base) \
+ 	{\
+ 	.name = _name, .id = _id, \
+-	.base = _base, .len = 0x100, \
++	.base = _base, .len = 0x8, \
+ 	.features = MERGE_3D_SM8150_MASK, \
+ 	.sblk = NULL \
+ 	}
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.h
+index 5f96dd8def092..d7d45e1e7b310 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.h
+@@ -214,17 +214,19 @@ enum {
+ 
+ /**
+  * INTF sub-blocks
+- * @DPU_INTF_INPUT_CTRL         Supports the setting of pp block from which
+- *                              pixel data arrives to this INTF
+- * @DPU_INTF_TE                 INTF block has TE configuration support
+- * @DPU_DATA_HCTL_EN            Allows data to be transferred at different rate
+-                                than video timing
++ * @DPU_INTF_INPUT_CTRL             Supports the setting of pp block from which
++ *                                  pixel data arrives to this INTF
++ * @DPU_INTF_TE                     INTF block has TE configuration support
++ * @DPU_DATA_HCTL_EN                Allows data to be transferred at different rate
++ *                                  than video timing
++ * @DPU_INTF_STATUS_SUPPORTED       INTF block has INTF_STATUS register
+  * @DPU_INTF_MAX
+  */
+ enum {
+ 	DPU_INTF_INPUT_CTRL = 0x1,
+ 	DPU_INTF_TE,
+ 	DPU_DATA_HCTL_EN,
++	DPU_INTF_STATUS_SUPPORTED,
+ 	DPU_INTF_MAX
+ };
+ 
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.c
+index 6c53ea560ffaa..4072638c37918 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.c
+@@ -115,6 +115,9 @@ static inline void dpu_hw_ctl_clear_pending_flush(struct dpu_hw_ctl *ctx)
+ 	trace_dpu_hw_ctl_clear_pending_flush(ctx->pending_flush_mask,
+ 				     dpu_hw_ctl_get_flush_register(ctx));
+ 	ctx->pending_flush_mask = 0x0;
++	ctx->pending_intf_flush_mask = 0;
++	ctx->pending_wb_flush_mask = 0;
++	ctx->pending_merge_3d_flush_mask = 0;
+ }
+ 
+ static inline void dpu_hw_ctl_update_pending_flush(struct dpu_hw_ctl *ctx,
+@@ -505,7 +508,7 @@ static void dpu_hw_ctl_intf_cfg_v1(struct dpu_hw_ctl *ctx,
+ 		DPU_REG_WRITE(c, CTL_MERGE_3D_ACTIVE,
+ 			      BIT(cfg->merge_3d - MERGE_3D_0));
+ 	if (cfg->dsc) {
+-		DPU_REG_WRITE(&ctx->hw, CTL_FLUSH, DSC_IDX);
++		DPU_REG_WRITE(&ctx->hw, CTL_FLUSH, BIT(DSC_IDX));
+ 		DPU_REG_WRITE(c, CTL_DSC_ACTIVE, cfg->dsc);
+ 	}
+ }
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dsc.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dsc.c
+index 619926da1441e..68035745b7069 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dsc.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dsc.c
+@@ -54,9 +54,10 @@ static void dpu_hw_dsc_config(struct dpu_hw_dsc *hw_dsc,
+ 	if (is_cmd_mode)
+ 		initial_lines += 1;
+ 
+-	slice_last_group_size = 3 - (dsc->slice_width % 3);
++	slice_last_group_size = (dsc->slice_width + 2) % 3;
++
+ 	data = (initial_lines << 20);
+-	data |= ((slice_last_group_size - 1) << 18);
++	data |= (slice_last_group_size << 18);
+ 	/* bpp is 6.4 format, 4 LSBs bits are for fractional part */
+ 	data |= (dsc->bits_per_pixel << 8);
+ 	data |= (dsc->block_pred_enable << 7);
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c
+index b2a94b9a3e987..b9dddf576c029 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c
+@@ -57,6 +57,7 @@
+ #define   INTF_PROG_FETCH_START         0x170
+ #define   INTF_PROG_ROT_START           0x174
+ #define   INTF_MUX                      0x25C
++#define   INTF_STATUS                   0x26C
+ 
+ #define INTF_CFG_ACTIVE_H_EN	BIT(29)
+ #define INTF_CFG_ACTIVE_V_EN	BIT(30)
+@@ -292,8 +293,13 @@ static void dpu_hw_intf_get_status(
+ 		struct intf_status *s)
+ {
+ 	struct dpu_hw_blk_reg_map *c = &intf->hw;
++	unsigned long cap = intf->cap->features;
++
++	if (cap & BIT(DPU_INTF_STATUS_SUPPORTED))
++		s->is_en = DPU_REG_READ(c, INTF_STATUS) & BIT(0);
++	else
++		s->is_en = DPU_REG_READ(c, INTF_TIMING_ENGINE_EN);
+ 
+-	s->is_en = DPU_REG_READ(c, INTF_TIMING_ENGINE_EN);
+ 	s->is_prog_fetch_en = !!(DPU_REG_READ(c, INTF_CONFIG) & BIT(31));
+ 	if (s->is_en) {
+ 		s->frame_count = DPU_REG_READ(c, INTF_FRAME_COUNT);
+diff --git a/drivers/gpu/drm/msm/dp/dp_display.c b/drivers/gpu/drm/msm/dp/dp_display.c
+index 3f9a18410c0bb..22967cf6a79d3 100644
+--- a/drivers/gpu/drm/msm/dp/dp_display.c
++++ b/drivers/gpu/drm/msm/dp/dp_display.c
+@@ -325,6 +325,8 @@ static void dp_display_unbind(struct device *dev, struct device *master,
+ 
+ 	kthread_stop(dp->ev_tsk);
+ 
++	of_dp_aux_depopulate_bus(dp->aux);
++
+ 	dp_power_client_deinit(dp->power);
+ 	dp_unregister_audio_driver(dev, dp->audio);
+ 	dp_aux_unregister(dp->aux);
+@@ -1352,9 +1354,9 @@ static int dp_display_remove(struct platform_device *pdev)
+ {
+ 	struct dp_display_private *dp = dev_get_dp_display_private(&pdev->dev);
+ 
++	component_del(&pdev->dev, &dp_display_comp_ops);
+ 	dp_display_deinit_sub_modules(dp);
+ 
+-	component_del(&pdev->dev, &dp_display_comp_ops);
+ 	platform_set_drvdata(pdev, NULL);
+ 
+ 	return 0;
+@@ -1538,11 +1540,6 @@ void msm_dp_debugfs_init(struct msm_dp *dp_display, struct drm_minor *minor)
+ 	}
+ }
+ 
+-static void of_dp_aux_depopulate_bus_void(void *data)
+-{
+-	of_dp_aux_depopulate_bus(data);
+-}
+-
+ static int dp_display_get_next_bridge(struct msm_dp *dp)
+ {
+ 	int rc;
+@@ -1571,12 +1568,6 @@ static int dp_display_get_next_bridge(struct msm_dp *dp)
+ 		of_node_put(aux_bus);
+ 		if (rc)
+ 			goto error;
+-
+-		rc = devm_add_action_or_reset(dp->drm_dev->dev,
+-						of_dp_aux_depopulate_bus_void,
+-						dp_priv->aux);
+-		if (rc)
+-			goto error;
+ 	} else if (dp->is_edp) {
+ 		DRM_ERROR("eDP aux_bus not found\n");
+ 		return -ENODEV;
+@@ -1601,6 +1592,7 @@ static int dp_display_get_next_bridge(struct msm_dp *dp)
+ error:
+ 	if (dp->is_edp) {
+ 		disable_irq(dp_priv->irq);
++		of_dp_aux_depopulate_bus(dp_priv->aux);
+ 		dp_display_host_phy_exit(dp_priv);
+ 		dp_display_host_deinit(dp_priv);
+ 	}
+diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c b/drivers/gpu/drm/msm/dsi/dsi_host.c
+index 18fa30e1e8583..3ee770dddc2fd 100644
+--- a/drivers/gpu/drm/msm/dsi/dsi_host.c
++++ b/drivers/gpu/drm/msm/dsi/dsi_host.c
+@@ -854,18 +854,17 @@ static void dsi_update_dsc_timing(struct msm_dsi_host *msm_host, bool is_cmd_mod
+ 	 */
+ 	slice_per_intf = DIV_ROUND_UP(hdisplay, dsc->slice_width);
+ 
+-	/*
+-	 * If slice_count is greater than slice_per_intf
+-	 * then default to 1. This can happen during partial
+-	 * update.
+-	 */
+-	if (dsc->slice_count > slice_per_intf)
+-		dsc->slice_count = 1;
+-
+ 	total_bytes_per_intf = dsc->slice_chunk_size * slice_per_intf;
+ 
+ 	eol_byte_num = total_bytes_per_intf % 3;
+-	pkt_per_line = slice_per_intf / dsc->slice_count;
++
++	/*
++	 * Typically, pkt_per_line = slice_per_intf * slice_per_pkt.
++	 *
++	 * Since the current driver only supports slice_per_pkt = 1,
++	 * pkt_per_line will be equal to slice per intf for now.
++	 */
++	pkt_per_line = slice_per_intf;
+ 
+ 	if (is_cmd_mode) /* packet data type */
+ 		reg = DSI_COMMAND_COMPRESSION_MODE_CTRL_STREAM0_DATATYPE(MIPI_DSI_DCS_LONG_WRITE);
+@@ -989,7 +988,14 @@ static void dsi_timing_setup(struct msm_dsi_host *msm_host, bool is_bonded_dsi)
+ 		if (!msm_host->dsc)
+ 			wc = hdisplay * dsi_get_bpp(msm_host->format) / 8 + 1;
+ 		else
+-			wc = msm_host->dsc->slice_chunk_size * msm_host->dsc->slice_count + 1;
++			/*
++			 * When DSC is enabled, WC = slice_chunk_size * slice_per_pkt + 1.
++			 * Currently, the driver only supports default value of slice_per_pkt = 1
++			 *
++			 * TODO: Expand mipi_dsi_device struct to hold slice_per_pkt info
++			 *       and adjust DSC math to account for slice_per_pkt.
++			 */
++			wc = msm_host->dsc->slice_chunk_size + 1;
+ 
+ 		dsi_write(msm_host, REG_DSI_CMD_MDP_STREAM0_CTRL,
+ 			DSI_CMD_MDP_STREAM0_CTRL_WORD_COUNT(wc) |
+diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c
+index 9f488adea7f54..3ce45b023e637 100644
+--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c
++++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c
+@@ -539,6 +539,9 @@ static int dsi_pll_14nm_vco_prepare(struct clk_hw *hw)
+ 	if (unlikely(pll_14nm->phy->pll_on))
+ 		return 0;
+ 
++	if (dsi_pll_14nm_vco_recalc_rate(hw, VCO_REF_CLK_RATE) == 0)
++		dsi_pll_14nm_vco_set_rate(hw, pll_14nm->phy->cfg->min_pll_rate, VCO_REF_CLK_RATE);
++
+ 	dsi_phy_write(base + REG_DSI_14nm_PHY_PLL_VREF_CFG1, 0x10);
+ 	dsi_phy_write(cmn_base + REG_DSI_14nm_PHY_CMN_PLL_CNTRL, 1);
+ 
+diff --git a/drivers/gpu/drm/nouveau/dispnv50/disp.c b/drivers/gpu/drm/nouveau/dispnv50/disp.c
+index 5bb777ff13130..9b6824f6b9e4b 100644
+--- a/drivers/gpu/drm/nouveau/dispnv50/disp.c
++++ b/drivers/gpu/drm/nouveau/dispnv50/disp.c
+@@ -64,6 +64,7 @@
+ #include "nouveau_connector.h"
+ #include "nouveau_encoder.h"
+ #include "nouveau_fence.h"
++#include "nv50_display.h"
+ 
+ #include <subdev/bios/dp.h>
+ 
+diff --git a/drivers/gpu/drm/nouveau/nv50_display.h b/drivers/gpu/drm/nouveau/nv50_display.h
+index fbd3b15583bc8..60f77766766e9 100644
+--- a/drivers/gpu/drm/nouveau/nv50_display.h
++++ b/drivers/gpu/drm/nouveau/nv50_display.h
+@@ -31,7 +31,5 @@
+ #include "nouveau_reg.h"
+ 
+ int  nv50_display_create(struct drm_device *);
+-void nv50_display_destroy(struct drm_device *);
+-int  nv50_display_init(struct drm_device *);
+-void nv50_display_fini(struct drm_device *);
++
+ #endif /* __NV50_DISPLAY_H__ */
+diff --git a/drivers/gpu/drm/panel/panel-sharp-ls043t1le01.c b/drivers/gpu/drm/panel/panel-sharp-ls043t1le01.c
+index d1ec80a3e3c72..ef148504cf24a 100644
+--- a/drivers/gpu/drm/panel/panel-sharp-ls043t1le01.c
++++ b/drivers/gpu/drm/panel/panel-sharp-ls043t1le01.c
+@@ -192,15 +192,15 @@ static int sharp_nt_panel_enable(struct drm_panel *panel)
+ }
+ 
+ static const struct drm_display_mode default_mode = {
+-	.clock = 41118,
++	.clock = (540 + 48 + 32 + 80) * (960 + 3 + 10 + 15) * 60 / 1000,
+ 	.hdisplay = 540,
+ 	.hsync_start = 540 + 48,
+-	.hsync_end = 540 + 48 + 80,
+-	.htotal = 540 + 48 + 80 + 32,
++	.hsync_end = 540 + 48 + 32,
++	.htotal = 540 + 48 + 32 + 80,
+ 	.vdisplay = 960,
+ 	.vsync_start = 960 + 3,
+-	.vsync_end = 960 + 3 + 15,
+-	.vtotal = 960 + 3 + 15 + 1,
++	.vsync_end = 960 + 3 + 10,
++	.vtotal = 960 + 3 + 10 + 15,
+ };
+ 
+ static int sharp_nt_panel_get_modes(struct drm_panel *panel,
+@@ -280,6 +280,7 @@ static int sharp_nt_panel_probe(struct mipi_dsi_device *dsi)
+ 	dsi->lanes = 2;
+ 	dsi->format = MIPI_DSI_FMT_RGB888;
+ 	dsi->mode_flags = MIPI_DSI_MODE_VIDEO |
++			MIPI_DSI_MODE_VIDEO_SYNC_PULSE |
+ 			MIPI_DSI_MODE_VIDEO_HSE |
+ 			MIPI_DSI_CLOCK_NON_CONTINUOUS |
+ 			MIPI_DSI_MODE_NO_EOT_PACKET;
+diff --git a/drivers/gpu/drm/panel/panel-simple.c b/drivers/gpu/drm/panel/panel-simple.c
+index 065f378bba9d2..d8efbcee9bc12 100644
+--- a/drivers/gpu/drm/panel/panel-simple.c
++++ b/drivers/gpu/drm/panel/panel-simple.c
+@@ -759,8 +759,8 @@ static const struct panel_desc ampire_am_480272h3tmqw_t01h = {
+ 	.num_modes = 1,
+ 	.bpc = 8,
+ 	.size = {
+-		.width = 105,
+-		.height = 67,
++		.width = 99,
++		.height = 58,
+ 	},
+ 	.bus_format = MEDIA_BUS_FMT_RGB888_1X24,
+ };
+diff --git a/drivers/gpu/drm/radeon/ci_dpm.c b/drivers/gpu/drm/radeon/ci_dpm.c
+index 8ef25ab305ae7..b8f4dac68d850 100644
+--- a/drivers/gpu/drm/radeon/ci_dpm.c
++++ b/drivers/gpu/drm/radeon/ci_dpm.c
+@@ -5517,6 +5517,7 @@ static int ci_parse_power_table(struct radeon_device *rdev)
+ 	u8 frev, crev;
+ 	u8 *power_state_offset;
+ 	struct ci_ps *ps;
++	int ret;
+ 
+ 	if (!atom_parse_data_header(mode_info->atom_context, index, NULL,
+ 				   &frev, &crev, &data_offset))
+@@ -5546,11 +5547,15 @@ static int ci_parse_power_table(struct radeon_device *rdev)
+ 		non_clock_array_index = power_state->v2.nonClockInfoIndex;
+ 		non_clock_info = (struct _ATOM_PPLIB_NONCLOCK_INFO *)
+ 			&non_clock_info_array->nonClockInfo[non_clock_array_index];
+-		if (!rdev->pm.power_state[i].clock_info)
+-			return -EINVAL;
++		if (!rdev->pm.power_state[i].clock_info) {
++			ret = -EINVAL;
++			goto err_free_ps;
++		}
+ 		ps = kzalloc(sizeof(struct ci_ps), GFP_KERNEL);
+-		if (ps == NULL)
+-			return -ENOMEM;
++		if (ps == NULL) {
++			ret = -ENOMEM;
++			goto err_free_ps;
++		}
+ 		rdev->pm.dpm.ps[i].ps_priv = ps;
+ 		ci_parse_pplib_non_clock_info(rdev, &rdev->pm.dpm.ps[i],
+ 					      non_clock_info,
+@@ -5590,6 +5595,12 @@ static int ci_parse_power_table(struct radeon_device *rdev)
+ 	}
+ 
+ 	return 0;
++
++err_free_ps:
++	for (i = 0; i < rdev->pm.dpm.num_ps; i++)
++		kfree(rdev->pm.dpm.ps[i].ps_priv);
++	kfree(rdev->pm.dpm.ps);
++	return ret;
+ }
+ 
+ static int ci_get_vbios_boot_values(struct radeon_device *rdev,
+@@ -5678,25 +5689,26 @@ int ci_dpm_init(struct radeon_device *rdev)
+ 
+ 	ret = ci_get_vbios_boot_values(rdev, &pi->vbios_boot_state);
+ 	if (ret) {
+-		ci_dpm_fini(rdev);
++		kfree(rdev->pm.dpm.priv);
+ 		return ret;
+ 	}
+ 
+ 	ret = r600_get_platform_caps(rdev);
+ 	if (ret) {
+-		ci_dpm_fini(rdev);
++		kfree(rdev->pm.dpm.priv);
+ 		return ret;
+ 	}
+ 
+ 	ret = r600_parse_extended_power_table(rdev);
+ 	if (ret) {
+-		ci_dpm_fini(rdev);
++		kfree(rdev->pm.dpm.priv);
+ 		return ret;
+ 	}
+ 
+ 	ret = ci_parse_power_table(rdev);
+ 	if (ret) {
+-		ci_dpm_fini(rdev);
++		kfree(rdev->pm.dpm.priv);
++		r600_free_extended_power_table(rdev);
+ 		return ret;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/radeon/cypress_dpm.c b/drivers/gpu/drm/radeon/cypress_dpm.c
+index fdddbbaecbb74..72a0768df00f7 100644
+--- a/drivers/gpu/drm/radeon/cypress_dpm.c
++++ b/drivers/gpu/drm/radeon/cypress_dpm.c
+@@ -557,8 +557,12 @@ static int cypress_populate_mclk_value(struct radeon_device *rdev,
+ 						     ASIC_INTERNAL_MEMORY_SS, vco_freq)) {
+ 			u32 reference_clock = rdev->clock.mpll.reference_freq;
+ 			u32 decoded_ref = rv740_get_decoded_reference_divider(dividers.ref_div);
+-			u32 clk_s = reference_clock * 5 / (decoded_ref * ss.rate);
+-			u32 clk_v = ss.percentage *
++			u32 clk_s, clk_v;
++
++			if (!decoded_ref)
++				return -EINVAL;
++			clk_s = reference_clock * 5 / (decoded_ref * ss.rate);
++			clk_v = ss.percentage *
+ 				(0x4000 * dividers.whole_fb_div + 0x800 * dividers.frac_fb_div) / (clk_s * 625);
+ 
+ 			mpll_ss1 &= ~CLKV_MASK;
+diff --git a/drivers/gpu/drm/radeon/ni_dpm.c b/drivers/gpu/drm/radeon/ni_dpm.c
+index 672d2239293e0..3e1c1a392fb7b 100644
+--- a/drivers/gpu/drm/radeon/ni_dpm.c
++++ b/drivers/gpu/drm/radeon/ni_dpm.c
+@@ -2241,8 +2241,12 @@ static int ni_populate_mclk_value(struct radeon_device *rdev,
+ 						     ASIC_INTERNAL_MEMORY_SS, vco_freq)) {
+ 			u32 reference_clock = rdev->clock.mpll.reference_freq;
+ 			u32 decoded_ref = rv740_get_decoded_reference_divider(dividers.ref_div);
+-			u32 clk_s = reference_clock * 5 / (decoded_ref * ss.rate);
+-			u32 clk_v = ss.percentage *
++			u32 clk_s, clk_v;
++
++			if (!decoded_ref)
++				return -EINVAL;
++			clk_s = reference_clock * 5 / (decoded_ref * ss.rate);
++			clk_v = ss.percentage *
+ 				(0x4000 * dividers.whole_fb_div + 0x800 * dividers.frac_fb_div) / (clk_s * 625);
+ 
+ 			mpll_ss1 &= ~CLKV_MASK;
+diff --git a/drivers/gpu/drm/radeon/rv740_dpm.c b/drivers/gpu/drm/radeon/rv740_dpm.c
+index d57a3e1df8d63..4464fd21a3029 100644
+--- a/drivers/gpu/drm/radeon/rv740_dpm.c
++++ b/drivers/gpu/drm/radeon/rv740_dpm.c
+@@ -249,8 +249,12 @@ int rv740_populate_mclk_value(struct radeon_device *rdev,
+ 						     ASIC_INTERNAL_MEMORY_SS, vco_freq)) {
+ 			u32 reference_clock = rdev->clock.mpll.reference_freq;
+ 			u32 decoded_ref = rv740_get_decoded_reference_divider(dividers.ref_div);
+-			u32 clk_s = reference_clock * 5 / (decoded_ref * ss.rate);
+-			u32 clk_v = 0x40000 * ss.percentage *
++			u32 clk_s, clk_v;
++
++			if (!decoded_ref)
++				return -EINVAL;
++			clk_s = reference_clock * 5 / (decoded_ref * ss.rate);
++			clk_v = 0x40000 * ss.percentage *
+ 				(dividers.whole_fb_div + (dividers.frac_fb_div / 8)) / (clk_s * 10000);
+ 
+ 			mpll_ss1 &= ~CLKV_MASK;
+diff --git a/drivers/gpu/drm/sun4i/sun4i_tcon.c b/drivers/gpu/drm/sun4i/sun4i_tcon.c
+index 523a6d7879210..936796851ffd3 100644
+--- a/drivers/gpu/drm/sun4i/sun4i_tcon.c
++++ b/drivers/gpu/drm/sun4i/sun4i_tcon.c
+@@ -778,21 +778,19 @@ static irqreturn_t sun4i_tcon_handler(int irq, void *private)
+ static int sun4i_tcon_init_clocks(struct device *dev,
+ 				  struct sun4i_tcon *tcon)
+ {
+-	tcon->clk = devm_clk_get(dev, "ahb");
++	tcon->clk = devm_clk_get_enabled(dev, "ahb");
+ 	if (IS_ERR(tcon->clk)) {
+ 		dev_err(dev, "Couldn't get the TCON bus clock\n");
+ 		return PTR_ERR(tcon->clk);
+ 	}
+-	clk_prepare_enable(tcon->clk);
+ 
+ 	if (tcon->quirks->has_channel_0) {
+-		tcon->sclk0 = devm_clk_get(dev, "tcon-ch0");
++		tcon->sclk0 = devm_clk_get_enabled(dev, "tcon-ch0");
+ 		if (IS_ERR(tcon->sclk0)) {
+ 			dev_err(dev, "Couldn't get the TCON channel 0 clock\n");
+ 			return PTR_ERR(tcon->sclk0);
+ 		}
+ 	}
+-	clk_prepare_enable(tcon->sclk0);
+ 
+ 	if (tcon->quirks->has_channel_1) {
+ 		tcon->sclk1 = devm_clk_get(dev, "tcon-ch1");
+@@ -805,12 +803,6 @@ static int sun4i_tcon_init_clocks(struct device *dev,
+ 	return 0;
+ }
+ 
+-static void sun4i_tcon_free_clocks(struct sun4i_tcon *tcon)
+-{
+-	clk_disable_unprepare(tcon->sclk0);
+-	clk_disable_unprepare(tcon->clk);
+-}
+-
+ static int sun4i_tcon_init_irq(struct device *dev,
+ 			       struct sun4i_tcon *tcon)
+ {
+@@ -1223,14 +1215,14 @@ static int sun4i_tcon_bind(struct device *dev, struct device *master,
+ 	ret = sun4i_tcon_init_regmap(dev, tcon);
+ 	if (ret) {
+ 		dev_err(dev, "Couldn't init our TCON regmap\n");
+-		goto err_free_clocks;
++		goto err_assert_reset;
+ 	}
+ 
+ 	if (tcon->quirks->has_channel_0) {
+ 		ret = sun4i_dclk_create(dev, tcon);
+ 		if (ret) {
+ 			dev_err(dev, "Couldn't create our TCON dot clock\n");
+-			goto err_free_clocks;
++			goto err_assert_reset;
+ 		}
+ 	}
+ 
+@@ -1293,8 +1285,6 @@ static int sun4i_tcon_bind(struct device *dev, struct device *master,
+ err_free_dotclock:
+ 	if (tcon->quirks->has_channel_0)
+ 		sun4i_dclk_free(tcon);
+-err_free_clocks:
+-	sun4i_tcon_free_clocks(tcon);
+ err_assert_reset:
+ 	reset_control_assert(tcon->lcd_rst);
+ 	return ret;
+@@ -1308,7 +1298,6 @@ static void sun4i_tcon_unbind(struct device *dev, struct device *master,
+ 	list_del(&tcon->list);
+ 	if (tcon->quirks->has_channel_0)
+ 		sun4i_dclk_free(tcon);
+-	sun4i_tcon_free_clocks(tcon);
+ }
+ 
+ static const struct component_ops sun4i_tcon_ops = {
+diff --git a/drivers/gpu/drm/vkms/vkms_composer.c b/drivers/gpu/drm/vkms/vkms_composer.c
+index 8e53fa80742b2..80164e79af006 100644
+--- a/drivers/gpu/drm/vkms/vkms_composer.c
++++ b/drivers/gpu/drm/vkms/vkms_composer.c
+@@ -99,7 +99,7 @@ static void blend(struct vkms_writeback_job *wb,
+ 			if (!check_y_limit(plane[i]->frame_info, y))
+ 				continue;
+ 
+-			plane[i]->plane_read(stage_buffer, plane[i]->frame_info, y);
++			vkms_compose_row(stage_buffer, plane[i], y);
+ 			pre_mul_alpha_blend(plane[i]->frame_info, stage_buffer,
+ 					    output_buffer);
+ 		}
+@@ -118,7 +118,7 @@ static int check_format_funcs(struct vkms_crtc_state *crtc_state,
+ 	u32 n_active_planes = crtc_state->num_active_planes;
+ 
+ 	for (size_t i = 0; i < n_active_planes; i++)
+-		if (!planes[i]->plane_read)
++		if (!planes[i]->pixel_read)
+ 			return -1;
+ 
+ 	if (active_wb && !active_wb->wb_write)
+diff --git a/drivers/gpu/drm/vkms/vkms_drv.h b/drivers/gpu/drm/vkms/vkms_drv.h
+index 4a248567efb26..f152d54baf769 100644
+--- a/drivers/gpu/drm/vkms/vkms_drv.h
++++ b/drivers/gpu/drm/vkms/vkms_drv.h
+@@ -56,8 +56,7 @@ struct vkms_writeback_job {
+ struct vkms_plane_state {
+ 	struct drm_shadow_plane_state base;
+ 	struct vkms_frame_info *frame_info;
+-	void (*plane_read)(struct line_buffer *buffer,
+-			   const struct vkms_frame_info *frame_info, int y);
++	void (*pixel_read)(u8 *src_buffer, struct pixel_argb_u16 *out_pixel);
+ };
+ 
+ struct vkms_plane {
+@@ -155,6 +154,7 @@ int vkms_verify_crc_source(struct drm_crtc *crtc, const char *source_name,
+ /* Composer Support */
+ void vkms_composer_worker(struct work_struct *work);
+ void vkms_set_composer(struct vkms_output *out, bool enabled);
++void vkms_compose_row(struct line_buffer *stage_buffer, struct vkms_plane_state *plane, int y);
+ 
+ /* Writeback */
+ int vkms_enable_writeback_connector(struct vkms_device *vkmsdev);
+diff --git a/drivers/gpu/drm/vkms/vkms_formats.c b/drivers/gpu/drm/vkms/vkms_formats.c
+index d4950688b3f17..b11342026485f 100644
+--- a/drivers/gpu/drm/vkms/vkms_formats.c
++++ b/drivers/gpu/drm/vkms/vkms_formats.c
+@@ -42,100 +42,75 @@ static void *get_packed_src_addr(const struct vkms_frame_info *frame_info, int y
+ 	return packed_pixels_addr(frame_info, x_src, y_src);
+ }
+ 
+-static void ARGB8888_to_argb_u16(struct line_buffer *stage_buffer,
+-				 const struct vkms_frame_info *frame_info, int y)
++static void ARGB8888_to_argb_u16(u8 *src_pixels, struct pixel_argb_u16 *out_pixel)
+ {
+-	struct pixel_argb_u16 *out_pixels = stage_buffer->pixels;
+-	u8 *src_pixels = get_packed_src_addr(frame_info, y);
+-	int x_limit = min_t(size_t, drm_rect_width(&frame_info->dst),
+-			    stage_buffer->n_pixels);
+-
+-	for (size_t x = 0; x < x_limit; x++, src_pixels += 4) {
+-		/*
+-		 * The 257 is the "conversion ratio". This number is obtained by the
+-		 * (2^16 - 1) / (2^8 - 1) division. Which, in this case, tries to get
+-		 * the best color value in a pixel format with more possibilities.
+-		 * A similar idea applies to others RGB color conversions.
+-		 */
+-		out_pixels[x].a = (u16)src_pixels[3] * 257;
+-		out_pixels[x].r = (u16)src_pixels[2] * 257;
+-		out_pixels[x].g = (u16)src_pixels[1] * 257;
+-		out_pixels[x].b = (u16)src_pixels[0] * 257;
+-	}
++	/*
++	 * The 257 is the "conversion ratio". This number is obtained by the
++	 * (2^16 - 1) / (2^8 - 1) division. Which, in this case, tries to get
++	 * the best color value in a pixel format with more possibilities.
++	 * A similar idea applies to others RGB color conversions.
++	 */
++	out_pixel->a = (u16)src_pixels[3] * 257;
++	out_pixel->r = (u16)src_pixels[2] * 257;
++	out_pixel->g = (u16)src_pixels[1] * 257;
++	out_pixel->b = (u16)src_pixels[0] * 257;
+ }
+ 
+-static void XRGB8888_to_argb_u16(struct line_buffer *stage_buffer,
+-				 const struct vkms_frame_info *frame_info, int y)
++static void XRGB8888_to_argb_u16(u8 *src_pixels, struct pixel_argb_u16 *out_pixel)
+ {
+-	struct pixel_argb_u16 *out_pixels = stage_buffer->pixels;
+-	u8 *src_pixels = get_packed_src_addr(frame_info, y);
+-	int x_limit = min_t(size_t, drm_rect_width(&frame_info->dst),
+-			    stage_buffer->n_pixels);
+-
+-	for (size_t x = 0; x < x_limit; x++, src_pixels += 4) {
+-		out_pixels[x].a = (u16)0xffff;
+-		out_pixels[x].r = (u16)src_pixels[2] * 257;
+-		out_pixels[x].g = (u16)src_pixels[1] * 257;
+-		out_pixels[x].b = (u16)src_pixels[0] * 257;
+-	}
++	out_pixel->a = (u16)0xffff;
++	out_pixel->r = (u16)src_pixels[2] * 257;
++	out_pixel->g = (u16)src_pixels[1] * 257;
++	out_pixel->b = (u16)src_pixels[0] * 257;
+ }
+ 
+-static void ARGB16161616_to_argb_u16(struct line_buffer *stage_buffer,
+-				     const struct vkms_frame_info *frame_info,
+-				     int y)
++static void ARGB16161616_to_argb_u16(u8 *src_pixels, struct pixel_argb_u16 *out_pixel)
+ {
+-	struct pixel_argb_u16 *out_pixels = stage_buffer->pixels;
+-	u16 *src_pixels = get_packed_src_addr(frame_info, y);
+-	int x_limit = min_t(size_t, drm_rect_width(&frame_info->dst),
+-			    stage_buffer->n_pixels);
++	u16 *pixels = (u16 *)src_pixels;
+ 
+-	for (size_t x = 0; x < x_limit; x++, src_pixels += 4) {
+-		out_pixels[x].a = le16_to_cpu(src_pixels[3]);
+-		out_pixels[x].r = le16_to_cpu(src_pixels[2]);
+-		out_pixels[x].g = le16_to_cpu(src_pixels[1]);
+-		out_pixels[x].b = le16_to_cpu(src_pixels[0]);
+-	}
++	out_pixel->a = le16_to_cpu(pixels[3]);
++	out_pixel->r = le16_to_cpu(pixels[2]);
++	out_pixel->g = le16_to_cpu(pixels[1]);
++	out_pixel->b = le16_to_cpu(pixels[0]);
+ }
+ 
+-static void XRGB16161616_to_argb_u16(struct line_buffer *stage_buffer,
+-				     const struct vkms_frame_info *frame_info,
+-				     int y)
++static void XRGB16161616_to_argb_u16(u8 *src_pixels, struct pixel_argb_u16 *out_pixel)
+ {
+-	struct pixel_argb_u16 *out_pixels = stage_buffer->pixels;
+-	u16 *src_pixels = get_packed_src_addr(frame_info, y);
+-	int x_limit = min_t(size_t, drm_rect_width(&frame_info->dst),
+-			    stage_buffer->n_pixels);
++	u16 *pixels = (u16 *)src_pixels;
+ 
+-	for (size_t x = 0; x < x_limit; x++, src_pixels += 4) {
+-		out_pixels[x].a = (u16)0xffff;
+-		out_pixels[x].r = le16_to_cpu(src_pixels[2]);
+-		out_pixels[x].g = le16_to_cpu(src_pixels[1]);
+-		out_pixels[x].b = le16_to_cpu(src_pixels[0]);
+-	}
++	out_pixel->a = (u16)0xffff;
++	out_pixel->r = le16_to_cpu(pixels[2]);
++	out_pixel->g = le16_to_cpu(pixels[1]);
++	out_pixel->b = le16_to_cpu(pixels[0]);
+ }
+ 
+-static void RGB565_to_argb_u16(struct line_buffer *stage_buffer,
+-			       const struct vkms_frame_info *frame_info, int y)
++static void RGB565_to_argb_u16(u8 *src_pixels, struct pixel_argb_u16 *out_pixel)
+ {
+-	struct pixel_argb_u16 *out_pixels = stage_buffer->pixels;
+-	u16 *src_pixels = get_packed_src_addr(frame_info, y);
+-	int x_limit = min_t(size_t, drm_rect_width(&frame_info->dst),
+-			       stage_buffer->n_pixels);
++	u16 *pixels = (u16 *)src_pixels;
+ 
+ 	s64 fp_rb_ratio = drm_fixp_div(drm_int2fixp(65535), drm_int2fixp(31));
+ 	s64 fp_g_ratio = drm_fixp_div(drm_int2fixp(65535), drm_int2fixp(63));
+ 
+-	for (size_t x = 0; x < x_limit; x++, src_pixels++) {
+-		u16 rgb_565 = le16_to_cpu(*src_pixels);
+-		s64 fp_r = drm_int2fixp((rgb_565 >> 11) & 0x1f);
+-		s64 fp_g = drm_int2fixp((rgb_565 >> 5) & 0x3f);
+-		s64 fp_b = drm_int2fixp(rgb_565 & 0x1f);
++	u16 rgb_565 = le16_to_cpu(*pixels);
++	s64 fp_r = drm_int2fixp((rgb_565 >> 11) & 0x1f);
++	s64 fp_g = drm_int2fixp((rgb_565 >> 5) & 0x3f);
++	s64 fp_b = drm_int2fixp(rgb_565 & 0x1f);
+ 
+-		out_pixels[x].a = (u16)0xffff;
+-		out_pixels[x].r = drm_fixp2int(drm_fixp_mul(fp_r, fp_rb_ratio));
+-		out_pixels[x].g = drm_fixp2int(drm_fixp_mul(fp_g, fp_g_ratio));
+-		out_pixels[x].b = drm_fixp2int(drm_fixp_mul(fp_b, fp_rb_ratio));
+-	}
++	out_pixel->a = (u16)0xffff;
++	out_pixel->r = drm_fixp2int_round(drm_fixp_mul(fp_r, fp_rb_ratio));
++	out_pixel->g = drm_fixp2int_round(drm_fixp_mul(fp_g, fp_g_ratio));
++	out_pixel->b = drm_fixp2int_round(drm_fixp_mul(fp_b, fp_rb_ratio));
++}
++
++void vkms_compose_row(struct line_buffer *stage_buffer, struct vkms_plane_state *plane, int y)
++{
++	struct pixel_argb_u16 *out_pixels = stage_buffer->pixels;
++	struct vkms_frame_info *frame_info = plane->frame_info;
++	u8 *src_pixels = get_packed_src_addr(frame_info, y);
++	int limit = min_t(size_t, drm_rect_width(&frame_info->dst), stage_buffer->n_pixels);
++
++	for (size_t x = 0; x < limit; x++, src_pixels += frame_info->cpp)
++		plane->pixel_read(src_pixels, &out_pixels[x]);
+ }
+ 
+ /*
+@@ -241,15 +216,15 @@ static void argb_u16_to_RGB565(struct vkms_frame_info *frame_info,
+ 		s64 fp_g = drm_int2fixp(in_pixels[x].g);
+ 		s64 fp_b = drm_int2fixp(in_pixels[x].b);
+ 
+-		u16 r = drm_fixp2int(drm_fixp_div(fp_r, fp_rb_ratio));
+-		u16 g = drm_fixp2int(drm_fixp_div(fp_g, fp_g_ratio));
+-		u16 b = drm_fixp2int(drm_fixp_div(fp_b, fp_rb_ratio));
++		u16 r = drm_fixp2int_round(drm_fixp_div(fp_r, fp_rb_ratio));
++		u16 g = drm_fixp2int_round(drm_fixp_div(fp_g, fp_g_ratio));
++		u16 b = drm_fixp2int_round(drm_fixp_div(fp_b, fp_rb_ratio));
+ 
+ 		*dst_pixels = cpu_to_le16(r << 11 | g << 5 | b);
+ 	}
+ }
+ 
+-void *get_frame_to_line_function(u32 format)
++void *get_pixel_conversion_function(u32 format)
+ {
+ 	switch (format) {
+ 	case DRM_FORMAT_ARGB8888:
+diff --git a/drivers/gpu/drm/vkms/vkms_formats.h b/drivers/gpu/drm/vkms/vkms_formats.h
+index 43b7c19790181..c5b113495d0c0 100644
+--- a/drivers/gpu/drm/vkms/vkms_formats.h
++++ b/drivers/gpu/drm/vkms/vkms_formats.h
+@@ -5,7 +5,7 @@
+ 
+ #include "vkms_drv.h"
+ 
+-void *get_frame_to_line_function(u32 format);
++void *get_pixel_conversion_function(u32 format);
+ 
+ void *get_line_to_frame_function(u32 format);
+ 
+diff --git a/drivers/gpu/drm/vkms/vkms_plane.c b/drivers/gpu/drm/vkms/vkms_plane.c
+index b3f8a115cc234..eaee51358a49b 100644
+--- a/drivers/gpu/drm/vkms/vkms_plane.c
++++ b/drivers/gpu/drm/vkms/vkms_plane.c
+@@ -123,7 +123,7 @@ static void vkms_plane_atomic_update(struct drm_plane *plane,
+ 	frame_info->offset = fb->offsets[0];
+ 	frame_info->pitch = fb->pitches[0];
+ 	frame_info->cpp = fb->format->cpp[0];
+-	vkms_plane_state->plane_read = get_frame_to_line_function(fmt);
++	vkms_plane_state->pixel_read = get_pixel_conversion_function(fmt);
+ }
+ 
+ static int vkms_plane_atomic_check(struct drm_plane *plane,
+diff --git a/drivers/hid/Kconfig b/drivers/hid/Kconfig
+index 4ce012f83253e..b977450cac752 100644
+--- a/drivers/hid/Kconfig
++++ b/drivers/hid/Kconfig
+@@ -1285,7 +1285,7 @@ config HID_MCP2221
+ 
+ config HID_KUNIT_TEST
+ 	tristate "KUnit tests for HID" if !KUNIT_ALL_TESTS
+-	depends on KUNIT=y
++	depends on KUNIT
+ 	depends on HID_BATTERY_STRENGTH
+ 	depends on HID_UCLOGIC
+ 	default KUNIT_ALL_TESTS
+diff --git a/drivers/hwmon/f71882fg.c b/drivers/hwmon/f71882fg.c
+index 70121482a6173..27207ec6f7feb 100644
+--- a/drivers/hwmon/f71882fg.c
++++ b/drivers/hwmon/f71882fg.c
+@@ -1096,8 +1096,11 @@ static ssize_t show_pwm(struct device *dev,
+ 		val = data->pwm[nr];
+ 	else {
+ 		/* RPM mode */
+-		val = 255 * fan_from_reg(data->fan_target[nr])
+-			/ fan_from_reg(data->fan_full_speed[nr]);
++		if (fan_from_reg(data->fan_full_speed[nr]))
++			val = 255 * fan_from_reg(data->fan_target[nr])
++				/ fan_from_reg(data->fan_full_speed[nr]);
++		else
++			val = 0;
+ 	}
+ 	mutex_unlock(&data->update_lock);
+ 	return sprintf(buf, "%d\n", val);
+diff --git a/drivers/hwmon/gsc-hwmon.c b/drivers/hwmon/gsc-hwmon.c
+index 73e5d92b200b0..1501ceb551e79 100644
+--- a/drivers/hwmon/gsc-hwmon.c
++++ b/drivers/hwmon/gsc-hwmon.c
+@@ -82,8 +82,8 @@ static ssize_t pwm_auto_point_temp_store(struct device *dev,
+ 	if (kstrtol(buf, 10, &temp))
+ 		return -EINVAL;
+ 
+-	temp = clamp_val(temp, 0, 10000);
+-	temp = DIV_ROUND_CLOSEST(temp, 10);
++	temp = clamp_val(temp, 0, 100000);
++	temp = DIV_ROUND_CLOSEST(temp, 100);
+ 
+ 	regs[0] = temp & 0xff;
+ 	regs[1] = (temp >> 8) & 0xff;
+@@ -100,7 +100,7 @@ static ssize_t pwm_auto_point_pwm_show(struct device *dev,
+ {
+ 	struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr);
+ 
+-	return sprintf(buf, "%d\n", 255 * (50 + (attr->index * 10)) / 100);
++	return sprintf(buf, "%d\n", 255 * (50 + (attr->index * 10)));
+ }
+ 
+ static SENSOR_DEVICE_ATTR_RO(pwm1_auto_point1_pwm, pwm_auto_point_pwm, 0);
+diff --git a/drivers/hwmon/pmbus/adm1275.c b/drivers/hwmon/pmbus/adm1275.c
+index 3b07bfb43e937..b8543c06d022a 100644
+--- a/drivers/hwmon/pmbus/adm1275.c
++++ b/drivers/hwmon/pmbus/adm1275.c
+@@ -37,10 +37,13 @@ enum chips { adm1075, adm1272, adm1275, adm1276, adm1278, adm1293, adm1294 };
+ 
+ #define ADM1272_IRANGE			BIT(0)
+ 
++#define ADM1278_TSFILT			BIT(15)
+ #define ADM1278_TEMP1_EN		BIT(3)
+ #define ADM1278_VIN_EN			BIT(2)
+ #define ADM1278_VOUT_EN			BIT(1)
+ 
++#define ADM1278_PMON_DEFCONFIG		(ADM1278_VOUT_EN | ADM1278_TEMP1_EN | ADM1278_TSFILT)
++
+ #define ADM1293_IRANGE_25		0
+ #define ADM1293_IRANGE_50		BIT(6)
+ #define ADM1293_IRANGE_100		BIT(7)
+@@ -462,6 +465,22 @@ static const struct i2c_device_id adm1275_id[] = {
+ };
+ MODULE_DEVICE_TABLE(i2c, adm1275_id);
+ 
++/* Enable VOUT & TEMP1 if not enabled (disabled by default) */
++static int adm1275_enable_vout_temp(struct i2c_client *client, int config)
++{
++	int ret;
++
++	if ((config & ADM1278_PMON_DEFCONFIG) != ADM1278_PMON_DEFCONFIG) {
++		config |= ADM1278_PMON_DEFCONFIG;
++		ret = i2c_smbus_write_word_data(client, ADM1275_PMON_CONFIG, config);
++		if (ret < 0) {
++			dev_err(&client->dev, "Failed to enable VOUT/TEMP1 monitoring\n");
++			return ret;
++		}
++	}
++	return 0;
++}
++
+ static int adm1275_probe(struct i2c_client *client)
+ {
+ 	s32 (*config_read_fn)(const struct i2c_client *client, u8 reg);
+@@ -615,19 +634,10 @@ static int adm1275_probe(struct i2c_client *client)
+ 			PMBUS_HAVE_VOUT | PMBUS_HAVE_STATUS_VOUT |
+ 			PMBUS_HAVE_TEMP | PMBUS_HAVE_STATUS_TEMP;
+ 
+-		/* Enable VOUT & TEMP1 if not enabled (disabled by default) */
+-		if ((config & (ADM1278_VOUT_EN | ADM1278_TEMP1_EN)) !=
+-		    (ADM1278_VOUT_EN | ADM1278_TEMP1_EN)) {
+-			config |= ADM1278_VOUT_EN | ADM1278_TEMP1_EN;
+-			ret = i2c_smbus_write_byte_data(client,
+-							ADM1275_PMON_CONFIG,
+-							config);
+-			if (ret < 0) {
+-				dev_err(&client->dev,
+-					"Failed to enable VOUT monitoring\n");
+-				return -ENODEV;
+-			}
+-		}
++		ret = adm1275_enable_vout_temp(client, config);
++		if (ret)
++			return ret;
++
+ 		if (config & ADM1278_VIN_EN)
+ 			info->func[0] |= PMBUS_HAVE_VIN;
+ 		break;
+@@ -684,19 +694,9 @@ static int adm1275_probe(struct i2c_client *client)
+ 			PMBUS_HAVE_VOUT | PMBUS_HAVE_STATUS_VOUT |
+ 			PMBUS_HAVE_TEMP | PMBUS_HAVE_STATUS_TEMP;
+ 
+-		/* Enable VOUT & TEMP1 if not enabled (disabled by default) */
+-		if ((config & (ADM1278_VOUT_EN | ADM1278_TEMP1_EN)) !=
+-		    (ADM1278_VOUT_EN | ADM1278_TEMP1_EN)) {
+-			config |= ADM1278_VOUT_EN | ADM1278_TEMP1_EN;
+-			ret = i2c_smbus_write_word_data(client,
+-							ADM1275_PMON_CONFIG,
+-							config);
+-			if (ret < 0) {
+-				dev_err(&client->dev,
+-					"Failed to enable VOUT monitoring\n");
+-				return -ENODEV;
+-			}
+-		}
++		ret = adm1275_enable_vout_temp(client, config);
++		if (ret)
++			return ret;
+ 
+ 		if (config & ADM1278_VIN_EN)
+ 			info->func[0] |= PMBUS_HAVE_VIN;
+diff --git a/drivers/infiniband/hw/bnxt_re/main.c b/drivers/infiniband/hw/bnxt_re/main.c
+index 85e36c9f8e797..b4edcf12d0d19 100644
+--- a/drivers/infiniband/hw/bnxt_re/main.c
++++ b/drivers/infiniband/hw/bnxt_re/main.c
+@@ -283,15 +283,21 @@ static void bnxt_re_start_irq(void *handle, struct bnxt_msix_entry *ent)
+ 	for (indx = 0; indx < rdev->num_msix; indx++)
+ 		rdev->en_dev->msix_entries[indx].vector = ent[indx].vector;
+ 
+-	bnxt_qplib_rcfw_start_irq(rcfw, msix_ent[BNXT_RE_AEQ_IDX].vector,
+-				  false);
++	rc = bnxt_qplib_rcfw_start_irq(rcfw, msix_ent[BNXT_RE_AEQ_IDX].vector,
++				       false);
++	if (rc) {
++		ibdev_warn(&rdev->ibdev, "Failed to reinit CREQ\n");
++		return;
++	}
+ 	for (indx = BNXT_RE_NQ_IDX ; indx < rdev->num_msix; indx++) {
+ 		nq = &rdev->nq[indx - 1];
+ 		rc = bnxt_qplib_nq_start_irq(nq, indx - 1,
+ 					     msix_ent[indx].vector, false);
+-		if (rc)
++		if (rc) {
+ 			ibdev_warn(&rdev->ibdev, "Failed to reinit NQ index %d\n",
+ 				   indx - 1);
++			return;
++		}
+ 	}
+ }
+ 
+@@ -1004,12 +1010,6 @@ static int bnxt_re_update_gid(struct bnxt_re_dev *rdev)
+ 	if (!ib_device_try_get(&rdev->ibdev))
+ 		return 0;
+ 
+-	if (!sgid_tbl) {
+-		ibdev_err(&rdev->ibdev, "QPLIB: SGID table not allocated");
+-		rc = -EINVAL;
+-		goto out;
+-	}
+-
+ 	for (index = 0; index < sgid_tbl->active; index++) {
+ 		gid_idx = sgid_tbl->hw_id[index];
+ 
+@@ -1027,7 +1027,7 @@ static int bnxt_re_update_gid(struct bnxt_re_dev *rdev)
+ 		rc = bnxt_qplib_update_sgid(sgid_tbl, &gid, gid_idx,
+ 					    rdev->qplib_res.netdev->dev_addr);
+ 	}
+-out:
++
+ 	ib_device_put(&rdev->ibdev);
+ 	return rc;
+ }
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.c b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+index ab2cc1c67f70b..74d56900387a1 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+@@ -405,6 +405,9 @@ static irqreturn_t bnxt_qplib_nq_irq(int irq, void *dev_instance)
+ 
+ void bnxt_qplib_nq_stop_irq(struct bnxt_qplib_nq *nq, bool kill)
+ {
++	if (!nq->requested)
++		return;
++
+ 	tasklet_disable(&nq->nq_tasklet);
+ 	/* Mask h/w interrupt */
+ 	bnxt_qplib_ring_nq_db(&nq->nq_db.dbinfo, nq->res->cctx, false);
+@@ -412,11 +415,12 @@ void bnxt_qplib_nq_stop_irq(struct bnxt_qplib_nq *nq, bool kill)
+ 	synchronize_irq(nq->msix_vec);
+ 	if (kill)
+ 		tasklet_kill(&nq->nq_tasklet);
+-	if (nq->requested) {
+-		irq_set_affinity_hint(nq->msix_vec, NULL);
+-		free_irq(nq->msix_vec, nq);
+-		nq->requested = false;
+-	}
++
++	irq_set_affinity_hint(nq->msix_vec, NULL);
++	free_irq(nq->msix_vec, nq);
++	kfree(nq->name);
++	nq->name = NULL;
++	nq->requested = false;
+ }
+ 
+ void bnxt_qplib_disable_nq(struct bnxt_qplib_nq *nq)
+@@ -442,6 +446,7 @@ void bnxt_qplib_disable_nq(struct bnxt_qplib_nq *nq)
+ int bnxt_qplib_nq_start_irq(struct bnxt_qplib_nq *nq, int nq_indx,
+ 			    int msix_vector, bool need_init)
+ {
++	struct bnxt_qplib_res *res = nq->res;
+ 	int rc;
+ 
+ 	if (nq->requested)
+@@ -453,10 +458,17 @@ int bnxt_qplib_nq_start_irq(struct bnxt_qplib_nq *nq, int nq_indx,
+ 	else
+ 		tasklet_enable(&nq->nq_tasklet);
+ 
+-	snprintf(nq->name, sizeof(nq->name), "bnxt_qplib_nq-%d", nq_indx);
++	nq->name = kasprintf(GFP_KERNEL, "bnxt_re-nq-%d@pci:%s",
++			     nq_indx, pci_name(res->pdev));
++	if (!nq->name)
++		return -ENOMEM;
+ 	rc = request_irq(nq->msix_vec, bnxt_qplib_nq_irq, 0, nq->name, nq);
+-	if (rc)
++	if (rc) {
++		kfree(nq->name);
++		nq->name = NULL;
++		tasklet_disable(&nq->nq_tasklet);
+ 		return rc;
++	}
+ 
+ 	cpumask_clear(&nq->mask);
+ 	cpumask_set_cpu(nq_indx, &nq->mask);
+@@ -467,7 +479,7 @@ int bnxt_qplib_nq_start_irq(struct bnxt_qplib_nq *nq, int nq_indx,
+ 			 nq->msix_vec, nq_indx);
+ 	}
+ 	nq->requested = true;
+-	bnxt_qplib_ring_nq_db(&nq->nq_db.dbinfo, nq->res->cctx, true);
++	bnxt_qplib_ring_nq_db(&nq->nq_db.dbinfo, res->cctx, true);
+ 
+ 	return rc;
+ }
+@@ -1601,7 +1613,7 @@ static int bnxt_qplib_put_inline(struct bnxt_qplib_qp *qp,
+ 		il_src = (void *)wqe->sg_list[indx].addr;
+ 		t_len += len;
+ 		if (t_len > qp->max_inline_data)
+-			goto bad;
++			return -ENOMEM;
+ 		while (len) {
+ 			if (pull_dst) {
+ 				pull_dst = false;
+@@ -1625,8 +1637,6 @@ static int bnxt_qplib_put_inline(struct bnxt_qplib_qp *qp,
+ 	}
+ 
+ 	return t_len;
+-bad:
+-	return -ENOMEM;
+ }
+ 
+ static u32 bnxt_qplib_put_sges(struct bnxt_qplib_hwq *hwq,
+@@ -2056,7 +2066,7 @@ int bnxt_qplib_create_cq(struct bnxt_qplib_res *res, struct bnxt_qplib_cq *cq)
+ 	hwq_attr.sginfo = &cq->sg_info;
+ 	rc = bnxt_qplib_alloc_init_hwq(&cq->hwq, &hwq_attr);
+ 	if (rc)
+-		goto exit;
++		return rc;
+ 
+ 	RCFW_CMD_PREP(req, CREATE_CQ, cmd_flags);
+ 
+@@ -2097,7 +2107,6 @@ int bnxt_qplib_create_cq(struct bnxt_qplib_res *res, struct bnxt_qplib_cq *cq)
+ 
+ fail:
+ 	bnxt_qplib_free_hwq(res, &cq->hwq);
+-exit:
+ 	return rc;
+ }
+ 
+@@ -2725,11 +2734,8 @@ static int bnxt_qplib_cq_process_terminal(struct bnxt_qplib_cq *cq,
+ 
+ 	qp = (struct bnxt_qplib_qp *)((unsigned long)
+ 				      le64_to_cpu(hwcqe->qp_handle));
+-	if (!qp) {
+-		dev_err(&cq->hwq.pdev->dev,
+-			"FP: CQ Process terminal qp is NULL\n");
++	if (!qp)
+ 		return -EINVAL;
+-	}
+ 
+ 	/* Must block new posting of SQ and RQ */
+ 	qp->state = CMDQ_MODIFY_QP_NEW_STATE_ERR;
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.h b/drivers/infiniband/hw/bnxt_re/qplib_fp.h
+index 0375019525431..f859710f9a7f4 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.h
++++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.h
+@@ -471,7 +471,7 @@ typedef int (*srqn_handler_t)(struct bnxt_qplib_nq *nq,
+ struct bnxt_qplib_nq {
+ 	struct pci_dev			*pdev;
+ 	struct bnxt_qplib_res		*res;
+-	char				name[32];
++	char				*name;
+ 	struct bnxt_qplib_hwq		hwq;
+ 	struct bnxt_qplib_nq_db		nq_db;
+ 	u16				ring_id;
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
+index 061b2895dd9b5..75e0c42f6f424 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
+@@ -181,7 +181,7 @@ static int __send_message(struct bnxt_qplib_rcfw *rcfw, struct cmdq_base *req,
+ 	} while (size > 0);
+ 	cmdq->seq_num++;
+ 
+-	cmdq_prod = hwq->prod;
++	cmdq_prod = hwq->prod & 0xFFFF;
+ 	if (test_bit(FIRMWARE_FIRST_FLAG, &cmdq->flags)) {
+ 		/* The very first doorbell write
+ 		 * is required to set this flag
+@@ -299,7 +299,8 @@ static int bnxt_qplib_process_func_event(struct bnxt_qplib_rcfw *rcfw,
+ }
+ 
+ static int bnxt_qplib_process_qp_event(struct bnxt_qplib_rcfw *rcfw,
+-				       struct creq_qp_event *qp_event)
++				       struct creq_qp_event *qp_event,
++				       u32 *num_wait)
+ {
+ 	struct creq_qp_error_notification *err_event;
+ 	struct bnxt_qplib_hwq *hwq = &rcfw->cmdq.hwq;
+@@ -308,6 +309,7 @@ static int bnxt_qplib_process_qp_event(struct bnxt_qplib_rcfw *rcfw,
+ 	u16 cbit, blocked = 0;
+ 	struct pci_dev *pdev;
+ 	unsigned long flags;
++	u32 wait_cmds = 0;
+ 	__le16  mcookie;
+ 	u16 cookie;
+ 	int rc = 0;
+@@ -367,9 +369,10 @@ static int bnxt_qplib_process_qp_event(struct bnxt_qplib_rcfw *rcfw,
+ 		crsqe->req_size = 0;
+ 
+ 		if (!blocked)
+-			wake_up(&rcfw->cmdq.waitq);
++			wait_cmds++;
+ 		spin_unlock_irqrestore(&hwq->lock, flags);
+ 	}
++	*num_wait += wait_cmds;
+ 	return rc;
+ }
+ 
+@@ -383,6 +386,7 @@ static void bnxt_qplib_service_creq(struct tasklet_struct *t)
+ 	struct creq_base *creqe;
+ 	u32 sw_cons, raw_cons;
+ 	unsigned long flags;
++	u32 num_wakeup = 0;
+ 
+ 	/* Service the CREQ until budget is over */
+ 	spin_lock_irqsave(&hwq->lock, flags);
+@@ -401,7 +405,8 @@ static void bnxt_qplib_service_creq(struct tasklet_struct *t)
+ 		switch (type) {
+ 		case CREQ_BASE_TYPE_QP_EVENT:
+ 			bnxt_qplib_process_qp_event
+-				(rcfw, (struct creq_qp_event *)creqe);
++				(rcfw, (struct creq_qp_event *)creqe,
++				 &num_wakeup);
+ 			creq->stats.creq_qp_event_processed++;
+ 			break;
+ 		case CREQ_BASE_TYPE_FUNC_EVENT:
+@@ -429,6 +434,8 @@ static void bnxt_qplib_service_creq(struct tasklet_struct *t)
+ 				      rcfw->res->cctx, true);
+ 	}
+ 	spin_unlock_irqrestore(&hwq->lock, flags);
++	if (num_wakeup)
++		wake_up_nr(&rcfw->cmdq.waitq, num_wakeup);
+ }
+ 
+ static irqreturn_t bnxt_qplib_creq_irq(int irq, void *dev_instance)
+@@ -598,7 +605,7 @@ int bnxt_qplib_alloc_rcfw_channel(struct bnxt_qplib_res *res,
+ 		rcfw->cmdq_depth = BNXT_QPLIB_CMDQE_MAX_CNT_8192;
+ 
+ 	sginfo.pgsize = bnxt_qplib_cmdqe_page_size(rcfw->cmdq_depth);
+-	hwq_attr.depth = rcfw->cmdq_depth;
++	hwq_attr.depth = rcfw->cmdq_depth & 0x7FFFFFFF;
+ 	hwq_attr.stride = BNXT_QPLIB_CMDQE_UNITS;
+ 	hwq_attr.type = HWQ_TYPE_CTX;
+ 	if (bnxt_qplib_alloc_init_hwq(&cmdq->hwq, &hwq_attr)) {
+@@ -635,6 +642,10 @@ void bnxt_qplib_rcfw_stop_irq(struct bnxt_qplib_rcfw *rcfw, bool kill)
+ 	struct bnxt_qplib_creq_ctx *creq;
+ 
+ 	creq = &rcfw->creq;
++
++	if (!creq->requested)
++		return;
++
+ 	tasklet_disable(&creq->creq_tasklet);
+ 	/* Mask h/w interrupts */
+ 	bnxt_qplib_ring_nq_db(&creq->creq_db.dbinfo, rcfw->res->cctx, false);
+@@ -643,10 +654,10 @@ void bnxt_qplib_rcfw_stop_irq(struct bnxt_qplib_rcfw *rcfw, bool kill)
+ 	if (kill)
+ 		tasklet_kill(&creq->creq_tasklet);
+ 
+-	if (creq->requested) {
+-		free_irq(creq->msix_vec, rcfw);
+-		creq->requested = false;
+-	}
++	free_irq(creq->msix_vec, rcfw);
++	kfree(creq->irq_name);
++	creq->irq_name = NULL;
++	creq->requested = false;
+ }
+ 
+ void bnxt_qplib_disable_rcfw_channel(struct bnxt_qplib_rcfw *rcfw)
+@@ -678,9 +689,11 @@ int bnxt_qplib_rcfw_start_irq(struct bnxt_qplib_rcfw *rcfw, int msix_vector,
+ 			      bool need_init)
+ {
+ 	struct bnxt_qplib_creq_ctx *creq;
++	struct bnxt_qplib_res *res;
+ 	int rc;
+ 
+ 	creq = &rcfw->creq;
++	res = rcfw->res;
+ 
+ 	if (creq->requested)
+ 		return -EFAULT;
+@@ -690,13 +703,22 @@ int bnxt_qplib_rcfw_start_irq(struct bnxt_qplib_rcfw *rcfw, int msix_vector,
+ 		tasklet_setup(&creq->creq_tasklet, bnxt_qplib_service_creq);
+ 	else
+ 		tasklet_enable(&creq->creq_tasklet);
++
++	creq->irq_name = kasprintf(GFP_KERNEL, "bnxt_re-creq@pci:%s",
++				   pci_name(res->pdev));
++	if (!creq->irq_name)
++		return -ENOMEM;
+ 	rc = request_irq(creq->msix_vec, bnxt_qplib_creq_irq, 0,
+-			 "bnxt_qplib_creq", rcfw);
+-	if (rc)
++			 creq->irq_name, rcfw);
++	if (rc) {
++		kfree(creq->irq_name);
++		creq->irq_name = NULL;
++		tasklet_disable(&creq->creq_tasklet);
+ 		return rc;
++	}
+ 	creq->requested = true;
+ 
+-	bnxt_qplib_ring_nq_db(&creq->creq_db.dbinfo, rcfw->res->cctx, true);
++	bnxt_qplib_ring_nq_db(&creq->creq_db.dbinfo, res->cctx, true);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h
+index 0a3d8e7da3d42..b887e7fbad9ef 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h
++++ b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h
+@@ -174,6 +174,7 @@ struct bnxt_qplib_creq_ctx {
+ 	u16				ring_id;
+ 	int				msix_vec;
+ 	bool				requested; /*irq handler installed */
++	char				*irq_name;
+ };
+ 
+ /* RCFW Communication Channels */
+diff --git a/drivers/infiniband/hw/hfi1/ipoib_tx.c b/drivers/infiniband/hw/hfi1/ipoib_tx.c
+index 8973a081d641e..e7d831330278d 100644
+--- a/drivers/infiniband/hw/hfi1/ipoib_tx.c
++++ b/drivers/infiniband/hw/hfi1/ipoib_tx.c
+@@ -215,11 +215,11 @@ static int hfi1_ipoib_build_ulp_payload(struct ipoib_txreq *tx,
+ 		const skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
+ 
+ 		ret = sdma_txadd_page(dd,
+-				      NULL,
+ 				      txreq,
+ 				      skb_frag_page(frag),
+ 				      frag->bv_offset,
+-				      skb_frag_size(frag));
++				      skb_frag_size(frag),
++				      NULL, NULL, NULL);
+ 		if (unlikely(ret))
+ 			break;
+ 	}
+diff --git a/drivers/infiniband/hw/hfi1/mmu_rb.c b/drivers/infiniband/hw/hfi1/mmu_rb.c
+index 71b9ac0188875..94f1701667301 100644
+--- a/drivers/infiniband/hw/hfi1/mmu_rb.c
++++ b/drivers/infiniband/hw/hfi1/mmu_rb.c
+@@ -19,8 +19,7 @@ static int mmu_notifier_range_start(struct mmu_notifier *,
+ 		const struct mmu_notifier_range *);
+ static struct mmu_rb_node *__mmu_rb_search(struct mmu_rb_handler *,
+ 					   unsigned long, unsigned long);
+-static void do_remove(struct mmu_rb_handler *handler,
+-		      struct list_head *del_list);
++static void release_immediate(struct kref *refcount);
+ static void handle_remove(struct work_struct *work);
+ 
+ static const struct mmu_notifier_ops mn_opts = {
+@@ -103,7 +102,11 @@ void hfi1_mmu_rb_unregister(struct mmu_rb_handler *handler)
+ 	}
+ 	spin_unlock_irqrestore(&handler->lock, flags);
+ 
+-	do_remove(handler, &del_list);
++	while (!list_empty(&del_list)) {
++		rbnode = list_first_entry(&del_list, struct mmu_rb_node, list);
++		list_del(&rbnode->list);
++		kref_put(&rbnode->refcount, release_immediate);
++	}
+ 
+ 	/* Now the mm may be freed. */
+ 	mmdrop(handler->mn.mm);
+@@ -131,12 +134,6 @@ int hfi1_mmu_rb_insert(struct mmu_rb_handler *handler,
+ 	}
+ 	__mmu_int_rb_insert(mnode, &handler->root);
+ 	list_add_tail(&mnode->list, &handler->lru_list);
+-
+-	ret = handler->ops->insert(handler->ops_arg, mnode);
+-	if (ret) {
+-		__mmu_int_rb_remove(mnode, &handler->root);
+-		list_del(&mnode->list); /* remove from LRU list */
+-	}
+ 	mnode->handler = handler;
+ unlock:
+ 	spin_unlock_irqrestore(&handler->lock, flags);
+@@ -180,6 +177,48 @@ static struct mmu_rb_node *__mmu_rb_search(struct mmu_rb_handler *handler,
+ 	return node;
+ }
+ 
++/*
++ * Must NOT call while holding mnode->handler->lock.
++ * mnode->handler->ops->remove() may sleep and mnode->handler->lock is a
++ * spinlock.
++ */
++static void release_immediate(struct kref *refcount)
++{
++	struct mmu_rb_node *mnode =
++		container_of(refcount, struct mmu_rb_node, refcount);
++	mnode->handler->ops->remove(mnode->handler->ops_arg, mnode);
++}
++
++/* Caller must hold mnode->handler->lock */
++static void release_nolock(struct kref *refcount)
++{
++	struct mmu_rb_node *mnode =
++		container_of(refcount, struct mmu_rb_node, refcount);
++	list_move(&mnode->list, &mnode->handler->del_list);
++	queue_work(mnode->handler->wq, &mnode->handler->del_work);
++}
++
++/*
++ * struct mmu_rb_node->refcount kref_put() callback.
++ * Adds mmu_rb_node to mmu_rb_node->handler->del_list and queues
++ * handler->del_work on handler->wq.
++ * Does not remove mmu_rb_node from handler->lru_list or handler->rb_root.
++ * Acquires mmu_rb_node->handler->lock; do not call while already holding
++ * handler->lock.
++ */
++void hfi1_mmu_rb_release(struct kref *refcount)
++{
++	struct mmu_rb_node *mnode =
++		container_of(refcount, struct mmu_rb_node, refcount);
++	struct mmu_rb_handler *handler = mnode->handler;
++	unsigned long flags;
++
++	spin_lock_irqsave(&handler->lock, flags);
++	list_move(&mnode->list, &mnode->handler->del_list);
++	spin_unlock_irqrestore(&handler->lock, flags);
++	queue_work(handler->wq, &handler->del_work);
++}
++
+ void hfi1_mmu_rb_evict(struct mmu_rb_handler *handler, void *evict_arg)
+ {
+ 	struct mmu_rb_node *rbnode, *ptr;
+@@ -194,6 +233,10 @@ void hfi1_mmu_rb_evict(struct mmu_rb_handler *handler, void *evict_arg)
+ 
+ 	spin_lock_irqsave(&handler->lock, flags);
+ 	list_for_each_entry_safe(rbnode, ptr, &handler->lru_list, list) {
++		/* refcount == 1 implies mmu_rb_handler has only rbnode ref */
++		if (kref_read(&rbnode->refcount) > 1)
++			continue;
++
+ 		if (handler->ops->evict(handler->ops_arg, rbnode, evict_arg,
+ 					&stop)) {
+ 			__mmu_int_rb_remove(rbnode, &handler->root);
+@@ -206,7 +249,7 @@ void hfi1_mmu_rb_evict(struct mmu_rb_handler *handler, void *evict_arg)
+ 	spin_unlock_irqrestore(&handler->lock, flags);
+ 
+ 	list_for_each_entry_safe(rbnode, ptr, &del_list, list) {
+-		handler->ops->remove(handler->ops_arg, rbnode);
++		kref_put(&rbnode->refcount, release_immediate);
+ 	}
+ }
+ 
+@@ -218,7 +261,6 @@ static int mmu_notifier_range_start(struct mmu_notifier *mn,
+ 	struct rb_root_cached *root = &handler->root;
+ 	struct mmu_rb_node *node, *ptr = NULL;
+ 	unsigned long flags;
+-	bool added = false;
+ 
+ 	spin_lock_irqsave(&handler->lock, flags);
+ 	for (node = __mmu_int_rb_iter_first(root, range->start, range->end-1);
+@@ -227,38 +269,16 @@ static int mmu_notifier_range_start(struct mmu_notifier *mn,
+ 		ptr = __mmu_int_rb_iter_next(node, range->start,
+ 					     range->end - 1);
+ 		trace_hfi1_mmu_mem_invalidate(node->addr, node->len);
+-		if (handler->ops->invalidate(handler->ops_arg, node)) {
+-			__mmu_int_rb_remove(node, root);
+-			/* move from LRU list to delete list */
+-			list_move(&node->list, &handler->del_list);
+-			added = true;
+-		}
++		/* Remove from rb tree and lru_list. */
++		__mmu_int_rb_remove(node, root);
++		list_del_init(&node->list);
++		kref_put(&node->refcount, release_nolock);
+ 	}
+ 	spin_unlock_irqrestore(&handler->lock, flags);
+ 
+-	if (added)
+-		queue_work(handler->wq, &handler->del_work);
+-
+ 	return 0;
+ }
+ 
+-/*
+- * Call the remove function for the given handler and the list.  This
+- * is expected to be called with a delete list extracted from handler.
+- * The caller should not be holding the handler lock.
+- */
+-static void do_remove(struct mmu_rb_handler *handler,
+-		      struct list_head *del_list)
+-{
+-	struct mmu_rb_node *node;
+-
+-	while (!list_empty(del_list)) {
+-		node = list_first_entry(del_list, struct mmu_rb_node, list);
+-		list_del(&node->list);
+-		handler->ops->remove(handler->ops_arg, node);
+-	}
+-}
+-
+ /*
+  * Work queue function to remove all nodes that have been queued up to
+  * be removed.  The key feature is that mm->mmap_lock is not being held
+@@ -271,11 +291,16 @@ static void handle_remove(struct work_struct *work)
+ 						del_work);
+ 	struct list_head del_list;
+ 	unsigned long flags;
++	struct mmu_rb_node *node;
+ 
+ 	/* remove anything that is queued to get removed */
+ 	spin_lock_irqsave(&handler->lock, flags);
+ 	list_replace_init(&handler->del_list, &del_list);
+ 	spin_unlock_irqrestore(&handler->lock, flags);
+ 
+-	do_remove(handler, &del_list);
++	while (!list_empty(&del_list)) {
++		node = list_first_entry(&del_list, struct mmu_rb_node, list);
++		list_del(&node->list);
++		handler->ops->remove(handler->ops_arg, node);
++	}
+ }
+diff --git a/drivers/infiniband/hw/hfi1/mmu_rb.h b/drivers/infiniband/hw/hfi1/mmu_rb.h
+index ed75acdb7b839..dd2c4a0ae95b1 100644
+--- a/drivers/infiniband/hw/hfi1/mmu_rb.h
++++ b/drivers/infiniband/hw/hfi1/mmu_rb.h
+@@ -16,6 +16,7 @@ struct mmu_rb_node {
+ 	struct rb_node node;
+ 	struct mmu_rb_handler *handler;
+ 	struct list_head list;
++	struct kref refcount;
+ };
+ 
+ /*
+@@ -51,6 +52,8 @@ int hfi1_mmu_rb_register(void *ops_arg,
+ void hfi1_mmu_rb_unregister(struct mmu_rb_handler *handler);
+ int hfi1_mmu_rb_insert(struct mmu_rb_handler *handler,
+ 		       struct mmu_rb_node *mnode);
++void hfi1_mmu_rb_release(struct kref *refcount);
++
+ void hfi1_mmu_rb_evict(struct mmu_rb_handler *handler, void *evict_arg);
+ struct mmu_rb_node *hfi1_mmu_rb_get_first(struct mmu_rb_handler *handler,
+ 					  unsigned long addr,
+diff --git a/drivers/infiniband/hw/hfi1/sdma.c b/drivers/infiniband/hw/hfi1/sdma.c
+index bb2552dd29c1e..26c62162759ba 100644
+--- a/drivers/infiniband/hw/hfi1/sdma.c
++++ b/drivers/infiniband/hw/hfi1/sdma.c
+@@ -1593,7 +1593,20 @@ static inline void sdma_unmap_desc(
+ 	struct hfi1_devdata *dd,
+ 	struct sdma_desc *descp)
+ {
+-	system_descriptor_complete(dd, descp);
++	switch (sdma_mapping_type(descp)) {
++	case SDMA_MAP_SINGLE:
++		dma_unmap_single(&dd->pcidev->dev, sdma_mapping_addr(descp),
++				 sdma_mapping_len(descp), DMA_TO_DEVICE);
++		break;
++	case SDMA_MAP_PAGE:
++		dma_unmap_page(&dd->pcidev->dev, sdma_mapping_addr(descp),
++			       sdma_mapping_len(descp), DMA_TO_DEVICE);
++		break;
++	}
++
++	if (descp->pinning_ctx && descp->ctx_put)
++		descp->ctx_put(descp->pinning_ctx);
++	descp->pinning_ctx = NULL;
+ }
+ 
+ /*
+@@ -3113,8 +3126,8 @@ int ext_coal_sdma_tx_descs(struct hfi1_devdata *dd, struct sdma_txreq *tx,
+ 
+ 		/* Add descriptor for coalesce buffer */
+ 		tx->desc_limit = MAX_DESC;
+-		return _sdma_txadd_daddr(dd, SDMA_MAP_SINGLE, NULL, tx,
+-					 addr, tx->tlen);
++		return _sdma_txadd_daddr(dd, SDMA_MAP_SINGLE, tx,
++					 addr, tx->tlen, NULL, NULL, NULL);
+ 	}
+ 
+ 	return 1;
+@@ -3157,9 +3170,9 @@ int _pad_sdma_tx_descs(struct hfi1_devdata *dd, struct sdma_txreq *tx)
+ 	make_tx_sdma_desc(
+ 		tx,
+ 		SDMA_MAP_NONE,
+-		NULL,
+ 		dd->sdma_pad_phys,
+-		sizeof(u32) - (tx->packet_len & (sizeof(u32) - 1)));
++		sizeof(u32) - (tx->packet_len & (sizeof(u32) - 1)),
++		NULL, NULL, NULL);
+ 	tx->num_desc++;
+ 	_sdma_close_tx(dd, tx);
+ 	return rval;
+diff --git a/drivers/infiniband/hw/hfi1/sdma.h b/drivers/infiniband/hw/hfi1/sdma.h
+index 95aaec14c6c28..7fdebab202c4f 100644
+--- a/drivers/infiniband/hw/hfi1/sdma.h
++++ b/drivers/infiniband/hw/hfi1/sdma.h
+@@ -594,9 +594,11 @@ static inline dma_addr_t sdma_mapping_addr(struct sdma_desc *d)
+ static inline void make_tx_sdma_desc(
+ 	struct sdma_txreq *tx,
+ 	int type,
+-	void *pinning_ctx,
+ 	dma_addr_t addr,
+-	size_t len)
++	size_t len,
++	void *pinning_ctx,
++	void (*ctx_get)(void *),
++	void (*ctx_put)(void *))
+ {
+ 	struct sdma_desc *desc = &tx->descp[tx->num_desc];
+ 
+@@ -613,7 +615,11 @@ static inline void make_tx_sdma_desc(
+ 				<< SDMA_DESC0_PHY_ADDR_SHIFT) |
+ 			(((u64)len & SDMA_DESC0_BYTE_COUNT_MASK)
+ 				<< SDMA_DESC0_BYTE_COUNT_SHIFT);
++
+ 	desc->pinning_ctx = pinning_ctx;
++	desc->ctx_put = ctx_put;
++	if (pinning_ctx && ctx_get)
++		ctx_get(pinning_ctx);
+ }
+ 
+ /* helper to extend txreq */
+@@ -645,18 +651,20 @@ static inline void _sdma_close_tx(struct hfi1_devdata *dd,
+ static inline int _sdma_txadd_daddr(
+ 	struct hfi1_devdata *dd,
+ 	int type,
+-	void *pinning_ctx,
+ 	struct sdma_txreq *tx,
+ 	dma_addr_t addr,
+-	u16 len)
++	u16 len,
++	void *pinning_ctx,
++	void (*ctx_get)(void *),
++	void (*ctx_put)(void *))
+ {
+ 	int rval = 0;
+ 
+ 	make_tx_sdma_desc(
+ 		tx,
+ 		type,
+-		pinning_ctx,
+-		addr, len);
++		addr, len,
++		pinning_ctx, ctx_get, ctx_put);
+ 	WARN_ON(len > tx->tlen);
+ 	tx->num_desc++;
+ 	tx->tlen -= len;
+@@ -676,11 +684,18 @@ static inline int _sdma_txadd_daddr(
+ /**
+  * sdma_txadd_page() - add a page to the sdma_txreq
+  * @dd: the device to use for mapping
+- * @pinning_ctx: context to be released at descriptor retirement
+  * @tx: tx request to which the page is added
+  * @page: page to map
+  * @offset: offset within the page
+  * @len: length in bytes
++ * @pinning_ctx: context to be stored on struct sdma_desc .pinning_ctx. Not
++ *               added if coalesce buffer is used. E.g. pointer to pinned-page
++ *               cache entry for the sdma_desc.
++ * @ctx_get: optional function to take reference to @pinning_ctx. Not called if
++ *           @pinning_ctx is NULL.
++ * @ctx_put: optional function to release reference to @pinning_ctx after
++ *           sdma_desc completes. May be called in interrupt context so must
++ *           not sleep. Not called if @pinning_ctx is NULL.
+  *
+  * This is used to add a page/offset/length descriptor.
+  *
+@@ -692,11 +707,13 @@ static inline int _sdma_txadd_daddr(
+  */
+ static inline int sdma_txadd_page(
+ 	struct hfi1_devdata *dd,
+-	void *pinning_ctx,
+ 	struct sdma_txreq *tx,
+ 	struct page *page,
+ 	unsigned long offset,
+-	u16 len)
++	u16 len,
++	void *pinning_ctx,
++	void (*ctx_get)(void *),
++	void (*ctx_put)(void *))
+ {
+ 	dma_addr_t addr;
+ 	int rval;
+@@ -720,7 +737,8 @@ static inline int sdma_txadd_page(
+ 		return -ENOSPC;
+ 	}
+ 
+-	return _sdma_txadd_daddr(dd, SDMA_MAP_PAGE, pinning_ctx, tx, addr, len);
++	return _sdma_txadd_daddr(dd, SDMA_MAP_PAGE, tx, addr, len,
++				 pinning_ctx, ctx_get, ctx_put);
+ }
+ 
+ /**
+@@ -754,8 +772,8 @@ static inline int sdma_txadd_daddr(
+ 			return rval;
+ 	}
+ 
+-	return _sdma_txadd_daddr(dd, SDMA_MAP_NONE, NULL, tx,
+-				 addr, len);
++	return _sdma_txadd_daddr(dd, SDMA_MAP_NONE, tx, addr, len,
++				 NULL, NULL, NULL);
+ }
+ 
+ /**
+@@ -801,7 +819,8 @@ static inline int sdma_txadd_kvaddr(
+ 		return -ENOSPC;
+ 	}
+ 
+-	return _sdma_txadd_daddr(dd, SDMA_MAP_SINGLE, NULL, tx, addr, len);
++	return _sdma_txadd_daddr(dd, SDMA_MAP_SINGLE, tx, addr, len,
++				 NULL, NULL, NULL);
+ }
+ 
+ struct iowait_work;
+@@ -1034,6 +1053,4 @@ u16 sdma_get_descq_cnt(void);
+ extern uint mod_num_sdma;
+ 
+ void sdma_update_lmc(struct hfi1_devdata *dd, u64 mask, u32 lid);
+-
+-void system_descriptor_complete(struct hfi1_devdata *dd, struct sdma_desc *descp);
+ #endif
+diff --git a/drivers/infiniband/hw/hfi1/sdma_txreq.h b/drivers/infiniband/hw/hfi1/sdma_txreq.h
+index fad946cb5e0d8..85ae7293c2741 100644
+--- a/drivers/infiniband/hw/hfi1/sdma_txreq.h
++++ b/drivers/infiniband/hw/hfi1/sdma_txreq.h
+@@ -20,6 +20,8 @@ struct sdma_desc {
+ 	/* private:  don't use directly */
+ 	u64 qw[2];
+ 	void *pinning_ctx;
++	/* Release reference to @pinning_ctx. May be called in interrupt context. Must not sleep. */
++	void (*ctx_put)(void *ctx);
+ };
+ 
+ /**
+diff --git a/drivers/infiniband/hw/hfi1/user_sdma.c b/drivers/infiniband/hw/hfi1/user_sdma.c
+index ae58b48afe074..02bd62b857b75 100644
+--- a/drivers/infiniband/hw/hfi1/user_sdma.c
++++ b/drivers/infiniband/hw/hfi1/user_sdma.c
+@@ -62,18 +62,14 @@ static int defer_packet_queue(
+ static void activate_packet_queue(struct iowait *wait, int reason);
+ static bool sdma_rb_filter(struct mmu_rb_node *node, unsigned long addr,
+ 			   unsigned long len);
+-static int sdma_rb_insert(void *arg, struct mmu_rb_node *mnode);
+ static int sdma_rb_evict(void *arg, struct mmu_rb_node *mnode,
+ 			 void *arg2, bool *stop);
+ static void sdma_rb_remove(void *arg, struct mmu_rb_node *mnode);
+-static int sdma_rb_invalidate(void *arg, struct mmu_rb_node *mnode);
+ 
+ static struct mmu_rb_ops sdma_rb_ops = {
+ 	.filter = sdma_rb_filter,
+-	.insert = sdma_rb_insert,
+ 	.evict = sdma_rb_evict,
+ 	.remove = sdma_rb_remove,
+-	.invalidate = sdma_rb_invalidate
+ };
+ 
+ static int add_system_pages_to_sdma_packet(struct user_sdma_request *req,
+@@ -247,14 +243,14 @@ int hfi1_user_sdma_free_queues(struct hfi1_filedata *fd,
+ 		spin_unlock(&fd->pq_rcu_lock);
+ 		synchronize_srcu(&fd->pq_srcu);
+ 		/* at this point there can be no more new requests */
+-		if (pq->handler)
+-			hfi1_mmu_rb_unregister(pq->handler);
+ 		iowait_sdma_drain(&pq->busy);
+ 		/* Wait until all requests have been freed. */
+ 		wait_event_interruptible(
+ 			pq->wait,
+ 			!atomic_read(&pq->n_reqs));
+ 		kfree(pq->reqs);
++		if (pq->handler)
++			hfi1_mmu_rb_unregister(pq->handler);
+ 		bitmap_free(pq->req_in_use);
+ 		kmem_cache_destroy(pq->txreq_cache);
+ 		flush_pq_iowait(pq);
+@@ -1275,25 +1271,17 @@ static void free_system_node(struct sdma_mmu_node *node)
+ 	kfree(node);
+ }
+ 
+-static inline void acquire_node(struct sdma_mmu_node *node)
+-{
+-	atomic_inc(&node->refcount);
+-	WARN_ON(atomic_read(&node->refcount) < 0);
+-}
+-
+-static inline void release_node(struct mmu_rb_handler *handler,
+-				struct sdma_mmu_node *node)
+-{
+-	atomic_dec(&node->refcount);
+-	WARN_ON(atomic_read(&node->refcount) < 0);
+-}
+-
++/*
++ * kref_get()'s an additional kref on the returned rb_node to prevent rb_node
++ * from being released until after rb_node is assigned to an SDMA descriptor
++ * (struct sdma_desc) under add_system_iovec_to_sdma_packet(), even if the
++ * virtual address range for rb_node is invalidated between now and then.
++ */
+ static struct sdma_mmu_node *find_system_node(struct mmu_rb_handler *handler,
+ 					      unsigned long start,
+ 					      unsigned long end)
+ {
+ 	struct mmu_rb_node *rb_node;
+-	struct sdma_mmu_node *node;
+ 	unsigned long flags;
+ 
+ 	spin_lock_irqsave(&handler->lock, flags);
+@@ -1302,11 +1290,12 @@ static struct sdma_mmu_node *find_system_node(struct mmu_rb_handler *handler,
+ 		spin_unlock_irqrestore(&handler->lock, flags);
+ 		return NULL;
+ 	}
+-	node = container_of(rb_node, struct sdma_mmu_node, rb);
+-	acquire_node(node);
++
++	/* "safety" kref to prevent release before add_system_iovec_to_sdma_packet() */
++	kref_get(&rb_node->refcount);
+ 	spin_unlock_irqrestore(&handler->lock, flags);
+ 
+-	return node;
++	return container_of(rb_node, struct sdma_mmu_node, rb);
+ }
+ 
+ static int pin_system_pages(struct user_sdma_request *req,
+@@ -1355,6 +1344,13 @@ retry:
+ 	return 0;
+ }
+ 
++/*
++ * kref refcount on *node_p will be 2 on successful addition: one kref from
++ * kref_init() for mmu_rb_handler and one kref to prevent *node_p from being
++ * released until after *node_p is assigned to an SDMA descriptor (struct
++ * sdma_desc) under add_system_iovec_to_sdma_packet(), even if the virtual
++ * address range for *node_p is invalidated between now and then.
++ */
+ static int add_system_pinning(struct user_sdma_request *req,
+ 			      struct sdma_mmu_node **node_p,
+ 			      unsigned long start, unsigned long len)
+@@ -1368,6 +1364,12 @@ static int add_system_pinning(struct user_sdma_request *req,
+ 	if (!node)
+ 		return -ENOMEM;
+ 
++	/* First kref "moves" to mmu_rb_handler */
++	kref_init(&node->rb.refcount);
++
++	/* "safety" kref to prevent release before add_system_iovec_to_sdma_packet() */
++	kref_get(&node->rb.refcount);
++
+ 	node->pq = pq;
+ 	ret = pin_system_pages(req, start, len, node, PFN_DOWN(len));
+ 	if (ret == 0) {
+@@ -1431,15 +1433,15 @@ static int get_system_cache_entry(struct user_sdma_request *req,
+ 			return 0;
+ 		}
+ 
+-		SDMA_DBG(req, "prepend: node->rb.addr %lx, node->refcount %d",
+-			 node->rb.addr, atomic_read(&node->refcount));
++		SDMA_DBG(req, "prepend: node->rb.addr %lx, node->rb.refcount %d",
++			 node->rb.addr, kref_read(&node->rb.refcount));
+ 		prepend_len = node->rb.addr - start;
+ 
+ 		/*
+ 		 * This node will not be returned, instead a new node
+ 		 * will be. So release the reference.
+ 		 */
+-		release_node(handler, node);
++		kref_put(&node->rb.refcount, hfi1_mmu_rb_release);
+ 
+ 		/* Prepend a node to cover the beginning of the allocation */
+ 		ret = add_system_pinning(req, node_p, start, prepend_len);
+@@ -1451,6 +1453,20 @@ static int get_system_cache_entry(struct user_sdma_request *req,
+ 	}
+ }
+ 
++static void sdma_mmu_rb_node_get(void *ctx)
++{
++	struct mmu_rb_node *node = ctx;
++
++	kref_get(&node->refcount);
++}
++
++static void sdma_mmu_rb_node_put(void *ctx)
++{
++	struct sdma_mmu_node *node = ctx;
++
++	kref_put(&node->rb.refcount, hfi1_mmu_rb_release);
++}
++
+ static int add_mapping_to_sdma_packet(struct user_sdma_request *req,
+ 				      struct user_sdma_txreq *tx,
+ 				      struct sdma_mmu_node *cache_entry,
+@@ -1494,9 +1510,12 @@ static int add_mapping_to_sdma_packet(struct user_sdma_request *req,
+ 			ctx = cache_entry;
+ 		}
+ 
+-		ret = sdma_txadd_page(pq->dd, ctx, &tx->txreq,
++		ret = sdma_txadd_page(pq->dd, &tx->txreq,
+ 				      cache_entry->pages[page_index],
+-				      page_offset, from_this_page);
++				      page_offset, from_this_page,
++				      ctx,
++				      sdma_mmu_rb_node_get,
++				      sdma_mmu_rb_node_put);
+ 		if (ret) {
+ 			/*
+ 			 * When there's a failure, the entire request is freed by
+@@ -1518,8 +1537,6 @@ static int add_system_iovec_to_sdma_packet(struct user_sdma_request *req,
+ 					   struct user_sdma_iovec *iovec,
+ 					   size_t from_this_iovec)
+ {
+-	struct mmu_rb_handler *handler = req->pq->handler;
+-
+ 	while (from_this_iovec > 0) {
+ 		struct sdma_mmu_node *cache_entry;
+ 		size_t from_this_cache_entry;
+@@ -1540,15 +1557,15 @@ static int add_system_iovec_to_sdma_packet(struct user_sdma_request *req,
+ 
+ 		ret = add_mapping_to_sdma_packet(req, tx, cache_entry, start,
+ 						 from_this_cache_entry);
++
++		/*
++		 * Done adding cache_entry to zero or more sdma_desc. Can
++		 * kref_put() the "safety" kref taken under
++		 * get_system_cache_entry().
++		 */
++		kref_put(&cache_entry->rb.refcount, hfi1_mmu_rb_release);
++
+ 		if (ret) {
+-			/*
+-			 * We're guaranteed that there will be no descriptor
+-			 * completion callback that releases this node
+-			 * because only the last descriptor referencing it
+-			 * has a context attached, and a failure means the
+-			 * last descriptor was never added.
+-			 */
+-			release_node(handler, cache_entry);
+ 			SDMA_DBG(req, "add system segment failed %d", ret);
+ 			return ret;
+ 		}
+@@ -1599,42 +1616,12 @@ static int add_system_pages_to_sdma_packet(struct user_sdma_request *req,
+ 	return 0;
+ }
+ 
+-void system_descriptor_complete(struct hfi1_devdata *dd,
+-				struct sdma_desc *descp)
+-{
+-	switch (sdma_mapping_type(descp)) {
+-	case SDMA_MAP_SINGLE:
+-		dma_unmap_single(&dd->pcidev->dev, sdma_mapping_addr(descp),
+-				 sdma_mapping_len(descp), DMA_TO_DEVICE);
+-		break;
+-	case SDMA_MAP_PAGE:
+-		dma_unmap_page(&dd->pcidev->dev, sdma_mapping_addr(descp),
+-			       sdma_mapping_len(descp), DMA_TO_DEVICE);
+-		break;
+-	}
+-
+-	if (descp->pinning_ctx) {
+-		struct sdma_mmu_node *node = descp->pinning_ctx;
+-
+-		release_node(node->rb.handler, node);
+-	}
+-}
+-
+ static bool sdma_rb_filter(struct mmu_rb_node *node, unsigned long addr,
+ 			   unsigned long len)
+ {
+ 	return (bool)(node->addr == addr);
+ }
+ 
+-static int sdma_rb_insert(void *arg, struct mmu_rb_node *mnode)
+-{
+-	struct sdma_mmu_node *node =
+-		container_of(mnode, struct sdma_mmu_node, rb);
+-
+-	atomic_inc(&node->refcount);
+-	return 0;
+-}
+-
+ /*
+  * Return 1 to remove the node from the rb tree and call the remove op.
+  *
+@@ -1647,10 +1634,6 @@ static int sdma_rb_evict(void *arg, struct mmu_rb_node *mnode,
+ 		container_of(mnode, struct sdma_mmu_node, rb);
+ 	struct evict_data *evict_data = evict_arg;
+ 
+-	/* is this node still being used? */
+-	if (atomic_read(&node->refcount))
+-		return 0; /* keep this node */
+-
+ 	/* this node will be evicted, add its pages to our count */
+ 	evict_data->cleared += node->npages;
+ 
+@@ -1668,13 +1651,3 @@ static void sdma_rb_remove(void *arg, struct mmu_rb_node *mnode)
+ 
+ 	free_system_node(node);
+ }
+-
+-static int sdma_rb_invalidate(void *arg, struct mmu_rb_node *mnode)
+-{
+-	struct sdma_mmu_node *node =
+-		container_of(mnode, struct sdma_mmu_node, rb);
+-
+-	if (!atomic_read(&node->refcount))
+-		return 1;
+-	return 0;
+-}
+diff --git a/drivers/infiniband/hw/hfi1/user_sdma.h b/drivers/infiniband/hw/hfi1/user_sdma.h
+index a241836371dc1..548347d4c5bc2 100644
+--- a/drivers/infiniband/hw/hfi1/user_sdma.h
++++ b/drivers/infiniband/hw/hfi1/user_sdma.h
+@@ -104,7 +104,6 @@ struct hfi1_user_sdma_comp_q {
+ struct sdma_mmu_node {
+ 	struct mmu_rb_node rb;
+ 	struct hfi1_user_sdma_pkt_q *pq;
+-	atomic_t refcount;
+ 	struct page **pages;
+ 	unsigned int npages;
+ };
+diff --git a/drivers/infiniband/hw/hfi1/vnic_sdma.c b/drivers/infiniband/hw/hfi1/vnic_sdma.c
+index 727eedfba332a..cc6324d2d1ddc 100644
+--- a/drivers/infiniband/hw/hfi1/vnic_sdma.c
++++ b/drivers/infiniband/hw/hfi1/vnic_sdma.c
+@@ -64,11 +64,11 @@ static noinline int build_vnic_ulp_payload(struct sdma_engine *sde,
+ 
+ 		/* combine physically continuous fragments later? */
+ 		ret = sdma_txadd_page(sde->dd,
+-				      NULL,
+ 				      &tx->txreq,
+ 				      skb_frag_page(frag),
+ 				      skb_frag_off(frag),
+-				      skb_frag_size(frag));
++				      skb_frag_size(frag),
++				      NULL, NULL, NULL);
+ 		if (unlikely(ret))
+ 			goto bail_txadd;
+ 	}
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hem.c b/drivers/infiniband/hw/hns/hns_roce_hem.c
+index aa8a08d1c0145..f30274986c0da 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hem.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hem.c
+@@ -595,11 +595,12 @@ int hns_roce_table_get(struct hns_roce_dev *hr_dev,
+ 	}
+ 
+ 	/* Set HEM base address(128K/page, pa) to Hardware */
+-	if (hr_dev->hw->set_hem(hr_dev, table, obj, HEM_HOP_STEP_DIRECT)) {
++	ret = hr_dev->hw->set_hem(hr_dev, table, obj, HEM_HOP_STEP_DIRECT);
++	if (ret) {
+ 		hns_roce_free_hem(hr_dev, table->hem[i]);
+ 		table->hem[i] = NULL;
+-		ret = -ENODEV;
+-		dev_err(dev, "set HEM base address to HW failed.\n");
++		dev_err(dev, "set HEM base address to HW failed, ret = %d.\n",
++			ret);
+ 		goto out;
+ 	}
+ 
+diff --git a/drivers/infiniband/hw/irdma/uk.c b/drivers/infiniband/hw/irdma/uk.c
+index 16183e894da77..dd428d915c175 100644
+--- a/drivers/infiniband/hw/irdma/uk.c
++++ b/drivers/infiniband/hw/irdma/uk.c
+@@ -93,16 +93,18 @@ static int irdma_nop_1(struct irdma_qp_uk *qp)
+  */
+ void irdma_clr_wqes(struct irdma_qp_uk *qp, u32 qp_wqe_idx)
+ {
+-	__le64 *wqe;
++	struct irdma_qp_quanta *sq;
+ 	u32 wqe_idx;
+ 
+ 	if (!(qp_wqe_idx & 0x7F)) {
+ 		wqe_idx = (qp_wqe_idx + 128) % qp->sq_ring.size;
+-		wqe = qp->sq_base[wqe_idx].elem;
++		sq = qp->sq_base + wqe_idx;
+ 		if (wqe_idx)
+-			memset(wqe, qp->swqe_polarity ? 0 : 0xFF, 0x1000);
++			memset(sq, qp->swqe_polarity ? 0 : 0xFF,
++			       128 * sizeof(*sq));
+ 		else
+-			memset(wqe, qp->swqe_polarity ? 0xFF : 0, 0x1000);
++			memset(sq, qp->swqe_polarity ? 0xFF : 0,
++			       128 * sizeof(*sq));
+ 	}
+ }
+ 
+diff --git a/drivers/infiniband/sw/rxe/rxe_mw.c b/drivers/infiniband/sw/rxe/rxe_mw.c
+index afa5ce1a71166..a7ec57ab8fadd 100644
+--- a/drivers/infiniband/sw/rxe/rxe_mw.c
++++ b/drivers/infiniband/sw/rxe/rxe_mw.c
+@@ -48,7 +48,7 @@ int rxe_dealloc_mw(struct ib_mw *ibmw)
+ }
+ 
+ static int rxe_check_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
+-			 struct rxe_mw *mw, struct rxe_mr *mr)
++			 struct rxe_mw *mw, struct rxe_mr *mr, int access)
+ {
+ 	if (mw->ibmw.type == IB_MW_TYPE_1) {
+ 		if (unlikely(mw->state != RXE_MW_STATE_VALID)) {
+@@ -58,7 +58,7 @@ static int rxe_check_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
+ 		}
+ 
+ 		/* o10-36.2.2 */
+-		if (unlikely((mw->access & IB_ZERO_BASED))) {
++		if (unlikely((access & IB_ZERO_BASED))) {
+ 			rxe_dbg_mw(mw, "attempt to bind a zero based type 1 MW\n");
+ 			return -EINVAL;
+ 		}
+@@ -104,7 +104,7 @@ static int rxe_check_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
+ 	}
+ 
+ 	/* C10-74 */
+-	if (unlikely((mw->access &
++	if (unlikely((access &
+ 		      (IB_ACCESS_REMOTE_WRITE | IB_ACCESS_REMOTE_ATOMIC)) &&
+ 		     !(mr->access & IB_ACCESS_LOCAL_WRITE))) {
+ 		rxe_dbg_mw(mw,
+@@ -113,7 +113,7 @@ static int rxe_check_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
+ 	}
+ 
+ 	/* C10-75 */
+-	if (mw->access & IB_ZERO_BASED) {
++	if (access & IB_ZERO_BASED) {
+ 		if (unlikely(wqe->wr.wr.mw.length > mr->ibmr.length)) {
+ 			rxe_dbg_mw(mw,
+ 				"attempt to bind a ZB MW outside of the MR\n");
+@@ -133,12 +133,12 @@ static int rxe_check_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
+ }
+ 
+ static void rxe_do_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
+-		      struct rxe_mw *mw, struct rxe_mr *mr)
++		      struct rxe_mw *mw, struct rxe_mr *mr, int access)
+ {
+ 	u32 key = wqe->wr.wr.mw.rkey & 0xff;
+ 
+ 	mw->rkey = (mw->rkey & ~0xff) | key;
+-	mw->access = wqe->wr.wr.mw.access;
++	mw->access = access;
+ 	mw->state = RXE_MW_STATE_VALID;
+ 	mw->addr = wqe->wr.wr.mw.addr;
+ 	mw->length = wqe->wr.wr.mw.length;
+@@ -169,6 +169,7 @@ int rxe_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe)
+ 	struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
+ 	u32 mw_rkey = wqe->wr.wr.mw.mw_rkey;
+ 	u32 mr_lkey = wqe->wr.wr.mw.mr_lkey;
++	int access = wqe->wr.wr.mw.access;
+ 
+ 	mw = rxe_pool_get_index(&rxe->mw_pool, mw_rkey >> 8);
+ 	if (unlikely(!mw)) {
+@@ -198,11 +199,11 @@ int rxe_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe)
+ 
+ 	spin_lock_bh(&mw->lock);
+ 
+-	ret = rxe_check_bind_mw(qp, wqe, mw, mr);
++	ret = rxe_check_bind_mw(qp, wqe, mw, mr, access);
+ 	if (ret)
+ 		goto err_unlock;
+ 
+-	rxe_do_bind_mw(qp, wqe, mw, mr);
++	rxe_do_bind_mw(qp, wqe, mw, mr, access);
+ err_unlock:
+ 	spin_unlock_bh(&mw->lock);
+ err_drop_mr:
+diff --git a/drivers/input/misc/adxl34x.c b/drivers/input/misc/adxl34x.c
+index eecca671b5884..a3f45e0ee0c75 100644
+--- a/drivers/input/misc/adxl34x.c
++++ b/drivers/input/misc/adxl34x.c
+@@ -817,8 +817,7 @@ struct adxl34x *adxl34x_probe(struct device *dev, int irq,
+ 	AC_WRITE(ac, POWER_CTL, 0);
+ 
+ 	err = request_threaded_irq(ac->irq, NULL, adxl34x_irq,
+-				   IRQF_TRIGGER_HIGH | IRQF_ONESHOT,
+-				   dev_name(dev), ac);
++				   IRQF_ONESHOT, dev_name(dev), ac);
+ 	if (err) {
+ 		dev_err(dev, "irq %d busy?\n", ac->irq);
+ 		goto err_free_mem;
+diff --git a/drivers/input/misc/drv260x.c b/drivers/input/misc/drv260x.c
+index 8a9ebfc04a2d9..85371fa1a03ed 100644
+--- a/drivers/input/misc/drv260x.c
++++ b/drivers/input/misc/drv260x.c
+@@ -435,6 +435,7 @@ static int drv260x_init(struct drv260x_data *haptics)
+ 	}
+ 
+ 	do {
++		usleep_range(15000, 15500);
+ 		error = regmap_read(haptics->regmap, DRV260X_GO, &cal_buf);
+ 		if (error) {
+ 			dev_err(&haptics->client->dev,
+diff --git a/drivers/input/misc/pm8941-pwrkey.c b/drivers/input/misc/pm8941-pwrkey.c
+index b6a27ebae977b..74d77d8aaeff2 100644
+--- a/drivers/input/misc/pm8941-pwrkey.c
++++ b/drivers/input/misc/pm8941-pwrkey.c
+@@ -50,7 +50,10 @@
+ #define  PON_RESIN_PULL_UP		BIT(0)
+ 
+ #define PON_DBC_CTL			0x71
+-#define  PON_DBC_DELAY_MASK		0x7
++#define  PON_DBC_DELAY_MASK_GEN1	0x7
++#define  PON_DBC_DELAY_MASK_GEN2	0xf
++#define  PON_DBC_SHIFT_GEN1		6
++#define  PON_DBC_SHIFT_GEN2		14
+ 
+ struct pm8941_data {
+ 	unsigned int	pull_up_bit;
+@@ -247,7 +250,7 @@ static int pm8941_pwrkey_probe(struct platform_device *pdev)
+ 	struct device *parent;
+ 	struct device_node *regmap_node;
+ 	const __be32 *addr;
+-	u32 req_delay;
++	u32 req_delay, mask, delay_shift;
+ 	int error;
+ 
+ 	if (of_property_read_u32(pdev->dev.of_node, "debounce", &req_delay))
+@@ -336,12 +339,20 @@ static int pm8941_pwrkey_probe(struct platform_device *pdev)
+ 	pwrkey->input->phys = pwrkey->data->phys;
+ 
+ 	if (pwrkey->data->supports_debounce_config) {
+-		req_delay = (req_delay << 6) / USEC_PER_SEC;
++		if (pwrkey->subtype >= PON_SUBTYPE_GEN2_PRIMARY) {
++			mask = PON_DBC_DELAY_MASK_GEN2;
++			delay_shift = PON_DBC_SHIFT_GEN2;
++		} else {
++			mask = PON_DBC_DELAY_MASK_GEN1;
++			delay_shift = PON_DBC_SHIFT_GEN1;
++		}
++
++		req_delay = (req_delay << delay_shift) / USEC_PER_SEC;
+ 		req_delay = ilog2(req_delay);
+ 
+ 		error = regmap_update_bits(pwrkey->regmap,
+ 					   pwrkey->baseaddr + PON_DBC_CTL,
+-					   PON_DBC_DELAY_MASK,
++					   mask,
+ 					   req_delay);
+ 		if (error) {
+ 			dev_err(&pdev->dev, "failed to set debounce: %d\n",
+diff --git a/drivers/input/touchscreen/cyttsp4_core.c b/drivers/input/touchscreen/cyttsp4_core.c
+index 0cd6f626adec5..7cb26929dc732 100644
+--- a/drivers/input/touchscreen/cyttsp4_core.c
++++ b/drivers/input/touchscreen/cyttsp4_core.c
+@@ -1263,9 +1263,8 @@ static void cyttsp4_stop_wd_timer(struct cyttsp4 *cd)
+ 	 * Ensure we wait until the watchdog timer
+ 	 * running on a different CPU finishes
+ 	 */
+-	del_timer_sync(&cd->watchdog_timer);
++	timer_shutdown_sync(&cd->watchdog_timer);
+ 	cancel_work_sync(&cd->watchdog_work);
+-	del_timer_sync(&cd->watchdog_timer);
+ }
+ 
+ static void cyttsp4_watchdog_timer(struct timer_list *t)
+diff --git a/drivers/iommu/iommufd/device.c b/drivers/iommu/iommufd/device.c
+index a0c66f47a65ad..532e12ea23efe 100644
+--- a/drivers/iommu/iommufd/device.c
++++ b/drivers/iommu/iommufd/device.c
+@@ -560,8 +560,8 @@ void iommufd_access_unpin_pages(struct iommufd_access *access,
+ 			iopt_area_iova_to_index(
+ 				area,
+ 				min(last_iova, iopt_area_last_iova(area))));
+-	up_read(&iopt->iova_rwsem);
+ 	WARN_ON(!iopt_area_contig_done(&iter));
++	up_read(&iopt->iova_rwsem);
+ }
+ EXPORT_SYMBOL_NS_GPL(iommufd_access_unpin_pages, IOMMUFD);
+ 
+diff --git a/drivers/iommu/iommufd/io_pagetable.c b/drivers/iommu/iommufd/io_pagetable.c
+index e0ae72b9e67f8..724c4c5742417 100644
+--- a/drivers/iommu/iommufd/io_pagetable.c
++++ b/drivers/iommu/iommufd/io_pagetable.c
+@@ -458,6 +458,7 @@ static int iopt_unmap_iova_range(struct io_pagetable *iopt, unsigned long start,
+ {
+ 	struct iopt_area *area;
+ 	unsigned long unmapped_bytes = 0;
++	unsigned int tries = 0;
+ 	int rc = -ENOENT;
+ 
+ 	/*
+@@ -484,19 +485,26 @@ again:
+ 			goto out_unlock_iova;
+ 		}
+ 
++		if (area_first != start)
++			tries = 0;
++
+ 		/*
+ 		 * num_accesses writers must hold the iova_rwsem too, so we can
+ 		 * safely read it under the write side of the iovam_rwsem
+ 		 * without the pages->mutex.
+ 		 */
+ 		if (area->num_accesses) {
++			size_t length = iopt_area_length(area);
++
+ 			start = area_first;
+ 			area->prevent_access = true;
+ 			up_write(&iopt->iova_rwsem);
+ 			up_read(&iopt->domains_rwsem);
+-			iommufd_access_notify_unmap(iopt, area_first,
+-						    iopt_area_length(area));
+-			if (WARN_ON(READ_ONCE(area->num_accesses)))
++
++			iommufd_access_notify_unmap(iopt, area_first, length);
++			/* Something is not responding to unmap requests. */
++			tries++;
++			if (WARN_ON(tries > 100))
+ 				return -EDEADLOCK;
+ 			goto again;
+ 		}
+diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
+index 5b8fe9bfa9a5b..3551ed057774e 100644
+--- a/drivers/iommu/virtio-iommu.c
++++ b/drivers/iommu/virtio-iommu.c
+@@ -788,6 +788,29 @@ static int viommu_attach_dev(struct iommu_domain *domain, struct device *dev)
+ 	return 0;
+ }
+ 
++static void viommu_detach_dev(struct viommu_endpoint *vdev)
++{
++	int i;
++	struct virtio_iommu_req_detach req;
++	struct viommu_domain *vdomain = vdev->vdomain;
++	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(vdev->dev);
++
++	if (!vdomain)
++		return;
++
++	req = (struct virtio_iommu_req_detach) {
++		.head.type	= VIRTIO_IOMMU_T_DETACH,
++		.domain		= cpu_to_le32(vdomain->id),
++	};
++
++	for (i = 0; i < fwspec->num_ids; i++) {
++		req.endpoint = cpu_to_le32(fwspec->ids[i]);
++		WARN_ON(viommu_send_req_sync(vdev->viommu, &req, sizeof(req)));
++	}
++	vdomain->nr_endpoints--;
++	vdev->vdomain = NULL;
++}
++
+ static int viommu_map_pages(struct iommu_domain *domain, unsigned long iova,
+ 			    phys_addr_t paddr, size_t pgsize, size_t pgcount,
+ 			    int prot, gfp_t gfp, size_t *mapped)
+@@ -810,25 +833,26 @@ static int viommu_map_pages(struct iommu_domain *domain, unsigned long iova,
+ 	if (ret)
+ 		return ret;
+ 
+-	map = (struct virtio_iommu_req_map) {
+-		.head.type	= VIRTIO_IOMMU_T_MAP,
+-		.domain		= cpu_to_le32(vdomain->id),
+-		.virt_start	= cpu_to_le64(iova),
+-		.phys_start	= cpu_to_le64(paddr),
+-		.virt_end	= cpu_to_le64(end),
+-		.flags		= cpu_to_le32(flags),
+-	};
+-
+-	if (!vdomain->nr_endpoints)
+-		return 0;
++	if (vdomain->nr_endpoints) {
++		map = (struct virtio_iommu_req_map) {
++			.head.type	= VIRTIO_IOMMU_T_MAP,
++			.domain		= cpu_to_le32(vdomain->id),
++			.virt_start	= cpu_to_le64(iova),
++			.phys_start	= cpu_to_le64(paddr),
++			.virt_end	= cpu_to_le64(end),
++			.flags		= cpu_to_le32(flags),
++		};
+ 
+-	ret = viommu_send_req_sync(vdomain->viommu, &map, sizeof(map));
+-	if (ret)
+-		viommu_del_mappings(vdomain, iova, end);
+-	else if (mapped)
++		ret = viommu_send_req_sync(vdomain->viommu, &map, sizeof(map));
++		if (ret) {
++			viommu_del_mappings(vdomain, iova, end);
++			return ret;
++		}
++	}
++	if (mapped)
+ 		*mapped = size;
+ 
+-	return ret;
++	return 0;
+ }
+ 
+ static size_t viommu_unmap_pages(struct iommu_domain *domain, unsigned long iova,
+@@ -990,6 +1014,7 @@ static void viommu_release_device(struct device *dev)
+ {
+ 	struct viommu_endpoint *vdev = dev_iommu_priv_get(dev);
+ 
++	viommu_detach_dev(vdev);
+ 	iommu_put_resv_regions(dev, &vdev->resv_regions);
+ 	kfree(vdev);
+ }
+diff --git a/drivers/irqchip/irq-jcore-aic.c b/drivers/irqchip/irq-jcore-aic.c
+index 5f47d8ee4ae39..b9dcc8e78c750 100644
+--- a/drivers/irqchip/irq-jcore-aic.c
++++ b/drivers/irqchip/irq-jcore-aic.c
+@@ -68,6 +68,7 @@ static int __init aic_irq_of_init(struct device_node *node,
+ 	unsigned min_irq = JCORE_AIC2_MIN_HWIRQ;
+ 	unsigned dom_sz = JCORE_AIC_MAX_HWIRQ+1;
+ 	struct irq_domain *domain;
++	int ret;
+ 
+ 	pr_info("Initializing J-Core AIC\n");
+ 
+@@ -100,6 +101,12 @@ static int __init aic_irq_of_init(struct device_node *node,
+ 	jcore_aic.irq_unmask = noop;
+ 	jcore_aic.name = "AIC";
+ 
++	ret = irq_alloc_descs(-1, min_irq, dom_sz - min_irq,
++			      of_node_to_nid(node));
++
++	if (ret < 0)
++		return ret;
++
+ 	domain = irq_domain_add_legacy(node, dom_sz - min_irq, min_irq, min_irq,
+ 				       &jcore_aic_irqdomain_ops,
+ 				       &jcore_aic);
+diff --git a/drivers/irqchip/irq-loongson-eiointc.c b/drivers/irqchip/irq-loongson-eiointc.c
+index 90181c42840b4..873a326ed6cbc 100644
+--- a/drivers/irqchip/irq-loongson-eiointc.c
++++ b/drivers/irqchip/irq-loongson-eiointc.c
+@@ -317,7 +317,7 @@ static void eiointc_resume(void)
+ 			desc = irq_resolve_mapping(eiointc_priv[i]->eiointc_domain, j);
+ 			if (desc && desc->handle_irq && desc->handle_irq != handle_bad_irq) {
+ 				raw_spin_lock(&desc->lock);
+-				irq_data = &desc->irq_data;
++				irq_data = irq_domain_get_irq_data(eiointc_priv[i]->eiointc_domain, irq_desc_get_irq(desc));
+ 				eiointc_set_irq_affinity(irq_data, irq_data->common->affinity, 0);
+ 				raw_spin_unlock(&desc->lock);
+ 			}
+diff --git a/drivers/irqchip/irq-stm32-exti.c b/drivers/irqchip/irq-stm32-exti.c
+index 6a3f7498ea8ea..8bbb2b114636c 100644
+--- a/drivers/irqchip/irq-stm32-exti.c
++++ b/drivers/irqchip/irq-stm32-exti.c
+@@ -173,6 +173,16 @@ static struct irq_chip stm32_exti_h_chip_direct;
+ #define EXTI_INVALID_IRQ       U8_MAX
+ #define STM32MP1_DESC_IRQ_SIZE (ARRAY_SIZE(stm32mp1_exti_banks) * IRQS_PER_BANK)
+ 
++/*
++ * Use some intentionally tricky logic here to initialize the whole array to
++ * EXTI_INVALID_IRQ, but then override certain fields, requiring us to indicate
++ * that we "know" that there are overrides in this structure, and we'll need to
++ * disable that warning from W=1 builds.
++ */
++__diag_push();
++__diag_ignore_all("-Woverride-init",
++		  "logic to initialize all and then override some is OK");
++
+ static const u8 stm32mp1_desc_irq[] = {
+ 	/* default value */
+ 	[0 ... (STM32MP1_DESC_IRQ_SIZE - 1)] = EXTI_INVALID_IRQ,
+@@ -266,6 +276,8 @@ static const u8 stm32mp13_desc_irq[] = {
+ 	[70] = 98,
+ };
+ 
++__diag_pop();
++
+ static const struct stm32_exti_drv_data stm32mp1_drv_data = {
+ 	.exti_banks = stm32mp1_exti_banks,
+ 	.bank_nr = ARRAY_SIZE(stm32mp1_exti_banks),
+diff --git a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c
+index e7cc6ba1b657f..8bbeeec70905c 100644
+--- a/drivers/md/md-bitmap.c
++++ b/drivers/md/md-bitmap.c
+@@ -54,14 +54,7 @@ __acquires(bitmap->lock)
+ {
+ 	unsigned char *mappage;
+ 
+-	if (page >= bitmap->pages) {
+-		/* This can happen if bitmap_start_sync goes beyond
+-		 * End-of-device while looking for a whole page.
+-		 * It is harmless.
+-		 */
+-		return -EINVAL;
+-	}
+-
++	WARN_ON_ONCE(page >= bitmap->pages);
+ 	if (bitmap->bp[page].hijacked) /* it's hijacked, don't try to alloc */
+ 		return 0;
+ 
+@@ -1000,7 +993,6 @@ static int md_bitmap_file_test_bit(struct bitmap *bitmap, sector_t block)
+ 	return set;
+ }
+ 
+-
+ /* this gets called when the md device is ready to unplug its underlying
+  * (slave) device queues -- before we let any writes go down, we need to
+  * sync the dirty pages of the bitmap file to disk */
+@@ -1010,8 +1002,7 @@ void md_bitmap_unplug(struct bitmap *bitmap)
+ 	int dirty, need_write;
+ 	int writing = 0;
+ 
+-	if (!bitmap || !bitmap->storage.filemap ||
+-	    test_bit(BITMAP_STALE, &bitmap->flags))
++	if (!md_bitmap_enabled(bitmap))
+ 		return;
+ 
+ 	/* look at each page to see if there are any set bits that need to be
+@@ -1364,6 +1355,14 @@ __acquires(bitmap->lock)
+ 	sector_t csize;
+ 	int err;
+ 
++	if (page >= bitmap->pages) {
++		/*
++		 * This can happen if bitmap_start_sync goes beyond
++		 * End-of-device while looking for a whole page or
++		 * user set a huge number to sysfs bitmap_set_bits.
++		 */
++		return NULL;
++	}
+ 	err = md_bitmap_checkpage(bitmap, page, create, 0);
+ 
+ 	if (bitmap->bp[page].hijacked ||
+diff --git a/drivers/md/md-bitmap.h b/drivers/md/md-bitmap.h
+index cfd7395de8fd3..3a4750952b3a7 100644
+--- a/drivers/md/md-bitmap.h
++++ b/drivers/md/md-bitmap.h
+@@ -273,6 +273,13 @@ int md_bitmap_copy_from_slot(struct mddev *mddev, int slot,
+ 			     sector_t *lo, sector_t *hi, bool clear_bits);
+ void md_bitmap_free(struct bitmap *bitmap);
+ void md_bitmap_wait_behind_writes(struct mddev *mddev);
++
++static inline bool md_bitmap_enabled(struct bitmap *bitmap)
++{
++	return bitmap && bitmap->storage.filemap &&
++	       !test_bit(BITMAP_STALE, &bitmap->flags);
++}
++
+ #endif
+ 
+ #endif
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index d479e1656ef33..2674cb1d699c0 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -3814,8 +3814,9 @@ int strict_strtoul_scaled(const char *cp, unsigned long *res, int scale)
+ static ssize_t
+ safe_delay_show(struct mddev *mddev, char *page)
+ {
+-	int msec = (mddev->safemode_delay*1000)/HZ;
+-	return sprintf(page, "%d.%03d\n", msec/1000, msec%1000);
++	unsigned int msec = ((unsigned long)mddev->safemode_delay*1000)/HZ;
++
++	return sprintf(page, "%u.%03u\n", msec/1000, msec%1000);
+ }
+ static ssize_t
+ safe_delay_store(struct mddev *mddev, const char *cbuf, size_t len)
+@@ -3827,7 +3828,7 @@ safe_delay_store(struct mddev *mddev, const char *cbuf, size_t len)
+ 		return -EINVAL;
+ 	}
+ 
+-	if (strict_strtoul_scaled(cbuf, &msec, 3) < 0)
++	if (strict_strtoul_scaled(cbuf, &msec, 3) < 0 || msec > UINT_MAX / HZ)
+ 		return -EINVAL;
+ 	if (msec == 0)
+ 		mddev->safemode_delay = 0;
+@@ -4497,6 +4498,8 @@ max_corrected_read_errors_store(struct mddev *mddev, const char *buf, size_t len
+ 	rv = kstrtouint(buf, 10, &n);
+ 	if (rv < 0)
+ 		return rv;
++	if (n > INT_MAX)
++		return -EINVAL;
+ 	atomic_set(&mddev->max_corr_read_errors, n);
+ 	return len;
+ }
+diff --git a/drivers/md/raid1-10.c b/drivers/md/raid1-10.c
+index e61f6cad4e08e..e0c8ac8146331 100644
+--- a/drivers/md/raid1-10.c
++++ b/drivers/md/raid1-10.c
+@@ -109,3 +109,45 @@ static void md_bio_reset_resync_pages(struct bio *bio, struct resync_pages *rp,
+ 		size -= len;
+ 	} while (idx++ < RESYNC_PAGES && size > 0);
+ }
++
++
++static inline void raid1_submit_write(struct bio *bio)
++{
++	struct md_rdev *rdev = (void *)bio->bi_bdev;
++
++	bio->bi_next = NULL;
++	bio_set_dev(bio, rdev->bdev);
++	if (test_bit(Faulty, &rdev->flags))
++		bio_io_error(bio);
++	else if (unlikely(bio_op(bio) ==  REQ_OP_DISCARD &&
++			  !bdev_max_discard_sectors(bio->bi_bdev)))
++		/* Just ignore it */
++		bio_endio(bio);
++	else
++		submit_bio_noacct(bio);
++}
++
++static inline bool raid1_add_bio_to_plug(struct mddev *mddev, struct bio *bio,
++				      blk_plug_cb_fn unplug)
++{
++	struct raid1_plug_cb *plug = NULL;
++	struct blk_plug_cb *cb;
++
++	/*
++	 * If bitmap is not enabled, it's safe to submit the io directly, and
++	 * this can get optimal performance.
++	 */
++	if (!md_bitmap_enabled(mddev->bitmap)) {
++		raid1_submit_write(bio);
++		return true;
++	}
++
++	cb = blk_check_plugged(unplug, mddev, sizeof(*plug));
++	if (!cb)
++		return false;
++
++	plug = container_of(cb, struct raid1_plug_cb, cb);
++	bio_list_add(&plug->pending, bio);
++
++	return true;
++}
+diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
+index 68a9e2d9985b2..e51b77a3a8397 100644
+--- a/drivers/md/raid1.c
++++ b/drivers/md/raid1.c
+@@ -799,17 +799,8 @@ static void flush_bio_list(struct r1conf *conf, struct bio *bio)
+ 
+ 	while (bio) { /* submit pending writes */
+ 		struct bio *next = bio->bi_next;
+-		struct md_rdev *rdev = (void *)bio->bi_bdev;
+-		bio->bi_next = NULL;
+-		bio_set_dev(bio, rdev->bdev);
+-		if (test_bit(Faulty, &rdev->flags)) {
+-			bio_io_error(bio);
+-		} else if (unlikely((bio_op(bio) == REQ_OP_DISCARD) &&
+-				    !bdev_max_discard_sectors(bio->bi_bdev)))
+-			/* Just ignore it */
+-			bio_endio(bio);
+-		else
+-			submit_bio_noacct(bio);
++
++		raid1_submit_write(bio);
+ 		bio = next;
+ 		cond_resched();
+ 	}
+@@ -1343,8 +1334,6 @@ static void raid1_write_request(struct mddev *mddev, struct bio *bio,
+ 	struct bitmap *bitmap = mddev->bitmap;
+ 	unsigned long flags;
+ 	struct md_rdev *blocked_rdev;
+-	struct blk_plug_cb *cb;
+-	struct raid1_plug_cb *plug = NULL;
+ 	int first_clone;
+ 	int max_sectors;
+ 	bool write_behind = false;
+@@ -1573,15 +1562,7 @@ static void raid1_write_request(struct mddev *mddev, struct bio *bio,
+ 					      r1_bio->sector);
+ 		/* flush_pending_writes() needs access to the rdev so...*/
+ 		mbio->bi_bdev = (void *)rdev;
+-
+-		cb = blk_check_plugged(raid1_unplug, mddev, sizeof(*plug));
+-		if (cb)
+-			plug = container_of(cb, struct raid1_plug_cb, cb);
+-		else
+-			plug = NULL;
+-		if (plug) {
+-			bio_list_add(&plug->pending, mbio);
+-		} else {
++		if (!raid1_add_bio_to_plug(mddev, mbio, raid1_unplug)) {
+ 			spin_lock_irqsave(&conf->device_lock, flags);
+ 			bio_list_add(&conf->pending_bio_list, mbio);
+ 			spin_unlock_irqrestore(&conf->device_lock, flags);
+diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
+index ea6967aeaa02a..f2f7538dd2a68 100644
+--- a/drivers/md/raid10.c
++++ b/drivers/md/raid10.c
+@@ -779,8 +779,16 @@ static struct md_rdev *read_balance(struct r10conf *conf,
+ 		disk = r10_bio->devs[slot].devnum;
+ 		rdev = rcu_dereference(conf->mirrors[disk].replacement);
+ 		if (rdev == NULL || test_bit(Faulty, &rdev->flags) ||
+-		    r10_bio->devs[slot].addr + sectors > rdev->recovery_offset)
++		    r10_bio->devs[slot].addr + sectors >
++		    rdev->recovery_offset) {
++			/*
++			 * Read replacement first to prevent reading both rdev
++			 * and replacement as NULL during replacement replace
++			 * rdev.
++			 */
++			smp_mb();
+ 			rdev = rcu_dereference(conf->mirrors[disk].rdev);
++		}
+ 		if (rdev == NULL ||
+ 		    test_bit(Faulty, &rdev->flags))
+ 			continue;
+@@ -909,17 +917,8 @@ static void flush_pending_writes(struct r10conf *conf)
+ 
+ 		while (bio) { /* submit pending writes */
+ 			struct bio *next = bio->bi_next;
+-			struct md_rdev *rdev = (void*)bio->bi_bdev;
+-			bio->bi_next = NULL;
+-			bio_set_dev(bio, rdev->bdev);
+-			if (test_bit(Faulty, &rdev->flags)) {
+-				bio_io_error(bio);
+-			} else if (unlikely((bio_op(bio) ==  REQ_OP_DISCARD) &&
+-					    !bdev_max_discard_sectors(bio->bi_bdev)))
+-				/* Just ignore it */
+-				bio_endio(bio);
+-			else
+-				submit_bio_noacct(bio);
++
++			raid1_submit_write(bio);
+ 			bio = next;
+ 		}
+ 		blk_finish_plug(&plug);
+@@ -1128,17 +1127,8 @@ static void raid10_unplug(struct blk_plug_cb *cb, bool from_schedule)
+ 
+ 	while (bio) { /* submit pending writes */
+ 		struct bio *next = bio->bi_next;
+-		struct md_rdev *rdev = (void*)bio->bi_bdev;
+-		bio->bi_next = NULL;
+-		bio_set_dev(bio, rdev->bdev);
+-		if (test_bit(Faulty, &rdev->flags)) {
+-			bio_io_error(bio);
+-		} else if (unlikely((bio_op(bio) ==  REQ_OP_DISCARD) &&
+-				    !bdev_max_discard_sectors(bio->bi_bdev)))
+-			/* Just ignore it */
+-			bio_endio(bio);
+-		else
+-			submit_bio_noacct(bio);
++
++		raid1_submit_write(bio);
+ 		bio = next;
+ 	}
+ 	kfree(plug);
+@@ -1280,8 +1270,6 @@ static void raid10_write_one_disk(struct mddev *mddev, struct r10bio *r10_bio,
+ 	const blk_opf_t do_sync = bio->bi_opf & REQ_SYNC;
+ 	const blk_opf_t do_fua = bio->bi_opf & REQ_FUA;
+ 	unsigned long flags;
+-	struct blk_plug_cb *cb;
+-	struct raid1_plug_cb *plug = NULL;
+ 	struct r10conf *conf = mddev->private;
+ 	struct md_rdev *rdev;
+ 	int devnum = r10_bio->devs[n_copy].devnum;
+@@ -1321,14 +1309,7 @@ static void raid10_write_one_disk(struct mddev *mddev, struct r10bio *r10_bio,
+ 
+ 	atomic_inc(&r10_bio->remaining);
+ 
+-	cb = blk_check_plugged(raid10_unplug, mddev, sizeof(*plug));
+-	if (cb)
+-		plug = container_of(cb, struct raid1_plug_cb, cb);
+-	else
+-		plug = NULL;
+-	if (plug) {
+-		bio_list_add(&plug->pending, mbio);
+-	} else {
++	if (!raid1_add_bio_to_plug(mddev, mbio, raid10_unplug)) {
+ 		spin_lock_irqsave(&conf->device_lock, flags);
+ 		bio_list_add(&conf->pending_bio_list, mbio);
+ 		spin_unlock_irqrestore(&conf->device_lock, flags);
+@@ -1477,9 +1458,15 @@ static void raid10_write_request(struct mddev *mddev, struct bio *bio,
+ 
+ 	for (i = 0;  i < conf->copies; i++) {
+ 		int d = r10_bio->devs[i].devnum;
+-		struct md_rdev *rdev = rcu_dereference(conf->mirrors[d].rdev);
+-		struct md_rdev *rrdev = rcu_dereference(
+-			conf->mirrors[d].replacement);
++		struct md_rdev *rdev, *rrdev;
++
++		rrdev = rcu_dereference(conf->mirrors[d].replacement);
++		/*
++		 * Read replacement first to prevent reading both rdev and
++		 * replacement as NULL during replacement replace rdev.
++		 */
++		smp_mb();
++		rdev = rcu_dereference(conf->mirrors[d].rdev);
+ 		if (rdev == rrdev)
+ 			rrdev = NULL;
+ 		if (rdev && (test_bit(Faulty, &rdev->flags)))
+@@ -3436,7 +3423,6 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr,
+ 			int must_sync;
+ 			int any_working;
+ 			int need_recover = 0;
+-			int need_replace = 0;
+ 			struct raid10_info *mirror = &conf->mirrors[i];
+ 			struct md_rdev *mrdev, *mreplace;
+ 
+@@ -3448,11 +3434,10 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr,
+ 			    !test_bit(Faulty, &mrdev->flags) &&
+ 			    !test_bit(In_sync, &mrdev->flags))
+ 				need_recover = 1;
+-			if (mreplace != NULL &&
+-			    !test_bit(Faulty, &mreplace->flags))
+-				need_replace = 1;
++			if (mreplace && test_bit(Faulty, &mreplace->flags))
++				mreplace = NULL;
+ 
+-			if (!need_recover && !need_replace) {
++			if (!need_recover && !mreplace) {
+ 				rcu_read_unlock();
+ 				continue;
+ 			}
+@@ -3468,8 +3453,6 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr,
+ 				rcu_read_unlock();
+ 				continue;
+ 			}
+-			if (mreplace && test_bit(Faulty, &mreplace->flags))
+-				mreplace = NULL;
+ 			/* Unless we are doing a full sync, or a replacement
+ 			 * we only need to recover the block if it is set in
+ 			 * the bitmap
+@@ -3592,11 +3575,11 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr,
+ 				bio = r10_bio->devs[1].repl_bio;
+ 				if (bio)
+ 					bio->bi_end_io = NULL;
+-				/* Note: if need_replace, then bio
++				/* Note: if replace is not NULL, then bio
+ 				 * cannot be NULL as r10buf_pool_alloc will
+ 				 * have allocated it.
+ 				 */
+-				if (!need_replace)
++				if (!mreplace)
+ 					break;
+ 				bio->bi_next = biolist;
+ 				biolist = bio;
+diff --git a/drivers/memory/brcmstb_dpfe.c b/drivers/memory/brcmstb_dpfe.c
+index 76c82e9c8fceb..9339f80b21c50 100644
+--- a/drivers/memory/brcmstb_dpfe.c
++++ b/drivers/memory/brcmstb_dpfe.c
+@@ -434,15 +434,17 @@ static void __finalize_command(struct brcmstb_dpfe_priv *priv)
+ static int __send_command(struct brcmstb_dpfe_priv *priv, unsigned int cmd,
+ 			  u32 result[])
+ {
+-	const u32 *msg = priv->dpfe_api->command[cmd];
+ 	void __iomem *regs = priv->regs;
+ 	unsigned int i, chksum, chksum_idx;
++	const u32 *msg;
+ 	int ret = 0;
+ 	u32 resp;
+ 
+ 	if (cmd >= DPFE_CMD_MAX)
+ 		return -1;
+ 
++	msg = priv->dpfe_api->command[cmd];
++
+ 	mutex_lock(&priv->lock);
+ 
+ 	/* Wait for DCPU to become ready */
+diff --git a/drivers/memstick/host/r592.c b/drivers/memstick/host/r592.c
+index 42bfc46842b82..461f5ffd02bc1 100644
+--- a/drivers/memstick/host/r592.c
++++ b/drivers/memstick/host/r592.c
+@@ -44,12 +44,10 @@ static const char *tpc_names[] = {
+  * memstick_debug_get_tpc_name - debug helper that returns string for
+  * a TPC number
+  */
+-const char *memstick_debug_get_tpc_name(int tpc)
++static __maybe_unused const char *memstick_debug_get_tpc_name(int tpc)
+ {
+ 	return tpc_names[tpc-1];
+ }
+-EXPORT_SYMBOL(memstick_debug_get_tpc_name);
+-
+ 
+ /* Read a register*/
+ static inline u32 r592_read_reg(struct r592_device *dev, int address)
+diff --git a/drivers/mfd/tps65010.c b/drivers/mfd/tps65010.c
+index fb733288cca3b..faea4ff44c6fe 100644
+--- a/drivers/mfd/tps65010.c
++++ b/drivers/mfd/tps65010.c
+@@ -506,12 +506,8 @@ static void tps65010_remove(struct i2c_client *client)
+ 	struct tps65010		*tps = i2c_get_clientdata(client);
+ 	struct tps65010_board	*board = dev_get_platdata(&client->dev);
+ 
+-	if (board && board->teardown) {
+-		int status = board->teardown(client, board->context);
+-		if (status < 0)
+-			dev_dbg(&client->dev, "board %s %s err %d\n",
+-				"teardown", client->name, status);
+-	}
++	if (board && board->teardown)
++		board->teardown(client, &tps->chip);
+ 	if (client->irq > 0)
+ 		free_irq(client->irq, tps);
+ 	cancel_delayed_work_sync(&tps->work);
+@@ -619,7 +615,7 @@ static int tps65010_probe(struct i2c_client *client)
+ 				tps, DEBUG_FOPS);
+ 
+ 	/* optionally register GPIOs */
+-	if (board && board->base != 0) {
++	if (board) {
+ 		tps->outmask = board->outmask;
+ 
+ 		tps->chip.label = client->name;
+@@ -632,7 +628,7 @@ static int tps65010_probe(struct i2c_client *client)
+ 		/* NOTE:  only partial support for inputs; nyet IRQs */
+ 		tps->chip.get = tps65010_gpio_get;
+ 
+-		tps->chip.base = board->base;
++		tps->chip.base = -1;
+ 		tps->chip.ngpio = 7;
+ 		tps->chip.can_sleep = 1;
+ 
+@@ -641,7 +637,7 @@ static int tps65010_probe(struct i2c_client *client)
+ 			dev_err(&client->dev, "can't add gpiochip, err %d\n",
+ 					status);
+ 		else if (board->setup) {
+-			status = board->setup(client, board->context);
++			status = board->setup(client, &tps->chip);
+ 			if (status < 0) {
+ 				dev_dbg(&client->dev,
+ 					"board %s %s err %d\n",
+diff --git a/drivers/mmc/core/card.h b/drivers/mmc/core/card.h
+index cfdd1ff40b865..4edf9057fa79d 100644
+--- a/drivers/mmc/core/card.h
++++ b/drivers/mmc/core/card.h
+@@ -53,6 +53,10 @@ struct mmc_fixup {
+ 	unsigned int manfid;
+ 	unsigned short oemid;
+ 
++	/* Manufacturing date */
++	unsigned short year;
++	unsigned char month;
++
+ 	/* SDIO-specific fields. You can use SDIO_ANY_ID here of course */
+ 	u16 cis_vendor, cis_device;
+ 
+@@ -68,6 +72,8 @@ struct mmc_fixup {
+ 
+ #define CID_MANFID_ANY (-1u)
+ #define CID_OEMID_ANY ((unsigned short) -1)
++#define CID_YEAR_ANY ((unsigned short) -1)
++#define CID_MONTH_ANY ((unsigned char) -1)
+ #define CID_NAME_ANY (NULL)
+ 
+ #define EXT_CSD_REV_ANY (-1u)
+@@ -81,17 +87,21 @@ struct mmc_fixup {
+ #define CID_MANFID_APACER       0x27
+ #define CID_MANFID_KINGSTON     0x70
+ #define CID_MANFID_HYNIX	0x90
++#define CID_MANFID_KINGSTON_SD	0x9F
+ #define CID_MANFID_NUMONYX	0xFE
+ 
+ #define END_FIXUP { NULL }
+ 
+-#define _FIXUP_EXT(_name, _manfid, _oemid, _rev_start, _rev_end,	\
+-		   _cis_vendor, _cis_device,				\
+-		   _fixup, _data, _ext_csd_rev)				\
++#define _FIXUP_EXT(_name, _manfid, _oemid, _year, _month,	\
++		   _rev_start, _rev_end,			\
++		   _cis_vendor, _cis_device,			\
++		   _fixup, _data, _ext_csd_rev)			\
+ 	{						\
+ 		.name = (_name),			\
+ 		.manfid = (_manfid),			\
+ 		.oemid = (_oemid),			\
++		.year = (_year),			\
++		.month = (_month),			\
+ 		.rev_start = (_rev_start),		\
+ 		.rev_end = (_rev_end),			\
+ 		.cis_vendor = (_cis_vendor),		\
+@@ -103,8 +113,8 @@ struct mmc_fixup {
+ 
+ #define MMC_FIXUP_REV(_name, _manfid, _oemid, _rev_start, _rev_end,	\
+ 		      _fixup, _data, _ext_csd_rev)			\
+-	_FIXUP_EXT(_name, _manfid,					\
+-		   _oemid, _rev_start, _rev_end,			\
++	_FIXUP_EXT(_name, _manfid, _oemid, CID_YEAR_ANY, CID_MONTH_ANY,	\
++		   _rev_start, _rev_end,				\
+ 		   SDIO_ANY_ID, SDIO_ANY_ID,				\
+ 		   _fixup, _data, _ext_csd_rev)				\
+ 
+@@ -118,8 +128,9 @@ struct mmc_fixup {
+ 		      _ext_csd_rev)
+ 
+ #define SDIO_FIXUP(_vendor, _device, _fixup, _data)			\
+-	_FIXUP_EXT(CID_NAME_ANY, CID_MANFID_ANY,			\
+-		    CID_OEMID_ANY, 0, -1ull,				\
++	_FIXUP_EXT(CID_NAME_ANY, CID_MANFID_ANY, CID_OEMID_ANY,		\
++		   CID_YEAR_ANY, CID_MONTH_ANY,				\
++		   0, -1ull,						\
+ 		   _vendor, _device,					\
+ 		   _fixup, _data, EXT_CSD_REV_ANY)			\
+ 
+@@ -264,4 +275,9 @@ static inline int mmc_card_broken_sd_discard(const struct mmc_card *c)
+ 	return c->quirks & MMC_QUIRK_BROKEN_SD_DISCARD;
+ }
+ 
++static inline int mmc_card_broken_sd_cache(const struct mmc_card *c)
++{
++	return c->quirks & MMC_QUIRK_BROKEN_SD_CACHE;
++}
++
+ #endif
+diff --git a/drivers/mmc/core/quirks.h b/drivers/mmc/core/quirks.h
+index 29b9497936df9..a7ffbc930ea9d 100644
+--- a/drivers/mmc/core/quirks.h
++++ b/drivers/mmc/core/quirks.h
+@@ -53,6 +53,15 @@ static const struct mmc_fixup __maybe_unused mmc_blk_fixups[] = {
+ 	MMC_FIXUP("MMC32G", CID_MANFID_TOSHIBA, CID_OEMID_ANY, add_quirk_mmc,
+ 		  MMC_QUIRK_BLK_NO_CMD23),
+ 
++	/*
++	 * Kingston Canvas Go! Plus microSD cards never finish SD cache flush.
++	 * This has so far only been observed on cards from 11/2019, while new
++	 * cards from 2023/05 do not exhibit this behavior.
++	 */
++	_FIXUP_EXT("SD64G", CID_MANFID_KINGSTON_SD, 0x5449, 2019, 11,
++		   0, -1ull, SDIO_ANY_ID, SDIO_ANY_ID, add_quirk_sd,
++		   MMC_QUIRK_BROKEN_SD_CACHE, EXT_CSD_REV_ANY),
++
+ 	/*
+ 	 * Some SD cards lockup while using CMD23 multiblock transfers.
+ 	 */
+@@ -209,6 +218,10 @@ static inline void mmc_fixup_device(struct mmc_card *card,
+ 		if (f->of_compatible &&
+ 		    !mmc_fixup_of_compatible_match(card, f->of_compatible))
+ 			continue;
++		if (f->year != CID_YEAR_ANY && f->year != card->cid.year)
++			continue;
++		if (f->month != CID_MONTH_ANY && f->month != card->cid.month)
++			continue;
+ 
+ 		dev_dbg(&card->dev, "calling %ps\n", f->vendor_fixup);
+ 		f->vendor_fixup(card, f->data);
+diff --git a/drivers/mmc/core/sd.c b/drivers/mmc/core/sd.c
+index 72b664ed90cf6..246ce027ae0aa 100644
+--- a/drivers/mmc/core/sd.c
++++ b/drivers/mmc/core/sd.c
+@@ -1170,7 +1170,7 @@ static int sd_parse_ext_reg_perf(struct mmc_card *card, u8 fno, u8 page,
+ 		card->ext_perf.feature_support |= SD_EXT_PERF_HOST_MAINT;
+ 
+ 	/* Cache support at bit 0. */
+-	if (reg_buf[4] & BIT(0))
++	if ((reg_buf[4] & BIT(0)) && !mmc_card_broken_sd_cache(card))
+ 		card->ext_perf.feature_support |= SD_EXT_PERF_CACHE;
+ 
+ 	/* Command queue support indicated via queue depth bits (0 to 4). */
+diff --git a/drivers/mmc/host/mtk-sd.c b/drivers/mmc/host/mtk-sd.c
+index 9785ec91654f7..97c42aacaf346 100644
+--- a/drivers/mmc/host/mtk-sd.c
++++ b/drivers/mmc/host/mtk-sd.c
+@@ -2707,7 +2707,7 @@ static int msdc_drv_probe(struct platform_device *pdev)
+ 
+ 	/* Support for SDIO eint irq ? */
+ 	if ((mmc->pm_caps & MMC_PM_WAKE_SDIO_IRQ) && (mmc->pm_caps & MMC_PM_KEEP_POWER)) {
+-		host->eint_irq = platform_get_irq_byname(pdev, "sdio_wakeup");
++		host->eint_irq = platform_get_irq_byname_optional(pdev, "sdio_wakeup");
+ 		if (host->eint_irq > 0) {
+ 			host->pins_eint = pinctrl_lookup_state(host->pinctrl, "state_eint");
+ 			if (IS_ERR(host->pins_eint)) {
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index 806d33d9f7124..172939e275a3b 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -4171,7 +4171,7 @@ u32 bond_xmit_hash(struct bonding *bond, struct sk_buff *skb)
+ 		return skb->hash;
+ 
+ 	return __bond_xmit_hash(bond, skb, skb->data, skb->protocol,
+-				skb_mac_offset(skb), skb_network_offset(skb),
++				0, skb_network_offset(skb),
+ 				skb_headlen(skb));
+ }
+ 
+diff --git a/drivers/net/can/kvaser_pciefd.c b/drivers/net/can/kvaser_pciefd.c
+index 956a4a57396f9..74a47244f1291 100644
+--- a/drivers/net/can/kvaser_pciefd.c
++++ b/drivers/net/can/kvaser_pciefd.c
+@@ -538,6 +538,13 @@ static int kvaser_pciefd_set_tx_irq(struct kvaser_pciefd_can *can)
+ 	return 0;
+ }
+ 
++static inline void kvaser_pciefd_set_skb_timestamp(const struct kvaser_pciefd *pcie,
++						   struct sk_buff *skb, u64 timestamp)
++{
++	skb_hwtstamps(skb)->hwtstamp =
++		ns_to_ktime(div_u64(timestamp * 1000, pcie->freq_to_ticks_div));
++}
++
+ static void kvaser_pciefd_setup_controller(struct kvaser_pciefd_can *can)
+ {
+ 	u32 mode;
+@@ -1171,7 +1178,6 @@ static int kvaser_pciefd_handle_data_packet(struct kvaser_pciefd *pcie,
+ 	struct canfd_frame *cf;
+ 	struct can_priv *priv;
+ 	struct net_device_stats *stats;
+-	struct skb_shared_hwtstamps *shhwtstamps;
+ 	u8 ch_id = (p->header[1] >> KVASER_PCIEFD_PACKET_CHID_SHIFT) & 0x7;
+ 
+ 	if (ch_id >= pcie->nr_channels)
+@@ -1214,12 +1220,7 @@ static int kvaser_pciefd_handle_data_packet(struct kvaser_pciefd *pcie,
+ 		stats->rx_bytes += cf->len;
+ 	}
+ 	stats->rx_packets++;
+-
+-	shhwtstamps = skb_hwtstamps(skb);
+-
+-	shhwtstamps->hwtstamp =
+-		ns_to_ktime(div_u64(p->timestamp * 1000,
+-				    pcie->freq_to_ticks_div));
++	kvaser_pciefd_set_skb_timestamp(pcie, skb, p->timestamp);
+ 
+ 	return netif_rx(skb);
+ }
+@@ -1282,7 +1283,6 @@ static int kvaser_pciefd_rx_error_frame(struct kvaser_pciefd_can *can,
+ 	struct net_device *ndev = can->can.dev;
+ 	struct sk_buff *skb;
+ 	struct can_frame *cf = NULL;
+-	struct skb_shared_hwtstamps *shhwtstamps;
+ 	struct net_device_stats *stats = &ndev->stats;
+ 
+ 	old_state = can->can.state;
+@@ -1323,10 +1323,7 @@ static int kvaser_pciefd_rx_error_frame(struct kvaser_pciefd_can *can,
+ 		return -ENOMEM;
+ 	}
+ 
+-	shhwtstamps = skb_hwtstamps(skb);
+-	shhwtstamps->hwtstamp =
+-		ns_to_ktime(div_u64(p->timestamp * 1000,
+-				    can->kv_pcie->freq_to_ticks_div));
++	kvaser_pciefd_set_skb_timestamp(can->kv_pcie, skb, p->timestamp);
+ 	cf->can_id |= CAN_ERR_BUSERROR | CAN_ERR_CNT;
+ 
+ 	cf->data[6] = bec.txerr;
+@@ -1374,7 +1371,6 @@ static int kvaser_pciefd_handle_status_resp(struct kvaser_pciefd_can *can,
+ 		struct net_device *ndev = can->can.dev;
+ 		struct sk_buff *skb;
+ 		struct can_frame *cf;
+-		struct skb_shared_hwtstamps *shhwtstamps;
+ 
+ 		skb = alloc_can_err_skb(ndev, &cf);
+ 		if (!skb) {
+@@ -1394,10 +1390,7 @@ static int kvaser_pciefd_handle_status_resp(struct kvaser_pciefd_can *can,
+ 			cf->can_id |= CAN_ERR_RESTARTED;
+ 		}
+ 
+-		shhwtstamps = skb_hwtstamps(skb);
+-		shhwtstamps->hwtstamp =
+-			ns_to_ktime(div_u64(p->timestamp * 1000,
+-					    can->kv_pcie->freq_to_ticks_div));
++		kvaser_pciefd_set_skb_timestamp(can->kv_pcie, skb, p->timestamp);
+ 
+ 		cf->data[6] = bec.txerr;
+ 		cf->data[7] = bec.rxerr;
+@@ -1526,6 +1519,7 @@ static void kvaser_pciefd_handle_nack_packet(struct kvaser_pciefd_can *can,
+ 
+ 	if (skb) {
+ 		cf->can_id |= CAN_ERR_BUSERROR;
++		kvaser_pciefd_set_skb_timestamp(can->kv_pcie, skb, p->timestamp);
+ 		netif_rx(skb);
+ 	} else {
+ 		stats->rx_dropped++;
+@@ -1557,8 +1551,15 @@ static int kvaser_pciefd_handle_ack_packet(struct kvaser_pciefd *pcie,
+ 		netdev_dbg(can->can.dev, "Packet was flushed\n");
+ 	} else {
+ 		int echo_idx = p->header[0] & KVASER_PCIEFD_PACKET_SEQ_MSK;
+-		int dlc = can_get_echo_skb(can->can.dev, echo_idx, NULL);
+-		u8 count = ioread32(can->reg_base +
++		int dlc;
++		u8 count;
++		struct sk_buff *skb;
++
++		skb = can->can.echo_skb[echo_idx];
++		if (skb)
++			kvaser_pciefd_set_skb_timestamp(pcie, skb, p->timestamp);
++		dlc = can_get_echo_skb(can->can.dev, echo_idx, NULL);
++		count = ioread32(can->reg_base +
+ 				    KVASER_PCIEFD_KCAN_TX_NPACKETS_REG) & 0xff;
+ 
+ 		if (count < KVASER_PCIEFD_CAN_TX_MAX_COUNT &&
+diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
+index e809249500e18..03c5aecd61402 100644
+--- a/drivers/net/ethernet/intel/ice/ice.h
++++ b/drivers/net/ethernet/intel/ice/ice.h
+@@ -515,6 +515,12 @@ enum ice_pf_flags {
+ 	ICE_PF_FLAGS_NBITS		/* must be last */
+ };
+ 
++enum ice_misc_thread_tasks {
++	ICE_MISC_THREAD_EXTTS_EVENT,
++	ICE_MISC_THREAD_TX_TSTAMP,
++	ICE_MISC_THREAD_NBITS		/* must be last */
++};
++
+ struct ice_switchdev_info {
+ 	struct ice_vsi *control_vsi;
+ 	struct ice_vsi *uplink_vsi;
+@@ -557,6 +563,7 @@ struct ice_pf {
+ 	DECLARE_BITMAP(features, ICE_F_MAX);
+ 	DECLARE_BITMAP(state, ICE_STATE_NBITS);
+ 	DECLARE_BITMAP(flags, ICE_PF_FLAGS_NBITS);
++	DECLARE_BITMAP(misc_thread, ICE_MISC_THREAD_NBITS);
+ 	unsigned long *avail_txqs;	/* bitmap to track PF Tx queue usage */
+ 	unsigned long *avail_rxqs;	/* bitmap to track PF Rx queue usage */
+ 	unsigned long serv_tmr_period;
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index 98e8ce743fb2e..65f77ab3fc806 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -3134,20 +3134,28 @@ static irqreturn_t ice_misc_intr(int __always_unused irq, void *data)
+ 
+ 	if (oicr & PFINT_OICR_TSYN_TX_M) {
+ 		ena_mask &= ~PFINT_OICR_TSYN_TX_M;
+-		if (!hw->reset_ongoing)
++		if (!hw->reset_ongoing) {
++			set_bit(ICE_MISC_THREAD_TX_TSTAMP, pf->misc_thread);
+ 			ret = IRQ_WAKE_THREAD;
++		}
+ 	}
+ 
+ 	if (oicr & PFINT_OICR_TSYN_EVNT_M) {
+ 		u8 tmr_idx = hw->func_caps.ts_func_info.tmr_index_owned;
+ 		u32 gltsyn_stat = rd32(hw, GLTSYN_STAT(tmr_idx));
+ 
+-		/* Save EVENTs from GTSYN register */
+-		pf->ptp.ext_ts_irq |= gltsyn_stat & (GLTSYN_STAT_EVENT0_M |
+-						     GLTSYN_STAT_EVENT1_M |
+-						     GLTSYN_STAT_EVENT2_M);
+ 		ena_mask &= ~PFINT_OICR_TSYN_EVNT_M;
+-		kthread_queue_work(pf->ptp.kworker, &pf->ptp.extts_work);
++
++		if (hw->func_caps.ts_func_info.src_tmr_owned) {
++			/* Save EVENTs from GLTSYN register */
++			pf->ptp.ext_ts_irq |= gltsyn_stat &
++					      (GLTSYN_STAT_EVENT0_M |
++					       GLTSYN_STAT_EVENT1_M |
++					       GLTSYN_STAT_EVENT2_M);
++
++			set_bit(ICE_MISC_THREAD_EXTTS_EVENT, pf->misc_thread);
++			ret = IRQ_WAKE_THREAD;
++		}
+ 	}
+ 
+ #define ICE_AUX_CRIT_ERR (PFINT_OICR_PE_CRITERR_M | PFINT_OICR_HMC_ERR_M | PFINT_OICR_PE_PUSH_M)
+@@ -3191,8 +3199,13 @@ static irqreturn_t ice_misc_intr_thread_fn(int __always_unused irq, void *data)
+ 	if (ice_is_reset_in_progress(pf->state))
+ 		return IRQ_HANDLED;
+ 
+-	while (!ice_ptp_process_ts(pf))
+-		usleep_range(50, 100);
++	if (test_and_clear_bit(ICE_MISC_THREAD_EXTTS_EVENT, pf->misc_thread))
++		ice_ptp_extts_event(pf);
++
++	if (test_and_clear_bit(ICE_MISC_THREAD_TX_TSTAMP, pf->misc_thread)) {
++		while (!ice_ptp_process_ts(pf))
++			usleep_range(50, 100);
++	}
+ 
+ 	return IRQ_HANDLED;
+ }
+diff --git a/drivers/net/ethernet/intel/ice/ice_ptp.c b/drivers/net/ethernet/intel/ice/ice_ptp.c
+index ac6f06f9a2ed0..e8507d09b8488 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ptp.c
++++ b/drivers/net/ethernet/intel/ice/ice_ptp.c
+@@ -1458,15 +1458,11 @@ static int ice_ptp_adjfine(struct ptp_clock_info *info, long scaled_ppm)
+ }
+ 
+ /**
+- * ice_ptp_extts_work - Workqueue task function
+- * @work: external timestamp work structure
+- *
+- * Service for PTP external clock event
++ * ice_ptp_extts_event - Process PTP external clock event
++ * @pf: Board private structure
+  */
+-static void ice_ptp_extts_work(struct kthread_work *work)
++void ice_ptp_extts_event(struct ice_pf *pf)
+ {
+-	struct ice_ptp *ptp = container_of(work, struct ice_ptp, extts_work);
+-	struct ice_pf *pf = container_of(ptp, struct ice_pf, ptp);
+ 	struct ptp_clock_event event;
+ 	struct ice_hw *hw = &pf->hw;
+ 	u8 chan, tmr_idx;
+@@ -2558,7 +2554,6 @@ void ice_ptp_prepare_for_reset(struct ice_pf *pf)
+ 	ice_ptp_cfg_timestamp(pf, false);
+ 
+ 	kthread_cancel_delayed_work_sync(&ptp->work);
+-	kthread_cancel_work_sync(&ptp->extts_work);
+ 
+ 	if (test_bit(ICE_PFR_REQ, pf->state))
+ 		return;
+@@ -2656,7 +2651,6 @@ static int ice_ptp_init_work(struct ice_pf *pf, struct ice_ptp *ptp)
+ 
+ 	/* Initialize work functions */
+ 	kthread_init_delayed_work(&ptp->work, ice_ptp_periodic_work);
+-	kthread_init_work(&ptp->extts_work, ice_ptp_extts_work);
+ 
+ 	/* Allocate a kworker for handling work required for the ports
+ 	 * connected to the PTP hardware clock.
+diff --git a/drivers/net/ethernet/intel/ice/ice_ptp.h b/drivers/net/ethernet/intel/ice/ice_ptp.h
+index 9cda2f43e0e56..9f8902c1e743d 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ptp.h
++++ b/drivers/net/ethernet/intel/ice/ice_ptp.h
+@@ -169,7 +169,6 @@ struct ice_ptp_port {
+  * struct ice_ptp - data used for integrating with CONFIG_PTP_1588_CLOCK
+  * @port: data for the PHY port initialization procedure
+  * @work: delayed work function for periodic tasks
+- * @extts_work: work function for handling external Tx timestamps
+  * @cached_phc_time: a cached copy of the PHC time for timestamp extension
+  * @cached_phc_jiffies: jiffies when cached_phc_time was last updated
+  * @ext_ts_chan: the external timestamp channel in use
+@@ -190,7 +189,6 @@ struct ice_ptp_port {
+ struct ice_ptp {
+ 	struct ice_ptp_port port;
+ 	struct kthread_delayed_work work;
+-	struct kthread_work extts_work;
+ 	u64 cached_phc_time;
+ 	unsigned long cached_phc_jiffies;
+ 	u8 ext_ts_chan;
+@@ -256,6 +254,7 @@ int ice_ptp_get_ts_config(struct ice_pf *pf, struct ifreq *ifr);
+ void ice_ptp_cfg_timestamp(struct ice_pf *pf, bool ena);
+ int ice_get_ptp_clock_index(struct ice_pf *pf);
+ 
++void ice_ptp_extts_event(struct ice_pf *pf);
+ s8 ice_ptp_request_ts(struct ice_ptp_tx *tx, struct sk_buff *skb);
+ bool ice_ptp_process_ts(struct ice_pf *pf);
+ 
+@@ -284,6 +283,7 @@ static inline int ice_get_ptp_clock_index(struct ice_pf *pf)
+ 	return -1;
+ }
+ 
++static inline void ice_ptp_extts_event(struct ice_pf *pf) { }
+ static inline s8
+ ice_ptp_request_ts(struct ice_ptp_tx *tx, struct sk_buff *skb)
+ {
+diff --git a/drivers/net/ethernet/intel/igc/igc.h b/drivers/net/ethernet/intel/igc/igc.h
+index df3e26c0cf01a..f83cbc4a1afa8 100644
+--- a/drivers/net/ethernet/intel/igc/igc.h
++++ b/drivers/net/ethernet/intel/igc/igc.h
+@@ -13,6 +13,7 @@
+ #include <linux/ptp_clock_kernel.h>
+ #include <linux/timecounter.h>
+ #include <linux/net_tstamp.h>
++#include <linux/bitfield.h>
+ 
+ #include "igc_hw.h"
+ 
+@@ -311,6 +312,33 @@ extern char igc_driver_name[];
+ #define IGC_MRQC_RSS_FIELD_IPV4_UDP	0x00400000
+ #define IGC_MRQC_RSS_FIELD_IPV6_UDP	0x00800000
+ 
++/* RX-desc Write-Back format RSS Type's */
++enum igc_rss_type_num {
++	IGC_RSS_TYPE_NO_HASH		= 0,
++	IGC_RSS_TYPE_HASH_TCP_IPV4	= 1,
++	IGC_RSS_TYPE_HASH_IPV4		= 2,
++	IGC_RSS_TYPE_HASH_TCP_IPV6	= 3,
++	IGC_RSS_TYPE_HASH_IPV6_EX	= 4,
++	IGC_RSS_TYPE_HASH_IPV6		= 5,
++	IGC_RSS_TYPE_HASH_TCP_IPV6_EX	= 6,
++	IGC_RSS_TYPE_HASH_UDP_IPV4	= 7,
++	IGC_RSS_TYPE_HASH_UDP_IPV6	= 8,
++	IGC_RSS_TYPE_HASH_UDP_IPV6_EX	= 9,
++	IGC_RSS_TYPE_MAX		= 10,
++};
++#define IGC_RSS_TYPE_MAX_TABLE		16
++#define IGC_RSS_TYPE_MASK		GENMASK(3,0) /* 4-bits (3:0) = mask 0x0F */
++
++/* igc_rss_type - Rx descriptor RSS type field */
++static inline u32 igc_rss_type(const union igc_adv_rx_desc *rx_desc)
++{
++	/* RSS Type 4-bits (3:0) number: 0-9 (above 9 is reserved)
++	 * Accessing the same bits via u16 (wb.lower.lo_dword.hs_rss.pkt_info)
++	 * is slightly slower than via u32 (wb.lower.lo_dword.data)
++	 */
++	return le32_get_bits(rx_desc->wb.lower.lo_dword.data, IGC_RSS_TYPE_MASK);
++}
++
+ /* Interrupt defines */
+ #define IGC_START_ITR			648 /* ~6000 ints/sec */
+ #define IGC_4K_ITR			980
+diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
+index b35f5ff3536e5..c85ceed443223 100644
+--- a/drivers/net/ethernet/intel/igc/igc_main.c
++++ b/drivers/net/ethernet/intel/igc/igc_main.c
+@@ -1687,14 +1687,36 @@ static void igc_rx_checksum(struct igc_ring *ring,
+ 		   le32_to_cpu(rx_desc->wb.upper.status_error));
+ }
+ 
++/* Mapping HW RSS Type to enum pkt_hash_types */
++static const enum pkt_hash_types igc_rss_type_table[IGC_RSS_TYPE_MAX_TABLE] = {
++	[IGC_RSS_TYPE_NO_HASH]		= PKT_HASH_TYPE_L2,
++	[IGC_RSS_TYPE_HASH_TCP_IPV4]	= PKT_HASH_TYPE_L4,
++	[IGC_RSS_TYPE_HASH_IPV4]	= PKT_HASH_TYPE_L3,
++	[IGC_RSS_TYPE_HASH_TCP_IPV6]	= PKT_HASH_TYPE_L4,
++	[IGC_RSS_TYPE_HASH_IPV6_EX]	= PKT_HASH_TYPE_L3,
++	[IGC_RSS_TYPE_HASH_IPV6]	= PKT_HASH_TYPE_L3,
++	[IGC_RSS_TYPE_HASH_TCP_IPV6_EX] = PKT_HASH_TYPE_L4,
++	[IGC_RSS_TYPE_HASH_UDP_IPV4]	= PKT_HASH_TYPE_L4,
++	[IGC_RSS_TYPE_HASH_UDP_IPV6]	= PKT_HASH_TYPE_L4,
++	[IGC_RSS_TYPE_HASH_UDP_IPV6_EX] = PKT_HASH_TYPE_L4,
++	[10] = PKT_HASH_TYPE_NONE, /* RSS Type above 9 "Reserved" by HW  */
++	[11] = PKT_HASH_TYPE_NONE, /* keep array sized for SW bit-mask   */
++	[12] = PKT_HASH_TYPE_NONE, /* to handle future HW revisons       */
++	[13] = PKT_HASH_TYPE_NONE,
++	[14] = PKT_HASH_TYPE_NONE,
++	[15] = PKT_HASH_TYPE_NONE,
++};
++
+ static inline void igc_rx_hash(struct igc_ring *ring,
+ 			       union igc_adv_rx_desc *rx_desc,
+ 			       struct sk_buff *skb)
+ {
+-	if (ring->netdev->features & NETIF_F_RXHASH)
+-		skb_set_hash(skb,
+-			     le32_to_cpu(rx_desc->wb.lower.hi_dword.rss),
+-			     PKT_HASH_TYPE_L3);
++	if (ring->netdev->features & NETIF_F_RXHASH) {
++		u32 rss_hash = le32_to_cpu(rx_desc->wb.lower.hi_dword.rss);
++		u32 rss_type = igc_rss_type(rx_desc);
++
++		skb_set_hash(skb, rss_hash, igc_rss_type_table[rss_type]);
++	}
+ }
+ 
+ static void igc_rx_vlan(struct igc_ring *rx_ring,
+@@ -6553,6 +6575,7 @@ static int igc_probe(struct pci_dev *pdev,
+ 	netdev->features |= NETIF_F_TSO;
+ 	netdev->features |= NETIF_F_TSO6;
+ 	netdev->features |= NETIF_F_TSO_ECN;
++	netdev->features |= NETIF_F_RXHASH;
+ 	netdev->features |= NETIF_F_RXCSUM;
+ 	netdev->features |= NETIF_F_HW_CSUM;
+ 	netdev->features |= NETIF_F_SCTP_CRC;
+diff --git a/drivers/net/ethernet/sfc/ef10.c b/drivers/net/ethernet/sfc/ef10.c
+index b63e47af63655..8c019f382a7f3 100644
+--- a/drivers/net/ethernet/sfc/ef10.c
++++ b/drivers/net/ethernet/sfc/ef10.c
+@@ -1297,8 +1297,10 @@ static void efx_ef10_fini_nic(struct efx_nic *efx)
+ {
+ 	struct efx_ef10_nic_data *nic_data = efx->nic_data;
+ 
++	spin_lock_bh(&efx->stats_lock);
+ 	kfree(nic_data->mc_stats);
+ 	nic_data->mc_stats = NULL;
++	spin_unlock_bh(&efx->stats_lock);
+ }
+ 
+ static int efx_ef10_init_nic(struct efx_nic *efx)
+@@ -1852,9 +1854,14 @@ static size_t efx_ef10_update_stats_pf(struct efx_nic *efx, u64 *full_stats,
+ 
+ 	efx_ef10_get_stat_mask(efx, mask);
+ 
+-	efx_nic_copy_stats(efx, nic_data->mc_stats);
+-	efx_nic_update_stats(efx_ef10_stat_desc, EF10_STAT_COUNT,
+-			     mask, stats, nic_data->mc_stats, false);
++	/* If NIC was fini'd (probably resetting), then we can't read
++	 * updated stats right now.
++	 */
++	if (nic_data->mc_stats) {
++		efx_nic_copy_stats(efx, nic_data->mc_stats);
++		efx_nic_update_stats(efx_ef10_stat_desc, EF10_STAT_COUNT,
++				     mask, stats, nic_data->mc_stats, false);
++	}
+ 
+ 	/* Update derived statistics */
+ 	efx_nic_fix_nodesc_drop_stat(efx,
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index 4e59f0e164ec0..29d70ecdac846 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -7394,12 +7394,6 @@ void stmmac_dvr_remove(struct device *dev)
+ 	netif_carrier_off(ndev);
+ 	unregister_netdev(ndev);
+ 
+-	/* Serdes power down needs to happen after VLAN filter
+-	 * is deleted that is triggered by unregister_netdev().
+-	 */
+-	if (priv->plat->serdes_powerdown)
+-		priv->plat->serdes_powerdown(ndev, priv->plat->bsp_priv);
+-
+ #ifdef CONFIG_DEBUG_FS
+ 	stmmac_exit_fs(ndev);
+ #endif
+diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+index 3e310b55bce2b..734822321e0ab 100644
+--- a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
++++ b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+@@ -2042,6 +2042,11 @@ static int axienet_probe(struct platform_device *pdev)
+ 		goto cleanup_clk;
+ 	}
+ 
++	/* Reset core now that clocks are enabled, prior to accessing MDIO */
++	ret = __axienet_device_reset(lp);
++	if (ret)
++		goto cleanup_clk;
++
+ 	/* Autodetect the need for 64-bit DMA pointers.
+ 	 * When the IP is configured for a bus width bigger than 32 bits,
+ 	 * writing the MSB registers is mandatory, even if they are all 0.
+@@ -2096,11 +2101,6 @@ static int axienet_probe(struct platform_device *pdev)
+ 	lp->coalesce_count_tx = XAXIDMA_DFT_TX_THRESHOLD;
+ 	lp->coalesce_usec_tx = XAXIDMA_DFT_TX_USEC;
+ 
+-	/* Reset core now that clocks are enabled, prior to accessing MDIO */
+-	ret = __axienet_device_reset(lp);
+-	if (ret)
+-		goto cleanup_clk;
+-
+ 	ret = axienet_mdio_setup(lp);
+ 	if (ret)
+ 		dev_warn(&pdev->dev,
+diff --git a/drivers/net/gtp.c b/drivers/net/gtp.c
+index 15c7dc82107f4..acb20ad4e37eb 100644
+--- a/drivers/net/gtp.c
++++ b/drivers/net/gtp.c
+@@ -631,7 +631,9 @@ static void __gtp_encap_destroy(struct sock *sk)
+ 			gtp->sk1u = NULL;
+ 		udp_sk(sk)->encap_type = 0;
+ 		rcu_assign_sk_user_data(sk, NULL);
++		release_sock(sk);
+ 		sock_put(sk);
++		return;
+ 	}
+ 	release_sock(sk);
+ }
+diff --git a/drivers/net/ipvlan/ipvlan_core.c b/drivers/net/ipvlan/ipvlan_core.c
+index ab5133eb1d517..e45817caaee8d 100644
+--- a/drivers/net/ipvlan/ipvlan_core.c
++++ b/drivers/net/ipvlan/ipvlan_core.c
+@@ -585,7 +585,8 @@ static int ipvlan_xmit_mode_l3(struct sk_buff *skb, struct net_device *dev)
+ 				consume_skb(skb);
+ 				return NET_XMIT_DROP;
+ 			}
+-			return ipvlan_rcv_frame(addr, &skb, true);
++			ipvlan_rcv_frame(addr, &skb, true);
++			return NET_XMIT_SUCCESS;
+ 		}
+ 	}
+ out:
+@@ -611,7 +612,8 @@ static int ipvlan_xmit_mode_l2(struct sk_buff *skb, struct net_device *dev)
+ 					consume_skb(skb);
+ 					return NET_XMIT_DROP;
+ 				}
+-				return ipvlan_rcv_frame(addr, &skb, true);
++				ipvlan_rcv_frame(addr, &skb, true);
++				return NET_XMIT_SUCCESS;
+ 			}
+ 		}
+ 		skb = skb_share_check(skb, GFP_ATOMIC);
+@@ -623,7 +625,8 @@ static int ipvlan_xmit_mode_l2(struct sk_buff *skb, struct net_device *dev)
+ 		 * the skb for the main-dev. At the RX side we just return
+ 		 * RX_PASS for it to be processed further on the stack.
+ 		 */
+-		return dev_forward_skb(ipvlan->phy_dev, skb);
++		dev_forward_skb(ipvlan->phy_dev, skb);
++		return NET_XMIT_SUCCESS;
+ 
+ 	} else if (is_multicast_ether_addr(eth->h_dest)) {
+ 		skb_reset_mac_header(skb);
+diff --git a/drivers/net/wireless/ath/ath10k/core.c b/drivers/net/wireless/ath/ath10k/core.c
+index 5eb131ab916fd..b6052dcc45ebf 100644
+--- a/drivers/net/wireless/ath/ath10k/core.c
++++ b/drivers/net/wireless/ath/ath10k/core.c
+@@ -2504,7 +2504,6 @@ EXPORT_SYMBOL(ath10k_core_napi_sync_disable);
+ static void ath10k_core_restart(struct work_struct *work)
+ {
+ 	struct ath10k *ar = container_of(work, struct ath10k, restart_work);
+-	struct ath10k_vif *arvif;
+ 	int ret;
+ 
+ 	set_bit(ATH10K_FLAG_CRASH_FLUSH, &ar->dev_flags);
+@@ -2543,14 +2542,6 @@ static void ath10k_core_restart(struct work_struct *work)
+ 		ar->state = ATH10K_STATE_RESTARTING;
+ 		ath10k_halt(ar);
+ 		ath10k_scan_finish(ar);
+-		if (ar->hw_params.hw_restart_disconnect) {
+-			list_for_each_entry(arvif, &ar->arvifs, list) {
+-				if (arvif->is_up &&
+-				    arvif->vdev_type == WMI_VDEV_TYPE_STA)
+-					ieee80211_hw_restart_disconnect(arvif->vif);
+-			}
+-		}
+-
+ 		ieee80211_restart_hw(ar->hw);
+ 		break;
+ 	case ATH10K_STATE_OFF:
+diff --git a/drivers/net/wireless/ath/ath10k/mac.c b/drivers/net/wireless/ath/ath10k/mac.c
+index ec8d5b29bc72c..f0729acdec50a 100644
+--- a/drivers/net/wireless/ath/ath10k/mac.c
++++ b/drivers/net/wireless/ath/ath10k/mac.c
+@@ -8108,6 +8108,7 @@ static void ath10k_reconfig_complete(struct ieee80211_hw *hw,
+ 				     enum ieee80211_reconfig_type reconfig_type)
+ {
+ 	struct ath10k *ar = hw->priv;
++	struct ath10k_vif *arvif;
+ 
+ 	if (reconfig_type != IEEE80211_RECONFIG_TYPE_RESTART)
+ 		return;
+@@ -8122,6 +8123,12 @@ static void ath10k_reconfig_complete(struct ieee80211_hw *hw,
+ 		ar->state = ATH10K_STATE_ON;
+ 		ieee80211_wake_queues(ar->hw);
+ 		clear_bit(ATH10K_FLAG_RESTARTING, &ar->dev_flags);
++		if (ar->hw_params.hw_restart_disconnect) {
++			list_for_each_entry(arvif, &ar->arvifs, list) {
++				if (arvif->is_up && arvif->vdev_type == WMI_VDEV_TYPE_STA)
++					ieee80211_hw_restart_disconnect(arvif->vif);
++				}
++		}
+ 	}
+ 
+ 	mutex_unlock(&ar->conf_mutex);
+diff --git a/drivers/net/wireless/ath/ath11k/ahb.c b/drivers/net/wireless/ath/ath11k/ahb.c
+index 5cbba9a8b6ba9..396548e57022f 100644
+--- a/drivers/net/wireless/ath/ath11k/ahb.c
++++ b/drivers/net/wireless/ath/ath11k/ahb.c
+@@ -1127,6 +1127,7 @@ static int ath11k_ahb_probe(struct platform_device *pdev)
+ 	switch (hw_rev) {
+ 	case ATH11K_HW_IPQ8074:
+ 	case ATH11K_HW_IPQ6018_HW10:
++	case ATH11K_HW_IPQ5018_HW10:
+ 		hif_ops = &ath11k_ahb_hif_ops_ipq8074;
+ 		pci_ops = NULL;
+ 		break;
+diff --git a/drivers/net/wireless/ath/ath11k/core.c b/drivers/net/wireless/ath/ath11k/core.c
+index 75fdbe4ef83a4..329f0957f9f09 100644
+--- a/drivers/net/wireless/ath/ath11k/core.c
++++ b/drivers/net/wireless/ath/ath11k/core.c
+@@ -671,6 +671,7 @@ static const struct ath11k_hw_params ath11k_hw_params[] = {
+ 		.hal_params = &ath11k_hw_hal_params_ipq8074,
+ 		.single_pdev_only = false,
+ 		.cold_boot_calib = true,
++		.cbcal_restart_fw = true,
+ 		.fix_l1ss = true,
+ 		.supports_dynamic_smps_6ghz = false,
+ 		.alloc_cacheable_memory = true,
+diff --git a/drivers/net/wireless/ath/ath11k/hw.c b/drivers/net/wireless/ath/ath11k/hw.c
+index ab8f0ccacc6be..727e6a785bb98 100644
+--- a/drivers/net/wireless/ath/ath11k/hw.c
++++ b/drivers/net/wireless/ath/ath11k/hw.c
+@@ -1165,7 +1165,7 @@ const struct ath11k_hw_ops ipq5018_ops = {
+ 	.mpdu_info_get_peerid = ath11k_hw_ipq8074_mpdu_info_get_peerid,
+ 	.rx_desc_mac_addr2_valid = ath11k_hw_ipq9074_rx_desc_mac_addr2_valid,
+ 	.rx_desc_mpdu_start_addr2 = ath11k_hw_ipq9074_rx_desc_mpdu_start_addr2,
+-
++	.get_ring_selector = ath11k_hw_ipq8074_get_tcl_ring_selector,
+ };
+ 
+ #define ATH11K_TX_RING_MASK_0 BIT(0)
+diff --git a/drivers/net/wireless/ath/ath11k/qmi.c b/drivers/net/wireless/ath/ath11k/qmi.c
+index ab923e24b0a9c..2328b9447cf1b 100644
+--- a/drivers/net/wireless/ath/ath11k/qmi.c
++++ b/drivers/net/wireless/ath/ath11k/qmi.c
+@@ -2058,6 +2058,9 @@ static int ath11k_qmi_assign_target_mem_chunk(struct ath11k_base *ab)
+ 			ab->qmi.target_mem[idx].iaddr =
+ 				ioremap(ab->qmi.target_mem[idx].paddr,
+ 					ab->qmi.target_mem[i].size);
++			if (!ab->qmi.target_mem[idx].iaddr)
++				return -EIO;
++
+ 			ab->qmi.target_mem[idx].size = ab->qmi.target_mem[i].size;
+ 			host_ddr_sz = ab->qmi.target_mem[i].size;
+ 			ab->qmi.target_mem[idx].type = ab->qmi.target_mem[i].type;
+@@ -2083,6 +2086,8 @@ static int ath11k_qmi_assign_target_mem_chunk(struct ath11k_base *ab)
+ 					ab->qmi.target_mem[idx].iaddr =
+ 						ioremap(ab->qmi.target_mem[idx].paddr,
+ 							ab->qmi.target_mem[i].size);
++					if (!ab->qmi.target_mem[idx].iaddr)
++						return -EIO;
+ 				} else {
+ 					ab->qmi.target_mem[idx].paddr =
+ 						ATH11K_QMI_CALDB_ADDRESS;
+diff --git a/drivers/net/wireless/ath/ath9k/ar9003_hw.c b/drivers/net/wireless/ath/ath9k/ar9003_hw.c
+index 4f27a9fb1482b..e9bd13eeee92f 100644
+--- a/drivers/net/wireless/ath/ath9k/ar9003_hw.c
++++ b/drivers/net/wireless/ath/ath9k/ar9003_hw.c
+@@ -1099,17 +1099,22 @@ static bool ath9k_hw_verify_hang(struct ath_hw *ah, unsigned int queue)
+ {
+ 	u32 dma_dbg_chain, dma_dbg_complete;
+ 	u8 dcu_chain_state, dcu_complete_state;
++	unsigned int dbg_reg, reg_offset;
+ 	int i;
+ 
+-	for (i = 0; i < NUM_STATUS_READS; i++) {
+-		if (queue < 6)
+-			dma_dbg_chain = REG_READ(ah, AR_DMADBG_4);
+-		else
+-			dma_dbg_chain = REG_READ(ah, AR_DMADBG_5);
++	if (queue < 6) {
++		dbg_reg = AR_DMADBG_4;
++		reg_offset = queue * 5;
++	} else {
++		dbg_reg = AR_DMADBG_5;
++		reg_offset = (queue - 6) * 5;
++	}
+ 
++	for (i = 0; i < NUM_STATUS_READS; i++) {
++		dma_dbg_chain = REG_READ(ah, dbg_reg);
+ 		dma_dbg_complete = REG_READ(ah, AR_DMADBG_6);
+ 
+-		dcu_chain_state = (dma_dbg_chain >> (5 * queue)) & 0x1f;
++		dcu_chain_state = (dma_dbg_chain >> reg_offset) & 0x1f;
+ 		dcu_complete_state = dma_dbg_complete & 0x3;
+ 
+ 		if ((dcu_chain_state != 0x6) || (dcu_complete_state != 0x1))
+@@ -1128,6 +1133,7 @@ static bool ar9003_hw_detect_mac_hang(struct ath_hw *ah)
+ 	u8 dcu_chain_state, dcu_complete_state;
+ 	bool dcu_wait_frdone = false;
+ 	unsigned long chk_dcu = 0;
++	unsigned int reg_offset;
+ 	unsigned int i = 0;
+ 
+ 	dma_dbg_4 = REG_READ(ah, AR_DMADBG_4);
+@@ -1139,12 +1145,15 @@ static bool ar9003_hw_detect_mac_hang(struct ath_hw *ah)
+ 		goto exit;
+ 
+ 	for (i = 0; i < ATH9K_NUM_TX_QUEUES; i++) {
+-		if (i < 6)
++		if (i < 6) {
+ 			chk_dbg = dma_dbg_4;
+-		else
++			reg_offset = i * 5;
++		} else {
+ 			chk_dbg = dma_dbg_5;
++			reg_offset = (i - 6) * 5;
++		}
+ 
+-		dcu_chain_state = (chk_dbg >> (5 * i)) & 0x1f;
++		dcu_chain_state = (chk_dbg >> reg_offset) & 0x1f;
+ 		if (dcu_chain_state == 0x6) {
+ 			dcu_wait_frdone = true;
+ 			chk_dcu |= BIT(i);
+diff --git a/drivers/net/wireless/ath/ath9k/htc_hst.c b/drivers/net/wireless/ath/ath9k/htc_hst.c
+index fe62ff668f757..99667aba289df 100644
+--- a/drivers/net/wireless/ath/ath9k/htc_hst.c
++++ b/drivers/net/wireless/ath/ath9k/htc_hst.c
+@@ -114,7 +114,13 @@ static void htc_process_conn_rsp(struct htc_target *target,
+ 
+ 	if (svc_rspmsg->status == HTC_SERVICE_SUCCESS) {
+ 		epid = svc_rspmsg->endpoint_id;
+-		if (epid < 0 || epid >= ENDPOINT_MAX)
++
++		/* Check that the received epid for the endpoint to attach
++		 * a new service is valid. ENDPOINT0 can't be used here as it
++		 * is already reserved for HTC_CTRL_RSVD_SVC service and thus
++		 * should not be modified.
++		 */
++		if (epid <= ENDPOINT0 || epid >= ENDPOINT_MAX)
+ 			return;
+ 
+ 		service_id = be16_to_cpu(svc_rspmsg->service_id);
+diff --git a/drivers/net/wireless/ath/ath9k/main.c b/drivers/net/wireless/ath/ath9k/main.c
+index a4197c14f0a92..6360d3356e256 100644
+--- a/drivers/net/wireless/ath/ath9k/main.c
++++ b/drivers/net/wireless/ath/ath9k/main.c
+@@ -203,7 +203,7 @@ void ath_cancel_work(struct ath_softc *sc)
+ void ath_restart_work(struct ath_softc *sc)
+ {
+ 	ieee80211_queue_delayed_work(sc->hw, &sc->hw_check_work,
+-				     ATH_HW_CHECK_POLL_INT);
++				     msecs_to_jiffies(ATH_HW_CHECK_POLL_INT));
+ 
+ 	if (AR_SREV_9340(sc->sc_ah) || AR_SREV_9330(sc->sc_ah))
+ 		ieee80211_queue_delayed_work(sc->hw, &sc->hw_pll_work,
+@@ -850,7 +850,7 @@ static bool ath9k_txq_list_has_key(struct list_head *txq_list, u32 keyix)
+ static bool ath9k_txq_has_key(struct ath_softc *sc, u32 keyix)
+ {
+ 	struct ath_hw *ah = sc->sc_ah;
+-	int i;
++	int i, j;
+ 	struct ath_txq *txq;
+ 	bool key_in_use = false;
+ 
+@@ -868,8 +868,9 @@ static bool ath9k_txq_has_key(struct ath_softc *sc, u32 keyix)
+ 		if (sc->sc_ah->caps.hw_caps & ATH9K_HW_CAP_EDMA) {
+ 			int idx = txq->txq_tailidx;
+ 
+-			while (!key_in_use &&
+-			       !list_empty(&txq->txq_fifo[idx])) {
++			for (j = 0; !key_in_use &&
++			     !list_empty(&txq->txq_fifo[idx]) &&
++			     j < ATH_TXFIFO_DEPTH; j++) {
+ 				key_in_use = ath9k_txq_list_has_key(
+ 					&txq->txq_fifo[idx], keyix);
+ 				INCR(idx, ATH_TXFIFO_DEPTH);
+@@ -2239,7 +2240,7 @@ void __ath9k_flush(struct ieee80211_hw *hw, u32 queues, bool drop,
+ 	}
+ 
+ 	ieee80211_queue_delayed_work(hw, &sc->hw_check_work,
+-				     ATH_HW_CHECK_POLL_INT);
++				     msecs_to_jiffies(ATH_HW_CHECK_POLL_INT));
+ }
+ 
+ static bool ath9k_tx_frames_pending(struct ieee80211_hw *hw)
+diff --git a/drivers/net/wireless/ath/ath9k/wmi.c b/drivers/net/wireless/ath/ath9k/wmi.c
+index 19345b8f7bfd5..d652c647d56b5 100644
+--- a/drivers/net/wireless/ath/ath9k/wmi.c
++++ b/drivers/net/wireless/ath/ath9k/wmi.c
+@@ -221,6 +221,10 @@ static void ath9k_wmi_ctrl_rx(void *priv, struct sk_buff *skb,
+ 	if (unlikely(wmi->stopped))
+ 		goto free_skb;
+ 
++	/* Validate the obtained SKB. */
++	if (unlikely(skb->len < sizeof(struct wmi_cmd_hdr)))
++		goto free_skb;
++
+ 	hdr = (struct wmi_cmd_hdr *) skb->data;
+ 	cmd_id = be16_to_cpu(hdr->command_id);
+ 
+diff --git a/drivers/net/wireless/atmel/atmel_cs.c b/drivers/net/wireless/atmel/atmel_cs.c
+index 453bb84cb3386..58bba9875d366 100644
+--- a/drivers/net/wireless/atmel/atmel_cs.c
++++ b/drivers/net/wireless/atmel/atmel_cs.c
+@@ -72,6 +72,7 @@ struct local_info {
+ static int atmel_probe(struct pcmcia_device *p_dev)
+ {
+ 	struct local_info *local;
++	int ret;
+ 
+ 	dev_dbg(&p_dev->dev, "atmel_attach()\n");
+ 
+@@ -82,8 +83,16 @@ static int atmel_probe(struct pcmcia_device *p_dev)
+ 
+ 	p_dev->priv = local;
+ 
+-	return atmel_config(p_dev);
+-} /* atmel_attach */
++	ret = atmel_config(p_dev);
++	if (ret)
++		goto err_free_priv;
++
++	return 0;
++
++err_free_priv:
++	kfree(p_dev->priv);
++	return ret;
++}
+ 
+ static void atmel_detach(struct pcmcia_device *link)
+ {
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
+index 9711841bb4564..3d91293cc250d 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
+@@ -1697,8 +1697,11 @@ static void iwl_mvm_queue_state_change(struct iwl_op_mode *op_mode,
+ 		else
+ 			set_bit(IWL_MVM_TXQ_STATE_STOP_FULL, &mvmtxq->state);
+ 
+-		if (start && mvmsta->sta_state != IEEE80211_STA_NOTEXIST)
++		if (start && mvmsta->sta_state != IEEE80211_STA_NOTEXIST) {
++			local_bh_disable();
+ 			iwl_mvm_mac_itxq_xmit(mvm->hw, txq);
++			local_bh_enable();
++		}
+ 	}
+ 
+ out:
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c b/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
+index ad410b6efce73..7ebcf0ef29255 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
+@@ -274,7 +274,8 @@ static void iwl_mvm_get_signal_strength(struct iwl_mvm *mvm,
+ static int iwl_mvm_rx_mgmt_prot(struct ieee80211_sta *sta,
+ 				struct ieee80211_hdr *hdr,
+ 				struct iwl_rx_mpdu_desc *desc,
+-				u32 status)
++				u32 status,
++				struct ieee80211_rx_status *stats)
+ {
+ 	struct iwl_mvm_sta *mvmsta;
+ 	struct iwl_mvm_vif *mvmvif;
+@@ -303,8 +304,10 @@ static int iwl_mvm_rx_mgmt_prot(struct ieee80211_sta *sta,
+ 
+ 	/* good cases */
+ 	if (likely(status & IWL_RX_MPDU_STATUS_MIC_OK &&
+-		   !(status & IWL_RX_MPDU_STATUS_REPLAY_ERROR)))
++		   !(status & IWL_RX_MPDU_STATUS_REPLAY_ERROR))) {
++		stats->flag |= RX_FLAG_DECRYPTED;
+ 		return 0;
++	}
+ 
+ 	if (!sta)
+ 		return -1;
+@@ -373,7 +376,7 @@ static int iwl_mvm_rx_crypto(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
+ 
+ 	if (unlikely(ieee80211_is_mgmt(hdr->frame_control) &&
+ 		     !ieee80211_has_protected(hdr->frame_control)))
+-		return iwl_mvm_rx_mgmt_prot(sta, hdr, desc, status);
++		return iwl_mvm_rx_mgmt_prot(sta, hdr, desc, status, stats);
+ 
+ 	if (!ieee80211_has_protected(hdr->frame_control) ||
+ 	    (status & IWL_RX_MPDU_STATUS_SEC_MASK) ==
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
+index 9c9f87fe83777..b455e981faa1f 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
+@@ -1620,14 +1620,14 @@ irqreturn_t iwl_pcie_irq_rx_msix_handler(int irq, void *dev_id)
+ 	struct msix_entry *entry = dev_id;
+ 	struct iwl_trans_pcie *trans_pcie = iwl_pcie_get_trans_pcie(entry);
+ 	struct iwl_trans *trans = trans_pcie->trans;
+-	struct iwl_rxq *rxq = &trans_pcie->rxq[entry->entry];
++	struct iwl_rxq *rxq;
+ 
+ 	trace_iwlwifi_dev_irq_msix(trans->dev, entry, false, 0, 0);
+ 
+ 	if (WARN_ON(entry->entry >= trans->num_rx_queues))
+ 		return IRQ_NONE;
+ 
+-	if (!rxq) {
++	if (!trans_pcie->rxq) {
+ 		if (net_ratelimit())
+ 			IWL_ERR(trans,
+ 				"[%d] Got MSI-X interrupt before we have Rx queues\n",
+@@ -1635,6 +1635,7 @@ irqreturn_t iwl_pcie_irq_rx_msix_handler(int irq, void *dev_id)
+ 		return IRQ_NONE;
+ 	}
+ 
++	rxq = &trans_pcie->rxq[entry->entry];
+ 	lock_map_acquire(&trans->sync_cmd_lockdep_map);
+ 	IWL_DEBUG_ISR(trans, "[%d] Got interrupt\n", entry->entry);
+ 
+diff --git a/drivers/net/wireless/intersil/orinoco/orinoco_cs.c b/drivers/net/wireless/intersil/orinoco/orinoco_cs.c
+index a956f965a1e5e..03bfd2482656c 100644
+--- a/drivers/net/wireless/intersil/orinoco/orinoco_cs.c
++++ b/drivers/net/wireless/intersil/orinoco/orinoco_cs.c
+@@ -96,6 +96,7 @@ orinoco_cs_probe(struct pcmcia_device *link)
+ {
+ 	struct orinoco_private *priv;
+ 	struct orinoco_pccard *card;
++	int ret;
+ 
+ 	priv = alloc_orinocodev(sizeof(*card), &link->dev,
+ 				orinoco_cs_hard_reset, NULL);
+@@ -107,8 +108,16 @@ orinoco_cs_probe(struct pcmcia_device *link)
+ 	card->p_dev = link;
+ 	link->priv = priv;
+ 
+-	return orinoco_cs_config(link);
+-}				/* orinoco_cs_attach */
++	ret = orinoco_cs_config(link);
++	if (ret)
++		goto err_free_orinocodev;
++
++	return 0;
++
++err_free_orinocodev:
++	free_orinocodev(priv);
++	return ret;
++}
+ 
+ static void orinoco_cs_detach(struct pcmcia_device *link)
+ {
+diff --git a/drivers/net/wireless/intersil/orinoco/spectrum_cs.c b/drivers/net/wireless/intersil/orinoco/spectrum_cs.c
+index 291ef97ed45ec..841d623c621ac 100644
+--- a/drivers/net/wireless/intersil/orinoco/spectrum_cs.c
++++ b/drivers/net/wireless/intersil/orinoco/spectrum_cs.c
+@@ -157,6 +157,7 @@ spectrum_cs_probe(struct pcmcia_device *link)
+ {
+ 	struct orinoco_private *priv;
+ 	struct orinoco_pccard *card;
++	int ret;
+ 
+ 	priv = alloc_orinocodev(sizeof(*card), &link->dev,
+ 				spectrum_cs_hard_reset,
+@@ -169,8 +170,16 @@ spectrum_cs_probe(struct pcmcia_device *link)
+ 	card->p_dev = link;
+ 	link->priv = priv;
+ 
+-	return spectrum_cs_config(link);
+-}				/* spectrum_cs_attach */
++	ret = spectrum_cs_config(link);
++	if (ret)
++		goto err_free_orinocodev;
++
++	return 0;
++
++err_free_orinocodev:
++	free_orinocodev(priv);
++	return ret;
++}
+ 
+ static void spectrum_cs_detach(struct pcmcia_device *link)
+ {
+diff --git a/drivers/net/wireless/marvell/mwifiex/scan.c b/drivers/net/wireless/marvell/mwifiex/scan.c
+index ac8001c842935..644b1e134b01c 100644
+--- a/drivers/net/wireless/marvell/mwifiex/scan.c
++++ b/drivers/net/wireless/marvell/mwifiex/scan.c
+@@ -2187,9 +2187,9 @@ int mwifiex_ret_802_11_scan(struct mwifiex_private *priv,
+ 
+ 	if (nd_config) {
+ 		adapter->nd_info =
+-			kzalloc(sizeof(struct cfg80211_wowlan_nd_match) +
+-				sizeof(struct cfg80211_wowlan_nd_match *) *
+-				scan_rsp->number_of_sets, GFP_ATOMIC);
++			kzalloc(struct_size(adapter->nd_info, matches,
++					    scan_rsp->number_of_sets),
++				GFP_ATOMIC);
+ 
+ 		if (adapter->nd_info)
+ 			adapter->nd_info->n_matches = scan_rsp->number_of_sets;
+diff --git a/drivers/net/wireless/microchip/wilc1000/hif.c b/drivers/net/wireless/microchip/wilc1000/hif.c
+index 5adc69d5bcae3..a28da59384813 100644
+--- a/drivers/net/wireless/microchip/wilc1000/hif.c
++++ b/drivers/net/wireless/microchip/wilc1000/hif.c
+@@ -485,6 +485,9 @@ void *wilc_parse_join_bss_param(struct cfg80211_bss *bss,
+ 		int rsn_ie_len = sizeof(struct element) + rsn_ie[1];
+ 		int offset = 8;
+ 
++		param->mode_802_11i = 2;
++		param->rsn_found = true;
++
+ 		/* extract RSN capabilities */
+ 		if (offset < rsn_ie_len) {
+ 			/* skip over pairwise suites */
+@@ -494,11 +497,8 @@ void *wilc_parse_join_bss_param(struct cfg80211_bss *bss,
+ 				/* skip over authentication suites */
+ 				offset += (rsn_ie[offset] * 4) + 2;
+ 
+-				if (offset + 1 < rsn_ie_len) {
+-					param->mode_802_11i = 2;
+-					param->rsn_found = true;
++				if (offset + 1 < rsn_ie_len)
+ 					memcpy(param->rsn_cap, &rsn_ie[offset], 2);
+-				}
+ 			}
+ 		}
+ 	}
+diff --git a/drivers/net/wireless/ray_cs.c b/drivers/net/wireless/ray_cs.c
+index 1f57a0055bbd8..38782d4c4694a 100644
+--- a/drivers/net/wireless/ray_cs.c
++++ b/drivers/net/wireless/ray_cs.c
+@@ -270,13 +270,14 @@ static int ray_probe(struct pcmcia_device *p_dev)
+ {
+ 	ray_dev_t *local;
+ 	struct net_device *dev;
++	int ret;
+ 
+ 	dev_dbg(&p_dev->dev, "ray_attach()\n");
+ 
+ 	/* Allocate space for private device-specific data */
+ 	dev = alloc_etherdev(sizeof(ray_dev_t));
+ 	if (!dev)
+-		goto fail_alloc_dev;
++		return -ENOMEM;
+ 
+ 	local = netdev_priv(dev);
+ 	local->finder = p_dev;
+@@ -313,11 +314,16 @@ static int ray_probe(struct pcmcia_device *p_dev)
+ 	timer_setup(&local->timer, NULL, 0);
+ 
+ 	this_device = p_dev;
+-	return ray_config(p_dev);
++	ret = ray_config(p_dev);
++	if (ret)
++		goto err_free_dev;
++
++	return 0;
+ 
+-fail_alloc_dev:
+-	return -ENOMEM;
+-} /* ray_attach */
++err_free_dev:
++	free_netdev(dev);
++	return ret;
++}
+ 
+ static void ray_detach(struct pcmcia_device *link)
+ {
+diff --git a/drivers/net/wireless/realtek/rtw88/usb.c b/drivers/net/wireless/realtek/rtw88/usb.c
+index 44a5fafb99055..976eafa739a2d 100644
+--- a/drivers/net/wireless/realtek/rtw88/usb.c
++++ b/drivers/net/wireless/realtek/rtw88/usb.c
+@@ -535,7 +535,7 @@ static void rtw_usb_rx_handler(struct work_struct *work)
+ 		}
+ 
+ 		if (skb_queue_len(&rtwusb->rx_queue) >= RTW_USB_MAX_RXQ_LEN) {
+-			rtw_err(rtwdev, "failed to get rx_queue, overflow\n");
++			dev_dbg_ratelimited(rtwdev->dev, "failed to get rx_queue, overflow\n");
+ 			dev_kfree_skb_any(skb);
+ 			continue;
+ 		}
+diff --git a/drivers/net/wireless/rsi/rsi_91x_sdio.c b/drivers/net/wireless/rsi/rsi_91x_sdio.c
+index d09998796ac08..1911fef3bbad6 100644
+--- a/drivers/net/wireless/rsi/rsi_91x_sdio.c
++++ b/drivers/net/wireless/rsi/rsi_91x_sdio.c
+@@ -1463,10 +1463,8 @@ static void rsi_shutdown(struct device *dev)
+ 
+ 	rsi_dbg(ERR_ZONE, "SDIO Bus shutdown =====>\n");
+ 
+-	if (hw) {
+-		struct cfg80211_wowlan *wowlan = hw->wiphy->wowlan_config;
+-
+-		if (rsi_config_wowlan(adapter, wowlan))
++	if (hw && hw->wiphy && hw->wiphy->wowlan_config) {
++		if (rsi_config_wowlan(adapter, hw->wiphy->wowlan_config))
+ 			rsi_dbg(ERR_ZONE, "Failed to configure WoWLAN\n");
+ 	}
+ 
+@@ -1481,9 +1479,6 @@ static void rsi_shutdown(struct device *dev)
+ 	if (sdev->write_fail)
+ 		rsi_dbg(INFO_ZONE, "###### Device is not ready #######\n");
+ 
+-	if (rsi_set_sdio_pm_caps(adapter))
+-		rsi_dbg(INFO_ZONE, "Setting power management caps failed\n");
+-
+ 	rsi_dbg(INFO_ZONE, "***** RSI module shut down *****\n");
+ }
+ 
+diff --git a/drivers/net/wireless/wl3501_cs.c b/drivers/net/wireless/wl3501_cs.c
+index 7fb2f95134760..c45c4b7cbbaf1 100644
+--- a/drivers/net/wireless/wl3501_cs.c
++++ b/drivers/net/wireless/wl3501_cs.c
+@@ -1862,6 +1862,7 @@ static int wl3501_probe(struct pcmcia_device *p_dev)
+ {
+ 	struct net_device *dev;
+ 	struct wl3501_card *this;
++	int ret;
+ 
+ 	/* The io structure describes IO port mapping */
+ 	p_dev->resource[0]->end	= 16;
+@@ -1873,8 +1874,7 @@ static int wl3501_probe(struct pcmcia_device *p_dev)
+ 
+ 	dev = alloc_etherdev(sizeof(struct wl3501_card));
+ 	if (!dev)
+-		goto out_link;
+-
++		return -ENOMEM;
+ 
+ 	dev->netdev_ops		= &wl3501_netdev_ops;
+ 	dev->watchdog_timeo	= 5 * HZ;
+@@ -1887,9 +1887,15 @@ static int wl3501_probe(struct pcmcia_device *p_dev)
+ 	netif_stop_queue(dev);
+ 	p_dev->priv = dev;
+ 
+-	return wl3501_config(p_dev);
+-out_link:
+-	return -ENOMEM;
++	ret = wl3501_config(p_dev);
++	if (ret)
++		goto out_free_etherdev;
++
++	return 0;
++
++out_free_etherdev:
++	free_netdev(dev);
++	return ret;
+ }
+ 
+ static int wl3501_config(struct pcmcia_device *link)
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 8a632bf7f5a8f..8d8403b65e1b3 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -3872,8 +3872,10 @@ static ssize_t nvme_ctrl_dhchap_secret_store(struct device *dev,
+ 		int ret;
+ 
+ 		ret = nvme_auth_generate_key(dhchap_secret, &key);
+-		if (ret)
++		if (ret) {
++			kfree(dhchap_secret);
+ 			return ret;
++		}
+ 		kfree(opts->dhchap_secret);
+ 		opts->dhchap_secret = dhchap_secret;
+ 		host_key = ctrl->host_key;
+@@ -3881,7 +3883,8 @@ static ssize_t nvme_ctrl_dhchap_secret_store(struct device *dev,
+ 		ctrl->host_key = key;
+ 		mutex_unlock(&ctrl->dhchap_auth_mutex);
+ 		nvme_auth_free_key(host_key);
+-	}
++	} else
++		kfree(dhchap_secret);
+ 	/* Start re-authentication */
+ 	dev_info(ctrl->device, "re-authenticating controller\n");
+ 	queue_work(nvme_wq, &ctrl->dhchap_auth_work);
+@@ -3926,8 +3929,10 @@ static ssize_t nvme_ctrl_dhchap_ctrl_secret_store(struct device *dev,
+ 		int ret;
+ 
+ 		ret = nvme_auth_generate_key(dhchap_secret, &key);
+-		if (ret)
++		if (ret) {
++			kfree(dhchap_secret);
+ 			return ret;
++		}
+ 		kfree(opts->dhchap_ctrl_secret);
+ 		opts->dhchap_ctrl_secret = dhchap_secret;
+ 		ctrl_key = ctrl->ctrl_key;
+@@ -3935,7 +3940,8 @@ static ssize_t nvme_ctrl_dhchap_ctrl_secret_store(struct device *dev,
+ 		ctrl->ctrl_key = key;
+ 		mutex_unlock(&ctrl->dhchap_auth_mutex);
+ 		nvme_auth_free_key(ctrl_key);
+-	}
++	} else
++		kfree(dhchap_secret);
+ 	/* Start re-authentication */
+ 	dev_info(ctrl->device, "re-authenticating controller\n");
+ 	queue_work(nvme_wq, &ctrl->dhchap_auth_work);
+@@ -5243,6 +5249,8 @@ int nvme_init_ctrl(struct nvme_ctrl *ctrl, struct device *dev,
+ 
+ 	return 0;
+ out_free_cdev:
++	nvme_fault_inject_fini(&ctrl->fault_inject);
++	dev_pm_qos_hide_latency_tolerance(ctrl->device);
+ 	cdev_device_del(&ctrl->cdev, ctrl->device);
+ out_free_name:
+ 	nvme_put_ctrl(ctrl);
+diff --git a/drivers/pci/controller/cadence/pcie-cadence-host.c b/drivers/pci/controller/cadence/pcie-cadence-host.c
+index 940c7dd701d68..5b14f7ee3c798 100644
+--- a/drivers/pci/controller/cadence/pcie-cadence-host.c
++++ b/drivers/pci/controller/cadence/pcie-cadence-host.c
+@@ -12,6 +12,8 @@
+ 
+ #include "pcie-cadence.h"
+ 
++#define LINK_RETRAIN_TIMEOUT HZ
++
+ static u64 bar_max_size[] = {
+ 	[RP_BAR0] = _ULL(128 * SZ_2G),
+ 	[RP_BAR1] = SZ_2G,
+@@ -77,6 +79,27 @@ static struct pci_ops cdns_pcie_host_ops = {
+ 	.write		= pci_generic_config_write,
+ };
+ 
++static int cdns_pcie_host_training_complete(struct cdns_pcie *pcie)
++{
++	u32 pcie_cap_off = CDNS_PCIE_RP_CAP_OFFSET;
++	unsigned long end_jiffies;
++	u16 lnk_stat;
++
++	/* Wait for link training to complete. Exit after timeout. */
++	end_jiffies = jiffies + LINK_RETRAIN_TIMEOUT;
++	do {
++		lnk_stat = cdns_pcie_rp_readw(pcie, pcie_cap_off + PCI_EXP_LNKSTA);
++		if (!(lnk_stat & PCI_EXP_LNKSTA_LT))
++			break;
++		usleep_range(0, 1000);
++	} while (time_before(jiffies, end_jiffies));
++
++	if (!(lnk_stat & PCI_EXP_LNKSTA_LT))
++		return 0;
++
++	return -ETIMEDOUT;
++}
++
+ static int cdns_pcie_host_wait_for_link(struct cdns_pcie *pcie)
+ {
+ 	struct device *dev = pcie->dev;
+@@ -118,6 +141,10 @@ static int cdns_pcie_retrain(struct cdns_pcie *pcie)
+ 		cdns_pcie_rp_writew(pcie, pcie_cap_off + PCI_EXP_LNKCTL,
+ 				    lnk_ctl);
+ 
++		ret = cdns_pcie_host_training_complete(pcie);
++		if (ret)
++			return ret;
++
+ 		ret = cdns_pcie_host_wait_for_link(pcie);
+ 	}
+ 	return ret;
+diff --git a/drivers/pci/controller/pci-ftpci100.c b/drivers/pci/controller/pci-ftpci100.c
+index ecd3009df586d..6e7981d2ed5e1 100644
+--- a/drivers/pci/controller/pci-ftpci100.c
++++ b/drivers/pci/controller/pci-ftpci100.c
+@@ -429,22 +429,12 @@ static int faraday_pci_probe(struct platform_device *pdev)
+ 	p->dev = dev;
+ 
+ 	/* Retrieve and enable optional clocks */
+-	clk = devm_clk_get(dev, "PCLK");
++	clk = devm_clk_get_enabled(dev, "PCLK");
+ 	if (IS_ERR(clk))
+ 		return PTR_ERR(clk);
+-	ret = clk_prepare_enable(clk);
+-	if (ret) {
+-		dev_err(dev, "could not prepare PCLK\n");
+-		return ret;
+-	}
+-	p->bus_clk = devm_clk_get(dev, "PCICLK");
++	p->bus_clk = devm_clk_get_enabled(dev, "PCICLK");
+ 	if (IS_ERR(p->bus_clk))
+ 		return PTR_ERR(p->bus_clk);
+-	ret = clk_prepare_enable(p->bus_clk);
+-	if (ret) {
+-		dev_err(dev, "could not prepare PCICLK\n");
+-		return ret;
+-	}
+ 
+ 	p->base = devm_platform_ioremap_resource(pdev, 0);
+ 	if (IS_ERR(p->base))
+diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
+index 990630ec57c6a..e718a816d4814 100644
+--- a/drivers/pci/controller/vmd.c
++++ b/drivers/pci/controller/vmd.c
+@@ -927,7 +927,8 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
+ 		if (!list_empty(&child->devices)) {
+ 			dev = list_first_entry(&child->devices,
+ 					       struct pci_dev, bus_list);
+-			if (pci_reset_bus(dev))
++			ret = pci_reset_bus(dev);
++			if (ret)
+ 				pci_warn(dev, "can't reset device: %d\n", ret);
+ 
+ 			break;
+@@ -1036,6 +1037,13 @@ static void vmd_remove(struct pci_dev *dev)
+ 	ida_simple_remove(&vmd_instance_ida, vmd->instance);
+ }
+ 
++static void vmd_shutdown(struct pci_dev *dev)
++{
++        struct vmd_dev *vmd = pci_get_drvdata(dev);
++
++        vmd_remove_irq_domain(vmd);
++}
++
+ #ifdef CONFIG_PM_SLEEP
+ static int vmd_suspend(struct device *dev)
+ {
+@@ -1101,6 +1109,7 @@ static struct pci_driver vmd_drv = {
+ 	.id_table	= vmd_ids,
+ 	.probe		= vmd_probe,
+ 	.remove		= vmd_remove,
++	.shutdown	= vmd_shutdown,
+ 	.driver		= {
+ 		.pm	= &vmd_dev_pm_ops,
+ 	},
+diff --git a/drivers/pci/endpoint/functions/Kconfig b/drivers/pci/endpoint/functions/Kconfig
+index 9fd5608868718..8efb6a869e7ce 100644
+--- a/drivers/pci/endpoint/functions/Kconfig
++++ b/drivers/pci/endpoint/functions/Kconfig
+@@ -27,7 +27,7 @@ config PCI_EPF_NTB
+ 	  If in doubt, say "N" to disable Endpoint NTB driver.
+ 
+ config PCI_EPF_VNTB
+-	tristate "PCI Endpoint NTB driver"
++	tristate "PCI Endpoint Virtual NTB driver"
+ 	depends on PCI_ENDPOINT
+ 	depends on NTB
+ 	select CONFIGFS_FS
+diff --git a/drivers/pci/endpoint/functions/pci-epf-test.c b/drivers/pci/endpoint/functions/pci-epf-test.c
+index 0f9d2ec822ac6..172e5ac0bd96c 100644
+--- a/drivers/pci/endpoint/functions/pci-epf-test.c
++++ b/drivers/pci/endpoint/functions/pci-epf-test.c
+@@ -112,7 +112,7 @@ static int pci_epf_test_data_transfer(struct pci_epf_test *epf_test,
+ 				      size_t len, dma_addr_t dma_remote,
+ 				      enum dma_transfer_direction dir)
+ {
+-	struct dma_chan *chan = (dir == DMA_DEV_TO_MEM) ?
++	struct dma_chan *chan = (dir == DMA_MEM_TO_DEV) ?
+ 				 epf_test->dma_chan_tx : epf_test->dma_chan_rx;
+ 	dma_addr_t dma_local = (dir == DMA_MEM_TO_DEV) ? dma_src : dma_dst;
+ 	enum dma_ctrl_flags flags = DMA_CTRL_ACK | DMA_PREP_INTERRUPT;
+diff --git a/drivers/pci/hotplug/pciehp_ctrl.c b/drivers/pci/hotplug/pciehp_ctrl.c
+index 529c348084401..32baba1b7f131 100644
+--- a/drivers/pci/hotplug/pciehp_ctrl.c
++++ b/drivers/pci/hotplug/pciehp_ctrl.c
+@@ -256,6 +256,14 @@ void pciehp_handle_presence_or_link_change(struct controller *ctrl, u32 events)
+ 	present = pciehp_card_present(ctrl);
+ 	link_active = pciehp_check_link_active(ctrl);
+ 	if (present <= 0 && link_active <= 0) {
++		if (ctrl->state == BLINKINGON_STATE) {
++			ctrl->state = OFF_STATE;
++			cancel_delayed_work(&ctrl->button_work);
++			pciehp_set_indicators(ctrl, PCI_EXP_SLTCTL_PWR_IND_OFF,
++					      INDICATOR_NOOP);
++			ctrl_info(ctrl, "Slot(%s): Card not present\n",
++				  slot_name(ctrl));
++		}
+ 		mutex_unlock(&ctrl->state_lock);
+ 		return;
+ 	}
+diff --git a/drivers/pci/pcie/aspm.c b/drivers/pci/pcie/aspm.c
+index 66d7514ca111b..db32335039d61 100644
+--- a/drivers/pci/pcie/aspm.c
++++ b/drivers/pci/pcie/aspm.c
+@@ -1010,21 +1010,24 @@ void pcie_aspm_exit_link_state(struct pci_dev *pdev)
+ 
+ 	down_read(&pci_bus_sem);
+ 	mutex_lock(&aspm_lock);
+-	/*
+-	 * All PCIe functions are in one slot, remove one function will remove
+-	 * the whole slot, so just wait until we are the last function left.
+-	 */
+-	if (!list_empty(&parent->subordinate->devices))
+-		goto out;
+ 
+ 	link = parent->link_state;
+ 	root = link->root;
+ 	parent_link = link->parent;
+ 
+-	/* All functions are removed, so just disable ASPM for the link */
++	/*
++	 * link->downstream is a pointer to the pci_dev of function 0.  If
++	 * we remove that function, the pci_dev is about to be deallocated,
++	 * so we can't use link->downstream again.  Free the link state to
++	 * avoid this.
++	 *
++	 * If we're removing a non-0 function, it's possible we could
++	 * retain the link state, but PCIe r6.0, sec 7.5.3.7, recommends
++	 * programming the same ASPM Control value for all functions of
++	 * multi-function devices, so disable ASPM for all of them.
++	 */
+ 	pcie_config_aspm_link(link, 0);
+ 	list_del(&link->sibling);
+-	/* Clock PM is for endpoint device */
+ 	free_link_state(link);
+ 
+ 	/* Recheck latencies and configure upstream links */
+@@ -1032,7 +1035,7 @@ void pcie_aspm_exit_link_state(struct pci_dev *pdev)
+ 		pcie_update_aspm_capable(root);
+ 		pcie_config_aspm_path(parent_link);
+ 	}
+-out:
++
+ 	mutex_unlock(&aspm_lock);
+ 	up_read(&pci_bus_sem);
+ }
+diff --git a/drivers/perf/arm-cmn.c b/drivers/perf/arm-cmn.c
+index 44b719f39c3b3..4f86b7fd9823f 100644
+--- a/drivers/perf/arm-cmn.c
++++ b/drivers/perf/arm-cmn.c
+@@ -1899,9 +1899,10 @@ static int arm_cmn_init_dtc(struct arm_cmn *cmn, struct arm_cmn_node *dn, int id
+ 	if (dtc->irq < 0)
+ 		return dtc->irq;
+ 
+-	writel_relaxed(0, dtc->base + CMN_DT_PMCR);
++	writel_relaxed(CMN_DT_DTC_CTL_DT_EN, dtc->base + CMN_DT_DTC_CTL);
++	writel_relaxed(CMN_DT_PMCR_PMU_EN | CMN_DT_PMCR_OVFL_INTR_EN, dtc->base + CMN_DT_PMCR);
++	writeq_relaxed(0, dtc->base + CMN_DT_PMCCNTR);
+ 	writel_relaxed(0x1ff, dtc->base + CMN_DT_PMOVSR_CLR);
+-	writel_relaxed(CMN_DT_PMCR_OVFL_INTR_EN, dtc->base + CMN_DT_PMCR);
+ 
+ 	return 0;
+ }
+@@ -1961,7 +1962,7 @@ static int arm_cmn_init_dtcs(struct arm_cmn *cmn)
+ 			dn->type = CMN_TYPE_CCLA;
+ 	}
+ 
+-	writel_relaxed(CMN_DT_DTC_CTL_DT_EN, cmn->dtc[0].base + CMN_DT_DTC_CTL);
++	arm_cmn_set_state(cmn, CMN_STATE_DISABLED);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/perf/arm_cspmu/arm_cspmu.c b/drivers/perf/arm_cspmu/arm_cspmu.c
+index e31302ab7e37c..35d2fe33a7b6f 100644
+--- a/drivers/perf/arm_cspmu/arm_cspmu.c
++++ b/drivers/perf/arm_cspmu/arm_cspmu.c
+@@ -189,10 +189,10 @@ static inline bool use_64b_counter_reg(const struct arm_cspmu *cspmu)
+ ssize_t arm_cspmu_sysfs_event_show(struct device *dev,
+ 				struct device_attribute *attr, char *buf)
+ {
+-	struct dev_ext_attribute *eattr =
+-		container_of(attr, struct dev_ext_attribute, attr);
+-	return sysfs_emit(buf, "event=0x%llx\n",
+-			  (unsigned long long)eattr->var);
++	struct perf_pmu_events_attr *pmu_attr;
++
++	pmu_attr = container_of(attr, typeof(*pmu_attr), attr);
++	return sysfs_emit(buf, "event=0x%llx\n", pmu_attr->id);
+ }
+ EXPORT_SYMBOL_GPL(arm_cspmu_sysfs_event_show);
+ 
+@@ -1230,7 +1230,8 @@ static struct platform_driver arm_cspmu_driver = {
+ static void arm_cspmu_set_active_cpu(int cpu, struct arm_cspmu *cspmu)
+ {
+ 	cpumask_set_cpu(cpu, &cspmu->active_cpu);
+-	WARN_ON(irq_set_affinity(cspmu->irq, &cspmu->active_cpu));
++	if (cspmu->irq)
++		WARN_ON(irq_set_affinity(cspmu->irq, &cspmu->active_cpu));
+ }
+ 
+ static int arm_cspmu_cpu_online(unsigned int cpu, struct hlist_node *node)
+diff --git a/drivers/perf/hisilicon/hisi_pcie_pmu.c b/drivers/perf/hisilicon/hisi_pcie_pmu.c
+index 6fee0b6e163bb..e10fc7cb9493a 100644
+--- a/drivers/perf/hisilicon/hisi_pcie_pmu.c
++++ b/drivers/perf/hisilicon/hisi_pcie_pmu.c
+@@ -683,7 +683,7 @@ static int hisi_pcie_pmu_offline_cpu(unsigned int cpu, struct hlist_node *node)
+ 
+ 	pcie_pmu->on_cpu = -1;
+ 	/* Choose a new CPU from all online cpus. */
+-	target = cpumask_first(cpu_online_mask);
++	target = cpumask_any_but(cpu_online_mask, cpu);
+ 	if (target >= nr_cpu_ids) {
+ 		pci_err(pcie_pmu->pdev, "There is no CPU to set\n");
+ 		return 0;
+diff --git a/drivers/pinctrl/bcm/pinctrl-bcm2835.c b/drivers/pinctrl/bcm/pinctrl-bcm2835.c
+index 7435173e10f43..1489191a213fe 100644
+--- a/drivers/pinctrl/bcm/pinctrl-bcm2835.c
++++ b/drivers/pinctrl/bcm/pinctrl-bcm2835.c
+@@ -376,10 +376,8 @@ static int bcm2835_add_pin_ranges_fallback(struct gpio_chip *gc)
+ 	if (!pctldev)
+ 		return 0;
+ 
+-	gpiochip_add_pin_range(gc, pinctrl_dev_get_devname(pctldev), 0, 0,
+-			       gc->ngpio);
+-
+-	return 0;
++	return gpiochip_add_pin_range(gc, pinctrl_dev_get_devname(pctldev), 0, 0,
++				      gc->ngpio);
+ }
+ 
+ static const struct gpio_chip bcm2835_gpio_chip = {
+diff --git a/drivers/pinctrl/freescale/pinctrl-scu.c b/drivers/pinctrl/freescale/pinctrl-scu.c
+index ea261b6e74581..3b252d684d723 100644
+--- a/drivers/pinctrl/freescale/pinctrl-scu.c
++++ b/drivers/pinctrl/freescale/pinctrl-scu.c
+@@ -90,7 +90,7 @@ int imx_pinconf_set_scu(struct pinctrl_dev *pctldev, unsigned pin_id,
+ 	struct imx_sc_msg_req_pad_set msg;
+ 	struct imx_sc_rpc_msg *hdr = &msg.hdr;
+ 	unsigned int mux = configs[0];
+-	unsigned int conf = configs[1];
++	unsigned int conf;
+ 	unsigned int val;
+ 	int ret;
+ 
+@@ -115,6 +115,7 @@ int imx_pinconf_set_scu(struct pinctrl_dev *pctldev, unsigned pin_id,
+ 	 * Set mux and conf together in one IPC call
+ 	 */
+ 	WARN_ON(num_configs != 2);
++	conf = configs[1];
+ 
+ 	val = conf | BM_PAD_CTL_IFMUX_ENABLE | BM_PAD_CTL_GP_ENABLE;
+ 	val |= mux << BP_PAD_CTL_IFMUX;
+diff --git a/drivers/pinctrl/intel/pinctrl-cherryview.c b/drivers/pinctrl/intel/pinctrl-cherryview.c
+index 722990e278361..87cf1e7403979 100644
+--- a/drivers/pinctrl/intel/pinctrl-cherryview.c
++++ b/drivers/pinctrl/intel/pinctrl-cherryview.c
+@@ -949,11 +949,6 @@ static int chv_config_get(struct pinctrl_dev *pctldev, unsigned int pin,
+ 
+ 		break;
+ 
+-	case PIN_CONFIG_DRIVE_OPEN_DRAIN:
+-		if (!(ctrl1 & CHV_PADCTRL1_ODEN))
+-			return -EINVAL;
+-		break;
+-
+ 	case PIN_CONFIG_BIAS_HIGH_IMPEDANCE: {
+ 		u32 cfg;
+ 
+@@ -963,6 +958,16 @@ static int chv_config_get(struct pinctrl_dev *pctldev, unsigned int pin,
+ 			return -EINVAL;
+ 
+ 		break;
++
++	case PIN_CONFIG_DRIVE_PUSH_PULL:
++		if (ctrl1 & CHV_PADCTRL1_ODEN)
++			return -EINVAL;
++		break;
++
++	case PIN_CONFIG_DRIVE_OPEN_DRAIN:
++		if (!(ctrl1 & CHV_PADCTRL1_ODEN))
++			return -EINVAL;
++		break;
+ 	}
+ 
+ 	default:
+diff --git a/drivers/pinctrl/nuvoton/pinctrl-npcm7xx.c b/drivers/pinctrl/nuvoton/pinctrl-npcm7xx.c
+index ff5bcea172e84..071bdfd570f94 100644
+--- a/drivers/pinctrl/nuvoton/pinctrl-npcm7xx.c
++++ b/drivers/pinctrl/nuvoton/pinctrl-npcm7xx.c
+@@ -1881,6 +1881,8 @@ static int npcm7xx_gpio_of(struct npcm7xx_pinctrl *pctrl)
+ 		}
+ 
+ 		pctrl->gpio_bank[id].base = ioremap(res.start, resource_size(&res));
++		if (!pctrl->gpio_bank[id].base)
++			return -EINVAL;
+ 
+ 		ret = bgpio_init(&pctrl->gpio_bank[id].gc, dev, 4,
+ 				 pctrl->gpio_bank[id].base + NPCM7XX_GP_N_DIN,
+diff --git a/drivers/pinctrl/pinctrl-at91-pio4.c b/drivers/pinctrl/pinctrl-at91-pio4.c
+index c775d239444a6..20433c1745805 100644
+--- a/drivers/pinctrl/pinctrl-at91-pio4.c
++++ b/drivers/pinctrl/pinctrl-at91-pio4.c
+@@ -1151,6 +1151,8 @@ static int atmel_pinctrl_probe(struct platform_device *pdev)
+ 		/* Pin naming convention: P(bank_name)(bank_pin_number). */
+ 		pin_desc[i].name = devm_kasprintf(&pdev->dev, GFP_KERNEL, "P%c%d",
+ 						  bank + 'A', line);
++		if (!pin_desc[i].name)
++			return -ENOMEM;
+ 
+ 		group->name = group_names[i] = pin_desc[i].name;
+ 		group->pin = pin_desc[i].number;
+diff --git a/drivers/pinctrl/pinctrl-at91.c b/drivers/pinctrl/pinctrl-at91.c
+index 9fa68ca4a412d..9184d457edf8d 100644
+--- a/drivers/pinctrl/pinctrl-at91.c
++++ b/drivers/pinctrl/pinctrl-at91.c
+@@ -1294,10 +1294,11 @@ static const struct of_device_id at91_pinctrl_of_match[] = {
+ static int at91_pinctrl_probe_dt(struct platform_device *pdev,
+ 				 struct at91_pinctrl *info)
+ {
++	struct device *dev = &pdev->dev;
+ 	int ret = 0;
+ 	int i, j, ngpio_chips_enabled = 0;
+ 	uint32_t *tmp;
+-	struct device_node *np = pdev->dev.of_node;
++	struct device_node *np = dev->of_node;
+ 	struct device_node *child;
+ 
+ 	if (!np)
+@@ -1361,9 +1362,8 @@ static int at91_pinctrl_probe_dt(struct platform_device *pdev,
+ 			continue;
+ 		ret = at91_pinctrl_parse_functions(child, info, i++);
+ 		if (ret) {
+-			dev_err(&pdev->dev, "failed to parse function\n");
+ 			of_node_put(child);
+-			return ret;
++			return dev_err_probe(dev, ret, "failed to parse function\n");
+ 		}
+ 	}
+ 
+@@ -1399,8 +1399,8 @@ static int at91_pinctrl_probe(struct platform_device *pdev)
+ 		char **names;
+ 
+ 		names = devm_kasprintf_strarray(dev, "pio", MAX_NB_GPIO_PER_BANK);
+-		if (!names)
+-			return -ENOMEM;
++		if (IS_ERR(names))
++			return PTR_ERR(names);
+ 
+ 		for (j = 0; j < MAX_NB_GPIO_PER_BANK; j++, k++) {
+ 			char *name = names[j];
+@@ -1416,11 +1416,8 @@ static int at91_pinctrl_probe(struct platform_device *pdev)
+ 	platform_set_drvdata(pdev, info);
+ 	info->pctl = devm_pinctrl_register(&pdev->dev, &at91_pinctrl_desc,
+ 					   info);
+-
+-	if (IS_ERR(info->pctl)) {
+-		dev_err(&pdev->dev, "could not register AT91 pinctrl driver\n");
+-		return PTR_ERR(info->pctl);
+-	}
++	if (IS_ERR(info->pctl))
++		return dev_err_probe(dev, PTR_ERR(info->pctl), "could not register AT91 pinctrl driver\n");
+ 
+ 	/* We will handle a range of GPIO pins */
+ 	for (i = 0; i < gpio_banks; i++)
+@@ -1821,46 +1818,29 @@ static int at91_gpio_probe(struct platform_device *pdev)
+ 	char **names;
+ 
+ 	BUG_ON(alias_idx >= ARRAY_SIZE(gpio_chips));
+-	if (gpio_chips[alias_idx]) {
+-		ret = -EBUSY;
+-		goto err;
+-	}
++	if (gpio_chips[alias_idx])
++		return dev_err_probe(dev, -EBUSY, "%d slot is occupied.\n", alias_idx);
+ 
+ 	irq = platform_get_irq(pdev, 0);
+-	if (irq < 0) {
+-		ret = irq;
+-		goto err;
+-	}
++	if (irq < 0)
++		return irq;
+ 
+ 	at91_chip = devm_kzalloc(&pdev->dev, sizeof(*at91_chip), GFP_KERNEL);
+-	if (!at91_chip) {
+-		ret = -ENOMEM;
+-		goto err;
+-	}
++	if (!at91_chip)
++		return -ENOMEM;
+ 
+ 	at91_chip->regbase = devm_platform_ioremap_resource(pdev, 0);
+-	if (IS_ERR(at91_chip->regbase)) {
+-		ret = PTR_ERR(at91_chip->regbase);
+-		goto err;
+-	}
++	if (IS_ERR(at91_chip->regbase))
++		return PTR_ERR(at91_chip->regbase);
+ 
+ 	at91_chip->ops = (const struct at91_pinctrl_mux_ops *)
+ 		of_match_device(at91_gpio_of_match, &pdev->dev)->data;
+ 	at91_chip->pioc_virq = irq;
+ 	at91_chip->pioc_idx = alias_idx;
+ 
+-	at91_chip->clock = devm_clk_get(&pdev->dev, NULL);
+-	if (IS_ERR(at91_chip->clock)) {
+-		dev_err(&pdev->dev, "failed to get clock, ignoring.\n");
+-		ret = PTR_ERR(at91_chip->clock);
+-		goto err;
+-	}
+-
+-	ret = clk_prepare_enable(at91_chip->clock);
+-	if (ret) {
+-		dev_err(&pdev->dev, "failed to prepare and enable clock, ignoring.\n");
+-		goto clk_enable_err;
+-	}
++	at91_chip->clock = devm_clk_get_enabled(&pdev->dev, NULL);
++	if (IS_ERR(at91_chip->clock))
++		return dev_err_probe(dev, PTR_ERR(at91_chip->clock), "failed to get clock, ignoring.\n");
+ 
+ 	at91_chip->chip = at91_gpio_template;
+ 	at91_chip->id = alias_idx;
+@@ -1873,17 +1853,15 @@ static int at91_gpio_probe(struct platform_device *pdev)
+ 
+ 	if (!of_property_read_u32(np, "#gpio-lines", &ngpio)) {
+ 		if (ngpio >= MAX_NB_GPIO_PER_BANK)
+-			pr_err("at91_gpio.%d, gpio-nb >= %d failback to %d\n",
+-			       alias_idx, MAX_NB_GPIO_PER_BANK, MAX_NB_GPIO_PER_BANK);
++			dev_err(dev, "at91_gpio.%d, gpio-nb >= %d failback to %d\n",
++				alias_idx, MAX_NB_GPIO_PER_BANK, MAX_NB_GPIO_PER_BANK);
+ 		else
+ 			chip->ngpio = ngpio;
+ 	}
+ 
+ 	names = devm_kasprintf_strarray(dev, "pio", chip->ngpio);
+-	if (!names) {
+-		ret = -ENOMEM;
+-		goto clk_enable_err;
+-	}
++	if (IS_ERR(names))
++		return PTR_ERR(names);
+ 
+ 	for (i = 0; i < chip->ngpio; i++)
+ 		strreplace(names[i], '-', alias_idx + 'A');
+@@ -1900,11 +1878,11 @@ static int at91_gpio_probe(struct platform_device *pdev)
+ 
+ 	ret = at91_gpio_of_irq_setup(pdev, at91_chip);
+ 	if (ret)
+-		goto gpiochip_add_err;
++		return ret;
+ 
+ 	ret = gpiochip_add_data(chip, at91_chip);
+ 	if (ret)
+-		goto gpiochip_add_err;
++		return ret;
+ 
+ 	gpio_chips[alias_idx] = at91_chip;
+ 	platform_set_drvdata(pdev, at91_chip);
+@@ -1913,14 +1891,6 @@ static int at91_gpio_probe(struct platform_device *pdev)
+ 	dev_info(&pdev->dev, "at address %p\n", at91_chip->regbase);
+ 
+ 	return 0;
+-
+-gpiochip_add_err:
+-clk_enable_err:
+-	clk_disable_unprepare(at91_chip->clock);
+-err:
+-	dev_err(&pdev->dev, "Failure %i for GPIO %i\n", ret, alias_idx);
+-
+-	return ret;
+ }
+ 
+ static const struct dev_pm_ops at91_gpio_pm_ops = {
+diff --git a/drivers/pinctrl/pinctrl-microchip-sgpio.c b/drivers/pinctrl/pinctrl-microchip-sgpio.c
+index 4794602316e7d..666d8b7cdbad3 100644
+--- a/drivers/pinctrl/pinctrl-microchip-sgpio.c
++++ b/drivers/pinctrl/pinctrl-microchip-sgpio.c
+@@ -818,6 +818,9 @@ static int microchip_sgpio_register_bank(struct device *dev,
+ 	pctl_desc->name = devm_kasprintf(dev, GFP_KERNEL, "%s-%sput",
+ 					 dev_name(dev),
+ 					 bank->is_input ? "in" : "out");
++	if (!pctl_desc->name)
++		return -ENOMEM;
++
+ 	pctl_desc->pctlops = &sgpio_pctl_ops;
+ 	pctl_desc->pmxops = &sgpio_pmx_ops;
+ 	pctl_desc->confops = &sgpio_confops;
+diff --git a/drivers/pinctrl/sunplus/sppctl.c b/drivers/pinctrl/sunplus/sppctl.c
+index 6bbbab3a6fdf3..150996949ede7 100644
+--- a/drivers/pinctrl/sunplus/sppctl.c
++++ b/drivers/pinctrl/sunplus/sppctl.c
+@@ -834,11 +834,6 @@ static int sppctl_dt_node_to_map(struct pinctrl_dev *pctldev, struct device_node
+ 	int i, size = 0;
+ 
+ 	list = of_get_property(np_config, "sunplus,pins", &size);
+-
+-	if (nmG <= 0)
+-		nmG = 0;
+-
+-	parent = of_get_parent(np_config);
+ 	*num_maps = size / sizeof(*list);
+ 
+ 	/*
+@@ -866,10 +861,14 @@ static int sppctl_dt_node_to_map(struct pinctrl_dev *pctldev, struct device_node
+ 		}
+ 	}
+ 
++	if (nmG <= 0)
++		nmG = 0;
++
+ 	*map = kcalloc(*num_maps + nmG, sizeof(**map), GFP_KERNEL);
+-	if (*map == NULL)
++	if (!(*map))
+ 		return -ENOMEM;
+ 
++	parent = of_get_parent(np_config);
+ 	for (i = 0; i < (*num_maps); i++) {
+ 		dt_pin = be32_to_cpu(list[i]);
+ 		pin_num = FIELD_GET(GENMASK(31, 24), dt_pin);
+@@ -883,6 +882,8 @@ static int sppctl_dt_node_to_map(struct pinctrl_dev *pctldev, struct device_node
+ 			(*map)[i].data.configs.num_configs = 1;
+ 			(*map)[i].data.configs.group_or_pin = pin_get_name(pctldev, pin_num);
+ 			configs = kmalloc(sizeof(*configs), GFP_KERNEL);
++			if (!configs)
++				goto sppctl_map_err;
+ 			*configs = FIELD_GET(GENMASK(7, 0), dt_pin);
+ 			(*map)[i].data.configs.configs = configs;
+ 
+@@ -896,6 +897,8 @@ static int sppctl_dt_node_to_map(struct pinctrl_dev *pctldev, struct device_node
+ 			(*map)[i].data.configs.num_configs = 1;
+ 			(*map)[i].data.configs.group_or_pin = pin_get_name(pctldev, pin_num);
+ 			configs = kmalloc(sizeof(*configs), GFP_KERNEL);
++			if (!configs)
++				goto sppctl_map_err;
+ 			*configs = SPPCTL_IOP_CONFIGS;
+ 			(*map)[i].data.configs.configs = configs;
+ 
+@@ -965,6 +968,14 @@ static int sppctl_dt_node_to_map(struct pinctrl_dev *pctldev, struct device_node
+ 	of_node_put(parent);
+ 	dev_dbg(pctldev->dev, "%d pins mapped\n", *num_maps);
+ 	return 0;
++
++sppctl_map_err:
++	for (i = 0; i < (*num_maps); i++)
++		if ((*map)[i].type == PIN_MAP_TYPE_CONFIGS_PIN)
++			kfree((*map)[i].data.configs.configs);
++	kfree(*map);
++	of_node_put(parent);
++	return -ENOMEM;
+ }
+ 
+ static const struct pinctrl_ops sppctl_pctl_ops = {
+diff --git a/drivers/pinctrl/tegra/pinctrl-tegra.c b/drivers/pinctrl/tegra/pinctrl-tegra.c
+index 1729b7ddfa946..21e08fbd1df0e 100644
+--- a/drivers/pinctrl/tegra/pinctrl-tegra.c
++++ b/drivers/pinctrl/tegra/pinctrl-tegra.c
+@@ -232,7 +232,7 @@ static const char *tegra_pinctrl_get_func_name(struct pinctrl_dev *pctldev,
+ {
+ 	struct tegra_pmx *pmx = pinctrl_dev_get_drvdata(pctldev);
+ 
+-	return pmx->soc->functions[function].name;
++	return pmx->functions[function].name;
+ }
+ 
+ static int tegra_pinctrl_get_func_groups(struct pinctrl_dev *pctldev,
+@@ -242,8 +242,8 @@ static int tegra_pinctrl_get_func_groups(struct pinctrl_dev *pctldev,
+ {
+ 	struct tegra_pmx *pmx = pinctrl_dev_get_drvdata(pctldev);
+ 
+-	*groups = pmx->soc->functions[function].groups;
+-	*num_groups = pmx->soc->functions[function].ngroups;
++	*groups = pmx->functions[function].groups;
++	*num_groups = pmx->functions[function].ngroups;
+ 
+ 	return 0;
+ }
+@@ -795,10 +795,17 @@ int tegra_pinctrl_probe(struct platform_device *pdev,
+ 	if (!pmx->group_pins)
+ 		return -ENOMEM;
+ 
++	pmx->functions = devm_kcalloc(&pdev->dev, pmx->soc->nfunctions,
++				      sizeof(*pmx->functions), GFP_KERNEL);
++	if (!pmx->functions)
++		return -ENOMEM;
++
+ 	group_pins = pmx->group_pins;
++
+ 	for (fn = 0; fn < soc_data->nfunctions; fn++) {
+-		struct tegra_function *func = &soc_data->functions[fn];
++		struct tegra_function *func = &pmx->functions[fn];
+ 
++		func->name = pmx->soc->functions[fn];
+ 		func->groups = group_pins;
+ 
+ 		for (gn = 0; gn < soc_data->ngroups; gn++) {
+diff --git a/drivers/pinctrl/tegra/pinctrl-tegra.h b/drivers/pinctrl/tegra/pinctrl-tegra.h
+index 6130cba7cce54..b3289bdf727d8 100644
+--- a/drivers/pinctrl/tegra/pinctrl-tegra.h
++++ b/drivers/pinctrl/tegra/pinctrl-tegra.h
+@@ -13,6 +13,7 @@ struct tegra_pmx {
+ 	struct pinctrl_dev *pctl;
+ 
+ 	const struct tegra_pinctrl_soc_data *soc;
++	struct tegra_function *functions;
+ 	const char **group_pins;
+ 
+ 	struct pinctrl_gpio_range gpio_range;
+@@ -191,7 +192,7 @@ struct tegra_pinctrl_soc_data {
+ 	const char *gpio_compatible;
+ 	const struct pinctrl_pin_desc *pins;
+ 	unsigned npins;
+-	struct tegra_function *functions;
++	const char * const *functions;
+ 	unsigned nfunctions;
+ 	const struct tegra_pingroup *groups;
+ 	unsigned ngroups;
+diff --git a/drivers/pinctrl/tegra/pinctrl-tegra114.c b/drivers/pinctrl/tegra/pinctrl-tegra114.c
+index e72ab1eb23983..3d425b2018e78 100644
+--- a/drivers/pinctrl/tegra/pinctrl-tegra114.c
++++ b/drivers/pinctrl/tegra/pinctrl-tegra114.c
+@@ -1452,12 +1452,9 @@ enum tegra_mux {
+ 	TEGRA_MUX_VI_ALT3,
+ };
+ 
+-#define FUNCTION(fname)					\
+-	{						\
+-		.name = #fname,				\
+-	}
++#define FUNCTION(fname) #fname
+ 
+-static struct tegra_function tegra114_functions[] = {
++static const char * const tegra114_functions[] = {
+ 	FUNCTION(blink),
+ 	FUNCTION(cec),
+ 	FUNCTION(cldvfs),
+diff --git a/drivers/pinctrl/tegra/pinctrl-tegra124.c b/drivers/pinctrl/tegra/pinctrl-tegra124.c
+index 26096c6b967e2..2a50c5c7516c3 100644
+--- a/drivers/pinctrl/tegra/pinctrl-tegra124.c
++++ b/drivers/pinctrl/tegra/pinctrl-tegra124.c
+@@ -1611,12 +1611,9 @@ enum tegra_mux {
+ 	TEGRA_MUX_VIMCLK2_ALT,
+ };
+ 
+-#define FUNCTION(fname)					\
+-	{						\
+-		.name = #fname,				\
+-	}
++#define FUNCTION(fname) #fname
+ 
+-static struct tegra_function tegra124_functions[] = {
++static const char * const tegra124_functions[] = {
+ 	FUNCTION(blink),
+ 	FUNCTION(ccla),
+ 	FUNCTION(cec),
+diff --git a/drivers/pinctrl/tegra/pinctrl-tegra194.c b/drivers/pinctrl/tegra/pinctrl-tegra194.c
+index 277973c884344..69f58df628977 100644
+--- a/drivers/pinctrl/tegra/pinctrl-tegra194.c
++++ b/drivers/pinctrl/tegra/pinctrl-tegra194.c
+@@ -1189,12 +1189,9 @@ enum tegra_mux_dt {
+ };
+ 
+ /* Make list of each function name */
+-#define TEGRA_PIN_FUNCTION(lid)			\
+-	{					\
+-		.name = #lid,			\
+-	}
++#define TEGRA_PIN_FUNCTION(lid) #lid
+ 
+-static struct tegra_function tegra194_functions[] = {
++static const char * const tegra194_functions[] = {
+ 	TEGRA_PIN_FUNCTION(rsvd0),
+ 	TEGRA_PIN_FUNCTION(rsvd1),
+ 	TEGRA_PIN_FUNCTION(rsvd2),
+diff --git a/drivers/pinctrl/tegra/pinctrl-tegra20.c b/drivers/pinctrl/tegra/pinctrl-tegra20.c
+index 0dc2cf0d05b1e..737fc2000f66b 100644
+--- a/drivers/pinctrl/tegra/pinctrl-tegra20.c
++++ b/drivers/pinctrl/tegra/pinctrl-tegra20.c
+@@ -1889,12 +1889,9 @@ enum tegra_mux {
+ 	TEGRA_MUX_XIO,
+ };
+ 
+-#define FUNCTION(fname)					\
+-	{						\
+-		.name = #fname,				\
+-	}
++#define FUNCTION(fname) #fname
+ 
+-static struct tegra_function tegra20_functions[] = {
++static const char * const tegra20_functions[] = {
+ 	FUNCTION(ahb_clk),
+ 	FUNCTION(apb_clk),
+ 	FUNCTION(audio_sync),
+diff --git a/drivers/pinctrl/tegra/pinctrl-tegra210.c b/drivers/pinctrl/tegra/pinctrl-tegra210.c
+index b480f607fa16f..9bb29146dfff7 100644
+--- a/drivers/pinctrl/tegra/pinctrl-tegra210.c
++++ b/drivers/pinctrl/tegra/pinctrl-tegra210.c
+@@ -1185,12 +1185,9 @@ enum tegra_mux {
+ 	TEGRA_MUX_VIMCLK2,
+ };
+ 
+-#define FUNCTION(fname)					\
+-	{						\
+-		.name = #fname,				\
+-	}
++#define FUNCTION(fname) #fname
+ 
+-static struct tegra_function tegra210_functions[] = {
++static const char * const tegra210_functions[] = {
+ 	FUNCTION(aud),
+ 	FUNCTION(bcl),
+ 	FUNCTION(blink),
+diff --git a/drivers/pinctrl/tegra/pinctrl-tegra30.c b/drivers/pinctrl/tegra/pinctrl-tegra30.c
+index 7299a371827f1..de5aa2d4d28d3 100644
+--- a/drivers/pinctrl/tegra/pinctrl-tegra30.c
++++ b/drivers/pinctrl/tegra/pinctrl-tegra30.c
+@@ -2010,12 +2010,9 @@ enum tegra_mux {
+ 	TEGRA_MUX_VI_ALT3,
+ };
+ 
+-#define FUNCTION(fname)					\
+-	{						\
+-		.name = #fname,				\
+-	}
++#define FUNCTION(fname) #fname
+ 
+-static struct tegra_function tegra30_functions[] = {
++static const char * const tegra30_functions[] = {
+ 	FUNCTION(blink),
+ 	FUNCTION(cec),
+ 	FUNCTION(clk_12m_out),
+diff --git a/drivers/platform/x86/dell/dell-rbtn.c b/drivers/platform/x86/dell/dell-rbtn.c
+index aa0e6c9074942..c8fcb537fd65d 100644
+--- a/drivers/platform/x86/dell/dell-rbtn.c
++++ b/drivers/platform/x86/dell/dell-rbtn.c
+@@ -395,16 +395,16 @@ static int rbtn_add(struct acpi_device *device)
+ 		return -EINVAL;
+ 	}
+ 
++	rbtn_data = devm_kzalloc(&device->dev, sizeof(*rbtn_data), GFP_KERNEL);
++	if (!rbtn_data)
++		return -ENOMEM;
++
+ 	ret = rbtn_acquire(device, true);
+ 	if (ret < 0) {
+ 		dev_err(&device->dev, "Cannot enable device\n");
+ 		return ret;
+ 	}
+ 
+-	rbtn_data = devm_kzalloc(&device->dev, sizeof(*rbtn_data), GFP_KERNEL);
+-	if (!rbtn_data)
+-		return -ENOMEM;
+-
+ 	rbtn_data->type = type;
+ 	device->driver_data = rbtn_data;
+ 
+@@ -420,10 +420,12 @@ static int rbtn_add(struct acpi_device *device)
+ 		break;
+ 	default:
+ 		ret = -EINVAL;
++		break;
+ 	}
++	if (ret)
++		rbtn_acquire(device, false);
+ 
+ 	return ret;
+-
+ }
+ 
+ static void rbtn_remove(struct acpi_device *device)
+@@ -442,7 +444,6 @@ static void rbtn_remove(struct acpi_device *device)
+ 	}
+ 
+ 	rbtn_acquire(device, false);
+-	device->driver_data = NULL;
+ }
+ 
+ static void rbtn_notify(struct acpi_device *device, u32 event)
+diff --git a/drivers/platform/x86/intel/pmc/core.c b/drivers/platform/x86/intel/pmc/core.c
+index b9591969e0fa1..bed083525fbe7 100644
+--- a/drivers/platform/x86/intel/pmc/core.c
++++ b/drivers/platform/x86/intel/pmc/core.c
+@@ -1039,7 +1039,6 @@ static const struct x86_cpu_id intel_pmc_core_ids[] = {
+ 	X86_MATCH_INTEL_FAM6_MODEL(RAPTORLAKE_P,        tgl_core_init),
+ 	X86_MATCH_INTEL_FAM6_MODEL(RAPTORLAKE,		adl_core_init),
+ 	X86_MATCH_INTEL_FAM6_MODEL(RAPTORLAKE_S,	adl_core_init),
+-	X86_MATCH_INTEL_FAM6_MODEL(METEORLAKE,          mtl_core_init),
+ 	X86_MATCH_INTEL_FAM6_MODEL(METEORLAKE_L,	mtl_core_init),
+ 	{}
+ };
+diff --git a/drivers/platform/x86/intel/pmc/core.h b/drivers/platform/x86/intel/pmc/core.h
+index 810204d758ab3..d1c29fcd07ae9 100644
+--- a/drivers/platform/x86/intel/pmc/core.h
++++ b/drivers/platform/x86/intel/pmc/core.h
+@@ -247,6 +247,14 @@ enum ppfear_regs {
+ #define MTL_LPM_STATUS_LATCH_EN_OFFSET		0x16F8
+ #define MTL_LPM_STATUS_OFFSET			0x1700
+ #define MTL_LPM_LIVE_STATUS_OFFSET		0x175C
++#define MTL_PMC_LTR_IOE_PMC			0x1C0C
++#define MTL_PMC_LTR_ESE				0x1BAC
++#define MTL_SOCM_NUM_IP_IGN_ALLOWED		25
++#define MTL_SOC_PMC_MMIO_REG_LEN		0x2708
++#define MTL_PMC_LTR_SPG				0x1B74
++
++/* Meteor Lake PGD PFET Enable Ack Status */
++#define MTL_SOCM_PPFEAR_NUM_ENTRIES		8
+ 
+ extern const char *pmc_lpm_modes[];
+ 
+@@ -393,7 +401,25 @@ extern const struct pmc_bit_map adl_vnn_req_status_3_map[];
+ extern const struct pmc_bit_map adl_vnn_misc_status_map[];
+ extern const struct pmc_bit_map *adl_lpm_maps[];
+ extern const struct pmc_reg_map adl_reg_map;
+-extern const struct pmc_reg_map mtl_reg_map;
++extern const struct pmc_bit_map mtl_socm_pfear_map[];
++extern const struct pmc_bit_map *ext_mtl_socm_pfear_map[];
++extern const struct pmc_bit_map mtl_socm_ltr_show_map[];
++extern const struct pmc_bit_map mtl_socm_clocksource_status_map[];
++extern const struct pmc_bit_map mtl_socm_power_gating_status_0_map[];
++extern const struct pmc_bit_map mtl_socm_power_gating_status_1_map[];
++extern const struct pmc_bit_map mtl_socm_power_gating_status_2_map[];
++extern const struct pmc_bit_map mtl_socm_d3_status_0_map[];
++extern const struct pmc_bit_map mtl_socm_d3_status_1_map[];
++extern const struct pmc_bit_map mtl_socm_d3_status_2_map[];
++extern const struct pmc_bit_map mtl_socm_d3_status_3_map[];
++extern const struct pmc_bit_map mtl_socm_vnn_req_status_0_map[];
++extern const struct pmc_bit_map mtl_socm_vnn_req_status_1_map[];
++extern const struct pmc_bit_map mtl_socm_vnn_req_status_2_map[];
++extern const struct pmc_bit_map mtl_socm_vnn_req_status_3_map[];
++extern const struct pmc_bit_map mtl_socm_vnn_misc_status_map[];
++extern const struct pmc_bit_map mtl_socm_signal_status_map[];
++extern const struct pmc_bit_map *mtl_socm_lpm_maps[];
++extern const struct pmc_reg_map mtl_socm_reg_map;
+ 
+ extern void pmc_core_get_tgl_lpm_reqs(struct platform_device *pdev);
+ extern int pmc_core_send_ltr_ignore(struct pmc_dev *pmcdev, u32 value);
+diff --git a/drivers/platform/x86/intel/pmc/mtl.c b/drivers/platform/x86/intel/pmc/mtl.c
+index eeb3bd8c2502d..de9348e031003 100644
+--- a/drivers/platform/x86/intel/pmc/mtl.c
++++ b/drivers/platform/x86/intel/pmc/mtl.c
+@@ -10,28 +10,458 @@
+ 
+ #include "core.h"
+ 
+-const struct pmc_reg_map mtl_reg_map = {
+-	.pfear_sts = ext_tgl_pfear_map,
++/*
++ * Die Mapping to Product.
++ * Product SOCDie IOEDie PCHDie
++ * MTL-M   SOC-M  IOE-M  None
++ * MTL-P   SOC-M  IOE-P  None
++ * MTL-S   SOC-S  IOE-P  PCH-S
++ */
++
++const struct pmc_bit_map mtl_socm_pfear_map[] = {
++	{"PMC",                 BIT(0)},
++	{"OPI",                 BIT(1)},
++	{"SPI",                 BIT(2)},
++	{"XHCI",                BIT(3)},
++	{"SPA",                 BIT(4)},
++	{"SPB",                 BIT(5)},
++	{"SPC",                 BIT(6)},
++	{"GBE",                 BIT(7)},
++
++	{"SATA",                BIT(0)},
++	{"DSP0",                BIT(1)},
++	{"DSP1",                BIT(2)},
++	{"DSP2",                BIT(3)},
++	{"DSP3",                BIT(4)},
++	{"SPD",                 BIT(5)},
++	{"LPSS",                BIT(6)},
++	{"LPC",                 BIT(7)},
++
++	{"SMB",                 BIT(0)},
++	{"ISH",                 BIT(1)},
++	{"P2SB",                BIT(2)},
++	{"NPK_VNN",             BIT(3)},
++	{"SDX",                 BIT(4)},
++	{"SPE",                 BIT(5)},
++	{"FUSE",                BIT(6)},
++	{"SBR8",                BIT(7)},
++
++	{"RSVD24",              BIT(0)},
++	{"OTG",                 BIT(1)},
++	{"EXI",                 BIT(2)},
++	{"CSE",                 BIT(3)},
++	{"CSME_KVM",            BIT(4)},
++	{"CSME_PMT",            BIT(5)},
++	{"CSME_CLINK",          BIT(6)},
++	{"CSME_PTIO",           BIT(7)},
++
++	{"CSME_USBR",           BIT(0)},
++	{"CSME_SUSRAM",         BIT(1)},
++	{"CSME_SMT1",           BIT(2)},
++	{"RSVD35",              BIT(3)},
++	{"CSME_SMS2",           BIT(4)},
++	{"CSME_SMS",            BIT(5)},
++	{"CSME_RTC",            BIT(6)},
++	{"CSME_PSF",            BIT(7)},
++
++	{"SBR0",                BIT(0)},
++	{"SBR1",                BIT(1)},
++	{"SBR2",                BIT(2)},
++	{"SBR3",                BIT(3)},
++	{"SBR4",                BIT(4)},
++	{"SBR5",                BIT(5)},
++	{"RSVD46",              BIT(6)},
++	{"PSF1",                BIT(7)},
++
++	{"PSF2",                BIT(0)},
++	{"PSF3",                BIT(1)},
++	{"PSF4",                BIT(2)},
++	{"CNVI",                BIT(3)},
++	{"UFSX2",               BIT(4)},
++	{"EMMC",                BIT(5)},
++	{"SPF",                 BIT(6)},
++	{"SBR6",                BIT(7)},
++
++	{"SBR7",                BIT(0)},
++	{"NPK_AON",             BIT(1)},
++	{"HDA4",                BIT(2)},
++	{"HDA5",                BIT(3)},
++	{"HDA6",                BIT(4)},
++	{"PSF6",                BIT(5)},
++	{"RSVD62",              BIT(6)},
++	{"RSVD63",              BIT(7)},
++	{}
++};
++
++const struct pmc_bit_map *ext_mtl_socm_pfear_map[] = {
++	mtl_socm_pfear_map,
++	NULL
++};
++
++const struct pmc_bit_map mtl_socm_ltr_show_map[] = {
++	{"SOUTHPORT_A",		CNP_PMC_LTR_SPA},
++	{"SOUTHPORT_B",		CNP_PMC_LTR_SPB},
++	{"SATA",		CNP_PMC_LTR_SATA},
++	{"GIGABIT_ETHERNET",	CNP_PMC_LTR_GBE},
++	{"XHCI",		CNP_PMC_LTR_XHCI},
++	{"SOUTHPORT_F",		ADL_PMC_LTR_SPF},
++	{"ME",			CNP_PMC_LTR_ME},
++	{"SATA1",		CNP_PMC_LTR_EVA},
++	{"SOUTHPORT_C",		CNP_PMC_LTR_SPC},
++	{"HD_AUDIO",		CNP_PMC_LTR_AZ},
++	{"CNV",			CNP_PMC_LTR_CNV},
++	{"LPSS",		CNP_PMC_LTR_LPSS},
++	{"SOUTHPORT_D",		CNP_PMC_LTR_SPD},
++	{"SOUTHPORT_E",		CNP_PMC_LTR_SPE},
++	{"SATA2",		CNP_PMC_LTR_CAM},
++	{"ESPI",		CNP_PMC_LTR_ESPI},
++	{"SCC",			CNP_PMC_LTR_SCC},
++	{"ISH",                 CNP_PMC_LTR_ISH},
++	{"UFSX2",		CNP_PMC_LTR_UFSX2},
++	{"EMMC",		CNP_PMC_LTR_EMMC},
++	{"WIGIG",		ICL_PMC_LTR_WIGIG},
++	{"THC0",		TGL_PMC_LTR_THC0},
++	{"THC1",		TGL_PMC_LTR_THC1},
++	{"SOUTHPORT_G",		MTL_PMC_LTR_SPG},
++	{"ESE",                 MTL_PMC_LTR_ESE},
++	{"IOE_PMC",		MTL_PMC_LTR_IOE_PMC},
++
++	/* Below two cannot be used for LTR_IGNORE */
++	{"CURRENT_PLATFORM",	CNP_PMC_LTR_CUR_PLT},
++	{"AGGREGATED_SYSTEM",	CNP_PMC_LTR_CUR_ASLT},
++	{}
++};
++
++const struct pmc_bit_map mtl_socm_clocksource_status_map[] = {
++	{"AON2_OFF_STS",                 BIT(0)},
++	{"AON3_OFF_STS",                 BIT(1)},
++	{"AON4_OFF_STS",                 BIT(2)},
++	{"AON5_OFF_STS",                 BIT(3)},
++	{"AON1_OFF_STS",                 BIT(4)},
++	{"XTAL_LVM_OFF_STS",             BIT(5)},
++	{"MPFPW1_0_PLL_OFF_STS",         BIT(6)},
++	{"MPFPW1_1_PLL_OFF_STS",         BIT(7)},
++	{"USB3_PLL_OFF_STS",             BIT(8)},
++	{"AON3_SPL_OFF_STS",             BIT(9)},
++	{"MPFPW2_0_PLL_OFF_STS",         BIT(12)},
++	{"MPFPW3_0_PLL_OFF_STS",         BIT(13)},
++	{"XTAL_AGGR_OFF_STS",            BIT(17)},
++	{"USB2_PLL_OFF_STS",             BIT(18)},
++	{"FILTER_PLL_OFF_STS",           BIT(22)},
++	{"ACE_PLL_OFF_STS",              BIT(24)},
++	{"FABRIC_PLL_OFF_STS",           BIT(25)},
++	{"SOC_PLL_OFF_STS",              BIT(26)},
++	{"PCIFAB_PLL_OFF_STS",           BIT(27)},
++	{"REF_PLL_OFF_STS",              BIT(28)},
++	{"IMG_PLL_OFF_STS",              BIT(29)},
++	{"RTC_PLL_OFF_STS",              BIT(31)},
++	{}
++};
++
++const struct pmc_bit_map mtl_socm_power_gating_status_0_map[] = {
++	{"PMC_PGD0_PG_STS",              BIT(0)},
++	{"DMI_PGD0_PG_STS",              BIT(1)},
++	{"ESPISPI_PGD0_PG_STS",          BIT(2)},
++	{"XHCI_PGD0_PG_STS",             BIT(3)},
++	{"SPA_PGD0_PG_STS",              BIT(4)},
++	{"SPB_PGD0_PG_STS",              BIT(5)},
++	{"SPC_PGD0_PG_STS",              BIT(6)},
++	{"GBE_PGD0_PG_STS",              BIT(7)},
++	{"SATA_PGD0_PG_STS",             BIT(8)},
++	{"PSF13_PGD0_PG_STS",            BIT(9)},
++	{"SOC_D2D_PGD3_PG_STS",          BIT(10)},
++	{"MPFPW3_PGD0_PG_STS",           BIT(11)},
++	{"ESE_PGD0_PG_STS",              BIT(12)},
++	{"SPD_PGD0_PG_STS",              BIT(13)},
++	{"LPSS_PGD0_PG_STS",             BIT(14)},
++	{"LPC_PGD0_PG_STS",              BIT(15)},
++	{"SMB_PGD0_PG_STS",              BIT(16)},
++	{"ISH_PGD0_PG_STS",              BIT(17)},
++	{"P2S_PGD0_PG_STS",              BIT(18)},
++	{"NPK_PGD0_PG_STS",              BIT(19)},
++	{"DBG_SBR_PGD0_PG_STS",          BIT(20)},
++	{"SBRG_PGD0_PG_STS",             BIT(21)},
++	{"FUSE_PGD0_PG_STS",             BIT(22)},
++	{"SBR8_PGD0_PG_STS",             BIT(23)},
++	{"SOC_D2D_PGD2_PG_STS",          BIT(24)},
++	{"XDCI_PGD0_PG_STS",             BIT(25)},
++	{"EXI_PGD0_PG_STS",              BIT(26)},
++	{"CSE_PGD0_PG_STS",              BIT(27)},
++	{"KVMCC_PGD0_PG_STS",            BIT(28)},
++	{"PMT_PGD0_PG_STS",              BIT(29)},
++	{"CLINK_PGD0_PG_STS",            BIT(30)},
++	{"PTIO_PGD0_PG_STS",             BIT(31)},
++	{}
++};
++
++const struct pmc_bit_map mtl_socm_power_gating_status_1_map[] = {
++	{"USBR0_PGD0_PG_STS",            BIT(0)},
++	{"SUSRAM_PGD0_PG_STS",           BIT(1)},
++	{"SMT1_PGD0_PG_STS",             BIT(2)},
++	{"FIACPCB_U_PGD0_PG_STS",        BIT(3)},
++	{"SMS2_PGD0_PG_STS",             BIT(4)},
++	{"SMS1_PGD0_PG_STS",             BIT(5)},
++	{"CSMERTC_PGD0_PG_STS",          BIT(6)},
++	{"CSMEPSF_PGD0_PG_STS",          BIT(7)},
++	{"SBR0_PGD0_PG_STS",             BIT(8)},
++	{"SBR1_PGD0_PG_STS",             BIT(9)},
++	{"SBR2_PGD0_PG_STS",             BIT(10)},
++	{"SBR3_PGD0_PG_STS",             BIT(11)},
++	{"U3FPW1_PGD0_PG_STS",           BIT(12)},
++	{"SBR5_PGD0_PG_STS",             BIT(13)},
++	{"MPFPW1_PGD0_PG_STS",           BIT(14)},
++	{"UFSPW1_PGD0_PG_STS",           BIT(15)},
++	{"FIA_X_PGD0_PG_STS",            BIT(16)},
++	{"SOC_D2D_PGD0_PG_STS",          BIT(17)},
++	{"MPFPW2_PGD0_PG_STS",           BIT(18)},
++	{"CNVI_PGD0_PG_STS",             BIT(19)},
++	{"UFSX2_PGD0_PG_STS",            BIT(20)},
++	{"ENDBG_PGD0_PG_STS",            BIT(21)},
++	{"DBG_PSF_PGD0_PG_STS",          BIT(22)},
++	{"SBR6_PGD0_PG_STS",             BIT(23)},
++	{"SBR7_PGD0_PG_STS",             BIT(24)},
++	{"NPK_PGD1_PG_STS",              BIT(25)},
++	{"FIACPCB_X_PGD0_PG_STS",        BIT(26)},
++	{"DBC_PGD0_PG_STS",              BIT(27)},
++	{"FUSEGPSB_PGD0_PG_STS",         BIT(28)},
++	{"PSF6_PGD0_PG_STS",             BIT(29)},
++	{"PSF7_PGD0_PG_STS",             BIT(30)},
++	{"GBETSN1_PGD0_PG_STS",          BIT(31)},
++	{}
++};
++
++const struct pmc_bit_map mtl_socm_power_gating_status_2_map[] = {
++	{"PSF8_PGD0_PG_STS",             BIT(0)},
++	{"FIA_PGD0_PG_STS",              BIT(1)},
++	{"SOC_D2D_PGD1_PG_STS",          BIT(2)},
++	{"FIA_U_PGD0_PG_STS",            BIT(3)},
++	{"TAM_PGD0_PG_STS",              BIT(4)},
++	{"GBETSN_PGD0_PG_STS",           BIT(5)},
++	{"TBTLSX_PGD0_PG_STS",           BIT(6)},
++	{"THC0_PGD0_PG_STS",             BIT(7)},
++	{"THC1_PGD0_PG_STS",             BIT(8)},
++	{"PMC_PGD1_PG_STS",              BIT(9)},
++	{"GNA_PGD0_PG_STS",              BIT(10)},
++	{"ACE_PGD0_PG_STS",              BIT(11)},
++	{"ACE_PGD1_PG_STS",              BIT(12)},
++	{"ACE_PGD2_PG_STS",              BIT(13)},
++	{"ACE_PGD3_PG_STS",              BIT(14)},
++	{"ACE_PGD4_PG_STS",              BIT(15)},
++	{"ACE_PGD5_PG_STS",              BIT(16)},
++	{"ACE_PGD6_PG_STS",              BIT(17)},
++	{"ACE_PGD7_PG_STS",              BIT(18)},
++	{"ACE_PGD8_PG_STS",              BIT(19)},
++	{"FIA_PGS_PGD0_PG_STS",          BIT(20)},
++	{"FIACPCB_PGS_PGD0_PG_STS",      BIT(21)},
++	{"FUSEPMSB_PGD0_PG_STS",         BIT(22)},
++	{}
++};
++
++const struct pmc_bit_map mtl_socm_d3_status_0_map[] = {
++	{"LPSS_D3_STS",                  BIT(3)},
++	{"XDCI_D3_STS",                  BIT(4)},
++	{"XHCI_D3_STS",                  BIT(5)},
++	{"SPA_D3_STS",                   BIT(12)},
++	{"SPB_D3_STS",                   BIT(13)},
++	{"SPC_D3_STS",                   BIT(14)},
++	{"SPD_D3_STS",                   BIT(15)},
++	{"ESPISPI_D3_STS",               BIT(18)},
++	{"SATA_D3_STS",                  BIT(20)},
++	{"PSTH_D3_STS",                  BIT(21)},
++	{"DMI_D3_STS",                   BIT(22)},
++	{}
++};
++
++const struct pmc_bit_map mtl_socm_d3_status_1_map[] = {
++	{"GBETSN1_D3_STS",               BIT(14)},
++	{"GBE_D3_STS",                   BIT(19)},
++	{"ITSS_D3_STS",                  BIT(23)},
++	{"P2S_D3_STS",                   BIT(24)},
++	{"CNVI_D3_STS",                  BIT(27)},
++	{"UFSX2_D3_STS",                 BIT(28)},
++	{}
++};
++
++const struct pmc_bit_map mtl_socm_d3_status_2_map[] = {
++	{"GNA_D3_STS",                   BIT(0)},
++	{"CSMERTC_D3_STS",               BIT(1)},
++	{"SUSRAM_D3_STS",                BIT(2)},
++	{"CSE_D3_STS",                   BIT(4)},
++	{"KVMCC_D3_STS",                 BIT(5)},
++	{"USBR0_D3_STS",                 BIT(6)},
++	{"ISH_D3_STS",                   BIT(7)},
++	{"SMT1_D3_STS",                  BIT(8)},
++	{"SMT2_D3_STS",                  BIT(9)},
++	{"SMT3_D3_STS",                  BIT(10)},
++	{"CLINK_D3_STS",                 BIT(14)},
++	{"PTIO_D3_STS",                  BIT(16)},
++	{"PMT_D3_STS",                   BIT(17)},
++	{"SMS1_D3_STS",                  BIT(18)},
++	{"SMS2_D3_STS",                  BIT(19)},
++	{}
++};
++
++const struct pmc_bit_map mtl_socm_d3_status_3_map[] = {
++	{"ESE_D3_STS",                   BIT(2)},
++	{"GBETSN_D3_STS",                BIT(13)},
++	{"THC0_D3_STS",                  BIT(14)},
++	{"THC1_D3_STS",                  BIT(15)},
++	{"ACE_D3_STS",                   BIT(23)},
++	{}
++};
++
++const struct pmc_bit_map mtl_socm_vnn_req_status_0_map[] = {
++	{"LPSS_VNN_REQ_STS",             BIT(3)},
++	{"FIA_VNN_REQ_STS",              BIT(17)},
++	{"ESPISPI_VNN_REQ_STS",          BIT(18)},
++	{}
++};
++
++const struct pmc_bit_map mtl_socm_vnn_req_status_1_map[] = {
++	{"NPK_VNN_REQ_STS",              BIT(4)},
++	{"DFXAGG_VNN_REQ_STS",           BIT(8)},
++	{"EXI_VNN_REQ_STS",              BIT(9)},
++	{"P2D_VNN_REQ_STS",              BIT(18)},
++	{"GBE_VNN_REQ_STS",              BIT(19)},
++	{"SMB_VNN_REQ_STS",              BIT(25)},
++	{"LPC_VNN_REQ_STS",              BIT(26)},
++	{}
++};
++
++const struct pmc_bit_map mtl_socm_vnn_req_status_2_map[] = {
++	{"CSMERTC_VNN_REQ_STS",          BIT(1)},
++	{"CSE_VNN_REQ_STS",              BIT(4)},
++	{"ISH_VNN_REQ_STS",              BIT(7)},
++	{"SMT1_VNN_REQ_STS",             BIT(8)},
++	{"CLINK_VNN_REQ_STS",            BIT(14)},
++	{"SMS1_VNN_REQ_STS",             BIT(18)},
++	{"SMS2_VNN_REQ_STS",             BIT(19)},
++	{"GPIOCOM4_VNN_REQ_STS",         BIT(20)},
++	{"GPIOCOM3_VNN_REQ_STS",         BIT(21)},
++	{"GPIOCOM2_VNN_REQ_STS",         BIT(22)},
++	{"GPIOCOM1_VNN_REQ_STS",         BIT(23)},
++	{"GPIOCOM0_VNN_REQ_STS",         BIT(24)},
++	{}
++};
++
++const struct pmc_bit_map mtl_socm_vnn_req_status_3_map[] = {
++	{"ESE_VNN_REQ_STS",              BIT(2)},
++	{"DTS0_VNN_REQ_STS",             BIT(7)},
++	{"GPIOCOM5_VNN_REQ_STS",         BIT(11)},
++	{}
++};
++
++const struct pmc_bit_map mtl_socm_vnn_misc_status_map[] = {
++	{"CPU_C10_REQ_STS",              BIT(0)},
++	{"TS_OFF_REQ_STS",               BIT(1)},
++	{"PNDE_MET_REQ_STS",             BIT(2)},
++	{"PCIE_DEEP_PM_REQ_STS",         BIT(3)},
++	{"PMC_CLK_THROTTLE_EN_REQ_STS",  BIT(4)},
++	{"NPK_VNNAON_REQ_STS",           BIT(5)},
++	{"VNN_SOC_REQ_STS",              BIT(6)},
++	{"ISH_VNNAON_REQ_STS",           BIT(7)},
++	{"IOE_COND_MET_S02I2_0_REQ_STS", BIT(8)},
++	{"IOE_COND_MET_S02I2_1_REQ_STS", BIT(9)},
++	{"IOE_COND_MET_S02I2_2_REQ_STS", BIT(10)},
++	{"PLT_GREATER_REQ_STS",          BIT(11)},
++	{"PCIE_CLKREQ_REQ_STS",          BIT(12)},
++	{"PMC_IDLE_FB_OCP_REQ_STS",      BIT(13)},
++	{"PM_SYNC_STATES_REQ_STS",       BIT(14)},
++	{"EA_REQ_STS",                   BIT(15)},
++	{"MPHY_CORE_OFF_REQ_STS",        BIT(16)},
++	{"BRK_EV_EN_REQ_STS",            BIT(17)},
++	{"AUTO_DEMO_EN_REQ_STS",         BIT(18)},
++	{"ITSS_CLK_SRC_REQ_STS",         BIT(19)},
++	{"LPC_CLK_SRC_REQ_STS",          BIT(20)},
++	{"ARC_IDLE_REQ_STS",             BIT(21)},
++	{"MPHY_SUS_REQ_STS",             BIT(22)},
++	{"FIA_DEEP_PM_REQ_STS",          BIT(23)},
++	{"UXD_CONNECTED_REQ_STS",        BIT(24)},
++	{"ARC_INTERRUPT_WAKE_REQ_STS",   BIT(25)},
++	{"USB2_VNNAON_ACT_REQ_STS",      BIT(26)},
++	{"PRE_WAKE0_REQ_STS",            BIT(27)},
++	{"PRE_WAKE1_REQ_STS",            BIT(28)},
++	{"PRE_WAKE2_EN_REQ_STS",         BIT(29)},
++	{"WOV_REQ_STS",                  BIT(30)},
++	{"CNVI_V1P05_REQ_STS",           BIT(31)},
++	{}
++};
++
++const struct pmc_bit_map mtl_socm_signal_status_map[] = {
++	{"LSX_Wake0_En_STS",             BIT(0)},
++	{"LSX_Wake0_Pol_STS",            BIT(1)},
++	{"LSX_Wake1_En_STS",             BIT(2)},
++	{"LSX_Wake1_Pol_STS",            BIT(3)},
++	{"LSX_Wake2_En_STS",             BIT(4)},
++	{"LSX_Wake2_Pol_STS",            BIT(5)},
++	{"LSX_Wake3_En_STS",             BIT(6)},
++	{"LSX_Wake3_Pol_STS",            BIT(7)},
++	{"LSX_Wake4_En_STS",             BIT(8)},
++	{"LSX_Wake4_Pol_STS",            BIT(9)},
++	{"LSX_Wake5_En_STS",             BIT(10)},
++	{"LSX_Wake5_Pol_STS",            BIT(11)},
++	{"LSX_Wake6_En_STS",             BIT(12)},
++	{"LSX_Wake6_Pol_STS",            BIT(13)},
++	{"LSX_Wake7_En_STS",             BIT(14)},
++	{"LSX_Wake7_Pol_STS",            BIT(15)},
++	{"LPSS_Wake0_En_STS",            BIT(16)},
++	{"LPSS_Wake0_Pol_STS",           BIT(17)},
++	{"LPSS_Wake1_En_STS",            BIT(18)},
++	{"LPSS_Wake1_Pol_STS",           BIT(19)},
++	{"Int_Timer_SS_Wake0_En_STS",    BIT(20)},
++	{"Int_Timer_SS_Wake0_Pol_STS",   BIT(21)},
++	{"Int_Timer_SS_Wake1_En_STS",    BIT(22)},
++	{"Int_Timer_SS_Wake1_Pol_STS",   BIT(23)},
++	{"Int_Timer_SS_Wake2_En_STS",    BIT(24)},
++	{"Int_Timer_SS_Wake2_Pol_STS",   BIT(25)},
++	{"Int_Timer_SS_Wake3_En_STS",    BIT(26)},
++	{"Int_Timer_SS_Wake3_Pol_STS",   BIT(27)},
++	{"Int_Timer_SS_Wake4_En_STS",    BIT(28)},
++	{"Int_Timer_SS_Wake4_Pol_STS",   BIT(29)},
++	{"Int_Timer_SS_Wake5_En_STS",    BIT(30)},
++	{"Int_Timer_SS_Wake5_Pol_STS",   BIT(31)},
++	{}
++};
++
++const struct pmc_bit_map *mtl_socm_lpm_maps[] = {
++	mtl_socm_clocksource_status_map,
++	mtl_socm_power_gating_status_0_map,
++	mtl_socm_power_gating_status_1_map,
++	mtl_socm_power_gating_status_2_map,
++	mtl_socm_d3_status_0_map,
++	mtl_socm_d3_status_1_map,
++	mtl_socm_d3_status_2_map,
++	mtl_socm_d3_status_3_map,
++	mtl_socm_vnn_req_status_0_map,
++	mtl_socm_vnn_req_status_1_map,
++	mtl_socm_vnn_req_status_2_map,
++	mtl_socm_vnn_req_status_3_map,
++	mtl_socm_vnn_misc_status_map,
++	mtl_socm_signal_status_map,
++	NULL
++};
++
++const struct pmc_reg_map mtl_socm_reg_map = {
++	.pfear_sts = ext_mtl_socm_pfear_map,
+ 	.slp_s0_offset = CNP_PMC_SLP_S0_RES_COUNTER_OFFSET,
+ 	.slp_s0_res_counter_step = TGL_PMC_SLP_S0_RES_COUNTER_STEP,
+-	.ltr_show_sts = adl_ltr_show_map,
++	.ltr_show_sts = mtl_socm_ltr_show_map,
+ 	.msr_sts = msr_map,
+ 	.ltr_ignore_offset = CNP_PMC_LTR_IGNORE_OFFSET,
+-	.regmap_length = CNP_PMC_MMIO_REG_LEN,
++	.regmap_length = MTL_SOC_PMC_MMIO_REG_LEN,
+ 	.ppfear0_offset = CNP_PMC_HOST_PPFEAR0A,
+-	.ppfear_buckets = ICL_PPFEAR_NUM_ENTRIES,
++	.ppfear_buckets = MTL_SOCM_PPFEAR_NUM_ENTRIES,
+ 	.pm_cfg_offset = CNP_PMC_PM_CFG_OFFSET,
+ 	.pm_read_disable_bit = CNP_PMC_READ_DISABLE_BIT,
+-	.ltr_ignore_max = ADL_NUM_IP_IGN_ALLOWED,
+-	.lpm_num_modes = ADL_LPM_NUM_MODES,
+ 	.lpm_num_maps = ADL_LPM_NUM_MAPS,
++	.ltr_ignore_max = MTL_SOCM_NUM_IP_IGN_ALLOWED,
+ 	.lpm_res_counter_step_x2 = TGL_PMC_LPM_RES_COUNTER_STEP_X2,
+ 	.etr3_offset = ETR3_OFFSET,
+ 	.lpm_sts_latch_en_offset = MTL_LPM_STATUS_LATCH_EN_OFFSET,
+ 	.lpm_priority_offset = MTL_LPM_PRI_OFFSET,
+ 	.lpm_en_offset = MTL_LPM_EN_OFFSET,
+ 	.lpm_residency_offset = MTL_LPM_RESIDENCY_OFFSET,
+-	.lpm_sts = adl_lpm_maps,
++	.lpm_sts = mtl_socm_lpm_maps,
+ 	.lpm_status_offset = MTL_LPM_STATUS_OFFSET,
+ 	.lpm_live_status_offset = MTL_LPM_LIVE_STATUS_OFFSET,
+ };
+@@ -47,6 +477,6 @@ void mtl_core_configure(struct pmc_dev *pmcdev)
+ 
+ void mtl_core_init(struct pmc_dev *pmcdev)
+ {
+-	pmcdev->map = &mtl_reg_map;
++	pmcdev->map = &mtl_socm_reg_map;
+ 	pmcdev->core_configure = mtl_core_configure;
+ }
+diff --git a/drivers/platform/x86/lenovo-yogabook-wmi.c b/drivers/platform/x86/lenovo-yogabook-wmi.c
+index 5f4bd1eec38a9..d57fcc8388519 100644
+--- a/drivers/platform/x86/lenovo-yogabook-wmi.c
++++ b/drivers/platform/x86/lenovo-yogabook-wmi.c
+@@ -2,7 +2,6 @@
+ /* WMI driver for Lenovo Yoga Book YB1-X90* / -X91* tablets */
+ 
+ #include <linux/acpi.h>
+-#include <linux/devm-helpers.h>
+ #include <linux/gpio/consumer.h>
+ #include <linux/gpio/machine.h>
+ #include <linux/interrupt.h>
+@@ -248,10 +247,7 @@ static int yogabook_wmi_probe(struct wmi_device *wdev, const void *context)
+ 	data->brightness = YB_KBD_BL_DEFAULT;
+ 	set_bit(YB_KBD_IS_ON, &data->flags);
+ 	set_bit(YB_DIGITIZER_IS_ON, &data->flags);
+-
+-	r = devm_work_autocancel(&wdev->dev, &data->work, yogabook_wmi_work);
+-	if (r)
+-		return r;
++	INIT_WORK(&data->work, yogabook_wmi_work);
+ 
+ 	data->kbd_adev = acpi_dev_get_first_match_dev("GDIX1001", NULL, -1);
+ 	if (!data->kbd_adev) {
+@@ -299,10 +295,12 @@ static int yogabook_wmi_probe(struct wmi_device *wdev, const void *context)
+ 	}
+ 	data->backside_hall_irq = r;
+ 
+-	r = devm_request_irq(&wdev->dev, data->backside_hall_irq,
+-			     yogabook_backside_hall_irq,
+-			     IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING,
+-			     "backside_hall_sw", data);
++	/* Set default brightness before enabling the IRQ */
++	yogabook_wmi_set_kbd_backlight(data->wdev, YB_KBD_BL_DEFAULT);
++
++	r = request_irq(data->backside_hall_irq, yogabook_backside_hall_irq,
++			IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING,
++			"backside_hall_sw", data);
+ 	if (r) {
+ 		dev_err_probe(&wdev->dev, r, "Requesting backside_hall_sw IRQ\n");
+ 		goto error_put_devs;
+@@ -318,11 +316,14 @@ static int yogabook_wmi_probe(struct wmi_device *wdev, const void *context)
+ 	r = devm_led_classdev_register(&wdev->dev, &data->kbd_bl_led);
+ 	if (r < 0) {
+ 		dev_err_probe(&wdev->dev, r, "Registering backlight LED device\n");
+-		goto error_put_devs;
++		goto error_free_irq;
+ 	}
+ 
+ 	return 0;
+ 
++error_free_irq:
++	free_irq(data->backside_hall_irq, data);
++	cancel_work_sync(&data->work);
+ error_put_devs:
+ 	put_device(data->dig_dev);
+ 	put_device(data->kbd_dev);
+@@ -334,6 +335,19 @@ error_put_devs:
+ static void yogabook_wmi_remove(struct wmi_device *wdev)
+ {
+ 	struct yogabook_wmi *data = dev_get_drvdata(&wdev->dev);
++	int r = 0;
++
++	free_irq(data->backside_hall_irq, data);
++	cancel_work_sync(&data->work);
++
++	if (!test_bit(YB_KBD_IS_ON, &data->flags))
++		r |= device_reprobe(data->kbd_dev);
++
++	if (!test_bit(YB_DIGITIZER_IS_ON, &data->flags))
++		r |= device_reprobe(data->dig_dev);
++
++	if (r)
++		dev_warn(&wdev->dev, "Reprobe of devices failed\n");
+ 
+ 	put_device(data->dig_dev);
+ 	put_device(data->kbd_dev);
+diff --git a/drivers/platform/x86/think-lmi.c b/drivers/platform/x86/think-lmi.c
+index 78dc82bda4dde..4b7f2a969dfec 100644
+--- a/drivers/platform/x86/think-lmi.c
++++ b/drivers/platform/x86/think-lmi.c
+@@ -14,6 +14,7 @@
+ #include <linux/acpi.h>
+ #include <linux/errno.h>
+ #include <linux/fs.h>
++#include <linux/mutex.h>
+ #include <linux/string.h>
+ #include <linux/types.h>
+ #include <linux/dmi.h>
+@@ -171,7 +172,7 @@ MODULE_PARM_DESC(debug_support, "Enable debug command support");
+ #define TLMI_POP_PWD (1 << 0)
+ #define TLMI_PAP_PWD (1 << 1)
+ #define TLMI_HDD_PWD (1 << 2)
+-#define TLMI_SYS_PWD (1 << 3)
++#define TLMI_SMP_PWD (1 << 6) /* System Management */
+ #define TLMI_CERT    (1 << 7)
+ 
+ #define to_tlmi_pwd_setting(kobj)  container_of(kobj, struct tlmi_pwd_setting, kobj)
+@@ -195,6 +196,7 @@ static const char * const level_options[] = {
+ };
+ static struct think_lmi tlmi_priv;
+ static struct class *fw_attr_class;
++static DEFINE_MUTEX(tlmi_mutex);
+ 
+ /* ------ Utility functions ------------*/
+ /* Strip out CR if one is present */
+@@ -437,6 +439,9 @@ static ssize_t new_password_store(struct kobject *kobj,
+ 	/* Strip out CR if one is present, setting password won't work if it is present */
+ 	strip_cr(new_pwd);
+ 
++	/* Use lock in case multiple WMI operations needed */
++	mutex_lock(&tlmi_mutex);
++
+ 	pwdlen = strlen(new_pwd);
+ 	/* pwdlen == 0 is allowed to clear the password */
+ 	if (pwdlen && ((pwdlen < setting->minlen) || (pwdlen > setting->maxlen))) {
+@@ -456,9 +461,9 @@ static ssize_t new_password_store(struct kobject *kobj,
+ 				sprintf(pwd_type, "mhdp%d", setting->index);
+ 		} else if (setting == tlmi_priv.pwd_nvme) {
+ 			if (setting->level == TLMI_LEVEL_USER)
+-				sprintf(pwd_type, "unvp%d", setting->index);
++				sprintf(pwd_type, "udrp%d", setting->index);
+ 			else
+-				sprintf(pwd_type, "mnvp%d", setting->index);
++				sprintf(pwd_type, "adrp%d", setting->index);
+ 		} else {
+ 			sprintf(pwd_type, "%s", setting->pwd_type);
+ 		}
+@@ -493,6 +498,7 @@ static ssize_t new_password_store(struct kobject *kobj,
+ 		kfree(auth_str);
+ 	}
+ out:
++	mutex_unlock(&tlmi_mutex);
+ 	kfree(new_pwd);
+ 	return ret ?: count;
+ }
+@@ -982,6 +988,9 @@ static ssize_t current_value_store(struct kobject *kobj,
+ 	/* Strip out CR if one is present */
+ 	strip_cr(new_setting);
+ 
++	/* Use lock in case multiple WMI operations needed */
++	mutex_lock(&tlmi_mutex);
++
+ 	/* Check if certificate authentication is enabled and active */
+ 	if (tlmi_priv.certificate_support && tlmi_priv.pwd_admin->cert_installed) {
+ 		if (!tlmi_priv.pwd_admin->signature || !tlmi_priv.pwd_admin->save_signature) {
+@@ -1040,6 +1049,7 @@ static ssize_t current_value_store(struct kobject *kobj,
+ 		kobject_uevent(&tlmi_priv.class_dev->kobj, KOBJ_CHANGE);
+ 	}
+ out:
++	mutex_unlock(&tlmi_mutex);
+ 	kfree(auth_str);
+ 	kfree(set_str);
+ 	kfree(new_setting);
+@@ -1512,11 +1522,11 @@ static int tlmi_analyze(void)
+ 		tlmi_priv.pwd_power->valid = true;
+ 
+ 	if (tlmi_priv.opcode_support) {
+-		tlmi_priv.pwd_system = tlmi_create_auth("sys", "system");
++		tlmi_priv.pwd_system = tlmi_create_auth("smp", "system");
+ 		if (!tlmi_priv.pwd_system)
+ 			goto fail_clear_attr;
+ 
+-		if (tlmi_priv.pwdcfg.core.password_state & TLMI_SYS_PWD)
++		if (tlmi_priv.pwdcfg.core.password_state & TLMI_SMP_PWD)
+ 			tlmi_priv.pwd_system->valid = true;
+ 
+ 		tlmi_priv.pwd_hdd = tlmi_create_auth("hdd", "hdd");
+diff --git a/drivers/platform/x86/thinkpad_acpi.c b/drivers/platform/x86/thinkpad_acpi.c
+index e40cbe81b12c1..97c6ec12d0829 100644
+--- a/drivers/platform/x86/thinkpad_acpi.c
++++ b/drivers/platform/x86/thinkpad_acpi.c
+@@ -10524,8 +10524,8 @@ unlock:
+ static void dytc_profile_refresh(void)
+ {
+ 	enum platform_profile_option profile;
+-	int output, err = 0;
+-	int perfmode, funcmode;
++	int output = 0, err = 0;
++	int perfmode, funcmode = 0;
+ 
+ 	mutex_lock(&dytc_mutex);
+ 	if (dytc_capabilities & BIT(DYTC_FC_MMC)) {
+@@ -10538,6 +10538,8 @@ static void dytc_profile_refresh(void)
+ 		err = dytc_command(DYTC_CMD_GET, &output);
+ 		/* Check if we are PSC mode, or have AMT enabled */
+ 		funcmode = (output >> DYTC_GET_FUNCTION_BIT) & 0xF;
++	} else { /* Unknown profile mode */
++		err = -ENODEV;
+ 	}
+ 	mutex_unlock(&dytc_mutex);
+ 	if (err)
+diff --git a/drivers/powercap/Kconfig b/drivers/powercap/Kconfig
+index 90d33cd1b670a..b063f75117738 100644
+--- a/drivers/powercap/Kconfig
++++ b/drivers/powercap/Kconfig
+@@ -18,10 +18,12 @@ if POWERCAP
+ # Client driver configurations go here.
+ config INTEL_RAPL_CORE
+ 	tristate
++	depends on PCI
++	select IOSF_MBI
+ 
+ config INTEL_RAPL
+ 	tristate "Intel RAPL Support via MSR Interface"
+-	depends on X86 && IOSF_MBI
++	depends on X86 && PCI
+ 	select INTEL_RAPL_CORE
+ 	help
+ 	  This enables support for the Intel Running Average Power Limit (RAPL)
+diff --git a/drivers/powercap/intel_rapl_msr.c b/drivers/powercap/intel_rapl_msr.c
+index a27673706c3d6..9ea4797d70b44 100644
+--- a/drivers/powercap/intel_rapl_msr.c
++++ b/drivers/powercap/intel_rapl_msr.c
+@@ -22,7 +22,6 @@
+ #include <linux/processor.h>
+ #include <linux/platform_device.h>
+ 
+-#include <asm/iosf_mbi.h>
+ #include <asm/cpu_device_id.h>
+ #include <asm/intel-family.h>
+ 
+@@ -137,14 +136,14 @@ static int rapl_msr_write_raw(int cpu, struct reg_action *ra)
+ 
+ /* List of verified CPUs. */
+ static const struct x86_cpu_id pl4_support_ids[] = {
+-	{ X86_VENDOR_INTEL, 6, INTEL_FAM6_TIGERLAKE_L, X86_FEATURE_ANY },
+-	{ X86_VENDOR_INTEL, 6, INTEL_FAM6_ALDERLAKE, X86_FEATURE_ANY },
+-	{ X86_VENDOR_INTEL, 6, INTEL_FAM6_ALDERLAKE_L, X86_FEATURE_ANY },
+-	{ X86_VENDOR_INTEL, 6, INTEL_FAM6_ALDERLAKE_N, X86_FEATURE_ANY },
+-	{ X86_VENDOR_INTEL, 6, INTEL_FAM6_RAPTORLAKE, X86_FEATURE_ANY },
+-	{ X86_VENDOR_INTEL, 6, INTEL_FAM6_RAPTORLAKE_P, X86_FEATURE_ANY },
+-	{ X86_VENDOR_INTEL, 6, INTEL_FAM6_METEORLAKE, X86_FEATURE_ANY },
+-	{ X86_VENDOR_INTEL, 6, INTEL_FAM6_METEORLAKE_L, X86_FEATURE_ANY },
++	X86_MATCH_INTEL_FAM6_MODEL(TIGERLAKE_L, NULL),
++	X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE, NULL),
++	X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE_L, NULL),
++	X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE_N, NULL),
++	X86_MATCH_INTEL_FAM6_MODEL(RAPTORLAKE, NULL),
++	X86_MATCH_INTEL_FAM6_MODEL(RAPTORLAKE_P, NULL),
++	X86_MATCH_INTEL_FAM6_MODEL(METEORLAKE, NULL),
++	X86_MATCH_INTEL_FAM6_MODEL(METEORLAKE_L, NULL),
+ 	{}
+ };
+ 
+diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
+index 323e8187a98ff..443be7b6e31df 100644
+--- a/drivers/regulator/core.c
++++ b/drivers/regulator/core.c
+@@ -1918,19 +1918,17 @@ static struct regulator *create_regulator(struct regulator_dev *rdev,
+ 
+ 	if (err != -EEXIST)
+ 		regulator->debugfs = debugfs_create_dir(supply_name, rdev->debugfs);
+-	if (!regulator->debugfs) {
++	if (IS_ERR(regulator->debugfs))
+ 		rdev_dbg(rdev, "Failed to create debugfs directory\n");
+-	} else {
+-		debugfs_create_u32("uA_load", 0444, regulator->debugfs,
+-				   &regulator->uA_load);
+-		debugfs_create_u32("min_uV", 0444, regulator->debugfs,
+-				   &regulator->voltage[PM_SUSPEND_ON].min_uV);
+-		debugfs_create_u32("max_uV", 0444, regulator->debugfs,
+-				   &regulator->voltage[PM_SUSPEND_ON].max_uV);
+-		debugfs_create_file("constraint_flags", 0444,
+-				    regulator->debugfs, regulator,
+-				    &constraint_flags_fops);
+-	}
++
++	debugfs_create_u32("uA_load", 0444, regulator->debugfs,
++			   &regulator->uA_load);
++	debugfs_create_u32("min_uV", 0444, regulator->debugfs,
++			   &regulator->voltage[PM_SUSPEND_ON].min_uV);
++	debugfs_create_u32("max_uV", 0444, regulator->debugfs,
++			   &regulator->voltage[PM_SUSPEND_ON].max_uV);
++	debugfs_create_file("constraint_flags", 0444, regulator->debugfs,
++			    regulator, &constraint_flags_fops);
+ 
+ 	/*
+ 	 * Check now if the regulator is an always on regulator - if
+@@ -5263,10 +5261,8 @@ static void rdev_init_debugfs(struct regulator_dev *rdev)
+ 	}
+ 
+ 	rdev->debugfs = debugfs_create_dir(rname, debugfs_root);
+-	if (IS_ERR(rdev->debugfs)) {
+-		rdev_warn(rdev, "Failed to create debugfs directory\n");
+-		return;
+-	}
++	if (IS_ERR(rdev->debugfs))
++		rdev_dbg(rdev, "Failed to create debugfs directory\n");
+ 
+ 	debugfs_create_u32("use_count", 0444, rdev->debugfs,
+ 			   &rdev->use_count);
+@@ -6186,7 +6182,7 @@ static int __init regulator_init(void)
+ 
+ 	debugfs_root = debugfs_create_dir("regulator", NULL);
+ 	if (IS_ERR(debugfs_root))
+-		pr_warn("regulator: Failed to create debugfs directory\n");
++		pr_debug("regulator: Failed to create debugfs directory\n");
+ 
+ #ifdef CONFIG_DEBUG_FS
+ 	debugfs_create_file("supply_map", 0444, debugfs_root, NULL,
+diff --git a/drivers/scsi/3w-xxxx.c b/drivers/scsi/3w-xxxx.c
+index ffdecb12d654c..9bd70e4618d52 100644
+--- a/drivers/scsi/3w-xxxx.c
++++ b/drivers/scsi/3w-xxxx.c
+@@ -2305,8 +2305,10 @@ static int tw_probe(struct pci_dev *pdev, const struct pci_device_id *dev_id)
+ 	TW_DISABLE_INTERRUPTS(tw_dev);
+ 
+ 	/* Initialize the card */
+-	if (tw_reset_sequence(tw_dev))
++	if (tw_reset_sequence(tw_dev)) {
++		retval = -EINVAL;
+ 		goto out_release_mem_region;
++	}
+ 
+ 	/* Set host specific parameters */
+ 	host->max_id = TW_MAX_UNITS;
+diff --git a/drivers/scsi/lpfc/lpfc_els.c b/drivers/scsi/lpfc/lpfc_els.c
+index 62d2ca688cd14..e07242ac0f014 100644
+--- a/drivers/scsi/lpfc/lpfc_els.c
++++ b/drivers/scsi/lpfc/lpfc_els.c
+@@ -5466,9 +5466,19 @@ out:
+ 				ndlp->nlp_flag &= ~NLP_RELEASE_RPI;
+ 				spin_unlock_irq(&ndlp->lock);
+ 			}
++			lpfc_drop_node(vport, ndlp);
++		} else if (ndlp->nlp_state != NLP_STE_PLOGI_ISSUE &&
++			   ndlp->nlp_state != NLP_STE_REG_LOGIN_ISSUE &&
++			   ndlp->nlp_state != NLP_STE_PRLI_ISSUE) {
++			/* Drop ndlp if there is no planned or outstanding
++			 * issued PRLI.
++			 *
++			 * In cases when the ndlp is acting as both an initiator
++			 * and target function, let our issued PRLI determine
++			 * the final ndlp kref drop.
++			 */
++			lpfc_drop_node(vport, ndlp);
+ 		}
+-
+-		lpfc_drop_node(vport, ndlp);
+ 	}
+ 
+ 	/* Release the originating I/O reference. */
+diff --git a/drivers/scsi/qedf/qedf_main.c b/drivers/scsi/qedf/qedf_main.c
+index 35e16600fc637..f2c7dd4db9c64 100644
+--- a/drivers/scsi/qedf/qedf_main.c
++++ b/drivers/scsi/qedf/qedf_main.c
+@@ -3043,9 +3043,8 @@ static int qedf_alloc_global_queues(struct qedf_ctx *qedf)
+ 	 * addresses of our queues
+ 	 */
+ 	if (!qedf->p_cpuq) {
+-		status = -EINVAL;
+ 		QEDF_ERR(&qedf->dbg_ctx, "p_cpuq is NULL.\n");
+-		goto mem_alloc_failure;
++		return -EINVAL;
+ 	}
+ 
+ 	qedf->global_queues = kzalloc((sizeof(struct global_queue *)
+diff --git a/drivers/soc/amlogic/meson-secure-pwrc.c b/drivers/soc/amlogic/meson-secure-pwrc.c
+index e935187635267..25b4b71df9b89 100644
+--- a/drivers/soc/amlogic/meson-secure-pwrc.c
++++ b/drivers/soc/amlogic/meson-secure-pwrc.c
+@@ -105,7 +105,7 @@ static struct meson_secure_pwrc_domain_desc a1_pwrc_domains[] = {
+ 	SEC_PD(ACODEC,	0),
+ 	SEC_PD(AUDIO,	0),
+ 	SEC_PD(OTP,	0),
+-	SEC_PD(DMA,	0),
++	SEC_PD(DMA,	GENPD_FLAG_ALWAYS_ON | GENPD_FLAG_IRQ_SAFE),
+ 	SEC_PD(SD_EMMC,	0),
+ 	SEC_PD(RAMA,	0),
+ 	/* SRAMB is used as ATF runtime memory, and should be always on */
+diff --git a/drivers/soc/fsl/qe/Kconfig b/drivers/soc/fsl/qe/Kconfig
+index 357c5800b112f..7afa796dbbb89 100644
+--- a/drivers/soc/fsl/qe/Kconfig
++++ b/drivers/soc/fsl/qe/Kconfig
+@@ -39,6 +39,7 @@ config QE_TDM
+ 
+ config QE_USB
+ 	bool
++	depends on QUICC_ENGINE
+ 	default y if USB_FSL_QE
+ 	help
+ 	  QE USB Controller support
+diff --git a/drivers/soc/mediatek/mtk-svs.c b/drivers/soc/mediatek/mtk-svs.c
+index f26eb2f637d52..77d6299774427 100644
+--- a/drivers/soc/mediatek/mtk-svs.c
++++ b/drivers/soc/mediatek/mtk-svs.c
+@@ -2101,9 +2101,9 @@ static int svs_mt8192_platform_probe(struct svs_platform *svsp)
+ 		svsb = &svsp->banks[idx];
+ 
+ 		if (svsb->type == SVSB_HIGH)
+-			svsb->opp_dev = svs_add_device_link(svsp, "mali");
++			svsb->opp_dev = svs_add_device_link(svsp, "gpu");
+ 		else if (svsb->type == SVSB_LOW)
+-			svsb->opp_dev = svs_get_subsys_device(svsp, "mali");
++			svsb->opp_dev = svs_get_subsys_device(svsp, "gpu");
+ 
+ 		if (IS_ERR(svsb->opp_dev))
+ 			return dev_err_probe(svsp->dev, PTR_ERR(svsb->opp_dev),
+diff --git a/drivers/soc/xilinx/xlnx_event_manager.c b/drivers/soc/xilinx/xlnx_event_manager.c
+index c76381899ef49..f9d9b82b562da 100644
+--- a/drivers/soc/xilinx/xlnx_event_manager.c
++++ b/drivers/soc/xilinx/xlnx_event_manager.c
+@@ -192,11 +192,12 @@ static int xlnx_remove_cb_for_suspend(event_cb_func_t cb_fun)
+ 	struct registered_event_data *eve_data;
+ 	struct agent_cb *cb_pos;
+ 	struct agent_cb *cb_next;
++	struct hlist_node *tmp;
+ 
+ 	is_need_to_unregister = false;
+ 
+ 	/* Check for existing entry in hash table for given cb_type */
+-	hash_for_each_possible(reg_driver_map, eve_data, hentry, PM_INIT_SUSPEND_CB) {
++	hash_for_each_possible_safe(reg_driver_map, eve_data, tmp, hentry, PM_INIT_SUSPEND_CB) {
+ 		if (eve_data->cb_type == PM_INIT_SUSPEND_CB) {
+ 			/* Delete the list of callback */
+ 			list_for_each_entry_safe(cb_pos, cb_next, &eve_data->cb_list_head, list) {
+@@ -228,11 +229,12 @@ static int xlnx_remove_cb_for_notify_event(const u32 node_id, const u32 event,
+ 	u64 key = ((u64)node_id << 32U) | (u64)event;
+ 	struct agent_cb *cb_pos;
+ 	struct agent_cb *cb_next;
++	struct hlist_node *tmp;
+ 
+ 	is_need_to_unregister = false;
+ 
+ 	/* Check for existing entry in hash table for given key id */
+-	hash_for_each_possible(reg_driver_map, eve_data, hentry, key) {
++	hash_for_each_possible_safe(reg_driver_map, eve_data, tmp, hentry, key) {
+ 		if (eve_data->key == key) {
+ 			/* Delete the list of callback */
+ 			list_for_each_entry_safe(cb_pos, cb_next, &eve_data->cb_list_head, list) {
+diff --git a/drivers/spi/spi-dw-core.c b/drivers/spi/spi-dw-core.c
+index c3bfb6c84cab2..4976e3b8923ee 100644
+--- a/drivers/spi/spi-dw-core.c
++++ b/drivers/spi/spi-dw-core.c
+@@ -426,7 +426,10 @@ static int dw_spi_transfer_one(struct spi_controller *master,
+ 	int ret;
+ 
+ 	dws->dma_mapped = 0;
+-	dws->n_bytes = DIV_ROUND_UP(transfer->bits_per_word, BITS_PER_BYTE);
++	dws->n_bytes =
++		roundup_pow_of_two(DIV_ROUND_UP(transfer->bits_per_word,
++						BITS_PER_BYTE));
++
+ 	dws->tx = (void *)transfer->tx_buf;
+ 	dws->tx_len = transfer->len / dws->n_bytes;
+ 	dws->rx = transfer->rx_buf;
+diff --git a/drivers/spi/spi-geni-qcom.c b/drivers/spi/spi-geni-qcom.c
+index baf477383682d..d147519fe1089 100644
+--- a/drivers/spi/spi-geni-qcom.c
++++ b/drivers/spi/spi-geni-qcom.c
+@@ -35,7 +35,7 @@
+ #define CS_DEMUX_OUTPUT_SEL	GENMASK(3, 0)
+ 
+ #define SE_SPI_TRANS_CFG	0x25c
+-#define CS_TOGGLE		BIT(0)
++#define CS_TOGGLE		BIT(1)
+ 
+ #define SE_SPI_WORD_LEN		0x268
+ #define WORD_LEN_MSK		GENMASK(9, 0)
+diff --git a/drivers/thermal/amlogic_thermal.c b/drivers/thermal/amlogic_thermal.c
+index 9235fda4ec1eb..337153042318f 100644
+--- a/drivers/thermal/amlogic_thermal.c
++++ b/drivers/thermal/amlogic_thermal.c
+@@ -285,7 +285,7 @@ static int amlogic_thermal_probe(struct platform_device *pdev)
+ 		return ret;
+ 	}
+ 
+-	if (devm_thermal_add_hwmon_sysfs(pdata->tzd))
++	if (devm_thermal_add_hwmon_sysfs(&pdev->dev, pdata->tzd))
+ 		dev_warn(&pdev->dev, "Failed to add hwmon sysfs attributes\n");
+ 
+ 	ret = amlogic_thermal_initialize(pdata);
+diff --git a/drivers/thermal/imx8mm_thermal.c b/drivers/thermal/imx8mm_thermal.c
+index 72b5d6f319c1d..e1bec196c5350 100644
+--- a/drivers/thermal/imx8mm_thermal.c
++++ b/drivers/thermal/imx8mm_thermal.c
+@@ -343,7 +343,7 @@ static int imx8mm_tmu_probe(struct platform_device *pdev)
+ 		}
+ 		tmu->sensors[i].hw_id = i;
+ 
+-		if (devm_thermal_add_hwmon_sysfs(tmu->sensors[i].tzd))
++		if (devm_thermal_add_hwmon_sysfs(&pdev->dev, tmu->sensors[i].tzd))
+ 			dev_warn(&pdev->dev, "failed to add hwmon sysfs attributes\n");
+ 	}
+ 
+diff --git a/drivers/thermal/imx_sc_thermal.c b/drivers/thermal/imx_sc_thermal.c
+index f32e59e746231..e24572fc9e731 100644
+--- a/drivers/thermal/imx_sc_thermal.c
++++ b/drivers/thermal/imx_sc_thermal.c
+@@ -119,7 +119,7 @@ static int imx_sc_thermal_probe(struct platform_device *pdev)
+ 			return ret;
+ 		}
+ 
+-		if (devm_thermal_add_hwmon_sysfs(sensor->tzd))
++		if (devm_thermal_add_hwmon_sysfs(&pdev->dev, sensor->tzd))
+ 			dev_warn(&pdev->dev, "failed to add hwmon sysfs attributes\n");
+ 	}
+ 
+diff --git a/drivers/thermal/k3_bandgap.c b/drivers/thermal/k3_bandgap.c
+index 22c9bcb899c37..df184b837cdd0 100644
+--- a/drivers/thermal/k3_bandgap.c
++++ b/drivers/thermal/k3_bandgap.c
+@@ -222,7 +222,7 @@ static int k3_bandgap_probe(struct platform_device *pdev)
+ 			goto err_alloc;
+ 		}
+ 
+-		if (devm_thermal_add_hwmon_sysfs(data[id].tzd))
++		if (devm_thermal_add_hwmon_sysfs(dev, data[id].tzd))
+ 			dev_warn(dev, "Failed to add hwmon sysfs attributes\n");
+ 	}
+ 
+diff --git a/drivers/thermal/mediatek/auxadc_thermal.c b/drivers/thermal/mediatek/auxadc_thermal.c
+index ab730f9552d0e..585704d2df2be 100644
+--- a/drivers/thermal/mediatek/auxadc_thermal.c
++++ b/drivers/thermal/mediatek/auxadc_thermal.c
+@@ -1210,7 +1210,7 @@ static int mtk_thermal_probe(struct platform_device *pdev)
+ 		goto err_disable_clk_peri_therm;
+ 	}
+ 
+-	ret = devm_thermal_add_hwmon_sysfs(tzdev);
++	ret = devm_thermal_add_hwmon_sysfs(&pdev->dev, tzdev);
+ 	if (ret)
+ 		dev_warn(&pdev->dev, "error in thermal_add_hwmon_sysfs");
+ 
+diff --git a/drivers/thermal/qcom/qcom-spmi-adc-tm5.c b/drivers/thermal/qcom/qcom-spmi-adc-tm5.c
+index 31164ade2dd11..dcb24a94f3fb4 100644
+--- a/drivers/thermal/qcom/qcom-spmi-adc-tm5.c
++++ b/drivers/thermal/qcom/qcom-spmi-adc-tm5.c
+@@ -689,7 +689,7 @@ static int adc_tm5_register_tzd(struct adc_tm5_chip *adc_tm)
+ 			return PTR_ERR(tzd);
+ 		}
+ 		adc_tm->channels[i].tzd = tzd;
+-		if (devm_thermal_add_hwmon_sysfs(tzd))
++		if (devm_thermal_add_hwmon_sysfs(adc_tm->dev, tzd))
+ 			dev_warn(adc_tm->dev,
+ 				 "Failed to add hwmon sysfs attributes\n");
+ 	}
+diff --git a/drivers/thermal/qcom/qcom-spmi-temp-alarm.c b/drivers/thermal/qcom/qcom-spmi-temp-alarm.c
+index 101c75d0e13f3..c0cfb255c14e2 100644
+--- a/drivers/thermal/qcom/qcom-spmi-temp-alarm.c
++++ b/drivers/thermal/qcom/qcom-spmi-temp-alarm.c
+@@ -459,7 +459,7 @@ static int qpnp_tm_probe(struct platform_device *pdev)
+ 		return ret;
+ 	}
+ 
+-	if (devm_thermal_add_hwmon_sysfs(chip->tz_dev))
++	if (devm_thermal_add_hwmon_sysfs(&pdev->dev, chip->tz_dev))
+ 		dev_warn(&pdev->dev,
+ 			 "Failed to add hwmon sysfs attributes\n");
+ 
+diff --git a/drivers/thermal/qcom/tsens-v0_1.c b/drivers/thermal/qcom/tsens-v0_1.c
+index e89c6f39a3aea..e9ce7b62b3818 100644
+--- a/drivers/thermal/qcom/tsens-v0_1.c
++++ b/drivers/thermal/qcom/tsens-v0_1.c
+@@ -243,6 +243,18 @@ static int calibrate_8974(struct tsens_priv *priv)
+ 	return 0;
+ }
+ 
++static int __init init_8226(struct tsens_priv *priv)
++{
++	priv->sensor[0].slope = 2901;
++	priv->sensor[1].slope = 2846;
++	priv->sensor[2].slope = 3038;
++	priv->sensor[3].slope = 2955;
++	priv->sensor[4].slope = 2901;
++	priv->sensor[5].slope = 2846;
++
++	return init_common(priv);
++}
++
+ static int __init init_8939(struct tsens_priv *priv) {
+ 	priv->sensor[0].slope = 2911;
+ 	priv->sensor[1].slope = 2789;
+@@ -258,7 +270,28 @@ static int __init init_8939(struct tsens_priv *priv) {
+ 	return init_common(priv);
+ }
+ 
+-/* v0.1: 8916, 8939, 8974, 9607 */
++static int __init init_9607(struct tsens_priv *priv)
++{
++	int i;
++
++	for (i = 0; i < priv->num_sensors; ++i)
++		priv->sensor[i].slope = 3000;
++
++	priv->sensor[0].p1_calib_offset = 1;
++	priv->sensor[0].p2_calib_offset = 1;
++	priv->sensor[1].p1_calib_offset = -4;
++	priv->sensor[1].p2_calib_offset = -2;
++	priv->sensor[2].p1_calib_offset = 4;
++	priv->sensor[2].p2_calib_offset = 8;
++	priv->sensor[3].p1_calib_offset = -3;
++	priv->sensor[3].p2_calib_offset = -5;
++	priv->sensor[4].p1_calib_offset = -4;
++	priv->sensor[4].p2_calib_offset = -4;
++
++	return init_common(priv);
++}
++
++/* v0.1: 8226, 8916, 8939, 8974, 9607 */
+ 
+ static struct tsens_features tsens_v0_1_feat = {
+ 	.ver_major	= VER_0_1,
+@@ -313,6 +346,19 @@ static const struct tsens_ops ops_v0_1 = {
+ 	.get_temp	= get_temp_common,
+ };
+ 
++static const struct tsens_ops ops_8226 = {
++	.init		= init_8226,
++	.calibrate	= tsens_calibrate_common,
++	.get_temp	= get_temp_common,
++};
++
++struct tsens_plat_data data_8226 = {
++	.num_sensors	= 6,
++	.ops		= &ops_8226,
++	.feat		= &tsens_v0_1_feat,
++	.fields	= tsens_v0_1_regfields,
++};
++
+ static const struct tsens_ops ops_8916 = {
+ 	.init		= init_common,
+ 	.calibrate	= calibrate_8916,
+@@ -356,9 +402,15 @@ struct tsens_plat_data data_8974 = {
+ 	.fields	= tsens_v0_1_regfields,
+ };
+ 
++static const struct tsens_ops ops_9607 = {
++	.init		= init_9607,
++	.calibrate	= tsens_calibrate_common,
++	.get_temp	= get_temp_common,
++};
++
+ struct tsens_plat_data data_9607 = {
+ 	.num_sensors	= 5,
+-	.ops		= &ops_v0_1,
++	.ops		= &ops_9607,
+ 	.feat		= &tsens_v0_1_feat,
+ 	.fields	= tsens_v0_1_regfields,
+ };
+diff --git a/drivers/thermal/qcom/tsens.c b/drivers/thermal/qcom/tsens.c
+index 8020ead2794e9..38f5c783fb297 100644
+--- a/drivers/thermal/qcom/tsens.c
++++ b/drivers/thermal/qcom/tsens.c
+@@ -134,10 +134,12 @@ int tsens_read_calibration(struct tsens_priv *priv, int shift, u32 *p1, u32 *p2,
+ 			p1[i] = p1[i] + (base1 << shift);
+ 		break;
+ 	case TWO_PT_CALIB:
++	case TWO_PT_CALIB_NO_OFFSET:
+ 		for (i = 0; i < priv->num_sensors; i++)
+ 			p2[i] = (p2[i] + base2) << shift;
+ 		fallthrough;
+ 	case ONE_PT_CALIB2:
++	case ONE_PT_CALIB2_NO_OFFSET:
+ 		for (i = 0; i < priv->num_sensors; i++)
+ 			p1[i] = (p1[i] + base1) << shift;
+ 		break;
+@@ -149,6 +151,18 @@ int tsens_read_calibration(struct tsens_priv *priv, int shift, u32 *p1, u32 *p2,
+ 		}
+ 	}
+ 
++	/* Apply calibration offset workaround except for _NO_OFFSET modes */
++	switch (mode) {
++	case TWO_PT_CALIB:
++		for (i = 0; i < priv->num_sensors; i++)
++			p2[i] += priv->sensor[i].p2_calib_offset;
++		fallthrough;
++	case ONE_PT_CALIB2:
++		for (i = 0; i < priv->num_sensors; i++)
++			p1[i] += priv->sensor[i].p1_calib_offset;
++		break;
++	}
++
+ 	return mode;
+ }
+ 
+@@ -254,7 +268,7 @@ void compute_intercept_slope(struct tsens_priv *priv, u32 *p1,
+ 
+ 		if (!priv->sensor[i].slope)
+ 			priv->sensor[i].slope = SLOPE_DEFAULT;
+-		if (mode == TWO_PT_CALIB) {
++		if (mode == TWO_PT_CALIB || mode == TWO_PT_CALIB_NO_OFFSET) {
+ 			/*
+ 			 * slope (m) = adc_code2 - adc_code1 (y2 - y1)/
+ 			 *	temp_120_degc - temp_30_degc (x2 - x1)
+@@ -1095,6 +1109,9 @@ static const struct of_device_id tsens_table[] = {
+ 	}, {
+ 		.compatible = "qcom,mdm9607-tsens",
+ 		.data = &data_9607,
++	}, {
++		.compatible = "qcom,msm8226-tsens",
++		.data = &data_8226,
+ 	}, {
+ 		.compatible = "qcom,msm8916-tsens",
+ 		.data = &data_8916,
+@@ -1189,7 +1206,7 @@ static int tsens_register(struct tsens_priv *priv)
+ 		if (priv->ops->enable)
+ 			priv->ops->enable(priv, i);
+ 
+-		if (devm_thermal_add_hwmon_sysfs(tzd))
++		if (devm_thermal_add_hwmon_sysfs(priv->dev, tzd))
+ 			dev_warn(priv->dev,
+ 				 "Failed to add hwmon sysfs attributes\n");
+ 	}
+diff --git a/drivers/thermal/qcom/tsens.h b/drivers/thermal/qcom/tsens.h
+index dba9cd38f637c..1cd8f4fe0971f 100644
+--- a/drivers/thermal/qcom/tsens.h
++++ b/drivers/thermal/qcom/tsens.h
+@@ -10,6 +10,8 @@
+ #define ONE_PT_CALIB		0x1
+ #define ONE_PT_CALIB2		0x2
+ #define TWO_PT_CALIB		0x3
++#define ONE_PT_CALIB2_NO_OFFSET	0x6
++#define TWO_PT_CALIB_NO_OFFSET	0x7
+ #define CAL_DEGC_PT1		30
+ #define CAL_DEGC_PT2		120
+ #define SLOPE_FACTOR		1000
+@@ -57,6 +59,8 @@ struct tsens_sensor {
+ 	unsigned int			hw_id;
+ 	int				slope;
+ 	u32				status;
++	int				p1_calib_offset;
++	int				p2_calib_offset;
+ };
+ 
+ /**
+@@ -635,7 +639,7 @@ int get_temp_common(const struct tsens_sensor *s, int *temp);
+ extern struct tsens_plat_data data_8960;
+ 
+ /* TSENS v0.1 targets */
+-extern struct tsens_plat_data data_8916, data_8939, data_8974, data_9607;
++extern struct tsens_plat_data data_8226, data_8916, data_8939, data_8974, data_9607;
+ 
+ /* TSENS v1 targets */
+ extern struct tsens_plat_data data_tsens_v1, data_8976, data_8956;
+diff --git a/drivers/thermal/qoriq_thermal.c b/drivers/thermal/qoriq_thermal.c
+index 431c29c0898a7..dec66cf3eba2c 100644
+--- a/drivers/thermal/qoriq_thermal.c
++++ b/drivers/thermal/qoriq_thermal.c
+@@ -31,7 +31,6 @@
+ #define TMR_DISABLE	0x0
+ #define TMR_ME		0x80000000
+ #define TMR_ALPF	0x0c000000
+-#define TMR_MSITE_ALL	GENMASK(15, 0)
+ 
+ #define REGS_TMTMIR	0x008	/* Temperature measurement interval Register */
+ #define TMTMIR_DEFAULT	0x0000000f
+@@ -105,6 +104,11 @@ static int tmu_get_temp(struct thermal_zone_device *tz, int *temp)
+ 	 * within sensor range. TEMP is an 9 bit value representing
+ 	 * temperature in KelVin.
+ 	 */
++
++	regmap_read(qdata->regmap, REGS_TMR, &val);
++	if (!(val & TMR_ME))
++		return -EAGAIN;
++
+ 	if (regmap_read_poll_timeout(qdata->regmap,
+ 				     REGS_TRITSR(qsensor->id),
+ 				     val,
+@@ -128,15 +132,7 @@ static const struct thermal_zone_device_ops tmu_tz_ops = {
+ static int qoriq_tmu_register_tmu_zone(struct device *dev,
+ 				       struct qoriq_tmu_data *qdata)
+ {
+-	int id;
+-
+-	if (qdata->ver == TMU_VER1) {
+-		regmap_write(qdata->regmap, REGS_TMR,
+-			     TMR_MSITE_ALL | TMR_ME | TMR_ALPF);
+-	} else {
+-		regmap_write(qdata->regmap, REGS_V2_TMSR, TMR_MSITE_ALL);
+-		regmap_write(qdata->regmap, REGS_TMR, TMR_ME | TMR_ALPF_V2);
+-	}
++	int id, sites = 0;
+ 
+ 	for (id = 0; id < SITES_MAX; id++) {
+ 		struct thermal_zone_device *tzd;
+@@ -153,14 +149,26 @@ static int qoriq_tmu_register_tmu_zone(struct device *dev,
+ 			if (ret == -ENODEV)
+ 				continue;
+ 
+-			regmap_write(qdata->regmap, REGS_TMR, TMR_DISABLE);
+ 			return ret;
+ 		}
+ 
+-		if (devm_thermal_add_hwmon_sysfs(tzd))
++		if (qdata->ver == TMU_VER1)
++			sites |= 0x1 << (15 - id);
++		else
++			sites |= 0x1 << id;
++
++		if (devm_thermal_add_hwmon_sysfs(dev, tzd))
+ 			dev_warn(dev,
+ 				 "Failed to add hwmon sysfs attributes\n");
++	}
+ 
++	if (sites) {
++		if (qdata->ver == TMU_VER1) {
++			regmap_write(qdata->regmap, REGS_TMR, TMR_ME | TMR_ALPF | sites);
++		} else {
++			regmap_write(qdata->regmap, REGS_V2_TMSR, sites);
++			regmap_write(qdata->regmap, REGS_TMR, TMR_ME | TMR_ALPF_V2);
++		}
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/thermal/sun8i_thermal.c b/drivers/thermal/sun8i_thermal.c
+index 497beac63e5d9..7517067d6e817 100644
+--- a/drivers/thermal/sun8i_thermal.c
++++ b/drivers/thermal/sun8i_thermal.c
+@@ -319,6 +319,11 @@ out:
+ 	return ret;
+ }
+ 
++static void sun8i_ths_reset_control_assert(void *data)
++{
++	reset_control_assert(data);
++}
++
+ static int sun8i_ths_resource_init(struct ths_device *tmdev)
+ {
+ 	struct device *dev = tmdev->dev;
+@@ -339,47 +344,35 @@ static int sun8i_ths_resource_init(struct ths_device *tmdev)
+ 		if (IS_ERR(tmdev->reset))
+ 			return PTR_ERR(tmdev->reset);
+ 
+-		tmdev->bus_clk = devm_clk_get(&pdev->dev, "bus");
++		ret = reset_control_deassert(tmdev->reset);
++		if (ret)
++			return ret;
++
++		ret = devm_add_action_or_reset(dev, sun8i_ths_reset_control_assert,
++					       tmdev->reset);
++		if (ret)
++			return ret;
++
++		tmdev->bus_clk = devm_clk_get_enabled(&pdev->dev, "bus");
+ 		if (IS_ERR(tmdev->bus_clk))
+ 			return PTR_ERR(tmdev->bus_clk);
+ 	}
+ 
+ 	if (tmdev->chip->has_mod_clk) {
+-		tmdev->mod_clk = devm_clk_get(&pdev->dev, "mod");
++		tmdev->mod_clk = devm_clk_get_enabled(&pdev->dev, "mod");
+ 		if (IS_ERR(tmdev->mod_clk))
+ 			return PTR_ERR(tmdev->mod_clk);
+ 	}
+ 
+-	ret = reset_control_deassert(tmdev->reset);
+-	if (ret)
+-		return ret;
+-
+-	ret = clk_prepare_enable(tmdev->bus_clk);
+-	if (ret)
+-		goto assert_reset;
+-
+ 	ret = clk_set_rate(tmdev->mod_clk, 24000000);
+ 	if (ret)
+-		goto bus_disable;
+-
+-	ret = clk_prepare_enable(tmdev->mod_clk);
+-	if (ret)
+-		goto bus_disable;
++		return ret;
+ 
+ 	ret = sun8i_ths_calibrate(tmdev);
+ 	if (ret)
+-		goto mod_disable;
++		return ret;
+ 
+ 	return 0;
+-
+-mod_disable:
+-	clk_disable_unprepare(tmdev->mod_clk);
+-bus_disable:
+-	clk_disable_unprepare(tmdev->bus_clk);
+-assert_reset:
+-	reset_control_assert(tmdev->reset);
+-
+-	return ret;
+ }
+ 
+ static int sun8i_h3_thermal_init(struct ths_device *tmdev)
+@@ -475,7 +468,7 @@ static int sun8i_ths_register(struct ths_device *tmdev)
+ 		if (IS_ERR(tmdev->sensor[i].tzd))
+ 			return PTR_ERR(tmdev->sensor[i].tzd);
+ 
+-		if (devm_thermal_add_hwmon_sysfs(tmdev->sensor[i].tzd))
++		if (devm_thermal_add_hwmon_sysfs(tmdev->dev, tmdev->sensor[i].tzd))
+ 			dev_warn(tmdev->dev,
+ 				 "Failed to add hwmon sysfs attributes\n");
+ 	}
+@@ -530,17 +523,6 @@ static int sun8i_ths_probe(struct platform_device *pdev)
+ 	return 0;
+ }
+ 
+-static int sun8i_ths_remove(struct platform_device *pdev)
+-{
+-	struct ths_device *tmdev = platform_get_drvdata(pdev);
+-
+-	clk_disable_unprepare(tmdev->mod_clk);
+-	clk_disable_unprepare(tmdev->bus_clk);
+-	reset_control_assert(tmdev->reset);
+-
+-	return 0;
+-}
+-
+ static const struct ths_thermal_chip sun8i_a83t_ths = {
+ 	.sensor_num = 3,
+ 	.scale = 705,
+@@ -642,7 +624,6 @@ MODULE_DEVICE_TABLE(of, of_ths_match);
+ 
+ static struct platform_driver ths_driver = {
+ 	.probe = sun8i_ths_probe,
+-	.remove = sun8i_ths_remove,
+ 	.driver = {
+ 		.name = "sun8i-thermal",
+ 		.of_match_table = of_ths_match,
+diff --git a/drivers/thermal/tegra/tegra30-tsensor.c b/drivers/thermal/tegra/tegra30-tsensor.c
+index b3218b71b6d97..823560c82aaee 100644
+--- a/drivers/thermal/tegra/tegra30-tsensor.c
++++ b/drivers/thermal/tegra/tegra30-tsensor.c
+@@ -528,7 +528,7 @@ static int tegra_tsensor_register_channel(struct tegra_tsensor *ts,
+ 		return 0;
+ 	}
+ 
+-	if (devm_thermal_add_hwmon_sysfs(tsc->tzd))
++	if (devm_thermal_add_hwmon_sysfs(ts->dev, tsc->tzd))
+ 		dev_warn(ts->dev, "failed to add hwmon sysfs attributes\n");
+ 
+ 	return 0;
+diff --git a/drivers/thermal/thermal_hwmon.c b/drivers/thermal/thermal_hwmon.c
+index c594c42bea6da..964db7941e310 100644
+--- a/drivers/thermal/thermal_hwmon.c
++++ b/drivers/thermal/thermal_hwmon.c
+@@ -263,7 +263,7 @@ static void devm_thermal_hwmon_release(struct device *dev, void *res)
+ 	thermal_remove_hwmon_sysfs(*(struct thermal_zone_device **)res);
+ }
+ 
+-int devm_thermal_add_hwmon_sysfs(struct thermal_zone_device *tz)
++int devm_thermal_add_hwmon_sysfs(struct device *dev, struct thermal_zone_device *tz)
+ {
+ 	struct thermal_zone_device **ptr;
+ 	int ret;
+@@ -280,7 +280,7 @@ int devm_thermal_add_hwmon_sysfs(struct thermal_zone_device *tz)
+ 	}
+ 
+ 	*ptr = tz;
+-	devres_add(&tz->device, ptr);
++	devres_add(dev, ptr);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/thermal/thermal_hwmon.h b/drivers/thermal/thermal_hwmon.h
+index 1a9d65f6a6a8b..b429f6e7abdb2 100644
+--- a/drivers/thermal/thermal_hwmon.h
++++ b/drivers/thermal/thermal_hwmon.h
+@@ -17,7 +17,7 @@
+ 
+ #ifdef CONFIG_THERMAL_HWMON
+ int thermal_add_hwmon_sysfs(struct thermal_zone_device *tz);
+-int devm_thermal_add_hwmon_sysfs(struct thermal_zone_device *tz);
++int devm_thermal_add_hwmon_sysfs(struct device *dev, struct thermal_zone_device *tz);
+ void thermal_remove_hwmon_sysfs(struct thermal_zone_device *tz);
+ #else
+ static inline int
+@@ -27,7 +27,7 @@ thermal_add_hwmon_sysfs(struct thermal_zone_device *tz)
+ }
+ 
+ static inline int
+-devm_thermal_add_hwmon_sysfs(struct thermal_zone_device *tz)
++devm_thermal_add_hwmon_sysfs(struct device *dev, struct thermal_zone_device *tz)
+ {
+ 	return 0;
+ }
+diff --git a/drivers/thermal/ti-soc-thermal/ti-thermal-common.c b/drivers/thermal/ti-soc-thermal/ti-thermal-common.c
+index 8a9055bd376ec..42d0ffd82514d 100644
+--- a/drivers/thermal/ti-soc-thermal/ti-thermal-common.c
++++ b/drivers/thermal/ti-soc-thermal/ti-thermal-common.c
+@@ -182,7 +182,7 @@ int ti_thermal_expose_sensor(struct ti_bandgap *bgp, int id,
+ 	ti_bandgap_set_sensor_data(bgp, id, data);
+ 	ti_bandgap_write_update_interval(bgp, data->sensor_id, interval);
+ 
+-	if (devm_thermal_add_hwmon_sysfs(data->ti_thermal))
++	if (devm_thermal_add_hwmon_sysfs(bgp->dev, data->ti_thermal))
+ 		dev_warn(bgp->dev, "failed to add hwmon sysfs attributes\n");
+ 
+ 	return 0;
+diff --git a/drivers/ufs/core/ufshcd-priv.h b/drivers/ufs/core/ufshcd-priv.h
+index 529f8507a5e4c..7d8ff743a1b28 100644
+--- a/drivers/ufs/core/ufshcd-priv.h
++++ b/drivers/ufs/core/ufshcd-priv.h
+@@ -84,9 +84,6 @@ unsigned long ufshcd_mcq_poll_cqe_lock(struct ufs_hba *hba,
+ int ufshcd_read_string_desc(struct ufs_hba *hba, u8 desc_index,
+ 			    u8 **buf, bool ascii);
+ 
+-int ufshcd_hold(struct ufs_hba *hba, bool async);
+-void ufshcd_release(struct ufs_hba *hba);
+-
+ int ufshcd_send_uic_cmd(struct ufs_hba *hba, struct uic_command *uic_cmd);
+ 
+ int ufshcd_exec_raw_upiu_cmd(struct ufs_hba *hba,
+diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
+index aec74987cb4e0..e67981317dcbf 100644
+--- a/drivers/ufs/core/ufshcd.c
++++ b/drivers/ufs/core/ufshcd.c
+@@ -2917,7 +2917,6 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
+ 		(hba->clk_gating.state != CLKS_ON));
+ 
+ 	lrbp = &hba->lrb[tag];
+-	WARN_ON(lrbp->cmd);
+ 	lrbp->cmd = cmd;
+ 	lrbp->task_tag = tag;
+ 	lrbp->lun = ufshcd_scsi_to_upiu_lun(cmd->device->lun);
+@@ -2933,7 +2932,6 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
+ 
+ 	err = ufshcd_map_sg(hba, lrbp);
+ 	if (err) {
+-		lrbp->cmd = NULL;
+ 		ufshcd_release(hba);
+ 		goto out;
+ 	}
+@@ -3071,7 +3069,7 @@ retry:
+ 		 * not trigger any race conditions.
+ 		 */
+ 		hba->dev_cmd.complete = NULL;
+-		err = ufshcd_get_tr_ocs(lrbp, hba->dev_cmd.cqe);
++		err = ufshcd_get_tr_ocs(lrbp, NULL);
+ 		if (!err)
+ 			err = ufshcd_dev_cmd_completion(hba, lrbp);
+ 	} else {
+@@ -3152,13 +3150,12 @@ static int ufshcd_exec_dev_cmd(struct ufs_hba *hba,
+ 	down_read(&hba->clk_scaling_lock);
+ 
+ 	lrbp = &hba->lrb[tag];
+-	WARN_ON(lrbp->cmd);
++	lrbp->cmd = NULL;
+ 	err = ufshcd_compose_dev_cmd(hba, lrbp, cmd_type, tag);
+ 	if (unlikely(err))
+ 		goto out;
+ 
+ 	hba->dev_cmd.complete = &wait;
+-	hba->dev_cmd.cqe = NULL;
+ 
+ 	ufshcd_add_query_upiu_trace(hba, UFS_QUERY_SEND, lrbp->ucd_req_ptr);
+ 
+@@ -5391,7 +5388,6 @@ static void ufshcd_release_scsi_cmd(struct ufs_hba *hba,
+ 	struct scsi_cmnd *cmd = lrbp->cmd;
+ 
+ 	scsi_dma_unmap(cmd);
+-	lrbp->cmd = NULL;	/* Mark the command as completed. */
+ 	ufshcd_release(hba);
+ 	ufshcd_clk_scaling_update_busy(hba);
+ }
+@@ -5407,6 +5403,7 @@ void ufshcd_compl_one_cqe(struct ufs_hba *hba, int task_tag,
+ {
+ 	struct ufshcd_lrb *lrbp;
+ 	struct scsi_cmnd *cmd;
++	enum utp_ocs ocs;
+ 
+ 	lrbp = &hba->lrb[task_tag];
+ 	lrbp->compl_time_stamp = ktime_get();
+@@ -5422,8 +5419,11 @@ void ufshcd_compl_one_cqe(struct ufs_hba *hba, int task_tag,
+ 	} else if (lrbp->command_type == UTP_CMD_TYPE_DEV_MANAGE ||
+ 		   lrbp->command_type == UTP_CMD_TYPE_UFS_STORAGE) {
+ 		if (hba->dev_cmd.complete) {
+-			hba->dev_cmd.cqe = cqe;
+-			ufshcd_add_command_trace(hba, task_tag, UFS_DEV_COMP);
++			if (cqe) {
++				ocs = le32_to_cpu(cqe->status) & MASK_OCS;
++				lrbp->utr_descriptor_ptr->header.dword_2 =
++					cpu_to_le32(ocs);
++			}
+ 			complete(hba->dev_cmd.complete);
+ 			ufshcd_clk_scaling_update_busy(hba);
+ 		}
+@@ -7006,7 +7006,6 @@ static int ufshcd_issue_devman_upiu_cmd(struct ufs_hba *hba,
+ 	down_read(&hba->clk_scaling_lock);
+ 
+ 	lrbp = &hba->lrb[tag];
+-	WARN_ON(lrbp->cmd);
+ 	lrbp->cmd = NULL;
+ 	lrbp->task_tag = tag;
+ 	lrbp->lun = 0;
+@@ -7178,7 +7177,6 @@ int ufshcd_advanced_rpmb_req_handler(struct ufs_hba *hba, struct utp_upiu_req *r
+ 	down_read(&hba->clk_scaling_lock);
+ 
+ 	lrbp = &hba->lrb[tag];
+-	WARN_ON(lrbp->cmd);
+ 	lrbp->cmd = NULL;
+ 	lrbp->task_tag = tag;
+ 	lrbp->lun = UFS_UPIU_RPMB_WLUN;
+@@ -9153,7 +9151,8 @@ static int ufshcd_execute_start_stop(struct scsi_device *sdev,
+ 	};
+ 
+ 	return scsi_execute_cmd(sdev, cdb, REQ_OP_DRV_IN, /*buffer=*/NULL,
+-			/*bufflen=*/0, /*timeout=*/HZ, /*retries=*/0, &args);
++			/*bufflen=*/0, /*timeout=*/10 * HZ, /*retries=*/0,
++			&args);
+ }
+ 
+ /**
+diff --git a/drivers/vfio/mdev/mdev_core.c b/drivers/vfio/mdev/mdev_core.c
+index 58f91b3bd670c..ed4737de45289 100644
+--- a/drivers/vfio/mdev/mdev_core.c
++++ b/drivers/vfio/mdev/mdev_core.c
+@@ -72,12 +72,6 @@ int mdev_register_parent(struct mdev_parent *parent, struct device *dev,
+ 	parent->nr_types = nr_types;
+ 	atomic_set(&parent->available_instances, mdev_driver->max_instances);
+ 
+-	if (!mdev_bus_compat_class) {
+-		mdev_bus_compat_class = class_compat_register("mdev_bus");
+-		if (!mdev_bus_compat_class)
+-			return -ENOMEM;
+-	}
+-
+ 	ret = parent_create_sysfs_files(parent);
+ 	if (ret)
+ 		return ret;
+@@ -251,13 +245,24 @@ int mdev_device_remove(struct mdev_device *mdev)
+ 
+ static int __init mdev_init(void)
+ {
+-	return bus_register(&mdev_bus_type);
++	int ret;
++
++	ret = bus_register(&mdev_bus_type);
++	if (ret)
++		return ret;
++
++	mdev_bus_compat_class = class_compat_register("mdev_bus");
++	if (!mdev_bus_compat_class) {
++		bus_unregister(&mdev_bus_type);
++		return -ENOMEM;
++	}
++
++	return 0;
+ }
+ 
+ static void __exit mdev_exit(void)
+ {
+-	if (mdev_bus_compat_class)
+-		class_compat_unregister(mdev_bus_compat_class);
++	class_compat_unregister(mdev_bus_compat_class);
+ 	bus_unregister(&mdev_bus_type);
+ }
+ 
+diff --git a/drivers/video/fbdev/omap/lcd_mipid.c b/drivers/video/fbdev/omap/lcd_mipid.c
+index 03cff39d392db..cc1079aad61f2 100644
+--- a/drivers/video/fbdev/omap/lcd_mipid.c
++++ b/drivers/video/fbdev/omap/lcd_mipid.c
+@@ -563,11 +563,15 @@ static int mipid_spi_probe(struct spi_device *spi)
+ 
+ 	r = mipid_detect(md);
+ 	if (r < 0)
+-		return r;
++		goto free_md;
+ 
+ 	omapfb_register_panel(&md->panel);
+ 
+ 	return 0;
++
++free_md:
++	kfree(md);
++	return r;
+ }
+ 
+ static void mipid_spi_remove(struct spi_device *spi)
+diff --git a/drivers/virt/coco/sev-guest/Kconfig b/drivers/virt/coco/sev-guest/Kconfig
+index f9db0799ae67c..da2d7ca531f0f 100644
+--- a/drivers/virt/coco/sev-guest/Kconfig
++++ b/drivers/virt/coco/sev-guest/Kconfig
+@@ -2,6 +2,7 @@ config SEV_GUEST
+ 	tristate "AMD SEV Guest driver"
+ 	default m
+ 	depends on AMD_MEM_ENCRYPT
++	select CRYPTO
+ 	select CRYPTO_AEAD2
+ 	select CRYPTO_GCM
+ 	help
+diff --git a/fs/btrfs/bio.c b/fs/btrfs/bio.c
+index ada899613486a..4bb2c6f4ad0e7 100644
+--- a/fs/btrfs/bio.c
++++ b/fs/btrfs/bio.c
+@@ -59,30 +59,30 @@ struct bio *btrfs_bio_alloc(unsigned int nr_vecs, blk_opf_t opf,
+ 	return bio;
+ }
+ 
+-static struct bio *btrfs_split_bio(struct btrfs_fs_info *fs_info,
+-				   struct bio *orig, u64 map_length,
+-				   bool use_append)
++static struct btrfs_bio *btrfs_split_bio(struct btrfs_fs_info *fs_info,
++					 struct btrfs_bio *orig_bbio,
++					 u64 map_length, bool use_append)
+ {
+-	struct btrfs_bio *orig_bbio = btrfs_bio(orig);
++	struct btrfs_bio *bbio;
+ 	struct bio *bio;
+ 
+ 	if (use_append) {
+ 		unsigned int nr_segs;
+ 
+-		bio = bio_split_rw(orig, &fs_info->limits, &nr_segs,
++		bio = bio_split_rw(&orig_bbio->bio, &fs_info->limits, &nr_segs,
+ 				   &btrfs_clone_bioset, map_length);
+ 	} else {
+-		bio = bio_split(orig, map_length >> SECTOR_SHIFT, GFP_NOFS,
+-				&btrfs_clone_bioset);
++		bio = bio_split(&orig_bbio->bio, map_length >> SECTOR_SHIFT,
++				GFP_NOFS, &btrfs_clone_bioset);
+ 	}
+-	btrfs_bio_init(btrfs_bio(bio), orig_bbio->inode, NULL, orig_bbio);
++	bbio = btrfs_bio(bio);
++	btrfs_bio_init(bbio, orig_bbio->inode, NULL, orig_bbio);
+ 
+-	btrfs_bio(bio)->file_offset = orig_bbio->file_offset;
+-	if (!(orig->bi_opf & REQ_BTRFS_ONE_ORDERED))
+-		orig_bbio->file_offset += map_length;
++	bbio->file_offset = orig_bbio->file_offset;
++	orig_bbio->file_offset += map_length;
+ 
+ 	atomic_inc(&orig_bbio->pending_ios);
+-	return bio;
++	return bbio;
+ }
+ 
+ static void btrfs_orig_write_end_io(struct bio *bio);
+@@ -631,8 +631,8 @@ static bool btrfs_submit_chunk(struct bio *bio, int mirror_num)
+ 		map_length = min(map_length, fs_info->max_zone_append_size);
+ 
+ 	if (map_length < length) {
+-		bio = btrfs_split_bio(fs_info, bio, map_length, use_append);
+-		bbio = btrfs_bio(bio);
++		bbio = btrfs_split_bio(fs_info, bbio, map_length, use_append);
++		bio = &bbio->bio;
+ 	}
+ 
+ 	/*
+diff --git a/fs/cifs/cifs_debug.c b/fs/cifs/cifs_debug.c
+index d4ed200a94714..7e0667bdfba07 100644
+--- a/fs/cifs/cifs_debug.c
++++ b/fs/cifs/cifs_debug.c
+@@ -121,6 +121,12 @@ static void cifs_debug_tcon(struct seq_file *m, struct cifs_tcon *tcon)
+ 		seq_puts(m, " nosparse");
+ 	if (tcon->need_reconnect)
+ 		seq_puts(m, "\tDISCONNECTED ");
++	spin_lock(&tcon->tc_lock);
++	if (tcon->origin_fullpath) {
++		seq_printf(m, "\n\tDFS origin fullpath: %s",
++			   tcon->origin_fullpath);
++	}
++	spin_unlock(&tcon->tc_lock);
+ 	seq_putc(m, '\n');
+ }
+ 
+@@ -377,13 +383,9 @@ skip_rdma:
+ 		seq_printf(m, "\nIn Send: %d In MaxReq Wait: %d",
+ 				atomic_read(&server->in_send),
+ 				atomic_read(&server->num_waiters));
+-		if (IS_ENABLED(CONFIG_CIFS_DFS_UPCALL)) {
+-			if (server->origin_fullpath)
+-				seq_printf(m, "\nDFS origin full path: %s",
+-					   server->origin_fullpath);
+-			if (server->leaf_fullpath)
+-				seq_printf(m, "\nDFS leaf full path:   %s",
+-					   server->leaf_fullpath);
++		if (server->leaf_fullpath) {
++			seq_printf(m, "\nDFS leaf full path: %s",
++				   server->leaf_fullpath);
+ 		}
+ 
+ 		seq_printf(m, "\n\n\tSessions: ");
+diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h
+index 5f8fd20951af3..56d440772e029 100644
+--- a/fs/cifs/cifsglob.h
++++ b/fs/cifs/cifsglob.h
+@@ -736,23 +736,20 @@ struct TCP_Server_Info {
+ #endif
+ 	struct mutex refpath_lock; /* protects leaf_fullpath */
+ 	/*
+-	 * origin_fullpath: Canonical copy of smb3_fs_context::source.
+-	 *                  It is used for matching existing DFS tcons.
+-	 *
+ 	 * leaf_fullpath: Canonical DFS referral path related to this
+ 	 *                connection.
+ 	 *                It is used in DFS cache refresher, reconnect and may
+ 	 *                change due to nested DFS links.
+ 	 *
+-	 * Both protected by @refpath_lock and @srv_lock.  The @refpath_lock is
+-	 * mosly used for not requiring a copy of @leaf_fullpath when getting
++	 * Protected by @refpath_lock and @srv_lock.  The @refpath_lock is
++	 * mostly used for not requiring a copy of @leaf_fullpath when getting
+ 	 * cached or new DFS referrals (which might also sleep during I/O).
+ 	 * While @srv_lock is held for making string and NULL comparions against
+ 	 * both fields as in mount(2) and cache refresh.
+ 	 *
+ 	 * format: \\HOST\SHARE[\OPTIONAL PATH]
+ 	 */
+-	char *origin_fullpath, *leaf_fullpath;
++	char *leaf_fullpath;
+ };
+ 
+ static inline bool is_smb1(struct TCP_Server_Info *server)
+@@ -1242,6 +1239,7 @@ struct cifs_tcon {
+ 	struct delayed_work dfs_cache_work;
+ #endif
+ 	struct delayed_work	query_interfaces; /* query interfaces workqueue job */
++	char *origin_fullpath; /* canonical copy of smb3_fs_context::source */
+ };
+ 
+ /*
+diff --git a/fs/cifs/cifsproto.h b/fs/cifs/cifsproto.h
+index c1c704990b986..16eac67fd48ac 100644
+--- a/fs/cifs/cifsproto.h
++++ b/fs/cifs/cifsproto.h
+@@ -649,7 +649,7 @@ int smb2_parse_query_directory(struct cifs_tcon *tcon, struct kvec *rsp_iov,
+ 			       int resp_buftype,
+ 			       struct cifs_search_info *srch_inf);
+ 
+-struct super_block *cifs_get_tcp_super(struct TCP_Server_Info *server);
++struct super_block *cifs_get_dfs_tcon_super(struct cifs_tcon *tcon);
+ void cifs_put_tcp_super(struct super_block *sb);
+ int cifs_update_super_prepath(struct cifs_sb_info *cifs_sb, char *prefix);
+ char *extract_hostname(const char *unc);
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index 1250d156619b7..f0189afd940cf 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -996,7 +996,6 @@ static void clean_demultiplex_info(struct TCP_Server_Info *server)
+ 		 */
+ 	}
+ 
+-	kfree(server->origin_fullpath);
+ 	kfree(server->leaf_fullpath);
+ 	kfree(server);
+ 
+@@ -1386,7 +1385,9 @@ match_security(struct TCP_Server_Info *server, struct smb3_fs_context *ctx)
+ }
+ 
+ /* this function must be called with srv_lock held */
+-static int match_server(struct TCP_Server_Info *server, struct smb3_fs_context *ctx)
++static int match_server(struct TCP_Server_Info *server,
++			struct smb3_fs_context *ctx,
++			bool match_super)
+ {
+ 	struct sockaddr *addr = (struct sockaddr *)&ctx->dstaddr;
+ 
+@@ -1417,36 +1418,38 @@ static int match_server(struct TCP_Server_Info *server, struct smb3_fs_context *
+ 			       (struct sockaddr *)&server->srcaddr))
+ 		return 0;
+ 	/*
+-	 * - Match for an DFS tcon (@server->origin_fullpath).
+-	 * - Match for an DFS root server connection (@server->leaf_fullpath).
+-	 * - If none of the above and @ctx->leaf_fullpath is set, then
+-	 *   it is a new DFS connection.
+-	 * - If 'nodfs' mount option was passed, then match only connections
+-	 *   that have no DFS referrals set
+-	 *   (e.g. can't failover to other targets).
++	 * When matching cifs.ko superblocks (@match_super == true), we can't
++	 * really match either @server->leaf_fullpath or @server->dstaddr
++	 * directly since this @server might belong to a completely different
++	 * server -- in case of domain-based DFS referrals or DFS links -- as
++	 * provided earlier by mount(2) through 'source' and 'ip' options.
++	 *
++	 * Otherwise, match the DFS referral in @server->leaf_fullpath or the
++	 * destination address in @server->dstaddr.
++	 *
++	 * When using 'nodfs' mount option, we avoid sharing it with DFS
++	 * connections as they might failover.
+ 	 */
+-	if (!ctx->nodfs) {
+-		if (ctx->source && server->origin_fullpath) {
+-			if (!dfs_src_pathname_equal(ctx->source,
+-						    server->origin_fullpath))
++	if (!match_super) {
++		if (!ctx->nodfs) {
++			if (server->leaf_fullpath) {
++				if (!ctx->leaf_fullpath ||
++				    strcasecmp(server->leaf_fullpath,
++					       ctx->leaf_fullpath))
++					return 0;
++			} else if (ctx->leaf_fullpath) {
+ 				return 0;
++			}
+ 		} else if (server->leaf_fullpath) {
+-			if (!ctx->leaf_fullpath ||
+-			    strcasecmp(server->leaf_fullpath,
+-				       ctx->leaf_fullpath))
+-				return 0;
+-		} else if (ctx->leaf_fullpath) {
+ 			return 0;
+ 		}
+-	} else if (server->origin_fullpath || server->leaf_fullpath) {
+-		return 0;
+ 	}
+ 
+ 	/*
+ 	 * Match for a regular connection (address/hostname/port) which has no
+ 	 * DFS referrals set.
+ 	 */
+-	if (!server->origin_fullpath && !server->leaf_fullpath &&
++	if (!server->leaf_fullpath &&
+ 	    (strcasecmp(server->hostname, ctx->server_hostname) ||
+ 	     !match_server_address(server, addr) ||
+ 	     !match_port(server, addr)))
+@@ -1482,7 +1485,8 @@ cifs_find_tcp_session(struct smb3_fs_context *ctx)
+ 		 * Skip ses channels since they're only handled in lower layers
+ 		 * (e.g. cifs_send_recv).
+ 		 */
+-		if (CIFS_SERVER_IS_CHAN(server) || !match_server(server, ctx)) {
++		if (CIFS_SERVER_IS_CHAN(server) ||
++		    !match_server(server, ctx, false)) {
+ 			spin_unlock(&server->srv_lock);
+ 			continue;
+ 		}
+@@ -2270,10 +2274,16 @@ static int match_tcon(struct cifs_tcon *tcon, struct smb3_fs_context *ctx)
+ 
+ 	if (tcon->status == TID_EXITING)
+ 		return 0;
+-	/* Skip UNC validation when matching DFS connections or superblocks */
+-	if (!server->origin_fullpath && !server->leaf_fullpath &&
+-	    strncmp(tcon->tree_name, ctx->UNC, MAX_TREE_SIZE))
++
++	if (tcon->origin_fullpath) {
++		if (!ctx->source ||
++		    !dfs_src_pathname_equal(ctx->source,
++					    tcon->origin_fullpath))
++			return 0;
++	} else if (!server->leaf_fullpath &&
++		   strncmp(tcon->tree_name, ctx->UNC, MAX_TREE_SIZE)) {
+ 		return 0;
++	}
+ 	if (tcon->seal != ctx->seal)
+ 		return 0;
+ 	if (tcon->snapshot_time != ctx->snapshot_time)
+@@ -2672,7 +2682,7 @@ compare_mount_options(struct super_block *sb, struct cifs_mnt_data *mnt_data)
+ }
+ 
+ static int match_prepath(struct super_block *sb,
+-			 struct TCP_Server_Info *server,
++			 struct cifs_tcon *tcon,
+ 			 struct cifs_mnt_data *mnt_data)
+ {
+ 	struct smb3_fs_context *ctx = mnt_data->ctx;
+@@ -2683,8 +2693,8 @@ static int match_prepath(struct super_block *sb,
+ 	bool new_set = (new->mnt_cifs_flags & CIFS_MOUNT_USE_PREFIX_PATH) &&
+ 		new->prepath;
+ 
+-	if (server->origin_fullpath &&
+-	    dfs_src_pathname_equal(server->origin_fullpath, ctx->source))
++	if (tcon->origin_fullpath &&
++	    dfs_src_pathname_equal(tcon->origin_fullpath, ctx->source))
+ 		return 1;
+ 
+ 	if (old_set && new_set && !strcmp(new->prepath, old->prepath))
+@@ -2732,10 +2742,10 @@ cifs_match_super(struct super_block *sb, void *data)
+ 	spin_lock(&ses->ses_lock);
+ 	spin_lock(&ses->chan_lock);
+ 	spin_lock(&tcon->tc_lock);
+-	if (!match_server(tcp_srv, ctx) ||
++	if (!match_server(tcp_srv, ctx, true) ||
+ 	    !match_session(ses, ctx) ||
+ 	    !match_tcon(tcon, ctx) ||
+-	    !match_prepath(sb, tcp_srv, mnt_data)) {
++	    !match_prepath(sb, tcon, mnt_data)) {
+ 		rc = 0;
+ 		goto out;
+ 	}
+diff --git a/fs/cifs/dfs.c b/fs/cifs/dfs.c
+index 2390b2fedd6a3..267536a7531df 100644
+--- a/fs/cifs/dfs.c
++++ b/fs/cifs/dfs.c
+@@ -249,14 +249,12 @@ static int __dfs_mount_share(struct cifs_mount_ctx *mnt_ctx)
+ 		server = mnt_ctx->server;
+ 		tcon = mnt_ctx->tcon;
+ 
+-		mutex_lock(&server->refpath_lock);
+-		spin_lock(&server->srv_lock);
+-		if (!server->origin_fullpath) {
+-			server->origin_fullpath = origin_fullpath;
++		spin_lock(&tcon->tc_lock);
++		if (!tcon->origin_fullpath) {
++			tcon->origin_fullpath = origin_fullpath;
+ 			origin_fullpath = NULL;
+ 		}
+-		spin_unlock(&server->srv_lock);
+-		mutex_unlock(&server->refpath_lock);
++		spin_unlock(&tcon->tc_lock);
+ 
+ 		if (list_empty(&tcon->dfs_ses_list)) {
+ 			list_replace_init(&mnt_ctx->dfs_ses_list,
+@@ -279,18 +277,13 @@ int dfs_mount_share(struct cifs_mount_ctx *mnt_ctx, bool *isdfs)
+ {
+ 	struct smb3_fs_context *ctx = mnt_ctx->fs_ctx;
+ 	struct cifs_ses *ses;
+-	char *source = ctx->source;
+ 	bool nodfs = ctx->nodfs;
+ 	int rc;
+ 
+ 	*isdfs = false;
+-	/* Temporarily set @ctx->source to NULL as we're not matching DFS
+-	 * superblocks yet.  See cifs_match_super() and match_server().
+-	 */
+-	ctx->source = NULL;
+ 	rc = get_session(mnt_ctx, NULL);
+ 	if (rc)
+-		goto out;
++		return rc;
+ 
+ 	ctx->dfs_root_ses = mnt_ctx->ses;
+ 	/*
+@@ -304,7 +297,7 @@ int dfs_mount_share(struct cifs_mount_ctx *mnt_ctx, bool *isdfs)
+ 		rc = dfs_get_referral(mnt_ctx, ctx->UNC + 1, NULL, NULL);
+ 		if (rc) {
+ 			if (rc != -ENOENT && rc != -EOPNOTSUPP && rc != -EIO)
+-				goto out;
++				return rc;
+ 			nodfs = true;
+ 		}
+ 	}
+@@ -312,7 +305,7 @@ int dfs_mount_share(struct cifs_mount_ctx *mnt_ctx, bool *isdfs)
+ 		rc = cifs_mount_get_tcon(mnt_ctx);
+ 		if (!rc)
+ 			rc = cifs_is_path_remote(mnt_ctx);
+-		goto out;
++		return rc;
+ 	}
+ 
+ 	*isdfs = true;
+@@ -328,12 +321,7 @@ int dfs_mount_share(struct cifs_mount_ctx *mnt_ctx, bool *isdfs)
+ 	rc = __dfs_mount_share(mnt_ctx);
+ 	if (ses == ctx->dfs_root_ses)
+ 		cifs_put_smb_ses(ses);
+-out:
+-	/*
+-	 * Restore previous value of @ctx->source so DFS superblock can be
+-	 * matched in cifs_match_super().
+-	 */
+-	ctx->source = source;
++
+ 	return rc;
+ }
+ 
+@@ -567,11 +555,11 @@ int cifs_tree_connect(const unsigned int xid, struct cifs_tcon *tcon, const stru
+ 	int rc;
+ 	struct TCP_Server_Info *server = tcon->ses->server;
+ 	const struct smb_version_operations *ops = server->ops;
+-	struct super_block *sb = NULL;
+-	struct cifs_sb_info *cifs_sb;
+ 	struct dfs_cache_tgt_list tl = DFS_CACHE_TGT_LIST_INIT(tl);
+-	char *tree;
++	struct cifs_sb_info *cifs_sb = NULL;
++	struct super_block *sb = NULL;
+ 	struct dfs_info3_param ref = {0};
++	char *tree;
+ 
+ 	/* only send once per connect */
+ 	spin_lock(&tcon->tc_lock);
+@@ -603,19 +591,18 @@ int cifs_tree_connect(const unsigned int xid, struct cifs_tcon *tcon, const stru
+ 		goto out;
+ 	}
+ 
+-	sb = cifs_get_tcp_super(server);
+-	if (IS_ERR(sb)) {
+-		rc = PTR_ERR(sb);
+-		cifs_dbg(VFS, "%s: could not find superblock: %d\n", __func__, rc);
+-		goto out;
+-	}
+-
+-	cifs_sb = CIFS_SB(sb);
++	sb = cifs_get_dfs_tcon_super(tcon);
++	if (!IS_ERR(sb))
++		cifs_sb = CIFS_SB(sb);
+ 
+-	/* If it is not dfs or there was no cached dfs referral, then reconnect to same share */
+-	if (!server->leaf_fullpath ||
++	/*
++	 * Tree connect to last share in @tcon->tree_name whether dfs super or
++	 * cached dfs referral was not found.
++	 */
++	if (!cifs_sb || !server->leaf_fullpath ||
+ 	    dfs_cache_noreq_find(server->leaf_fullpath + 1, &ref, &tl)) {
+-		rc = ops->tree_connect(xid, tcon->ses, tcon->tree_name, tcon, cifs_sb->local_nls);
++		rc = ops->tree_connect(xid, tcon->ses, tcon->tree_name, tcon,
++				       cifs_sb ? cifs_sb->local_nls : nlsc);
+ 		goto out;
+ 	}
+ 
+diff --git a/fs/cifs/dfs.h b/fs/cifs/dfs.h
+index 1c90df5ecfbda..98e9d2aca6a7a 100644
+--- a/fs/cifs/dfs.h
++++ b/fs/cifs/dfs.h
+@@ -39,16 +39,15 @@ static inline char *dfs_get_automount_devname(struct dentry *dentry, void *page)
+ {
+ 	struct cifs_sb_info *cifs_sb = CIFS_SB(dentry->d_sb);
+ 	struct cifs_tcon *tcon = cifs_sb_master_tcon(cifs_sb);
+-	struct TCP_Server_Info *server = tcon->ses->server;
+ 	size_t len;
+ 	char *s;
+ 
+-	spin_lock(&server->srv_lock);
+-	if (unlikely(!server->origin_fullpath)) {
+-		spin_unlock(&server->srv_lock);
++	spin_lock(&tcon->tc_lock);
++	if (unlikely(!tcon->origin_fullpath)) {
++		spin_unlock(&tcon->tc_lock);
+ 		return ERR_PTR(-EREMOTE);
+ 	}
+-	spin_unlock(&server->srv_lock);
++	spin_unlock(&tcon->tc_lock);
+ 
+ 	s = dentry_path_raw(dentry, page, PATH_MAX);
+ 	if (IS_ERR(s))
+@@ -57,16 +56,16 @@ static inline char *dfs_get_automount_devname(struct dentry *dentry, void *page)
+ 	if (!s[1])
+ 		s++;
+ 
+-	spin_lock(&server->srv_lock);
+-	len = strlen(server->origin_fullpath);
++	spin_lock(&tcon->tc_lock);
++	len = strlen(tcon->origin_fullpath);
+ 	if (s < (char *)page + len) {
+-		spin_unlock(&server->srv_lock);
++		spin_unlock(&tcon->tc_lock);
+ 		return ERR_PTR(-ENAMETOOLONG);
+ 	}
+ 
+ 	s -= len;
+-	memcpy(s, server->origin_fullpath, len);
+-	spin_unlock(&server->srv_lock);
++	memcpy(s, tcon->origin_fullpath, len);
++	spin_unlock(&tcon->tc_lock);
+ 	convert_delimiter(s, '/');
+ 
+ 	return s;
+diff --git a/fs/cifs/dfs_cache.c b/fs/cifs/dfs_cache.c
+index 1513b2709889b..33adf43a01f1d 100644
+--- a/fs/cifs/dfs_cache.c
++++ b/fs/cifs/dfs_cache.c
+@@ -1248,18 +1248,20 @@ static int refresh_tcon(struct cifs_tcon *tcon, bool force_refresh)
+ int dfs_cache_remount_fs(struct cifs_sb_info *cifs_sb)
+ {
+ 	struct cifs_tcon *tcon;
+-	struct TCP_Server_Info *server;
+ 
+ 	if (!cifs_sb || !cifs_sb->master_tlink)
+ 		return -EINVAL;
+ 
+ 	tcon = cifs_sb_master_tcon(cifs_sb);
+-	server = tcon->ses->server;
+ 
+-	if (!server->origin_fullpath) {
++	spin_lock(&tcon->tc_lock);
++	if (!tcon->origin_fullpath) {
++		spin_unlock(&tcon->tc_lock);
+ 		cifs_dbg(FYI, "%s: not a dfs mount\n", __func__);
+ 		return 0;
+ 	}
++	spin_unlock(&tcon->tc_lock);
++
+ 	/*
+ 	 * After reconnecting to a different server, unique ids won't match anymore, so we disable
+ 	 * serverino. This prevents dentry revalidation to think the dentry are stale (ESTALE).
+diff --git a/fs/cifs/file.c b/fs/cifs/file.c
+index 051283386e229..1a854dc204823 100644
+--- a/fs/cifs/file.c
++++ b/fs/cifs/file.c
+@@ -4936,20 +4936,19 @@ oplock_break_ack:
+ 
+ 	_cifsFileInfo_put(cfile, false /* do not wait for ourself */, false);
+ 	/*
+-	 * releasing stale oplock after recent reconnect of smb session using
+-	 * a now incorrect file handle is not a data integrity issue but do
+-	 * not bother sending an oplock release if session to server still is
+-	 * disconnected since oplock already released by the server
++	 * MS-SMB2 3.2.5.19.1 and 3.2.5.19.2 (and MS-CIFS 3.2.5.42) do not require
++	 * an acknowledgment to be sent when the file has already been closed.
++	 * check for server null, since can race with kill_sb calling tree disconnect.
+ 	 */
+-	if (!oplock_break_cancelled) {
+-		/* check for server null since can race with kill_sb calling tree disconnect */
+-		if (tcon->ses && tcon->ses->server) {
+-			rc = tcon->ses->server->ops->oplock_response(tcon, persistent_fid,
+-				volatile_fid, net_fid, cinode);
+-			cifs_dbg(FYI, "Oplock release rc = %d\n", rc);
+-		} else
+-			pr_warn_once("lease break not sent for unmounted share\n");
+-	}
++	spin_lock(&cinode->open_file_lock);
++	if (tcon->ses && tcon->ses->server && !oplock_break_cancelled &&
++					!list_empty(&cinode->openFileList)) {
++		spin_unlock(&cinode->open_file_lock);
++		rc = tcon->ses->server->ops->oplock_response(tcon, persistent_fid,
++						volatile_fid, net_fid, cinode);
++		cifs_dbg(FYI, "Oplock release rc = %d\n", rc);
++	} else
++		spin_unlock(&cinode->open_file_lock);
+ 
+ 	cifs_done_oplock_break(cinode);
+ }
+diff --git a/fs/cifs/misc.c b/fs/cifs/misc.c
+index cd914be905b24..b0dedc26643b6 100644
+--- a/fs/cifs/misc.c
++++ b/fs/cifs/misc.c
+@@ -156,6 +156,7 @@ tconInfoFree(struct cifs_tcon *tcon)
+ #ifdef CONFIG_CIFS_DFS_UPCALL
+ 	dfs_put_root_smb_sessions(&tcon->dfs_ses_list);
+ #endif
++	kfree(tcon->origin_fullpath);
+ 	kfree(tcon);
+ }
+ 
+@@ -1106,20 +1107,25 @@ struct super_cb_data {
+ 	struct super_block *sb;
+ };
+ 
+-static void tcp_super_cb(struct super_block *sb, void *arg)
++static void tcon_super_cb(struct super_block *sb, void *arg)
+ {
+ 	struct super_cb_data *sd = arg;
+-	struct TCP_Server_Info *server = sd->data;
+ 	struct cifs_sb_info *cifs_sb;
+-	struct cifs_tcon *tcon;
++	struct cifs_tcon *t1 = sd->data, *t2;
+ 
+ 	if (sd->sb)
+ 		return;
+ 
+ 	cifs_sb = CIFS_SB(sb);
+-	tcon = cifs_sb_master_tcon(cifs_sb);
+-	if (tcon->ses->server == server)
++	t2 = cifs_sb_master_tcon(cifs_sb);
++
++	spin_lock(&t2->tc_lock);
++	if (t1->ses == t2->ses &&
++	    t1->ses->server == t2->ses->server &&
++	    t2->origin_fullpath &&
++	    dfs_src_pathname_equal(t2->origin_fullpath, t1->origin_fullpath))
+ 		sd->sb = sb;
++	spin_unlock(&t2->tc_lock);
+ }
+ 
+ static struct super_block *__cifs_get_super(void (*f)(struct super_block *, void *),
+@@ -1145,6 +1151,7 @@ static struct super_block *__cifs_get_super(void (*f)(struct super_block *, void
+ 			return sd.sb;
+ 		}
+ 	}
++	pr_warn_once("%s: could not find dfs superblock\n", __func__);
+ 	return ERR_PTR(-EINVAL);
+ }
+ 
+@@ -1154,9 +1161,15 @@ static void __cifs_put_super(struct super_block *sb)
+ 		cifs_sb_deactive(sb);
+ }
+ 
+-struct super_block *cifs_get_tcp_super(struct TCP_Server_Info *server)
++struct super_block *cifs_get_dfs_tcon_super(struct cifs_tcon *tcon)
+ {
+-	return __cifs_get_super(tcp_super_cb, server);
++	spin_lock(&tcon->tc_lock);
++	if (!tcon->origin_fullpath) {
++		spin_unlock(&tcon->tc_lock);
++		return ERR_PTR(-ENOENT);
++	}
++	spin_unlock(&tcon->tc_lock);
++	return __cifs_get_super(tcon_super_cb, tcon);
+ }
+ 
+ void cifs_put_tcp_super(struct super_block *sb)
+@@ -1238,9 +1251,16 @@ int cifs_inval_name_dfs_link_error(const unsigned int xid,
+ 	 */
+ 	if (strlen(full_path) < 2 || !cifs_sb ||
+ 	    (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NO_DFS) ||
+-	    !is_tcon_dfs(tcon) || !ses->server->origin_fullpath)
++	    !is_tcon_dfs(tcon))
+ 		return 0;
+ 
++	spin_lock(&tcon->tc_lock);
++	if (!tcon->origin_fullpath) {
++		spin_unlock(&tcon->tc_lock);
++		return 0;
++	}
++	spin_unlock(&tcon->tc_lock);
++
+ 	/*
+ 	 * Slow path - tcon is DFS and @full_path has prefix path, so attempt
+ 	 * to get a referral to figure out whether it is an DFS link.
+@@ -1264,7 +1284,7 @@ int cifs_inval_name_dfs_link_error(const unsigned int xid,
+ 
+ 		/*
+ 		 * XXX: we are not using dfs_cache_find() here because we might
+-		 * end filling all the DFS cache and thus potentially
++		 * end up filling all the DFS cache and thus potentially
+ 		 * removing cached DFS targets that the client would eventually
+ 		 * need during failover.
+ 		 */
+diff --git a/fs/cifs/smb2inode.c b/fs/cifs/smb2inode.c
+index 163a03298430d..8e696fbd72fa8 100644
+--- a/fs/cifs/smb2inode.c
++++ b/fs/cifs/smb2inode.c
+@@ -398,9 +398,6 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ 					rsp_iov);
+ 
+  finished:
+-	if (cfile)
+-		cifsFileInfo_put(cfile);
+-
+ 	SMB2_open_free(&rqst[0]);
+ 	if (rc == -EREMCHG) {
+ 		pr_warn_once("server share %s deleted\n", tcon->tree_name);
+@@ -529,6 +526,9 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ 		break;
+ 	}
+ 
++	if (cfile)
++		cifsFileInfo_put(cfile);
++
+ 	if (rc && err_iov && err_buftype) {
+ 		memcpy(err_iov, rsp_iov, 3 * sizeof(*err_iov));
+ 		memcpy(err_buftype, resp_buftype, 3 * sizeof(*err_buftype));
+@@ -609,9 +609,6 @@ int smb2_query_path_info(const unsigned int xid, struct cifs_tcon *tcon,
+ 			if (islink)
+ 				rc = -EREMOTE;
+ 		}
+-		if (rc == -EREMOTE && IS_ENABLED(CONFIG_CIFS_DFS_UPCALL) && cifs_sb &&
+-		    (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NO_DFS))
+-			rc = -EOPNOTSUPP;
+ 	}
+ 
+ out:
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index 5065398665f11..bb41b9bae262d 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -208,6 +208,16 @@ smb2_wait_mtu_credits(struct TCP_Server_Info *server, unsigned int size,
+ 
+ 	spin_lock(&server->req_lock);
+ 	while (1) {
++		spin_unlock(&server->req_lock);
++
++		spin_lock(&server->srv_lock);
++		if (server->tcpStatus == CifsExiting) {
++			spin_unlock(&server->srv_lock);
++			return -ENOENT;
++		}
++		spin_unlock(&server->srv_lock);
++
++		spin_lock(&server->req_lock);
+ 		if (server->credits <= 0) {
+ 			spin_unlock(&server->req_lock);
+ 			cifs_num_waiters_inc(server);
+@@ -218,15 +228,6 @@ smb2_wait_mtu_credits(struct TCP_Server_Info *server, unsigned int size,
+ 				return rc;
+ 			spin_lock(&server->req_lock);
+ 		} else {
+-			spin_unlock(&server->req_lock);
+-			spin_lock(&server->srv_lock);
+-			if (server->tcpStatus == CifsExiting) {
+-				spin_unlock(&server->srv_lock);
+-				return -ENOENT;
+-			}
+-			spin_unlock(&server->srv_lock);
+-
+-			spin_lock(&server->req_lock);
+ 			scredits = server->credits;
+ 			/* can deadlock with reopen */
+ 			if (scredits <= 8) {
+diff --git a/fs/cifs/transport.c b/fs/cifs/transport.c
+index 24bdd5f4d3bcc..968bfd029b8eb 100644
+--- a/fs/cifs/transport.c
++++ b/fs/cifs/transport.c
+@@ -522,6 +522,16 @@ wait_for_free_credits(struct TCP_Server_Info *server, const int num_credits,
+ 	}
+ 
+ 	while (1) {
++		spin_unlock(&server->req_lock);
++
++		spin_lock(&server->srv_lock);
++		if (server->tcpStatus == CifsExiting) {
++			spin_unlock(&server->srv_lock);
++			return -ENOENT;
++		}
++		spin_unlock(&server->srv_lock);
++
++		spin_lock(&server->req_lock);
+ 		if (*credits < num_credits) {
+ 			scredits = *credits;
+ 			spin_unlock(&server->req_lock);
+@@ -547,15 +557,6 @@ wait_for_free_credits(struct TCP_Server_Info *server, const int num_credits,
+ 				return -ERESTARTSYS;
+ 			spin_lock(&server->req_lock);
+ 		} else {
+-			spin_unlock(&server->req_lock);
+-
+-			spin_lock(&server->srv_lock);
+-			if (server->tcpStatus == CifsExiting) {
+-				spin_unlock(&server->srv_lock);
+-				return -ENOENT;
+-			}
+-			spin_unlock(&server->srv_lock);
+-
+ 			/*
+ 			 * For normal commands, reserve the last MAX_COMPOUND
+ 			 * credits to compound requests.
+@@ -569,7 +570,6 @@ wait_for_free_credits(struct TCP_Server_Info *server, const int num_credits,
+ 			 * for servers that are slow to hand out credits on
+ 			 * new sessions.
+ 			 */
+-			spin_lock(&server->req_lock);
+ 			if (!optype && num_credits == 1 &&
+ 			    server->in_flight > 2 * MAX_COMPOUND &&
+ 			    *credits <= MAX_COMPOUND) {
+diff --git a/fs/erofs/zdata.c b/fs/erofs/zdata.c
+index d7add72a09437..72325d4b98f9d 100644
+--- a/fs/erofs/zdata.c
++++ b/fs/erofs/zdata.c
+@@ -94,11 +94,8 @@ struct z_erofs_pcluster {
+ 
+ /* let's avoid the valid 32-bit kernel addresses */
+ 
+-/* the chained workgroup has't submitted io (still open) */
++/* the end of a chain of pclusters */
+ #define Z_EROFS_PCLUSTER_TAIL           ((void *)0x5F0ECAFE)
+-/* the chained workgroup has already submitted io */
+-#define Z_EROFS_PCLUSTER_TAIL_CLOSED    ((void *)0x5F0EDEAD)
+-
+ #define Z_EROFS_PCLUSTER_NIL            (NULL)
+ 
+ struct z_erofs_decompressqueue {
+@@ -499,20 +496,6 @@ out_error_pcluster_pool:
+ 
+ enum z_erofs_pclustermode {
+ 	Z_EROFS_PCLUSTER_INFLIGHT,
+-	/*
+-	 * The current pclusters was the tail of an exist chain, in addition
+-	 * that the previous processed chained pclusters are all decided to
+-	 * be hooked up to it.
+-	 * A new chain will be created for the remaining pclusters which are
+-	 * not processed yet, so different from Z_EROFS_PCLUSTER_FOLLOWED,
+-	 * the next pcluster cannot reuse the whole page safely for inplace I/O
+-	 * in the following scenario:
+-	 *  ________________________________________________________________
+-	 * |      tail (partial) page     |       head (partial) page       |
+-	 * |   (belongs to the next pcl)  |   (belongs to the current pcl)  |
+-	 * |_______PCLUSTER_FOLLOWED______|________PCLUSTER_HOOKED__________|
+-	 */
+-	Z_EROFS_PCLUSTER_HOOKED,
+ 	/*
+ 	 * a weak form of Z_EROFS_PCLUSTER_FOLLOWED, the difference is that it
+ 	 * could be dispatched into bypass queue later due to uptodated managed
+@@ -530,8 +513,8 @@ enum z_erofs_pclustermode {
+ 	 *  ________________________________________________________________
+ 	 * |  tail (partial) page |          head (partial) page           |
+ 	 * |  (of the current cl) |      (of the previous collection)      |
+-	 * | PCLUSTER_FOLLOWED or |                                        |
+-	 * |_____PCLUSTER_HOOKED__|___________PCLUSTER_FOLLOWED____________|
++	 * |                      |                                        |
++	 * |__PCLUSTER_FOLLOWED___|___________PCLUSTER_FOLLOWED____________|
+ 	 *
+ 	 * [  (*) the above page can be used as inplace I/O.               ]
+ 	 */
+@@ -544,7 +527,7 @@ struct z_erofs_decompress_frontend {
+ 	struct z_erofs_bvec_iter biter;
+ 
+ 	struct page *candidate_bvpage;
+-	struct z_erofs_pcluster *pcl, *tailpcl;
++	struct z_erofs_pcluster *pcl;
+ 	z_erofs_next_pcluster_t owned_head;
+ 	enum z_erofs_pclustermode mode;
+ 
+@@ -750,19 +733,7 @@ static void z_erofs_try_to_claim_pcluster(struct z_erofs_decompress_frontend *f)
+ 		return;
+ 	}
+ 
+-	/*
+-	 * type 2, link to the end of an existing open chain, be careful
+-	 * that its submission is controlled by the original attached chain.
+-	 */
+-	if (*owned_head != &pcl->next && pcl != f->tailpcl &&
+-	    cmpxchg(&pcl->next, Z_EROFS_PCLUSTER_TAIL,
+-		    *owned_head) == Z_EROFS_PCLUSTER_TAIL) {
+-		*owned_head = Z_EROFS_PCLUSTER_TAIL;
+-		f->mode = Z_EROFS_PCLUSTER_HOOKED;
+-		f->tailpcl = NULL;
+-		return;
+-	}
+-	/* type 3, it belongs to a chain, but it isn't the end of the chain */
++	/* type 2, it belongs to an ongoing chain */
+ 	f->mode = Z_EROFS_PCLUSTER_INFLIGHT;
+ }
+ 
+@@ -823,9 +794,6 @@ static int z_erofs_register_pcluster(struct z_erofs_decompress_frontend *fe)
+ 			goto err_out;
+ 		}
+ 	}
+-	/* used to check tail merging loop due to corrupted images */
+-	if (fe->owned_head == Z_EROFS_PCLUSTER_TAIL)
+-		fe->tailpcl = pcl;
+ 	fe->owned_head = &pcl->next;
+ 	fe->pcl = pcl;
+ 	return 0;
+@@ -846,7 +814,6 @@ static int z_erofs_collector_begin(struct z_erofs_decompress_frontend *fe)
+ 
+ 	/* must be Z_EROFS_PCLUSTER_TAIL or pointed to previous pcluster */
+ 	DBG_BUGON(fe->owned_head == Z_EROFS_PCLUSTER_NIL);
+-	DBG_BUGON(fe->owned_head == Z_EROFS_PCLUSTER_TAIL_CLOSED);
+ 
+ 	if (!(map->m_flags & EROFS_MAP_META)) {
+ 		grp = erofs_find_workgroup(fe->inode->i_sb,
+@@ -865,10 +832,6 @@ static int z_erofs_collector_begin(struct z_erofs_decompress_frontend *fe)
+ 
+ 	if (ret == -EEXIST) {
+ 		mutex_lock(&fe->pcl->lock);
+-		/* used to check tail merging loop due to corrupted images */
+-		if (fe->owned_head == Z_EROFS_PCLUSTER_TAIL)
+-			fe->tailpcl = fe->pcl;
+-
+ 		z_erofs_try_to_claim_pcluster(fe);
+ 	} else if (ret) {
+ 		return ret;
+@@ -1025,8 +988,7 @@ hitted:
+ 	 * those chains are handled asynchronously thus the page cannot be used
+ 	 * for inplace I/O or bvpage (should be processed in a strict order.)
+ 	 */
+-	tight &= (fe->mode >= Z_EROFS_PCLUSTER_HOOKED &&
+-		  fe->mode != Z_EROFS_PCLUSTER_FOLLOWED_NOINPLACE);
++	tight &= (fe->mode > Z_EROFS_PCLUSTER_FOLLOWED_NOINPLACE);
+ 
+ 	cur = end - min_t(unsigned int, offset + end - map->m_la, end);
+ 	if (!(map->m_flags & EROFS_MAP_MAPPED)) {
+@@ -1407,10 +1369,7 @@ static void z_erofs_decompress_queue(const struct z_erofs_decompressqueue *io,
+ 	};
+ 	z_erofs_next_pcluster_t owned = io->head;
+ 
+-	while (owned != Z_EROFS_PCLUSTER_TAIL_CLOSED) {
+-		/* impossible that 'owned' equals Z_EROFS_WORK_TPTR_TAIL */
+-		DBG_BUGON(owned == Z_EROFS_PCLUSTER_TAIL);
+-		/* impossible that 'owned' equals Z_EROFS_PCLUSTER_NIL */
++	while (owned != Z_EROFS_PCLUSTER_TAIL) {
+ 		DBG_BUGON(owned == Z_EROFS_PCLUSTER_NIL);
+ 
+ 		be.pcl = container_of(owned, struct z_erofs_pcluster, next);
+@@ -1427,7 +1386,7 @@ static void z_erofs_decompressqueue_work(struct work_struct *work)
+ 		container_of(work, struct z_erofs_decompressqueue, u.work);
+ 	struct page *pagepool = NULL;
+ 
+-	DBG_BUGON(bgq->head == Z_EROFS_PCLUSTER_TAIL_CLOSED);
++	DBG_BUGON(bgq->head == Z_EROFS_PCLUSTER_TAIL);
+ 	z_erofs_decompress_queue(bgq, &pagepool);
+ 	erofs_release_pages(&pagepool);
+ 	kvfree(bgq);
+@@ -1615,7 +1574,7 @@ fg_out:
+ 		q->sync = true;
+ 	}
+ 	q->sb = sb;
+-	q->head = Z_EROFS_PCLUSTER_TAIL_CLOSED;
++	q->head = Z_EROFS_PCLUSTER_TAIL;
+ 	return q;
+ }
+ 
+@@ -1633,11 +1592,7 @@ static void move_to_bypass_jobqueue(struct z_erofs_pcluster *pcl,
+ 	z_erofs_next_pcluster_t *const submit_qtail = qtail[JQ_SUBMIT];
+ 	z_erofs_next_pcluster_t *const bypass_qtail = qtail[JQ_BYPASS];
+ 
+-	DBG_BUGON(owned_head == Z_EROFS_PCLUSTER_TAIL_CLOSED);
+-	if (owned_head == Z_EROFS_PCLUSTER_TAIL)
+-		owned_head = Z_EROFS_PCLUSTER_TAIL_CLOSED;
+-
+-	WRITE_ONCE(pcl->next, Z_EROFS_PCLUSTER_TAIL_CLOSED);
++	WRITE_ONCE(pcl->next, Z_EROFS_PCLUSTER_TAIL);
+ 
+ 	WRITE_ONCE(*submit_qtail, owned_head);
+ 	WRITE_ONCE(*bypass_qtail, &pcl->next);
+@@ -1708,15 +1663,10 @@ static void z_erofs_submit_queue(struct z_erofs_decompress_frontend *f,
+ 		unsigned int i = 0;
+ 		bool bypass = true;
+ 
+-		/* no possible 'owned_head' equals the following */
+-		DBG_BUGON(owned_head == Z_EROFS_PCLUSTER_TAIL_CLOSED);
+ 		DBG_BUGON(owned_head == Z_EROFS_PCLUSTER_NIL);
+-
+ 		pcl = container_of(owned_head, struct z_erofs_pcluster, next);
++		owned_head = READ_ONCE(pcl->next);
+ 
+-		/* close the main owned chain at first */
+-		owned_head = cmpxchg(&pcl->next, Z_EROFS_PCLUSTER_TAIL,
+-				     Z_EROFS_PCLUSTER_TAIL_CLOSED);
+ 		if (z_erofs_is_inline_pcluster(pcl)) {
+ 			move_to_bypass_jobqueue(pcl, qtail, owned_head);
+ 			continue;
+diff --git a/fs/erofs/zmap.c b/fs/erofs/zmap.c
+index b5f4086537548..322f110b3c8f4 100644
+--- a/fs/erofs/zmap.c
++++ b/fs/erofs/zmap.c
+@@ -148,7 +148,7 @@ static int unpack_compacted_index(struct z_erofs_maprecorder *m,
+ 	u8 *in, type;
+ 	bool big_pcluster;
+ 
+-	if (1 << amortizedshift == 4)
++	if (1 << amortizedshift == 4 && lclusterbits <= 14)
+ 		vcnt = 2;
+ 	else if (1 << amortizedshift == 2 && lclusterbits == 12)
+ 		vcnt = 16;
+@@ -250,7 +250,6 @@ static int compacted_load_cluster_from_disk(struct z_erofs_maprecorder *m,
+ {
+ 	struct inode *const inode = m->inode;
+ 	struct erofs_inode *const vi = EROFS_I(inode);
+-	const unsigned int lclusterbits = vi->z_logical_clusterbits;
+ 	const erofs_off_t ebase = sizeof(struct z_erofs_map_header) +
+ 		ALIGN(erofs_iloc(inode) + vi->inode_isize + vi->xattr_isize, 8);
+ 	const unsigned int totalidx = DIV_ROUND_UP(inode->i_size, EROFS_BLKSIZ);
+@@ -258,9 +257,6 @@ static int compacted_load_cluster_from_disk(struct z_erofs_maprecorder *m,
+ 	unsigned int amortizedshift;
+ 	erofs_off_t pos;
+ 
+-	if (lclusterbits != 12)
+-		return -EOPNOTSUPP;
+-
+ 	if (lcn >= totalidx)
+ 		return -EINVAL;
+ 
+diff --git a/fs/ksmbd/smb_common.c b/fs/ksmbd/smb_common.c
+index 569e5eecdf3db..3e391a7d5a3ab 100644
+--- a/fs/ksmbd/smb_common.c
++++ b/fs/ksmbd/smb_common.c
+@@ -536,7 +536,7 @@ int ksmbd_extract_shortname(struct ksmbd_conn *conn, const char *longname,
+ 	out[baselen + 3] = PERIOD;
+ 
+ 	if (dot_present)
+-		memcpy(&out[baselen + 4], extension, 4);
++		memcpy(out + baselen + 4, extension, 4);
+ 	else
+ 		out[baselen + 4] = '\0';
+ 	smbConvertToUTF16((__le16 *)shortname, out, PATH_MAX,
+diff --git a/fs/lockd/svc.c b/fs/lockd/svc.c
+index 9a47303b2cba6..0c05668019c2b 100644
+--- a/fs/lockd/svc.c
++++ b/fs/lockd/svc.c
+@@ -355,7 +355,6 @@ static int lockd_get(void)
+ 	int error;
+ 
+ 	if (nlmsvc_serv) {
+-		svc_get(nlmsvc_serv);
+ 		nlmsvc_users++;
+ 		return 0;
+ 	}
+diff --git a/fs/nfs/nfs42xattr.c b/fs/nfs/nfs42xattr.c
+index 76ae118342066..911f634ba3da7 100644
+--- a/fs/nfs/nfs42xattr.c
++++ b/fs/nfs/nfs42xattr.c
+@@ -991,6 +991,29 @@ static void nfs4_xattr_cache_init_once(void *p)
+ 	INIT_LIST_HEAD(&cache->dispose);
+ }
+ 
++static int nfs4_xattr_shrinker_init(struct shrinker *shrinker,
++				    struct list_lru *lru, const char *name)
++{
++	int ret = 0;
++
++	ret = register_shrinker(shrinker, name);
++	if (ret)
++		return ret;
++
++	ret = list_lru_init_memcg(lru, shrinker);
++	if (ret)
++		unregister_shrinker(shrinker);
++
++	return ret;
++}
++
++static void nfs4_xattr_shrinker_destroy(struct shrinker *shrinker,
++					struct list_lru *lru)
++{
++	unregister_shrinker(shrinker);
++	list_lru_destroy(lru);
++}
++
+ int __init nfs4_xattr_cache_init(void)
+ {
+ 	int ret = 0;
+@@ -1002,44 +1025,30 @@ int __init nfs4_xattr_cache_init(void)
+ 	if (nfs4_xattr_cache_cachep == NULL)
+ 		return -ENOMEM;
+ 
+-	ret = list_lru_init_memcg(&nfs4_xattr_large_entry_lru,
+-	    &nfs4_xattr_large_entry_shrinker);
+-	if (ret)
+-		goto out4;
+-
+-	ret = list_lru_init_memcg(&nfs4_xattr_entry_lru,
+-	    &nfs4_xattr_entry_shrinker);
+-	if (ret)
+-		goto out3;
+-
+-	ret = list_lru_init_memcg(&nfs4_xattr_cache_lru,
+-	    &nfs4_xattr_cache_shrinker);
+-	if (ret)
+-		goto out2;
+-
+-	ret = register_shrinker(&nfs4_xattr_cache_shrinker, "nfs-xattr_cache");
++	ret = nfs4_xattr_shrinker_init(&nfs4_xattr_cache_shrinker,
++				       &nfs4_xattr_cache_lru,
++				       "nfs-xattr_cache");
+ 	if (ret)
+ 		goto out1;
+ 
+-	ret = register_shrinker(&nfs4_xattr_entry_shrinker, "nfs-xattr_entry");
++	ret = nfs4_xattr_shrinker_init(&nfs4_xattr_entry_shrinker,
++				       &nfs4_xattr_entry_lru,
++				       "nfs-xattr_entry");
+ 	if (ret)
+-		goto out;
++		goto out2;
+ 
+-	ret = register_shrinker(&nfs4_xattr_large_entry_shrinker,
+-				"nfs-xattr_large_entry");
++	ret = nfs4_xattr_shrinker_init(&nfs4_xattr_large_entry_shrinker,
++				       &nfs4_xattr_large_entry_lru,
++				       "nfs-xattr_large_entry");
+ 	if (!ret)
+ 		return 0;
+ 
+-	unregister_shrinker(&nfs4_xattr_entry_shrinker);
+-out:
+-	unregister_shrinker(&nfs4_xattr_cache_shrinker);
+-out1:
+-	list_lru_destroy(&nfs4_xattr_cache_lru);
++	nfs4_xattr_shrinker_destroy(&nfs4_xattr_entry_shrinker,
++				    &nfs4_xattr_entry_lru);
+ out2:
+-	list_lru_destroy(&nfs4_xattr_entry_lru);
+-out3:
+-	list_lru_destroy(&nfs4_xattr_large_entry_lru);
+-out4:
++	nfs4_xattr_shrinker_destroy(&nfs4_xattr_cache_shrinker,
++				    &nfs4_xattr_cache_lru);
++out1:
+ 	kmem_cache_destroy(nfs4_xattr_cache_cachep);
+ 
+ 	return ret;
+@@ -1047,11 +1056,11 @@ out4:
+ 
+ void nfs4_xattr_cache_exit(void)
+ {
+-	unregister_shrinker(&nfs4_xattr_large_entry_shrinker);
+-	unregister_shrinker(&nfs4_xattr_entry_shrinker);
+-	unregister_shrinker(&nfs4_xattr_cache_shrinker);
+-	list_lru_destroy(&nfs4_xattr_large_entry_lru);
+-	list_lru_destroy(&nfs4_xattr_entry_lru);
+-	list_lru_destroy(&nfs4_xattr_cache_lru);
++	nfs4_xattr_shrinker_destroy(&nfs4_xattr_large_entry_shrinker,
++				    &nfs4_xattr_large_entry_lru);
++	nfs4_xattr_shrinker_destroy(&nfs4_xattr_entry_shrinker,
++				    &nfs4_xattr_entry_lru);
++	nfs4_xattr_shrinker_destroy(&nfs4_xattr_cache_shrinker,
++				    &nfs4_xattr_cache_lru);
+ 	kmem_cache_destroy(nfs4_xattr_cache_cachep);
+ }
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 5607b1e2b8212..23a23387211ba 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -921,6 +921,7 @@ out:
+ out_noaction:
+ 	return ret;
+ session_recover:
++	set_bit(NFS4_SLOT_TBL_DRAINING, &session->fc_slot_table.slot_tbl_state);
+ 	nfs4_schedule_session_recovery(session, status);
+ 	dprintk("%s ERROR: %d Reset session\n", __func__, status);
+ 	nfs41_sequence_free_slot(res);
+diff --git a/fs/ocfs2/cluster/tcp.c b/fs/ocfs2/cluster/tcp.c
+index aecbd712a00cf..929a1133bc180 100644
+--- a/fs/ocfs2/cluster/tcp.c
++++ b/fs/ocfs2/cluster/tcp.c
+@@ -2087,18 +2087,24 @@ void o2net_stop_listening(struct o2nm_node *node)
+ 
+ int o2net_init(void)
+ {
++	struct folio *folio;
++	void *p;
+ 	unsigned long i;
+ 
+ 	o2quo_init();
+-
+ 	o2net_debugfs_init();
+ 
+-	o2net_hand = kzalloc(sizeof(struct o2net_handshake), GFP_KERNEL);
+-	o2net_keep_req = kzalloc(sizeof(struct o2net_msg), GFP_KERNEL);
+-	o2net_keep_resp = kzalloc(sizeof(struct o2net_msg), GFP_KERNEL);
+-	if (!o2net_hand || !o2net_keep_req || !o2net_keep_resp)
++	folio = folio_alloc(GFP_KERNEL | __GFP_ZERO, 0);
++	if (!folio)
+ 		goto out;
+ 
++	p = folio_address(folio);
++	o2net_hand = p;
++	p += sizeof(struct o2net_handshake);
++	o2net_keep_req = p;
++	p += sizeof(struct o2net_msg);
++	o2net_keep_resp = p;
++
+ 	o2net_hand->protocol_version = cpu_to_be64(O2NET_PROTOCOL_VERSION);
+ 	o2net_hand->connector_id = cpu_to_be64(1);
+ 
+@@ -2124,9 +2130,6 @@ int o2net_init(void)
+ 	return 0;
+ 
+ out:
+-	kfree(o2net_hand);
+-	kfree(o2net_keep_req);
+-	kfree(o2net_keep_resp);
+ 	o2net_debugfs_exit();
+ 	o2quo_exit();
+ 	return -ENOMEM;
+@@ -2135,8 +2138,6 @@ out:
+ void o2net_exit(void)
+ {
+ 	o2quo_exit();
+-	kfree(o2net_hand);
+-	kfree(o2net_keep_req);
+-	kfree(o2net_keep_resp);
+ 	o2net_debugfs_exit();
++	folio_put(virt_to_folio(o2net_hand));
+ }
+diff --git a/fs/overlayfs/copy_up.c b/fs/overlayfs/copy_up.c
+index c14e90764e356..7bf101e756c8c 100644
+--- a/fs/overlayfs/copy_up.c
++++ b/fs/overlayfs/copy_up.c
+@@ -576,6 +576,7 @@ static int ovl_link_up(struct ovl_copy_up_ctx *c)
+ 			/* Restore timestamps on parent (best effort) */
+ 			ovl_set_timestamps(ofs, upperdir, &c->pstat);
+ 			ovl_dentry_set_upper_alias(c->dentry);
++			ovl_dentry_update_reval(c->dentry, upper);
+ 		}
+ 	}
+ 	inode_unlock(udir);
+@@ -895,6 +896,7 @@ static int ovl_do_copy_up(struct ovl_copy_up_ctx *c)
+ 		inode_unlock(udir);
+ 
+ 		ovl_dentry_set_upper_alias(c->dentry);
++		ovl_dentry_update_reval(c->dentry, ovl_dentry_upper(c->dentry));
+ 	}
+ 
+ out:
+diff --git a/fs/overlayfs/dir.c b/fs/overlayfs/dir.c
+index fc25fb95d5fc0..9be52d8013c83 100644
+--- a/fs/overlayfs/dir.c
++++ b/fs/overlayfs/dir.c
+@@ -269,8 +269,7 @@ static int ovl_instantiate(struct dentry *dentry, struct inode *inode,
+ 
+ 	ovl_dir_modified(dentry->d_parent, false);
+ 	ovl_dentry_set_upper_alias(dentry);
+-	ovl_dentry_update_reval(dentry, newdentry,
+-			DCACHE_OP_REVALIDATE | DCACHE_OP_WEAK_REVALIDATE);
++	ovl_dentry_init_reval(dentry, newdentry);
+ 
+ 	if (!hardlink) {
+ 		/*
+diff --git a/fs/overlayfs/export.c b/fs/overlayfs/export.c
+index defd4e231ad2c..5c36fb3a7bab1 100644
+--- a/fs/overlayfs/export.c
++++ b/fs/overlayfs/export.c
+@@ -326,8 +326,7 @@ static struct dentry *ovl_obtain_alias(struct super_block *sb,
+ 	if (upper_alias)
+ 		ovl_dentry_set_upper_alias(dentry);
+ 
+-	ovl_dentry_update_reval(dentry, upper,
+-			DCACHE_OP_REVALIDATE | DCACHE_OP_WEAK_REVALIDATE);
++	ovl_dentry_init_reval(dentry, upper);
+ 
+ 	return d_instantiate_anon(dentry, inode);
+ 
+diff --git a/fs/overlayfs/namei.c b/fs/overlayfs/namei.c
+index cfb3420b7df0e..100a492d2b2a6 100644
+--- a/fs/overlayfs/namei.c
++++ b/fs/overlayfs/namei.c
+@@ -1122,8 +1122,7 @@ struct dentry *ovl_lookup(struct inode *dir, struct dentry *dentry,
+ 			ovl_set_flag(OVL_UPPERDATA, inode);
+ 	}
+ 
+-	ovl_dentry_update_reval(dentry, upperdentry,
+-			DCACHE_OP_REVALIDATE | DCACHE_OP_WEAK_REVALIDATE);
++	ovl_dentry_init_reval(dentry, upperdentry);
+ 
+ 	revert_creds(old_cred);
+ 	if (origin_path) {
+diff --git a/fs/overlayfs/overlayfs.h b/fs/overlayfs/overlayfs.h
+index 4d0b278f5630e..e100c55bb924a 100644
+--- a/fs/overlayfs/overlayfs.h
++++ b/fs/overlayfs/overlayfs.h
+@@ -375,8 +375,10 @@ bool ovl_index_all(struct super_block *sb);
+ bool ovl_verify_lower(struct super_block *sb);
+ struct ovl_entry *ovl_alloc_entry(unsigned int numlower);
+ bool ovl_dentry_remote(struct dentry *dentry);
+-void ovl_dentry_update_reval(struct dentry *dentry, struct dentry *upperdentry,
+-			     unsigned int mask);
++void ovl_dentry_update_reval(struct dentry *dentry, struct dentry *realdentry);
++void ovl_dentry_init_reval(struct dentry *dentry, struct dentry *upperdentry);
++void ovl_dentry_init_flags(struct dentry *dentry, struct dentry *upperdentry,
++			   unsigned int mask);
+ bool ovl_dentry_weird(struct dentry *dentry);
+ enum ovl_path_type ovl_path_type(struct dentry *dentry);
+ void ovl_path_upper(struct dentry *dentry, struct path *path);
+diff --git a/fs/overlayfs/super.c b/fs/overlayfs/super.c
+index f1d9f75f8786c..49b6956468f9e 100644
+--- a/fs/overlayfs/super.c
++++ b/fs/overlayfs/super.c
+@@ -1885,7 +1885,7 @@ static struct dentry *ovl_get_root(struct super_block *sb,
+ 	ovl_dentry_set_flag(OVL_E_CONNECTED, root);
+ 	ovl_set_upperdata(d_inode(root));
+ 	ovl_inode_init(d_inode(root), &oip, ino, fsid);
+-	ovl_dentry_update_reval(root, upperdentry, DCACHE_OP_WEAK_REVALIDATE);
++	ovl_dentry_init_flags(root, upperdentry, DCACHE_OP_WEAK_REVALIDATE);
+ 
+ 	return root;
+ }
+diff --git a/fs/overlayfs/util.c b/fs/overlayfs/util.c
+index 923d66d131c16..6a0652bd51f24 100644
+--- a/fs/overlayfs/util.c
++++ b/fs/overlayfs/util.c
+@@ -94,14 +94,30 @@ struct ovl_entry *ovl_alloc_entry(unsigned int numlower)
+ 	return oe;
+ }
+ 
++#define OVL_D_REVALIDATE (DCACHE_OP_REVALIDATE | DCACHE_OP_WEAK_REVALIDATE)
++
+ bool ovl_dentry_remote(struct dentry *dentry)
+ {
+-	return dentry->d_flags &
+-		(DCACHE_OP_REVALIDATE | DCACHE_OP_WEAK_REVALIDATE);
++	return dentry->d_flags & OVL_D_REVALIDATE;
++}
++
++void ovl_dentry_update_reval(struct dentry *dentry, struct dentry *realdentry)
++{
++	if (!ovl_dentry_remote(realdentry))
++		return;
++
++	spin_lock(&dentry->d_lock);
++	dentry->d_flags |= realdentry->d_flags & OVL_D_REVALIDATE;
++	spin_unlock(&dentry->d_lock);
++}
++
++void ovl_dentry_init_reval(struct dentry *dentry, struct dentry *upperdentry)
++{
++	return ovl_dentry_init_flags(dentry, upperdentry, OVL_D_REVALIDATE);
+ }
+ 
+-void ovl_dentry_update_reval(struct dentry *dentry, struct dentry *upperdentry,
+-			     unsigned int mask)
++void ovl_dentry_init_flags(struct dentry *dentry, struct dentry *upperdentry,
++			   unsigned int mask)
+ {
+ 	struct ovl_entry *oe = OVL_E(dentry);
+ 	unsigned int i, flags = 0;
+diff --git a/fs/pstore/ram_core.c b/fs/pstore/ram_core.c
+index 966191d3a5ba2..85aaf0fc6d7d1 100644
+--- a/fs/pstore/ram_core.c
++++ b/fs/pstore/ram_core.c
+@@ -599,6 +599,8 @@ struct persistent_ram_zone *persistent_ram_new(phys_addr_t start, size_t size,
+ 	raw_spin_lock_init(&prz->buffer_lock);
+ 	prz->flags = flags;
+ 	prz->label = kstrdup(label, GFP_KERNEL);
++	if (!prz->label)
++		goto err;
+ 
+ 	ret = persistent_ram_buffer_map(start, size, prz, memtype);
+ 	if (ret)
+diff --git a/fs/splice.c b/fs/splice.c
+index 2c3dec2b6dfaf..5eca589fe8479 100644
+--- a/fs/splice.c
++++ b/fs/splice.c
+@@ -338,7 +338,6 @@ ssize_t direct_splice_read(struct file *in, loff_t *ppos,
+ 		reclaim -= ret;
+ 		remain = ret;
+ 		*ppos = kiocb.ki_pos;
+-		file_accessed(in);
+ 	} else if (ret < 0) {
+ 		/*
+ 		 * callers of ->splice_read() expect -EAGAIN on
+diff --git a/include/drm/drm_fixed.h b/include/drm/drm_fixed.h
+index 255645c1f9a89..6ea339d5de088 100644
+--- a/include/drm/drm_fixed.h
++++ b/include/drm/drm_fixed.h
+@@ -71,6 +71,7 @@ static inline u32 dfixed_div(fixed20_12 A, fixed20_12 B)
+ }
+ 
+ #define DRM_FIXED_POINT		32
++#define DRM_FIXED_POINT_HALF	16
+ #define DRM_FIXED_ONE		(1ULL << DRM_FIXED_POINT)
+ #define DRM_FIXED_DECIMAL_MASK	(DRM_FIXED_ONE - 1)
+ #define DRM_FIXED_DIGITS_MASK	(~DRM_FIXED_DECIMAL_MASK)
+@@ -87,6 +88,11 @@ static inline int drm_fixp2int(s64 a)
+ 	return ((s64)a) >> DRM_FIXED_POINT;
+ }
+ 
++static inline int drm_fixp2int_round(s64 a)
++{
++	return drm_fixp2int(a + (1 << (DRM_FIXED_POINT_HALF - 1)));
++}
++
+ static inline int drm_fixp2int_ceil(s64 a)
+ {
+ 	if (a > 0)
+diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
+index de0b0c3e7395a..4110d6e99b2b9 100644
+--- a/include/linux/blk-mq.h
++++ b/include/linux/blk-mq.h
+@@ -748,8 +748,7 @@ struct request *blk_mq_alloc_request_hctx(struct request_queue *q,
+ struct blk_mq_tags {
+ 	unsigned int nr_tags;
+ 	unsigned int nr_reserved_tags;
+-
+-	atomic_t active_queues;
++	unsigned int active_queues;
+ 
+ 	struct sbitmap_queue bitmap_tags;
+ 	struct sbitmap_queue breserved_tags;
+diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
+index 941304f17492f..3d620f298aebd 100644
+--- a/include/linux/blkdev.h
++++ b/include/linux/blkdev.h
+@@ -1297,7 +1297,7 @@ static inline unsigned int bdev_zone_no(struct block_device *bdev, sector_t sec)
+ }
+ 
+ static inline bool bdev_op_is_zoned_write(struct block_device *bdev,
+-					  blk_opf_t op)
++					  enum req_op op)
+ {
+ 	if (!bdev_is_zoned(bdev))
+ 		return false;
+diff --git a/include/linux/blktrace_api.h b/include/linux/blktrace_api.h
+index cfbda114348c9..122c62e561fc7 100644
+--- a/include/linux/blktrace_api.h
++++ b/include/linux/blktrace_api.h
+@@ -85,10 +85,14 @@ extern int blk_trace_remove(struct request_queue *q);
+ # define blk_add_driver_data(rq, data, len)		do {} while (0)
+ # define blk_trace_setup(q, name, dev, bdev, arg)	(-ENOTTY)
+ # define blk_trace_startstop(q, start)			(-ENOTTY)
+-# define blk_trace_remove(q)				(-ENOTTY)
+ # define blk_add_trace_msg(q, fmt, ...)			do { } while (0)
+ # define blk_add_cgroup_trace_msg(q, cg, fmt, ...)	do { } while (0)
+ # define blk_trace_note_message_enabled(q)		(false)
++
++static inline int blk_trace_remove(struct request_queue *q)
++{
++	return -ENOTTY;
++}
+ #endif /* CONFIG_BLK_DEV_IO_TRACE */
+ 
+ #ifdef CONFIG_COMPAT
+diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h
+index cc35d010fa949..e1a3c9c9754c5 100644
+--- a/include/linux/bootmem_info.h
++++ b/include/linux/bootmem_info.h
+@@ -3,6 +3,7 @@
+ #define __LINUX_BOOTMEM_INFO_H
+ 
+ #include <linux/mm.h>
++#include <linux/kmemleak.h>
+ 
+ /*
+  * Types for free bootmem stored in page->lru.next. These have to be in
+@@ -59,6 +60,7 @@ static inline void get_page_bootmem(unsigned long info, struct page *page,
+ 
+ static inline void free_bootmem_page(struct page *page)
+ {
++	kmemleak_free_part(page_to_virt(page), PAGE_SIZE);
+ 	free_reserved_page(page);
+ }
+ #endif
+diff --git a/include/linux/bpf.h b/include/linux/bpf.h
+index 5bd6ac04773aa..18397e54bac18 100644
+--- a/include/linux/bpf.h
++++ b/include/linux/bpf.h
+@@ -1082,7 +1082,6 @@ struct bpf_trampoline {
+ 	int progs_cnt[BPF_TRAMP_MAX];
+ 	/* Executable image of trampoline */
+ 	struct bpf_tramp_image *cur_image;
+-	u64 selector;
+ 	struct module *mod;
+ };
+ 
+diff --git a/include/linux/can/length.h b/include/linux/can/length.h
+index 6995092b774ec..ef1fd32cef16b 100644
+--- a/include/linux/can/length.h
++++ b/include/linux/can/length.h
+@@ -69,17 +69,18 @@
+  * Error Status Indicator (ESI)		1
+  * Data length code (DLC)		4
+  * Data field				0...512
+- * Stuff Bit Count (SBC)		0...16: 4 20...64:5
++ * Stuff Bit Count (SBC)		4
+  * CRC					0...16: 17 20...64:21
+  * CRC delimiter (CD)			1
++ * Fixed Stuff bits (FSB)		0...16: 6 20...64:7
+  * ACK slot (AS)			1
+  * ACK delimiter (AD)			1
+  * End-of-frame (EOF)			7
+  * Inter frame spacing			3
+  *
+- * assuming CRC21, rounded up and ignoring bitstuffing
++ * assuming CRC21, rounded up and ignoring dynamic bitstuffing
+  */
+-#define CANFD_FRAME_OVERHEAD_SFF DIV_ROUND_UP(61, 8)
++#define CANFD_FRAME_OVERHEAD_SFF DIV_ROUND_UP(67, 8)
+ 
+ /*
+  * Size of a CAN-FD Extended Frame
+@@ -98,17 +99,18 @@
+  * Error Status Indicator (ESI)		1
+  * Data length code (DLC)		4
+  * Data field				0...512
+- * Stuff Bit Count (SBC)		0...16: 4 20...64:5
++ * Stuff Bit Count (SBC)		4
+  * CRC					0...16: 17 20...64:21
+  * CRC delimiter (CD)			1
++ * Fixed Stuff bits (FSB)		0...16: 6 20...64:7
+  * ACK slot (AS)			1
+  * ACK delimiter (AD)			1
+  * End-of-frame (EOF)			7
+  * Inter frame spacing			3
+  *
+- * assuming CRC21, rounded up and ignoring bitstuffing
++ * assuming CRC21, rounded up and ignoring dynamic bitstuffing
+  */
+-#define CANFD_FRAME_OVERHEAD_EFF DIV_ROUND_UP(80, 8)
++#define CANFD_FRAME_OVERHEAD_EFF DIV_ROUND_UP(86, 8)
+ 
+ /*
+  * Maximum size of a Classical CAN frame
+diff --git a/include/linux/ieee80211.h b/include/linux/ieee80211.h
+index 2463bdd2a382d..1e25c9060225c 100644
+--- a/include/linux/ieee80211.h
++++ b/include/linux/ieee80211.h
+@@ -4592,15 +4592,12 @@ static inline u8 ieee80211_mle_common_size(const u8 *data)
+ 	case IEEE80211_ML_CONTROL_TYPE_BASIC:
+ 	case IEEE80211_ML_CONTROL_TYPE_PREQ:
+ 	case IEEE80211_ML_CONTROL_TYPE_TDLS:
++	case IEEE80211_ML_CONTROL_TYPE_RECONF:
+ 		/*
+ 		 * The length is the first octet pointed by mle->variable so no
+ 		 * need to add anything
+ 		 */
+ 		break;
+-	case IEEE80211_ML_CONTROL_TYPE_RECONF:
+-		if (control & IEEE80211_MLC_RECONF_PRES_MLD_MAC_ADDR)
+-			common += ETH_ALEN;
+-		return common;
+ 	case IEEE80211_ML_CONTROL_TYPE_PRIO_ACCESS:
+ 		if (control & IEEE80211_MLC_PRIO_ACCESS_PRES_AP_MLD_MAC_ADDR)
+ 			common += ETH_ALEN;
+diff --git a/include/linux/mfd/tps65010.h b/include/linux/mfd/tps65010.h
+index a1fb9bc5311de..5edf1aef11185 100644
+--- a/include/linux/mfd/tps65010.h
++++ b/include/linux/mfd/tps65010.h
+@@ -28,6 +28,8 @@
+ #ifndef __LINUX_I2C_TPS65010_H
+ #define __LINUX_I2C_TPS65010_H
+ 
++struct gpio_chip;
++
+ /*
+  * ----------------------------------------------------------------------------
+  * Registers, all 8 bits
+@@ -176,12 +178,10 @@ struct i2c_client;
+ 
+ /**
+  * struct tps65010_board - packages GPIO and LED lines
+- * @base: the GPIO number to assign to GPIO-1
+  * @outmask: bit (N-1) is set to allow GPIO-N to be used as an
+  *	(open drain) output
+  * @setup: optional callback issued once the GPIOs are valid
+  * @teardown: optional callback issued before the GPIOs are invalidated
+- * @context: optional parameter passed to setup() and teardown()
+  *
+  * Board data may be used to package the GPIO (and LED) lines for use
+  * in by the generic GPIO and LED frameworks.  The first four GPIOs
+@@ -193,12 +193,9 @@ struct i2c_client;
+  * devices in their initial states using these GPIOs.
+  */
+ struct tps65010_board {
+-	int				base;
+ 	unsigned			outmask;
+-
+-	int		(*setup)(struct i2c_client *client, void *context);
+-	int		(*teardown)(struct i2c_client *client, void *context);
+-	void		*context;
++	int		(*setup)(struct i2c_client *client, struct gpio_chip *gc);
++	void		(*teardown)(struct i2c_client *client, struct gpio_chip *gc);
+ };
+ 
+ #endif /*  __LINUX_I2C_TPS65010_H */
+diff --git a/include/linux/mmc/card.h b/include/linux/mmc/card.h
+index c726ea7812552..daa2f40d9ce65 100644
+--- a/include/linux/mmc/card.h
++++ b/include/linux/mmc/card.h
+@@ -294,6 +294,7 @@ struct mmc_card {
+ #define MMC_QUIRK_TRIM_BROKEN	(1<<12)		/* Skip trim */
+ #define MMC_QUIRK_BROKEN_HPI	(1<<13)		/* Disable broken HPI support */
+ #define MMC_QUIRK_BROKEN_SD_DISCARD	(1<<14)	/* Disable broken SD discard support */
++#define MMC_QUIRK_BROKEN_SD_CACHE	(1<<15)	/* Disable broken SD cache support */
+ 
+ 	bool			reenable_cmdq;	/* Re-enable Command Queue */
+ 
+diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
+index 7ed63f5bbe056..02ac3a058c09f 100644
+--- a/include/linux/netdevice.h
++++ b/include/linux/netdevice.h
+@@ -5063,6 +5063,15 @@ static inline bool netif_is_l3_slave(const struct net_device *dev)
+ 	return dev->priv_flags & IFF_L3MDEV_SLAVE;
+ }
+ 
++static inline int dev_sdif(const struct net_device *dev)
++{
++#ifdef CONFIG_NET_L3_MASTER_DEV
++	if (netif_is_l3_slave(dev))
++		return dev->ifindex;
++#endif
++	return 0;
++}
++
+ static inline bool netif_is_bridge_master(const struct net_device *dev)
+ {
+ 	return dev->priv_flags & IFF_EBRIDGE;
+diff --git a/include/linux/nmi.h b/include/linux/nmi.h
+index 048c0b9aa623d..771d77b62bc10 100644
+--- a/include/linux/nmi.h
++++ b/include/linux/nmi.h
+@@ -197,7 +197,7 @@ u64 hw_nmi_get_sample_period(int watchdog_thresh);
+ #endif
+ 
+ #if defined(CONFIG_HARDLOCKUP_CHECK_TIMESTAMP) && \
+-    defined(CONFIG_HARDLOCKUP_DETECTOR)
++    defined(CONFIG_HARDLOCKUP_DETECTOR_PERF)
+ void watchdog_update_hrtimer_threshold(u64 period);
+ #else
+ static inline void watchdog_update_hrtimer_threshold(u64 period) { }
+diff --git a/include/linux/pci.h b/include/linux/pci.h
+index a5dda515fcd1d..87d499ca7e176 100644
+--- a/include/linux/pci.h
++++ b/include/linux/pci.h
+@@ -1866,6 +1866,7 @@ static inline int pci_dev_present(const struct pci_device_id *ids)
+ #define pci_dev_put(dev)	do { } while (0)
+ 
+ static inline void pci_set_master(struct pci_dev *dev) { }
++static inline void pci_clear_master(struct pci_dev *dev) { }
+ static inline int pci_enable_device(struct pci_dev *dev) { return -EIO; }
+ static inline void pci_disable_device(struct pci_dev *dev) { }
+ static inline int pcim_enable_device(struct pci_dev *pdev) { return -EIO; }
+diff --git a/include/linux/pipe_fs_i.h b/include/linux/pipe_fs_i.h
+index d2c3f16cf6b18..02e0086b10f6f 100644
+--- a/include/linux/pipe_fs_i.h
++++ b/include/linux/pipe_fs_i.h
+@@ -261,18 +261,14 @@ void generic_pipe_buf_release(struct pipe_inode_info *, struct pipe_buffer *);
+ 
+ extern const struct pipe_buf_operations nosteal_pipe_buf_ops;
+ 
+-#ifdef CONFIG_WATCH_QUEUE
+ unsigned long account_pipe_buffers(struct user_struct *user,
+ 				   unsigned long old, unsigned long new);
+ bool too_many_pipe_buffers_soft(unsigned long user_bufs);
+ bool too_many_pipe_buffers_hard(unsigned long user_bufs);
+ bool pipe_is_unprivileged_user(void);
+-#endif
+ 
+ /* for F_SETPIPE_SZ and F_GETPIPE_SZ */
+-#ifdef CONFIG_WATCH_QUEUE
+ int pipe_resize_ring(struct pipe_inode_info *pipe, unsigned int nr_slots);
+-#endif
+ long pipe_fcntl(struct file *, unsigned int, unsigned long arg);
+ struct pipe_inode_info *get_pipe_info(struct file *file, bool for_splice);
+ 
+diff --git a/include/net/dsa.h b/include/net/dsa.h
+index def06ef676dd8..56e297e7848d6 100644
+--- a/include/net/dsa.h
++++ b/include/net/dsa.h
+@@ -329,9 +329,17 @@ struct dsa_port {
+ 	struct list_head	fdbs;
+ 	struct list_head	mdbs;
+ 
+-	/* List of VLANs that CPU and DSA ports are members of. */
+ 	struct mutex		vlans_lock;
+-	struct list_head	vlans;
++	union {
++		/* List of VLANs that CPU and DSA ports are members of.
++		 * Access to this is serialized by the sleepable @vlans_lock.
++		 */
++		struct list_head	vlans;
++		/* List of VLANs that user ports are members of.
++		 * Access to this is serialized by netif_addr_lock_bh().
++		 */
++		struct list_head	user_vlans;
++	};
+ };
+ 
+ /* TODO: ideally DSA ports would have a single dp->link_dp member,
+diff --git a/include/net/regulatory.h b/include/net/regulatory.h
+index 896191f420d50..b2cb4a9eb04dc 100644
+--- a/include/net/regulatory.h
++++ b/include/net/regulatory.h
+@@ -140,17 +140,6 @@ struct regulatory_request {
+  *      otherwise initiating radiation is not allowed. This will enable the
+  *      relaxations enabled under the CFG80211_REG_RELAX_NO_IR configuration
+  *      option
+- * @REGULATORY_IGNORE_STALE_KICKOFF: the regulatory core will _not_ make sure
+- *	all interfaces on this wiphy reside on allowed channels. If this flag
+- *	is not set, upon a regdomain change, the interfaces are given a grace
+- *	period (currently 60 seconds) to disconnect or move to an allowed
+- *	channel. Interfaces on forbidden channels are forcibly disconnected.
+- *	Currently these types of interfaces are supported for enforcement:
+- *	NL80211_IFTYPE_ADHOC, NL80211_IFTYPE_STATION, NL80211_IFTYPE_AP,
+- *	NL80211_IFTYPE_AP_VLAN, NL80211_IFTYPE_MONITOR,
+- *	NL80211_IFTYPE_P2P_CLIENT, NL80211_IFTYPE_P2P_GO,
+- *	NL80211_IFTYPE_P2P_DEVICE. The flag will be set by default if a device
+- *	includes any modes unsupported for enforcement checking.
+  * @REGULATORY_WIPHY_SELF_MANAGED: for devices that employ wiphy-specific
+  *	regdom management. These devices will ignore all regdom changes not
+  *	originating from their own wiphy.
+@@ -177,7 +166,7 @@ enum ieee80211_regulatory_flags {
+ 	REGULATORY_COUNTRY_IE_FOLLOW_POWER	= BIT(3),
+ 	REGULATORY_COUNTRY_IE_IGNORE		= BIT(4),
+ 	REGULATORY_ENABLE_RELAX_NO_IR           = BIT(5),
+-	REGULATORY_IGNORE_STALE_KICKOFF         = BIT(6),
++	/* reuse bit 6 next time */
+ 	REGULATORY_WIPHY_SELF_MANAGED		= BIT(7),
+ };
+ 
+diff --git a/include/net/sock.h b/include/net/sock.h
+index f0654c44acf5f..81ad7fa03b73a 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -2100,6 +2100,7 @@ static inline void sock_graft(struct sock *sk, struct socket *parent)
+ }
+ 
+ kuid_t sock_i_uid(struct sock *sk);
++unsigned long __sock_i_ino(struct sock *sk);
+ unsigned long sock_i_ino(struct sock *sk);
+ 
+ static inline kuid_t sock_net_uid(const struct net *net, const struct sock *sk)
+diff --git a/include/trace/events/timer.h b/include/trace/events/timer.h
+index 3e8619c72f774..b4bc2828fa09f 100644
+--- a/include/trace/events/timer.h
++++ b/include/trace/events/timer.h
+@@ -158,7 +158,11 @@ DEFINE_EVENT(timer_class, timer_cancel,
+ 		{ HRTIMER_MODE_ABS_SOFT,	"ABS|SOFT"	},	\
+ 		{ HRTIMER_MODE_REL_SOFT,	"REL|SOFT"	},	\
+ 		{ HRTIMER_MODE_ABS_PINNED_SOFT,	"ABS|PINNED|SOFT" },	\
+-		{ HRTIMER_MODE_REL_PINNED_SOFT,	"REL|PINNED|SOFT" })
++		{ HRTIMER_MODE_REL_PINNED_SOFT,	"REL|PINNED|SOFT" },	\
++		{ HRTIMER_MODE_ABS_HARD,	"ABS|HARD" },		\
++		{ HRTIMER_MODE_REL_HARD,	"REL|HARD" },		\
++		{ HRTIMER_MODE_ABS_PINNED_HARD, "ABS|PINNED|HARD" },	\
++		{ HRTIMER_MODE_REL_PINNED_HARD,	"REL|PINNED|HARD" })
+ 
+ /**
+  * hrtimer_init - called when the hrtimer is initialized
+diff --git a/include/ufs/ufshcd.h b/include/ufs/ufshcd.h
+index db70944c681aa..91803587691a6 100644
+--- a/include/ufs/ufshcd.h
++++ b/include/ufs/ufshcd.h
+@@ -225,7 +225,6 @@ struct ufs_dev_cmd {
+ 	struct mutex lock;
+ 	struct completion *complete;
+ 	struct ufs_query query;
+-	struct cq_entry *cqe;
+ };
+ 
+ /**
+diff --git a/init/Makefile b/init/Makefile
+index 26de459006c4e..ec557ada3c12e 100644
+--- a/init/Makefile
++++ b/init/Makefile
+@@ -60,3 +60,4 @@ include/generated/utsversion.h: FORCE
+ $(obj)/version-timestamp.o: include/generated/utsversion.h
+ CFLAGS_version-timestamp.o := -include include/generated/utsversion.h
+ KASAN_SANITIZE_version-timestamp.o := n
++GCOV_PROFILE_version-timestamp.o := n
+diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c
+index b86b907e566ca..bb70f400c25eb 100644
+--- a/kernel/bpf/cgroup.c
++++ b/kernel/bpf/cgroup.c
+@@ -1826,6 +1826,12 @@ int __cgroup_bpf_run_filter_setsockopt(struct sock *sk, int *level,
+ 		ret = 1;
+ 	} else if (ctx.optlen > max_optlen || ctx.optlen < -1) {
+ 		/* optlen is out of bounds */
++		if (*optlen > PAGE_SIZE && ctx.optlen >= 0) {
++			pr_info_once("bpf setsockopt: ignoring program buffer with optlen=%d (max_optlen=%d)\n",
++				     ctx.optlen, max_optlen);
++			ret = 0;
++			goto out;
++		}
+ 		ret = -EFAULT;
+ 	} else {
+ 		/* optlen within bounds, run kernel handler */
+@@ -1881,8 +1887,10 @@ int __cgroup_bpf_run_filter_getsockopt(struct sock *sk, int level,
+ 		.optname = optname,
+ 		.current_task = current,
+ 	};
++	int orig_optlen;
+ 	int ret;
+ 
++	orig_optlen = max_optlen;
+ 	ctx.optlen = max_optlen;
+ 	max_optlen = sockopt_alloc_buf(&ctx, max_optlen, &buf);
+ 	if (max_optlen < 0)
+@@ -1905,6 +1913,7 @@ int __cgroup_bpf_run_filter_getsockopt(struct sock *sk, int level,
+ 			ret = -EFAULT;
+ 			goto out;
+ 		}
++		orig_optlen = ctx.optlen;
+ 
+ 		if (copy_from_user(ctx.optval, optval,
+ 				   min(ctx.optlen, max_optlen)) != 0) {
+@@ -1922,6 +1931,12 @@ int __cgroup_bpf_run_filter_getsockopt(struct sock *sk, int level,
+ 		goto out;
+ 
+ 	if (optval && (ctx.optlen > max_optlen || ctx.optlen < 0)) {
++		if (orig_optlen > PAGE_SIZE && ctx.optlen >= 0) {
++			pr_info_once("bpf getsockopt: ignoring program buffer with optlen=%d (max_optlen=%d)\n",
++				     ctx.optlen, max_optlen);
++			ret = retval;
++			goto out;
++		}
+ 		ret = -EFAULT;
+ 		goto out;
+ 	}
+diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
+index d0ed7d6f5eec5..fecb8d2d885a3 100644
+--- a/kernel/bpf/trampoline.c
++++ b/kernel/bpf/trampoline.c
+@@ -279,11 +279,8 @@ bpf_trampoline_get_progs(const struct bpf_trampoline *tr, int *total, bool *ip_a
+ 	return tlinks;
+ }
+ 
+-static void __bpf_tramp_image_put_deferred(struct work_struct *work)
++static void bpf_tramp_image_free(struct bpf_tramp_image *im)
+ {
+-	struct bpf_tramp_image *im;
+-
+-	im = container_of(work, struct bpf_tramp_image, work);
+ 	bpf_image_ksym_del(&im->ksym);
+ 	bpf_jit_free_exec(im->image);
+ 	bpf_jit_uncharge_modmem(PAGE_SIZE);
+@@ -291,6 +288,14 @@ static void __bpf_tramp_image_put_deferred(struct work_struct *work)
+ 	kfree_rcu(im, rcu);
+ }
+ 
++static void __bpf_tramp_image_put_deferred(struct work_struct *work)
++{
++	struct bpf_tramp_image *im;
++
++	im = container_of(work, struct bpf_tramp_image, work);
++	bpf_tramp_image_free(im);
++}
++
+ /* callback, fexit step 3 or fentry step 2 */
+ static void __bpf_tramp_image_put_rcu(struct rcu_head *rcu)
+ {
+@@ -372,7 +377,7 @@ static void bpf_tramp_image_put(struct bpf_tramp_image *im)
+ 	call_rcu_tasks_trace(&im->rcu, __bpf_tramp_image_put_rcu_tasks);
+ }
+ 
+-static struct bpf_tramp_image *bpf_tramp_image_alloc(u64 key, u32 idx)
++static struct bpf_tramp_image *bpf_tramp_image_alloc(u64 key)
+ {
+ 	struct bpf_tramp_image *im;
+ 	struct bpf_ksym *ksym;
+@@ -399,7 +404,7 @@ static struct bpf_tramp_image *bpf_tramp_image_alloc(u64 key, u32 idx)
+ 
+ 	ksym = &im->ksym;
+ 	INIT_LIST_HEAD_RCU(&ksym->lnode);
+-	snprintf(ksym->name, KSYM_NAME_LEN, "bpf_trampoline_%llu_%u", key, idx);
++	snprintf(ksym->name, KSYM_NAME_LEN, "bpf_trampoline_%llu", key);
+ 	bpf_image_ksym_add(image, ksym);
+ 	return im;
+ 
+@@ -429,11 +434,10 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mut
+ 		err = unregister_fentry(tr, tr->cur_image->image);
+ 		bpf_tramp_image_put(tr->cur_image);
+ 		tr->cur_image = NULL;
+-		tr->selector = 0;
+ 		goto out;
+ 	}
+ 
+-	im = bpf_tramp_image_alloc(tr->key, tr->selector);
++	im = bpf_tramp_image_alloc(tr->key);
+ 	if (IS_ERR(im)) {
+ 		err = PTR_ERR(im);
+ 		goto out;
+@@ -466,12 +470,11 @@ again:
+ 					  &tr->func.model, tr->flags, tlinks,
+ 					  tr->func.addr);
+ 	if (err < 0)
+-		goto out;
++		goto out_free;
+ 
+ 	set_memory_rox((long)im->image, 1);
+ 
+-	WARN_ON(tr->cur_image && tr->selector == 0);
+-	WARN_ON(!tr->cur_image && tr->selector);
++	WARN_ON(tr->cur_image && total == 0);
+ 	if (tr->cur_image)
+ 		/* progs already running at this address */
+ 		err = modify_fentry(tr, tr->cur_image->image, im->image, lock_direct_mutex);
+@@ -496,18 +499,21 @@ again:
+ 	}
+ #endif
+ 	if (err)
+-		goto out;
++		goto out_free;
+ 
+ 	if (tr->cur_image)
+ 		bpf_tramp_image_put(tr->cur_image);
+ 	tr->cur_image = im;
+-	tr->selector++;
+ out:
+ 	/* If any error happens, restore previous flags */
+ 	if (err)
+ 		tr->flags = orig_flags;
+ 	kfree(tlinks);
+ 	return err;
++
++out_free:
++	bpf_tramp_image_free(im);
++	goto out;
+ }
+ 
+ static enum bpf_tramp_prog_type bpf_attach_type_to_tramp(struct bpf_prog *prog)
+diff --git a/kernel/kcsan/core.c b/kernel/kcsan/core.c
+index 5a60cc52adc0c..8a7baf4e332e3 100644
+--- a/kernel/kcsan/core.c
++++ b/kernel/kcsan/core.c
+@@ -1270,7 +1270,9 @@ static __always_inline void kcsan_atomic_builtin_memorder(int memorder)
+ DEFINE_TSAN_ATOMIC_OPS(8);
+ DEFINE_TSAN_ATOMIC_OPS(16);
+ DEFINE_TSAN_ATOMIC_OPS(32);
++#ifdef CONFIG_64BIT
+ DEFINE_TSAN_ATOMIC_OPS(64);
++#endif
+ 
+ void __tsan_atomic_thread_fence(int memorder);
+ void __tsan_atomic_thread_fence(int memorder)
+diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
+index 3d578c6fefee3..22acee18195a5 100644
+--- a/kernel/kexec_core.c
++++ b/kernel/kexec_core.c
+@@ -1122,6 +1122,7 @@ int crash_shrink_memory(unsigned long new_size)
+ 	start = crashk_res.start;
+ 	end = crashk_res.end;
+ 	old_size = (end == 0) ? 0 : end - start + 1;
++	new_size = roundup(new_size, KEXEC_CRASH_MEM_ALIGN);
+ 	if (new_size >= old_size) {
+ 		ret = (new_size == old_size) ? 0 : -EINVAL;
+ 		goto unlock;
+@@ -1133,9 +1134,7 @@ int crash_shrink_memory(unsigned long new_size)
+ 		goto unlock;
+ 	}
+ 
+-	start = roundup(start, KEXEC_CRASH_MEM_ALIGN);
+-	end = roundup(start + new_size, KEXEC_CRASH_MEM_ALIGN);
+-
++	end = start + new_size;
+ 	crash_free_reserved_phys_range(end, crashk_res.end);
+ 
+ 	if ((start == end) && (crashk_res.parent != NULL))
+diff --git a/kernel/rcu/rcu.h b/kernel/rcu/rcu.h
+index 115616ac3bfa6..2d7e85dbf6734 100644
+--- a/kernel/rcu/rcu.h
++++ b/kernel/rcu/rcu.h
+@@ -603,4 +603,10 @@ void show_rcu_tasks_trace_gp_kthread(void);
+ static inline void show_rcu_tasks_trace_gp_kthread(void) {}
+ #endif
+ 
++#ifdef CONFIG_TINY_RCU
++static inline bool rcu_cpu_beenfullyonline(int cpu) { return true; }
++#else
++bool rcu_cpu_beenfullyonline(int cpu);
++#endif
++
+ #endif /* __LINUX_RCU_H */
+diff --git a/kernel/rcu/rcuscale.c b/kernel/rcu/rcuscale.c
+index 91fb5905a008f..602f0958e4362 100644
+--- a/kernel/rcu/rcuscale.c
++++ b/kernel/rcu/rcuscale.c
+@@ -522,89 +522,6 @@ rcu_scale_print_module_parms(struct rcu_scale_ops *cur_ops, const char *tag)
+ 		 scale_type, tag, nrealreaders, nrealwriters, verbose, shutdown);
+ }
+ 
+-static void
+-rcu_scale_cleanup(void)
+-{
+-	int i;
+-	int j;
+-	int ngps = 0;
+-	u64 *wdp;
+-	u64 *wdpp;
+-
+-	/*
+-	 * Would like warning at start, but everything is expedited
+-	 * during the mid-boot phase, so have to wait till the end.
+-	 */
+-	if (rcu_gp_is_expedited() && !rcu_gp_is_normal() && !gp_exp)
+-		SCALEOUT_ERRSTRING("All grace periods expedited, no normal ones to measure!");
+-	if (rcu_gp_is_normal() && gp_exp)
+-		SCALEOUT_ERRSTRING("All grace periods normal, no expedited ones to measure!");
+-	if (gp_exp && gp_async)
+-		SCALEOUT_ERRSTRING("No expedited async GPs, so went with async!");
+-
+-	if (torture_cleanup_begin())
+-		return;
+-	if (!cur_ops) {
+-		torture_cleanup_end();
+-		return;
+-	}
+-
+-	if (reader_tasks) {
+-		for (i = 0; i < nrealreaders; i++)
+-			torture_stop_kthread(rcu_scale_reader,
+-					     reader_tasks[i]);
+-		kfree(reader_tasks);
+-	}
+-
+-	if (writer_tasks) {
+-		for (i = 0; i < nrealwriters; i++) {
+-			torture_stop_kthread(rcu_scale_writer,
+-					     writer_tasks[i]);
+-			if (!writer_n_durations)
+-				continue;
+-			j = writer_n_durations[i];
+-			pr_alert("%s%s writer %d gps: %d\n",
+-				 scale_type, SCALE_FLAG, i, j);
+-			ngps += j;
+-		}
+-		pr_alert("%s%s start: %llu end: %llu duration: %llu gps: %d batches: %ld\n",
+-			 scale_type, SCALE_FLAG,
+-			 t_rcu_scale_writer_started, t_rcu_scale_writer_finished,
+-			 t_rcu_scale_writer_finished -
+-			 t_rcu_scale_writer_started,
+-			 ngps,
+-			 rcuscale_seq_diff(b_rcu_gp_test_finished,
+-					   b_rcu_gp_test_started));
+-		for (i = 0; i < nrealwriters; i++) {
+-			if (!writer_durations)
+-				break;
+-			if (!writer_n_durations)
+-				continue;
+-			wdpp = writer_durations[i];
+-			if (!wdpp)
+-				continue;
+-			for (j = 0; j < writer_n_durations[i]; j++) {
+-				wdp = &wdpp[j];
+-				pr_alert("%s%s %4d writer-duration: %5d %llu\n",
+-					scale_type, SCALE_FLAG,
+-					i, j, *wdp);
+-				if (j % 100 == 0)
+-					schedule_timeout_uninterruptible(1);
+-			}
+-			kfree(writer_durations[i]);
+-		}
+-		kfree(writer_tasks);
+-		kfree(writer_durations);
+-		kfree(writer_n_durations);
+-	}
+-
+-	/* Do torture-type-specific cleanup operations.  */
+-	if (cur_ops->cleanup != NULL)
+-		cur_ops->cleanup();
+-
+-	torture_cleanup_end();
+-}
+-
+ /*
+  * Return the number if non-negative.  If -1, the number of CPUs.
+  * If less than -1, that much less than the number of CPUs, but
+@@ -624,21 +541,6 @@ static int compute_real(int n)
+ 	return nr;
+ }
+ 
+-/*
+- * RCU scalability shutdown kthread.  Just waits to be awakened, then shuts
+- * down system.
+- */
+-static int
+-rcu_scale_shutdown(void *arg)
+-{
+-	wait_event(shutdown_wq,
+-		   atomic_read(&n_rcu_scale_writer_finished) >= nrealwriters);
+-	smp_mb(); /* Wake before output. */
+-	rcu_scale_cleanup();
+-	kernel_power_off();
+-	return -EINVAL;
+-}
+-
+ /*
+  * kfree_rcu() scalability tests: Start a kfree_rcu() loop on all CPUs for number
+  * of iterations and measure total time and number of GP for all iterations to complete.
+@@ -771,8 +673,8 @@ kfree_scale_cleanup(void)
+ static int
+ kfree_scale_shutdown(void *arg)
+ {
+-	wait_event(shutdown_wq,
+-		   atomic_read(&n_kfree_scale_thread_ended) >= kfree_nrealthreads);
++	wait_event_idle(shutdown_wq,
++			atomic_read(&n_kfree_scale_thread_ended) >= kfree_nrealthreads);
+ 
+ 	smp_mb(); /* Wake before output. */
+ 
+@@ -875,6 +777,108 @@ unwind:
+ 	return firsterr;
+ }
+ 
++static void
++rcu_scale_cleanup(void)
++{
++	int i;
++	int j;
++	int ngps = 0;
++	u64 *wdp;
++	u64 *wdpp;
++
++	/*
++	 * Would like warning at start, but everything is expedited
++	 * during the mid-boot phase, so have to wait till the end.
++	 */
++	if (rcu_gp_is_expedited() && !rcu_gp_is_normal() && !gp_exp)
++		SCALEOUT_ERRSTRING("All grace periods expedited, no normal ones to measure!");
++	if (rcu_gp_is_normal() && gp_exp)
++		SCALEOUT_ERRSTRING("All grace periods normal, no expedited ones to measure!");
++	if (gp_exp && gp_async)
++		SCALEOUT_ERRSTRING("No expedited async GPs, so went with async!");
++
++	if (kfree_rcu_test) {
++		kfree_scale_cleanup();
++		return;
++	}
++
++	if (torture_cleanup_begin())
++		return;
++	if (!cur_ops) {
++		torture_cleanup_end();
++		return;
++	}
++
++	if (reader_tasks) {
++		for (i = 0; i < nrealreaders; i++)
++			torture_stop_kthread(rcu_scale_reader,
++					     reader_tasks[i]);
++		kfree(reader_tasks);
++	}
++
++	if (writer_tasks) {
++		for (i = 0; i < nrealwriters; i++) {
++			torture_stop_kthread(rcu_scale_writer,
++					     writer_tasks[i]);
++			if (!writer_n_durations)
++				continue;
++			j = writer_n_durations[i];
++			pr_alert("%s%s writer %d gps: %d\n",
++				 scale_type, SCALE_FLAG, i, j);
++			ngps += j;
++		}
++		pr_alert("%s%s start: %llu end: %llu duration: %llu gps: %d batches: %ld\n",
++			 scale_type, SCALE_FLAG,
++			 t_rcu_scale_writer_started, t_rcu_scale_writer_finished,
++			 t_rcu_scale_writer_finished -
++			 t_rcu_scale_writer_started,
++			 ngps,
++			 rcuscale_seq_diff(b_rcu_gp_test_finished,
++					   b_rcu_gp_test_started));
++		for (i = 0; i < nrealwriters; i++) {
++			if (!writer_durations)
++				break;
++			if (!writer_n_durations)
++				continue;
++			wdpp = writer_durations[i];
++			if (!wdpp)
++				continue;
++			for (j = 0; j < writer_n_durations[i]; j++) {
++				wdp = &wdpp[j];
++				pr_alert("%s%s %4d writer-duration: %5d %llu\n",
++					scale_type, SCALE_FLAG,
++					i, j, *wdp);
++				if (j % 100 == 0)
++					schedule_timeout_uninterruptible(1);
++			}
++			kfree(writer_durations[i]);
++		}
++		kfree(writer_tasks);
++		kfree(writer_durations);
++		kfree(writer_n_durations);
++	}
++
++	/* Do torture-type-specific cleanup operations.  */
++	if (cur_ops->cleanup != NULL)
++		cur_ops->cleanup();
++
++	torture_cleanup_end();
++}
++
++/*
++ * RCU scalability shutdown kthread.  Just waits to be awakened, then shuts
++ * down system.
++ */
++static int
++rcu_scale_shutdown(void *arg)
++{
++	wait_event_idle(shutdown_wq, atomic_read(&n_rcu_scale_writer_finished) >= nrealwriters);
++	smp_mb(); /* Wake before output. */
++	rcu_scale_cleanup();
++	kernel_power_off();
++	return -EINVAL;
++}
++
+ static int __init
+ rcu_scale_init(void)
+ {
+diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
+index bfb5e1549f2b2..dfa6ff2eb32f2 100644
+--- a/kernel/rcu/tasks.h
++++ b/kernel/rcu/tasks.h
+@@ -455,6 +455,7 @@ static void rcu_tasks_invoke_cbs(struct rcu_tasks *rtp, struct rcu_tasks_percpu
+ {
+ 	int cpu;
+ 	int cpunext;
++	int cpuwq;
+ 	unsigned long flags;
+ 	int len;
+ 	struct rcu_head *rhp;
+@@ -465,11 +466,13 @@ static void rcu_tasks_invoke_cbs(struct rcu_tasks *rtp, struct rcu_tasks_percpu
+ 	cpunext = cpu * 2 + 1;
+ 	if (cpunext < smp_load_acquire(&rtp->percpu_dequeue_lim)) {
+ 		rtpcp_next = per_cpu_ptr(rtp->rtpcpu, cpunext);
+-		queue_work_on(cpunext, system_wq, &rtpcp_next->rtp_work);
++		cpuwq = rcu_cpu_beenfullyonline(cpunext) ? cpunext : WORK_CPU_UNBOUND;
++		queue_work_on(cpuwq, system_wq, &rtpcp_next->rtp_work);
+ 		cpunext++;
+ 		if (cpunext < smp_load_acquire(&rtp->percpu_dequeue_lim)) {
+ 			rtpcp_next = per_cpu_ptr(rtp->rtpcpu, cpunext);
+-			queue_work_on(cpunext, system_wq, &rtpcp_next->rtp_work);
++			cpuwq = rcu_cpu_beenfullyonline(cpunext) ? cpunext : WORK_CPU_UNBOUND;
++			queue_work_on(cpuwq, system_wq, &rtpcp_next->rtp_work);
+ 		}
+ 	}
+ 
+diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
+index a565dc5c54440..be490f55cb834 100644
+--- a/kernel/rcu/tree.c
++++ b/kernel/rcu/tree.c
+@@ -4279,7 +4279,6 @@ int rcutree_prepare_cpu(unsigned int cpu)
+ 	 */
+ 	rnp = rdp->mynode;
+ 	raw_spin_lock_rcu_node(rnp);		/* irqs already disabled. */
+-	rdp->beenonline = true;	 /* We have now been online. */
+ 	rdp->gp_seq = READ_ONCE(rnp->gp_seq);
+ 	rdp->gp_seq_needed = rdp->gp_seq;
+ 	rdp->cpu_no_qs.b.norm = true;
+@@ -4306,6 +4305,16 @@ static void rcutree_affinity_setting(unsigned int cpu, int outgoing)
+ 	rcu_boost_kthread_setaffinity(rdp->mynode, outgoing);
+ }
+ 
++/*
++ * Has the specified (known valid) CPU ever been fully online?
++ */
++bool rcu_cpu_beenfullyonline(int cpu)
++{
++	struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
++
++	return smp_load_acquire(&rdp->beenonline);
++}
++
+ /*
+  * Near the end of the CPU-online process.  Pretty much all services
+  * enabled, and the CPU is now very much alive.
+@@ -4364,15 +4373,16 @@ int rcutree_offline_cpu(unsigned int cpu)
+  * Note that this function is special in that it is invoked directly
+  * from the incoming CPU rather than from the cpuhp_step mechanism.
+  * This is because this function must be invoked at a precise location.
++ * This incoming CPU must not have enabled interrupts yet.
+  */
+ void rcu_cpu_starting(unsigned int cpu)
+ {
+-	unsigned long flags;
+ 	unsigned long mask;
+ 	struct rcu_data *rdp;
+ 	struct rcu_node *rnp;
+ 	bool newcpu;
+ 
++	lockdep_assert_irqs_disabled();
+ 	rdp = per_cpu_ptr(&rcu_data, cpu);
+ 	if (rdp->cpu_started)
+ 		return;
+@@ -4380,7 +4390,6 @@ void rcu_cpu_starting(unsigned int cpu)
+ 
+ 	rnp = rdp->mynode;
+ 	mask = rdp->grpmask;
+-	local_irq_save(flags);
+ 	arch_spin_lock(&rcu_state.ofl_lock);
+ 	rcu_dynticks_eqs_online();
+ 	raw_spin_lock(&rcu_state.barrier_lock);
+@@ -4399,17 +4408,17 @@ void rcu_cpu_starting(unsigned int cpu)
+ 	/* An incoming CPU should never be blocking a grace period. */
+ 	if (WARN_ON_ONCE(rnp->qsmask & mask)) { /* RCU waiting on incoming CPU? */
+ 		/* rcu_report_qs_rnp() *really* wants some flags to restore */
+-		unsigned long flags2;
++		unsigned long flags;
+ 
+-		local_irq_save(flags2);
++		local_irq_save(flags);
+ 		rcu_disable_urgency_upon_qs(rdp);
+ 		/* Report QS -after- changing ->qsmaskinitnext! */
+-		rcu_report_qs_rnp(mask, rnp, rnp->gp_seq, flags2);
++		rcu_report_qs_rnp(mask, rnp, rnp->gp_seq, flags);
+ 	} else {
+ 		raw_spin_unlock_rcu_node(rnp);
+ 	}
+ 	arch_spin_unlock(&rcu_state.ofl_lock);
+-	local_irq_restore(flags);
++	smp_store_release(&rdp->beenonline, true);
+ 	smp_mb(); /* Ensure RCU read-side usage follows above initialization. */
+ }
+ 
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index ed89be0aa6503..853b7ef9dcafc 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -5519,6 +5519,14 @@ static void __cfsb_csd_unthrottle(void *arg)
+ 
+ 	rq_lock(rq, &rf);
+ 
++	/*
++	 * Iterating over the list can trigger several call to
++	 * update_rq_clock() in unthrottle_cfs_rq().
++	 * Do it once and skip the potential next ones.
++	 */
++	update_rq_clock(rq);
++	rq_clock_start_loop_update(rq);
++
+ 	/*
+ 	 * Since we hold rq lock we're safe from concurrent manipulation of
+ 	 * the CSD list. However, this RCU critical section annotates the
+@@ -5538,6 +5546,7 @@ static void __cfsb_csd_unthrottle(void *arg)
+ 
+ 	rcu_read_unlock();
+ 
++	rq_clock_stop_loop_update(rq);
+ 	rq_unlock(rq, &rf);
+ }
+ 
+@@ -6054,6 +6063,13 @@ static void __maybe_unused unthrottle_offline_cfs_rqs(struct rq *rq)
+ 
+ 	lockdep_assert_rq_held(rq);
+ 
++	/*
++	 * The rq clock has already been updated in the
++	 * set_rq_offline(), so we should skip updating
++	 * the rq clock again in unthrottle_cfs_rq().
++	 */
++	rq_clock_start_loop_update(rq);
++
+ 	rcu_read_lock();
+ 	list_for_each_entry_rcu(tg, &task_groups, list) {
+ 		struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
+@@ -6076,6 +6092,8 @@ static void __maybe_unused unthrottle_offline_cfs_rqs(struct rq *rq)
+ 			unthrottle_cfs_rq(cfs_rq);
+ 	}
+ 	rcu_read_unlock();
++
++	rq_clock_stop_loop_update(rq);
+ }
+ 
+ #else /* CONFIG_CFS_BANDWIDTH */
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index 3e8df6d31c1e3..3adac73b17ca5 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -1546,6 +1546,28 @@ static inline void rq_clock_cancel_skipupdate(struct rq *rq)
+ 	rq->clock_update_flags &= ~RQCF_REQ_SKIP;
+ }
+ 
++/*
++ * During cpu offlining and rq wide unthrottling, we can trigger
++ * an update_rq_clock() for several cfs and rt runqueues (Typically
++ * when using list_for_each_entry_*)
++ * rq_clock_start_loop_update() can be called after updating the clock
++ * once and before iterating over the list to prevent multiple update.
++ * After the iterative traversal, we need to call rq_clock_stop_loop_update()
++ * to clear RQCF_ACT_SKIP of rq->clock_update_flags.
++ */
++static inline void rq_clock_start_loop_update(struct rq *rq)
++{
++	lockdep_assert_rq_held(rq);
++	SCHED_WARN_ON(rq->clock_update_flags & RQCF_ACT_SKIP);
++	rq->clock_update_flags |= RQCF_ACT_SKIP;
++}
++
++static inline void rq_clock_stop_loop_update(struct rq *rq)
++{
++	lockdep_assert_rq_held(rq);
++	rq->clock_update_flags &= ~RQCF_ACT_SKIP;
++}
++
+ struct rq_flags {
+ 	unsigned long flags;
+ 	struct pin_cookie cookie;
+diff --git a/kernel/time/posix-timers.c b/kernel/time/posix-timers.c
+index 808a247205a9a..ed3c4a9543982 100644
+--- a/kernel/time/posix-timers.c
++++ b/kernel/time/posix-timers.c
+@@ -1037,27 +1037,52 @@ retry_delete:
+ }
+ 
+ /*
+- * return timer owned by the process, used by exit_itimers
++ * Delete a timer if it is armed, remove it from the hash and schedule it
++ * for RCU freeing.
+  */
+ static void itimer_delete(struct k_itimer *timer)
+ {
+-retry_delete:
+-	spin_lock_irq(&timer->it_lock);
++	unsigned long flags;
++
++	/*
++	 * irqsave is required to make timer_wait_running() work.
++	 */
++	spin_lock_irqsave(&timer->it_lock, flags);
+ 
++retry_delete:
++	/*
++	 * Even if the timer is not longer accessible from other tasks
++	 * it still might be armed and queued in the underlying timer
++	 * mechanism. Worse, that timer mechanism might run the expiry
++	 * function concurrently.
++	 */
+ 	if (timer_delete_hook(timer) == TIMER_RETRY) {
+-		spin_unlock_irq(&timer->it_lock);
++		/*
++		 * Timer is expired concurrently, prevent livelocks
++		 * and pointless spinning on RT.
++		 *
++		 * timer_wait_running() drops timer::it_lock, which opens
++		 * the possibility for another task to delete the timer.
++		 *
++		 * That's not possible here because this is invoked from
++		 * do_exit() only for the last thread of the thread group.
++		 * So no other task can access and delete that timer.
++		 */
++		if (WARN_ON_ONCE(timer_wait_running(timer, &flags) != timer))
++			return;
++
+ 		goto retry_delete;
+ 	}
+ 	list_del(&timer->list);
+ 
+-	spin_unlock_irq(&timer->it_lock);
++	spin_unlock_irqrestore(&timer->it_lock, flags);
+ 	release_posix_timer(timer, IT_ID_SET);
+ }
+ 
+ /*
+- * This is called by do_exit or de_thread, only when nobody else can
+- * modify the signal->posix_timers list. Yet we need sighand->siglock
+- * to prevent the race with /proc/pid/timers.
++ * Invoked from do_exit() when the last thread of a thread group exits.
++ * At that point no other task can access the timers of the dying
++ * task anymore.
+  */
+ void exit_itimers(struct task_struct *tsk)
+ {
+@@ -1067,10 +1092,12 @@ void exit_itimers(struct task_struct *tsk)
+ 	if (list_empty(&tsk->signal->posix_timers))
+ 		return;
+ 
++	/* Protect against concurrent read via /proc/$PID/timers */
+ 	spin_lock_irq(&tsk->sighand->siglock);
+ 	list_replace_init(&tsk->signal->posix_timers, &timers);
+ 	spin_unlock_irq(&tsk->sighand->siglock);
+ 
++	/* The timers are not longer accessible via tsk::signal */
+ 	while (!list_empty(&timers)) {
+ 		tmr = list_first_entry(&timers, struct k_itimer, list);
+ 		itimer_delete(tmr);
+diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
+index d6fb6a676bbbb..1ad89eec2a55f 100644
+--- a/kernel/time/tick-sched.c
++++ b/kernel/time/tick-sched.c
+@@ -1046,7 +1046,7 @@ static bool report_idle_softirq(void)
+ 			return false;
+ 	}
+ 
+-	if (ratelimit < 10)
++	if (ratelimit >= 10)
+ 		return false;
+ 
+ 	/* On RT, softirqs handling may be waiting on some lock */
+diff --git a/kernel/watchdog_hld.c b/kernel/watchdog_hld.c
+index 247bf0b1582ca..1e8a49dc956e2 100644
+--- a/kernel/watchdog_hld.c
++++ b/kernel/watchdog_hld.c
+@@ -114,14 +114,14 @@ static void watchdog_overflow_callback(struct perf_event *event,
+ 	/* Ensure the watchdog never gets throttled */
+ 	event->hw.interrupts = 0;
+ 
++	if (!watchdog_check_timestamp())
++		return;
++
+ 	if (__this_cpu_read(watchdog_nmi_touch) == true) {
+ 		__this_cpu_write(watchdog_nmi_touch, false);
+ 		return;
+ 	}
+ 
+-	if (!watchdog_check_timestamp())
+-		return;
+-
+ 	/* check for a hardlockup
+ 	 * This is done by making sure our timer interrupt
+ 	 * is incrementing.  The timer interrupt should have
+diff --git a/lib/ts_bm.c b/lib/ts_bm.c
+index 1f2234221dd11..c8ecbf74ef295 100644
+--- a/lib/ts_bm.c
++++ b/lib/ts_bm.c
+@@ -60,10 +60,12 @@ static unsigned int bm_find(struct ts_config *conf, struct ts_state *state)
+ 	struct ts_bm *bm = ts_config_priv(conf);
+ 	unsigned int i, text_len, consumed = state->offset;
+ 	const u8 *text;
+-	int shift = bm->patlen - 1, bs;
++	int bs;
+ 	const u8 icase = conf->flags & TS_IGNORECASE;
+ 
+ 	for (;;) {
++		int shift = bm->patlen - 1;
++
+ 		text_len = conf->get_next_block(consumed, &text, conf, state);
+ 
+ 		if (unlikely(text_len == 0))
+diff --git a/mm/filemap.c b/mm/filemap.c
+index 2723104cc06a1..8f048e62279a2 100644
+--- a/mm/filemap.c
++++ b/mm/filemap.c
+@@ -2903,7 +2903,7 @@ ssize_t filemap_splice_read(struct file *in, loff_t *ppos,
+ 	do {
+ 		cond_resched();
+ 
+-		if (*ppos >= i_size_read(file_inode(in)))
++		if (*ppos >= i_size_read(in->f_mapping->host))
+ 			break;
+ 
+ 		iocb.ki_pos = *ppos;
+@@ -2919,7 +2919,7 @@ ssize_t filemap_splice_read(struct file *in, loff_t *ppos,
+ 		 * part of the page is not copied back to userspace (unless
+ 		 * another truncate extends the file - this is desired though).
+ 		 */
+-		isize = i_size_read(file_inode(in));
++		isize = i_size_read(in->f_mapping->host);
+ 		if (unlikely(*ppos >= isize))
+ 			break;
+ 		end_offset = min_t(loff_t, isize, *ppos + len);
+diff --git a/mm/memory.c b/mm/memory.c
+index 2c2caae6fb3b6..e9dcc1c1eb6e9 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -3914,6 +3914,13 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
+ 		}
+ 	}
+ 
++	/*
++	 * Some architectures may have to restore extra metadata to the page
++	 * when reading from swap. This metadata may be indexed by swap entry
++	 * so this must be called before swap_free().
++	 */
++	arch_swap_restore(entry, folio);
++
+ 	/*
+ 	 * Remove the swap entry and conditionally try to free up the swapcache.
+ 	 * We're already holding a reference on the page but haven't mapped it
+diff --git a/net/core/filter.c b/net/core/filter.c
+index 1d6f165923bff..ebbab8c24beaf 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -6529,12 +6529,11 @@ static struct sock *sk_lookup(struct net *net, struct bpf_sock_tuple *tuple,
+ static struct sock *
+ __bpf_skc_lookup(struct sk_buff *skb, struct bpf_sock_tuple *tuple, u32 len,
+ 		 struct net *caller_net, u32 ifindex, u8 proto, u64 netns_id,
+-		 u64 flags)
++		 u64 flags, int sdif)
+ {
+ 	struct sock *sk = NULL;
+ 	struct net *net;
+ 	u8 family;
+-	int sdif;
+ 
+ 	if (len == sizeof(tuple->ipv4))
+ 		family = AF_INET;
+@@ -6546,10 +6545,12 @@ __bpf_skc_lookup(struct sk_buff *skb, struct bpf_sock_tuple *tuple, u32 len,
+ 	if (unlikely(flags || !((s32)netns_id < 0 || netns_id <= S32_MAX)))
+ 		goto out;
+ 
+-	if (family == AF_INET)
+-		sdif = inet_sdif(skb);
+-	else
+-		sdif = inet6_sdif(skb);
++	if (sdif < 0) {
++		if (family == AF_INET)
++			sdif = inet_sdif(skb);
++		else
++			sdif = inet6_sdif(skb);
++	}
+ 
+ 	if ((s32)netns_id < 0) {
+ 		net = caller_net;
+@@ -6569,10 +6570,11 @@ out:
+ static struct sock *
+ __bpf_sk_lookup(struct sk_buff *skb, struct bpf_sock_tuple *tuple, u32 len,
+ 		struct net *caller_net, u32 ifindex, u8 proto, u64 netns_id,
+-		u64 flags)
++		u64 flags, int sdif)
+ {
+ 	struct sock *sk = __bpf_skc_lookup(skb, tuple, len, caller_net,
+-					   ifindex, proto, netns_id, flags);
++					   ifindex, proto, netns_id, flags,
++					   sdif);
+ 
+ 	if (sk) {
+ 		struct sock *sk2 = sk_to_full_sk(sk);
+@@ -6612,7 +6614,7 @@ bpf_skc_lookup(struct sk_buff *skb, struct bpf_sock_tuple *tuple, u32 len,
+ 	}
+ 
+ 	return __bpf_skc_lookup(skb, tuple, len, caller_net, ifindex, proto,
+-				netns_id, flags);
++				netns_id, flags, -1);
+ }
+ 
+ static struct sock *
+@@ -6701,6 +6703,78 @@ static const struct bpf_func_proto bpf_sk_lookup_udp_proto = {
+ 	.arg5_type	= ARG_ANYTHING,
+ };
+ 
++BPF_CALL_5(bpf_tc_skc_lookup_tcp, struct sk_buff *, skb,
++	   struct bpf_sock_tuple *, tuple, u32, len, u64, netns_id, u64, flags)
++{
++	struct net_device *dev = skb->dev;
++	int ifindex = dev->ifindex, sdif = dev_sdif(dev);
++	struct net *caller_net = dev_net(dev);
++
++	return (unsigned long)__bpf_skc_lookup(skb, tuple, len, caller_net,
++					       ifindex, IPPROTO_TCP, netns_id,
++					       flags, sdif);
++}
++
++static const struct bpf_func_proto bpf_tc_skc_lookup_tcp_proto = {
++	.func		= bpf_tc_skc_lookup_tcp,
++	.gpl_only	= false,
++	.pkt_access	= true,
++	.ret_type	= RET_PTR_TO_SOCK_COMMON_OR_NULL,
++	.arg1_type	= ARG_PTR_TO_CTX,
++	.arg2_type	= ARG_PTR_TO_MEM | MEM_RDONLY,
++	.arg3_type	= ARG_CONST_SIZE,
++	.arg4_type	= ARG_ANYTHING,
++	.arg5_type	= ARG_ANYTHING,
++};
++
++BPF_CALL_5(bpf_tc_sk_lookup_tcp, struct sk_buff *, skb,
++	   struct bpf_sock_tuple *, tuple, u32, len, u64, netns_id, u64, flags)
++{
++	struct net_device *dev = skb->dev;
++	int ifindex = dev->ifindex, sdif = dev_sdif(dev);
++	struct net *caller_net = dev_net(dev);
++
++	return (unsigned long)__bpf_sk_lookup(skb, tuple, len, caller_net,
++					      ifindex, IPPROTO_TCP, netns_id,
++					      flags, sdif);
++}
++
++static const struct bpf_func_proto bpf_tc_sk_lookup_tcp_proto = {
++	.func		= bpf_tc_sk_lookup_tcp,
++	.gpl_only	= false,
++	.pkt_access	= true,
++	.ret_type	= RET_PTR_TO_SOCKET_OR_NULL,
++	.arg1_type	= ARG_PTR_TO_CTX,
++	.arg2_type	= ARG_PTR_TO_MEM | MEM_RDONLY,
++	.arg3_type	= ARG_CONST_SIZE,
++	.arg4_type	= ARG_ANYTHING,
++	.arg5_type	= ARG_ANYTHING,
++};
++
++BPF_CALL_5(bpf_tc_sk_lookup_udp, struct sk_buff *, skb,
++	   struct bpf_sock_tuple *, tuple, u32, len, u64, netns_id, u64, flags)
++{
++	struct net_device *dev = skb->dev;
++	int ifindex = dev->ifindex, sdif = dev_sdif(dev);
++	struct net *caller_net = dev_net(dev);
++
++	return (unsigned long)__bpf_sk_lookup(skb, tuple, len, caller_net,
++					      ifindex, IPPROTO_UDP, netns_id,
++					      flags, sdif);
++}
++
++static const struct bpf_func_proto bpf_tc_sk_lookup_udp_proto = {
++	.func		= bpf_tc_sk_lookup_udp,
++	.gpl_only	= false,
++	.pkt_access	= true,
++	.ret_type	= RET_PTR_TO_SOCKET_OR_NULL,
++	.arg1_type	= ARG_PTR_TO_CTX,
++	.arg2_type	= ARG_PTR_TO_MEM | MEM_RDONLY,
++	.arg3_type	= ARG_CONST_SIZE,
++	.arg4_type	= ARG_ANYTHING,
++	.arg5_type	= ARG_ANYTHING,
++};
++
+ BPF_CALL_1(bpf_sk_release, struct sock *, sk)
+ {
+ 	if (sk && sk_is_refcounted(sk))
+@@ -6718,12 +6792,13 @@ static const struct bpf_func_proto bpf_sk_release_proto = {
+ BPF_CALL_5(bpf_xdp_sk_lookup_udp, struct xdp_buff *, ctx,
+ 	   struct bpf_sock_tuple *, tuple, u32, len, u32, netns_id, u64, flags)
+ {
+-	struct net *caller_net = dev_net(ctx->rxq->dev);
+-	int ifindex = ctx->rxq->dev->ifindex;
++	struct net_device *dev = ctx->rxq->dev;
++	int ifindex = dev->ifindex, sdif = dev_sdif(dev);
++	struct net *caller_net = dev_net(dev);
+ 
+ 	return (unsigned long)__bpf_sk_lookup(NULL, tuple, len, caller_net,
+ 					      ifindex, IPPROTO_UDP, netns_id,
+-					      flags);
++					      flags, sdif);
+ }
+ 
+ static const struct bpf_func_proto bpf_xdp_sk_lookup_udp_proto = {
+@@ -6741,12 +6816,13 @@ static const struct bpf_func_proto bpf_xdp_sk_lookup_udp_proto = {
+ BPF_CALL_5(bpf_xdp_skc_lookup_tcp, struct xdp_buff *, ctx,
+ 	   struct bpf_sock_tuple *, tuple, u32, len, u32, netns_id, u64, flags)
+ {
+-	struct net *caller_net = dev_net(ctx->rxq->dev);
+-	int ifindex = ctx->rxq->dev->ifindex;
++	struct net_device *dev = ctx->rxq->dev;
++	int ifindex = dev->ifindex, sdif = dev_sdif(dev);
++	struct net *caller_net = dev_net(dev);
+ 
+ 	return (unsigned long)__bpf_skc_lookup(NULL, tuple, len, caller_net,
+ 					       ifindex, IPPROTO_TCP, netns_id,
+-					       flags);
++					       flags, sdif);
+ }
+ 
+ static const struct bpf_func_proto bpf_xdp_skc_lookup_tcp_proto = {
+@@ -6764,12 +6840,13 @@ static const struct bpf_func_proto bpf_xdp_skc_lookup_tcp_proto = {
+ BPF_CALL_5(bpf_xdp_sk_lookup_tcp, struct xdp_buff *, ctx,
+ 	   struct bpf_sock_tuple *, tuple, u32, len, u32, netns_id, u64, flags)
+ {
+-	struct net *caller_net = dev_net(ctx->rxq->dev);
+-	int ifindex = ctx->rxq->dev->ifindex;
++	struct net_device *dev = ctx->rxq->dev;
++	int ifindex = dev->ifindex, sdif = dev_sdif(dev);
++	struct net *caller_net = dev_net(dev);
+ 
+ 	return (unsigned long)__bpf_sk_lookup(NULL, tuple, len, caller_net,
+ 					      ifindex, IPPROTO_TCP, netns_id,
+-					      flags);
++					      flags, sdif);
+ }
+ 
+ static const struct bpf_func_proto bpf_xdp_sk_lookup_tcp_proto = {
+@@ -6789,7 +6866,8 @@ BPF_CALL_5(bpf_sock_addr_skc_lookup_tcp, struct bpf_sock_addr_kern *, ctx,
+ {
+ 	return (unsigned long)__bpf_skc_lookup(NULL, tuple, len,
+ 					       sock_net(ctx->sk), 0,
+-					       IPPROTO_TCP, netns_id, flags);
++					       IPPROTO_TCP, netns_id, flags,
++					       -1);
+ }
+ 
+ static const struct bpf_func_proto bpf_sock_addr_skc_lookup_tcp_proto = {
+@@ -6808,7 +6886,7 @@ BPF_CALL_5(bpf_sock_addr_sk_lookup_tcp, struct bpf_sock_addr_kern *, ctx,
+ {
+ 	return (unsigned long)__bpf_sk_lookup(NULL, tuple, len,
+ 					      sock_net(ctx->sk), 0, IPPROTO_TCP,
+-					      netns_id, flags);
++					      netns_id, flags, -1);
+ }
+ 
+ static const struct bpf_func_proto bpf_sock_addr_sk_lookup_tcp_proto = {
+@@ -6827,7 +6905,7 @@ BPF_CALL_5(bpf_sock_addr_sk_lookup_udp, struct bpf_sock_addr_kern *, ctx,
+ {
+ 	return (unsigned long)__bpf_sk_lookup(NULL, tuple, len,
+ 					      sock_net(ctx->sk), 0, IPPROTO_UDP,
+-					      netns_id, flags);
++					      netns_id, flags, -1);
+ }
+ 
+ static const struct bpf_func_proto bpf_sock_addr_sk_lookup_udp_proto = {
+@@ -7954,9 +8032,9 @@ tc_cls_act_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
+ #endif
+ #ifdef CONFIG_INET
+ 	case BPF_FUNC_sk_lookup_tcp:
+-		return &bpf_sk_lookup_tcp_proto;
++		return &bpf_tc_sk_lookup_tcp_proto;
+ 	case BPF_FUNC_sk_lookup_udp:
+-		return &bpf_sk_lookup_udp_proto;
++		return &bpf_tc_sk_lookup_udp_proto;
+ 	case BPF_FUNC_sk_release:
+ 		return &bpf_sk_release_proto;
+ 	case BPF_FUNC_tcp_sock:
+@@ -7964,7 +8042,7 @@ tc_cls_act_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
+ 	case BPF_FUNC_get_listener_sock:
+ 		return &bpf_get_listener_sock_proto;
+ 	case BPF_FUNC_skc_lookup_tcp:
+-		return &bpf_skc_lookup_tcp_proto;
++		return &bpf_tc_skc_lookup_tcp_proto;
+ 	case BPF_FUNC_tcp_check_syncookie:
+ 		return &bpf_tcp_check_syncookie_proto;
+ 	case BPF_FUNC_skb_ecn_set_ce:
+diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
+index f235cc6832767..f21254a9cd373 100644
+--- a/net/core/rtnetlink.c
++++ b/net/core/rtnetlink.c
+@@ -958,24 +958,27 @@ static inline int rtnl_vfinfo_size(const struct net_device *dev,
+ 			 nla_total_size(sizeof(struct ifla_vf_rate)) +
+ 			 nla_total_size(sizeof(struct ifla_vf_link_state)) +
+ 			 nla_total_size(sizeof(struct ifla_vf_rss_query_en)) +
+-			 nla_total_size(0) + /* nest IFLA_VF_STATS */
+-			 /* IFLA_VF_STATS_RX_PACKETS */
+-			 nla_total_size_64bit(sizeof(__u64)) +
+-			 /* IFLA_VF_STATS_TX_PACKETS */
+-			 nla_total_size_64bit(sizeof(__u64)) +
+-			 /* IFLA_VF_STATS_RX_BYTES */
+-			 nla_total_size_64bit(sizeof(__u64)) +
+-			 /* IFLA_VF_STATS_TX_BYTES */
+-			 nla_total_size_64bit(sizeof(__u64)) +
+-			 /* IFLA_VF_STATS_BROADCAST */
+-			 nla_total_size_64bit(sizeof(__u64)) +
+-			 /* IFLA_VF_STATS_MULTICAST */
+-			 nla_total_size_64bit(sizeof(__u64)) +
+-			 /* IFLA_VF_STATS_RX_DROPPED */
+-			 nla_total_size_64bit(sizeof(__u64)) +
+-			 /* IFLA_VF_STATS_TX_DROPPED */
+-			 nla_total_size_64bit(sizeof(__u64)) +
+ 			 nla_total_size(sizeof(struct ifla_vf_trust)));
++		if (~ext_filter_mask & RTEXT_FILTER_SKIP_STATS) {
++			size += num_vfs *
++				(nla_total_size(0) + /* nest IFLA_VF_STATS */
++				 /* IFLA_VF_STATS_RX_PACKETS */
++				 nla_total_size_64bit(sizeof(__u64)) +
++				 /* IFLA_VF_STATS_TX_PACKETS */
++				 nla_total_size_64bit(sizeof(__u64)) +
++				 /* IFLA_VF_STATS_RX_BYTES */
++				 nla_total_size_64bit(sizeof(__u64)) +
++				 /* IFLA_VF_STATS_TX_BYTES */
++				 nla_total_size_64bit(sizeof(__u64)) +
++				 /* IFLA_VF_STATS_BROADCAST */
++				 nla_total_size_64bit(sizeof(__u64)) +
++				 /* IFLA_VF_STATS_MULTICAST */
++				 nla_total_size_64bit(sizeof(__u64)) +
++				 /* IFLA_VF_STATS_RX_DROPPED */
++				 nla_total_size_64bit(sizeof(__u64)) +
++				 /* IFLA_VF_STATS_TX_DROPPED */
++				 nla_total_size_64bit(sizeof(__u64)));
++		}
+ 		return size;
+ 	} else
+ 		return 0;
+@@ -1267,7 +1270,8 @@ static noinline_for_stack int rtnl_fill_stats(struct sk_buff *skb,
+ static noinline_for_stack int rtnl_fill_vfinfo(struct sk_buff *skb,
+ 					       struct net_device *dev,
+ 					       int vfs_num,
+-					       struct nlattr *vfinfo)
++					       struct nlattr *vfinfo,
++					       u32 ext_filter_mask)
+ {
+ 	struct ifla_vf_rss_query_en vf_rss_query_en;
+ 	struct nlattr *vf, *vfstats, *vfvlanlist;
+@@ -1373,33 +1377,35 @@ static noinline_for_stack int rtnl_fill_vfinfo(struct sk_buff *skb,
+ 		goto nla_put_vf_failure;
+ 	}
+ 	nla_nest_end(skb, vfvlanlist);
+-	memset(&vf_stats, 0, sizeof(vf_stats));
+-	if (dev->netdev_ops->ndo_get_vf_stats)
+-		dev->netdev_ops->ndo_get_vf_stats(dev, vfs_num,
+-						&vf_stats);
+-	vfstats = nla_nest_start_noflag(skb, IFLA_VF_STATS);
+-	if (!vfstats)
+-		goto nla_put_vf_failure;
+-	if (nla_put_u64_64bit(skb, IFLA_VF_STATS_RX_PACKETS,
+-			      vf_stats.rx_packets, IFLA_VF_STATS_PAD) ||
+-	    nla_put_u64_64bit(skb, IFLA_VF_STATS_TX_PACKETS,
+-			      vf_stats.tx_packets, IFLA_VF_STATS_PAD) ||
+-	    nla_put_u64_64bit(skb, IFLA_VF_STATS_RX_BYTES,
+-			      vf_stats.rx_bytes, IFLA_VF_STATS_PAD) ||
+-	    nla_put_u64_64bit(skb, IFLA_VF_STATS_TX_BYTES,
+-			      vf_stats.tx_bytes, IFLA_VF_STATS_PAD) ||
+-	    nla_put_u64_64bit(skb, IFLA_VF_STATS_BROADCAST,
+-			      vf_stats.broadcast, IFLA_VF_STATS_PAD) ||
+-	    nla_put_u64_64bit(skb, IFLA_VF_STATS_MULTICAST,
+-			      vf_stats.multicast, IFLA_VF_STATS_PAD) ||
+-	    nla_put_u64_64bit(skb, IFLA_VF_STATS_RX_DROPPED,
+-			      vf_stats.rx_dropped, IFLA_VF_STATS_PAD) ||
+-	    nla_put_u64_64bit(skb, IFLA_VF_STATS_TX_DROPPED,
+-			      vf_stats.tx_dropped, IFLA_VF_STATS_PAD)) {
+-		nla_nest_cancel(skb, vfstats);
+-		goto nla_put_vf_failure;
++	if (~ext_filter_mask & RTEXT_FILTER_SKIP_STATS) {
++		memset(&vf_stats, 0, sizeof(vf_stats));
++		if (dev->netdev_ops->ndo_get_vf_stats)
++			dev->netdev_ops->ndo_get_vf_stats(dev, vfs_num,
++							  &vf_stats);
++		vfstats = nla_nest_start_noflag(skb, IFLA_VF_STATS);
++		if (!vfstats)
++			goto nla_put_vf_failure;
++		if (nla_put_u64_64bit(skb, IFLA_VF_STATS_RX_PACKETS,
++				      vf_stats.rx_packets, IFLA_VF_STATS_PAD) ||
++		    nla_put_u64_64bit(skb, IFLA_VF_STATS_TX_PACKETS,
++				      vf_stats.tx_packets, IFLA_VF_STATS_PAD) ||
++		    nla_put_u64_64bit(skb, IFLA_VF_STATS_RX_BYTES,
++				      vf_stats.rx_bytes, IFLA_VF_STATS_PAD) ||
++		    nla_put_u64_64bit(skb, IFLA_VF_STATS_TX_BYTES,
++				      vf_stats.tx_bytes, IFLA_VF_STATS_PAD) ||
++		    nla_put_u64_64bit(skb, IFLA_VF_STATS_BROADCAST,
++				      vf_stats.broadcast, IFLA_VF_STATS_PAD) ||
++		    nla_put_u64_64bit(skb, IFLA_VF_STATS_MULTICAST,
++				      vf_stats.multicast, IFLA_VF_STATS_PAD) ||
++		    nla_put_u64_64bit(skb, IFLA_VF_STATS_RX_DROPPED,
++				      vf_stats.rx_dropped, IFLA_VF_STATS_PAD) ||
++		    nla_put_u64_64bit(skb, IFLA_VF_STATS_TX_DROPPED,
++				      vf_stats.tx_dropped, IFLA_VF_STATS_PAD)) {
++			nla_nest_cancel(skb, vfstats);
++			goto nla_put_vf_failure;
++		}
++		nla_nest_end(skb, vfstats);
+ 	}
+-	nla_nest_end(skb, vfstats);
+ 	nla_nest_end(skb, vf);
+ 	return 0;
+ 
+@@ -1432,7 +1438,7 @@ static noinline_for_stack int rtnl_fill_vf(struct sk_buff *skb,
+ 		return -EMSGSIZE;
+ 
+ 	for (i = 0; i < num_vfs; i++) {
+-		if (rtnl_fill_vfinfo(skb, dev, i, vfinfo))
++		if (rtnl_fill_vfinfo(skb, dev, i, vfinfo, ext_filter_mask))
+ 			return -EMSGSIZE;
+ 	}
+ 
+@@ -4087,7 +4093,7 @@ static int nlmsg_populate_fdb_fill(struct sk_buff *skb,
+ 	ndm->ndm_ifindex = dev->ifindex;
+ 	ndm->ndm_state   = ndm_state;
+ 
+-	if (nla_put(skb, NDA_LLADDR, ETH_ALEN, addr))
++	if (nla_put(skb, NDA_LLADDR, dev->addr_len, addr))
+ 		goto nla_put_failure;
+ 	if (vid)
+ 		if (nla_put(skb, NDA_VLAN, sizeof(u16), &vid))
+@@ -4101,10 +4107,10 @@ nla_put_failure:
+ 	return -EMSGSIZE;
+ }
+ 
+-static inline size_t rtnl_fdb_nlmsg_size(void)
++static inline size_t rtnl_fdb_nlmsg_size(const struct net_device *dev)
+ {
+ 	return NLMSG_ALIGN(sizeof(struct ndmsg)) +
+-	       nla_total_size(ETH_ALEN) +	/* NDA_LLADDR */
++	       nla_total_size(dev->addr_len) +	/* NDA_LLADDR */
+ 	       nla_total_size(sizeof(u16)) +	/* NDA_VLAN */
+ 	       0;
+ }
+@@ -4116,7 +4122,7 @@ static void rtnl_fdb_notify(struct net_device *dev, u8 *addr, u16 vid, int type,
+ 	struct sk_buff *skb;
+ 	int err = -ENOBUFS;
+ 
+-	skb = nlmsg_new(rtnl_fdb_nlmsg_size(), GFP_ATOMIC);
++	skb = nlmsg_new(rtnl_fdb_nlmsg_size(dev), GFP_ATOMIC);
+ 	if (!skb)
+ 		goto errout;
+ 
+diff --git a/net/core/sock.c b/net/core/sock.c
+index b34c48f802e98..a009e1fac4a69 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -2555,13 +2555,24 @@ kuid_t sock_i_uid(struct sock *sk)
+ }
+ EXPORT_SYMBOL(sock_i_uid);
+ 
+-unsigned long sock_i_ino(struct sock *sk)
++unsigned long __sock_i_ino(struct sock *sk)
+ {
+ 	unsigned long ino;
+ 
+-	read_lock_bh(&sk->sk_callback_lock);
++	read_lock(&sk->sk_callback_lock);
+ 	ino = sk->sk_socket ? SOCK_INODE(sk->sk_socket)->i_ino : 0;
+-	read_unlock_bh(&sk->sk_callback_lock);
++	read_unlock(&sk->sk_callback_lock);
++	return ino;
++}
++EXPORT_SYMBOL(__sock_i_ino);
++
++unsigned long sock_i_ino(struct sock *sk)
++{
++	unsigned long ino;
++
++	local_bh_disable();
++	ino = __sock_i_ino(sk);
++	local_bh_enable();
+ 	return ino;
+ }
+ EXPORT_SYMBOL(sock_i_ino);
+diff --git a/net/dsa/dsa.c b/net/dsa/dsa.c
+index 6cd8607a3928f..390d790c0b056 100644
+--- a/net/dsa/dsa.c
++++ b/net/dsa/dsa.c
+@@ -1105,7 +1105,7 @@ static struct dsa_port *dsa_port_touch(struct dsa_switch *ds, int index)
+ 	mutex_init(&dp->vlans_lock);
+ 	INIT_LIST_HEAD(&dp->fdbs);
+ 	INIT_LIST_HEAD(&dp->mdbs);
+-	INIT_LIST_HEAD(&dp->vlans);
++	INIT_LIST_HEAD(&dp->vlans); /* also initializes &dp->user_vlans */
+ 	INIT_LIST_HEAD(&dp->list);
+ 	list_add_tail(&dp->list, &dst->ports);
+ 
+diff --git a/net/dsa/slave.c b/net/dsa/slave.c
+index 165bb2cb84316..527b1d576460f 100644
+--- a/net/dsa/slave.c
++++ b/net/dsa/slave.c
+@@ -27,6 +27,7 @@
+ #include "master.h"
+ #include "netlink.h"
+ #include "slave.h"
++#include "switch.h"
+ #include "tag.h"
+ 
+ struct dsa_switchdev_event_work {
+@@ -161,8 +162,7 @@ static int dsa_slave_schedule_standalone_work(struct net_device *dev,
+ 	return 0;
+ }
+ 
+-static int dsa_slave_host_vlan_rx_filtering(struct net_device *vdev, int vid,
+-					    void *arg)
++static int dsa_slave_host_vlan_rx_filtering(void *arg, int vid)
+ {
+ 	struct dsa_host_vlan_rx_filtering_ctx *ctx = arg;
+ 
+@@ -170,6 +170,28 @@ static int dsa_slave_host_vlan_rx_filtering(struct net_device *vdev, int vid,
+ 						  ctx->addr, vid);
+ }
+ 
++static int dsa_slave_vlan_for_each(struct net_device *dev,
++				   int (*cb)(void *arg, int vid), void *arg)
++{
++	struct dsa_port *dp = dsa_slave_to_port(dev);
++	struct dsa_vlan *v;
++	int err;
++
++	lockdep_assert_held(&dev->addr_list_lock);
++
++	err = cb(arg, 0);
++	if (err)
++		return err;
++
++	list_for_each_entry(v, &dp->user_vlans, list) {
++		err = cb(arg, v->vid);
++		if (err)
++			return err;
++	}
++
++	return 0;
++}
++
+ static int dsa_slave_sync_uc(struct net_device *dev,
+ 			     const unsigned char *addr)
+ {
+@@ -180,18 +202,14 @@ static int dsa_slave_sync_uc(struct net_device *dev,
+ 		.addr = addr,
+ 		.event = DSA_UC_ADD,
+ 	};
+-	int err;
+ 
+ 	dev_uc_add(master, addr);
+ 
+ 	if (!dsa_switch_supports_uc_filtering(dp->ds))
+ 		return 0;
+ 
+-	err = dsa_slave_schedule_standalone_work(dev, DSA_UC_ADD, addr, 0);
+-	if (err)
+-		return err;
+-
+-	return vlan_for_each(dev, dsa_slave_host_vlan_rx_filtering, &ctx);
++	return dsa_slave_vlan_for_each(dev, dsa_slave_host_vlan_rx_filtering,
++				       &ctx);
+ }
+ 
+ static int dsa_slave_unsync_uc(struct net_device *dev,
+@@ -204,18 +222,14 @@ static int dsa_slave_unsync_uc(struct net_device *dev,
+ 		.addr = addr,
+ 		.event = DSA_UC_DEL,
+ 	};
+-	int err;
+ 
+ 	dev_uc_del(master, addr);
+ 
+ 	if (!dsa_switch_supports_uc_filtering(dp->ds))
+ 		return 0;
+ 
+-	err = dsa_slave_schedule_standalone_work(dev, DSA_UC_DEL, addr, 0);
+-	if (err)
+-		return err;
+-
+-	return vlan_for_each(dev, dsa_slave_host_vlan_rx_filtering, &ctx);
++	return dsa_slave_vlan_for_each(dev, dsa_slave_host_vlan_rx_filtering,
++				       &ctx);
+ }
+ 
+ static int dsa_slave_sync_mc(struct net_device *dev,
+@@ -228,18 +242,14 @@ static int dsa_slave_sync_mc(struct net_device *dev,
+ 		.addr = addr,
+ 		.event = DSA_MC_ADD,
+ 	};
+-	int err;
+ 
+ 	dev_mc_add(master, addr);
+ 
+ 	if (!dsa_switch_supports_mc_filtering(dp->ds))
+ 		return 0;
+ 
+-	err = dsa_slave_schedule_standalone_work(dev, DSA_MC_ADD, addr, 0);
+-	if (err)
+-		return err;
+-
+-	return vlan_for_each(dev, dsa_slave_host_vlan_rx_filtering, &ctx);
++	return dsa_slave_vlan_for_each(dev, dsa_slave_host_vlan_rx_filtering,
++				       &ctx);
+ }
+ 
+ static int dsa_slave_unsync_mc(struct net_device *dev,
+@@ -252,18 +262,14 @@ static int dsa_slave_unsync_mc(struct net_device *dev,
+ 		.addr = addr,
+ 		.event = DSA_MC_DEL,
+ 	};
+-	int err;
+ 
+ 	dev_mc_del(master, addr);
+ 
+ 	if (!dsa_switch_supports_mc_filtering(dp->ds))
+ 		return 0;
+ 
+-	err = dsa_slave_schedule_standalone_work(dev, DSA_MC_DEL, addr, 0);
+-	if (err)
+-		return err;
+-
+-	return vlan_for_each(dev, dsa_slave_host_vlan_rx_filtering, &ctx);
++	return dsa_slave_vlan_for_each(dev, dsa_slave_host_vlan_rx_filtering,
++				       &ctx);
+ }
+ 
+ void dsa_slave_sync_ha(struct net_device *dev)
+@@ -1759,6 +1765,7 @@ static int dsa_slave_vlan_rx_add_vid(struct net_device *dev, __be16 proto,
+ 	struct netlink_ext_ack extack = {0};
+ 	struct dsa_switch *ds = dp->ds;
+ 	struct netdev_hw_addr *ha;
++	struct dsa_vlan *v;
+ 	int ret;
+ 
+ 	/* User port... */
+@@ -1782,8 +1789,17 @@ static int dsa_slave_vlan_rx_add_vid(struct net_device *dev, __be16 proto,
+ 	    !dsa_switch_supports_mc_filtering(ds))
+ 		return 0;
+ 
++	v = kzalloc(sizeof(*v), GFP_KERNEL);
++	if (!v) {
++		ret = -ENOMEM;
++		goto rollback;
++	}
++
+ 	netif_addr_lock_bh(dev);
+ 
++	v->vid = vid;
++	list_add_tail(&v->list, &dp->user_vlans);
++
+ 	if (dsa_switch_supports_mc_filtering(ds)) {
+ 		netdev_for_each_synced_mc_addr(ha, dev) {
+ 			dsa_slave_schedule_standalone_work(dev, DSA_MC_ADD,
+@@ -1803,6 +1819,12 @@ static int dsa_slave_vlan_rx_add_vid(struct net_device *dev, __be16 proto,
+ 	dsa_flush_workqueue();
+ 
+ 	return 0;
++
++rollback:
++	dsa_port_host_vlan_del(dp, &vlan);
++	dsa_port_vlan_del(dp, &vlan);
++
++	return ret;
+ }
+ 
+ static int dsa_slave_vlan_rx_kill_vid(struct net_device *dev, __be16 proto,
+@@ -1816,6 +1838,7 @@ static int dsa_slave_vlan_rx_kill_vid(struct net_device *dev, __be16 proto,
+ 	};
+ 	struct dsa_switch *ds = dp->ds;
+ 	struct netdev_hw_addr *ha;
++	struct dsa_vlan *v;
+ 	int err;
+ 
+ 	err = dsa_port_vlan_del(dp, &vlan);
+@@ -1832,6 +1855,15 @@ static int dsa_slave_vlan_rx_kill_vid(struct net_device *dev, __be16 proto,
+ 
+ 	netif_addr_lock_bh(dev);
+ 
++	v = dsa_vlan_find(&dp->user_vlans, &vlan);
++	if (!v) {
++		netif_addr_unlock_bh(dev);
++		return -ENOENT;
++	}
++
++	list_del(&v->list);
++	kfree(v);
++
+ 	if (dsa_switch_supports_mc_filtering(ds)) {
+ 		netdev_for_each_synced_mc_addr(ha, dev) {
+ 			dsa_slave_schedule_standalone_work(dev, DSA_MC_DEL,
+diff --git a/net/dsa/switch.c b/net/dsa/switch.c
+index d5bc4bb7310dc..36f8ffea2d168 100644
+--- a/net/dsa/switch.c
++++ b/net/dsa/switch.c
+@@ -634,8 +634,8 @@ static bool dsa_port_host_vlan_match(struct dsa_port *dp,
+ 	return false;
+ }
+ 
+-static struct dsa_vlan *dsa_vlan_find(struct list_head *vlan_list,
+-				      const struct switchdev_obj_port_vlan *vlan)
++struct dsa_vlan *dsa_vlan_find(struct list_head *vlan_list,
++			       const struct switchdev_obj_port_vlan *vlan)
+ {
+ 	struct dsa_vlan *v;
+ 
+diff --git a/net/dsa/switch.h b/net/dsa/switch.h
+index 15e67b95eb6e1..ea034677da153 100644
+--- a/net/dsa/switch.h
++++ b/net/dsa/switch.h
+@@ -111,6 +111,9 @@ struct dsa_notifier_master_state_info {
+ 	bool operational;
+ };
+ 
++struct dsa_vlan *dsa_vlan_find(struct list_head *vlan_list,
++			       const struct switchdev_obj_port_vlan *vlan);
++
+ int dsa_tree_notify(struct dsa_switch_tree *dst, unsigned long e, void *v);
+ int dsa_broadcast(unsigned long e, void *v);
+ 
+diff --git a/net/mac80211/debugfs_netdev.c b/net/mac80211/debugfs_netdev.c
+index 0bac9af3ca966..371add9061f1f 100644
+--- a/net/mac80211/debugfs_netdev.c
++++ b/net/mac80211/debugfs_netdev.c
+@@ -691,7 +691,7 @@ static void add_sta_files(struct ieee80211_sub_if_data *sdata)
+ 	DEBUGFS_ADD_MODE(uapsd_queues, 0600);
+ 	DEBUGFS_ADD_MODE(uapsd_max_sp_len, 0600);
+ 	DEBUGFS_ADD_MODE(tdls_wider_bw, 0600);
+-	DEBUGFS_ADD_MODE(valid_links, 0200);
++	DEBUGFS_ADD_MODE(valid_links, 0400);
+ 	DEBUGFS_ADD_MODE(active_links, 0600);
+ }
+ 
+diff --git a/net/mac80211/sta_info.c b/net/mac80211/sta_info.c
+index 941bda9141faa..2ec9e1ead127e 100644
+--- a/net/mac80211/sta_info.c
++++ b/net/mac80211/sta_info.c
+@@ -2901,6 +2901,8 @@ int ieee80211_sta_activate_link(struct sta_info *sta, unsigned int link_id)
+ 	if (!test_sta_flag(sta, WLAN_STA_INSERTED))
+ 		goto hash;
+ 
++	ieee80211_recalc_min_chandef(sdata, link_id);
++
+ 	/* Ensure the values are updated for the driver,
+ 	 * redone by sta_remove_link on failure.
+ 	 */
+diff --git a/net/mac80211/util.c b/net/mac80211/util.c
+index 1a0d38cd46337..ce25d14db4d28 100644
+--- a/net/mac80211/util.c
++++ b/net/mac80211/util.c
+@@ -3707,10 +3707,8 @@ bool ieee80211_chandef_he_6ghz_oper(struct ieee80211_sub_if_data *sdata,
+ 	}
+ 
+ 	eht_cap = ieee80211_get_eht_iftype_cap(sband, iftype);
+-	if (!eht_cap) {
+-		sdata_info(sdata, "Missing iftype sband data/EHT cap");
++	if (!eht_cap)
+ 		eht_oper = NULL;
+-	}
+ 
+ 	he_6ghz_oper = ieee80211_he_6ghz_oper(he_oper);
+ 
+diff --git a/net/netfilter/nf_conntrack_proto_dccp.c b/net/netfilter/nf_conntrack_proto_dccp.c
+index c1557d47ccd1e..d4fd626d2b8c3 100644
+--- a/net/netfilter/nf_conntrack_proto_dccp.c
++++ b/net/netfilter/nf_conntrack_proto_dccp.c
+@@ -432,9 +432,19 @@ static bool dccp_error(const struct dccp_hdr *dh,
+ 		       struct sk_buff *skb, unsigned int dataoff,
+ 		       const struct nf_hook_state *state)
+ {
++	static const unsigned long require_seq48 = 1 << DCCP_PKT_REQUEST |
++						   1 << DCCP_PKT_RESPONSE |
++						   1 << DCCP_PKT_CLOSEREQ |
++						   1 << DCCP_PKT_CLOSE |
++						   1 << DCCP_PKT_RESET |
++						   1 << DCCP_PKT_SYNC |
++						   1 << DCCP_PKT_SYNCACK;
+ 	unsigned int dccp_len = skb->len - dataoff;
+ 	unsigned int cscov;
+ 	const char *msg;
++	u8 type;
++
++	BUILD_BUG_ON(DCCP_PKT_INVALID >= BITS_PER_LONG);
+ 
+ 	if (dh->dccph_doff * 4 < sizeof(struct dccp_hdr) ||
+ 	    dh->dccph_doff * 4 > dccp_len) {
+@@ -459,34 +469,70 @@ static bool dccp_error(const struct dccp_hdr *dh,
+ 		goto out_invalid;
+ 	}
+ 
+-	if (dh->dccph_type >= DCCP_PKT_INVALID) {
++	type = dh->dccph_type;
++	if (type >= DCCP_PKT_INVALID) {
+ 		msg = "nf_ct_dccp: reserved packet type ";
+ 		goto out_invalid;
+ 	}
++
++	if (test_bit(type, &require_seq48) && !dh->dccph_x) {
++		msg = "nf_ct_dccp: type lacks 48bit sequence numbers";
++		goto out_invalid;
++	}
++
+ 	return false;
+ out_invalid:
+ 	nf_l4proto_log_invalid(skb, state, IPPROTO_DCCP, "%s", msg);
+ 	return true;
+ }
+ 
++struct nf_conntrack_dccp_buf {
++	struct dccp_hdr dh;	 /* generic header part */
++	struct dccp_hdr_ext ext; /* optional depending dh->dccph_x */
++	union {			 /* depends on header type */
++		struct dccp_hdr_ack_bits ack;
++		struct dccp_hdr_request req;
++		struct dccp_hdr_response response;
++		struct dccp_hdr_reset rst;
++	} u;
++};
++
++static struct dccp_hdr *
++dccp_header_pointer(const struct sk_buff *skb, int offset, const struct dccp_hdr *dh,
++		    struct nf_conntrack_dccp_buf *buf)
++{
++	unsigned int hdrlen = __dccp_hdr_len(dh);
++
++	if (hdrlen > sizeof(*buf))
++		return NULL;
++
++	return skb_header_pointer(skb, offset, hdrlen, buf);
++}
++
+ int nf_conntrack_dccp_packet(struct nf_conn *ct, struct sk_buff *skb,
+ 			     unsigned int dataoff,
+ 			     enum ip_conntrack_info ctinfo,
+ 			     const struct nf_hook_state *state)
+ {
+ 	enum ip_conntrack_dir dir = CTINFO2DIR(ctinfo);
+-	struct dccp_hdr _dh, *dh;
++	struct nf_conntrack_dccp_buf _dh;
+ 	u_int8_t type, old_state, new_state;
+ 	enum ct_dccp_roles role;
+ 	unsigned int *timeouts;
++	struct dccp_hdr *dh;
+ 
+-	dh = skb_header_pointer(skb, dataoff, sizeof(_dh), &_dh);
++	dh = skb_header_pointer(skb, dataoff, sizeof(*dh), &_dh.dh);
+ 	if (!dh)
+ 		return NF_DROP;
+ 
+ 	if (dccp_error(dh, skb, dataoff, state))
+ 		return -NF_ACCEPT;
+ 
++	/* pull again, including possible 48 bit sequences and subtype header */
++	dh = dccp_header_pointer(skb, dataoff, dh, &_dh);
++	if (!dh)
++		return NF_DROP;
++
+ 	type = dh->dccph_type;
+ 	if (!nf_ct_is_confirmed(ct) && !dccp_new(ct, skb, dh, state))
+ 		return -NF_ACCEPT;
+diff --git a/net/netfilter/nf_conntrack_sip.c b/net/netfilter/nf_conntrack_sip.c
+index 77f5e82d8e3fe..d0eac27f6ba03 100644
+--- a/net/netfilter/nf_conntrack_sip.c
++++ b/net/netfilter/nf_conntrack_sip.c
+@@ -611,7 +611,7 @@ int ct_sip_parse_numerical_param(const struct nf_conn *ct, const char *dptr,
+ 	start += strlen(name);
+ 	*val = simple_strtoul(start, &end, 0);
+ 	if (start == end)
+-		return 0;
++		return -1;
+ 	if (matchoff && matchlen) {
+ 		*matchoff = start - dptr;
+ 		*matchlen = end - start;
+diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
+index 717e27a4b66a0..f4adccb8d49a2 100644
+--- a/net/netlink/af_netlink.c
++++ b/net/netlink/af_netlink.c
+@@ -1600,6 +1600,7 @@ out:
+ int netlink_set_err(struct sock *ssk, u32 portid, u32 group, int code)
+ {
+ 	struct netlink_set_err_data info;
++	unsigned long flags;
+ 	struct sock *sk;
+ 	int ret = 0;
+ 
+@@ -1609,12 +1610,12 @@ int netlink_set_err(struct sock *ssk, u32 portid, u32 group, int code)
+ 	/* sk->sk_err wants a positive error value */
+ 	info.code = -code;
+ 
+-	read_lock(&nl_table_lock);
++	read_lock_irqsave(&nl_table_lock, flags);
+ 
+ 	sk_for_each_bound(sk, &nl_table[ssk->sk_protocol].mc_list)
+ 		ret += do_one_set_err(sk, &info);
+ 
+-	read_unlock(&nl_table_lock);
++	read_unlock_irqrestore(&nl_table_lock, flags);
+ 	return ret;
+ }
+ EXPORT_SYMBOL(netlink_set_err);
+diff --git a/net/netlink/diag.c b/net/netlink/diag.c
+index c6255eac305c7..e4f21b1067bcc 100644
+--- a/net/netlink/diag.c
++++ b/net/netlink/diag.c
+@@ -94,6 +94,7 @@ static int __netlink_diag_dump(struct sk_buff *skb, struct netlink_callback *cb,
+ 	struct net *net = sock_net(skb->sk);
+ 	struct netlink_diag_req *req;
+ 	struct netlink_sock *nlsk;
++	unsigned long flags;
+ 	struct sock *sk;
+ 	int num = 2;
+ 	int ret = 0;
+@@ -152,7 +153,7 @@ static int __netlink_diag_dump(struct sk_buff *skb, struct netlink_callback *cb,
+ 	num++;
+ 
+ mc_list:
+-	read_lock(&nl_table_lock);
++	read_lock_irqsave(&nl_table_lock, flags);
+ 	sk_for_each_bound(sk, &tbl->mc_list) {
+ 		if (sk_hashed(sk))
+ 			continue;
+@@ -167,13 +168,13 @@ mc_list:
+ 				 NETLINK_CB(cb->skb).portid,
+ 				 cb->nlh->nlmsg_seq,
+ 				 NLM_F_MULTI,
+-				 sock_i_ino(sk)) < 0) {
++				 __sock_i_ino(sk)) < 0) {
+ 			ret = 1;
+ 			break;
+ 		}
+ 		num++;
+ 	}
+-	read_unlock(&nl_table_lock);
++	read_unlock_irqrestore(&nl_table_lock, flags);
+ 
+ done:
+ 	cb->args[0] = num;
+diff --git a/net/nfc/llcp.h b/net/nfc/llcp.h
+index c1d9be636933c..d8345ed57c954 100644
+--- a/net/nfc/llcp.h
++++ b/net/nfc/llcp.h
+@@ -201,7 +201,6 @@ void nfc_llcp_sock_link(struct llcp_sock_list *l, struct sock *s);
+ void nfc_llcp_sock_unlink(struct llcp_sock_list *l, struct sock *s);
+ void nfc_llcp_socket_remote_param_init(struct nfc_llcp_sock *sock);
+ struct nfc_llcp_local *nfc_llcp_find_local(struct nfc_dev *dev);
+-struct nfc_llcp_local *nfc_llcp_local_get(struct nfc_llcp_local *local);
+ int nfc_llcp_local_put(struct nfc_llcp_local *local);
+ u8 nfc_llcp_get_sdp_ssap(struct nfc_llcp_local *local,
+ 			 struct nfc_llcp_sock *sock);
+diff --git a/net/nfc/llcp_commands.c b/net/nfc/llcp_commands.c
+index 41e3a20c89355..e2680a3bef799 100644
+--- a/net/nfc/llcp_commands.c
++++ b/net/nfc/llcp_commands.c
+@@ -359,6 +359,7 @@ int nfc_llcp_send_symm(struct nfc_dev *dev)
+ 	struct sk_buff *skb;
+ 	struct nfc_llcp_local *local;
+ 	u16 size = 0;
++	int err;
+ 
+ 	local = nfc_llcp_find_local(dev);
+ 	if (local == NULL)
+@@ -368,8 +369,10 @@ int nfc_llcp_send_symm(struct nfc_dev *dev)
+ 	size += dev->tx_headroom + dev->tx_tailroom + NFC_HEADER_SIZE;
+ 
+ 	skb = alloc_skb(size, GFP_KERNEL);
+-	if (skb == NULL)
+-		return -ENOMEM;
++	if (skb == NULL) {
++		err = -ENOMEM;
++		goto out;
++	}
+ 
+ 	skb_reserve(skb, dev->tx_headroom + NFC_HEADER_SIZE);
+ 
+@@ -379,8 +382,11 @@ int nfc_llcp_send_symm(struct nfc_dev *dev)
+ 
+ 	nfc_llcp_send_to_raw_sock(local, skb, NFC_DIRECTION_TX);
+ 
+-	return nfc_data_exchange(dev, local->target_idx, skb,
++	err = nfc_data_exchange(dev, local->target_idx, skb,
+ 				 nfc_llcp_recv, local);
++out:
++	nfc_llcp_local_put(local);
++	return err;
+ }
+ 
+ int nfc_llcp_send_connect(struct nfc_llcp_sock *sock)
+@@ -390,7 +396,8 @@ int nfc_llcp_send_connect(struct nfc_llcp_sock *sock)
+ 	const u8 *service_name_tlv = NULL;
+ 	const u8 *miux_tlv = NULL;
+ 	const u8 *rw_tlv = NULL;
+-	u8 service_name_tlv_length, miux_tlv_length,  rw_tlv_length, rw;
++	u8 service_name_tlv_length = 0;
++	u8 miux_tlv_length,  rw_tlv_length, rw;
+ 	int err;
+ 	u16 size = 0;
+ 	__be16 miux;
+diff --git a/net/nfc/llcp_core.c b/net/nfc/llcp_core.c
+index a27e1842b2a09..f60e424e06076 100644
+--- a/net/nfc/llcp_core.c
++++ b/net/nfc/llcp_core.c
+@@ -17,6 +17,8 @@
+ static u8 llcp_magic[3] = {0x46, 0x66, 0x6d};
+ 
+ static LIST_HEAD(llcp_devices);
++/* Protects llcp_devices list */
++static DEFINE_SPINLOCK(llcp_devices_lock);
+ 
+ static void nfc_llcp_rx_skb(struct nfc_llcp_local *local, struct sk_buff *skb);
+ 
+@@ -141,7 +143,7 @@ static void nfc_llcp_socket_release(struct nfc_llcp_local *local, bool device,
+ 	write_unlock(&local->raw_sockets.lock);
+ }
+ 
+-struct nfc_llcp_local *nfc_llcp_local_get(struct nfc_llcp_local *local)
++static struct nfc_llcp_local *nfc_llcp_local_get(struct nfc_llcp_local *local)
+ {
+ 	kref_get(&local->ref);
+ 
+@@ -169,7 +171,6 @@ static void local_release(struct kref *ref)
+ 
+ 	local = container_of(ref, struct nfc_llcp_local, ref);
+ 
+-	list_del(&local->list);
+ 	local_cleanup(local);
+ 	kfree(local);
+ }
+@@ -282,12 +283,33 @@ static void nfc_llcp_sdreq_timer(struct timer_list *t)
+ struct nfc_llcp_local *nfc_llcp_find_local(struct nfc_dev *dev)
+ {
+ 	struct nfc_llcp_local *local;
++	struct nfc_llcp_local *res = NULL;
+ 
++	spin_lock(&llcp_devices_lock);
+ 	list_for_each_entry(local, &llcp_devices, list)
+-		if (local->dev == dev)
++		if (local->dev == dev) {
++			res = nfc_llcp_local_get(local);
++			break;
++		}
++	spin_unlock(&llcp_devices_lock);
++
++	return res;
++}
++
++static struct nfc_llcp_local *nfc_llcp_remove_local(struct nfc_dev *dev)
++{
++	struct nfc_llcp_local *local, *tmp;
++
++	spin_lock(&llcp_devices_lock);
++	list_for_each_entry_safe(local, tmp, &llcp_devices, list)
++		if (local->dev == dev) {
++			list_del(&local->list);
++			spin_unlock(&llcp_devices_lock);
+ 			return local;
++		}
++	spin_unlock(&llcp_devices_lock);
+ 
+-	pr_debug("No device found\n");
++	pr_warn("Shutting down device not found\n");
+ 
+ 	return NULL;
+ }
+@@ -608,12 +630,15 @@ u8 *nfc_llcp_general_bytes(struct nfc_dev *dev, size_t *general_bytes_len)
+ 
+ 	*general_bytes_len = local->gb_len;
+ 
++	nfc_llcp_local_put(local);
++
+ 	return local->gb;
+ }
+ 
+ int nfc_llcp_set_remote_gb(struct nfc_dev *dev, const u8 *gb, u8 gb_len)
+ {
+ 	struct nfc_llcp_local *local;
++	int err;
+ 
+ 	if (gb_len < 3 || gb_len > NFC_MAX_GT_LEN)
+ 		return -EINVAL;
+@@ -630,12 +655,16 @@ int nfc_llcp_set_remote_gb(struct nfc_dev *dev, const u8 *gb, u8 gb_len)
+ 
+ 	if (memcmp(local->remote_gb, llcp_magic, 3)) {
+ 		pr_err("MAC does not support LLCP\n");
+-		return -EINVAL;
++		err = -EINVAL;
++		goto out;
+ 	}
+ 
+-	return nfc_llcp_parse_gb_tlv(local,
++	err = nfc_llcp_parse_gb_tlv(local,
+ 				     &local->remote_gb[3],
+ 				     local->remote_gb_len - 3);
++out:
++	nfc_llcp_local_put(local);
++	return err;
+ }
+ 
+ static u8 nfc_llcp_dsap(const struct sk_buff *pdu)
+@@ -1517,6 +1546,8 @@ int nfc_llcp_data_received(struct nfc_dev *dev, struct sk_buff *skb)
+ 
+ 	__nfc_llcp_recv(local, skb);
+ 
++	nfc_llcp_local_put(local);
++
+ 	return 0;
+ }
+ 
+@@ -1533,6 +1564,8 @@ void nfc_llcp_mac_is_down(struct nfc_dev *dev)
+ 
+ 	/* Close and purge all existing sockets */
+ 	nfc_llcp_socket_release(local, true, 0);
++
++	nfc_llcp_local_put(local);
+ }
+ 
+ void nfc_llcp_mac_is_up(struct nfc_dev *dev, u32 target_idx,
+@@ -1558,6 +1591,8 @@ void nfc_llcp_mac_is_up(struct nfc_dev *dev, u32 target_idx,
+ 		mod_timer(&local->link_timer,
+ 			  jiffies + msecs_to_jiffies(local->remote_lto));
+ 	}
++
++	nfc_llcp_local_put(local);
+ }
+ 
+ int nfc_llcp_register_device(struct nfc_dev *ndev)
+@@ -1608,7 +1643,7 @@ int nfc_llcp_register_device(struct nfc_dev *ndev)
+ 
+ void nfc_llcp_unregister_device(struct nfc_dev *dev)
+ {
+-	struct nfc_llcp_local *local = nfc_llcp_find_local(dev);
++	struct nfc_llcp_local *local = nfc_llcp_remove_local(dev);
+ 
+ 	if (local == NULL) {
+ 		pr_debug("No such device\n");
+diff --git a/net/nfc/llcp_sock.c b/net/nfc/llcp_sock.c
+index 77642d18a3b43..645677f84dba2 100644
+--- a/net/nfc/llcp_sock.c
++++ b/net/nfc/llcp_sock.c
+@@ -99,7 +99,7 @@ static int llcp_sock_bind(struct socket *sock, struct sockaddr *addr, int alen)
+ 	}
+ 
+ 	llcp_sock->dev = dev;
+-	llcp_sock->local = nfc_llcp_local_get(local);
++	llcp_sock->local = local;
+ 	llcp_sock->nfc_protocol = llcp_addr.nfc_protocol;
+ 	llcp_sock->service_name_len = min_t(unsigned int,
+ 					    llcp_addr.service_name_len,
+@@ -186,7 +186,7 @@ static int llcp_raw_sock_bind(struct socket *sock, struct sockaddr *addr,
+ 	}
+ 
+ 	llcp_sock->dev = dev;
+-	llcp_sock->local = nfc_llcp_local_get(local);
++	llcp_sock->local = local;
+ 	llcp_sock->nfc_protocol = llcp_addr.nfc_protocol;
+ 
+ 	nfc_llcp_sock_link(&local->raw_sockets, sk);
+@@ -696,22 +696,22 @@ static int llcp_sock_connect(struct socket *sock, struct sockaddr *_addr,
+ 	if (dev->dep_link_up == false) {
+ 		ret = -ENOLINK;
+ 		device_unlock(&dev->dev);
+-		goto put_dev;
++		goto sock_llcp_put_local;
+ 	}
+ 	device_unlock(&dev->dev);
+ 
+ 	if (local->rf_mode == NFC_RF_INITIATOR &&
+ 	    addr->target_idx != local->target_idx) {
+ 		ret = -ENOLINK;
+-		goto put_dev;
++		goto sock_llcp_put_local;
+ 	}
+ 
+ 	llcp_sock->dev = dev;
+-	llcp_sock->local = nfc_llcp_local_get(local);
++	llcp_sock->local = local;
+ 	llcp_sock->ssap = nfc_llcp_get_local_ssap(local);
+ 	if (llcp_sock->ssap == LLCP_SAP_MAX) {
+ 		ret = -ENOMEM;
+-		goto sock_llcp_put_local;
++		goto sock_llcp_nullify;
+ 	}
+ 
+ 	llcp_sock->reserved_ssap = llcp_sock->ssap;
+@@ -757,11 +757,13 @@ sock_unlink:
+ sock_llcp_release:
+ 	nfc_llcp_put_ssap(local, llcp_sock->ssap);
+ 
+-sock_llcp_put_local:
+-	nfc_llcp_local_put(llcp_sock->local);
++sock_llcp_nullify:
+ 	llcp_sock->local = NULL;
+ 	llcp_sock->dev = NULL;
+ 
++sock_llcp_put_local:
++	nfc_llcp_local_put(local);
++
+ put_dev:
+ 	nfc_put_device(dev);
+ 
+diff --git a/net/nfc/netlink.c b/net/nfc/netlink.c
+index b9264e730fd93..e9ac6a6f934e7 100644
+--- a/net/nfc/netlink.c
++++ b/net/nfc/netlink.c
+@@ -1039,11 +1039,14 @@ static int nfc_genl_llc_get_params(struct sk_buff *skb, struct genl_info *info)
+ 	msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL);
+ 	if (!msg) {
+ 		rc = -ENOMEM;
+-		goto exit;
++		goto put_local;
+ 	}
+ 
+ 	rc = nfc_genl_send_params(msg, local, info->snd_portid, info->snd_seq);
+ 
++put_local:
++	nfc_llcp_local_put(local);
++
+ exit:
+ 	device_unlock(&dev->dev);
+ 
+@@ -1105,7 +1108,7 @@ static int nfc_genl_llc_set_params(struct sk_buff *skb, struct genl_info *info)
+ 	if (info->attrs[NFC_ATTR_LLC_PARAM_LTO]) {
+ 		if (dev->dep_link_up) {
+ 			rc = -EINPROGRESS;
+-			goto exit;
++			goto put_local;
+ 		}
+ 
+ 		local->lto = nla_get_u8(info->attrs[NFC_ATTR_LLC_PARAM_LTO]);
+@@ -1117,6 +1120,9 @@ static int nfc_genl_llc_set_params(struct sk_buff *skb, struct genl_info *info)
+ 	if (info->attrs[NFC_ATTR_LLC_PARAM_MIUX])
+ 		local->miux = cpu_to_be16(miux);
+ 
++put_local:
++	nfc_llcp_local_put(local);
++
+ exit:
+ 	device_unlock(&dev->dev);
+ 
+@@ -1172,7 +1178,7 @@ static int nfc_genl_llc_sdreq(struct sk_buff *skb, struct genl_info *info)
+ 
+ 		if (rc != 0) {
+ 			rc = -EINVAL;
+-			goto exit;
++			goto put_local;
+ 		}
+ 
+ 		if (!sdp_attrs[NFC_SDP_ATTR_URI])
+@@ -1191,7 +1197,7 @@ static int nfc_genl_llc_sdreq(struct sk_buff *skb, struct genl_info *info)
+ 		sdreq = nfc_llcp_build_sdreq_tlv(tid, uri, uri_len);
+ 		if (sdreq == NULL) {
+ 			rc = -ENOMEM;
+-			goto exit;
++			goto put_local;
+ 		}
+ 
+ 		tlvs_len += sdreq->tlv_len;
+@@ -1201,10 +1207,14 @@ static int nfc_genl_llc_sdreq(struct sk_buff *skb, struct genl_info *info)
+ 
+ 	if (hlist_empty(&sdreq_list)) {
+ 		rc = -EINVAL;
+-		goto exit;
++		goto put_local;
+ 	}
+ 
+ 	rc = nfc_llcp_send_snl_sdreq(local, &sdreq_list, tlvs_len);
++
++put_local:
++	nfc_llcp_local_put(local);
++
+ exit:
+ 	device_unlock(&dev->dev);
+ 
+diff --git a/net/nfc/nfc.h b/net/nfc/nfc.h
+index de2ec66d7e83a..0b1e6466f4fbf 100644
+--- a/net/nfc/nfc.h
++++ b/net/nfc/nfc.h
+@@ -52,6 +52,7 @@ int nfc_llcp_set_remote_gb(struct nfc_dev *dev, const u8 *gb, u8 gb_len);
+ u8 *nfc_llcp_general_bytes(struct nfc_dev *dev, size_t *general_bytes_len);
+ int nfc_llcp_data_received(struct nfc_dev *dev, struct sk_buff *skb);
+ struct nfc_llcp_local *nfc_llcp_find_local(struct nfc_dev *dev);
++int nfc_llcp_local_put(struct nfc_llcp_local *local);
+ int __init nfc_llcp_init(void);
+ void nfc_llcp_exit(void);
+ void nfc_llcp_free_sdp_tlv(struct nfc_llcp_sdp_tlv *sdp);
+diff --git a/net/sctp/socket.c b/net/sctp/socket.c
+index 218e0982c3707..0932cbf568ee9 100644
+--- a/net/sctp/socket.c
++++ b/net/sctp/socket.c
+@@ -8280,6 +8280,22 @@ static int sctp_getsockopt(struct sock *sk, int level, int optname,
+ 	return retval;
+ }
+ 
++static bool sctp_bpf_bypass_getsockopt(int level, int optname)
++{
++	if (level == SOL_SCTP) {
++		switch (optname) {
++		case SCTP_SOCKOPT_PEELOFF:
++		case SCTP_SOCKOPT_PEELOFF_FLAGS:
++		case SCTP_SOCKOPT_CONNECTX3:
++			return true;
++		default:
++			return false;
++		}
++	}
++
++	return false;
++}
++
+ static int sctp_hash(struct sock *sk)
+ {
+ 	/* STUB */
+@@ -9649,6 +9665,7 @@ struct proto sctp_prot = {
+ 	.shutdown    =	sctp_shutdown,
+ 	.setsockopt  =	sctp_setsockopt,
+ 	.getsockopt  =	sctp_getsockopt,
++	.bpf_bypass_getsockopt	= sctp_bpf_bypass_getsockopt,
+ 	.sendmsg     =	sctp_sendmsg,
+ 	.recvmsg     =	sctp_recvmsg,
+ 	.bind        =	sctp_bind,
+@@ -9704,6 +9721,7 @@ struct proto sctpv6_prot = {
+ 	.shutdown	= sctp_shutdown,
+ 	.setsockopt	= sctp_setsockopt,
+ 	.getsockopt	= sctp_getsockopt,
++	.bpf_bypass_getsockopt	= sctp_bpf_bypass_getsockopt,
+ 	.sendmsg	= sctp_sendmsg,
+ 	.recvmsg	= sctp_recvmsg,
+ 	.bind		= sctp_bind,
+diff --git a/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
+index a22fe7587fa6f..70207d8a318a4 100644
+--- a/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
++++ b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
+@@ -796,6 +796,12 @@ int svc_rdma_recvfrom(struct svc_rqst *rqstp)
+ 	struct svc_rdma_recv_ctxt *ctxt;
+ 	int ret;
+ 
++	/* Prevent svc_xprt_release() from releasing pages in rq_pages
++	 * when returning 0 or an error.
++	 */
++	rqstp->rq_respages = rqstp->rq_pages;
++	rqstp->rq_next_page = rqstp->rq_respages;
++
+ 	rqstp->rq_xprt_ctxt = NULL;
+ 
+ 	ctxt = NULL;
+@@ -819,12 +825,6 @@ int svc_rdma_recvfrom(struct svc_rqst *rqstp)
+ 				   DMA_FROM_DEVICE);
+ 	svc_rdma_build_arg_xdr(rqstp, ctxt);
+ 
+-	/* Prevent svc_xprt_release from releasing pages in rq_pages
+-	 * if we return 0 or an error.
+-	 */
+-	rqstp->rq_respages = rqstp->rq_pages;
+-	rqstp->rq_next_page = rqstp->rq_respages;
+-
+ 	ret = svc_rdma_xdr_decode_req(&rqstp->rq_arg, ctxt);
+ 	if (ret < 0)
+ 		goto out_err;
+diff --git a/net/wireless/core.c b/net/wireless/core.c
+index b3ec9eaec36b3..609b79fe4a748 100644
+--- a/net/wireless/core.c
++++ b/net/wireless/core.c
+@@ -721,22 +721,6 @@ int wiphy_register(struct wiphy *wiphy)
+ 			return -EINVAL;
+ 	}
+ 
+-	/*
+-	 * if a wiphy has unsupported modes for regulatory channel enforcement,
+-	 * opt-out of enforcement checking
+-	 */
+-	if (wiphy->interface_modes & ~(BIT(NL80211_IFTYPE_STATION) |
+-				       BIT(NL80211_IFTYPE_P2P_CLIENT) |
+-				       BIT(NL80211_IFTYPE_AP) |
+-				       BIT(NL80211_IFTYPE_MESH_POINT) |
+-				       BIT(NL80211_IFTYPE_P2P_GO) |
+-				       BIT(NL80211_IFTYPE_ADHOC) |
+-				       BIT(NL80211_IFTYPE_P2P_DEVICE) |
+-				       BIT(NL80211_IFTYPE_NAN) |
+-				       BIT(NL80211_IFTYPE_AP_VLAN) |
+-				       BIT(NL80211_IFTYPE_MONITOR)))
+-		wiphy->regulatory_flags |= REGULATORY_IGNORE_STALE_KICKOFF;
+-
+ 	if (WARN_ON((wiphy->regulatory_flags & REGULATORY_WIPHY_SELF_MANAGED) &&
+ 		    (wiphy->regulatory_flags &
+ 					(REGULATORY_CUSTOM_REG |
+diff --git a/net/wireless/reg.c b/net/wireless/reg.c
+index 26f11e4746c05..c8a1b925413b3 100644
+--- a/net/wireless/reg.c
++++ b/net/wireless/reg.c
+@@ -2391,9 +2391,17 @@ static bool reg_wdev_chan_valid(struct wiphy *wiphy, struct wireless_dev *wdev)
+ 		case NL80211_IFTYPE_P2P_DEVICE:
+ 			/* no enforcement required */
+ 			break;
++		case NL80211_IFTYPE_OCB:
++			if (!wdev->u.ocb.chandef.chan)
++				continue;
++			chandef = wdev->u.ocb.chandef;
++			break;
++		case NL80211_IFTYPE_NAN:
++			/* we have no info, but NAN is also pretty universal */
++			continue;
+ 		default:
+ 			/* others not implemented for now */
+-			WARN_ON(1);
++			WARN_ON_ONCE(1);
+ 			break;
+ 		}
+ 
+@@ -2452,9 +2460,7 @@ static void reg_check_chans_work(struct work_struct *work)
+ 	rtnl_lock();
+ 
+ 	list_for_each_entry(rdev, &cfg80211_rdev_list, list)
+-		if (!(rdev->wiphy.regulatory_flags &
+-		      REGULATORY_IGNORE_STALE_KICKOFF))
+-			reg_leave_invalid_chans(&rdev->wiphy);
++		reg_leave_invalid_chans(&rdev->wiphy);
+ 
+ 	rtnl_unlock();
+ }
+diff --git a/net/wireless/scan.c b/net/wireless/scan.c
+index b3829ed844f84..68f9b6f7bf584 100644
+--- a/net/wireless/scan.c
++++ b/net/wireless/scan.c
+@@ -259,117 +259,152 @@ bool cfg80211_is_element_inherited(const struct element *elem,
+ }
+ EXPORT_SYMBOL(cfg80211_is_element_inherited);
+ 
+-static size_t cfg80211_gen_new_ie(const u8 *ie, size_t ielen,
+-				  const u8 *subelement, size_t subie_len,
+-				  u8 *new_ie, gfp_t gfp)
++static size_t cfg80211_copy_elem_with_frags(const struct element *elem,
++					    const u8 *ie, size_t ie_len,
++					    u8 **pos, u8 *buf, size_t buf_len)
+ {
+-	u8 *pos, *tmp;
+-	const u8 *tmp_old, *tmp_new;
+-	const struct element *non_inherit_elem;
+-	u8 *sub_copy;
++	if (WARN_ON((u8 *)elem < ie || elem->data > ie + ie_len ||
++		    elem->data + elem->datalen > ie + ie_len))
++		return 0;
+ 
+-	/* copy subelement as we need to change its content to
+-	 * mark an ie after it is processed.
+-	 */
+-	sub_copy = kmemdup(subelement, subie_len, gfp);
+-	if (!sub_copy)
++	if (elem->datalen + 2 > buf + buf_len - *pos)
+ 		return 0;
+ 
+-	pos = &new_ie[0];
++	memcpy(*pos, elem, elem->datalen + 2);
++	*pos += elem->datalen + 2;
++
++	/* Finish if it is not fragmented  */
++	if (elem->datalen != 255)
++		return *pos - buf;
++
++	ie_len = ie + ie_len - elem->data - elem->datalen;
++	ie = (const u8 *)elem->data + elem->datalen;
++
++	for_each_element(elem, ie, ie_len) {
++		if (elem->id != WLAN_EID_FRAGMENT)
++			break;
++
++		if (elem->datalen + 2 > buf + buf_len - *pos)
++			return 0;
++
++		memcpy(*pos, elem, elem->datalen + 2);
++		*pos += elem->datalen + 2;
+ 
+-	/* set new ssid */
+-	tmp_new = cfg80211_find_ie(WLAN_EID_SSID, sub_copy, subie_len);
+-	if (tmp_new) {
+-		memcpy(pos, tmp_new, tmp_new[1] + 2);
+-		pos += (tmp_new[1] + 2);
++		if (elem->datalen != 255)
++			break;
+ 	}
+ 
+-	/* get non inheritance list if exists */
+-	non_inherit_elem =
+-		cfg80211_find_ext_elem(WLAN_EID_EXT_NON_INHERITANCE,
+-				       sub_copy, subie_len);
++	return *pos - buf;
++}
+ 
+-	/* go through IEs in ie (skip SSID) and subelement,
+-	 * merge them into new_ie
++static size_t cfg80211_gen_new_ie(const u8 *ie, size_t ielen,
++				  const u8 *subie, size_t subie_len,
++				  u8 *new_ie, size_t new_ie_len)
++{
++	const struct element *non_inherit_elem, *parent, *sub;
++	u8 *pos = new_ie;
++	u8 id, ext_id;
++	unsigned int match_len;
++
++	non_inherit_elem = cfg80211_find_ext_elem(WLAN_EID_EXT_NON_INHERITANCE,
++						  subie, subie_len);
++
++	/* We copy the elements one by one from the parent to the generated
++	 * elements.
++	 * If they are not inherited (included in subie or in the non
++	 * inheritance element), then we copy all occurrences the first time
++	 * we see this element type.
+ 	 */
+-	tmp_old = cfg80211_find_ie(WLAN_EID_SSID, ie, ielen);
+-	tmp_old = (tmp_old) ? tmp_old + tmp_old[1] + 2 : ie;
+-
+-	while (tmp_old + 2 - ie <= ielen &&
+-	       tmp_old + tmp_old[1] + 2 - ie <= ielen) {
+-		if (tmp_old[0] == 0) {
+-			tmp_old++;
++	for_each_element(parent, ie, ielen) {
++		if (parent->id == WLAN_EID_FRAGMENT)
+ 			continue;
++
++		if (parent->id == WLAN_EID_EXTENSION) {
++			if (parent->datalen < 1)
++				continue;
++
++			id = WLAN_EID_EXTENSION;
++			ext_id = parent->data[0];
++			match_len = 1;
++		} else {
++			id = parent->id;
++			match_len = 0;
+ 		}
+ 
+-		if (tmp_old[0] == WLAN_EID_EXTENSION)
+-			tmp = (u8 *)cfg80211_find_ext_ie(tmp_old[2], sub_copy,
+-							 subie_len);
+-		else
+-			tmp = (u8 *)cfg80211_find_ie(tmp_old[0], sub_copy,
+-						     subie_len);
++		/* Find first occurrence in subie */
++		sub = cfg80211_find_elem_match(id, subie, subie_len,
++					       &ext_id, match_len, 0);
+ 
+-		if (!tmp) {
+-			const struct element *old_elem = (void *)tmp_old;
++		/* Copy from parent if not in subie and inherited */
++		if (!sub &&
++		    cfg80211_is_element_inherited(parent, non_inherit_elem)) {
++			if (!cfg80211_copy_elem_with_frags(parent,
++							   ie, ielen,
++							   &pos, new_ie,
++							   new_ie_len))
++				return 0;
+ 
+-			/* ie in old ie but not in subelement */
+-			if (cfg80211_is_element_inherited(old_elem,
+-							  non_inherit_elem)) {
+-				memcpy(pos, tmp_old, tmp_old[1] + 2);
+-				pos += tmp_old[1] + 2;
+-			}
+-		} else {
+-			/* ie in transmitting ie also in subelement,
+-			 * copy from subelement and flag the ie in subelement
+-			 * as copied (by setting eid field to WLAN_EID_SSID,
+-			 * which is skipped anyway).
+-			 * For vendor ie, compare OUI + type + subType to
+-			 * determine if they are the same ie.
+-			 */
+-			if (tmp_old[0] == WLAN_EID_VENDOR_SPECIFIC) {
+-				if (tmp_old[1] >= 5 && tmp[1] >= 5 &&
+-				    !memcmp(tmp_old + 2, tmp + 2, 5)) {
+-					/* same vendor ie, copy from
+-					 * subelement
+-					 */
+-					memcpy(pos, tmp, tmp[1] + 2);
+-					pos += tmp[1] + 2;
+-					tmp[0] = WLAN_EID_SSID;
+-				} else {
+-					memcpy(pos, tmp_old, tmp_old[1] + 2);
+-					pos += tmp_old[1] + 2;
+-				}
+-			} else {
+-				/* copy ie from subelement into new ie */
+-				memcpy(pos, tmp, tmp[1] + 2);
+-				pos += tmp[1] + 2;
+-				tmp[0] = WLAN_EID_SSID;
+-			}
++			continue;
+ 		}
+ 
+-		if (tmp_old + tmp_old[1] + 2 - ie == ielen)
+-			break;
++		/* Already copied if an earlier element had the same type */
++		if (cfg80211_find_elem_match(id, ie, (u8 *)parent - ie,
++					     &ext_id, match_len, 0))
++			continue;
+ 
+-		tmp_old += tmp_old[1] + 2;
++		/* Not inheriting, copy all similar elements from subie */
++		while (sub) {
++			if (!cfg80211_copy_elem_with_frags(sub,
++							   subie, subie_len,
++							   &pos, new_ie,
++							   new_ie_len))
++				return 0;
++
++			sub = cfg80211_find_elem_match(id,
++						       sub->data + sub->datalen,
++						       subie_len + subie -
++						       (sub->data +
++							sub->datalen),
++						       &ext_id, match_len, 0);
++		}
+ 	}
+ 
+-	/* go through subelement again to check if there is any ie not
+-	 * copied to new ie, skip ssid, capability, bssid-index ie
++	/* The above misses elements that are included in subie but not in the
++	 * parent, so do a pass over subie and append those.
++	 * Skip the non-tx BSSID caps and non-inheritance element.
+ 	 */
+-	tmp_new = sub_copy;
+-	while (tmp_new + 2 - sub_copy <= subie_len &&
+-	       tmp_new + tmp_new[1] + 2 - sub_copy <= subie_len) {
+-		if (!(tmp_new[0] == WLAN_EID_NON_TX_BSSID_CAP ||
+-		      tmp_new[0] == WLAN_EID_SSID)) {
+-			memcpy(pos, tmp_new, tmp_new[1] + 2);
+-			pos += tmp_new[1] + 2;
++	for_each_element(sub, subie, subie_len) {
++		if (sub->id == WLAN_EID_NON_TX_BSSID_CAP)
++			continue;
++
++		if (sub->id == WLAN_EID_FRAGMENT)
++			continue;
++
++		if (sub->id == WLAN_EID_EXTENSION) {
++			if (sub->datalen < 1)
++				continue;
++
++			id = WLAN_EID_EXTENSION;
++			ext_id = sub->data[0];
++			match_len = 1;
++
++			if (ext_id == WLAN_EID_EXT_NON_INHERITANCE)
++				continue;
++		} else {
++			id = sub->id;
++			match_len = 0;
+ 		}
+-		if (tmp_new + tmp_new[1] + 2 - sub_copy == subie_len)
+-			break;
+-		tmp_new += tmp_new[1] + 2;
++
++		/* Processed if one was included in the parent */
++		if (cfg80211_find_elem_match(id, ie, ielen,
++					     &ext_id, match_len, 0))
++			continue;
++
++		if (!cfg80211_copy_elem_with_frags(sub, subie, subie_len,
++						   &pos, new_ie, new_ie_len))
++			return 0;
+ 	}
+ 
+-	kfree(sub_copy);
+ 	return pos - new_ie;
+ }
+ 
+@@ -2217,7 +2252,7 @@ static void cfg80211_parse_mbssid_data(struct wiphy *wiphy,
+ 			new_ie_len = cfg80211_gen_new_ie(ie, ielen,
+ 							 profile,
+ 							 profile_len, new_ie,
+-							 gfp);
++							 IEEE80211_MAX_DATA_LEN);
+ 			if (!new_ie_len)
+ 				continue;
+ 
+@@ -2266,118 +2301,6 @@ cfg80211_inform_bss_data(struct wiphy *wiphy,
+ }
+ EXPORT_SYMBOL(cfg80211_inform_bss_data);
+ 
+-static void
+-cfg80211_parse_mbssid_frame_data(struct wiphy *wiphy,
+-				 struct cfg80211_inform_bss *data,
+-				 struct ieee80211_mgmt *mgmt, size_t len,
+-				 struct cfg80211_non_tx_bss *non_tx_data,
+-				 gfp_t gfp)
+-{
+-	enum cfg80211_bss_frame_type ftype;
+-	const u8 *ie = mgmt->u.probe_resp.variable;
+-	size_t ielen = len - offsetof(struct ieee80211_mgmt,
+-				      u.probe_resp.variable);
+-
+-	ftype = ieee80211_is_beacon(mgmt->frame_control) ?
+-		CFG80211_BSS_FTYPE_BEACON : CFG80211_BSS_FTYPE_PRESP;
+-
+-	cfg80211_parse_mbssid_data(wiphy, data, ftype, mgmt->bssid,
+-				   le64_to_cpu(mgmt->u.probe_resp.timestamp),
+-				   le16_to_cpu(mgmt->u.probe_resp.beacon_int),
+-				   ie, ielen, non_tx_data, gfp);
+-}
+-
+-static void
+-cfg80211_update_notlisted_nontrans(struct wiphy *wiphy,
+-				   struct cfg80211_bss *nontrans_bss,
+-				   struct ieee80211_mgmt *mgmt, size_t len)
+-{
+-	u8 *ie, *new_ie, *pos;
+-	const struct element *nontrans_ssid;
+-	const u8 *trans_ssid, *mbssid;
+-	size_t ielen = len - offsetof(struct ieee80211_mgmt,
+-				      u.probe_resp.variable);
+-	size_t new_ie_len;
+-	struct cfg80211_bss_ies *new_ies;
+-	const struct cfg80211_bss_ies *old;
+-	size_t cpy_len;
+-
+-	lockdep_assert_held(&wiphy_to_rdev(wiphy)->bss_lock);
+-
+-	ie = mgmt->u.probe_resp.variable;
+-
+-	new_ie_len = ielen;
+-	trans_ssid = cfg80211_find_ie(WLAN_EID_SSID, ie, ielen);
+-	if (!trans_ssid)
+-		return;
+-	new_ie_len -= trans_ssid[1];
+-	mbssid = cfg80211_find_ie(WLAN_EID_MULTIPLE_BSSID, ie, ielen);
+-	/*
+-	 * It's not valid to have the MBSSID element before SSID
+-	 * ignore if that happens - the code below assumes it is
+-	 * after (while copying things inbetween).
+-	 */
+-	if (!mbssid || mbssid < trans_ssid)
+-		return;
+-	new_ie_len -= mbssid[1];
+-
+-	nontrans_ssid = ieee80211_bss_get_elem(nontrans_bss, WLAN_EID_SSID);
+-	if (!nontrans_ssid)
+-		return;
+-
+-	new_ie_len += nontrans_ssid->datalen;
+-
+-	/* generate new ie for nontrans BSS
+-	 * 1. replace SSID with nontrans BSS' SSID
+-	 * 2. skip MBSSID IE
+-	 */
+-	new_ie = kzalloc(new_ie_len, GFP_ATOMIC);
+-	if (!new_ie)
+-		return;
+-
+-	new_ies = kzalloc(sizeof(*new_ies) + new_ie_len, GFP_ATOMIC);
+-	if (!new_ies)
+-		goto out_free;
+-
+-	pos = new_ie;
+-
+-	/* copy the nontransmitted SSID */
+-	cpy_len = nontrans_ssid->datalen + 2;
+-	memcpy(pos, nontrans_ssid, cpy_len);
+-	pos += cpy_len;
+-	/* copy the IEs between SSID and MBSSID */
+-	cpy_len = trans_ssid[1] + 2;
+-	memcpy(pos, (trans_ssid + cpy_len), (mbssid - (trans_ssid + cpy_len)));
+-	pos += (mbssid - (trans_ssid + cpy_len));
+-	/* copy the IEs after MBSSID */
+-	cpy_len = mbssid[1] + 2;
+-	memcpy(pos, mbssid + cpy_len, ((ie + ielen) - (mbssid + cpy_len)));
+-
+-	/* update ie */
+-	new_ies->len = new_ie_len;
+-	new_ies->tsf = le64_to_cpu(mgmt->u.probe_resp.timestamp);
+-	new_ies->from_beacon = ieee80211_is_beacon(mgmt->frame_control);
+-	memcpy(new_ies->data, new_ie, new_ie_len);
+-	if (ieee80211_is_probe_resp(mgmt->frame_control)) {
+-		old = rcu_access_pointer(nontrans_bss->proberesp_ies);
+-		rcu_assign_pointer(nontrans_bss->proberesp_ies, new_ies);
+-		rcu_assign_pointer(nontrans_bss->ies, new_ies);
+-		if (old)
+-			kfree_rcu((struct cfg80211_bss_ies *)old, rcu_head);
+-	} else {
+-		old = rcu_access_pointer(nontrans_bss->beacon_ies);
+-		rcu_assign_pointer(nontrans_bss->beacon_ies, new_ies);
+-		cfg80211_update_hidden_bsses(bss_from_pub(nontrans_bss),
+-					     new_ies, old);
+-		rcu_assign_pointer(nontrans_bss->ies, new_ies);
+-		if (old)
+-			kfree_rcu((struct cfg80211_bss_ies *)old, rcu_head);
+-	}
+-
+-out_free:
+-	kfree(new_ie);
+-}
+-
+ /* cfg80211_inform_bss_width_frame helper */
+ static struct cfg80211_bss *
+ cfg80211_inform_single_bss_frame_data(struct wiphy *wiphy,
+@@ -2519,51 +2442,31 @@ cfg80211_inform_bss_frame_data(struct wiphy *wiphy,
+ 			       struct ieee80211_mgmt *mgmt, size_t len,
+ 			       gfp_t gfp)
+ {
+-	struct cfg80211_bss *res, *tmp_bss;
++	struct cfg80211_bss *res;
+ 	const u8 *ie = mgmt->u.probe_resp.variable;
+-	const struct cfg80211_bss_ies *ies1, *ies2;
+ 	size_t ielen = len - offsetof(struct ieee80211_mgmt,
+ 				      u.probe_resp.variable);
++	enum cfg80211_bss_frame_type ftype;
+ 	struct cfg80211_non_tx_bss non_tx_data = {};
+ 
+ 	res = cfg80211_inform_single_bss_frame_data(wiphy, data, mgmt,
+ 						    len, gfp);
++	if (!res)
++		return NULL;
+ 
+ 	/* don't do any further MBSSID handling for S1G */
+ 	if (ieee80211_is_s1g_beacon(mgmt->frame_control))
+ 		return res;
+ 
+-	if (!res || !wiphy->support_mbssid ||
+-	    !cfg80211_find_elem(WLAN_EID_MULTIPLE_BSSID, ie, ielen))
+-		return res;
+-	if (wiphy->support_only_he_mbssid &&
+-	    !cfg80211_find_ext_elem(WLAN_EID_EXT_HE_CAPABILITY, ie, ielen))
+-		return res;
+-
++	ftype = ieee80211_is_beacon(mgmt->frame_control) ?
++		CFG80211_BSS_FTYPE_BEACON : CFG80211_BSS_FTYPE_PRESP;
+ 	non_tx_data.tx_bss = res;
+-	/* process each non-transmitting bss */
+-	cfg80211_parse_mbssid_frame_data(wiphy, data, mgmt, len,
+-					 &non_tx_data, gfp);
+-
+-	spin_lock_bh(&wiphy_to_rdev(wiphy)->bss_lock);
+ 
+-	/* check if the res has other nontransmitting bss which is not
+-	 * in MBSSID IE
+-	 */
+-	ies1 = rcu_access_pointer(res->ies);
+-
+-	/* go through nontrans_list, if the timestamp of the BSS is
+-	 * earlier than the timestamp of the transmitting BSS then
+-	 * update it
+-	 */
+-	list_for_each_entry(tmp_bss, &res->nontrans_list,
+-			    nontrans_list) {
+-		ies2 = rcu_access_pointer(tmp_bss->ies);
+-		if (ies2->tsf < ies1->tsf)
+-			cfg80211_update_notlisted_nontrans(wiphy, tmp_bss,
+-							   mgmt, len);
+-	}
+-	spin_unlock_bh(&wiphy_to_rdev(wiphy)->bss_lock);
++	/* process each non-transmitting bss */
++	cfg80211_parse_mbssid_data(wiphy, data, ftype, mgmt->bssid,
++				   le64_to_cpu(mgmt->u.probe_resp.timestamp),
++				   le16_to_cpu(mgmt->u.probe_resp.beacon_int),
++				   ie, ielen, &non_tx_data, gfp);
+ 
+ 	return res;
+ }
+diff --git a/samples/bpf/tcp_basertt_kern.c b/samples/bpf/tcp_basertt_kern.c
+index 8dfe09a92feca..822b0742b8154 100644
+--- a/samples/bpf/tcp_basertt_kern.c
++++ b/samples/bpf/tcp_basertt_kern.c
+@@ -47,7 +47,7 @@ int bpf_basertt(struct bpf_sock_ops *skops)
+ 		case BPF_SOCK_OPS_BASE_RTT:
+ 			n = bpf_getsockopt(skops, SOL_TCP, TCP_CONGESTION,
+ 					   cong, sizeof(cong));
+-			if (!n && !__builtin_memcmp(cong, nv, sizeof(nv)+1)) {
++			if (!n && !__builtin_memcmp(cong, nv, sizeof(nv))) {
+ 				/* Set base_rtt to 80us */
+ 				rv = 80;
+ 			} else if (n) {
+diff --git a/samples/bpf/xdp1_kern.c b/samples/bpf/xdp1_kern.c
+index 0a5c704badd00..d91f27cbcfa99 100644
+--- a/samples/bpf/xdp1_kern.c
++++ b/samples/bpf/xdp1_kern.c
+@@ -39,7 +39,7 @@ static int parse_ipv6(void *data, u64 nh_off, void *data_end)
+ 	return ip6h->nexthdr;
+ }
+ 
+-#define XDPBUFSIZE	64
++#define XDPBUFSIZE	60
+ SEC("xdp.frags")
+ int xdp_prog1(struct xdp_md *ctx)
+ {
+diff --git a/samples/bpf/xdp2_kern.c b/samples/bpf/xdp2_kern.c
+index 67804ecf7ce37..8bca674451ed1 100644
+--- a/samples/bpf/xdp2_kern.c
++++ b/samples/bpf/xdp2_kern.c
+@@ -55,7 +55,7 @@ static int parse_ipv6(void *data, u64 nh_off, void *data_end)
+ 	return ip6h->nexthdr;
+ }
+ 
+-#define XDPBUFSIZE	64
++#define XDPBUFSIZE	60
+ SEC("xdp.frags")
+ int xdp_prog1(struct xdp_md *ctx)
+ {
+diff --git a/scripts/Makefile.modfinal b/scripts/Makefile.modfinal
+index 4703f652c0098..fc19f67039bda 100644
+--- a/scripts/Makefile.modfinal
++++ b/scripts/Makefile.modfinal
+@@ -23,7 +23,7 @@ modname = $(notdir $(@:.mod.o=))
+ part-of-module = y
+ 
+ quiet_cmd_cc_o_c = CC [M]  $@
+-      cmd_cc_o_c = $(CC) $(filter-out $(CC_FLAGS_CFI), $(c_flags)) -c -o $@ $<
++      cmd_cc_o_c = $(CC) $(filter-out $(CC_FLAGS_CFI) $(CFLAGS_GCOV), $(c_flags)) -c -o $@ $<
+ 
+ %.mod.o: %.mod.c FORCE
+ 	$(call if_changed_dep,cc_o_c)
+diff --git a/scripts/Makefile.vmlinux b/scripts/Makefile.vmlinux
+index 10176dec97eac..3cd6ca15f390d 100644
+--- a/scripts/Makefile.vmlinux
++++ b/scripts/Makefile.vmlinux
+@@ -19,6 +19,7 @@ quiet_cmd_cc_o_c = CC      $@
+ 
+ ifdef CONFIG_MODULES
+ KASAN_SANITIZE_.vmlinux.export.o := n
++GCOV_PROFILE_.vmlinux.export.o := n
+ targets += .vmlinux.export.o
+ vmlinux: .vmlinux.export.o
+ endif
+diff --git a/scripts/mod/modpost.c b/scripts/mod/modpost.c
+index 5b3964b39709f..b1163bad652aa 100644
+--- a/scripts/mod/modpost.c
++++ b/scripts/mod/modpost.c
+@@ -1156,6 +1156,10 @@ static Elf_Sym *find_elf_symbol(struct elf_info *elf, Elf64_Sword addr,
+ 	if (relsym->st_name != 0)
+ 		return relsym;
+ 
++	/*
++	 * Strive to find a better symbol name, but the resulting name may not
++	 * match the symbol referenced in the original code.
++	 */
+ 	relsym_secindex = get_secindex(elf, relsym);
+ 	for (sym = elf->symtab_start; sym < elf->symtab_stop; sym++) {
+ 		if (get_secindex(elf, sym) != relsym_secindex)
+@@ -1292,49 +1296,12 @@ static void default_mismatch_handler(const char *modname, struct elf_info *elf,
+ 
+ static int is_executable_section(struct elf_info* elf, unsigned int section_index)
+ {
+-	if (section_index > elf->num_sections)
++	if (section_index >= elf->num_sections)
+ 		fatal("section_index is outside elf->num_sections!\n");
+ 
+ 	return ((elf->sechdrs[section_index].sh_flags & SHF_EXECINSTR) == SHF_EXECINSTR);
+ }
+ 
+-/*
+- * We rely on a gross hack in section_rel[a]() calling find_extable_entry_size()
+- * to know the sizeof(struct exception_table_entry) for the target architecture.
+- */
+-static unsigned int extable_entry_size = 0;
+-static void find_extable_entry_size(const char* const sec, const Elf_Rela* r)
+-{
+-	/*
+-	 * If we're currently checking the second relocation within __ex_table,
+-	 * that relocation offset tells us the offsetof(struct
+-	 * exception_table_entry, fixup) which is equal to sizeof(struct
+-	 * exception_table_entry) divided by two.  We use that to our advantage
+-	 * since there's no portable way to get that size as every architecture
+-	 * seems to go with different sized types.  Not pretty but better than
+-	 * hard-coding the size for every architecture..
+-	 */
+-	if (!extable_entry_size)
+-		extable_entry_size = r->r_offset * 2;
+-}
+-
+-static inline bool is_extable_fault_address(Elf_Rela *r)
+-{
+-	/*
+-	 * extable_entry_size is only discovered after we've handled the
+-	 * _second_ relocation in __ex_table, so only abort when we're not
+-	 * handling the first reloc and extable_entry_size is zero.
+-	 */
+-	if (r->r_offset && extable_entry_size == 0)
+-		fatal("extable_entry size hasn't been discovered!\n");
+-
+-	return ((r->r_offset == 0) ||
+-		(r->r_offset % extable_entry_size == 0));
+-}
+-
+-#define is_second_extable_reloc(Start, Cur, Sec)			\
+-	(((Cur) == (Start) + 1) && (strcmp("__ex_table", (Sec)) == 0))
+-
+ static void report_extable_warnings(const char* modname, struct elf_info* elf,
+ 				    const struct sectioncheck* const mismatch,
+ 				    Elf_Rela* r, Elf_Sym* sym,
+@@ -1390,22 +1357,9 @@ static void extable_mismatch_handler(const char* modname, struct elf_info *elf,
+ 		      "You might get more information about where this is\n"
+ 		      "coming from by using scripts/check_extable.sh %s\n",
+ 		      fromsec, (long)r->r_offset, tosec, modname);
+-	else if (!is_executable_section(elf, get_secindex(elf, sym))) {
+-		if (is_extable_fault_address(r))
+-			fatal("The relocation at %s+0x%lx references\n"
+-			      "section \"%s\" which is not executable, IOW\n"
+-			      "it is not possible for the kernel to fault\n"
+-			      "at that address.  Something is seriously wrong\n"
+-			      "and should be fixed.\n",
+-			      fromsec, (long)r->r_offset, tosec);
+-		else
+-			fatal("The relocation at %s+0x%lx references\n"
+-			      "section \"%s\" which is not executable, IOW\n"
+-			      "the kernel will fault if it ever tries to\n"
+-			      "jump to it.  Something is seriously wrong\n"
+-			      "and should be fixed.\n",
+-			      fromsec, (long)r->r_offset, tosec);
+-	}
++	else if (!is_executable_section(elf, get_secindex(elf, sym)))
++		error("%s+0x%lx references non-executable section '%s'\n",
++		      fromsec, (long)r->r_offset, tosec);
+ }
+ 
+ static void check_section_mismatch(const char *modname, struct elf_info *elf,
+@@ -1463,19 +1417,33 @@ static int addend_386_rel(struct elf_info *elf, Elf_Shdr *sechdr, Elf_Rela *r)
+ #define	R_ARM_THM_JUMP19	51
+ #endif
+ 
++static int32_t sign_extend32(int32_t value, int index)
++{
++	uint8_t shift = 31 - index;
++
++	return (int32_t)(value << shift) >> shift;
++}
++
+ static int addend_arm_rel(struct elf_info *elf, Elf_Shdr *sechdr, Elf_Rela *r)
+ {
+ 	unsigned int r_typ = ELF_R_TYPE(r->r_info);
++	Elf_Sym *sym = elf->symtab_start + ELF_R_SYM(r->r_info);
++	void *loc = reloc_location(elf, sechdr, r);
++	uint32_t inst;
++	int32_t offset;
+ 
+ 	switch (r_typ) {
+ 	case R_ARM_ABS32:
+-		/* From ARM ABI: (S + A) | T */
+-		r->r_addend = (int)(long)
+-			      (elf->symtab_start + ELF_R_SYM(r->r_info));
++		inst = TO_NATIVE(*(uint32_t *)loc);
++		r->r_addend = inst + sym->st_value;
+ 		break;
+ 	case R_ARM_PC24:
+ 	case R_ARM_CALL:
+ 	case R_ARM_JUMP24:
++		inst = TO_NATIVE(*(uint32_t *)loc);
++		offset = sign_extend32((inst & 0x00ffffff) << 2, 25);
++		r->r_addend = offset + sym->st_value + 8;
++		break;
+ 	case R_ARM_THM_CALL:
+ 	case R_ARM_THM_JUMP24:
+ 	case R_ARM_THM_JUMP19:
+@@ -1580,8 +1548,6 @@ static void section_rela(const char *modname, struct elf_info *elf,
+ 		/* Skip special sections */
+ 		if (is_shndx_special(sym->st_shndx))
+ 			continue;
+-		if (is_second_extable_reloc(start, rela, fromsec))
+-			find_extable_entry_size(fromsec, &r);
+ 		check_section_mismatch(modname, elf, &r, sym, fromsec);
+ 	}
+ }
+@@ -1639,8 +1605,6 @@ static void section_rel(const char *modname, struct elf_info *elf,
+ 		/* Skip special sections */
+ 		if (is_shndx_special(sym->st_shndx))
+ 			continue;
+-		if (is_second_extable_reloc(start, rel, fromsec))
+-			find_extable_entry_size(fromsec, &r);
+ 		check_section_mismatch(modname, elf, &r, sym, fromsec);
+ 	}
+ }
+diff --git a/scripts/package/builddeb b/scripts/package/builddeb
+index 7b23f52c70c5f..a0af4c0f971ca 100755
+--- a/scripts/package/builddeb
++++ b/scripts/package/builddeb
+@@ -62,18 +62,14 @@ install_linux_image () {
+ 		${MAKE} -f ${srctree}/Makefile INSTALL_DTBS_PATH="${pdir}/usr/lib/linux-image-${KERNELRELEASE}" dtbs_install
+ 	fi
+ 
+-	if is_enabled CONFIG_MODULES; then
+-		${MAKE} -f ${srctree}/Makefile INSTALL_MOD_PATH="${pdir}" modules_install
+-		rm -f "${pdir}/lib/modules/${KERNELRELEASE}/build"
+-		rm -f "${pdir}/lib/modules/${KERNELRELEASE}/source"
+-		if [ "${SRCARCH}" = um ] ; then
+-			mkdir -p "${pdir}/usr/lib/uml/modules"
+-			mv "${pdir}/lib/modules/${KERNELRELEASE}" "${pdir}/usr/lib/uml/modules/${KERNELRELEASE}"
+-		fi
+-	fi
++	${MAKE} -f ${srctree}/Makefile INSTALL_MOD_PATH="${pdir}" modules_install
++	rm -f "${pdir}/lib/modules/${KERNELRELEASE}/build"
++	rm -f "${pdir}/lib/modules/${KERNELRELEASE}/source"
+ 
+ 	# Install the kernel
+ 	if [ "${ARCH}" = um ] ; then
++		mkdir -p "${pdir}/usr/lib/uml/modules"
++		mv "${pdir}/lib/modules/${KERNELRELEASE}" "${pdir}/usr/lib/uml/modules/${KERNELRELEASE}"
+ 		mkdir -p "${pdir}/usr/bin" "${pdir}/usr/share/doc/${pname}"
+ 		cp System.map "${pdir}/usr/lib/uml/modules/${KERNELRELEASE}/System.map"
+ 		cp ${KCONFIG_CONFIG} "${pdir}/usr/share/doc/${pname}/config"
+diff --git a/security/integrity/evm/evm_crypto.c b/security/integrity/evm/evm_crypto.c
+index 033804f5a5f20..0dae649f3740c 100644
+--- a/security/integrity/evm/evm_crypto.c
++++ b/security/integrity/evm/evm_crypto.c
+@@ -40,7 +40,7 @@ static const char evm_hmac[] = "hmac(sha1)";
+ /**
+  * evm_set_key() - set EVM HMAC key from the kernel
+  * @key: pointer to a buffer with the key data
+- * @size: length of the key data
++ * @keylen: length of the key data
+  *
+  * This function allows setting the EVM HMAC key from the kernel
+  * without using the "encrypted" key subsystem keys. It can be used
+diff --git a/security/integrity/evm/evm_main.c b/security/integrity/evm/evm_main.c
+index cf24c5255583c..c9b6e2a43478a 100644
+--- a/security/integrity/evm/evm_main.c
++++ b/security/integrity/evm/evm_main.c
+@@ -318,7 +318,6 @@ int evm_protected_xattr_if_enabled(const char *req_xattr_name)
+ /**
+  * evm_read_protected_xattrs - read EVM protected xattr names, lengths, values
+  * @dentry: dentry of the read xattrs
+- * @inode: inode of the read xattrs
+  * @buffer: buffer xattr names, lengths or values are copied to
+  * @buffer_size: size of buffer
+  * @type: n: names, l: lengths, v: values
+@@ -390,6 +389,7 @@ int evm_read_protected_xattrs(struct dentry *dentry, u8 *buffer,
+  * @xattr_name: requested xattr
+  * @xattr_value: requested xattr value
+  * @xattr_value_len: requested xattr value length
++ * @iint: inode integrity metadata
+  *
+  * Calculate the HMAC for the given dentry and verify it against the stored
+  * security.evm xattr. For performance, use the xattr value and length
+@@ -795,7 +795,9 @@ static int evm_attr_change(struct mnt_idmap *idmap,
+ 
+ /**
+  * evm_inode_setattr - prevent updating an invalid EVM extended attribute
++ * @idmap: idmap of the mount
+  * @dentry: pointer to the affected dentry
++ * @attr: iattr structure containing the new file attributes
+  *
+  * Permit update of file attributes when files have a valid EVM signature,
+  * except in the case of them having an immutable portable signature.
+diff --git a/security/integrity/ima/ima_modsig.c b/security/integrity/ima/ima_modsig.c
+index fb25723c65bc4..3e7bee30080f2 100644
+--- a/security/integrity/ima/ima_modsig.c
++++ b/security/integrity/ima/ima_modsig.c
+@@ -89,6 +89,9 @@ int ima_read_modsig(enum ima_hooks func, const void *buf, loff_t buf_len,
+ 
+ /**
+  * ima_collect_modsig - Calculate the file hash without the appended signature.
++ * @modsig: parsed module signature
++ * @buf: data to verify the signature on
++ * @size: data size
+  *
+  * Since the modsig is part of the file contents, the hash used in its signature
+  * isn't the same one ordinarily calculated by IMA. Therefore PKCS7 code
+diff --git a/security/integrity/ima/ima_policy.c b/security/integrity/ima/ima_policy.c
+index 3ca8b7348c2e4..c9b3bd8f1bb9c 100644
+--- a/security/integrity/ima/ima_policy.c
++++ b/security/integrity/ima/ima_policy.c
+@@ -721,6 +721,7 @@ static int get_subaction(struct ima_rule_entry *rule, enum ima_hooks func)
+  * @secid: LSM secid of the task to be validated
+  * @func: IMA hook identifier
+  * @mask: requested action (MAY_READ | MAY_WRITE | MAY_APPEND | MAY_EXEC)
++ * @flags: IMA actions to consider (e.g. IMA_MEASURE | IMA_APPRAISE)
+  * @pcr: set the pcr to extend
+  * @template_desc: the template that should be used for this rule
+  * @func_data: func specific data, may be NULL
+@@ -1915,7 +1916,7 @@ static int ima_parse_rule(char *rule, struct ima_rule_entry *entry)
+ 
+ /**
+  * ima_parse_add_rule - add a rule to ima_policy_rules
+- * @rule - ima measurement policy rule
++ * @rule: ima measurement policy rule
+  *
+  * Avoid locking by allowing just one writer at a time in ima_write_policy()
+  * Returns the length of the rule parsed, an error code on failure
+diff --git a/sound/pci/ac97/ac97_codec.c b/sound/pci/ac97/ac97_codec.c
+index 9afc5906d662e..80a65b8ad7b9b 100644
+--- a/sound/pci/ac97/ac97_codec.c
++++ b/sound/pci/ac97/ac97_codec.c
+@@ -2069,8 +2069,8 @@ int snd_ac97_mixer(struct snd_ac97_bus *bus, struct snd_ac97_template *template,
+ 		.dev_disconnect =	snd_ac97_dev_disconnect,
+ 	};
+ 
+-	if (rac97)
+-		*rac97 = NULL;
++	if (!rac97)
++		return -EINVAL;
+ 	if (snd_BUG_ON(!bus || !template))
+ 		return -EINVAL;
+ 	if (snd_BUG_ON(template->num >= 4))
+diff --git a/sound/soc/amd/acp/acp-pdm.c b/sound/soc/amd/acp/acp-pdm.c
+index 66ec6b6a59723..f8030b79ac17c 100644
+--- a/sound/soc/amd/acp/acp-pdm.c
++++ b/sound/soc/amd/acp/acp-pdm.c
+@@ -176,7 +176,7 @@ static void acp_dmic_dai_shutdown(struct snd_pcm_substream *substream,
+ 
+ 	/* Disable DMIC interrupts */
+ 	ext_int_ctrl = readl(ACP_EXTERNAL_INTR_CNTL(adata, 0));
+-	ext_int_ctrl |= ~PDM_DMA_INTR_MASK;
++	ext_int_ctrl &= ~PDM_DMA_INTR_MASK;
+ 	writel(ext_int_ctrl, ACP_EXTERNAL_INTR_CNTL(adata, 0));
+ }
+ 
+diff --git a/sound/soc/codecs/es8316.c b/sound/soc/codecs/es8316.c
+index f7d7a9c91e04c..87775378362e7 100644
+--- a/sound/soc/codecs/es8316.c
++++ b/sound/soc/codecs/es8316.c
+@@ -52,7 +52,12 @@ static const SNDRV_CTL_TLVD_DECLARE_DB_SCALE(dac_vol_tlv, -9600, 50, 1);
+ static const SNDRV_CTL_TLVD_DECLARE_DB_SCALE(adc_vol_tlv, -9600, 50, 1);
+ static const SNDRV_CTL_TLVD_DECLARE_DB_SCALE(alc_max_gain_tlv, -650, 150, 0);
+ static const SNDRV_CTL_TLVD_DECLARE_DB_SCALE(alc_min_gain_tlv, -1200, 150, 0);
+-static const SNDRV_CTL_TLVD_DECLARE_DB_SCALE(alc_target_tlv, -1650, 150, 0);
++
++static const SNDRV_CTL_TLVD_DECLARE_DB_RANGE(alc_target_tlv,
++	0, 10, TLV_DB_SCALE_ITEM(-1650, 150, 0),
++	11, 11, TLV_DB_SCALE_ITEM(-150, 0, 0),
++);
++
+ static const SNDRV_CTL_TLVD_DECLARE_DB_RANGE(hpmixer_gain_tlv,
+ 	0, 4, TLV_DB_SCALE_ITEM(-1200, 150, 0),
+ 	8, 11, TLV_DB_SCALE_ITEM(-450, 150, 0),
+@@ -115,7 +120,7 @@ static const struct snd_kcontrol_new es8316_snd_controls[] = {
+ 		       alc_max_gain_tlv),
+ 	SOC_SINGLE_TLV("ALC Capture Min Volume", ES8316_ADC_ALC2, 0, 28, 0,
+ 		       alc_min_gain_tlv),
+-	SOC_SINGLE_TLV("ALC Capture Target Volume", ES8316_ADC_ALC3, 4, 10, 0,
++	SOC_SINGLE_TLV("ALC Capture Target Volume", ES8316_ADC_ALC3, 4, 11, 0,
+ 		       alc_target_tlv),
+ 	SOC_SINGLE("ALC Capture Hold Time", ES8316_ADC_ALC3, 0, 10, 0),
+ 	SOC_SINGLE("ALC Capture Decay Time", ES8316_ADC_ALC4, 4, 10, 0),
+@@ -364,13 +369,11 @@ static int es8316_set_dai_sysclk(struct snd_soc_dai *codec_dai,
+ 	int count = 0;
+ 
+ 	es8316->sysclk = freq;
++	es8316->sysclk_constraints.list = NULL;
++	es8316->sysclk_constraints.count = 0;
+ 
+-	if (freq == 0) {
+-		es8316->sysclk_constraints.list = NULL;
+-		es8316->sysclk_constraints.count = 0;
+-
++	if (freq == 0)
+ 		return 0;
+-	}
+ 
+ 	ret = clk_set_rate(es8316->mclk, freq);
+ 	if (ret)
+@@ -386,8 +389,10 @@ static int es8316_set_dai_sysclk(struct snd_soc_dai *codec_dai,
+ 			es8316->allowed_rates[count++] = freq / ratio;
+ 	}
+ 
+-	es8316->sysclk_constraints.list = es8316->allowed_rates;
+-	es8316->sysclk_constraints.count = count;
++	if (count) {
++		es8316->sysclk_constraints.list = es8316->allowed_rates;
++		es8316->sysclk_constraints.count = count;
++	}
+ 
+ 	return 0;
+ }
+diff --git a/sound/soc/fsl/imx-audmix.c b/sound/soc/fsl/imx-audmix.c
+index 1292a845c4244..d8e99b263ab21 100644
+--- a/sound/soc/fsl/imx-audmix.c
++++ b/sound/soc/fsl/imx-audmix.c
+@@ -228,6 +228,8 @@ static int imx_audmix_probe(struct platform_device *pdev)
+ 
+ 		dai_name = devm_kasprintf(&pdev->dev, GFP_KERNEL, "%s%s",
+ 					  fe_name_pref, args.np->full_name + 1);
++		if (!dai_name)
++			return -ENOMEM;
+ 
+ 		dev_info(pdev->dev.parent, "DAI FE name:%s\n", dai_name);
+ 
+@@ -236,6 +238,8 @@ static int imx_audmix_probe(struct platform_device *pdev)
+ 			capture_dai_name =
+ 				devm_kasprintf(&pdev->dev, GFP_KERNEL, "%s %s",
+ 					       dai_name, "CPU-Capture");
++			if (!capture_dai_name)
++				return -ENOMEM;
+ 		}
+ 
+ 		priv->dai[i].cpus = &dlc[0];
+@@ -266,6 +270,8 @@ static int imx_audmix_probe(struct platform_device *pdev)
+ 				       "AUDMIX-Playback-%d", i);
+ 		be_cp = devm_kasprintf(&pdev->dev, GFP_KERNEL,
+ 				       "AUDMIX-Capture-%d", i);
++		if (!be_name || !be_pb || !be_cp)
++			return -ENOMEM;
+ 
+ 		priv->dai[num_dai + i].cpus = &dlc[3];
+ 		priv->dai[num_dai + i].codecs = &dlc[4];
+@@ -293,6 +299,9 @@ static int imx_audmix_probe(struct platform_device *pdev)
+ 		priv->dapm_routes[i].source =
+ 			devm_kasprintf(&pdev->dev, GFP_KERNEL, "%s %s",
+ 				       dai_name, "CPU-Playback");
++		if (!priv->dapm_routes[i].source)
++			return -ENOMEM;
++
+ 		priv->dapm_routes[i].sink = be_pb;
+ 		priv->dapm_routes[num_dai + i].source   = be_pb;
+ 		priv->dapm_routes[num_dai + i].sink     = be_cp;
+diff --git a/sound/soc/intel/boards/sof_sdw.c b/sound/soc/intel/boards/sof_sdw.c
+index 767fa89d08708..1ac5abc721c68 100644
+--- a/sound/soc/intel/boards/sof_sdw.c
++++ b/sound/soc/intel/boards/sof_sdw.c
+@@ -413,7 +413,7 @@ static const struct dmi_system_id sof_sdw_quirk_table[] = {
+ 		.matches = {
+ 			DMI_MATCH(DMI_PRODUCT_FAMILY, "Intel_mtlrvp"),
+ 		},
+-		.driver_data = (void *)(RT711_JD1 | SOF_SDW_TGL_HDMI),
++		.driver_data = (void *)(RT711_JD1),
+ 	},
+ 	{}
+ };
+diff --git a/tools/bpf/bpftool/feature.c b/tools/bpf/bpftool/feature.c
+index da16e6a27cccd..0675d6a464138 100644
+--- a/tools/bpf/bpftool/feature.c
++++ b/tools/bpf/bpftool/feature.c
+@@ -167,12 +167,12 @@ static int get_vendor_id(int ifindex)
+ 	return strtol(buf, NULL, 0);
+ }
+ 
+-static int read_procfs(const char *path)
++static long read_procfs(const char *path)
+ {
+ 	char *endptr, *line = NULL;
+ 	size_t len = 0;
+ 	FILE *fd;
+-	int res;
++	long res;
+ 
+ 	fd = fopen(path, "r");
+ 	if (!fd)
+@@ -194,7 +194,7 @@ static int read_procfs(const char *path)
+ 
+ static void probe_unprivileged_disabled(void)
+ {
+-	int res;
++	long res;
+ 
+ 	/* No support for C-style ouptut */
+ 
+@@ -216,14 +216,14 @@ static void probe_unprivileged_disabled(void)
+ 			printf("Unable to retrieve required privileges for bpf() syscall\n");
+ 			break;
+ 		default:
+-			printf("bpf() syscall restriction has unknown value %d\n", res);
++			printf("bpf() syscall restriction has unknown value %ld\n", res);
+ 		}
+ 	}
+ }
+ 
+ static void probe_jit_enable(void)
+ {
+-	int res;
++	long res;
+ 
+ 	/* No support for C-style ouptut */
+ 
+@@ -245,7 +245,7 @@ static void probe_jit_enable(void)
+ 			printf("Unable to retrieve JIT-compiler status\n");
+ 			break;
+ 		default:
+-			printf("JIT-compiler status has unknown value %d\n",
++			printf("JIT-compiler status has unknown value %ld\n",
+ 			       res);
+ 		}
+ 	}
+@@ -253,7 +253,7 @@ static void probe_jit_enable(void)
+ 
+ static void probe_jit_harden(void)
+ {
+-	int res;
++	long res;
+ 
+ 	/* No support for C-style ouptut */
+ 
+@@ -275,7 +275,7 @@ static void probe_jit_harden(void)
+ 			printf("Unable to retrieve JIT hardening status\n");
+ 			break;
+ 		default:
+-			printf("JIT hardening status has unknown value %d\n",
++			printf("JIT hardening status has unknown value %ld\n",
+ 			       res);
+ 		}
+ 	}
+@@ -283,7 +283,7 @@ static void probe_jit_harden(void)
+ 
+ static void probe_jit_kallsyms(void)
+ {
+-	int res;
++	long res;
+ 
+ 	/* No support for C-style ouptut */
+ 
+@@ -302,14 +302,14 @@ static void probe_jit_kallsyms(void)
+ 			printf("Unable to retrieve JIT kallsyms export status\n");
+ 			break;
+ 		default:
+-			printf("JIT kallsyms exports status has unknown value %d\n", res);
++			printf("JIT kallsyms exports status has unknown value %ld\n", res);
+ 		}
+ 	}
+ }
+ 
+ static void probe_jit_limit(void)
+ {
+-	int res;
++	long res;
+ 
+ 	/* No support for C-style ouptut */
+ 
+@@ -322,7 +322,7 @@ static void probe_jit_limit(void)
+ 			printf("Unable to retrieve global memory limit for JIT compiler for unprivileged users\n");
+ 			break;
+ 		default:
+-			printf("Global memory limit for JIT compiler for unprivileged users is %d bytes\n", res);
++			printf("Global memory limit for JIT compiler for unprivileged users is %ld bytes\n", res);
+ 		}
+ 	}
+ }
+diff --git a/tools/bpf/resolve_btfids/Makefile b/tools/bpf/resolve_btfids/Makefile
+index ac548a7baa73e..4b8079f294f65 100644
+--- a/tools/bpf/resolve_btfids/Makefile
++++ b/tools/bpf/resolve_btfids/Makefile
+@@ -67,7 +67,7 @@ $(BPFOBJ): $(wildcard $(LIBBPF_SRC)/*.[ch] $(LIBBPF_SRC)/Makefile) | $(LIBBPF_OU
+ LIBELF_FLAGS := $(shell $(HOSTPKG_CONFIG) libelf --cflags 2>/dev/null)
+ LIBELF_LIBS  := $(shell $(HOSTPKG_CONFIG) libelf --libs 2>/dev/null || echo -lelf)
+ 
+-HOSTCFLAGS += -g \
++HOSTCFLAGS_resolve_btfids += -g \
+           -I$(srctree)/tools/include \
+           -I$(srctree)/tools/include/uapi \
+           -I$(LIBBPF_INCLUDE) \
+@@ -76,7 +76,7 @@ HOSTCFLAGS += -g \
+ 
+ LIBS = $(LIBELF_LIBS) -lz
+ 
+-export srctree OUTPUT HOSTCFLAGS Q HOSTCC HOSTLD HOSTAR
++export srctree OUTPUT HOSTCFLAGS_resolve_btfids Q HOSTCC HOSTLD HOSTAR
+ include $(srctree)/tools/build/Makefile.include
+ 
+ $(BINARY_IN): fixdep FORCE prepare | $(OUTPUT)
+diff --git a/tools/lib/bpf/bpf_helpers.h b/tools/lib/bpf/bpf_helpers.h
+index 5ec1871acb2fc..85a29cd69154e 100644
+--- a/tools/lib/bpf/bpf_helpers.h
++++ b/tools/lib/bpf/bpf_helpers.h
+@@ -77,16 +77,21 @@
+ /*
+  * Helper macros to manipulate data structures
+  */
+-#ifndef offsetof
+-#define offsetof(TYPE, MEMBER)	((unsigned long)&((TYPE *)0)->MEMBER)
+-#endif
+-#ifndef container_of
++
++/* offsetof() definition that uses __builtin_offset() might not preserve field
++ * offset CO-RE relocation properly, so force-redefine offsetof() using
++ * old-school approach which works with CO-RE correctly
++ */
++#undef offsetof
++#define offsetof(type, member)	((unsigned long)&((type *)0)->member)
++
++/* redefined container_of() to ensure we use the above offsetof() macro */
++#undef container_of
+ #define container_of(ptr, type, member)				\
+ 	({							\
+ 		void *__mptr = (void *)(ptr);			\
+ 		((type *)(__mptr - offsetof(type, member)));	\
+ 	})
+-#endif
+ 
+ /*
+  * Compiler (optimization) barrier.
+diff --git a/tools/lib/bpf/btf_dump.c b/tools/lib/bpf/btf_dump.c
+index 580985ee55458..4d9f30bf7f014 100644
+--- a/tools/lib/bpf/btf_dump.c
++++ b/tools/lib/bpf/btf_dump.c
+@@ -2250,9 +2250,25 @@ static int btf_dump_type_data_check_overflow(struct btf_dump *d,
+ 					     const struct btf_type *t,
+ 					     __u32 id,
+ 					     const void *data,
+-					     __u8 bits_offset)
++					     __u8 bits_offset,
++					     __u8 bit_sz)
+ {
+-	__s64 size = btf__resolve_size(d->btf, id);
++	__s64 size;
++
++	if (bit_sz) {
++		/* bits_offset is at most 7. bit_sz is at most 128. */
++		__u8 nr_bytes = (bits_offset + bit_sz + 7) / 8;
++
++		/* When bit_sz is non zero, it is called from
++		 * btf_dump_struct_data() where it only cares about
++		 * negative error value.
++		 * Return nr_bytes in success case to make it
++		 * consistent as the regular integer case below.
++		 */
++		return data + nr_bytes > d->typed_dump->data_end ? -E2BIG : nr_bytes;
++	}
++
++	size = btf__resolve_size(d->btf, id);
+ 
+ 	if (size < 0 || size >= INT_MAX) {
+ 		pr_warn("unexpected size [%zu] for id [%u]\n",
+@@ -2407,7 +2423,7 @@ static int btf_dump_dump_type_data(struct btf_dump *d,
+ {
+ 	int size, err = 0;
+ 
+-	size = btf_dump_type_data_check_overflow(d, t, id, data, bits_offset);
++	size = btf_dump_type_data_check_overflow(d, t, id, data, bits_offset, bit_sz);
+ 	if (size < 0)
+ 		return size;
+ 	err = btf_dump_type_data_check_zero(d, t, id, data, bits_offset, bit_sz);
+diff --git a/tools/perf/arch/x86/util/Build b/tools/perf/arch/x86/util/Build
+index 195ccfdef7aa1..005907cb97d8c 100644
+--- a/tools/perf/arch/x86/util/Build
++++ b/tools/perf/arch/x86/util/Build
+@@ -10,6 +10,7 @@ perf-y += evlist.o
+ perf-y += mem-events.o
+ perf-y += evsel.o
+ perf-y += iostat.o
++perf-y += env.o
+ 
+ perf-$(CONFIG_DWARF) += dwarf-regs.o
+ perf-$(CONFIG_BPF_PROLOGUE) += dwarf-regs.o
+diff --git a/tools/perf/arch/x86/util/env.c b/tools/perf/arch/x86/util/env.c
+new file mode 100644
+index 0000000000000..3e537ffb1353a
+--- /dev/null
++++ b/tools/perf/arch/x86/util/env.c
+@@ -0,0 +1,19 @@
++// SPDX-License-Identifier: GPL-2.0
++#include "linux/string.h"
++#include "util/env.h"
++#include "env.h"
++
++bool x86__is_amd_cpu(void)
++{
++	struct perf_env env = { .total_mem = 0, };
++	static int is_amd; /* 0: Uninitialized, 1: Yes, -1: No */
++
++	if (is_amd)
++		goto ret;
++
++	perf_env__cpuid(&env);
++	is_amd = env.cpuid && strstarts(env.cpuid, "AuthenticAMD") ? 1 : -1;
++	perf_env__exit(&env);
++ret:
++	return is_amd >= 1 ? true : false;
++}
+diff --git a/tools/perf/arch/x86/util/env.h b/tools/perf/arch/x86/util/env.h
+new file mode 100644
+index 0000000000000..d78f080b6b3f8
+--- /dev/null
++++ b/tools/perf/arch/x86/util/env.h
+@@ -0,0 +1,7 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++#ifndef _X86_ENV_H
++#define _X86_ENV_H
++
++bool x86__is_amd_cpu(void);
++
++#endif /* _X86_ENV_H */
+diff --git a/tools/perf/arch/x86/util/evsel.c b/tools/perf/arch/x86/util/evsel.c
+index ea3972d785d10..d72390cdf391d 100644
+--- a/tools/perf/arch/x86/util/evsel.c
++++ b/tools/perf/arch/x86/util/evsel.c
+@@ -7,6 +7,7 @@
+ #include "linux/string.h"
+ #include "evsel.h"
+ #include "util/debug.h"
++#include "env.h"
+ 
+ #define IBS_FETCH_L3MISSONLY   (1ULL << 59)
+ #define IBS_OP_L3MISSONLY      (1ULL << 16)
+@@ -97,23 +98,10 @@ void arch__post_evsel_config(struct evsel *evsel, struct perf_event_attr *attr)
+ {
+ 	struct perf_pmu *evsel_pmu, *ibs_fetch_pmu, *ibs_op_pmu;
+ 	static int warned_once;
+-	/* 0: Uninitialized, 1: Yes, -1: No */
+-	static int is_amd;
+ 
+-	if (warned_once || is_amd == -1)
++	if (warned_once || !x86__is_amd_cpu())
+ 		return;
+ 
+-	if (!is_amd) {
+-		struct perf_env *env = evsel__env(evsel);
+-
+-		if (!perf_env__cpuid(env) || !env->cpuid ||
+-		    !strstarts(env->cpuid, "AuthenticAMD")) {
+-			is_amd = -1;
+-			return;
+-		}
+-		is_amd = 1;
+-	}
+-
+ 	evsel_pmu = evsel__find_pmu(evsel);
+ 	if (!evsel_pmu)
+ 		return;
+diff --git a/tools/perf/arch/x86/util/mem-events.c b/tools/perf/arch/x86/util/mem-events.c
+index f683ac702247c..efc0fae9ed0a7 100644
+--- a/tools/perf/arch/x86/util/mem-events.c
++++ b/tools/perf/arch/x86/util/mem-events.c
+@@ -4,6 +4,7 @@
+ #include "map_symbol.h"
+ #include "mem-events.h"
+ #include "linux/string.h"
++#include "env.h"
+ 
+ static char mem_loads_name[100];
+ static bool mem_loads_name__init;
+@@ -26,28 +27,12 @@ static struct perf_mem_event perf_mem_events_amd[PERF_MEM_EVENTS__MAX] = {
+ 	E("mem-ldst",	"ibs_op//",	"ibs_op"),
+ };
+ 
+-static int perf_mem_is_amd_cpu(void)
+-{
+-	struct perf_env env = { .total_mem = 0, };
+-
+-	perf_env__cpuid(&env);
+-	if (env.cpuid && strstarts(env.cpuid, "AuthenticAMD"))
+-		return 1;
+-	return -1;
+-}
+-
+ struct perf_mem_event *perf_mem_events__ptr(int i)
+ {
+-	/* 0: Uninitialized, 1: Yes, -1: No */
+-	static int is_amd;
+-
+ 	if (i >= PERF_MEM_EVENTS__MAX)
+ 		return NULL;
+ 
+-	if (!is_amd)
+-		is_amd = perf_mem_is_amd_cpu();
+-
+-	if (is_amd == 1)
++	if (x86__is_amd_cpu())
+ 		return &perf_mem_events_amd[i];
+ 
+ 	return &perf_mem_events_intel[i];
+diff --git a/tools/perf/builtin-bench.c b/tools/perf/builtin-bench.c
+index 814e9afc86f6e..5efb29a7564af 100644
+--- a/tools/perf/builtin-bench.c
++++ b/tools/perf/builtin-bench.c
+@@ -21,6 +21,7 @@
+ #include "builtin.h"
+ #include "bench/bench.h"
+ 
++#include <locale.h>
+ #include <stdio.h>
+ #include <stdlib.h>
+ #include <string.h>
+@@ -258,6 +259,7 @@ int cmd_bench(int argc, const char **argv)
+ 
+ 	/* Unbuffered output */
+ 	setvbuf(stdout, NULL, _IONBF, 0);
++	setlocale(LC_ALL, "");
+ 
+ 	if (argc < 2) {
+ 		/* No collection specified. */
+diff --git a/tools/perf/builtin-script.c b/tools/perf/builtin-script.c
+index d8c174a719383..72a3faa28c394 100644
+--- a/tools/perf/builtin-script.c
++++ b/tools/perf/builtin-script.c
+@@ -2425,6 +2425,9 @@ out_put:
+ 	return ret;
+ }
+ 
++// Used when scr->per_event_dump is not set
++static struct evsel_script es_stdout;
++
+ static int process_attr(struct perf_tool *tool, union perf_event *event,
+ 			struct evlist **pevlist)
+ {
+@@ -2433,7 +2436,6 @@ static int process_attr(struct perf_tool *tool, union perf_event *event,
+ 	struct evsel *evsel, *pos;
+ 	u64 sample_type;
+ 	int err;
+-	static struct evsel_script *es;
+ 
+ 	err = perf_event__process_attr(tool, event, pevlist);
+ 	if (err)
+@@ -2443,14 +2445,13 @@ static int process_attr(struct perf_tool *tool, union perf_event *event,
+ 	evsel = evlist__last(*pevlist);
+ 
+ 	if (!evsel->priv) {
+-		if (scr->per_event_dump) {
++		if (scr->per_event_dump) { 
+ 			evsel->priv = evsel_script__new(evsel, scr->session->data);
+-		} else {
+-			es = zalloc(sizeof(*es));
+-			if (!es)
++			if (!evsel->priv)
+ 				return -ENOMEM;
+-			es->fp = stdout;
+-			evsel->priv = es;
++		} else { // Replicate what is done in perf_script__setup_per_event_dump()
++			es_stdout.fp = stdout;
++			evsel->priv = &es_stdout;
+ 		}
+ 	}
+ 
+@@ -2756,7 +2757,6 @@ out_err_fclose:
+ static int perf_script__setup_per_event_dump(struct perf_script *script)
+ {
+ 	struct evsel *evsel;
+-	static struct evsel_script es_stdout;
+ 
+ 	if (script->per_event_dump)
+ 		return perf_script__fopen_per_event_dump(script);
+diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
+index eeba93ae3b584..f5a6d08cf07f6 100644
+--- a/tools/perf/builtin-stat.c
++++ b/tools/perf/builtin-stat.c
+@@ -777,6 +777,8 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
+ 			all_counters_use_bpf = false;
+ 	}
+ 
++	evlist__reset_aggr_stats(evsel_list);
++
+ 	evlist__for_each_cpu(evlist_cpu_itr, evsel_list, affinity) {
+ 		counter = evlist_cpu_itr.evsel;
+ 
+diff --git a/tools/perf/tests/shell/test_task_analyzer.sh b/tools/perf/tests/shell/test_task_analyzer.sh
+index a98e4ab66040e..365b61aea519a 100755
+--- a/tools/perf/tests/shell/test_task_analyzer.sh
++++ b/tools/perf/tests/shell/test_task_analyzer.sh
+@@ -5,6 +5,12 @@
+ tmpdir=$(mktemp -d /tmp/perf-script-task-analyzer-XXXXX)
+ err=0
+ 
++# set PERF_EXEC_PATH to find scripts in the source directory
++perfdir=$(dirname "$0")/../..
++if [ -e "$perfdir/scripts/python/Perf-Trace-Util" ]; then
++  export PERF_EXEC_PATH=$perfdir
++fi
++
+ cleanup() {
+   rm -f perf.data
+   rm -f perf.data.old
+@@ -31,7 +37,7 @@ report() {
+ 
+ check_exec_0() {
+ 	if [ $? != 0 ]; then
+-		report 1 "invokation of ${$1} command failed"
++		report 1 "invocation of $1 command failed"
+ 	fi
+ }
+ 
+@@ -44,9 +50,20 @@ find_str_or_fail() {
+ 	fi
+ }
+ 
++# check if perf is compiled with libtraceevent support
++skip_no_probe_record_support() {
++	perf record -e "sched:sched_switch" -a -- sleep 1 2>&1 | grep "libtraceevent is necessary for tracepoint support" && return 2
++	return 0
++}
++
+ prepare_perf_data() {
+ 	# 1s should be sufficient to catch at least some switches
+ 	perf record -e sched:sched_switch -a -- sleep 1 > /dev/null 2>&1
++	# check if perf data file got created in above step.
++	if [ ! -e "perf.data" ]; then
++		printf "FAIL: perf record failed to create \"perf.data\" \n"
++		return 1
++	fi
+ }
+ 
+ # check standard inkvokation with no arguments
+@@ -134,6 +151,13 @@ test_csvsummary_extended() {
+ 	find_str_or_fail "Out-Out;" csvsummary ${FUNCNAME[0]}
+ }
+ 
++skip_no_probe_record_support
++err=$?
++if [ $err -ne 0 ]; then
++	echo "WARN: Skipping tests. No libtraceevent support"
++	cleanup
++	exit $err
++fi
+ prepare_perf_data
+ test_basic
+ test_ns_rename
+diff --git a/tools/perf/util/dwarf-aux.c b/tools/perf/util/dwarf-aux.c
+index b074144097710..3bff678745635 100644
+--- a/tools/perf/util/dwarf-aux.c
++++ b/tools/perf/util/dwarf-aux.c
+@@ -1103,7 +1103,7 @@ int die_get_varname(Dwarf_Die *vr_die, struct strbuf *buf)
+ 	ret = die_get_typename(vr_die, buf);
+ 	if (ret < 0) {
+ 		pr_debug("Failed to get type, make it unknown.\n");
+-		ret = strbuf_add(buf, " (unknown_type)", 14);
++		ret = strbuf_add(buf, "(unknown_type)", 14);
+ 	}
+ 
+ 	return ret < 0 ? ret : strbuf_addf(buf, "\t%s", dwarf_diename(vr_die));
+diff --git a/tools/perf/util/evsel.h b/tools/perf/util/evsel.h
+index 1a7358b46ad4e..72549fd79992b 100644
+--- a/tools/perf/util/evsel.h
++++ b/tools/perf/util/evsel.h
+@@ -457,16 +457,24 @@ static inline int evsel__group_idx(struct evsel *evsel)
+ }
+ 
+ /* Iterates group WITHOUT the leader. */
+-#define for_each_group_member(_evsel, _leader) 					\
+-for ((_evsel) = list_entry((_leader)->core.node.next, struct evsel, core.node); \
+-     (_evsel) && (_evsel)->core.leader == (&_leader->core);					\
+-     (_evsel) = list_entry((_evsel)->core.node.next, struct evsel, core.node))
++#define for_each_group_member_head(_evsel, _leader, _head)				\
++for ((_evsel) = list_entry((_leader)->core.node.next, struct evsel, core.node);		\
++	(_evsel) && &(_evsel)->core.node != (_head) &&					\
++	(_evsel)->core.leader == &(_leader)->core;					\
++	(_evsel) = list_entry((_evsel)->core.node.next, struct evsel, core.node))
++
++#define for_each_group_member(_evsel, _leader)				\
++	for_each_group_member_head(_evsel, _leader, &(_leader)->evlist->core.entries)
+ 
+ /* Iterates group WITH the leader. */
+-#define for_each_group_evsel(_evsel, _leader) 					\
+-for ((_evsel) = _leader; 							\
+-     (_evsel) && (_evsel)->core.leader == (&_leader->core);					\
+-     (_evsel) = list_entry((_evsel)->core.node.next, struct evsel, core.node))
++#define for_each_group_evsel_head(_evsel, _leader, _head)				\
++for ((_evsel) = _leader;								\
++	(_evsel) && &(_evsel)->core.node != (_head) &&					\
++	(_evsel)->core.leader == &(_leader)->core;					\
++	(_evsel) = list_entry((_evsel)->core.node.next, struct evsel, core.node))
++
++#define for_each_group_evsel(_evsel, _leader)				\
++	for_each_group_evsel_head(_evsel, _leader, &(_leader)->evlist->core.entries)
+ 
+ static inline bool evsel__has_branch_callstack(const struct evsel *evsel)
+ {
+diff --git a/tools/perf/util/evsel_fprintf.c b/tools/perf/util/evsel_fprintf.c
+index bd22c4932d10e..6fa3a306f301d 100644
+--- a/tools/perf/util/evsel_fprintf.c
++++ b/tools/perf/util/evsel_fprintf.c
+@@ -2,6 +2,7 @@
+ #include <inttypes.h>
+ #include <stdio.h>
+ #include <stdbool.h>
++#include "util/evlist.h"
+ #include "evsel.h"
+ #include "util/evsel_fprintf.h"
+ #include "util/event.h"
+diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
+index ad01c9e1ff12b..625eedb84eecc 100644
+--- a/tools/testing/selftests/bpf/Makefile
++++ b/tools/testing/selftests/bpf/Makefile
+@@ -88,8 +88,7 @@ TEST_GEN_PROGS_EXTENDED = test_sock_addr test_skb_cgroup_id_user \
+ 	xskxceiver xdp_redirect_multi xdp_synproxy veristat xdp_hw_metadata \
+ 	xdp_features
+ 
+-TEST_CUSTOM_PROGS = $(OUTPUT)/urandom_read $(OUTPUT)/sign-file
+-TEST_GEN_FILES += liburandom_read.so
++TEST_GEN_FILES += liburandom_read.so urandom_read sign-file
+ 
+ # Emit succinct information message describing current building step
+ # $1 - generic step name (e.g., CC, LINK, etc);
+diff --git a/tools/testing/selftests/bpf/prog_tests/check_mtu.c b/tools/testing/selftests/bpf/prog_tests/check_mtu.c
+index 5338d2ea04603..2a9a30650350e 100644
+--- a/tools/testing/selftests/bpf/prog_tests/check_mtu.c
++++ b/tools/testing/selftests/bpf/prog_tests/check_mtu.c
+@@ -183,7 +183,7 @@ cleanup:
+ 
+ void serial_test_check_mtu(void)
+ {
+-	__u32 mtu_lo;
++	int mtu_lo;
+ 
+ 	if (test__start_subtest("bpf_check_mtu XDP-attach"))
+ 		test_check_mtu_xdp_attach();
+diff --git a/tools/testing/selftests/bpf/test_verifier.c b/tools/testing/selftests/bpf/test_verifier.c
+index 8b9949bb833d7..2e9bdf2e91351 100644
+--- a/tools/testing/selftests/bpf/test_verifier.c
++++ b/tools/testing/selftests/bpf/test_verifier.c
+@@ -1232,45 +1232,46 @@ static bool cmp_str_seq(const char *log, const char *exp)
+ 	return true;
+ }
+ 
+-static int get_xlated_program(int fd_prog, struct bpf_insn **buf, int *cnt)
++static struct bpf_insn *get_xlated_program(int fd_prog, int *cnt)
+ {
++	__u32 buf_element_size = sizeof(struct bpf_insn);
+ 	struct bpf_prog_info info = {};
+ 	__u32 info_len = sizeof(info);
+ 	__u32 xlated_prog_len;
+-	__u32 buf_element_size = sizeof(struct bpf_insn);
++	struct bpf_insn *buf;
+ 
+ 	if (bpf_prog_get_info_by_fd(fd_prog, &info, &info_len)) {
+ 		perror("bpf_prog_get_info_by_fd failed");
+-		return -1;
++		return NULL;
+ 	}
+ 
+ 	xlated_prog_len = info.xlated_prog_len;
+ 	if (xlated_prog_len % buf_element_size) {
+ 		printf("Program length %d is not multiple of %d\n",
+ 		       xlated_prog_len, buf_element_size);
+-		return -1;
++		return NULL;
+ 	}
+ 
+ 	*cnt = xlated_prog_len / buf_element_size;
+-	*buf = calloc(*cnt, buf_element_size);
++	buf = calloc(*cnt, buf_element_size);
+ 	if (!buf) {
+ 		perror("can't allocate xlated program buffer");
+-		return -ENOMEM;
++		return NULL;
+ 	}
+ 
+ 	bzero(&info, sizeof(info));
+ 	info.xlated_prog_len = xlated_prog_len;
+-	info.xlated_prog_insns = (__u64)(unsigned long)*buf;
++	info.xlated_prog_insns = (__u64)(unsigned long)buf;
+ 	if (bpf_prog_get_info_by_fd(fd_prog, &info, &info_len)) {
+ 		perror("second bpf_prog_get_info_by_fd failed");
+ 		goto out_free_buf;
+ 	}
+ 
+-	return 0;
++	return buf;
+ 
+ out_free_buf:
+-	free(*buf);
+-	return -1;
++	free(buf);
++	return NULL;
+ }
+ 
+ static bool is_null_insn(struct bpf_insn *insn)
+@@ -1403,7 +1404,8 @@ static bool check_xlated_program(struct bpf_test *test, int fd_prog)
+ 	if (!check_expected && !check_unexpected)
+ 		goto out;
+ 
+-	if (get_xlated_program(fd_prog, &buf, &cnt)) {
++	buf = get_xlated_program(fd_prog, &cnt);
++	if (!buf) {
+ 		printf("FAIL: can't get xlated program\n");
+ 		result = false;
+ 		goto out;
+diff --git a/tools/testing/selftests/cgroup/test_memcontrol.c b/tools/testing/selftests/cgroup/test_memcontrol.c
+index f4f7c0aef702b..a2a90f4bfe9fe 100644
+--- a/tools/testing/selftests/cgroup/test_memcontrol.c
++++ b/tools/testing/selftests/cgroup/test_memcontrol.c
+@@ -292,6 +292,7 @@ static int test_memcg_protection(const char *root, bool min)
+ 	char *children[4] = {NULL};
+ 	const char *attribute = min ? "memory.min" : "memory.low";
+ 	long c[4];
++	long current;
+ 	int i, attempts;
+ 	int fd;
+ 
+@@ -400,7 +401,8 @@ static int test_memcg_protection(const char *root, bool min)
+ 		goto cleanup;
+ 	}
+ 
+-	if (!values_close(cg_read_long(parent[1], "memory.current"), MB(50), 3))
++	current = min ? MB(50) : MB(30);
++	if (!values_close(cg_read_long(parent[1], "memory.current"), current, 3))
+ 		goto cleanup;
+ 
+ 	if (!reclaim_until(children[0], MB(10)))
+diff --git a/tools/testing/selftests/net/rtnetlink.sh b/tools/testing/selftests/net/rtnetlink.sh
+index 275491be3da2f..cafd14b1ed2ab 100755
+--- a/tools/testing/selftests/net/rtnetlink.sh
++++ b/tools/testing/selftests/net/rtnetlink.sh
+@@ -835,6 +835,7 @@ EOF
+ 	fi
+ 
+ 	# clean up any leftovers
++	echo 0 > /sys/bus/netdevsim/del_device
+ 	$probed && rmmod netdevsim
+ 
+ 	if [ $ret -ne 0 ]; then
+diff --git a/tools/testing/selftests/rcutorture/configs/rcu/BUSTED-BOOST.boot b/tools/testing/selftests/rcutorture/configs/rcu/BUSTED-BOOST.boot
+index f57720c52c0f9..84f6bb98ce993 100644
+--- a/tools/testing/selftests/rcutorture/configs/rcu/BUSTED-BOOST.boot
++++ b/tools/testing/selftests/rcutorture/configs/rcu/BUSTED-BOOST.boot
+@@ -5,4 +5,4 @@ rcutree.gp_init_delay=3
+ rcutree.gp_cleanup_delay=3
+ rcutree.kthread_prio=2
+ threadirqs
+-tree.use_softirq=0
++rcutree.use_softirq=0
+diff --git a/tools/testing/selftests/rcutorture/configs/rcu/TREE03.boot b/tools/testing/selftests/rcutorture/configs/rcu/TREE03.boot
+index 64f864f1f361f..8e50bfd4b710d 100644
+--- a/tools/testing/selftests/rcutorture/configs/rcu/TREE03.boot
++++ b/tools/testing/selftests/rcutorture/configs/rcu/TREE03.boot
+@@ -4,4 +4,4 @@ rcutree.gp_init_delay=3
+ rcutree.gp_cleanup_delay=3
+ rcutree.kthread_prio=2
+ threadirqs
+-tree.use_softirq=0
++rcutree.use_softirq=0
+diff --git a/tools/testing/selftests/vDSO/vdso_test_clock_getres.c b/tools/testing/selftests/vDSO/vdso_test_clock_getres.c
+index 15dcee16ff726..38d46a8bf7cba 100644
+--- a/tools/testing/selftests/vDSO/vdso_test_clock_getres.c
++++ b/tools/testing/selftests/vDSO/vdso_test_clock_getres.c
+@@ -84,12 +84,12 @@ static inline int vdso_test_clock(unsigned int clock_id)
+ 
+ int main(int argc, char **argv)
+ {
+-	int ret;
++	int ret = 0;
+ 
+ #if _POSIX_TIMERS > 0
+ 
+ #ifdef CLOCK_REALTIME
+-	ret = vdso_test_clock(CLOCK_REALTIME);
++	ret += vdso_test_clock(CLOCK_REALTIME);
+ #endif
+ 
+ #ifdef CLOCK_BOOTTIME


^ permalink raw reply related	[flat|nested] 23+ messages in thread

end of thread, other threads:[~2023-07-11 18:38 UTC | newest]

Thread overview: 23+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2023-05-30 16:50 [gentoo-commits] proj/linux-patches:6.3 commit in: / Mike Pagano
  -- strict thread matches above, loose matches on Subject: below --
2023-07-11 18:38 Mike Pagano
2023-07-05 20:37 Mike Pagano
2023-07-05 20:27 Mike Pagano
2023-07-04 14:15 Mike Pagano
2023-07-04 13:06 Mike Pagano
2023-07-01 18:21 Mike Pagano
2023-06-28 10:25 Mike Pagano
2023-06-21 14:53 Alice Ferrazzi
2023-06-14 10:16 Mike Pagano
2023-06-09 12:04 Mike Pagano
2023-06-09 11:29 Mike Pagano
2023-06-05 11:47 Mike Pagano
2023-06-02 15:09 Mike Pagano
2023-05-30 16:59 Mike Pagano
2023-05-28 14:51 Mike Pagano
2023-05-24 17:04 Mike Pagano
2023-05-17 13:16 Mike Pagano
2023-05-11 16:15 Mike Pagano
2023-05-11 14:47 Mike Pagano
2023-05-10 17:46 Mike Pagano
2023-05-01  0:00 Alice Ferrazzi
2023-04-28 19:29 Mike Pagano

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox